Artificial intelligence: from research to application

Artificial Intelligence (AI) is the ability of machines to exhibit intelligent behaviour. AI will change the way we live, work, and travel. It helps us to develop innovative solutions that keep us healthy and safe. The big challenge? Bringing together AI expertise and domain knowledge. And that’s our strength.

Research on AI

We see great opportunities in applied research on AI, because it has enormous economic and social potential. Especially in the areas of security, efficiency, and ethics. That’s why we started our Appl.AI research programme. The goal of this programme is to make AI systems work in a world of uncertainties. We also focus on efficient collaboration between humans and machines.

Our Appl.AI programme currently has 2 lines of research, with several use cases falling under each.

Read more about our Appl.AI research programme

Download our paper 'Research strategy trustworthy adaptive AI'

Future vision of AI in 2032

During TNO's 90th anniversary in 2022, experts from the organisation wrote a future vision of AI. In it, they set out their expectations and predictions for artificial intelligence ten years from now.

Read TNO's vision of AI

Download the vision paper or watch videos featuring prominent figures on AI, including David Deutsch and Georgette Fijneman.

AI technologies

AI is a key enabling technology for new applications that contribute to a stronger industry and society. Together, we can make the Netherlands safer, more efficient, healthier, and more prosperous through AI applications. This goal is in our sights, which is why we continue to invest in AI technologies. All AI technologies are widely applicable and help us to be effective in more domains. We’re currently making progress on the following AI technologies:

AI applications in 6 domains

We combine different AI technologies with multidisciplinary expertise to create responsible system solutions. Our goal? To help create a better society. That means we want to make the Netherlands safer, more efficient, healthier, and more prosperous. We’re doing this through innovations with AI, among other things. In order to make a real impact with AI applications, research institutions, companies, and public authorities must work together. Within our research programme, we do this in 6 application areas:

  1. Cyber crime: AI helps us improve cyber security by detecting and predicting fraud, threats, and actual attacks.
  2. Data-driven decision-making: AI helps us make the right decisions in public policy and services with data-driven insights.
  3. Personalised healthcare: AI gives us personal health advice on how to prevent and cure lifestyle diseases.
  4. Predictive maintenance: with AI, we make predictions for effective planning and optimisation of operations and maintenance.
  5. Smart industry: AI is digitalising industry and facilitating an optimal working relationship between humans and machines.
  6. Self-driving cars: AI-driven autonomous vehicles and systems will support or even replace people in critical, dangerous, and complex situations.

1. Cyber crime

Cyber security is a serious matter. As our dependence on IT increases, so does the potential impact of a cyber attack. Today, these attacks are carried out by professionals. They have considerable technical skills and resources. So, how do we protect organisations from cyber threats? Through innovative solutions with better and faster detection of – and response to – these attacks. The attacks often have a specific target and a high degree of automation, persistence, and technical sophistication. Working closely with academics and businesses, we’re reducing the average time it takes to detect and respond to a cyber security breach.

Improving cyber security

We’re making strides in the field of cyber security, with AI playing a key role. For example, in threat detection, where we combine AI with human intelligence and contextual interpretation. The tools that we design assume that an intruder has already compromised a system. They then detect anomalous internal and external patterns in network data and system registrations. By linking network data to centrally available system log data, we filter and enrich that data. This maximises the effectiveness of AI detection tools.
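
To make the idea concrete, here is a minimal statistical sketch of the kind of anomaly detection such tools build on: flagging hosts whose traffic deviates strongly from a learned baseline. The data, threshold, and variable names are invented for illustration; the actual detection tools are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-host traffic volumes (bytes) drawn from network logs.
baseline = rng.normal(loc=500.0, scale=50.0, size=1000)  # normal behaviour
traffic = np.append(baseline, 2500.0)                    # one anomalous host

# Flag hosts whose volume deviates strongly from the baseline distribution.
z_scores = (traffic - baseline.mean()) / baseline.std()
anomalous_hosts = np.flatnonzero(np.abs(z_scores) > 5)

print(anomalous_hosts)  # index of the anomalous host
```

In practice, as the text notes, such scores are combined with system log data and human contextual interpretation rather than used on their own.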

Detecting cyber attacks

In the field of automated security, we now use AI to increase both efficiency and effectiveness. Cyber security requires a combination of information and analysis. When the system detects a cyber attack, it identifies the potential attacker, the target, and the likely attack route. It then identifies the response options, as well as their potential impact on business continuity.

Download our white paper ‘Security at machine speed’ (pdf)

Our work against cyber crime

We bring parties together to overcome cyber security challenges. This collaboration produces smart AI-based algorithms with which we protect organisations from cyber threats. This is what we offer:

  • We contribute technological expertise, combining cyber security, AI, and mathematical modelling.
  • We develop innovative technologies for threat detection. These technologies combine AI technologies with datasets, such as network data and system registrations.
  • By automating the detection of vulnerabilities and generating effective patches, we develop and apply technologies that strengthen system security.
  • In partnerships, we develop technology to enable automated security logic to support mitigation and response decisions.
  • We work with academic partners and companies to translate cutting-edge AI into practical applications for cyber security. These partners include financial institutions and insurance companies, but also government organisations.

2. Data-driven decision-making

Every day, policymakers and officials face complex challenges, such as:

  • combating long-term unemployment
  • achieving sustainability goals
  • anticipating technological effects.

Policy makers and officials are expected to develop effective measures to address these issues. Together with government agencies, we’re exploring the possibilities of using artificial intelligence to support policy development and decision-making processes with data-driven insights.

AI supports effective government action

We greatly improve decision-making through data-driven insights. With AI, we identify social threats, risks, and opportunities at an early stage. It also improves the way we use existing data. Governments usually have access to rich datasets, such as data on residential energy consumption, income, and media consumption. AI is also excellent for searching and analysing large quantities of public documents, such as case law and permits. Moreover, AI technologies such as natural language processing and image recognition are well suited to identifying patterns in society and in individual behaviour. The use of AI can support effective government action.

Our work in data-driven decision-making

  • We design and supervise experiments for innovation and data-driven policy in the public sector.
  • We develop methodologies for multidisciplinary collaboration and hybrid AI statistics.
  • We provide a critical evaluation of the suitability of AI-based insights and applications in relation to specific societal challenges.
  • We combine social science theories and insights with AI techniques and expertise.
  • We’re working with policy makers on new privacy, ethical, and regulatory frameworks to ensure responsible data policies.

3. Personalised healthcare

An increasingly unhealthy lifestyle leads to higher healthcare costs. Fortunately, the use of artificial intelligence can improve the quality of care, reduce costs, and make us healthier. AI in healthcare can provide data on individuals, personalised advice, and active support. Analysing personal data and predicting what to do helps us stay healthy or change our lifestyle. This is how we reduce the burden of disease.

Vision on artificial intelligence in healthcare

We collaborate with commercial organisations and healthcare-related organisations. These collaborations are aimed at providing personalised advice as well as increasing knowledge about our health and necessary interventions. Our overall vision is that AI should work together with the general public and healthcare providers to improve individual health.

AI and health personalisation

With the help of artificial intelligence, we can:

  • adopt preventive health measures
  • develop predictive models
  • process healthcare data
  • exchange data in a secure manner between all parties.

Health advice must be personal

Healthcare today is mostly based on evidence from large-scale clinical trials. But health is personal, so advice should also be personal. AI in healthcare offers new opportunities for this. AI recommendations are based on personal data from apps, consultations, and medical data. All this is made possible by the personal health train, a secure data infrastructure for the reuse of data and advice, and by predictive models in healthcare, in which data from a large section of the general public and patients can be linked and then fed into AI models.

Our services in personalised healthcare

  • We develop privacy-by-design technology to share data securely.
  • We combine biological, psychological, and social data and insights with AI in healthcare technologies and expertise.
  • We develop predictive models and related interventions.
  • We develop digital biomarkers for health diagnosis (including proxies).
  • We design hybrid models (knowledge-data driven) and beta apps for diagnosis and health (behavioural change), e.g., igrow.

4. Predictive maintenance

Many sectors strive for the reliable and safe use of equipment, machines, and other infrastructures. How can a wind farm safely generate maximum energy at minimum cost? When should the government carry out maintenance on a bridge to ensure its safety? How can a manufacturer ensure automatic updating of high-tech instruments? We answer these questions using AI. This facilitates better planning of predictive maintenance and smart operation. We do this in the energy, construction, and industrial sectors.

Increasing efficiency in 3 areas

Using AI, we increase the efficiency of operation and predictive maintenance in 3 areas:

  1. Energy production and transport systems

    We develop data-driven models and optimisation routines to support strategic and operational decisions. This makes it possible to deal with a high degree of uncertainty and complexity.

  2. Predictive maintenance of structures

    Inspecting infrastructures (e.g., bridges and production facilities) is complex, labour-intensive, and requires human interpretation. Here, predictive maintenance is of great value for safety. Through intelligent digital twin technology, we can automatically recognise damage patterns. Based on this, we improve monitoring, maintenance planning, and degradation assessment.

  3. Manufacturing industry

    There is an increasing demand for flexibility in the product mix. That's why we must maintain a high level of quality control while ensuring real-time and continuous monitoring. We look at the entire workflow and accessible data and, in doing so, determine the right AI-driven solution. This ranges from the development of specific quality sensors and intelligent digital twin technology to a physics-based model supported by AI.

Our predictive maintenance services

  • We help organisations develop AI applications, exposing the opportunities and pitfalls of AI.
  • In close collaboration with industrial partners and public authorities, we develop AI workflows to support decisions on energy production and transport (road, rail, water).
  • We create digital twins of structures, machines, or systems by combining physics-based modelling skills, domain knowledge, and data analytics or artificial intelligence techniques.
  • We develop system solutions that enable innovations in computer vision, robotics, machine learning, and human-machine collaboration.
  • Thanks to our experience in sensor data collection, (OPC-UA) data communication, processing, and storage in a digital twin data platform, we can manage data to train your AI algorithm.

5. Smart industry

Smart industry is concerned with the digitalisation of the production and service sectors. This is known internationally as the fourth industrial revolution. An example of smart equipment is robots that are supervised remotely and carry out predictive maintenance assessments. Through AI analyses, we’re constantly increasing our knowledge of how to use AI correctly. For example, in the area of cobots: robots that work together with people. In doing so, we help define a smart balance of collaboration between humans and robots. This is how we improve the performance of these systems and their supervisors.

Data from a single source

In business, digitally available data often comes from a single source. For example, a camera that transmits the position of parts to a flexible assembly robot. Such robots are used in agriculture and horticulture. Another example is high-tech industrial sensors, which generate enormous amounts of data with a lot of noise. Here, AI algorithms significantly improve detection accuracy. For example, by detecting dust particles in the semiconductor industry.
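
As an illustration of why noisy sensor data needs more than a naive threshold, the sketch below flags particle ‘hits’ in a synthetic signal using a robust noise estimate (the median absolute deviation). This is a simple statistical baseline, not the actual detection algorithms described here; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic noisy sensor trace with three injected particle "hits".
signal = rng.normal(0.0, 1.0, size=2000)
hit_positions = [400, 1100, 1700]
signal[hit_positions] += 12.0

# Estimate the noise level robustly (median absolute deviation), so the
# hits themselves do not inflate the estimate, then flag strong outliers.
noise_sigma = 1.4826 * np.median(np.abs(signal - np.median(signal)))
detections = np.flatnonzero(signal > 6 * noise_sigma)

print(sorted(detections))  # should recover the injected hit positions
```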

Aspects of more advanced applications

In more advanced applications, we’re researching better digital twin models with multiple data sources and mathematical/physical simulation models. In such cases, we take other aspects into account. For example:

  • data collection
  • data cleaning
  • visualisation
  • data exchange
  • data sovereignty or confidentiality
  • cyber security
  • the use of standards

Our services in smart industry

  • We’re active in developing AI-based solutions to achieve radical goals. For example, malfunction-free, surprise-free, programming-free, and so on.
  • We’re active in several smart industry field labs based on Dutch public-private partnerships.
  • We promote standards such as the International Data Spaces (IDS), Open Platform Communication (OPC-UA), and the use of open hardware, software, and toolware for AI.
  • We help develop policies for open data ecosystems for business.

6. Self-driving cars

We ensure that new AI-controlled autonomous vehicles and systems are safe and reliable. Moreover, we ensure that they comply with the law and with ethical and social standards. This is not easy, especially when you consider that these systems will soon trigger a transformation in domains ranging from mobility and traffic management to security.

AI's potential in mobility and security

There are huge opportunities for AI in mobility and security. Faster, lighter, and more powerful sensors improve AI-controlled autonomous vehicle systems. Excellent statistical results from current AI methods improve the performance of self-driving vehicles in complex environments. These systems can support, and even replace, human decision-making when dealing with large datasets, analytical problems, or time-bound tasks. They improve the quality of decision-making and minimise human error. In cases in which humans are more capable of performing the task, the AI system should transfer control to humans, giving them enough time to adapt to the new situation.

Examples

AI has enormous potential to increase safety and efficiency in mobility:

  • Self-organising logistics can reduce the number of transport kilometres (reducing costs and environmental impact).
  • AI-supported traffic management can improve traffic safety and network efficiency, and reduce traffic congestion.
  • In the meantime, self-driving vehicles connected to each other can ensure optimal traffic flow, increasing safety, comfort, and efficiency.
  • Surveillance robots will be able to monitor areas almost continuously.

Our services related to self-driving cars

  • We provide a safe, open environment for collaborative research.
  • We weigh the economic benefits against the ethical and legal constraints.
  • We develop safety protocols for governments and car manufacturers.
  • We test and validate self-driving systems for manufacturers.
  • We develop architectures and algorithms for performance and health assessments.
  • We develop algorithms for connected vehicles. In doing so, we use our multidisciplinary domain expertise in traffic and vehicle management.
  • We develop verification and validation technology for joint human-machine systems.
  • We offer integrated solutions by combining our domain knowledge from multiple and relevant disciplines. For example, data scientists, psychologists, lawyers, environmental scientists, etc.

5 key challenges in AI

Our goal is to contribute to a better society, which is why we want to create AI applications that have impact and are responsible. This contributes to a safer, more efficient, healthier, and more prosperous Netherlands. AI is widely applicable and often very complex, which creates challenges. In our research programme Appl.AI, we focus strongly on 5 AI challenges:

1. Reliable, transparent, and fair AI

Current AI systems are far from perfect and make decisions that are difficult to understand. Acceptance of AI systems in society requires transparent algorithms and compliance with legislation and regulations. AI systems must function reliably in adverse conditions. In addition, they must observe ethical principles, guidelines, and values. We’ll make sure that AI systems are fair, transparent, and reliable.

Design challenges for AI systems

AI algorithms must be fair, objective, and accurate. But it’s difficult to make them that way. Datasets (the input that AI algorithms feed on) are often biased. Designers of AI algorithms must be aware of unwanted biases and must develop methods to detect and remove them. Laws against discrimination require that decisions of AI algorithms do not depend on protected data characteristics, such as gender and ethnicity. Simply removing these attributes from the data is not enough, because the information can still be found in correlating characteristics.
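
A toy example (with invented, synthetic data) makes the last point concrete: even after the protected attribute is dropped, a correlated ‘neutral’ feature can still reveal it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Synthetic protected attribute (0/1) and a seemingly neutral feature
# (say, a postcode region) that agrees with it 80% of the time.
protected = rng.integers(0, 2, size=n)
postcode_region = np.where(rng.random(n) < 0.8, protected, 1 - protected)

# Dropping the protected column does not remove the information:
# the proxy alone recovers it far better than chance.
leakage_accuracy = np.mean(postcode_region == protected)
correlation = np.corrcoef(protected, postcode_region)[0, 1]

print(round(leakage_accuracy, 2), round(correlation, 2))  # roughly 0.8 and 0.6
```

This is why fairness methods test whether protected information can be reconstructed from the remaining features, rather than only checking that the protected column is absent.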

Accuracy

Modern AI algorithms are very accurate. But their opacity makes them difficult to understand. In many applications, this lack of AI transparency hinders acceptance by society. In order to operate safely, AI systems must also be able to cope with unforeseen situations and hostile attacks.

Reliability and the role of AI in coronavirus apps

What role can AI and coronavirus apps play in the fight against the coronavirus? And how reliable is the data? We asked Marc Steen, senior researcher at TNO.

‘As data scientists say: "Garbage in, garbage out." If few people are tested, you have a lot of missing data, making a tracking app less useful. If we want many people to participate in these apps, transparency is the key word. It may help if the government clearly explains what exactly these apps do:

  • Which data will and will not be used?
  • Where and for how long is it stored?
  • What may it be used for?’

An AI system can make simulations or predictions about coronavirus distribution in the future. This is useful, for example, when reversing a lockdown step by step or continuing it. ‘If you collect data from the previous 30 days, you can do simulations for the next 30 days. The government can use these insights to explore policy options and make informed choices. That’s also the approach in ERP Wise Policy Making.’
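
As a minimal illustration of the ‘30 days in, 30 days out’ idea (with invented numbers, not the models actually used in policy making), one can fit a trend to the observed window and extrapolate it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily case counts for the previous 30 days:
# exponential growth plus noise (illustration only).
days = np.arange(30)
cases = 100 * np.exp(0.05 * days) * rng.normal(1.0, 0.05, size=30)

# Fit a log-linear trend to the observed window...
slope, intercept = np.polyfit(days, np.log(cases), deg=1)

# ...and extrapolate it 30 days ahead.
future_days = np.arange(30, 60)
projection = np.exp(intercept + slope * future_days)

print(round(slope, 3))  # estimated daily growth rate, close to 0.05
```

Real epidemiological simulations are far richer than a single fitted curve, but the principle is the same: learn from the recent window, then explore what the coming weeks could look like under different assumptions.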

Our work on fair, transparent and reliable AI

  • We design fair algorithms that reduce the impact of unwanted biases. Our algorithms also protect against the improper use of protected data characteristics, such as gender and ethnicity.
  • We design transparent AI algorithms that enable the user to be accountable and clear about the factors that influence algorithmic decisions.
  • We design reliable AI-based systems that can be used effectively, safely, and reliably in dynamic, complex, and adverse situations.

2. Securing and sharing data

We assist organisations with data sharing and the corresponding data security. This support is badly needed, as data is usually fragmented across different databases and organisations. We ensure that it’s the data owners who decide whether others can access their data. And under what conditions.

Download our report ‘Responsible data sharing for AI’ (pdf)

Different technologies and methods

Depending on the wishes of the data owners, we use different technologies and methods. These range from low-threshold access to highly secure data exchange. Everything we do in the field of data security is carefully coordinated and complies with business and legal requirements.

‘With our secure data solution, we can safely analyse sensitive data from different parties. We can do this without having to share the data.’ – Thijs Veugen, Senior Information Security Scientist

Multidisciplinary approach to safe data sharing

All AI applications feed on data. They need it to improve their predictions and ultimately their decisions. Bringing together the relevant data is often difficult and expensive. It requires built-in trust, technical expertise, and a thorough knowledge of the processes and business logic. We take a multidisciplinary approach to data security in order to meet all the requirements in a given situation. We also create standards for the exchange of data between domains and sectors.

Healthcare and manufacturing

Security of shared data is particularly important in healthcare and the manufacturing industry. In healthcare, it significantly improves decision-making. Diagnoses are better and faster. This results in earlier warnings, better treatments, and lives saved.

In the manufacturing industry, suppliers often only produce part of a final product. Here, information sharing may give an unfair advantage to unscrupulous suppliers. Any mechanism that shares data therefore needs built-in security measures. In both areas, privacy is crucial and must be taken into account in the design. We use various technologies to secure data.

Our work in data security and data exchange

  • We develop algorithms for multi-party calculations and blockchain solutions. This allows organisations to learn from each other's data without having to make it public.
  • We set up ecosystems/governance structures to share data between multiple parties.
  • We develop standards for autonomous, controlled, and ultimately audit-proof data exchange, i.e., data platforms.
  • We support organisations or groups of organisations that need control over data sharing.
  • We also participate in various standardisation initiatives, such as International Data Spaces. Among other things, this lays the foundation for the concept of data autonomy.
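
The first bullet can be illustrated with additive secret sharing, one of the simplest multi-party computation building blocks. In this hypothetical sketch, three organisations jointly compute the sum of their private numbers without any party seeing another’s raw value:

```python
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def make_shares(value, n_parties):
    """Split a private value into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three organisations each hold a private number, e.g. incident counts.
private_values = [17, 42, 99]
all_shares = [make_shares(v, 3) for v in private_values]

# Each party sums the shares it receives (one from every organisation)
# without ever seeing another organisation's raw value.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# The joint total is revealed only when the partial sums are combined.
joint_total = sum(partial_sums) % PRIME
print(joint_total)  # 158
```

Each individual share is a uniformly random number, so no single party learns anything; production-grade protocols add secure channels and support richer operations than addition.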

3. Learning from limited datasets

Optimal AI algorithms depend on very large amounts of representative training data, sometimes as many as 100 million items. In practice, this amount of data is often simply not available, and limited datasets can lead to unreliable results. So, it’s important that we develop algorithms that can deal with this. We offer various methods for dealing effectively with small datasets, including the following:

  • transfer learning
  • online learning
  • using high-fidelity models to generate simulated data
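
A minimal sketch of the transfer-learning idea, using synthetic data: a representation is learned from a large unlabelled source dataset (plain PCA stands in for a pretrained model here) and then reused to classify a target dataset of only five examples per class. Everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Large unlabelled "source" dataset: 20-dimensional points whose true
# structure lives in a 2-dimensional subspace (synthetic, for illustration).
basis = rng.normal(size=(2, 20))
source = rng.normal(size=(5000, 2)) @ basis + 0.1 * rng.normal(size=(5000, 20))

# Transfer step: learn a low-dimensional representation from the source data.
source_mean = source.mean(axis=0)
_, _, components = np.linalg.svd(source - source_mean, full_matrices=False)
projector = components[:2].T  # top-2 principal directions

# Tiny labelled "target" dataset: only 5 examples per class.
class_means = np.array([[3.0, 0.0], [-3.0, 0.0]])
target_x = np.vstack([m @ basis + 0.1 * rng.normal(size=(5, 20)) for m in class_means])
target_y = np.repeat([0, 1], 5)

# Reuse the representation: nearest-centroid classification in the
# learned 2-D space needs almost no labelled target data.
z = (target_x - source_mean) @ projector
centroids = np.array([z[target_y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    zx = (x - source_mean) @ projector
    return int(np.argmin(np.linalg.norm(centroids - zx, axis=1)))

accuracy = np.mean([predict(x) == y for x, y in zip(target_x, target_y)])
print(accuracy)  # the tiny dataset separates cleanly in the learned space
```

The same pattern underlies modern transfer learning: the expensive representation is learned once from plentiful (possibly less representative) data, and only a small, cheap model is fitted on the scarce target data.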

Meeting the challenges of small datasets

Modern machine learning algorithms have millions of parameters that offer strong predictive values when trained with large datasets. Unfortunately, they perform much worse when trained with small datasets. However, often only small datasets are available as training data. What's more, obtaining sufficient data is difficult, time-consuming, and expensive. There are also legal and ethical restrictions on the amount of data. In rare cases, it may even be impossible to obtain sufficient data.

3 reliability and performance challenges

The use of AI applications on small datasets carries risks for reliability and performance. There may also be bias. This results in numerous challenges:

  1. Developing effective algorithms with small datasets that are reliable, unbiased, and secure.
  2. Combining small datasets with existing model-based approaches.
  3. Dealing with the problem of missing data and unreliable and changing data sources.

The problem of small and limited datasets is especially prominent in the fields of artificial intelligence in healthcare, predictive maintenance, and autonomous vehicles.

‘The technology of learning from small and limited datasets allows us to make use of the advantages of current developments, such as the use of artificial intelligence, without the need for prohibitively large efforts.’ – Klamer Schutte, lead scientist

Our services in learning from limited and small datasets

  • We develop transfer learning. This makes it possible to use data that’s already available but less representative.
  • We develop active and online learning that uses the availability of scarce domain expertise to annotate only essential examples.
  • We supplement small datasets by using existing, highly reliable models to generate simulated training data.
  • By integrating domain knowledge, model-based reasoning, and automated learning, we reduce the need for training data.

4. Collaboration between humans and machines

With the growing potential of AI, our lives are increasingly intertwined with the use of AI technology. It's becoming a partner rather than an instrument. Yet AI technology is always embedded in a larger organisation, in which humans determine the purpose and framework of AI technology. AI technology does not stand alone, so it must be part of a larger network: a human-machine team.

Mutual understanding and anticipation

There are crucial requirements for sustainable human-machine collaboration. People and AI technologies must be able to understand each other and anticipate each other's needs, capabilities, and shortcomings. At present, the right degree of mutual understanding and anticipation is lacking. We promote this mutual understanding through the development of:

  • AI that learns to understand and interpret human skills.
  • AI that’s able to explain its decisions and actions to humans.

By addressing these challenges, we can realise the full potential of AI as a partner.

Unpredictable or adverse circumstances

Human-machine collaboration is strong in the areas of health, mobility, safety, and security. This collaboration faces unpredictable or unfavourable circumstances. A human-machine team must be able to reorganise itself efficiently. This maximises the potential of the team. AI technology can function effectively and reliably, but only if the technology is socially adept, flexible, and aware of the larger context in which it operates.

Shared understanding

The success of human-computer collaboration depends on whether humans and AI technologies have a shared understanding. This relates to the team context, the roles of the team members, and the need for resources. AI technology must be able to proactively invite humans into the cycle, sharing information about relevant developments. Moreover, AI technology must be able to develop lasting and reliable relationships with its team members. For example, by explaining its reasoning when it proposes a solution.

Gradual process

Achieving a shared understanding of roles and competences is a gradual process. Relationships between team members develop over time through experience gained in training, exercises, and operations. These are all occasions in which AI systems receive constant input, in order to learn from the dynamics with their team members and environment.

Our services in human-machine collaboration

  • We develop AI algorithms that provide meaningful explanations regarding the advice and decisions they generate.
  • We develop frameworks, methods, and software for the design and evaluation of human-machine teams.
  • We design patterns for teams based on standardised and proven forms of collaboration between people and intelligent autonomous actors in different contexts and problem domains.

5. Privacy and ethics in artificial intelligence

The expected social impact of AI is considerable. Consider the use of AI for staff recruitment, where algorithms select employees. However, there’s a danger that some AI applications in this field reinforce discrimination. Benefiting from AI requires careful consideration and consultation on ethical standards and regulations.

‘One way of making it practical to act ethically is to enable researchers and developers to develop relevant virtues, such as justice, self-control, politeness, or courage. This allows them to apply and express these virtues during the design process.’ – Marc Steen, Senior Research Scientist

Such standards and regulations deal with, for example, the protection of privacy and data security during data collection, storage, and processing. In addition, they’re aimed at ensuring fairness, accountability, and transparency of AI-based decisions and processes. We provide advice and conduct research on social impact, regulation, and ethics.

3 challenges

We identify 3 challenges in the application of AI:

  1. Privacy

    When it comes to privacy, the application and enforcement of the GDPR and other privacy directives often lag behind current and new applications of AI. For example, AI applications force us to rethink what identifiable personal data is.

  2. Ethics

    The challenge for the application of ethical guidelines is that many guidelines are of a general nature and require interpretation. For example, autonomous driving requires an interpretation of the principle of meaningful human control.

  3. Standards and values

    Some values, such as honesty, are not as self-evident as you might think. If we want to include values in AI applications, we have to make them explicit. What’s more, standards and values depend on the context and may change over time. The challenge is to define these standards and values in such a way that the boundaries of responsibility of the different stakeholders become clear.

Our services regarding privacy and ethics in AI

  • We contribute to fundamental and applied research and participate in research networks and institutions. For example, the Leibniz Institute, VWData, and the European Big Data Values Association (BDVA).
  • We conduct research and share insights on the social, legal, and ethical implications of existing and emerging AI applications. Our advice is based on a unique combination of legal and technical expertise.
  • We conduct research and share insights on the development of standards, guidelines, policies, and processes to ensure ethical and responsible AI. For example, we participate in the NEN Norm Committee on Artificial Intelligence.
  • We conduct research on how to make responsible AI ethics an integral part of technological development. For example, through value-sensitive design (VSD).
  • We develop innovation strategies, networks, and research agendas for ethical and responsible AI in the Netherlands and Europe.
  • We actively participate in the public discussion on ethics in AI, such as recently at the ECP Annual Conference.

Get inspired


Rob de Wijk on the rise of AI in geopolitical context

Information type:
Insight
27 September 2022

Anne Fleur van Veenstra, director of science at TNO’s SA&P unit, interviews Rob de Wijk, emeritus professor of international relations in Leiden and founder of The Hague Centre for Strategic Studies. Rob is also a much sought-after expert who appears on radio and television programmes. What does the rise of AI mean geopolitically and in armed conflicts?

Bram Schot on the impact of AI on mobility

Information type:
Insight
27 September 2022

Marieke Martens, science director at TNO and professor of automated vehicles at the Eindhoven University of Technology, talks to Bram Schot. Schot was the CEO of Audi until 2020, having previously held management positions at various car makers, including Mercedes and Volkswagen. Their conversation concerns the influence of AI on mobility. How will AI impact the production process? And what does a future with autonomous vehicles look like?

Eppo Bruins on AI in different government domains

Information type:
Insight
27 September 2022

Michiel van der Meulen, chief geologist for the Geological Survey of the Netherlands (GDN), speaks with Eppo Bruins. Bruins was educated as a nuclear physicist and has spent many years working in the world of science, innovation, and technology. Between 2015 and 2021, he was a Dutch member of parliament for the Christian Union. He was recently appointed chairman of the Advisory council for science, technology and innovation (AWTI). What will AI mean for the various government domains in the coming years?

Bas Haring on AI, science and philosophy

Information type:
Insight
27 September 2022

Michiel van der Meulen, chief geologist for the Geological Survey of the Netherlands (GDN), speaks with Bas Haring. Haring originally studied artificial intelligence, which at the time still fell under the umbrella of philosophy, which is why people started calling him a philosopher. He himself feels more like a ‘folk philosopher’: Haring tries to make science and philosophy accessible to a wider audience. In 2001, he published a children’s book about evolution, Cheese and the Theory of Evolution. What better springboard for a geologist and a philosopher to talk about AI?

Arnon Grunberg on AI, creativity and morality

Information type:
Insight
27 September 2022

Peter Werkhoven, chief scientific officer at TNO, talks to Arnon Grunberg from his base in New York. Grunberg made his breakthrough in 1994 with his novel, Blue Mondays. He has since become one of the Netherlands’ best-known authors. The two talked about AI over dinner some years ago. Today, they finally get the chance to continue their conversation. What is Grunberg’s view on creativity? Can it be taught to machines? And how do humans morally relate to machines?