The expected societal impact of AI is considerable. Consider AI-based recruitment, where algorithms are used to screen and rank job applicants. Some AI applications in this field risk reinforcing discrimination. Careful consideration and deliberation on ethical standards and regulations are therefore needed to benefit from AI.
Privacy and ethics in AI
One way to make ethics practical is to enable researchers and developers to cultivate relevant virtues — e.g., justice, self-control, civility, or courage — so they can apply and express these virtues during the research and design process.
Such standards and regulations include the protection of privacy and data security during data collection, storage, and processing. In addition, they include ensuring fairness, accountability, controllability, and transparency of AI-based decisions and processes. At TNO, we advise and conduct research on societal impact, regulation, and ethics.
We identify three challenges in the governance of AI:
1. Privacy: The application and enforcement of the GDPR and other privacy guidelines often lag behind current and emerging applications of AI. AI applications, for instance, force us to rethink what counts as (in)directly identifiable personal data.
2. Ethics: The challenge in applying ethical guidelines is that many are general in nature and need interpretation. For instance, the principle of meaningful human control must be given a concrete interpretation for autonomous driving.
3. Norms and Values: Some norms and values are not as self-evident as they might seem — fairness, for example. In order to include values in AI applications, they need to be made explicit or even quantified. Furthermore, norms and values depend on context and may change over time. The challenge is how to define these norms and values in ways that facilitate accountability and empower different stakeholders.
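To illustrate what "quantifying" a value like fairness can mean in practice, one widely used formalisation is demographic parity: the rate of favourable decisions (e.g., being invited to interview) should be similar across demographic groups. Below is a minimal sketch of that metric in Python; the function name and the example data are illustrative, not part of any specific TNO tool, and demographic parity is only one of several competing fairness definitions.

```python
from collections import defaultdict

def demographic_parity_difference(decisions, groups):
    """Absolute gap in favourable-decision rates between groups.

    decisions: iterable of 0/1 outcomes (1 = favourable decision)
    groups:    iterable of group labels, one per decision
    Returns max group rate minus min group rate (0.0 = perfect parity).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes for two applicant groups:
# group A is favoured in 3 of 4 cases, group B in 1 of 4.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A single number like this makes a fairness norm auditable, but the choice of metric itself embeds value judgements — which is precisely why such norms must be made explicit and debated with stakeholders.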