AI algorithms must be fair, unbiased and accurate, but making them so can be difficult. Data sets, the fuel of AI algorithms, are often biased. Designers of AI algorithms need to become aware of unwanted biases and should develop methods to detect and remove them. Anti-discrimination laws demand that decisions taken by AI algorithms must not depend on protected data attributes, such as gender and ethnicity. Simply removing these attributes from the data is not enough, because the same information can still be recovered from correlated attributes.
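A minimal sketch can illustrate the last point. The snippet below uses synthetic data (all names and numbers are hypothetical, not from the text): a binary protected attribute correlates with a "proxy" feature, so even after the protected column is dropped, a trivial threshold rule on the proxy alone recovers the group membership well above chance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g. a binary group label).
protected = rng.integers(0, 2, size=n)

# A correlated proxy feature (e.g. a postal-code region score):
# its value shifts with the protected group, plus noise.
proxy = protected + rng.normal(0.0, 0.5, size=n)

# An "anonymized" data set that drops the protected column still
# contains the proxy, and the two remain strongly correlated.
corr = np.corrcoef(protected, proxy)[0, 1]
print(f"correlation between proxy and protected attribute: {corr:.2f}")

# A trivial threshold classifier on the proxy alone predicts the
# protected attribute far better than the 50% chance level.
predicted = (proxy > 0.5).astype(int)
accuracy = (predicted == protected).mean()
print(f"accuracy of recovering the protected attribute: {accuracy:.2f}")
```

This is why bias-detection methods typically audit all attributes for statistical dependence on the protected ones, rather than only deleting the protected columns.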
Modern AI algorithms are very accurate, but their 'black box' nature makes them non-transparent and hard to understand. In many applications, this lack of transparency impedes their acceptance by society. To operate safely and securely, AI systems must also be able to deal with unforeseen situations and hostile attacks.
In conclusion, we need to design and verify AI systems that are fair, transparent and trustworthy.