Fair Machine Learning combats biases
An AI tool bases its calculations on data, so if the data is biased, the calculations will be biased. If a profession historically favoured men, an AI recruitment tool trained on that history will adopt the same preference and may wrongly judge male candidates more favourably. This can be prevented by de-correlating the data from gender, so that gender and its possible proxies are no longer predictive of job suitability. TNO expects Fair Machine Learning to make it possible to select suitable candidates in a fair and unbiased manner.
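Whether such de-correlation has succeeded can be checked empirically: if a classifier can no longer predict gender from the remaining characteristics better than chance, the proxies have been removed. Below is a minimal sketch of this check, assuming scikit-learn and entirely synthetic, hypothetical data; it is an illustration, not TNO's method.

```python
# Sketch: test whether gender is still predictable from the remaining
# features. All data and feature meanings here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical applicant features (e.g. years of experience, postcode
# region, part-time ratio); the third feature deliberately leaks gender.
X = rng.normal(size=(1000, 3))
gender = (X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# If gender can be predicted well above chance (~0.50 accuracy here),
# some feature still acts as a proxy and the data is not de-correlated.
leakage = cross_val_score(LogisticRegression(), X, gender, cv=5).mean()
print(f"gender predictability: {leakage:.2f} (chance = 0.50)")
```

The same check can be repeated after de-correlation: a score near 0.50 indicates that gender and its proxies no longer carry predictive signal.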
TNO carries out Fair Machine Learning with generative adversarial network models
TNO carries out the de-correlation for Fair Machine Learning using a Generative Adversarial Network (GAN) model. This model tries to balance two conflicting criteria:
- Minimising the number of changes to the dataset
- Making sure that somebody’s gender is no longer identifiable from the remaining characteristics
In weighing up these criteria, the model generalises individuals' specific characteristics into broader ones: postcodes into neighbourhoods, neighbourhoods into cities, and cities into countries, for example. The end result is a dataset from which a person's gender (criterion 2) is practically unrecognisable. In short, the gender bias has disappeared from the dataset.
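The balancing act between the two criteria can be pictured as a small two-player training loop. Below is a minimal sketch, assuming PyTorch, with illustrative network sizes, losses and synthetic records; it is not TNO's actual model. A generator rewrites each record under a reconstruction penalty (criterion 1), while an adversary tries to recover gender from the rewritten record; the generator is rewarded for pushing the adversary back to chance level (criterion 2).

```python
# Sketch of the two-criteria adversarial set-up. All network sizes,
# losses, weights and data are illustrative assumptions.
import torch
import torch.nn as nn

n_features = 8

# Generator: rewrites a record while staying close to the original
# (criterion 1: minimise the number of changes to the dataset).
generator = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                          nn.Linear(16, n_features))

# Adversary: tries to recover gender from the rewritten record
# (criterion 2: gender should no longer be identifiable).
adversary = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                          nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
a_opt = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

x = torch.randn(256, n_features)                 # hypothetical records
gender = torch.randint(0, 2, (256, 1)).float()   # recorded gender labels

for step in range(2000):
    # 1) Train the adversary to detect gender in the rewritten data.
    x_fair = generator(x).detach()
    a_opt.zero_grad()
    bce(adversary(x_fair), gender).backward()
    a_opt.step()

    # 2) Train the generator to keep records close to the originals
    #    while steering the adversary towards chance-level predictions
    #    (a target probability of 0.5 for every record).
    g_opt.zero_grad()
    x_fair = generator(x)
    loss = mse(x_fair, x) + bce(adversary(x_fair), torch.full_like(gender, 0.5))
    loss.backward()
    g_opt.step()
```

In practice the reconstruction term would be weighted against the adversarial term, making the trade-off between data fidelity and de-correlation explicit.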
Fair machine learning is relevant to all forms of discrimination arising from historical data
Fair Machine Learning is relevant to all forms of discrimination and prejudice that arise from the use of biased data. In addition to recruitment and selection, it is also important that AI algorithms are fair in supervision, inspection and enforcement tasks. Gender, religion and ethnicity should not be used as selection characteristics.
If used responsibly, AI machine learning tools can increase the efficiency and effectiveness of all kinds of selection tasks that involve finding comparable individuals. Without safeguards, however, they structurally and systematically reinforce historical biases that would be less conspicuous without them. Fair Machine Learning reduces and prevents such discrimination.
Christopher Brewster, Senior scientist
Christopher Brewster is a Senior Scientist in the Data Science group and Professor of the Application of Emerging Technologies in the Institute of Data Science, Maastricht University. His research has focussed on the application of Semantic Technologies, Open and Linked Data, interoperability architectures and Data Governance, mostly to the food and agriculture domains.
Daniël Worm, Senior consultant
Joris Sijs, role not specified
Judith Dijk, Senior research scientist
Judith specialises in extracting information from camera images. She now applies the subject of her PhD thesis in Physics, obtained 18 years ago, to her work as a research scientist at TNO, including in a research programme on camera systems for the Dutch Ministry of Defence.