TNO carries out the de-correlation for Fair Machine Learning using a Generative Adversarial Network (GAN) model. The model balances two conflicting criteria (see the sketch after this list):
1) Minimising the number of changes to the dataset
2) Making sure that somebody’s gender is no longer identifiable from the remaining characteristics
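The exact architecture TNO uses is not described here, but the adversarial trade-off between these two criteria can be illustrated with a short sketch. In the hypothetical PyTorch code below, a "repair" network rewrites the features while an adversary tries to recover gender from the result; the network sizes, feature dimension and trade-off weight `lam` are all illustrative assumptions, not TNO's actual choices.

```python
# Minimal sketch of adversarial de-correlation (illustrative, not TNO's model).
import torch
import torch.nn as nn

n_features = 10  # assumed feature dimension (illustrative)

# "Repair" network: rewrites the dataset, kept close to the original (criterion 1).
repair = nn.Sequential(
    nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_features)
)
# Adversary: tries to predict gender from the rewritten data (criterion 2).
adversary = nn.Sequential(
    nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1)
)

opt_r = torch.optim.Adam(repair.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()
lam = 1.0  # trade-off weight between the two criteria (assumption)

def train_step(x, gender):
    # 1) Train the adversary to recognise gender in the rewritten data.
    x_hat = repair(x).detach()
    opt_a.zero_grad()
    bce(adversary(x_hat), gender).backward()
    opt_a.step()

    # 2) Train the repair network: change the data as little as possible
    #    (criterion 1) while maximising the adversary's error (criterion 2).
    opt_r.zero_grad()
    x_hat = repair(x)
    loss = mse(x_hat, x) - lam * bce(adversary(x_hat), gender)
    loss.backward()
    opt_r.step()

# Toy data: 64 individuals, gender as a 0/1 label.
x = torch.randn(64, n_features)
gender = torch.randint(0, 2, (64, 1)).float()
for _ in range(200):
    train_step(x, gender)
```

When training converges, the adversary performs no better than chance on the rewritten data, meaning gender is no longer recoverable from it, while the reconstruction term keeps the changes to the dataset small.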
While balancing these criteria, the model generalises the existing characteristics of individuals into broader ones. An example would be generalising postcodes into neighbourhoods, neighbourhoods into cities, and cities into countries. The end result is a dataset from which a person's gender can in practice no longer be inferred (criterion 2). In short, the gender bias has disappeared from the dataset.
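To make the generalisation step concrete, the toy Python snippet below climbs such a location hierarchy; the postcodes, place names and level numbering are made up for illustration.

```python
# Illustrative generalisation hierarchy for location data (made-up values).
postcode_to_neighbourhood = {"2595 DA": "Bezuidenhout", "1011 AB": "Nieuwmarkt"}
neighbourhood_to_city = {"Bezuidenhout": "The Hague", "Nieuwmarkt": "Amsterdam"}
city_to_country = {"The Hague": "Netherlands", "Amsterdam": "Netherlands"}

def generalise(postcode: str, level: int) -> str:
    """Return the location at the requested generalisation level:
    0 = postcode, 1 = neighbourhood, 2 = city, 3 = country."""
    value = postcode
    for mapping in (postcode_to_neighbourhood,
                    neighbourhood_to_city,
                    city_to_country)[:level]:
        value = mapping[value]
    return value

print(generalise("2595 DA", 0))  # '2595 DA'
print(generalise("2595 DA", 2))  # 'The Hague'
print(generalise("2595 DA", 3))  # 'Netherlands'
```

Each step up the hierarchy makes individuals harder to distinguish, and thereby harder to link to gender, at the cost of some detail in the data.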
Fair Machine Learning is relevant to all forms of discrimination and prejudice that arise from the use of biased data. Beyond recruitment and selection, it is equally important that AI algorithms are fair in supervision, inspection and enforcement tasks. Gender, religion and ethnicity should not be used as selection characteristics. Used responsibly, machine learning tools can make all kinds of selection tasks more efficient and effective at finding comparable individuals. Used carelessly, however, they structurally and systematically reinforce historical biases that would be less conspicuous without them. Fair Machine Learning reduces and prevents such discrimination.