Bias test to prevent algorithms discriminating unfairly

COMPUTERS are getting ethical. A new approach for testing whether algorithms contain hidden biases aims to prevent automated systems from perpetuating human discrimination.

Machine learning is increasingly being used to make sensitive decisions, says Matt Kusner at the Alan Turing Institute in London. In some US states, judges make sentencing decisions and set bail conditions using algorithms that calculate the likelihood that someone will reoffend. Other algorithms assess whether a person should be offered a loan or a job interview.

But it is often unclear how these systems reach their conclusions, which makes it impossible to tell whether those conclusions are fair. An algorithm might conclude that people from a certain demographic are less likely to pay back a loan, for example, if it is trained on a data set in which loans were unfairly distributed in the first place. “In machine learning, we have this problem of racism in and racism out,” says Chris Russell, also at the Alan Turing Institute.
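The article does not spell out Kusner and Russell's test itself, but the "racism in, racism out" mechanism can be illustrated with a small sketch. The code below (synthetic data; the variable names such as `postcode` are purely illustrative assumptions, not anything from the article) trains a loan-approval model on historically biased decisions and then runs a simple demographic-parity style audit on its predictions. This is a generic fairness check, not the specific approach the researchers propose.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic applicants. 'group' is the protected attribute; 'postcode' is a
# proxy feature that correlates strongly with it (illustrative assumption).
group = rng.integers(0, 2, size=n)
income = rng.normal(50, 10, size=n)                          # same distribution for both groups
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)   # 90% aligned with group

# Historical approvals are biased: at identical income, group 0 applicants
# were approved less often ("racism in").
p_approve = 1 / (1 + np.exp(-(income - 50) / 10)) * np.where(group == 0, 0.6, 1.0)
approved = rng.random(n) < p_approve

# Train without the protected attribute; the proxy still lets the model
# reconstruct the historical bias ("racism out").
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Crude audit: compare predicted approval rates across groups
# (a demographic-parity style check).
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Running this prints markedly different approval rates for the two groups even though the protected attribute was never given to the model, which is why simply dropping sensitive fields is not enough and dedicated bias tests are needed.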