UW-Madison researchers tackle bias in algorithms

If you’ve ever applied for a loan or checked your credit score, algorithms have played a role in your life. These mathematical models allow computers to use data to predict many things — who is likely to pay back a loan, who may be a suitable employee, or whether a person who has broken the law is likely to reoffend, to name just a few examples.

Yet while some may assume that computers remove human bias from decision-making, research has shown that is not true. Biases on the part of those designing algorithms, as well as biases in the data used by an algorithm, can introduce human prejudices into a situation. A seemingly neutral process becomes fraught with complications.

For the past year, University of Wisconsin–Madison faculty in the Department of Computer Sciences have been working on tools to address unfairness in algorithms. Now, a $1 million grant from the National Science Foundation will accelerate their efforts. Their project, “Formal Methods for Program Fairness,” is funded through NSF’s Software and Hardware Foundations program.

UW-Madison computer science professors Aws Albarghouthi, Shuchi Chawla, Loris D’Antoni and Jerry Zhu are leading the development of a tool called FairSquare. Computer sciences graduate students Samuel Drews and David Merrell are also involved. What sets FairSquare apart is that it will not only detect bias, but also employ automated solutions.

“Ultimately, we’d like this to be a regulatory tool when you’re deploying an algorithm making sensitive decisions. You can verify it’s indeed fair, and then fix it if it’s not,” says Albarghouthi.

Decision-making algorithms can be mysterious even to those who use them, say the researchers, making a tool like FairSquare necessary.
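To make the idea of checking an algorithm for bias concrete, the sketch below shows a simplified example of the kind of group-fairness criterion a tool in this space might evaluate. It is not FairSquare’s actual method; the decision function, thresholds, and population data are invented for illustration.

    import random

    # Hypothetical stand-in for an algorithm making a sensitive decision.
    def hire_decision(years_experience: float, test_score: float) -> bool:
        return 0.5 * years_experience + 0.1 * test_score > 11.0

    # Fraction of a group that the algorithm accepts.
    def acceptance_rate(group):
        accepted = sum(hire_decision(p["experience"], p["score"]) for p in group)
        return accepted / len(group)

    # Hypothetical samples from two demographic groups whose feature
    # distributions differ slightly (e.g., because of biased historical data).
    random.seed(0)
    group_a = [{"experience": random.gauss(10, 2), "score": random.gauss(60, 10)}
               for _ in range(10_000)]
    group_b = [{"experience": random.gauss(9, 2), "score": random.gauss(55, 10)}
               for _ in range(10_000)]

    # One common group-fairness criterion: acceptance rates for the two groups
    # should be close; here we require their ratio to be at least 0.8.
    rate_a, rate_b = acceptance_rate(group_a), acceptance_rate(group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    print(f"group A: {rate_a:.2f}  group B: {rate_b:.2f}  ratio: {ratio:.2f}")
    print("passes this fairness check" if ratio >= 0.8 else "flags potential bias")

The article’s point is that a check like this on its own is not enough: FairSquare is described as aiming both to verify that a deployed algorithm is fair and to help fix it when it is not.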
