I’ll be running an ICML tutorial: “Defining and Designing Fair Algorithms”

The tutorial will discuss the most popular notions of fairness — calibration, predictive parity, equal opportunity/odds, statistical parity, and others — and the serious shortcomings of each definition. Given these problems, where should we go next as a field?
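To make a couple of these notions concrete, here is a minimal sketch of how statistical parity and equal opportunity are typically measured from binary predictions. The function names and toy data are illustrative, not from the tutorial itself:

```python
import numpy as np

def statistical_parity_diff(y_pred, group):
    """P(pred = 1 | group 0) - P(pred = 1 | group 1):
    the gap in positive-prediction rates between groups."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between groups,
    i.e. P(pred = 1 | y = 1, group 0) - P(pred = 1 | y = 1, group 1)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy example: 8 individuals, two groups (0 and 1).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(statistical_parity_diff(y_pred, group))          # -0.25
print(equal_opportunity_diff(y_true, y_pred, group))   # about -0.33
```

A classifier satisfying both definitions would drive both gaps to zero, and one of the tutorial's central points is that these (and related) criteria generally cannot all be satisfied at once, and each can be harmful when enforced in isolation.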

The abstract is below; hope to see you in Stockholm on July 10th!

Machine learning algorithms are increasingly used to guide decisions by human experts, including judges, doctors, and managers. Researchers and policymakers, however, have raised concerns that these systems might inadvertently exacerbate societal biases. To measure and mitigate such potential bias, there has recently been an explosion of competing mathematical definitions of what it means for an algorithm to be fair. But there is a problem: nearly all of the prominent definitions of fairness suffer from subtle shortcomings that can lead to serious adverse consequences when used as design objectives. In this tutorial, we illustrate the problems that lie at the foundation of this nascent field of algorithmic fairness, drawing on ideas from machine learning, economics, and legal theory. In doing so, we hope to offer researchers and practitioners a way to advance the area.
