Abstract:
Individual Fairness (IF) is an intuitive and desirable notion of fairness: we want ML models to treat similar individuals similarly, that is, to be fair to every person. For example, two résumés that differ only in the candidate's name and gender pronouns should be treated similarly by the model. Despite the intuition, training ML/AI models that abide by this rule, both in theory and in practice, poses several challenges. In this talk, I will introduce a notion of Distributional Individual Fairness (DIF), highlighting similarities and differences with the original notion of IF introduced by Dwork et al. in 2011. DIF suggests a transport-based regularizer that is easy to incorporate into modern training algorithms, with the regularization strength controlling the fairness-accuracy tradeoff. The corresponding algorithm is theoretically guaranteed to train certifiably fair ML models and achieves individual fairness in practice on a variety of tasks. DIF can also be readily extended to other ML problems, such as Learning to Rank.
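The training recipe sketched in the abstract — a task loss plus a tunable fairness regularizer that penalizes output gaps between comparable individuals — can be illustrated in a few lines. This is a minimal sketch, not the group's actual DIF/transport-based implementation: the comparability map (flipping a hypothetical binary protected feature in the last column), the toy data, and all function names are assumptions made for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterpart(X):
    # Hypothetical comparability map: flip the binary protected attribute
    # stored in the last column; all other features stay the same.
    Xc = X.copy()
    Xc[:, -1] = 1.0 - Xc[:, -1]
    return Xc

def train(X, y, lam, steps=2000, lr=0.1):
    """Logistic regression with an individual-fairness penalty of strength lam.

    Loss = logistic loss + (lam/2) * mean (f(x_i) - f(x'_i))^2,
    where x'_i is the comparable counterpart of x_i.
    """
    w = np.zeros(X.shape[1])
    Xc = counterpart(X)
    n = len(y)
    for _ in range(steps):
        p, pc = sigmoid(X @ w), sigmoid(Xc @ w)
        grad = X.T @ (p - y) / n                          # logistic-loss gradient
        grad += lam * ((p - pc) * p * (1 - p)) @ X / n    # penalty gradient, x term
        grad -= lam * ((p - pc) * pc * (1 - pc)) @ Xc / n # penalty gradient, x' term
        w -= lr * grad
    return w

def gap(w, X):
    # Average output gap between each individual and its counterpart.
    return np.abs(sigmoid(X @ w) - sigmoid(counterpart(X) @ w)).mean()

# Toy data where the label leaks the protected attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, -1] = (X[:, -1] > 0).astype(float)
y = (X[:, 0] + 0.5 * X[:, -1] > 0).astype(float)

w_plain = train(X, y, lam=0.0)   # accuracy only
w_fair = train(X, y, lam=10.0)   # trades some accuracy for fairness

print(gap(w_plain, X), gap(w_fair, X))
```

Raising `lam` shrinks the model's reliance on the protected feature, so comparable individuals receive more similar outputs; this is the fairness-accuracy tradeoff the abstract refers to, controlled by a single regularization strength.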
What You’ll Learn:
This talk presents the key practical insights from a series of 8 papers published by our group in top ML/AI conferences (NeurIPS/ICLR/ICML).
Mikhail is a Research Staff Member at IBM Research and the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. His research interests include model fusion and federated learning, algorithmic fairness, applications of optimal transport in machine learning, and Bayesian (nonparametric) modeling and inference. Before joining IBM, he completed his Ph.D. in Statistics at the University of Michigan, where he worked with Long Nguyen. He received his bachelor's degree in applied mathematics and physics from the Moscow Institute of Physics and Technology.
(Thursday) 12:45 PM - 1:30 PM