DPE Tuesday Talk Series: Ines Levin, “Machine Unlearning: Adjusting Automatically for Human Biases in Decision Making”

The Division of Politics and Economics invites the CGU community to attend this week’s Tuesday Lunch Talk featuring Ines Levin, assistant professor in the Department of Political Science at the University of California, Irvine. Lunch will be provided.

Bio:
Ines Levin’s research focuses on quantitative research methods with substantive applications in the areas of elections, public opinion, and political behavior.


Talk Title: Machine Unlearning: Adjusting Automatically for Human Biases in Decision Making

Description:
Recent technological advances are making it possible for organizations of all types, from private companies to government agencies, to amass unprecedented quantities of data and employ novel computational tools to inform and automate decisions. The mechanization of decision-making does not necessarily point to a future in which decisions, and the societal outcomes that follow, are free of the prejudice and unfairness that characterize human decision-making. If algorithms are trained on data reflecting human biases, algorithmic decision-making can easily inherit the biases associated with human judgment. But just as algorithms can be taught to learn from past human experience, they can be taught to forget undesirable behaviors.
In this presentation, Dr. Levin illustrates the general problem (i.e., how algorithms may discriminate just as humans do), proposed solutions, and the limitations of those solutions, using synthetic examples and data from a recent audit study of bureaucratic decision-making. She first shows that supervised learning algorithms are prone to reproducing the human biases contained in the data sets used to train them. She then shows how, for different types of supervised learning algorithms, the training data or the algorithm itself can be adjusted so that conclusions and advice drawn from the analysis do not replicate biases in the training examples. Lastly, she discusses the limitations of some of the proposed solutions, most importantly the trade-off between achieving greater predictive accuracy and ensuring fair outcomes.
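
The general idea in the abstract can be made concrete with a small sketch. The Python snippet below is an illustration only, using assumed synthetic data rather than anything from the talk or Dr. Levin's own work: it trains a scikit-learn logistic regression on biased historical labels, shows that the model reproduces the disparity between groups, and then applies a Kamiran & Calders-style sample reweighing as one example of adjusting the training data. The final line hints at the accuracy-versus-fairness trade-off the talk addresses.

# Minimal sketch (not Dr. Levin's method or data): synthetic decisions
# biased against one group, a classifier that inherits the bias, and a
# simple reweighing adjustment. All names and parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)     # protected attribute: 0 or 1
merit = rng.normal(size=n)        # true qualification, identical across groups

# Biased historical labels: group 1 is penalized regardless of merit.
label = ((merit - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)
X = np.column_stack([merit, group])

def positive_rates(model, X, group):
    """Share of positive predictions per group (a demographic-parity check)."""
    pred = model.predict(X)
    return pred[group == 0].mean(), pred[group == 1].mean()

# 1) Naive model: trained directly on the biased labels, so it reproduces them.
naive = LogisticRegression().fit(X, label)
print("naive positive rates by group:", positive_rates(naive, X, group))

# 2) Reweighed model: weight each (group, label) cell so that group and
#    label look statistically independent in the training data.
weights = np.empty(n)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        weights[mask] = ((group == g).mean() * (label == y).mean()) / mask.mean()

reweighed = LogisticRegression().fit(X, label, sample_weight=weights)
print("reweighed positive rates by group:", positive_rates(reweighed, X, group))

# The adjusted model scores lower against the biased labels: the trade-off
# between predictive accuracy on historical data and fairer outcomes.
print("accuracy on biased labels:", naive.score(X, label), "vs", reweighed.score(X, label))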