Dissertation Defense
Unfairness Detection and Evaluation in Data-driven Decision-making Algorithms
This event is free and open to the public.

Hybrid Event: 3316 EECS / Zoom
Abstract: Recent years have witnessed a surge in the application of data-driven algorithms to assist human decision-making across various sectors, including industry, government, and non-profit organizations. Many of these applications significantly impact our daily lives. Concerns are growing about the potential biases that may be present in the data, amplified in the algorithmic processes, or introduced by the algorithms themselves. Such biases have been observed to result in injustices, particularly against specific demographic groups, highlighting the need for careful examination and correction. These concerns have given rise to a recent body of literature, which has focused primarily on biases in alphanumeric relational tables and the consequent biases in labels applied in a classification task (such as whom to admit). This thesis focuses on developing efficient algorithms to detect potential biases in richer and more complex datasets and to assess the fairness of algorithmic results in tasks other than classification. Specifically, it addresses the following problems:
In-processing: Relational queries are often used to define candidate pools from available data sources. This thesis develops techniques to minimally modify such relational queries so that their results satisfy diversity constraints. This approach facilitates the selection of a diverse candidate pool while preserving the core objectives of the selection process.
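To make the idea of minimal query modification concrete, here is an illustrative sketch (not the thesis's actual algorithm): a single numeric selection predicate is relaxed one observed value at a time, stopping at the first threshold whose result pool meets a per-group cardinality constraint. All names, data, and the constraint form are assumptions for illustration.

```python
# Hypothetical sketch: minimally relax a query's numeric threshold so the
# selected pool satisfies a per-group diversity (cardinality) constraint.
# Data, attribute names, and constraints are illustrative only.

def refine_threshold(candidates, attr, group, min_per_group, start):
    """Lower the threshold on `attr` to the highest observed value at which
    every group in `min_per_group` reaches its required count."""
    # Try thresholds at observed attribute values, highest (least change) first.
    for t in sorted({c[attr] for c in candidates}, reverse=True):
        if t > start:  # never tighten the original predicate
            continue
        pool = [c for c in candidates if c[attr] >= t]
        counts = {}
        for c in pool:
            counts[c[group]] = counts.get(c[group], 0) + 1
        if all(counts.get(g, 0) >= k for g, k in min_per_group.items()):
            return t, pool
    return None, []

candidates = [
    {"name": "a", "score": 3.9, "group": "G1"},
    {"name": "b", "score": 3.8, "group": "G1"},
    {"name": "c", "score": 3.6, "group": "G2"},
    {"name": "d", "score": 3.4, "group": "G2"},
]
# Original query "score >= 3.7" selects only G1 members; requiring at
# least one member of each group relaxes the threshold to 3.6.
t, pool = refine_threshold(candidates, "score", "group",
                           {"G1": 1, "G2": 1}, start=3.7)
```

The search order (highest threshold first) is what makes the modification "minimal" here; a real system would generalize this to multiple predicates and an explicit distance measure between queries.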
Post-processing: This thesis introduces methods to evaluate the fairness of algorithmic outcomes under certain fairness metrics, with a focus on ranking. In particular, it identifies groups that are disproportionately under-represented in the top ranks.
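One simple way to phrase "disproportionately under-represented in top ranks" is to compare each group's share of every top-k prefix against its overall share in the ranking. The sketch below is an illustrative baseline under that assumption, not the thesis's method; the proportionality factor `alpha` is a hypothetical parameter.

```python
# Hypothetical sketch: flag groups whose share of the top-k prefix falls
# below alpha times their overall share, for every prefix length k.
# `groups` lists the group label of the item at each rank, best first.

def underrepresented_prefixes(groups, alpha=0.8):
    """Return {k: [flagged groups]} for each prefix where some group's
    top-k proportion is below alpha * its overall proportion."""
    n = len(groups)
    overall = {g: groups.count(g) / n for g in set(groups)}
    counts = {g: 0 for g in overall}
    flags = {}
    for k, g in enumerate(groups, start=1):
        counts[g] += 1
        low = [h for h, c in counts.items()
               if c / k < alpha * overall[h]]
        if low:
            flags[k] = low
    return flags

# G2 holds 25% of items overall but none of the top 3 positions,
# so it is flagged for prefixes k = 1, 2, 3.
flags = underrepresented_prefixes(["G1", "G1", "G1", "G2"])
```

Checking every prefix rather than a single cutoff matters because a ranking can look fair at k = 100 while being heavily skewed at k = 10, where most attention goes.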
Post-processing: This thesis addresses the overlooked aspect of fairness in data streams by proposing algorithms to measure fairness metrics with time decay, providing an evolving reflection of fairness for classification tasks in non-stationary environments.
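As a rough illustration of a time-decayed fairness metric over a stream, the sketch below maintains exponentially decayed per-group counts of items and positive predictions, and reports the gap between groups' decayed positive rates (a decayed demographic-parity gap). This is an assumed formulation for illustration, not the thesis's specific algorithm, and the decay factor is a hypothetical choice.

```python
# Hypothetical sketch: exponentially time-decayed demographic-parity gap
# over a classification stream. Decay factor and metric form are
# illustrative assumptions, not the thesis's definitions.

class DecayedParityGap:
    """Per group, keep a decayed count of observed items and of positive
    predictions; the gap is the spread between groups' decayed rates."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.seen = {}  # group -> decayed item count
        self.pos = {}   # group -> decayed positive-prediction count

    def update(self, group, positive):
        # Decay all groups' counts so old observations fade, then add
        # the new observation at full weight.
        for g in self.seen:
            self.seen[g] *= self.decay
            self.pos[g] *= self.decay
        self.seen[group] = self.seen.get(group, 0.0) + 1.0
        self.pos[group] = self.pos.get(group, 0.0) + (1.0 if positive else 0.0)

    def gap(self):
        rates = [self.pos[g] / self.seen[g]
                 for g in self.seen if self.seen[g] > 0]
        return max(rates) - min(rates) if len(rates) > 1 else 0.0

# A stream that always favors G1 drives the decayed gap to 1.0.
d = DecayedParityGap(decay=0.9)
for _ in range(10):
    d.update("G1", True)
    d.update("G2", False)
```

Because all counts decay at the same rate, the metric tracks recent behavior: a classifier that was unfair long ago but has since corrected itself will show a shrinking gap, which is the point of measuring fairness in non-stationary environments.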