Dissertation Defense

An Empirical Exploration of Algorithmic Accountability

Divya Ramesh, Ph.D. Candidate
WHERE:
3725 Beyster Building

Hybrid Event: 3725 BBB / Zoom (Passcode: 073478)

Abstract: AI governance could ensure that the benefits of the technology are distributed equitably through society. Several stakeholders, including the private sector, civil society, academics, lawyers, and policymakers, have called for algorithmic accountability as a necessary mechanism for enabling such governance. Despite this heightened focus, accountability remains an unsettled and contested problem across technical, policy, and legal domains. The term is often used without agreement on what it entails: who should be accountable, to whom, and for what. These conceptual tensions are not merely academic; they shape how institutions respond to harms and design interventions.

This dissertation aims to capture divergent and sometimes contradictory perspectives on algorithmic accountability, not to offer a single definition, but to provide a pathway for computing researchers interested in the topic. It employs a mixed-methods approach, consisting of critical discourse analysis, in-depth qualitative interviews, and content analysis, to unpack these tensions, identify the underlying assumptions and power dynamics embedded in existing discourse, and chart a path forward.

The main argument this dissertation puts forth is that algorithmic accountability, contrary to what the phrase suggests, is less a property of an algorithm or a technical system than an iterative, context-sensitive process, embedded in system design and evaluation, that continuously negotiates trade-offs among competing values and actors. The findings demonstrate that algorithmic accountability is a contested and multidimensional concept, marked by fuzziness across its meanings, its aims, the understandings and dynamics of its stakeholders, and its barriers. Through two case studies of deployed AI systems, this dissertation shows how interpretivist empirical inquiry can serve as a diagnostic tool to uncover contextual complexities and hidden dimensions of accountability that are overlooked in current discourse. These insights challenge efforts to standardize accountability as a fixed technical criterion, revealing the risk that rigid frameworks may obscure local realities and stakeholder needs. The fuzziness of algorithmic accountability discourse can instead be leveraged to generative ends: the dissertation concludes with directions for realizing accountability in the practice of AI innovation. Together, this dissertation makes design, empirical, and theoretical contributions that advance scholarly knowledge on algorithmic accountability.

Organizer

CSE Graduate Programs Office

Faculty Host

Prof. Nikola Banovic