Dissertation Defense

Interactional Slingshots: Providing Support Structure to User Interactions in Hybrid Intelligence Systems

Sai Gouravajhala


ABSTRACT: Artificial intelligence (AI) systems often fail in contexts that require human understanding, are never before seen, or are complex. In such cases, although an AI-only approach cannot solve the full task, its ability to solve a piece of the task can be combined with human effort to handle complexity and uncertainty more robustly. A hybrid intelligence system, one that combines human and machine skill sets, can make intelligent systems more operable in real-world settings.

In this dissertation, we propose the idea of using interactional slingshots as a means of providing support structure to user interactions in such systems. Much as gravitational slingshots provide boosts to spacecraft en route to their destinations, interactional slingshots provide boosts to user interactions en route to solving tasks.

To make this a tractable socio-technical problem, we explore the idea in the context of data annotation, especially in domains where AI methods fail to solve the overall task. Annotated (labeled) data is crucial for successful AI methods: problems in such domains require human understanding to solve fully, yet annotation presents challenges related to annotator expertise, annotation freedom, and context curation from the data. First, we provide support by nudging non-experts’ interactions as they annotate conversational data to create collective memory. Second, we add support by assisting non-expert users as they annotate 3D point cloud data to ground natural language references to objects. Finally, we supply support by guiding expert and non-expert user interactions as they disentangle conversations across multiple technical domains.

We demonstrate that building hybrid intelligence systems with each of these interactional slingshot support mechanisms (nudging, assisting, and guiding a user’s interaction with data) improves annotation outcomes such as annotation speed, accuracy, and effort level, even when annotators’ expertise and skill levels vary.


Sonya Siddique

Faculty Host

Prof. Mark Ackerman