Dissertation Defense
Simulation-based Approaches for Evaluating Information Elicitation and Information Aggregation Mechanisms
This event is free and open to the public.
Hybrid Event: Zoom (Passcode: 877976)
Abstract: The mathematical study of information elicitation has led to elegant theories about the behavior of economic agents asked to share their private information. Similarly, the study of information aggregation has illuminated the possibility of combining independent sources of imperfect information such that the combined information is more valuable than that from any single source. However, despite a flourishing academic literature in both areas, some of their key insights have yet to be embraced in many of their purported applications. In this dissertation, we revisit prior work on two such applications, crowdsourcing and peer assessment, to address overlooked obstacles to more widespread adoption of these literatures' key contributions.
We apply simulation-based methods to the evaluation of information elicitation and aggregation mechanisms. First, we use real crowdsourcing data to explore common assumptions about the way that crowd workers make mistakes in labeling. We find different forms of heterogeneity among both tasks and workers, which have different implications for the design and evaluation of label aggregation algorithms.
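As a rough illustration of the kind of simulation involved (not the dissertation's actual experiments), the sketch below generates binary labeling tasks, draws heterogeneous worker accuracies, and checks how unweighted majority voting fares compared with a homogeneous crowd of the same average accuracy; every distribution and parameter value is an assumption invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_workers = 1000, 7
true_labels = rng.integers(0, 2, size=n_tasks)

# Heterogeneous crowd: each worker has their own (hypothetical) accuracy.
worker_acc = rng.uniform(0.55, 0.95, size=n_workers)

# A worker reports the true label with probability equal to their accuracy.
correct = rng.random((n_tasks, n_workers)) < worker_acc
reports = np.where(correct, true_labels[:, None], 1 - true_labels[:, None])

# Aggregate by unweighted majority vote.
majority = (reports.mean(axis=1) > 0.5).astype(int)
print("heterogeneous crowd, majority-vote accuracy:", (majority == true_labels).mean())

# Baseline: a homogeneous crowd with the same average accuracy.
correct_h = rng.random((n_tasks, n_workers)) < worker_acc.mean()
reports_h = np.where(correct_h, true_labels[:, None], 1 - true_labels[:, None])
majority_h = (reports_h.mean(axis=1) > 0.5).astype(int)
print("homogeneous crowd, majority-vote accuracy:", (majority_h == true_labels).mean())
```

Richer aggregation algorithms (e.g., ones that weight workers by estimated accuracy) would be evaluated in the same way, by comparing their outputs against the known ground truth of the simulation.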
Then, we turn to peer assessment. Despite many potential benefits from peer grading, the traditional paradigm, where one instructor grades each submission, predominates. One persistent impediment to adopting a new grading paradigm is doubt that it will assign grades that are at least as good as those that would have been assigned under the existing paradigm. We address this impediment by using tools from economics to define a practical framework for determining when peer grades clearly exceed the standard set by the instructor baseline.
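One simple way to make such a comparison concrete (a toy sketch, not the framework developed in the dissertation) is to simulate submissions with known true grades, add noise to both a single instructor's grades and to aggregated peer grades, and ask which lands closer to the truth; all noise levels and class sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_submissions, n_peers = 500, 4
true_grade = rng.uniform(60, 100, size=n_submissions)

# Hypothetical noise levels: one careful instructor vs. several noisier peers.
instructor_grade = true_grade + rng.normal(0, 3.0, size=n_submissions)
peer_grades = true_grade[:, None] + rng.normal(0, 6.0, size=(n_submissions, n_peers))
peer_mean = peer_grades.mean(axis=1)

mae = lambda est: np.abs(est - true_grade).mean()
print("instructor mean absolute error:", round(mae(instructor_grade), 2))
print("peer-average mean absolute error:", round(mae(peer_mean), 2))
```

With four peers whose noise is twice the instructor's, the averaged peer grades end up roughly as accurate as the single instructor's, which is exactly the kind of borderline case in which a principled decision rule is needed.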
Lastly, we propose measurement integrity, a property related to ex post reward fairness, as a novel desideratum for many applications of mechanisms that elicit information without verification. We perform computational experiments in the setting of peer assessment to empirically evaluate mechanisms according to both measurement integrity and robustness against strategic reporting. We find an apparent trade-off between these properties; the best-performing mechanisms in terms of measurement integrity are highly susceptible to strategic reporting. But we also find that supplementing mechanisms with realistic parametric statistical models results in mechanisms that strike the best balance between the two properties.
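As a hedged sketch of how such a computational experiment might be set up (not the mechanisms or statistical models studied in the dissertation), one can simulate graders of varying latent quality, reward each grader with a simple peer-agreement score, and ask how well rewards rank graders by their true quality; the scoring rule, distributions, and parameters here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_graders, n_items = 50, 30
true_scores = rng.uniform(0, 10, size=n_items)

# Each grader has a latent "quality": lower noise means more accurate reports.
noise_sd = rng.uniform(0.5, 3.0, size=n_graders)
reports = true_scores[None, :] + rng.normal(0, noise_sd[:, None], size=(n_graders, n_items))

# Toy peer-agreement reward: negative mean squared distance to the other graders' reports.
totals = reports.sum(axis=0)
peer_means = (totals[None, :] - reports) / (n_graders - 1)
rewards = -((reports - peer_means) ** 2).mean(axis=1)

# Measurement-integrity proxy: do rewards rank graders by their true quality?
rank = lambda x: np.argsort(np.argsort(x))
rho = np.corrcoef(rank(rewards), rank(-noise_sd))[0, 1]
print("rank correlation between rewards and true grader quality:", round(rho, 2))
```

A complementary robustness check in the same spirit would replace some graders' reports with an uninformative strategy (for example, always reporting the same value) and examine whether their rewards rise or fall.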