Theory Seminar

Efficient Massively Parallel Methods for Dynamic Programming

Timothy Lewis, University of Michigan

I will present a paper by Sungjin Im, Benjamin Moseley, and Xiaorui Sun, which appeared at STOC 2017.

Abstract:

Modern science and engineering are driven by massive data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given these promising results, the area has turned its focus to discovering widely applicable algorithmic techniques for solving problems efficiently.

In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing them. We showcase our framework on several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine, where n is the input size and m is the number of machines available.
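For context, the sketch below shows the classical sequential dynamic program for one of the applications mentioned in the abstract, Weighted Interval Selection. It is not the distributed algorithm from the paper, only the kind of sequential recurrence the framework aims to simulate in O(1) rounds; the function name and the (start, finish, weight) input representation are illustrative choices.

```python
from bisect import bisect_right

def weighted_interval_selection(intervals):
    """Classical sequential DP for Weighted Interval Selection.

    intervals: list of (start, finish, weight) tuples.
    Returns the maximum total weight of a set of pairwise
    non-overlapping intervals (touching endpoints are allowed).
    """
    # Sort by finish time so dp[i] depends only on earlier entries.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    finishes = [f for _, f, _ in intervals]
    n = len(intervals)

    # dp[i] = best total weight using only the first i intervals.
    dp = [0] * (n + 1)
    for i, (s, f, w) in enumerate(intervals, start=1):
        # p = number of earlier intervals finishing no later than s,
        # i.e. the latest interval compatible with interval i.
        p = bisect_right(finishes, s, 0, i - 1)
        dp[i] = max(dp[i - 1],      # skip interval i
                    dp[p] + w)      # take interval i
    return dp[n]

if __name__ == "__main__":
    ivs = [(0, 3, 5), (1, 4, 1), (3, 6, 8), (5, 9, 4)]
    print(weighted_interval_selection(ivs))  # 13, from (0,3,5) and (3,6,8)
```

The sequential dependence of dp[i] on earlier entries is exactly what makes such recurrences hard to distribute directly; the paper's properties of monotonicity and decomposability are what allow this chain to be broken into pieces that fit in Õ(n/m) memory per machine.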

Sponsored by

CSE