Theory Seminar

Generalization and Equilibrium in Generative Adversarial Nets (GANs)

Bo Li, Post Doc, University of Michigan

This paper by Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang makes progress on several open theoretical issues related to Generative Adversarial Networks. The authors provide a definition of what it means for GAN training to generalize, and show that generalization is not guaranteed under popular distances between distributions such as the Jensen-Shannon divergence or the Wasserstein distance. They introduce a new metric, the neural net distance, under which generalization does occur. They also show that an approximate pure equilibrium exists in the two-player game for a natural training objective (Wasserstein); establishing such a result had been an open problem for any training objective.
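
For concreteness, the neural net distance is an integral probability metric taken over the class of discriminator networks. In its simplified, Wasserstein-style form (with \mathcal{F} denoting the class of functions computable by the discriminator nets), it can be written as:

\[
  d_{\mathcal{F}}(\mu, \nu)
    = \sup_{D \in \mathcal{F}}
      \left| \mathbb{E}_{x \sim \mu}[D(x)] - \mathbb{E}_{x \sim \nu}[D(x)] \right|
\]

Because \mathcal{F} has bounded capacity, this supremum concentrates over a polynomial-size sample, which is what makes generalization possible where the Jensen-Shannon and Wasserstein distances fail.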
Finally, these theoretical ideas lead the authors to propose a new training protocol, MIX+GAN, which can be combined with any existing GAN training method. They present experiments showing that it stabilizes and improves some existing methods.
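
Below is a minimal sketch of the kind of mixture update MIX+GAN performs, written in PyTorch on toy 1-D data. The architectures, mixture-weight parameterization, learning rates, and entropy coefficient here are illustrative assumptions, not the paper's actual implementation: a small mixture of generators and discriminators plays a weighted version of the standard GAN game, with an entropy regularizer keeping the mixture weights from collapsing onto a single net.

```python
# Hypothetical MIX+GAN-style sketch: k generators vs. k discriminators,
# with mixture weights given by softmaxes over learned logits.
import torch
import torch.nn as nn

k, z_dim, batch = 3, 8, 64

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 16), nn.ReLU(), nn.Linear(16, out))

gens = nn.ModuleList([mlp(z_dim, 1) for _ in range(k)])
discs = nn.ModuleList([mlp(1, 1) for _ in range(k)])
g_logits = torch.zeros(k, requires_grad=True)  # generator mixture weights
d_logits = torch.zeros(k, requires_grad=True)  # discriminator mixture weights

opt_g = torch.optim.Adam(list(gens.parameters()) + [g_logits], lr=1e-3)
opt_d = torch.optim.Adam(list(discs.parameters()) + [d_logits], lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

def entropy(w):
    return -(w * torch.log(w + 1e-8)).sum()

for step in range(500):
    real = 2.0 + 0.5 * torch.randn(batch, 1)  # toy "real" distribution

    # Discriminator side: minimize the mixture-weighted sum of standard
    # GAN losses over all generator/discriminator pairs.
    wd, wg = torch.softmax(d_logits, 0), torch.softmax(g_logits, 0).detach()
    loss_d = sum(
        wd[j] * wg[i] * (bce(D(real), ones) +
                         bce(D(G(torch.randn(batch, z_dim)).detach()), zeros))
        for j, D in enumerate(discs) for i, G in enumerate(gens))
    loss_d = loss_d - 1e-2 * entropy(wd)  # discourage weight collapse
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator side: the same weighted game from the generators' view.
    wd, wg = torch.softmax(d_logits, 0).detach(), torch.softmax(g_logits, 0)
    loss_g = sum(
        wg[i] * wd[j] * bce(D(G(torch.randn(batch, z_dim))), ones)
        for i, G in enumerate(gens) for j, D in enumerate(discs))
    loss_g = loss_g - 1e-2 * entropy(wg)  # discourage weight collapse
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The weighted sum over all pairs is what makes the game a game between mixtures rather than single nets; the existence of good small mixtures is the content of the approximate-equilibrium result above.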

Requisite background knowledge on GANs will be covered.

Sponsored by: CSE