Friday, December 9, 2016

A short summary of the paper "Generative adversarial nets"

This is a short summary of [1] which I wrote for a lecture in the Deep Learning course I am taking this semester.

This paper [1] proposes a method for estimating generative models via an adversarial process. The authors train two networks simultaneously: a generative model, $G$, and a discriminative model, $D$. The generative model learns the data distribution, while the discriminative model estimates the probability that a sample came from the training data rather than from $G$. The two networks are trained through a two-player minimax game, made precise below. The discriminative network, $D$, is trained to maximise the probability of correctly classifying samples from both the training data and the data generated by $G$. Simultaneously, $G$ is trained to maximise the probability of $D$ making a mistake, i.e., so that the data generated by $G$ becomes indistinguishable from the training data. The authors train the two networks iteratively, alternating between $k$ steps of optimising $D$ and one step of optimising $G$.
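Concretely, the game in [1] is played over the value function

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],$$

where $z$ is a noise vector drawn from a prior $p_z$ and $G(z)$ maps it into the data space. $D$ ascends this objective while $G$ descends it.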

The authors also present theoretical guarantees on the convergence of the algorithm to the optimal value (in the sense of the global minimum of their objective function). However, the guarantees do not strictly apply to the setting used in the paper, because the proofs assume infinite model capacity and updates made directly in the space of probability density functions, whereas in practice one optimises a finite set of neural network parameters. The paper argues that since deep neural networks perform very well in several domains, they are nevertheless reasonable models to use here.
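To make the nature of the guarantee concrete: for a fixed $G$, the paper shows that the optimal discriminator is

$$D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)},$$

where $p_g$ is the distribution induced by $G$. Substituting $D^*_G$ back into the objective shows that the global minimum is attained exactly when $p_g = p_{\text{data}}$, at which point the value of the game is $-\log 4$.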

In this paper, the authors use $k = 1$ to minimise the training cost. I think it would be interesting to try higher values of $k$, since this would bring the framework closer to one of the conditions of their guarantees, namely that $D$ is allowed to reach its optimum for the current $G$. The authors don't mention whether they tried this. It could lead to better convergence, even though training would be more expensive; a sketch of the alternating loop, with $k$ exposed as a parameter, follows.
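Here is a minimal sketch of the alternating training procedure, written in PyTorch (which postdates the paper). The network architectures, the data source, and all hyperparameters are illustrative assumptions on my part, not the paper's experimental setup.

```python
# Minimal sketch of the alternating GAN training loop from [1].
# Architectures, data, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim, k = 100, 784, 1  # k = number of D steps per G step

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.SGD(G.parameters(), lr=1e-3)
opt_D = torch.optim.SGD(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def sample_data(batch_size):
    # Placeholder for minibatches drawn from the true data distribution.
    return torch.randn(batch_size, data_dim)

batch_size = 64
for step in range(10000):
    # k steps of updating D: maximise log D(x) + log(1 - D(G(z))).
    for _ in range(k):
        x = sample_data(batch_size)
        z = torch.randn(batch_size, latent_dim)
        # detach() so this step does not propagate gradients into G.
        loss_D = (bce(D(x), torch.ones(batch_size, 1)) +
                  bce(D(G(z).detach()), torch.zeros(batch_size, 1)))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # One step of updating G. As suggested in the paper, G maximises
    # log D(G(z)) instead of minimising log(1 - D(G(z))), which gives
    # stronger gradients early in training.
    z = torch.randn(batch_size, latent_dim)
    loss_G = bce(D(G(z)), torch.ones(batch_size, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

Raising `k` makes the inner loop run more discriminator updates per generator update, at a proportional increase in training cost.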

A disadvantage of this approach is that $D$ must be kept well synchronised with $G$; i.e., $G$ cannot be trained too much without updating $D$, or it may collapse many values of $z$ to the same output. Nevertheless, adversarial nets offer several advantages over traditional generative models: in particular, training requires neither Markov chains nor approximate inference. This paper is an important step towards unsupervised learning and has already inspired work in that direction [2].

References:

[1] Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. "Generative adversarial nets." In Advances in Neural Information Processing Systems, pp. 2672-2680. 2014.
[2] Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).
