Generative models lie at the heart of machine learning. Among them, Generative Adversarial Networks (GANs) are among the most popular implicit generative models. When conditioning information, e.g. prior knowledge, is available, the model can be modified to incorporate it; the resulting model is called a conditional generative adversarial network (cGAN).
In the first part of the talk, we will review (conditional) generative adversarial networks, their core components, and recent applications. In the second part, we will focus on the robustness of the cGAN generator. In the original framework, the generator's regression is unconstrained, which can lead to arbitrarily large errors in the output. We instead introduce a novel GAN model, called RoCGAN, which leverages structure in the target space of the model, and we study how RoCGAN remains robust even in the face of intense noise.