International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 9, Issue 4, April 2020, ISSN: 2278 – 1323, www.ijarcet.org

Abstract— The primary objective of this paper is to provide a comparative analysis of several generative adversarial networks (GANs). We present our study of SinGAN, the Conditional Generative Adversarial Network (CGAN), the Star Generative Adversarial Network (StarGAN), and the Cycle Generative Adversarial Network (CycleGAN). We also present the results generated by these GANs and compare them on metrics such as RMSE, UQI, MS-SSIM, and VIF, alongside images generated by the FCN-8s architecture. Our motivation is to provide a single-place study of the different GAN variants, which is currently lacking in the available literature. This work explains the different GAN architectures and their loss functions in detail.

Index Terms—Generative Adversarial Networks (GANs), SinGAN, CGAN, StarGAN, CycleGAN, FCN results.

I. INTRODUCTION

Generative Adversarial Networks (GANs) have been studied and applied to a tremendous range of applications; their versatility has made them popular. Earlier literature and research cover specific applications of GANs. In this paper we present a comparative examination of recent GAN variants. We have performed both a literature survey and a performance comparison of different GAN architectures on several parameters. Papers such as [1] provide a comparative study of different GANs but lack a technical comparison. Here we provide the details of SinGAN, CGAN, StarGAN, and CycleGAN, as well as a detailed mathematical comparison on several parameters; this is the authors' contribution, as the available literature and surveys lack such detailed comparisons.
We bridge this gap in this paper. The paper is organized as follows. Section II presents the fundamentals of the GANs and contains the detailed study performed by the authors: for each GAN, we give a brief introduction, its loss function, its architecture, its advantages, and the resultant images it generates. Section III summarizes the comparative analysis, providing mathematical metric comparisons of SinGAN, CGAN, StarGAN, and CycleGAN in tabular form, which is helpful for examining the performance of a particular GAN; this is our contribution to the research community. Section IV concludes the paper with a conclusive note on the comparative analysis of the GANs.

II. STUDY OF GENERATIVE ADVERSARIAL NETWORKS

In this section we provide our detailed study of the different GANs. For each GAN, we first give brief information and then the mathematical details: the loss function, the architecture, and the resultant image produced by that GAN. We used the pre-trained models of the GANs available online to produce and compare the resultant images.

A. SinGAN

1) A brief introduction of SinGAN: SinGAN [2] contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution of the image at a different scale. It is an unconditional generative model. SinGAN can be used for a variety of tasks such as image super-resolution, paint-to-image, harmonization, editing, and single-image animation.

2) Loss functions: The loss function of the $n$-th GAN in SinGAN is given by:

$$\min_{G_n} \max_{D_n} \; \mathcal{L}_{adv}(G_n, D_n) + \alpha \, \mathcal{L}_{rec}(G_n), \qquad (1)$$

The two terms in the loss function signify two different types of losses:
1. Adversarial loss: The basic aim of this loss is to penalize the model for the distance between the distribution of patches in the real image $x_n$ and the distribution of patches in the generated samples $\tilde{x}_n$. The WGAN-GP [3] loss is used, since the authors of the original paper found it to increase training stability.

2. Reconstruction loss: This term ensures that, whenever the input contains no noise, the generator is able to reconstruct the original image. The authors of the original paper specifically chose $\{z_N^{rec}, z_{N-1}^{rec}, \ldots, z_0^{rec}\} = \{z^*, 0, \ldots, 0\}$, where $z^*$ is some fixed noise map (drawn once and kept fixed during training). The image generated at the $n$-th scale ($n < N$) when these noise maps are used is denoted $\tilde{x}_n^{rec}$:

$$\mathcal{L}_{rec} = \left\| G_n\!\left(0, \, (\tilde{x}_{n+1}^{rec}) \uparrow^r \right) - x_n \right\|^2, \qquad (2)$$

Generative Adversarial Networks: A Comparative Analysis

Tilak Nanavati, Hastin Modi, Drishti Patel, Vedant Parikh, Jahnavi Gupta
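To make the two loss terms of Eqs. (1)–(2) concrete, the following is a minimal NumPy sketch, not the SinGAN implementation: the toy generator `G_n`, the array shapes, and the pre-computed critic outputs and gradient norms passed to the WGAN-GP term are all hypothetical placeholders.

```python
import numpy as np

def reconstruction_loss(G_n, x_rec_up, x_n):
    """Eq. (2): feed the generator zero noise plus the upsampled
    reconstruction from the coarser scale, and take the squared L2
    distance to the real image x_n at this scale."""
    x_hat = G_n(np.zeros_like(x_rec_up), x_rec_up)
    return float(np.sum((x_hat - x_n) ** 2))

def wgan_gp_critic_loss(d_real, d_fake, grad_norms, lam=10.0):
    """WGAN-GP critic objective (to be minimized):
    E[D(fake)] - E[D(real)] + lam * E[(||grad|| - 1)^2],
    where grad_norms are gradient norms at interpolated points
    (assumed computed elsewhere by an autodiff framework)."""
    return float(np.mean(d_fake) - np.mean(d_real)
                 + lam * np.mean((grad_norms - 1.0) ** 2))

# Toy demonstration with a placeholder additive "generator".
rng = np.random.default_rng(0)
x_n = rng.standard_normal((3, 8, 8))        # hypothetical 3-channel 8x8 image
G_n = lambda z, x_prev: x_prev + z          # placeholder, not a real network
x_rec_up = x_n.copy()                       # pretend the coarser scale matched exactly
print(reconstruction_loss(G_n, x_rec_up, x_n))  # → 0.0 for a perfect reconstruction
```

With zero noise and a coarser-scale reconstruction that already matches $x_n$, the reconstruction term vanishes, which is exactly the behavior Eq. (2) rewards; the total objective of Eq. (1) would then be the critic term plus $\alpha$ times this value.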