Determining overfitting and underfitting in generative adversarial networks using Fréchet distance
Date
2021
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Türkiye Klinikleri
Access Rights
Attribution-NonCommercial-NoDerivs 3.0 United States
info:eu-repo/semantics/openAccess
Abstract
Generative adversarial networks (GANs) can be used in a wide range of applications where drawing samples from a data probability distribution without explicitly representing it is essential. Unlike deep convolutional neural networks (CNNs) trained to map an input to one of multiple outputs, monitoring overfitting and underfitting in GANs is not trivial, since they do not classify but generate data. While training-set and validation-set accuracy give a direct sense of overfitting and underfitting for CNNs during training, evaluating GANs mainly depends on visual inspection of the generated samples and on the generator/discriminator costs. Unfortunately, visual inspection is far from objective, and the generator/discriminator costs are very nonintuitive. In this paper, a method is proposed for quantitatively determining overfitting and underfitting in GANs during training by calculating the approximate derivative of the Fréchet distance between the generated data distribution and the real data distribution, either unconditionally or conditioned on a specific class. Both distributions can be obtained from the distribution of the embeddings in the discriminator network of the GAN. The method is independent of the architecture and cost function of the GAN, and empirical results on MNIST and CIFAR-10 support its effectiveness.
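The quantities the abstract describes — the Fréchet distance between two Gaussians fitted to real and generated embeddings, and a finite-difference approximation of its derivative over training — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`frechet_distance`, `gaussian_stats`, `approx_derivative`) and the window size are assumptions, and the embeddings would in practice come from the GAN's discriminator.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Squared Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

def gaussian_stats(embeddings):
    """Mean and covariance of an (n_samples, dim) array of embeddings."""
    return embeddings.mean(axis=0), np.cov(embeddings, rowvar=False)

def approx_derivative(fd_history, window=3):
    """Finite-difference slope of the Fréchet-distance curve over the last
    `window` epochs; a persistently positive slope after an initial decrease
    is the kind of signal the paper associates with overfitting."""
    recent = fd_history[-window:]
    return (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
```

In use, one would compute `gaussian_stats` for real and generated embeddings at each epoch (optionally restricted to one class for the conditional case), append `frechet_distance(...)` to a history list, and monitor `approx_derivative` of that history.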
Description
Keywords
Fréchet Inception Distance, Generative Adversarial Networks, Overfitting, Underfitting
Source
Turkish Journal of Electrical Engineering and Computer Sciences
WoS Q Value
Q4
Scopus Q Value
Q2
Volume
29
Issue
3