Content loss and conditional space relationship in conditional generative adversarial networks

dc.authorid0000-0002-7534-6247
dc.contributor.authorEken, Enes
dc.date.accessioned2023-01-19T07:33:14Z
dc.date.available2023-01-19T07:33:14Z
dc.date.issued2022
dc.departmentMühendislik Fakültesi
dc.description.abstractIn the machine learning community, generative models, and especially generative adversarial networks (GANs), remain an attractive yet challenging research topic. Right after the invention of the GAN, many GAN models were proposed with the same goal: creating better images. The first and foremost property a GAN model should have is the ability to create realistic images that cannot be distinguished from genuine ones. A large portion of the GAN models proposed to this end share a common approach, which can be defined as factoring the image generation process into multiple stages, decomposing the difficult task into several more manageable subtasks. This can be realized by using sequential conditional/unconditional generators. Although the images generated by sequential generators experimentally demonstrate the effectiveness of this approach, visual inspection of the generated images is far from objective, and the effect has not yet been shown quantitatively. In this paper, we quantitatively show the effectiveness of shrinking the conditional space by using sequential generators instead of a single but large generator. In the light of the content loss, we demonstrate that in sequential designs each generator helps to shrink the conditional space and therefore reduces the loss and the uncertainty in the generated images. To validate this approach quantitatively, we tried different combinations of connecting generators sequentially and/or increasing the capacity of the generators, and of using a single or multiple discriminators, under four different scenarios applied to image-to-image translation tasks. Scenario-1 uses the conventional pix2pix GAN model, which serves as the baseline for the remaining scenarios. In Scenario-2, we use two generators connected sequentially, each identical to the one used in Scenario-1. Another possibility is simply doubling the size of a single generator, which is evaluated in Scenario-3. In the last scenario, we use two different discriminators to train the two sequentially connected generators. Our quantitative results show that simply increasing the capacity of a single generator, instead of using sequential generators, does little to reduce the content loss, which is used in addition to the adversarial loss, and hence does not create better images. (A minimal illustrative sketch of this combined objective is given after the metadata fields below.)
dc.identifier.doi10.55730/1300-0632.3902
dc.identifier.endpage1757en_US
dc.identifier.issn1300-0632
dc.identifier.issn1303-6203
dc.identifier.issue5en_US
dc.identifier.startpage1741en_US
dc.identifier.urihttps://dx.doi.org/10.55730/1300-0632.3902
dc.identifier.urihttps://hdl.handle.net/20.500.12451/9976
dc.identifier.volume30en_US
dc.identifier.wosWOS:000904725600005
dc.identifier.wosqualityQ4
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakTR-Dizin
dc.language.isoen
dc.publisherTÜBİTAK (Scientific and Technological Research Council of Turkey)
dc.relation.ispartofTurkish Journal of Electrical Engineering and Computer Science
dc.relation.publicationcategoryArticle - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rightsinfo:eu-repo/semantics/openAccess
dc.subjectGenerative Adversarial Networks
dc.subjectConditional Space
dc.subjectContent Loss
dc.subjectSequential Generators
dc.subjectImage-to-image Translation
dc.titleContent loss and conditional space relationship in conditional generative adversarial networks
dc.typeArticle
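
Illustrative sketch (not part of the original record, and not the authors' code): the following minimal PyTorch example shows the combined objective described in the abstract, an adversarial loss plus a weighted L1 content loss as in pix2pix (where the L1 weight is 100), together with the Scenario-2 idea of chaining two generators so that the second refines the output of the first. The TinyGenerator and TinyDiscriminator classes, image sizes, and variable names are assumptions chosen only to keep the sketch small and runnable.

# Minimal sketch (hypothetical, not the authors' code): pix2pix-style combined
# objective of adversarial loss plus weighted L1 content loss, with two small
# placeholder generators chained as in Scenario-2.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    # Placeholder for a pix2pix-style generator; assumes 3-channel images.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    # Placeholder for a conditional (PatchGAN-style) discriminator.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, cond, img):
        # Condition the discriminator by concatenating input and output images.
        return self.net(torch.cat([cond, img], dim=1))

adv_loss = nn.BCEWithLogitsLoss()  # adversarial term
content_loss = nn.L1Loss()         # content (L1) term
lam = 100.0                        # L1 weight; pix2pix uses lambda = 100

G1, G2 = TinyGenerator(), TinyGenerator()  # Scenario-2: two sequential generators
D = TinyDiscriminator()

cond = torch.randn(1, 3, 64, 64)    # conditioning image (e.g. an edge map)
target = torch.randn(1, 3, 64, 64)  # ground-truth target image

# Sequential generation: G2 refines G1's output, shrinking the conditional
# space the second generator has to cover.
fake = G2(G1(cond))

# Generator objective: fool the discriminator and stay close to the target in L1.
pred_fake = D(cond, fake)
g_loss = adv_loss(pred_fake, torch.ones_like(pred_fake)) \
         + lam * content_loss(fake, target)
g_loss.backward()

Under the same assumptions, Scenario-3 would replace the chained call G2(G1(cond)) with a single generator of doubled capacity, and Scenario-4 would train the two chained generators against two separate discriminators of this form.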

Files

Original bundle
Listing 1 - 1 of 1
Name: eken-enes-2022.pdf
Size: 1.77 MB
Format: Adobe Portable Document Format
Description: Full Text
License bundle
Listing 1 - 1 of 1
Name: license.txt
Size: 1.44 KB
Description: Item-specific license agreed upon to submission