Performance analysis of optimization algorithms on stacked autoencoder


Date

2019

Journal Title

Journal ISSN

Volume Title

Publisher

Institute of Electrical and Electronics Engineers Inc.

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

The stacked autoencoder (SAE), one of the deep learning models, has been widely applied to one-dimensional data sets in recent years. In this study, a comparative performance analysis was carried out on the SAE architecture using the five most commonly used optimization techniques and two well-known activation functions. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RmsProp), Adaptive Moment Estimation (Adam), Adaptive Delta (Adadelta), and Nesterov-accelerated Adaptive Moment Estimation (Nadam) were used as the optimization techniques, and Softmax and Sigmoid as the activation functions. Two different data sets from the public UCI database were used. To verify the performance of the SAE model, experimental studies were performed by combining each data set with the optimization and activation techniques separately. As a result, success rates of 88.89% and 85.19% were achieved on the Cryotherapy and Immunotherapy data sets, respectively, using the Softmax activation function with the SGD optimization method on a three-layer SAE. After the training phase, the adaptive optimization techniques Adam, Adadelta, Nadam, and RmsProp were observed to have a weaker learning process than the stochastic method SGD.
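The best-performing configuration described above (a sigmoid-activated SAE pretrained with plain SGD, topped by a softmax classifier) can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation: the layer sizes, learning rate, epoch counts, and random toy data are all assumptions chosen for brevity, and only one autoencoder layer is pretrained rather than three.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy one-dimensional data: 90 samples, 6 features, binary labels
# (both UCI data sets in the paper are small binary-classification sets).
X = rng.random((90, 6))
y = rng.integers(0, 2, 90)

# --- Greedy pretraining of one autoencoder layer with plain SGD ---
n_in, n_hid, lr = 6, 4, 0.5
W_enc = rng.normal(0, 0.1, (n_in, n_hid))
W_dec = rng.normal(0, 0.1, (n_hid, n_in))

for epoch in range(200):
    h = sigmoid(X @ W_enc)           # encode
    X_hat = sigmoid(h @ W_dec)       # decode (reconstruction)
    err = X_hat - X                  # reconstruction error
    d_out = err * X_hat * (1 - X_hat)
    # Full-batch gradient-descent step on the squared reconstruction error.
    g_dec = h.T @ d_out / len(X)
    g_enc = X.T @ ((d_out @ W_dec.T) * h * (1 - h)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# --- Softmax output layer trained on the learned codes ---
H = sigmoid(X @ W_enc)               # fixed pretrained features
W_out = rng.normal(0, 0.1, (n_hid, 2))
Y = np.eye(2)[y]                     # one-hot labels

for epoch in range(200):
    P = softmax(H @ W_out)
    W_out -= lr * H.T @ (P - Y) / len(H)

acc = (softmax(H @ W_out).argmax(axis=1) == y).mean()
print(f"toy training accuracy: {acc:.2f}")
```

Swapping the SGD update lines for an adaptive rule (Adam, Adadelta, Nadam, or RmsProp) and the sigmoid/softmax choices is the axis of comparison the study performs.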

Description

Adem, Kemal (Aksaray, Author)

Keywords

Deep Learning, Optimization, Stacked Autoencoder

Source

3rd International Symposium on Multidisciplinary Studies and Innovative Technologies, ISMSIT 2019 - Proceedings

WoS Q Value

Scopus Q Value

N/A

Volume

-

Issue

-

Citation