Yazar "Eken, Enes" seçeneğine göre listele
Showing items 1 - 5 of 5
Item: A novel breath molecule sensing system based on deep neural network employing multiple-line direct absorption spectroscopy
(Elsevier Ltd, 2023) Bayraklı, İsmail; Eken, Enes
A novel ppb-level biomedical sensor is developed to analyze breath samples for continuous monitoring of diseases. The setup is very compact, consisting of a distributed feedback quantum cascade laser (DFB-QCL) and a single-pass absorption cell. To make the sensor more compact and functional, a deep neural network (DNN) model is utilized for predicting gas concentrations. To evaluate the performance of the sensor, N2O is used as the target molecule. A minimum detection limit of 500 ppb is achieved in a single-pass absorption cell configuration. The model is trained on multiple N2O/CO2 absorption lines (instead of an isolated line) with concentrations between 0 and 500 ppm generated using the HITRAN database. The trained model is tested on measured spectra and compared to a non-linear least squares fitting algorithm. The coefficients of determination (R2) were found to be 0.997 and 0.981 for the predictions of N2O concentrations in the N2O/N2 gas mixture and in breath air, respectively. Accuracies of 2.5% and 2.9% were achieved by the sensor for the two cases. (A minimal code sketch of this kind of DNN concentration regressor is given below.)

Item: Compact laser spectroscopy-based sensor using a transformer-based model for analysis of multiple molecules
(Optica Publishing Group (formerly OSA), 2024) Bayraklı, İsmail; Eken, Enes
Interest in the development of compact sensors that consume little energy is increasing day by day. This study reports, to our knowledge, a novel sensor system that can analyze multiple molecules simultaneously with high sensitivity under ambient conditions (900 mbar and 300 K). To quantify molecules, a distributed feedback quantum cascade laser (DFB QCL) was combined with a compact multi-pass absorption (mpass) cell without the need for vacuum components, a lock-in amplifier, or any electric filters. By using a transformer-encoder-based model, the noise level was reduced and the pressure-broadened absorption lines of the molecules were separated, narrowed (resolved), and displayed one by one. In this way, molecules can be quantified using pressure-broadened, overlapping absorption lines under ambient conditions. To test our sensor system, CO2 and N2O molecules were used. Depending on the concentration values, the SNR can be improved by up to 50 times; better results are obtained at higher concentrations. Detection limits for N2O and CO2 were determined to be 30 ppb and 180 ppm, respectively. The analysis time of molecules is around 80 ms. (A minimal sketch of a transformer-encoder spectral denoiser is given below.)

Item: Content loss and conditional space relationship in conditional generative adversarial networks
(TÜBİTAK (Scientific and Technological Research Council of Turkey), 2022) Eken, Enes
In the machine learning community, generative models, especially generative adversarial networks (GANs), continue to be an attractive yet challenging research topic. Since the invention of the GAN, many GAN models have been proposed by researchers with the same goal: creating better images. The first and foremost requirement for a GAN model is that it creates realistic images that cannot be distinguished from genuine ones. A large portion of the GAN models proposed to this end share a common approach, which can be described as factoring the image generation process into multiple stages, decomposing the difficult task into several more manageable subtasks. This can be realized by using sequential conditional/unconditional generators. Although images generated by sequential generators experimentally demonstrate the effectiveness of this approach, visual inspection of the generated images is far from objective, and the effectiveness has not yet been shown quantitatively. In this paper, we quantitatively show the effectiveness of shrinking the conditional space by using sequential generators instead of a single, larger generator. In light of the content loss, we demonstrate that in sequential designs each generator helps to shrink the conditional space and therefore reduces the loss and the uncertainty in the generated images. To validate this approach quantitatively, we tried different combinations of connecting generators sequentially and/or increasing the capacity of the generators, using single or multiple discriminators, under four different scenarios applied to image-to-image translation tasks. Scenario-1 uses the conventional pix2pix GAN model, which serves as the baseline for the rest of the scenarios. In Scenario-2, we utilized two generators connected sequentially, each identical to the one used in Scenario-1. Another possibility is simply doubling the size of a single generator, which is evaluated in Scenario-3. In the last scenario, we used two different discriminators to train two sequentially connected generators. Our quantitative results support the conclusion that simply increasing the capacity of one generator, instead of using sequential generators, does little to reduce the content loss (used in addition to the adversarial loss) and hence does not create better images.
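The DNN-based concentration prediction described in the first entry above can be illustrated with a minimal sketch. This is not the authors' implementation: the Lorentzian line shapes, line positions, widths, noise level, and network size below are hypothetical stand-ins for HITRAN-generated training spectra.

```python
# Minimal sketch (not the authors' implementation): a small fully connected
# network regresses N2O concentration from a multi-line absorbance spectrum.
# Line positions, widths, and the noise level are hypothetical stand-ins for
# HITRAN-generated training data.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
axis = np.linspace(0.0, 1.0, 256)            # normalized spectral axis
centers = [0.2, 0.45, 0.7, 0.85]             # assumed absorption line positions
widths = [0.010, 0.015, 0.012, 0.020]        # assumed half-widths

def simulate_spectrum(conc_ppm: float) -> np.ndarray:
    """Absorbance of several Lorentzian lines scaled by concentration, plus noise."""
    a = np.zeros_like(axis)
    for c0, w in zip(centers, widths):
        a += (conc_ppm / 500.0) * w**2 / ((axis - c0) ** 2 + w**2)
    return a + rng.normal(0.0, 0.002, axis.size)

# Synthetic training set covering 0-500 ppm, mirroring the range in the abstract.
concs = rng.uniform(0.0, 500.0, 2000)
X = torch.tensor(np.stack([simulate_spectrum(c) for c in concs]), dtype=torch.float32)
y = torch.tensor(concs / 500.0, dtype=torch.float32).unsqueeze(1)   # normalized target

model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                          # full-batch training, illustrative only
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    optimizer.step()

# Predict the concentration behind a new "measured" spectrum.
test = torch.tensor(simulate_spectrum(120.0), dtype=torch.float32).unsqueeze(0)
print(f"predicted concentration: {model(test).item() * 500.0:.1f} ppm")
```

In the paper, predictions of this kind are benchmarked against a non-linear least squares fit of the same absorption lines.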
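The second entry describes a transformer-encoder model that reduces noise and resolves pressure-broadened absorption lines. The following is a minimal sketch of one plausible setup, assuming the spectrum is split into patches that serve as tokens and the model regresses a clean target spectrum; the architecture, patch size, and all hyperparameters are assumptions, not the published model, and positional encoding is omitted for brevity.

```python
# Minimal sketch, not the published model: a transformer encoder mapping a
# noisy, pressure-broadened spectrum to a cleaned target spectrum. Patch size,
# model width, and depth are assumptions; positional encoding is omitted.
import torch
import torch.nn as nn

class SpectrumDenoiser(nn.Module):
    def __init__(self, patch_len: int = 16, d_model: int = 64, n_layers: int = 4):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)      # spectral patch -> token
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, patch_len)       # token -> cleaned patch

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:
        # spectrum: (batch, n_points); split into non-overlapping patches as tokens
        b, n = spectrum.shape
        tokens = spectrum.view(b, n // self.patch_len, self.patch_len)
        encoded = self.encoder(self.embed(tokens))
        return self.head(encoded).reshape(b, n)

model = SpectrumDenoiser()
noisy = torch.randn(8, 256)            # stand-in for measured, broadened CO2/N2O spectra
target = torch.randn(8, 256)           # stand-in for simulated narrow-line spectra
loss = nn.functional.mse_loss(model(noisy), target)
loss.backward()                        # one illustrative training step
print(f"reconstruction loss: {loss.item():.3f}")
```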
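For the content-loss entry above, the following is a minimal sketch of two sequentially connected generators trained with an adversarial loss plus an L1 content loss, in the spirit of Scenario-2. The tiny network sizes, the single discriminator, and the weighting factor of 100 are illustrative assumptions, not the paper's exact pix2pix configuration.

```python
# Minimal sketch: two sequentially connected generators (Scenario-2 style),
# trained with adversarial loss + L1 content loss. Network sizes are toy-scale.
import torch
import torch.nn as nn

def make_generator() -> nn.Module:
    # Tiny stand-in for a pix2pix-style image-to-image generator.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

G1, G2 = make_generator(), make_generator()        # sequentially connected generators
D = nn.Sequential(                                  # PatchGAN-like discriminator stand-in
    nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

adv_loss, content_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(list(G1.parameters()) + list(G2.parameters()), lr=2e-4)

x = torch.rand(4, 3, 64, 64)       # input-domain images (e.g. edge maps)
y = torch.rand(4, 3, 64, 64)       # target-domain images

# Generator update: each stage shrinks the conditional space seen by the next stage.
coarse = G1(x)
refined = G2(coarse)
pred = D(torch.cat([x, refined], dim=1))
g_loss = adv_loss(pred, torch.ones_like(pred)) + 100.0 * content_loss(refined, y)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
print(f"generator loss: {g_loss.item():.3f}")
```

Scenario-3 (a single generator of doubled size) could be sketched in the same frame by replacing G1 and G2 with one deeper make_generator(), keeping the losses unchanged.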
Item: Determining overfitting and underfitting in generative adversarial networks using Fréchet distance
(Türkiye Klinikleri, 2021) Eken, Enes
Generative adversarial networks (GANs) can be used in a wide range of applications where drawing samples from a data probability distribution without explicitly representing it is essential. Unlike deep convolutional neural networks (CNNs) trained to map an input to one of multiple outputs, monitoring overfitting and underfitting in GANs is not trivial, since they do not classify but generate data. While training-set and validation-set accuracy give a direct sense of success in terms of overfitting and underfitting for CNNs during training, evaluating GANs mainly depends on visual inspection of the generated samples and on the generator/discriminator costs. Unfortunately, visual inspection is far from objective, and generator/discriminator costs are highly nonintuitive. In this paper, a method is proposed for quantitatively determining overfitting and underfitting in GANs during training by calculating the approximate derivative of the Fréchet distance between the generated data distribution and the real data distribution, either unconditionally or conditioned on a specific class. Both distributions can be obtained from the distribution of the embedding in the discriminator network of the GAN. The method is independent of the design architecture and the cost function of the GAN, and empirical results on MNIST and CIFAR-10 support the effectiveness of the proposed method.
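A minimal sketch of the monitoring idea in the entry above, assuming that embeddings are collected from the discriminator at regular checkpoints: the Fréchet distance between Gaussians fitted to the real and generated embedded distributions is tracked, and its finite difference (approximate derivative) decides whether training is still underfitting or has started to overfit. The window length, zero-slope threshold, and toy embeddings are assumptions.

```python
# Minimal sketch: track the Fréchet distance (FD) between real and generated
# embedded distributions across checkpoints and use its approximate derivative
# (finite difference) to flag underfitting vs. overfitting.
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2) -> float:
    """FD between two Gaussians: ||mu1-mu2||^2 + Tr(S1 + S2 - 2*(S1 S2)^(1/2))."""
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

def fit_gaussian(embeddings: np.ndarray):
    """Fit mean and covariance to an embedded distribution (hidden-layer activations)."""
    return embeddings.mean(axis=0), np.cov(embeddings, rowvar=False)

def fd_trend(fd_history, window: int = 3) -> str:
    """Approximate derivative of the FD curve over the last `window` checkpoints."""
    if len(fd_history) <= window:
        return "underfitting"          # too early to tell; FD still settling
    slope = (fd_history[-1] - fd_history[-1 - window]) / window
    return "underfitting" if slope < 0 else "overfitting"

# Toy usage with random embeddings standing in for discriminator activations.
rng = np.random.default_rng(0)
real = rng.normal(size=(512, 64))
fd_history = []
for step in range(6):
    fake = rng.normal(loc=0.5 / (step + 1), size=(512, 64))   # "improving" generator
    fd_history.append(frechet_distance(*fit_gaussian(real), *fit_gaussian(fake)))
    print(step, round(fd_history[-1], 3), fd_trend(fd_history))
```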
Item: Using subspaces of weight matrix for evaluating generative adversarial networks with Frechet distance
(Wiley, 2022) Eken, Enes
The Fréchet inception distance (FID) has gained a strong reputation as an evaluation metric for generative adversarial networks (GANs). However, it is subject to fluctuation: the same GAN model, trained at different times, can have different FID scores due to the randomness of the weight matrices in the networks, stochastic gradient descent, and the embedded distribution (activation outputs at a hidden layer). In calculating FIDs, the embedded distribution plays the key role, and where to obtain it is not a trivial question, since it also contributes to the fluctuation. In this article, I show that the embedded distribution can be obtained from three different subspaces of the weight matrix, namely the row space, the null space, and the column space, and I analyze the effect of each space on the Fréchet distances (FDs). Since the different spaces show different behaviors, choosing a subspace is not an insignificant decision. Instead of directly using the embedded distribution obtained from a hidden layer's activations to calculate the FD, I propose using the projection of the embedded distribution onto the null space of the weight matrix, among the three subspaces, to avoid the fluctuations. My simulation results on the MNIST, CIFAR10, and CelebA datasets show that, by projecting the embedded distributions onto the null space, possible parasitic effects coming from this randomness are eliminated, reducing the number of needed simulations by approximately 25x on MNIST, 21x on CIFAR10, and 12x on CelebA.
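A minimal sketch of the null-space projection described above, under the assumption that the embedded distribution consists of hidden-layer activations feeding a layer with weight matrix W: an orthonormal basis of the null space of W is taken from the SVD, both real and generated embeddings are projected onto it, and the Fréchet distance is computed on the projections. The matrix sizes and Gaussian toy data are illustrative.

```python
# Minimal sketch: project embedded distributions onto the null space of a
# layer's weight matrix W before computing the Fréchet distance.
import numpy as np
from scipy import linalg

def null_space_basis(W: np.ndarray, tol: float = 1e-10) -> np.ndarray:
    """Orthonormal basis of the null space of W (right-singular vectors with ~zero singular value)."""
    _, s, vh = np.linalg.svd(W)
    rank = int((s > tol * s.max()).sum())
    return vh[rank:].T                      # shape: (in_dim, in_dim - rank)

def frechet_distance(a: np.ndarray, b: np.ndarray) -> float:
    """FD between Gaussians fitted to two sets of embeddings."""
    mu1, mu2 = a.mean(axis=0), b.mean(axis=0)
    s1, s2 = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    d = mu1 - mu2
    return float(d @ d + np.trace(s1 + s2 - 2.0 * covmean))

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 128))              # weight matrix of a discriminator layer (assumed shape)
N = null_space_basis(W)                     # here: 128 -> 96-dimensional null space

real_emb = rng.normal(size=(1000, 128))     # stand-ins for hidden-layer activations
fake_emb = rng.normal(loc=0.1, size=(1000, 128))

# Compare FD on raw embeddings vs. embeddings projected onto the null space of W.
print("raw FD:       ", round(frechet_distance(real_emb, fake_emb), 3))
print("null-space FD:", round(frechet_distance(real_emb @ N, fake_emb @ N), 3))
```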