Yazar "Ertan, Pelin" seçeneğine göre listele

Now showing 1 - 3 of 3
  • A Comparative Assessment of Large Language Models in Pediatric Dialysis: Reliability, Quality and Readability
    (John Wiley and Sons Inc, 2025) Ensari, Esra; Akyol Önder, Esra Nagehan; Ertan, Pelin
    This study evaluated the reliability, quality, and readability of ChatGPT (OpenAI, San Francisco, CA), Gemini (Google, Mountain View, CA), and Copilot (Microsoft Corp., Washington, DC), which are among the most widely used large language models (LLMs) today, in answering frequently asked questions (FAQs) related to pediatric dialysis. Methods: A total of 45 FAQs were entered into each LLM. The Modified DISCERN (mDISCERN) scale assessed reliability; the Global Quality Score (GQS) evaluated quality; and readability was assessed using five metrics: the Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG), Gunning Fog Index (GFI), Flesch Reading Ease (FRE), and Flesch-Kincaid Grade Level (FKGL). Questions were posed to the chatbots twice, on January 25, 2025, and February 1, 2025. Results: All three chatbots displayed high reliability, achieving median mDISCERN scores of 5. Quality scores on the GQS were similarly high, with median scores of 5 across platforms; however, Gemini exhibited greater variability (range 1–5) compared to ChatGPT-4o and Copilot (ranges 3–5). Readability scores revealed that chatbot responses were written at an advanced level. Conclusion: This study found that the LLMs' responses to dialysis FAQs were reliable and of high quality, but difficult to read; improving readability through expert-reviewed content could increase their impact on public health.
  • ChatGPT-4o's performance on pediatric Vesicoureteral reflux
    (Elsevier Ltd, 2025) Akyol Önder, Esra Nagehan; Ensari, Esra; Ertan, Pelin
    Vesicoureteral reflux (VUR) is a common congenital or acquired urinary disorder in children. Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence-driven platform offering medical information. This research aims to assess the reliability and readability of ChatGPT-4o's answers regarding pediatric VUR for a general, non-medical audience. Materials and methods: Twenty of the most frequently asked English-language questions about VUR in children were used to evaluate ChatGPT-4o's responses. Two independent reviewers rated the reliability and quality using the Global Quality Scale (GQS) and a modified version of the DISCERN tool (mDISCERN). The readability of ChatGPT responses was assessed through the Flesch Reading Ease (FRE) Score, Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), and Simple Measure of Gobbledygook (SMOG). Results: Median mDISCERN and GQS scores were 4 (4–5) and 5 (3–5), respectively. ChatGPT's responses showed moderate (55%) or good (45%) reliability according to the mDISCERN score and high quality (95%) according to the GQS. The mean ± standard deviation scores for FRE, FKGL, SMOG, GFI, and CLI of the text were 26 ± 12, 15 ± 2.5, 16.3 ± 2, 18.8 ± 2.9, and 15.3 ± 2.2, respectively, indicating a high level of reading difficulty. Discussion: While ChatGPT-4o offers accurate and high-quality information about pediatric VUR, its readability poses challenges, as the content is difficult for a general audience to understand. Conclusion: ChatGPT provides high-quality, accessible information about VUR. However, improving readability should be a priority to make this information more user-friendly for a broader audience.
  • Response to commentary on: ChatGPT-4o's performance on pediatric vesicoureteral reflux
    (Elsevier Ltd, 2025) Akyol Önder, Esra Nagehan; Ensari, Esra; Ertan, Pelin
    The current study evaluated the reliability and readability of ChatGPT-4o's responses regarding pediatric vesicoureteral reflux [1]. The sources of information have been rapidly evolving, with artificial intelligence (AI) and chatbots, such as ChatGPT, emerging as significant contributors. The scope of AI usage has been expanding in the medical field. However, further research and validation by researchers and healthcare professionals are required before AI can be widely used as a reliable public source of information.
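
The studies listed above score chatbot answers with the same five standard readability formulas (FRE, FKGL, SMOG, GFI, CLI). As a minimal illustrative sketch only, the snippet below shows how such scores could be computed in Python with the open-source textstat package; the sample text and the choice of textstat are assumptions for illustration, not the tools the authors report using.

import textstat  # open-source readability library; assumed here for illustration

# Hypothetical example of a chatbot-style answer to a pediatric dialysis FAQ.
sample_response = (
    "Peritoneal dialysis uses the lining of the abdomen to filter waste "
    "products and excess fluid from the blood when the kidneys can no "
    "longer do this effectively."
)

# The five readability metrics reported in the studies listed above.
scores = {
    "FRE": textstat.flesch_reading_ease(sample_response),    # higher = easier to read
    "FKGL": textstat.flesch_kincaid_grade(sample_response),  # approximate US grade level
    "SMOG": textstat.smog_index(sample_response),
    "GFI": textstat.gunning_fog(sample_response),
    "CLI": textstat.coleman_liau_index(sample_response),
}

for name, value in scores.items():
    print(f"{name}: {value:.1f}")

Grade-level scores well above the roughly sixth-to-eighth-grade range commonly recommended for patient-facing materials correspond to what both abstracts describe as an "advanced" reading level.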

| Aksaray Üniversitesi | Library | Open Science Policy | Open Access Policy | Guide | OAI-PMH |

This site is protected under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Aksaray University Library and Documentation Department, Aksaray, TÜRKİYE
If you notice any errors in the content, please let us know.

Powered by İdeal DSpace

DSpace software copyright © 2002-2025 LYRASIS
