Evaluating vision transformer models for breast cancer detection in mammographic imaging

Date

2025

Publisher

Bitlis Eren Üniversitesi Rektörlüğü

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Breast cancer is a leading cause of mortality among women, and early detection is crucial for effective treatment. Mammographic analysis, particularly the identification and classification of breast masses, plays a central role in early diagnosis. Recent advances in deep learning, notably Vision Transformers (ViTs), have shown significant potential in image classification tasks across various domains, including medical imaging. This study evaluates the performance of three ViT configurations, base-16, small-16, and tiny-16, on a dataset of breast mammography images containing masses. We perform a comparative analysis of these models to determine their effectiveness in classifying mammographic images. By leveraging the self-attention mechanism of ViTs, our approach addresses the challenges posed by complex mammographic textures and low contrast in medical imaging. The experimental results provide insight into the strengths and limitations of each ViT configuration, supporting an informed choice of architecture for breast mass classification in mammography. This research underscores the potential of ViTs for enhancing diagnostic accuracy and serves as a benchmark for future exploration of transformer-based architectures in medical image classification.
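
The base-16, small-16, and tiny-16 configurations named in the abstract correspond to standard ViT backbones with a 16×16 patch size. As an illustration only (this is not the authors' code), the following minimal Python sketch shows how such a comparison could be set up with the timm library; the model identifiers are standard timm names, while the data loader, device, and two-class setup are hypothetical placeholders.

```python
# Minimal sketch: comparing ViT variants on a binary mammography-mass
# classification task. Dataset loading and training are assumed to exist
# elsewhere; only model construction and evaluation are shown.
import torch
import timm
from torch import nn

# Standard timm identifiers for the three ViT configurations.
VARIANTS = {
    "base-16": "vit_base_patch16_224",
    "small-16": "vit_small_patch16_224",
    "tiny-16": "vit_tiny_patch16_224",
}

def build_model(name: str, num_classes: int = 2) -> nn.Module:
    # ImageNet-pretrained backbone with a fresh classification head
    # sized for the mammography classes (assumed binary here).
    return timm.create_model(VARIANTS[name], pretrained=True, num_classes=num_classes)

@torch.no_grad()
def evaluate(model: nn.Module, loader, device: str = "cuda") -> float:
    # Plain accuracy over a labelled mammogram DataLoader (placeholder).
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        logits = model(images.to(device))
        correct += (logits.argmax(dim=1).cpu() == labels).sum().item()
        total += labels.numel()
    return correct / total
```

The three variants share the same patch size and depth but differ mainly in embedding width and parameter count, so a comparison of this kind trades model capacity against accuracy and cost on the mammography data.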

Keywords

Breast Mammography With Masses, Image Classification, Vision Transformers, Base-16, Small-16, Tiny-16

Source

Bitlis Eren Üniversitesi Fen Bilimleri Dergisi

Volume

14

Issue

1
