Breast Ultrasound Image Classification and Segmentation Using Convolutional Neural Networks and Vision Transformer


Julio Rodriguez

College:
The Dorothy and George Hennings College of Science, Mathematics, and Technology

Major:
Computer Information Systems

Faculty Research Advisor(s):
Kuan Huang

Abstract:
This study investigates the application of advanced machine learning techniques, specifically convolutional neural networks (CNNs) and Vision Transformers, to classify and segment breast ultrasound images. The primary objective is to enhance breast cancer detection by differentiating between benign and malignant tumors with higher precision. By employing well-established deep learning architectures such as VGG-16 and ResNet-50, this research not only benchmarks their performance in medical image analysis but also explores the integration of classification and segmentation tasks to improve diagnostic accuracy. Furthermore, the introduction of Vision Transformers offers a novel approach to handling image data, promising to refine model efficiency and effectiveness in medical imaging tasks. The study, conducted on the Google Colab Notebook platform, presents a comparative analysis of the models' performance, revealing VGG-16's superior accuracy. The most significant contribution, however, lies in the proposed multi-task framework based on VGG-16 and ResNet-50, which combines classification and segmentation and could set a new standard in AI-assisted medical imaging. This research underscores the pivotal role of AI in early cancer detection, pushing the boundaries of what is possible in medical diagnostics.
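
To illustrate the kind of multi-task framework the abstract describes, the sketch below shares a single VGG-16 feature extractor between a classification head (benign vs. malignant) and a segmentation head. This is a minimal illustration, not the authors' published code: the layer sizes, the two-class setup, and the joint loss are assumptions made for the example, using standard PyTorch/torchvision components.

```python
# Minimal sketch of a multi-task VGG-16 model: a shared encoder feeds both a
# tumor classification head and a tumor segmentation head. Hyperparameters and
# heads are illustrative assumptions, not the study's exact architecture.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class MultiTaskVGG16(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared convolutional backbone (VGG-16 feature layers).
        # For a 224x224 input this produces a (N, 512, 7, 7) feature map.
        self.encoder = vgg16(weights=None).features

        # Classification head: global pooling followed by a linear layer.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(512, num_classes),
        )

        # Segmentation head: project features, upsample back to input size,
        # and predict a single-channel tumor mask.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.classifier(feats), self.decoder(feats)


# Example joint objective: cross-entropy for classification plus binary
# cross-entropy for segmentation (dummy tensors stand in for ultrasound data).
model = MultiTaskVGG16()
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
masks = torch.randint(0, 2, (4, 1, 224, 224)).float()

logits, mask_logits = model(images)
loss = nn.CrossEntropyLoss()(logits, labels) + nn.BCEWithLogitsLoss()(mask_logits, masks)
loss.backward()
```

Sharing one encoder between both heads is what lets the segmentation supervision inform the classification features (and vice versa), which is the intuition behind combining the two tasks in a single framework.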

