Abstract
Skin cancer, a malignancy that originates in skin tissue, can damage surrounding tissue and, in severe cases, lead to disability or even death. Early and accurate diagnosis, coupled with appropriate treatment, is crucial to minimizing its harm. Diagnosis is challenging for physicians, however, because cancerous lesions and benign tumors often look alike, making the process time-consuming. This project develops an automated system that distinguishes skin cancer from benign tumors using Convolutional Neural Networks (CNNs): images of skin lesions uploaded for evaluation are analyzed by the network. In addition to accurate diagnostic support, the system delivers personalized recommendations.
Introduction
Overview of Skin Cancer
Skin cancer is one of the most common global health issues, with a rising incidence of both melanoma and non-melanoma types.
According to the WHO, 75% of cancer cases worldwide are skin-related, with particularly high incidence in countries such as the US, Canada, and Australia.
Major types: Melanoma, Basal Cell Carcinoma (BCC), and Squamous Cell Carcinoma (SCC).
UV radiation is the leading cause; others include genetics, aging, smoking, and HPV infections.
Early detection is essential, especially for melanoma, which accounts for most skin cancer deaths despite being less common.
Use of Computer Vision and Deep Learning
Recent advances in computer vision and Convolutional Neural Networks (CNNs) have enhanced disease identification.
CNNs excel in image classification tasks such as skin lesion analysis.
Deep learning models like ResNet-101 and Inception-v3 have shown strong results in classifying skin lesions.
Literature Insights
Multiple studies have successfully applied deep learning, particularly CNNs, to distinguish between malignant and benign lesions.
Generative Adversarial Networks (GANs) have been used to augment datasets with synthetic images; in one study this raised classification accuracy from 53% to 71%.
Hybrid models and architectures like Xception have achieved classification accuracy above 85%.
Segmentation and classification are essential for accurate diagnosis and treatment.
Proposed Methodology
The model classifies seven classes of skin lesion using CNNs, in particular the ResNet-101 architecture, enhanced by:
Transfer learning from ImageNet
Soft Attention mechanisms to focus on key lesion areas
YCbCr color space for effective skin detection in varied lighting
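One way to realize the soft-attention idea is a small module that scores every spatial location of a feature map, normalizes the scores with a softmax, and reweights the features so lesion regions dominate. The PyTorch sketch below is a simplified illustration of that mechanism, not the paper's exact attention block:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Simplified spatial soft-attention block (illustrative sketch).

    A 1x1 convolution scores each spatial location, the scores are
    normalized with a softmax over all locations, and the feature
    map is reweighted so salient regions contribute more.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        attn = self.score(x).view(b, 1, h * w)           # raw location scores
        attn = F.softmax(attn, dim=-1).view(b, 1, h, w)  # sums to 1 over space
        # Residual-style combination keeps the original features available.
        return x + x * attn

features = torch.randn(2, 64, 28, 28)  # e.g. an intermediate ResNet feature map
out = SoftAttention(64)(features)
print(out.shape)  # torch.Size([2, 64, 28, 28])
```

The residual form (`x + x * attn`) lets the network fall back on the unweighted features when attention is uninformative.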
The architecture uses:
Convolutional Layers for feature extraction
Pooling Layers to reduce dimensionality
Fully Connected Layers for classification
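The three layer roles above can be sketched as a minimal PyTorch model; the layer counts and channel sizes here are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Minimal CNN mirroring the roles listed above: convolutions for
# feature extraction, pooling for dimensionality reduction, and a
# fully connected layer for classification into 7 lesion classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # feature extraction
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 224 -> 112
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 112 -> 56
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 7),                  # classification head
)

logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 7])
```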
Dataset
Based on the ISIC 2018 (HAM10000) dataset with over 10,000 dermatoscopic images.
Hair Removal: DullRazor algorithm removes artifacts from dermoscopic images.
Data Augmentation: Enhances dataset size and diversity, boosting model generalizability.
Deep Learning Techniques
CNN: Used for feature extraction and classification.
Soft Attention Module: Highlights diagnostically important image areas.
YCbCr Skin Detection: More robust to lighting changes than RGB, ensuring precise lesion isolation.
Results and Performance
The model achieved over 90% accuracy in classifying skin lesions.
Highest F1-scores were for vascular lesions (1.00) and dermatofibroma (0.99).
Melanoma detection had a lower F1-score (0.76), suggesting room for improvement.
Confusion matrix and precision-recall metrics indicate high reliability and clinical potential.
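Per-class F1-scores and a confusion matrix of this kind are straightforward to compute with scikit-learn. The labels below are random placeholders standing in for the model's test-set predictions, used only to show the evaluation call:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

classes = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]  # HAM10000 codes
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(classes), size=200)
y_pred = y_true.copy()
flip = rng.random(200) < 0.1  # simulate ~10% misclassifications
y_pred[flip] = rng.integers(0, len(classes), size=flip.sum())

cm = confusion_matrix(y_true, y_pred, labels=range(len(classes)))
per_class_f1 = f1_score(y_true, y_pred, average=None,
                        labels=range(len(classes)))
for name, score in zip(classes, per_class_f1):
    print(f"{name}: {score:.2f}")
```

`average=None` returns one F1-score per class, which is what exposes the gap between easy classes (vascular lesions, dermatofibroma) and harder ones (melanoma).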
Conclusion
While this study makes notable advances, several limitations must be acknowledged. Data augmentation increases the diversity of training samples, but it can also bias the model toward the artifacts of particular augmentation techniques. Obtaining a well-balanced, diverse dataset remains difficult, which limits the model's accuracy on rare or atypical skin lesions. The imbalance in the number of images across classes further complicates classification, as the model struggles to generalize for underrepresented categories. Finally, limited access to high-performance GPUs and other computational resources constrains how efficiently deep learning models can be trained and fine-tuned.