Abstract
Skin disease detection is a critical task in medical image analysis because of its impact on early diagnosis and treatment. This study presents a deep learning-based framework for automated skin lesion segmentation in dermoscopic images. The proposed approach incorporates multiple segmentation architectures, including Fully Convolutional Networks (FCN), U-Net, and SegNet, to evaluate their effectiveness in accurately identifying lesion regions. The preprocessing stage involves image resizing, normalization, and data augmentation techniques such as random rotation and horizontal flipping to enhance model generalization. The models are trained and evaluated on standard benchmark datasets, and their performance is assessed using accuracy, Dice coefficient, Intersection over Union (IoU), precision, recall, and training loss. Experimental results demonstrate that the U-Net model outperforms FCN and SegNet, achieving superior segmentation accuracy and better generalization capability. The findings highlight the effectiveness of deep learning techniques in improving automated skin disease diagnosis and support the development of reliable computer-aided dermatological systems.
Introduction
The skin, the largest organ of the human body, serves as a vital protective barrier, but its health is affected by lifestyle and environmental factors such as sun exposure, pollution, smoking, and infections. Skin diseases are highly prevalent worldwide and can lead to both physical discomfort and psychological issues. Diagnosing these conditions is challenging due to overlapping symptoms and a shortage of trained dermatologists, highlighting the need for automated, affordable, and accurate diagnostic systems.
Recent advancements in machine learning (ML) and, in particular, deep learning (DL) have enabled AI-based skin disease detection with accuracy comparable to that of dermatologists. Multiple public datasets, including HAM10000, PH2, ISIC, and BCN20000, support model training, validation, and generalization. Common DL architectures such as FCN, U-Net, and SegNet are used for lesion segmentation, combined with preprocessing and data augmentation techniques.
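The preprocessing and augmentation steps described above can be sketched as follows. This is a minimal NumPy-only illustration, not the implementation used in the study; the function names, target size, and nearest-neighbour resizing are assumptions chosen for brevity, and the augmentation is applied jointly to the image and its ground-truth mask so the two stay aligned.

```python
import numpy as np

def preprocess(image, size=(256, 256)):
    """Resize an H x W x C image via nearest-neighbour sampling and
    normalize pixel intensities to the [0, 1] range."""
    h, w = image.shape[:2]
    rows = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    cols = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    resized = image[rows][:, cols].astype(np.float32)
    return resized / 255.0

def augment(image, mask, rng):
    """Random horizontal flip and random 90-degree rotation,
    applied identically to the image and its segmentation mask."""
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))  # number of 90-degree rotations
    return np.rot90(image, k), np.rot90(mask, k)
```

In practice, libraries such as Albumentations or torchvision apply these transforms more efficiently, but the key point shown here is that geometric augmentations must transform image and mask together.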
Evaluation results show that U-Net outperforms FCN and SegNet, achieving the highest Dice coefficient, IoU, precision, and recall, making it the most reliable of the three for skin lesion segmentation. This demonstrates the potential of AI-based systems to improve early detection, reduce misdiagnosis, and enhance dermatological care.
Conclusion
This study presents a comparative analysis of deep learning-based segmentation models, including FCN, SegNet, and U-Net, for automated skin lesion detection. The results indicate that while FCN and SegNet provide reasonable performance, U-Net achieves the highest accuracy, Dice score, and IoU, demonstrating superior capability in capturing fine-grained lesion boundaries. The use of preprocessing and data augmentation techniques significantly improves model robustness and generalization. The proposed framework highlights the potential of deep learning in enhancing the accuracy and efficiency of skin disease diagnosis. Future work may focus on integrating hybrid architectures, improving model interpretability, and deploying the system in real-time clinical applications for broader accessibility and practical use.
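U-Net's ability to capture fine-grained lesion boundaries comes from its skip connections, which concatenate high-resolution encoder features onto the upsampled decoder path. The toy NumPy sketch below illustrates only this wiring (pooling, upsampling, channel-wise concatenation); it omits convolutions and learned weights entirely, and all function names are illustrative.

```python
import numpy as np

def downsample(x):
    """2x2 max pooling over an H x W x C feature map (H, W even)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample(x):
    """2x nearest-neighbour upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skip_block(x):
    """Pool the encoder features, upsample them back, then concatenate
    with the saved full-resolution features along the channel axis —
    this concatenation is the U-Net skip connection."""
    skip = x                        # saved encoder activations
    bottleneck = downsample(x)      # encoder path loses spatial detail
    decoded = upsample(bottleneck)  # decoder path restores resolution
    return np.concatenate([decoded, skip], axis=-1)
```

Because the skip branch bypasses the pooling step, the decoder receives the exact boundary detail that pooling discards, which is why U-Net tends to segment lesion edges more precisely than FCN or SegNet.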