The rising incidence and mortality rates of oral cancer have made it a significant global public health concern. Detecting the disease at an early stage is essential for better outcomes, as timely intervention significantly improves the chances of effectively treating conditions such as squamous cell carcinoma, lymphoma, melanoma, Kaposi's sarcoma, osteosarcoma, and adenoid cystic carcinoma. This proposal presents an AI-driven system that detects oral cancer using hybrid deep learning models. By integrating the EfficientNet and XceptionNet architectures, the system autonomously extracts pertinent features from medical images and classifies them as cancerous or non-cancerous. The method uses a comprehensive dataset of oral cancer images covering various lesion types, gathered via clinical imaging techniques including digital photography. The proposed system seeks to enhance the precision, reliability, and speed of oral cancer detection, providing a non-invasive tool to aid early diagnosis and clinical decision-making.
Introduction
Oral cancer is a common and deadly disease, especially in low- and middle-income countries, where early detection is often delayed due to low awareness and lack of medical resources. Tobacco and alcohol are leading risk factors. Early diagnosis is critical for improving survival, but current diagnosis heavily relies on subjective clinical expertise, leading to variability.
Research Aim
This study proposes an AI-based system using deep learning—specifically EfficientNet and XceptionNet—to automate and improve the accuracy of oral cancer detection from medical images.
Methodology
A. Data Collection
Dataset: 2,000+ annotated oral cavity images from Kaggle
Cancerous: 1,200
Non-cancerous: 800
Images resized to 256×256 pixels
Data split: 70% training, 15% validation, 15% testing
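The 70/15/15 split above can be sketched as follows; the shuffling seed and the index-based split are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def split_indices(n_images, train_pct=70, val_pct=15, seed=0):
    """Shuffle image indices and split them 70/15/15 into train/val/test.

    Integer percentages keep the split sizes exact (no float rounding).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)
    n_train = n_images * train_pct // 100
    n_val = n_images * val_pct // 100
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# With the paper's 2,000 images this yields 1,400 / 300 / 300.
train_idx, val_idx, test_idx = split_indices(2000)
```

Splitting indices rather than image arrays keeps the step cheap; the actual images are then loaded and resized to 256×256 per split.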
B. Preprocessing Steps
Grayscale Conversion – reduces complexity by removing color
Contrast Enhancement & Noise Removal – improves image clarity using adaptive histogram equalization and Gaussian filtering
Thresholding – segments lesions from background
Histogram Equalization – improves visibility of lesions
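A minimal sketch of the grayscale, equalization, and thresholding steps; the paper uses OpenCV, so plain NumPy stands in here, and the fixed threshold level is an illustrative assumption (not the segmentation method actually used).

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse RGB to one channel with standard luma weights."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def equalize_histogram(gray):
    """Map intensities through the normalized CDF (simple global equalization)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[gray].astype(np.uint8)

def threshold(gray, level=128):
    """Binarize: pixels at or above `level` become foreground (255)."""
    return (gray >= level).astype(np.uint8) * 255

# Run the three steps on a synthetic image.
rgb = np.random.default_rng(1).integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
gray = to_grayscale(rgb)
mask = threshold(equalize_histogram(gray))
```

In practice the adaptive (CLAHE-style) equalization and Gaussian filtering named above would replace the global versions sketched here.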
C. Feature Extraction
EfficientNet and XceptionNet extract deep features from the preprocessed images in parallel
Outputs from both models are concatenated for a richer feature representation
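The fusion step amounts to concatenating the two backbones' per-image feature vectors along the channel axis. The 2048- and 1280-dimensional widths below are typical of Xception and EfficientNet-B0 global-average-pooled outputs, assumed here for illustration.

```python
import numpy as np

def fuse_features(xception_feats, efficientnet_feats):
    """Concatenate per-image feature vectors from the two backbones
    into one fused representation fed to the classifier head."""
    return np.concatenate([xception_feats, efficientnet_feats], axis=-1)

# A batch of 4 images: 2048-d Xception features + 1280-d EfficientNet features.
batch = 4
fused = fuse_features(np.zeros((batch, 2048)), np.zeros((batch, 1280)))
```

Concatenation preserves both feature sets intact, letting the classifier weight each backbone's contribution during training.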
D. Classification
Two-class model: Cancerous vs Non-cancerous
Softmax activation in the output layer
Loss Function: Categorical Cross-Entropy
Optimizer: Adam
Model built and trained in Python using OpenCV and deep learning libraries
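The softmax output and categorical cross-entropy loss listed above behave as in this NumPy sketch of the standard definitions (not the actual training code):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax, with the usual max-shift for numerical stability."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(y_true, y_prob, eps=1e-12):
    """Mean negative log-likelihood of the true one-hot labels."""
    return float(-np.mean(np.sum(y_true * np.log(y_prob + eps), axis=-1)))

# Two classes: [cancerous, non-cancerous]. A confident correct
# prediction gives probabilities near 1 and a loss near 0.
probs = softmax(np.array([[2.0, 0.0]]))
loss = categorical_cross_entropy(np.array([[1.0, 0.0]]), probs)
```

With only two classes this is equivalent to a sigmoid output with binary cross-entropy; the two-unit softmax form matches the setup described above.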
Results & Analysis
Hybrid Model Performance:

Metric         Value (%)
Accuracy       92.7
Sensitivity    91.3
Specificity    94.1
F1 Score       93.0
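For reference, all four reported metrics follow from the test-set confusion matrix. The counts below are made-up illustrative values, not the study's actual confusion matrix.

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive the reported metrics from confusion-matrix counts
    (positive class = cancerous)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on cancerous images
    specificity = tn / (tn + fp)   # recall on non-cancerous images
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

# Hypothetical counts on a 200-image test set.
m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
```

Reporting sensitivity and specificity separately matters clinically: sensitivity bounds the rate of missed cancers, while specificity bounds false alarms.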
Comparison with Standard CNN:

Model               Accuracy   Sensitivity   Specificity   F1 Score
Standard CNN        85.4%      83.0%         87.2%         84.0%
Hybrid (Proposed)   92.7%      91.3%         94.1%         93.0%
Training curves showed stable convergence and no overfitting
The hybrid model significantly outperformed the standard CNN baseline
Conclusion
This work presents a dual-architecture deep learning solution for oral cancer classification, integrating XceptionNet and EfficientNet to deliver strong performance. On the held-out test set, the hybrid model attained 92.7% accuracy, 91.3% sensitivity, 94.1% specificity, and a 93.0% F1 score, substantially outperforming a standard CNN baseline and indicating its robustness and dependability. These promising outcomes motivate future validation on larger and more heterogeneous datasets, along with further improvements through data augmentation and architectural fine-tuning. This method holds significant promise for facilitating precise and prompt diagnosis in clinical settings.