Accurate automated classification of brain tumors from Magnetic Resonance Imaging (MRI) scans demands both clean input features and robust predictive models. This paper presents a Hybrid Deep Learning Framework that couples a noise-discriminating MRI preprocessing pipeline with a soft-voting ensemble of three independently fine-tuned Convolutional Neural Networks (CNNs). The preprocessing pipeline computes the per-image pixel intensity standard deviation and routes each scan through one of two branches: images whose standard deviation exceeds a calibrated threshold of 12.0 are processed by an Adaptive Weighted Arithmetic Mean Filter (AWAMF) fused with a 3×3 median kernel at a 0.7/0.3 blending ratio, while quieter images receive a conservative Non-Local Means pass. Three lightweight architectures — ResNet-18, MobileNet-V2, and SqueezeNet-1.1 — are independently fine-tuned from ImageNet weights and their per-class softmax distributions are arithmetically averaged to form the ensemble decision. The inference pipeline is deployed as a multi-user Flask web application backed by SQLite, providing real-time MRI upload, Chart.js probability visualisations, and auto-generated structured clinical reports. On 7,023 MRI images across four classes — Glioma, Meningioma, No Tumor, and Pituitary — the ensemble achieves 98.02% accuracy, macro F1-score of 0.979, and macro AUC of 0.999, outperforming every constituent model and several prior published systems on the same benchmark.
Introduction
The study uses deep learning with transfer learning, employing three CNN models—ResNet-18, MobileNet-V2, and SqueezeNet-1.1—each contributing unique strengths. Instead of relying on a single model, a soft-voting ensemble approach combines their predictions, improving accuracy by leveraging their complementary features.
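The soft-voting rule described above reduces to an arithmetic mean over the per-class softmax distributions of the three networks. A minimal sketch is shown below; the four class names come from the paper, while the `soft_vote` helper and the example probability rows are illustrative, not the authors' code:

```python
import numpy as np

# Class order used throughout this sketch (taken from the paper's dataset).
CLASSES = ["Glioma", "Meningioma", "No Tumor", "Pituitary"]

def soft_vote(prob_rows):
    """Arithmetically average the per-class softmax distributions of
    several models and return the winning class with the averaged
    distribution. Each row in prob_rows is one model's softmax output."""
    avg = np.mean(np.asarray(prob_rows, dtype=float), axis=0)
    return CLASSES[int(np.argmax(avg))], avg

# Hypothetical outputs from ResNet-18, MobileNet-V2, and SqueezeNet-1.1:
label, avg = soft_vote([
    [0.60, 0.20, 0.10, 0.10],
    [0.50, 0.30, 0.10, 0.10],
    [0.20, 0.50, 0.20, 0.10],
])
```

In practice each row would be `torch.softmax(model(x), dim=1)` from one fine-tuned CNN; averaging probabilities rather than taking a majority over hard labels is what makes the vote "soft" and lets a confident minority model influence the outcome.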
A key contribution is an adaptive preprocessing pipeline that adjusts noise removal techniques based on the quality of each MRI scan, enhancing image clarity without losing important details. The system is trained on over 7,000 MRI images and uses data augmentation and optimization techniques for better performance.
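The routing logic of the adaptive pipeline can be sketched as follows. The σ = 12.0 threshold and the 0.7/0.3 blend ratio are taken from the paper; the filters themselves are simplified stand-ins (a plain 3×3 mean in place of the adaptive AWAMF weights, and the same smoother standing in for the Non-Local Means pass, which in a real implementation would be `cv2.fastNlMeansDenoising`):

```python
import numpy as np

NOISE_STD_THRESHOLD = 12.0  # calibrated pixel-intensity-std boundary (from the paper)

def _windows3(img):
    # Stack the nine shifted copies of a reflect-padded image, i.e. every
    # 3x3 neighbourhood, so mean/median filters become axis-0 reductions.
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode="reflect")
    return np.stack([p[r:r + h, c:c + w] for r in range(3) for c in range(3)])

def denoise(img):
    """Route a grayscale scan by its intensity standard deviation:
    noisy scans get a 0.7/0.3 blend of a 3x3 mean (stand-in for AWAMF)
    and a 3x3 median; quiet scans get a single conservative smoothing
    pass (stand-in for Non-Local Means)."""
    win = _windows3(img)
    if img.std() > NOISE_STD_THRESHOLD:
        return 0.7 * win.mean(axis=0) + 0.3 * np.median(win, axis=0)
    return win.mean(axis=0)  # placeholder for the conservative NLM branch
```

The point of the routing is that aggressive filtering is only paid for where the noise estimate justifies it, so low-noise scans keep their fine anatomical detail.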
Additionally, the model is integrated into a Flask-based web application with features like user authentication, real-time predictions, analytics dashboards, and automated clinical report generation.
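The persistence layer behind the report generation can be illustrated with the standard-library `sqlite3` module. The table schema, column names, and report layout below are illustrative assumptions, not the paper's actual database design:

```python
import sqlite3
from datetime import datetime, timezone

def init_db(conn):
    # Hypothetical predictions table; the real schema is not specified
    # in the paper.
    conn.execute("""CREATE TABLE IF NOT EXISTS predictions (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        username   TEXT NOT NULL,
        scan_name  TEXT NOT NULL,
        label      TEXT NOT NULL,
        confidence REAL NOT NULL,
        created_at TEXT NOT NULL)""")

def log_prediction(conn, username, scan_name, label, confidence):
    """Record one ensemble prediction and return a simple structured
    report string of the kind the web application could render."""
    ts = datetime.now(timezone.utc).isoformat()
    conn.execute(
        "INSERT INTO predictions (username, scan_name, label, confidence, created_at)"
        " VALUES (?, ?, ?, ?, ?)",
        (username, scan_name, label, confidence, ts))
    conn.commit()
    return (f"Scan: {scan_name}\n"
            f"Predicted class: {label} ({confidence:.1%} confidence)\n"
            f"Generated: {ts}")
```

In the deployed system a Flask route would call something like `log_prediction` after the ensemble runs, and the analytics dashboard would aggregate over the same table.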
Results show that the individual models achieve roughly 87–91% accuracy on their own, while the soft-voting ensemble reaches 98.02%, demonstrating the effectiveness of combining complementary models.
Conclusion
A Hybrid Deep Learning Framework is proposed in this work for the automated classification of multiple brain tumour categories directly from MRI scan data. At its core, the system integrates an adaptive noise-discriminating preprocessing stage — employing AWAMF-median fusion (0.7/0.3) when scan noise is detected and a conservative NLM filter otherwise, with a σ = 12.0 boundary separating the two paths — followed by a soft-voting ensemble comprising ResNet-18, MobileNet-V2, and SqueezeNet-1.1, each independently initialised from IMAGENET1K_V1 weights and optimised via Adam at lr = 1e-4. Evaluated on a stratified held-out set of 1,311 images, the ensemble attains 98.02% accuracy alongside a macro F1 of 0.979 and a macro AUC of 0.999, exceeding every constituent model by no less than 6.64 percentage points and establishing a new state-of-the-art on this benchmark. End-to-end deployment is realised through a Flask web application equipped with SQLite-backed multi-user management, Chart.js-driven dashboards, automated clinical report generation, and a comprehensive administrator audit panel. Concurrent-user testing involving three simultaneous participants produced correct tumour-class predictions across all 22 submitted scans, confirming system reliability under real operating conditions.