Abstract
The increasing adoption of medical imaging technologies has significantly amplified the need for intelligent, efficient, and automated diagnostic systems to assist healthcare professionals. Manual interpretation of radiological images is often constrained by expert availability, time limitations, and growing patient demand. This paper introduces an AI-based multi-task medical image analyzer capable of identifying disease conditions from X-ray, CT, and MRI images. The proposed framework supports disease detection for pneumonia, COVID-19, and brain tumors by employing specialized deep learning models tailored to each modality and diagnostic task. Architectures such as DenseNet121, VGG16, and EfficientNet-B0 are utilized to optimize performance across different imaging scenarios. An automated routing mechanism directs each input image to the appropriate pretrained model based on disease and modality selection. Image preprocessing methods, including contrast enhancement, histogram normalization, and data augmentation, are applied to strengthen model generalization. Experimental evaluation demonstrates consistent and reliable performance across all datasets, highlighting the effectiveness of the proposed unified yet disease-specific diagnostic framework for clinical decision support and large-scale screening applications.
Introduction
This paper presents a disease-centric, modality-aware deep learning framework for medical image–based diagnosis that addresses key limitations of existing automated diagnostic systems. Medical imaging modalities such as X-ray, CT, and MRI are widely used in clinical practice, and recent advances in convolutional neural networks (CNNs) have enabled accurate detection of diseases such as pneumonia, COVID-19, and brain tumors. However, many current systems are restricted to a single disease or a fixed imaging modality, limiting their clinical adaptability.
The proposed approach challenges the common assumption that multi-modality support requires combining multiple diseases into a single model. Instead, it treats each disease as an independent diagnostic task that accepts multiple imaging modalities only when clinically appropriate. A modality-aware routing mechanism ensures that each uploaded image is processed by the correct CNN model, improving reliability and reducing incorrect predictions.
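To make the routing behaviour concrete, a minimal sketch of such a modality-aware routing table is shown below. This is an illustrative assumption only: the (disease, modality)-to-architecture pairings, the model file paths, and the use of the Keras `load_model` helper are placeholders rather than the exact implementation of the proposed system.

```python
# Hypothetical sketch of a modality-aware routing table (illustrative only;
# model file names and disease-to-architecture pairings are assumptions).
from tensorflow.keras.models import load_model

# Map each supported (disease, modality) pair to its dedicated pretrained model.
MODEL_REGISTRY = {
    ("pneumonia", "xray"): "models/pneumonia_densenet121.h5",
    ("covid19", "ct"): "models/covid19_ct_efficientnetb0.h5",
    ("covid19", "xray"): "models/covid19_xray_densenet121.h5",
    ("brain_tumor", "mri"): "models/brain_tumor_mri_vgg16.h5",
    ("brain_tumor", "ct"): "models/brain_tumor_ct_vgg16.h5",
}

def route(disease: str, modality: str):
    """Return the model for a valid disease-modality pair, or reject the request."""
    key = (disease.lower(), modality.lower())
    if key not in MODEL_REGISTRY:
        raise ValueError(f"Unsupported disease-modality combination: {key}")
    return load_model(MODEL_REGISTRY[key])
```

Validating the disease–modality pair before a model is loaded is what prevents, for example, an MRI scan from being passed to a chest X-ray pneumonia classifier.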
The framework includes separate pipelines for pneumonia detection (chest X-ray), COVID-19 detection (CT and X-ray), and brain tumor detection (MRI and CT). Optimized deep learning models—such as DenseNet121, EfficientNet-B0, and VGG16—are used with transfer learning to achieve high diagnostic performance. The system validates disease–modality compatibility before classification and routes images to the appropriate model accordingly.
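As a rough illustration of the transfer-learning setup, the sketch below builds a DenseNet121-based pneumonia classifier with a frozen ImageNet backbone and a new binary classification head. The input size, dropout rate, optimizer, and the assignment of DenseNet121 to the pneumonia task are assumptions made for illustration and need not match the reported configuration.

```python
# Minimal transfer-learning sketch (assumed setup: Keras, ImageNet weights,
# binary pneumonia-vs-normal output; not the authors' exact configuration).
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras import layers, models

def build_pneumonia_model(input_shape=(224, 224, 3)):
    # Load DenseNet121 pretrained on ImageNet, without its original classifier.
    base = DenseNet121(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional backbone for initial training

    # Attach a lightweight classification head for the binary diagnostic task.
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)
    output = layers.Dense(1, activation="sigmoid")(x)

    model = models.Model(inputs=base.input, outputs=output)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```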
Experimental results demonstrate strong performance across all diseases, with CT-based COVID-19 detection outperforming X-ray-based methods and MRI-based brain tumor detection showing high accuracy due to superior feature representation. Confusion matrix analysis indicates low misclassification rates and, in particular, a low false-negative rate, which is critical in medical diagnosis.
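The sensitivity emphasized by the confusion matrix analysis can be recovered directly from the matrix entries; the short sketch below illustrates the computation on placeholder labels and predictions, not on the reported results.

```python
# Illustrative computation of sensitivity and specificity from a confusion
# matrix; y_true and y_pred are placeholder values, not the paper's results.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # 1 = diseased, 0 = normal (example labels)
y_pred = [1, 0, 1, 1, 0, 1, 1, 1]   # example model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # recall: fraction of diseased cases detected
specificity = tn / (tn + fp)        # fraction of healthy cases correctly cleared
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```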
Conclusion
This paper presented a disease-centric and modality-aware AI-based medical image analysis framework designed for the automated detection of pneumonia, COVID-19, and brain tumors using X-ray, CT, and MRI images. By employing specialized deep learning models for each disease and enforcing a structured modality-aware routing mechanism, the proposed system ensures accurate, consistent, and clinically meaningful diagnostic predictions. The experimental results clearly demonstrate that separating diseases into independent diagnostic tasks improves classification reliability and minimizes feature interference across modalities.
The proposed framework effectively balances diagnostic performance, computational efficiency, and system flexibility, making it well-suited for real-world clinical environments and large-scale screening applications. Its modular architecture enables seamless scalability, allowing additional diseases and imaging modalities to be incorporated with minimal system redesign. Furthermore, the system’s ability to support multiple diagnostic tasks within a unified framework highlights its practicality for deployment in resource-constrained healthcare settings.
Moreover, the framework has the potential to reduce diagnostic workload for medical professionals by assisting in early disease screening and prioritization of critical cases, thereby supporting timely and informed clinical decision-making. Future work will focus on integrating explainable AI techniques to improve transparency and trust in model predictions, as well as extending the framework to support real-time deployment and validation on larger, multi-institutional datasets. These enhancements will further strengthen the clinical applicability of the system and contribute to the advancement of intelligent healthcare diagnostics.