Healthcare diagnostics increasingly rely on Artificial Intelligence (AI) for accurate and timely decision-making. However, integrating AI outputs into clinical workflows through a structured Decision Support System (DSS) remains a major challenge. This paper proposes a hybrid AI-based Decision Support Framework designed to enhance healthcare diagnostics and patient triage through intelligent data analysis and clinician-centric visualization. The framework integrates structured and unstructured medical data using AI models and displays interpretable results through a real-time dashboard. The implementation includes a prototype built with Python and Streamlit that demonstrates disease prediction from clinical data. The proposed system shows potential for improving diagnostic accuracy, supporting clinician decisions, and reducing response time in healthcare environments.
Introduction
The study focuses on the integration of Artificial Intelligence (AI) into healthcare diagnostics through a structured AI-Based Decision Support System (AI-DSS). AI has enhanced medical diagnostics by enabling automated pattern recognition, predictive analysis, and disease risk assessment across specialties like pathology, cardiology, oncology, and radiology. However, challenges remain in real-time clinical usability due to complex model outputs, lack of explainability, data privacy concerns, and poor integration with Electronic Health Records (EHRs).
Decision Support Systems (DSS) help bridge this gap by translating AI outputs into actionable recommendations, but traditional and even some AI-driven DSS face limitations in adaptability, interpretability, and user interface design.
The proposed AI-DSS framework addresses these issues through a modular, layered architecture:
Data Acquisition Layer – Collects structured and unstructured patient data (EHRs, imaging, wearables, clinical notes) and performs preprocessing.
AI Analytics Layer – Applies machine learning, deep learning, and hybrid models for disease prediction, anomaly detection, and risk scoring.
Inference & Knowledge Integration Layer – Aligns AI outputs with medical guidelines and ontologies, integrating Explainable AI (XAI) tools like SHAP and LIME for interpretability.
Decision Support Layer – Generates actionable diagnostic alerts, triage recommendations, and confidence scores.
Visualization & Dashboard Layer – Provides real-time, interactive, and interpretable insights for clinicians.
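The Decision Support Layer's mapping from a model's risk score to a triage recommendation with a confidence score can be sketched as follows. The thresholds, labels, and confidence formula here are illustrative assumptions, not values specified by the framework.

```python
# Illustrative sketch of the Decision Support Layer: map a predicted
# disease-risk probability to a triage recommendation. The cutoffs and
# category names below are assumed for illustration only.

def triage_recommendation(risk_probability: float) -> dict:
    """Translate a disease-risk probability into an actionable alert."""
    if not 0.0 <= risk_probability <= 1.0:
        raise ValueError("risk_probability must be in [0, 1]")
    if risk_probability >= 0.75:        # assumed high-risk cutoff
        level, action = "high", "immediate specialist referral"
    elif risk_probability >= 0.40:      # assumed moderate-risk cutoff
        level, action = "moderate", "schedule follow-up diagnostics"
    else:
        level, action = "low", "routine monitoring"
    # Confidence reported as scaled distance from the decision boundary (0.5).
    confidence = round(abs(risk_probability - 0.5) * 2, 2)
    return {"risk": level, "action": action, "confidence": confidence}
```

For example, `triage_recommendation(0.82)` yields a high-risk alert with a confidence of 0.64, which the dashboard layer could render as a color-coded flag.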
Workflow: Data collection → AI prediction → reasoning & interpretation → decision generation → dashboard visualization, enabling continuous feedback and improvement.
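This workflow can be sketched as a chain of stage functions, one per layer. Every stage body below is a placeholder standing in for the framework's actual components (EHR connectors, trained models, guideline knowledge bases); only the stage ordering reflects the workflow itself.

```python
# Minimal sketch of the AI-DSS workflow: data collection -> AI prediction
# -> reasoning & interpretation -> decision generation. Stage internals
# are toy placeholders, not the framework's real models.

def collect(record: dict) -> dict:
    """Data Acquisition Layer: drop missing fields (stand-in preprocessing)."""
    return {k: v for k, v in record.items() if v is not None}

def predict(features: dict) -> float:
    """AI Analytics Layer: toy risk score standing in for a trained model."""
    return min(1.0, 0.01 * features.get("age", 0)
               + 0.3 * features.get("chest_pain", 0))

def interpret(features: dict, risk: float) -> dict:
    """Inference & Knowledge Integration Layer: attach drivers (stand-in XAI)."""
    drivers = [k for k in ("age", "chest_pain") if features.get(k)]
    return {"risk": risk, "drivers": drivers}

def decide(assessment: dict) -> dict:
    """Decision Support Layer: flag cases that warrant clinician review."""
    assessment["recommend_review"] = assessment["risk"] >= 0.5
    return assessment

def run_pipeline(record: dict) -> dict:
    """Chain the stages in workflow order; the dashboard would render the result."""
    features = collect(record)
    return decide(interpret(features, predict(features)))
```

In a deployed system, the dashboard layer would subscribe to `run_pipeline` outputs and clinician feedback would flow back into model retraining, closing the continuous-improvement loop described above.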
Advantages include improved diagnostic accuracy, explainable AI integration, usability in real-time decision-making, scalability across medical domains, and ethical compliance.
Implementation: The framework is demonstrated using Python, with Random Forest models for cardiac disease prediction on the UCI Heart Disease dataset. Integration of SHAP provides feature-level interpretability, and a Streamlit-based dashboard allows clinicians to view predictions, explanations, and triage recommendations in real time. The model achieved 89% accuracy, illustrating its potential reliability in clinical decision support.
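The modelling step can be sketched with scikit-learn's RandomForestClassifier. Because the UCI Heart Disease dataset and the SHAP library may not be available in every environment, this sketch trains on synthetic tabular data and uses the forest's impurity-based feature importances as a stand-in for SHAP's feature-level attributions; the resulting accuracy is illustrative and will differ from the reported 89%.

```python
# Sketch of the implementation step: train a Random Forest classifier and
# rank features by importance. Synthetic data stands in for the UCI Heart
# Disease dataset, and impurity-based importances stand in for SHAP values.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical features (age, bp, chol, ...).
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))

# Feature-level interpretability: rank features by global importance.
# A SHAP TreeExplainer would instead give per-prediction attributions,
# which is what the dashboard surfaces to clinicians.
ranked = sorted(enumerate(model.feature_importances_),
                key=lambda t: t[1], reverse=True)
```

With SHAP installed, `shap.TreeExplainer(model)` applied to `X_test` would yield the per-patient explanations described above, which the Streamlit dashboard can plot alongside each prediction.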
Conclusion
The incorporation of Artificial Intelligence into healthcare decision-making has opened up new possibilities for improving diagnostic accuracy, reducing clinical workload, and enabling early disease detection. This paper proposed an AI-Based Decision Support System (AI-DSS) designed to close the divide between data-driven AI outputs and practical clinical usability. The framework unifies data acquisition, AI analytics, knowledge-based reasoning, and an interactive visualization dashboard to support physicians in real-time diagnostic evaluation and patient triage.
The implemented system, using a Random Forest-based AI model and SHAP explainability integration, demonstrated an accuracy of 89% on the UCI Heart Disease dataset. The inclusion of Explainable AI (XAI) elements allowed clinicians to interpret predictions transparently, increasing trust in the model’s decision-making process. Furthermore, the Streamlit-powered dashboard provided a user-friendly, real-time interface that translated complex AI outputs into actionable medical recommendations, facilitating evidence-based decision support.