The rapid growth of digital services and interconnected systems has led to an increase in sophisticated cyber threats such as phishing, malware, network intrusions, ransomware, and deepfake-based attacks. Conventional security solutions are often limited to single-domain detection and lack adaptability, explainability, and user-centric intelligence. This paper presents CHRIS (Cyber Security Hub for Responsible Intelligence System), a unified, web-based cybersecurity platform that integrates Machine Learning (ML), Deep Learning (DL), and Generative AI to provide comprehensive and explainable threat detection. CHRIS incorporates six security modules: phishing detection, malware detection, network intrusion detection, password strength evaluation, deepfake detection, and ransomware detection, all accessible through a single interface. Random Forest, XGBoost, and Xception models are employed for predictive analysis, while Google Gemini is integrated to generate natural-language explanations, recommendations, and interactive assistance via an AI-powered chatbot. Experimental analysis demonstrates that the proposed system achieves high detection accuracy while significantly improving interpretability and usability. The results highlight the effectiveness of combining predictive security analytics with Generative AI, making CHRIS a practical and scalable solution for next-generation cybersecurity applications.
Introduction
This paper introduces CHRIS (Cyber Security Hub for Responsible Intelligence System), a unified, web-based cybersecurity platform designed to address the growing complexity and frequency of modern cyber threats such as phishing, malware, ransomware, network intrusions, deepfakes, and weak passwords. Traditional rule-based security systems often operate in isolation and struggle to adapt to evolving attack patterns. CHRIS overcomes these limitations by integrating Machine Learning (ML), Deep Learning (DL), and Generative AI into a single, intelligent framework.
The platform consolidates six interoperable cybersecurity modules:
Phishing detection
Malware detection
Network intrusion detection
Password strength evaluation
Deepfake detection
Ransomware detection
A key innovation of CHRIS is the integration of Google Gemini, which provides natural-language explanations, risk summaries, and actionable recommendations. This enhances transparency, usability, and user trust by transforming technical outputs into human-readable guidance.
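To make this explanation step concrete, the sketch below shows one way a raw detection result could be converted into a natural-language request for Gemini. The function name, result fields, and prompt wording are illustrative assumptions, not the paper's actual implementation; the returned prompt would then be sent to the Gemini API (e.g., via the google-generativeai client).

```python
def build_explanation_prompt(module: str, verdict: str, confidence: float,
                             details: dict) -> str:
    """Turn a raw detection result into a plain-language request for
    Gemini (all field names here are illustrative assumptions)."""
    lines = [
        f"A {module} scan returned the verdict '{verdict}' "
        f"with confidence {confidence:.0%}.",
        "Key indicators:",
    ]
    lines += [f"- {key}: {value}" for key, value in details.items()]
    lines.append(
        "Explain in plain language why this was flagged and recommend "
        "two concrete actions for a non-expert user."
    )
    return "\n".join(lines)

# Example: a phishing verdict with a few hypothetical URL features.
prompt = build_explanation_prompt(
    "phishing", "malicious", 0.97,
    {"url_length": 142, "uses_ip_address": True, "https": False},
)
print(prompt)
```

Keeping prompt construction separate from the API call makes the explanation layer easy to test and to swap for a different LLM backend.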
Related Work Overview
Existing research typically focuses on individual cybersecurity problems:
Phishing and malware detection use ML models such as Random Forest and SVM, as well as DL approaches.
Network intrusion detection often relies on XGBoost and hybrid ensemble methods.
Deepfake detection uses CNN architectures such as Xception.
Password evaluation tools like zxcvbn focus on behavioral password analysis.
However, prior systems rarely integrate multiple threat detection mechanisms with Generative AI explainability into a single platform. CHRIS fills this research gap.
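To make the role of such classifiers concrete, the following is a minimal Random Forest sketch over toy lexical URL features. The feature set and training data are invented for illustration and are not drawn from the paper or the cited works.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy lexical URL features: [url_length, num_dots, uses_ip, has_https].
# Synthetic, cleanly separated data purely for demonstration.
rng = np.random.default_rng(0)
n = 200
legit = np.column_stack([rng.integers(15, 60, n), rng.integers(1, 3, n),
                         np.zeros(n), np.ones(n)])
phish = np.column_stack([rng.integers(60, 200, n), rng.integers(3, 8, n),
                         rng.integers(0, 2, n), np.zeros(n)])
X = np.vstack([legit, phish])
y = np.array([0] * n + [1] * n)  # 0 = legitimate, 1 = phishing

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# A long, dot-heavy, IP-based URL without HTTPS:
print(clf.predict([[150, 5, 1, 0]]))
```

In practice the feature pipeline would extract such lexical and host-based features from submitted URLs before scoring.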
Proposed Methodology
CHRIS follows a hybrid detection approach, combining:
ML models (for phishing and malware),
Deep learning models (for deepfake and intrusion detection),
Heuristic and behavioral methods (for ransomware and password strength).
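As an illustration of the heuristic style of analysis listed above, the sketch below rates password strength from character-pool size and length. This is a deliberately naive entropy estimate of my own, not the zxcvbn algorithm or the paper's actual module; the thresholds and common-password list are assumptions.

```python
import math
import string

COMMON = {"password", "123456", "qwerty", "letmein"}  # tiny demo list

def password_strength(pw: str) -> str:
    """Heuristic strength rating (illustrative; not zxcvbn)."""
    if pw.lower() in COMMON:
        return "weak"
    pool = 0  # estimated size of the character pool actually used
    if any(c.islower() for c in pw):
        pool += 26
    if any(c.isupper() for c in pw):
        pool += 26
    if any(c.isdigit() for c in pw):
        pool += 10
    if any(c in string.punctuation for c in pw):
        pool += 32
    bits = len(pw) * math.log2(pool) if pool else 0  # naive entropy
    if bits < 40:
        return "weak"
    if bits < 60:
        return "moderate"
    return "strong"
```

Behavioral tools like zxcvbn go further, penalizing dictionary words, keyboard walks, and predictable substitutions rather than relying on pool size alone.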
User inputs (URLs, files, passwords, images, network data) are processed through a centralized engine that routes them to appropriate modules. Detection results, alerts, and recommendations are displayed on a unified dashboard, with secure logging for auditing and system improvement.
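The routing step described above can be sketched as a simple registry that maps each input kind to its detection module. Module names, input kinds, and return fields here are hypothetical placeholders for the real CHRIS modules.

```python
from typing import Any, Callable, Dict

# Registry mapping input kinds to detection modules (names illustrative).
MODULES: Dict[str, Callable[[Any], dict]] = {}

def module(kind: str):
    """Decorator registering a detector for one input kind."""
    def register(fn):
        MODULES[kind] = fn
        return fn
    return register

@module("url")
def phishing_detector(url):
    return {"module": "phishing", "input": url, "verdict": "pending"}

@module("password")
def password_evaluator(pw):
    return {"module": "password", "input": "***", "verdict": "pending"}

def route(kind: str, payload):
    """Central engine: dispatch a submission to the matching module."""
    if kind not in MODULES:
        raise ValueError(f"no module registered for input kind '{kind}'")
    result = MODULES[kind](payload)
    # In CHRIS the result would also be logged for auditing and
    # rendered on the unified dashboard.
    return result
```

A registry like this keeps modules independently replaceable, which matches the modular-update goal of the architecture.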
System Architecture
CHRIS is built on a layered, modular architecture consisting of:
User Interface Layer (React.js) – Centralized web dashboard for submissions and result visualization.
Detection & Intelligence Layer – Hosts the ML/DL detection modules and the Gemini-based explanation and recommendation engine.
Data & Monitoring Layer – Handles datasets, feature pipelines, logging, quarantined files, and real-time monitoring (e.g., watchdog for ransomware detection).
This modular design ensures scalability, real-time responsiveness, and independent module updates.
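The ransomware-monitoring idea mentioned above can be illustrated by a small behavioral detector that flags a burst of renames to suspicious extensions within a short time window. This is a simplified, self-contained stand-in for a watchdog-based monitor; the extension list, threshold, and window are assumed values.

```python
from collections import deque
import time

SUSPICIOUS_EXT = {".locked", ".encrypted", ".crypt"}  # assumed demo list

class RansomwareMonitor:
    """Flags a burst of suspicious file renames within a time window
    (simplified stand-in for a filesystem-event-based monitor)."""

    def __init__(self, threshold=5, window=10.0):
        self.threshold = threshold  # events needed to trigger an alert
        self.window = window        # sliding window length, in seconds
        self.events = deque()       # timestamps of suspicious renames

    def on_rename(self, path, now=None):
        """Record one rename event; return True when an alert fires."""
        now = time.monotonic() if now is None else now
        if not any(path.endswith(ext) for ext in SUSPICIOUS_EXT):
            return False
        self.events.append(now)
        # Drop events that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold  # True -> alert/contain
```

In a deployed system the alert path would quarantine the offending process and notify the dashboard rather than merely returning a flag.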
User Interface Layer
The UI presents real-time threat alerts, confidence scores, explanations, and recommended actions in a clear, user-friendly manner. It emphasizes:
Transparency (explaining why a threat was flagged)
Security awareness
Log summaries and system monitoring
Centralized control across all detection modules
Conclusion
This paper presented CHRIS (Cyber Security Hub for Responsible Intelligence System), a unified and intelligent cybersecurity platform that integrates Machine Learning, Deep Learning, and Generative AI to address a wide spectrum of modern cyber threats. By consolidating phishing detection, malware detection, network intrusion detection, password strength evaluation, deepfake detection, and ransomware detection into a single web-based system, CHRIS overcomes the limitations of traditional siloed security solutions. The modular architecture ensures scalability and flexibility, enabling each detection component to operate independently while contributing to a holistic security view. Experimental evaluations demonstrate that the proposed models achieve high detection accuracy across multiple threat domains. Random Forest models provide reliable and interpretable performance for phishing and malware detection, while the XGBoost-based intrusion detection system effectively identifies anomalous network behaviour with reduced false alerts. The Xception-based deepfake detection module successfully identifies manipulated media using datasets sourced from Hugging Face, highlighting the adaptability of the system to diverse and evolving data sources.
In addition, the ransomware detection module emphasizes early-stage behavioural monitoring and automated containment, offering proactive protection against one of the most destructive forms of cyber-attacks. A key contribution of CHRIS lies in the integration of Generative AI through Google Gemini, which significantly enhances explainability, user awareness, and interaction. Instead of presenting raw model predictions, the system provides natural-language explanations, contextual risk summaries, and actionable recommendations. This human-centric design bridges the gap between complex security analytics and end-user understanding, thereby improving trust and usability.
Overall, CHRIS demonstrates that combining predictive security analytics with Generative AI-driven intelligence can lead to more effective, explainable, and user-friendly cybersecurity solutions. The proposed platform is well suited for real-world deployment in personal and organizational environments. Future work will focus on extending CHRIS with dynamic malware analysis, real-time network traffic capture, multimodal deepfake detection (image and video), and continuous learning mechanisms to further improve resilience against emerging and zero-day threats.