The rapid growth of digital learning resources has created challenges in personalized learning, automated evaluation, and interview preparation. Existing platforms typically address these tasks independently and often rely on generic content that is not grounded in user-provided study material. This paper presents a unified Large Language Model (LLM)-based smart learning, evaluation, and interview simulation framework that integrates document summarization, context-aware assessment, and resume-driven interview preparation within a single architecture. The system is implemented using a MERN-stack web framework, with Google Gemini API providing LLM-based intelligence and Cloudinary enabling secure cloud storage for uploaded PDF documents. The proposed pipeline processes user-provided notes and resumes, performs contextual text extraction, and generates summaries, adaptive practice tests, and role-specific interview questions grounded in the uploaded content. The evaluation module supports multiple difficulty levels and question formats, while the interview simulator analyzes resume and job description inputs to identify skill gaps and generate targeted questions. Experimental testing was conducted using multiple educational documents, resumes, and job descriptions to measure contextual accuracy and system latency. Results indicate that grounding LLM responses within user-provided documents improves question relevance and reduces hallucinated outputs while maintaining low processing latency. The proposed framework provides a unified, context-aware learning environment that supports personalized study, automated evaluation, and interview preparation within a scalable full-stack architecture.
Introduction
This paper proposes a unified LLM-based educational platform designed to overcome limitations in traditional digital learning systems, which often act as static content repositories and fail to provide personalized guidance, evaluation, or interview preparation. Learners typically face cognitive overload, a lack of adaptive assessment, and generic interview preparation tools that do not account for their individual study material or resumes.
To address these issues, the system integrates document-grounded summarization, adaptive question generation, and resume-aware interview simulation into a single full-stack architecture powered by the Google Gemini API. It enables users to upload study materials and resumes, which are processed through a web-based pipeline built using React.js, Node.js, Express, MongoDB, and Cloudinary.
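Before a document reaches Cloudinary, the backend would typically validate the upload. The following is a minimal sketch of that check, assuming a PDF-only policy; the function name and the 10 MB size limit are illustrative, not taken from the actual implementation.

```javascript
// Hypothetical server-side upload validation, run before forwarding
// a file to Cloudinary. Accepts only PDFs under a fixed size limit.
const MAX_PDF_BYTES = 10 * 1024 * 1024; // illustrative 10 MB limit

function validateUpload(file) {
  if (file.mimetype !== 'application/pdf') {
    return { ok: false, reason: 'only PDF documents are accepted' };
  }
  if (file.size > MAX_PDF_BYTES) {
    return { ok: false, reason: 'file exceeds the size limit' };
  }
  return { ok: true };
}

console.log(validateUpload({ mimetype: 'application/pdf', size: 1024 }).ok); // true
console.log(validateUpload({ mimetype: 'image/png', size: 1024 }).ok);       // false
```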
The system performs three core functions: (1) summarizes uploaded documents into structured key points, (2) generates customized assessments based on difficulty level and content context, and (3) simulates interviews by analyzing the user’s resume against job descriptions to identify skill gaps and generate targeted questions. A backend orchestration layer manages authentication, routing, and secure data handling, while cloud storage ensures scalable file management.
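The skill-gap step of function (3) can be illustrated as a set difference between skills found in the resume and skills required by the job description. In the actual system the skill extraction itself is delegated to the LLM; this sketch shows only the comparison, with hypothetical function and variable names.

```javascript
// Illustrative skill-gap detection: return job-description skills
// that are absent from the resume (case-insensitive comparison).
function findSkillGaps(resumeSkills, jobSkills) {
  const have = new Set(resumeSkills.map((s) => s.toLowerCase()));
  return jobSkills.filter((s) => !have.has(s.toLowerCase()));
}

const gaps = findSkillGaps(
  ['JavaScript', 'React', 'MongoDB'],
  ['React', 'Node.js', 'Docker']
);
console.log(gaps); // ['Node.js', 'Docker']
```

The returned gap list would then seed the prompt that generates targeted interview questions.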
The methodology includes PDF text extraction, preprocessing, prompt engineering for Gemini, and retrieval-augmented generation (RAG)-style context feeding to ensure responses are grounded in user-specific content. The system also supports real-time scoring, performance analytics, and personalized feedback.
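The document-grounding step can be sketched as a prompt template that wraps the extracted text in explicit boundary markers so the model answers only from the supplied content. The exact prompt wording used by the system is not given in the paper; the markers and phrasing below are assumptions for illustration.

```javascript
// Hypothetical document-grounded prompt construction for Gemini:
// the extracted document text is framed with explicit boundaries and
// an instruction restricting the model to that content.
function buildGroundedPrompt(documentText, task) {
  return [
    'Answer using ONLY the document between the markers below.',
    'If the document does not contain the answer, say so.',
    '--- DOCUMENT START ---',
    documentText,
    '--- DOCUMENT END ---',
    `Task: ${task}`,
  ].join('\n');
}

const prompt = buildGroundedPrompt(
  'Photosynthesis converts light energy into chemical energy.',
  'Generate 3 multiple-choice questions at medium difficulty.'
);
console.log(prompt.startsWith('Answer using ONLY')); // true
```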
For evaluation, the framework was deployed in a full-stack environment with optimized API parameters, low-temperature settings for factual tasks, and chunked input processing for long documents. Overall, the proposed solution provides a unified, context-aware, and personalized learning ecosystem that integrates studying, evaluation, and career preparation within a single intelligent platform.
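The chunked input processing mentioned above can be sketched as fixed-size windows with a small overlap so that context is not lost at chunk boundaries. The chunk and overlap sizes here are illustrative assumptions; the paper does not specify the values used.

```javascript
// Minimal sketch of chunking a long document into overlapping windows
// sized to fit within a per-request character budget.
function chunkText(text, chunkSize = 2000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached
    start += chunkSize - overlap;                // step forward, keep overlap
  }
  return chunks;
}

const parts = chunkText('a'.repeat(5000), 2000, 200);
console.log(parts.length);    // 3
console.log(parts[0].length); // 2000
```

Each chunk would then be passed through the grounded-prompt pipeline independently, with results merged on the backend.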
Conclusion
This paper presented a unified LLM-based smart learning, evaluation, and interview simulation framework using document-grounded prompting to transform static digital education into an interactive, personalized experience. By integrating the Google Gemini API with a full-stack architecture and Cloudinary, the system offered a unified smart learning pipeline covering document summarization, dynamic evaluation, and role-specific interview simulation. Experimental results demonstrated that using context-aware prompting with strict document boundaries significantly reduced AI hallucinations compared to standard ungrounded prompting, achieving over 92% accuracy in test generation and skill-gap detection. Furthermore, the asynchronous Node.js architecture minimized backend latency, meeting the fast processing requirements for real-time educational web applications. The adoption of a modular, secure platform provides a robust solution for students and job seekers facing cognitive overload.
Future work includes OCR support for scanned documents and voice-based interview simulation using speech processing modules.