Abstract
Wireless communication technology has made it much easier for students and faculty to interact in digital classrooms, ultimately leading to better educational outcomes. However, evaluating descriptive answers remains a significant challenge, as current online testing systems do not yet provide a fully reliable solution. Manual evaluation, in particular, is time-consuming and prone to inconsistency, especially in large-scale examinations, where grading variation and assessor bias can occur. To address these issues, this paper presents an intelligent Descriptive Answer Evaluation System that combines wireless communication infrastructure with advanced Natural Language Processing (NLP) techniques to support real-time automated grading. The proposed system uses Sentence-BERT (SBERT) to measure semantic similarity between answers and a fine-tuned Text-to-Text Transfer Transformer (T5) model to evaluate the contextual correctness, coherence, and completeness of responses. The system follows a modular design, consisting of Admin, Staff, Student, Question Management, Answer Submission, NLP Processing, and Result Allocation modules, which work together to ensure smooth operation and easy scalability. By reducing dependence on manual evaluation, the system helps maintain consistent grading standards, making it well suited to educational institutions conducting digital assessments. Furthermore, experimental results demonstrate that the proposed grading system is more reliable than traditional systems that rely on keyword-matching methods.
Introduction
The paper proposes an AI-powered automated descriptive answer evaluation system that addresses limitations of traditional manual grading, such as bias, inconsistency, and time consumption. With the growth of online education, there is a need for scalable and reliable systems that can accurately assess large volumes of student responses.
The system uses Natural Language Processing (NLP) with advanced transformer models—SBERT for semantic similarity and T5 for contextual understanding. Unlike traditional keyword-based methods, this approach evaluates both the meaning and logical quality of answers, enabling more accurate and fair assessment.
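The SBERT comparison step ultimately reduces to a cosine-similarity computation over sentence embeddings. A minimal sketch, using short placeholder vectors in place of real SBERT embeddings (in practice these would come from a pretrained model such as `all-MiniLM-L6-v2`, and would be 384- or 768-dimensional):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder 4-dimensional "embeddings" for a reference answer and a
# student answer; real SBERT vectors are far larger.
reference_embedding = [0.2, 0.8, 0.1, 0.4]
student_embedding = [0.25, 0.75, 0.15, 0.35]

similarity = cosine_similarity(reference_embedding, student_embedding)
print(f"semantic similarity: {similarity:.3f}")
```

Because cosine similarity compares vector direction rather than exact tokens, two answers phrased differently but expressing the same concept score close to 1.0, which is what distinguishes this approach from keyword matching.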
The architecture is modular, consisting of Admin, Staff, Student, Question Management, Answer Submission, and NLP Processing modules. Student answers are preprocessed (tokenization, stop-word removal, lemmatization, normalization) before evaluation. SBERT computes similarity between student and reference answers, while T5 evaluates coherence and completeness.
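The preprocessing stages named above can be sketched as a small pipeline. This is an illustrative sketch, not the paper's implementation: the stop-word set and suffix-stripping lemmatizer below are toy stand-ins for what a production system would obtain from a library such as NLTK or spaCy:

```python
import re

STOP_WORDS = {"a", "an", "the", "is", "are", "of", "to", "and", "in"}  # toy subset

def lemmatize(token: str) -> str:
    """Naive suffix stripping as a stand-in for a real lemmatizer."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(answer: str) -> list[str]:
    # Normalization: lowercase and replace punctuation with spaces.
    normalized = re.sub(r"[^a-z0-9\s]", " ", answer.lower())
    # Tokenization: split on whitespace.
    tokens = normalized.split()
    # Stop-word removal, then lemmatization.
    return [lemmatize(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("The routers are forwarding packets to the network."))
```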
A hybrid scoring mechanism combines both models (70% SBERT, 30% T5) to generate final marks, ensuring balanced evaluation of conceptual accuracy and explanation quality. The system also includes secure authentication, role-based access control, and a PostgreSQL database for managing users, questions, and results.
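The 70/30 weighted combination can be sketched as follows. The weights come from the description above; the function name and the 0–10 mark scale are illustrative assumptions, not details from the paper:

```python
SBERT_WEIGHT = 0.70  # weight for semantic similarity (per the paper)
T5_WEIGHT = 0.30     # weight for contextual coherence/completeness (per the paper)

def hybrid_score(sbert_similarity: float, t5_score: float, max_marks: float = 10.0) -> float:
    """Combine the two normalized model scores (each in [0, 1]) into final marks.

    `max_marks` is an assumed grading scale, not specified in the paper.
    """
    if not (0.0 <= sbert_similarity <= 1.0 and 0.0 <= t5_score <= 1.0):
        raise ValueError("model scores must be normalized to [0, 1]")
    combined = SBERT_WEIGHT * sbert_similarity + T5_WEIGHT * t5_score
    return round(combined * max_marks, 2)

# Example: strong semantic match with a moderately coherent explanation.
print(hybrid_score(0.92, 0.70))  # 0.7*0.92 + 0.3*0.70 = 0.854 -> 8.54 marks
```

Weighting SBERT more heavily reflects the design choice that conceptual accuracy (does the answer mean the right thing?) matters more than surface explanation quality, while the T5 term still penalizes incoherent or incomplete responses.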
Overall, the system provides a scalable, unbiased, and efficient solution for automated answer evaluation, improving accuracy, reducing manual effort, and enabling real-time result generation in modern digital learning environments.
Conclusion
The research introduces an intelligent Descriptive Answer Evaluation System that combines wireless communication with advanced NLP models. By integrating SBERT for semantic similarity and T5 for contextual reasoning, the system provides accurate, unbiased, and efficient grading. Its modular architecture supports scalability and efficient workflow management, and experimental results confirm improved consistency, reduced manual effort, and strong agreement with human evaluators.
Overall, the proposed solution represents a significant advance in automated academic assessment and addresses the growing need for scalable digital education technologies.