Abstract
Practical programming exams are a crucial component of evaluating students' coding and problem-solving capabilities. However, objectively assessing a candidate's skills can be challenging because standard programming exam approaches are prone to cheating, and reviewing programming assignments by hand takes considerable time. These drawbacks emphasize the need for a new coding evaluation method that offers a more impartial and precise assessment of a candidate's coding abilities. Using an automated approach, this initiative seeks to revolutionize how practical tests are administered with cutting-edge technologies. The platform simplifies exam scheduling, assigns distinct problem statements, evaluates submissions automatically, grades them fairly, and prevents malpractice through secure exam controls. Additionally, the system supports a variety of programming languages, making it adaptable to all students.
Introduction
The proposed system automates and modernizes practical programming exams to enhance efficiency, fairness, security, and accuracy in evaluating students’ coding and problem-solving skills. It addresses shortcomings of traditional exams, such as manual grading, cheating, administrative burden, and limited software environments.
Objectives
Streamline exam processes: scheduling, problem assignment, and evaluation.
Prevent cheating via browser lockdowns, AI proctoring, and access restrictions.
Ensure fair and consistent grading with AI and machine learning models that score functionality, efficiency, and adherence to coding best practices.
Support multiple programming languages to accommodate diverse student skill sets.
Problem Statement
Traditional programming exams are time-consuming, subjective, and prone to malpractice.
Manual distribution of problem statements and manual grading lead to bias and errors.
Students often face software limitations, increasing dependency on external tools and the risk of cheating.
Teachers spend excessive time on administrative tasks rather than instruction.
Proposed Solutions
Automate exam management and evaluation using AI and ML.
Assign randomized, unique problem statements to minimize cheating.
Integrate multi-language compilers (via JDoodle, HackerRank) to allow coding in preferred languages.
Secure exams with watermarking, full-screen mode, and copy-paste restrictions.
Use ML algorithms (Decision Trees, SVM) for unbiased, automated code grading, as sketched after this list.
Develop on the MERN stack (MongoDB, Express.js, React.js, Node.js) for scalability and ease of use.
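To make the ML-based grading concrete, the following Python sketch trains a Scikit-Learn Decision Tree on hand-labeled submission features. The feature set (fraction of test cases passed, normalized runtime, style score), the tiny training set, and the letter grades are illustrative assumptions, not the project's actual model:

from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row describes one past submission as
# (fraction of test cases passed, normalized runtime, style score),
# with an instructor-assigned letter grade as the label.
X_train = [
    [1.00, 0.20, 0.9],  # all tests pass, fast, clean style
    [0.85, 0.50, 0.7],  # most tests pass, average runtime
    [0.40, 0.90, 0.4],  # many failures, slow
    [0.10, 1.00, 0.2],  # barely functional
]
y_train = ["A", "B", "C", "F"]

model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)

# Grade a new submission from the same extracted features.
print(model.predict([[0.90, 0.35, 0.8]]))  # predicted letter grade

An SVM (sklearn.svm.SVC) can be swapped in through the same fit/predict interface; in practice the model would be trained on a much larger corpus of historical grades.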
System Architecture & Methodology
Core modules: Exam creation, scheduling, coding interface, automated evaluation, dashboards for students and faculty, result generation.
Architecture: Modular, scalable, and flexible to support future enhancements.
Implementation: MERN stack for the backend and frontend; OpenCV, TensorFlow, and Scikit-Learn for AI-powered assessment and proctoring, as sketched after this list.
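As a minimal illustration of the OpenCV role, the sketch below flags webcam frames that show zero or multiple faces using OpenCV's bundled Haar-cascade detector. It is an assumed baseline for discussion, not the project's actual proctoring pipeline, which could layer TensorFlow models on top:

import cv2

# Haar-cascade face detector shipped with opencv-python.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def check_frame(frame):
    """Flag a webcam frame that shows zero or more than one face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "warning: no face detected"
    if len(faces) > 1:
        return "warning: multiple faces detected"
    return "ok"

capture = cv2.VideoCapture(0)  # default webcam
ok, frame = capture.read()
if ok:
    print(check_frame(frame))
capture.release()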
Key Modules
User Registration/Login: Secure authentication and role-based access.
Faculty Dashboard: Upload problem statements, schedule exams, monitor student performance.
Student Dashboard: Access exams, view problem statements, and submit code.
Exam Management: Unique problem allocation, timed submissions.
Code Submission & Compilation: Multi-language coding interface integrated with external compiler services (see the sketch after this list).
AI-Powered Assessment: Automated, unbiased evaluation based on accuracy, efficiency, and coding standards.
Report Generation: Produces performance reports for faculty and students.
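Below is a minimal sketch of the code submission and compilation step using JDoodle's public execute endpoint, one of the compiler services mentioned earlier. The field names follow JDoodle's published REST API at the time of writing; the credentials and the run_submission wrapper are placeholders, not the project's actual code:

import requests

JDOODLE_URL = "https://api.jdoodle.com/v1/execute"

def run_submission(source_code, language, stdin_data, client_id, client_secret):
    """Compile and run one submission against a single test-case input."""
    payload = {
        "clientId": client_id,        # issued by JDoodle
        "clientSecret": client_secret,
        "script": source_code,
        "language": language,         # e.g. "python3", "java", "cpp17"
        "versionIndex": "0",          # language version slot
        "stdin": stdin_data,          # test-case input fed to the program
    }
    response = requests.post(JDOODLE_URL, json=payload, timeout=30)
    response.raise_for_status()
    result = response.json()
    # The evaluator can compare "output" with the expected answer and
    # check "cpuTime"/"memory" against the exam's resource limits.
    return result["output"], result.get("cpuTime"), result.get("memory")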
Expected Outcomes
A faster, automated exam process.
Improved security and reduced cheating risk.
Fair, consistent, and objective code evaluation using AI/ML.
Multi-language support for diverse programming skills.
Challenges
Ensuring the accuracy of AI proctoring and preventing false positives.
Scalability for handling large numbers of students.
Ensuring security and preventing unauthorized access.
Avoiding bias in AI evaluation of innovative solutions.
Internet dependency for online exams.
Key Takeaway
The system aims to modernize practical programming exams by combining automation, AI-based grading, multi-language support, and enhanced security, providing an efficient, fair, and reliable assessment platform for both students and educators.
Conclusion
The automated programming examination system makes coding tests fair, efficient, and secure. By applying AI and machine learning, it reduces cheating, eliminates manual grading, and provides objective assessments. Support for a variety of programming languages keeps the system adaptable for students. By automating tasks such as scheduling, problem assignment, and evaluation, it lessens the burden on professors and ensures a seamless examination process.