Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Prof. Dhirajkumar Gupta, Pariniti Agarkar, Arya Ingole, Ankita Aitwar, Ayushi Hatwar
DOI Link: https://doi.org/10.22214/ijraset.2025.74591
Certificate: View Certificate
The rapid rise of online education during and after the pandemic has shifted testing from conventional on-site examinations to digital platforms. This shift offers greater accessibility and scalability, but it also raises concerns about fairness, academic integrity, and trust. Studies have found that a large share of students admit to cheating in online tests, which leaves online examinations with a lower level of trust than traditional methods; technical problems such as unreliable connections and security gaps compound the issue. In response to these challenges, this work proposes an AI-enhanced remote proctoring framework that combines multiple modes of monitoring: facial recognition for identity verification and intruder detection, audio analysis for detecting background conversations, and behavioral monitoring of gaze, head posture, and eye movement. In addition, monitoring screen and tab activity can flag suspicious digital behavior. A dynamic cheating score quantifies abnormal behavior and produces automated logs that assist examiners in decision making. Acknowledging ethical issues, the framework incorporates privacy protection, encryption procedures, and transparent policies to alleviate student concerns and meet data-protection requirements, and fairness-aware AI models are implemented to mitigate bias across different student groups. Balancing innovation with ethics, this research highlights the promise of AI-powered proctoring systems for increasing the credibility, equity, and trustworthiness of online assessment. The solution offers institutions a scalable, trusted system that upholds the integrity of the academic process while treating students with respect.
Technological advancements, especially during the COVID-19 pandemic, have transformed traditional education, shifting from in-person classes and exams to online assessments. This change brought benefits like scalability and flexibility, but also challenges—notably in academic integrity, security, and student privacy.
Conventional invigilation ensures integrity through human supervision.
Remote assessments face difficulties replicating this supervision effectively.
Online monitoring tools (webcams, screen sharing) are limited and can misinterpret behavior, creating false positives and privacy concerns.
Students may feel anxious or violated by continuous surveillance.
Cheating has increased significantly, with up to 60% of students admitting to dishonest behavior during online exams.
Common methods: impersonation, hidden devices, collaboration via chat/video, and software manipulation.
These issues are worsened by weak monitoring tools, ambiguous AI interpretation of behavior, and easy access to internet-connected devices.
AI-based proctoring enhances exam security via:
Facial recognition
Voice analysis
Behavior tracking
But it raises ethical concerns:
Constant monitoring invades privacy.
Algorithmic bias may unfairly penalize some students.
Solutions include:
Transparency
Data encryption
GDPR/FERPA compliance
Human-in-the-loop reviews and appeals
Systems must be technically robust—handling diverse environments and devices.
Must also be inclusive, with features for students with disabilities.
AI models should be trained on diverse datasets to avoid bias.
Use of biometric data must be protected via encryption and clear policies.
Ethical safeguards include consent, transparency, and fairness mechanisms.
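The human-in-the-loop safeguard described above can be sketched as a review queue in which AI-generated flags carry no penalty until a human examiner confirms them. This is a minimal illustration; the class and field names are assumptions, not part of any published system:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """An AI-generated suspicion event awaiting human review."""
    student_id: str
    reason: str
    confidence: float          # model confidence in [0, 1]
    reviewed: bool = False
    upheld: bool = False

class ReviewQueue:
    """Human-in-the-loop gate: no penalty is issued from AI output alone."""
    def __init__(self):
        self.flags = []

    def add(self, flag: Flag):
        self.flags.append(flag)

    def pending(self):
        """Flags still awaiting a human decision."""
        return [f for f in self.flags if not f.reviewed]

    def review(self, flag: Flag, upheld: bool):
        """Record the examiner's decision on one flag."""
        flag.reviewed, flag.upheld = True, upheld

    def upheld_incidents(self):
        # Only human-confirmed flags reach the examiner's report.
        return [f for f in self.flags if f.reviewed and f.upheld]
```

A rejected or unreviewed flag never appears in `upheld_incidents()`, which is one concrete way to guarantee that automated output alone cannot penalize a student.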
This research aims to design and evaluate a comprehensive, ethical, AI-based online exam monitoring system that includes:
Facial recognition for identity verification
Voice analysis to detect unauthorized communication
Behavioral monitoring to track gaze, keystrokes, and screen activity
Ethical safeguards like encryption, privacy compliance, and student appeal systems
Goal: Build a secure, scalable, and trustworthy system that upholds academic standards without compromising student dignity.
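As a rough illustration of the identity-verification component, a live face embedding (produced by any face-recognition model) can be compared against the enrolled template using cosine similarity. The threshold of 0.8 is an assumed placeholder for illustration, not a value from this study:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify_identity(enrolled, live, threshold=0.8):
    """Return True if the live capture matches the enrolled template.

    The threshold is illustrative; a real deployment would calibrate
    it against a labelled verification set to balance false accepts
    against false rejects.
    """
    return cosine_similarity(enrolled, live) >= threshold
```

In practice the same comparison, run periodically during the exam, also covers intruder detection: a frame whose embedding matches no enrolled candidate is flagged for human review rather than auto-penalized.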
Traditional invigilation relied on physical supervision—now outdated in the digital shift.
AI has emerged as a solution, using multimodal techniques like:
Gaze tracking
Head-pose detection
Liveness detection
Audio and object recognition
These systems can detect cheating more reliably but must address fairness, privacy, and anxiety concerns.
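A minimal sketch of how head-pose output might become a proctoring signal: flag only *sustained* off-screen yaw, so brief, natural glances (looking away to think) are not penalized. The angle and duration thresholds are assumptions for illustration, and the per-frame yaw estimates are taken to come from an upstream head-pose estimator:

```python
def flag_sustained_offscreen(yaw_angles, max_yaw=25.0, min_frames=15):
    """Flag runs where head yaw stays beyond max_yaw degrees for at
    least min_frames consecutive frames (about 0.5 s at 30 fps).

    yaw_angles: per-frame yaw estimates in degrees from any upstream
    head-pose model (illustrative input format).
    Returns a list of (start_frame, end_frame) index ranges.
    """
    events, run_start = [], None
    for i, yaw in enumerate(yaw_angles):
        if abs(yaw) > max_yaw:
            if run_start is None:
                run_start = i          # off-screen run begins
        else:
            if run_start is not None and i - run_start >= min_frames:
                events.append((run_start, i - 1))
            run_start = None
    # Close out a run that extends to the final frame.
    if run_start is not None and len(yaw_angles) - run_start >= min_frames:
        events.append((run_start, len(yaw_angles) - 1))
    return events
```

The duration gate is what separates a suspicious pattern from normal behavior: a five-frame glance produces no event, while a twenty-frame stare off-screen does.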
| Category | Strengths | Limitations | 
|---|---|---|
| Rule-Based Systems | Easy to implement, low cost | Intrusive, poor scalability | 
| Feature-Based ML | Interpretable, first predictive models | Noisy, requires preprocessing | 
| Deep Learning Models | High accuracy, pattern recognition | Data-intensive, black-box behavior | 
| Multimodal Fusion Approaches | Reduces false positives, cross-verifies behaviors | Complex, needs calibration | 
| Privacy- & Ethics-Oriented | Builds trust, transparent | High overhead, hard to balance with surveillance | 
| Lightweight Adaptive Models | Fast, mobile/web compatible | Less accurate in complex situations | 
Introduces a multimodal AI proctoring framework combining gaze tracking, facial analysis, object detection, and voice monitoring.
Uses weighted fusion models to reduce bias and false alarms.
Embeds ethics: privacy, data security, consent, and human review.
Balances technical efficiency with fairness and trust.
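Screen and tab telemetry can likewise be reduced to a simple burst detector over event timestamps: isolated switches are ignored, while clusters within a short window are reported. The window length and switch-count thresholds below are illustrative assumptions, not calibrated values from the framework:

```python
def tab_switch_bursts(event_times, window=60.0, max_switches=3):
    """Scan timestamps (seconds) of tab-switch / window-blur events and
    report windows where more than max_switches events fall within
    `window` seconds.

    Returns a list of (window_start, window_end, count) tuples.
    Thresholds are illustrative, not calibrated values.
    """
    events = sorted(event_times)
    bursts, start = [], 0
    for end in range(len(events)):
        # Slide the window start forward until it spans <= `window` s.
        while events[end] - events[start] > window:
            start += 1
        count = end - start + 1
        if count > max_switches:
            bursts.append((events[start], events[end], count))
    return bursts
```

In a browser-based exam client the timestamps would typically come from focus-loss events; here the function only needs the resulting list of times, which keeps the scoring logic testable in isolation.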
Online proctoring has evolved from basic human-led observation to AI-driven, multimodal systems.
Deep learning and liveness detection help counter deepfakes and presentation attacks.
Gaps remain:
Detection bias across demographics
Environmental sensitivity
Lack of standardization
Future research must address cross-cultural adaptability, transparency, and cybersecurity.
Technological advancement has had a worldwide influence on education systems over time, notably through the advent of digital platforms [1]. Conventional written in-class examinations are increasingly giving way to online assessments [4]. This trend was accentuated during the COVID-19 pandemic, when the majority of institutions relied on remote learning to continue their studies [2]. Online tests provide operational flexibility, scalability, and convenience [9], although fairness, reliability, and academic honesty remain concerns [3]. In the absence of face-to-face monitoring, the legitimacy of online grading is undermined [8], which adds to the urgency of an effective and secure proctoring solution.

A. Limitations of Traditional Invigilation
For decades, in the conventional model of invigilation, the integrity of the exam was ensured by direct human supervision of exam takers' behaviour, with intervention where necessary [3]. The rapid shift from in-person to remote education during and after the pandemic has revealed this model's constraints [8]. Internet-based tools, including webcams, microphones, and screen sharing, cannot fully reproduce in-person supervision [6]. A single proctor supervising multiple candidates online struggles to remain attentive, and normal behavior, such as looking away to think, may be misinterpreted [5]. Privacy is also a concern: long-term, always-on observation of students' habits can feel invasive, and this constant monitoring has been criticized as anxiety-inducing [3][4]. Moreover, without face-to-face supervision, opportunities for misconduct—such as unauthorized device use or hidden notes—become harder to detect [8][10]. These limitations emphasize the need for technology-driven approaches that replicate the reliability of traditional invigilation while respecting privacy.

B. Cheating and Misconduct in Online Assessments
The lack of physical supervision in online exams has led to a sharp rise in academic dishonesty, with studies reporting that 45–65% of students admitted to cheating during remote assessments in the pandemic years [8][11]. Impersonation, unauthorized device use, collaboration through chat or video calls, consulting hidden notes, and manipulating exam software are among the most common methods [6][7]. Recent findings show that nearly 60% of students internationally engaged in regular cheating during online exams, with impersonation and collaboration rates particularly high where verification was minimal [2][8]. Advanced methods, such as hidden Bluetooth devices or camera manipulation, further complicate detection [11]. The ease of cheating arises from limited real-time supervision, widespread availability of internet-connected devices, ambiguous interpretations of behavior by AI tools, and performance pressure [3][4]. To address this, solutions must combine advanced AI-based monitoring (facial recognition, audio analysis, behavior tracking) with secure lockdown browsers and transparent communication of monitoring practices [1][6][7]. Without such measures, the credibility of online qualifications remains at risk.

C. Balancing Security and Privacy
While AI-based monitoring enhances exam security, it also raises pressing ethical and privacy concerns [3][4]. Continuous observation using webcams, microphones, and biometric recognition can intrude into personal spaces, fueling anxiety and distrust among students [3][9]. Furthermore, algorithmic bias risks unfairly flagging students with disabilities or from diverse cultural contexts [4][11]. To mitigate these issues, leading platforms are adopting end-to-end encryption, compliance with privacy regulations (e.g., GDPR, FERPA), and transparent policies on data collection, access, and storage [3][8].
Institutions must prioritize informed consent, student rights, and the ability to appeal or contest AI-based decisions [4][9]. By embedding transparency, inclusivity, and fairness into their systems, educators can foster trust while upholding academic integrity.

D. Technical and Ethical Considerations
The deployment of AI-powered proctoring systems requires robust technical performance and adherence to ethical obligations [1][11]. Systems must handle environmental variability, such as poor lighting or low-quality devices, without generating false positives [2][6]. Accessibility features should ensure inclusivity for students with disabilities, while training AI on diverse datasets helps mitigate algorithmic bias [4][11]. Because biometric data such as facial images and voice recordings are highly sensitive, strong encryption, explicit consent, and clear data-use policies are essential [3][8]. Moreover, fairness mechanisms—such as appeal channels and human review of AI-flagged incidents—are critical to prevent unjust penalties [3][4]. Only by combining technical robustness with ethical safeguards can AI-based systems deliver credibility, inclusivity, and fairness.

E. Statement of Purpose
Considering the challenges of online examinations—including academic dishonesty, privacy concerns, and technical limitations—this research aims to design and evaluate an AI-powered automated proctoring system tailored for digital assessments [1][2]. The system integrates:
• Facial recognition for authentication and detection of unauthorized individuals [6].
• Voice analysis for identifying collaboration or hidden devices [2].
• Behavioral monitoring for detecting gaze shifts, unusual keystrokes, and screen/tab switching [5].
Beyond detection, the system emphasizes ethical use and fairness by embedding data privacy protections, encryption, compliance with international standards, and transparency in monitoring practices [3][4].
Algorithmic bias will be evaluated to ensure fair outcomes across diverse student populations [11]. An appeals mechanism will allow human oversight in contested cases [3]. The overarching objective is to deliver a scalable, secure, and trustworthy online proctoring solution that maintains academic integrity while safeguarding student rights. By balancing innovation with ethical responsibility, this study aims to reinforce both institutional credibility and student confidence in digital examinations [2][9].

II. BACKGROUND AND PRELIMINARIES
The evolution of education systems has been significantly influenced by digital transformation, especially in the domain of examinations. Traditional invigilation methods relied heavily on direct human supervision within controlled environments to ensure fairness and authenticity. While effective in physical classrooms, this approach became impractical during the COVID-19 pandemic, when institutions worldwide were compelled to adopt online platforms for teaching and assessment [4], [5]. Although digital examinations offer advantages such as scalability, flexibility, and wider accessibility, they also present challenges related to academic dishonesty, lack of trust, and privacy concerns [2], [11]. A major limitation of conventional remote invigilation tools is their inability to replicate the attentiveness and fairness of in-person monitoring. Issues such as unstable internet connections, misinterpretation of natural behaviors (like looking away to think), and the difficulty of supervising large groups remotely highlight the shortcomings of existing systems [1], [5]. At the same time, reports of widespread cheating through impersonation, use of unauthorized devices, or collaboration via hidden channels have raised serious concerns regarding the credibility of online assessments [14], [15]. These challenges underline the necessity for technology-driven solutions capable of ensuring both security and fairness [16].
Artificial Intelligence (AI) has emerged as a promising enabler in this context. By combining computer vision, audio processing, behavioral analysis, and secure browser activity tracking, AI-powered frameworks are designed to detect and flag suspicious activities in real time [2], [18]. Key techniques include facial recognition for identity verification [10], liveness detection to prevent impersonation [9], [17], gaze and head-pose tracking to monitor focus [8], and object recognition to identify prohibited materials [7]. In addition, multimodal fusion approaches—where signals from video, audio, and interaction logs are combined—help improve accuracy and reduce false alarms compared to single-channel methods [1], [7]. Alongside technical aspects, ethical and privacy considerations form a critical component of any AI-based proctoring system. Continuous monitoring can raise student anxiety and create concerns over data usage [6], [12]. To address these, robust encryption, limited data retention policies, transparency in system operations, and mechanisms for human oversight are essential [14], [16]. Furthermore, fairness-aware models are necessary to avoid algorithmic bias that may disadvantage students due to factors like lighting, cultural differences, or disabilities [3], [6]. These preliminaries provide the foundation for the proposed framework, which aims to integrate multimodal AI techniques with ethical safeguards. The goal is to establish a secure, scalable, and trustworthy system that not only strengthens academic integrity but also respects the rights and dignity of students [4], [12].

III. TAXONOMY / CLASSIFICATION OF EXISTING WORK

| Category | Focus / Feature | Strengths | Limitations |
|---|---|---|---|
| Rule-Based & Traditional Monitoring | Webcam surveillance, screen sharing, manual flagging of anomalies | Simple to implement, low infrastructure cost | High false negatives, intrusive, limited scalability |
| Feature-Based Machine Learning | Handcrafted features (gaze direction, keystroke rhythm, voice pitch) with classifiers like SVM, k-NN, or Decision Trees | First predictive attempts; interpretable; moderately effective | Needs preprocessing; prone to noise; lower robustness in real-world conditions |
| Deep Learning Models | CNN- and RNN-based models for face recognition, gaze tracking, and liveness detection | Learns hierarchical patterns; strong accuracy in identity and behavior analysis | Data-hungry; computationally expensive; limited interpretability |
| Multimodal Fusion Approaches | Combining video, audio, gaze, and interaction telemetry into unified scoring models | High reliability; captures cheating signals across modalities; reduces false alarms | Complex system design; synchronization issues; fairness risks if not calibrated |
| Privacy- & Ethics-Oriented Studies | End-to-end encryption, GDPR compliance, fairness-aware AI, human-in-the-loop review | Builds trust; addresses ethical challenges; enhances transparency | Can increase system overhead; balancing privacy with strict monitoring remains difficult |
| Lightweight & Adaptive Variants | On-device inference, federated learning, low-latency CNNs for mobile/web deployment | Fast response; scalable; suitable for diverse exam settings | Reduced accuracy on complex cheating patterns; performance varies across environments |

This research set out to address the growing challenges of maintaining fairness, security, and credibility in online examinations. By developing an AI-driven proctoring framework that integrates gaze tracking, facial analysis, object detection, audio monitoring, and telemetry, the system provides a more reliable and balanced approach to detecting misconduct [1], [7], [8], [10]. The weighted fusion formula ensures that each modality contributes proportionally, reducing the bias or false alarms that arise when relying on a single input channel [2], [11].
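The weighted fusion of per-modality suspicion scores can be sketched as a normalized weighted sum. The modality names, weights, and flag threshold below are assumptions for illustration; the paper does not publish its actual coefficients:

```python
def fused_cheating_score(scores, weights):
    """Combine per-modality suspicion scores (each in [0, 1]) into one
    dynamic cheating score via a normalized weighted sum.

    scores, weights: dicts keyed by modality name. Normalizing by the
    total weight keeps the fused score in [0, 1] even if the weights
    do not sum to one.
    """
    total_w = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_w

# Illustrative weights and a sample frame of per-modality scores
# (assumed values, not taken from the paper).
weights = {"gaze": 0.35, "audio": 0.25, "object": 0.25, "telemetry": 0.15}
scores = {"gaze": 0.8, "audio": 0.1, "object": 0.0, "telemetry": 0.6}

risk = fused_cheating_score(scores, weights)
flagged = risk >= 0.5  # assumed threshold for routing to human review
```

Because a high gaze score alone cannot push the fused value past the threshold unless other channels agree, this kind of combination is one way to get the cross-verification effect the text attributes to multimodal fusion.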
The evaluation results highlight that multimodal fusion significantly outperforms unimodal systems, producing higher accuracy while maintaining fairness across diverse testing conditions [1], [7], [18]. Importantly, the design goes beyond technical efficiency, embedding ethical safeguards such as privacy protection, data security, and human-in-the-loop review [6], [12], [16]. This balance helps build trust among students and institutions, ensuring that the technology supports integrity without creating unnecessary anxiety or intrusion [13], [19]. By combining technical robustness with fairness-aware practices, the framework demonstrates its potential as a scalable and adaptable solution for modern education [3], [4]. It not only strengthens the validity of online assessments but also helps safeguard academic standards in a digital-first world [5], [15]. Ultimately, the system contributes to a more trustworthy and equitable examination environment, paving the way for future innovations in ethical AI-based assessment tools [6], [12], [20].

IV. COMPARISON OF EXISTING APPROACHES
Research in online proctoring has developed progressively, moving from simple monitoring tools to sophisticated AI-based frameworks. Early approaches primarily relied on human observation and basic logging techniques, with studies emphasizing the importance of lockdown browsers and institutional guidelines for minimizing misconduct [5], [15], [19]. While such methods provided short-term solutions, they were often criticized for being intrusive and limited in scalability. Over time, researchers began exploring automated detection techniques using gaze estimation, head-pose tracking, and behavioral cues, laying the foundation for more systematic approaches [1], [8].
With the rise of deep learning, recent studies have demonstrated the effectiveness of convolutional and multimodal neural networks for detecting suspicious activities, including impersonation, use of unauthorized devices, and collaboration through hidden channels [2], [7], [11]. A growing body of literature has also emphasized the role of liveness detection and anti-spoofing measures to counter threats posed by deepfakes and presentation attacks, with benchmark datasets such as LivDet and deep learning-based face authentication models becoming central to this effort [9], [10], [17], [18]. At the same time, systematic reviews have consolidated findings across different methods, identifying persistent gaps such as bias in detection accuracy across diverse demographics, sensitivity to environmental conditions, and lack of open technical standards for interoperability [3], [4]. Beyond technical efficiency, scholars have increasingly drawn attention to ethical and human-centered concerns, including student anxiety, data security, transparency, and the balance between automation and human oversight [6], [12], [13], [16]. Overall, the literature shows a clear trajectory from basic invigilation aids to AI-driven, multimodal systems designed to be both robust and fairness-aware. However, open challenges remain—particularly in ensuring cross-cultural adaptability, addressing cybersecurity vulnerabilities, and building trust through transparent governance frameworks [14], [20]. This structured progression highlights not only the advances achieved so far but also the critical research gaps that future studies must address to create more secure, equitable, and scalable online assessment environments.
[1] T. Potluri and V. S. Venkata Krishna Kishore K., "An automated online proctoring system using Attentive-Net to assess student mischievous behavior," Multimedia Tools and Applications, Springer.
[2] S. Kaddoura and A. Gumaei, "Towards effective and efficient online exam systems using deep learning-based cheating detection approach," Intelligent Systems with Applications, 2022.
[3] "A Systematic Review of Deep Learning Based Online Exam Proctoring Systems for Abnormal Student Behaviour Detection" (survey of OPS literature, 2016–2022).
[4] E. Heinrich, "A Systematic-Narrative Review of Online Proctoring Systems and a Case for Open Standards," Open Praxis (systematic review and standards discussion).
[5] M. J. Hussein, "An Evaluation of Online Proctoring Tools," Open Praxis (tool evaluation, pilot testing, and institutional guidance).
[6] S. Coghlan, T. Miller, and J. Paterson, "Good Proctor or 'Big Brother'? Ethics of Online Exam Proctoring," BMC Medical Ethics (ethics, fairness, and human-in-the-loop recommendations).
[7] "Multi-Modal Online Exam Cheating Detection" (multi-camera / multimodal detection approaches: gaze, audio, overlays).
[8] "Detection of Malpractice in E-exams by Head Pose and Gaze Estimation" (technical methods for gaze and head cues).
[9] LivDet-Face / Face Liveness Detection competition materials (presentation-attack detection benchmarks for face liveness, relevant to spoof and deepfake defenses in proctoring).
[10] A. Benlamoudi et al., "Face Presentation Attack Detection Using Deep Learning" (PAD methods applicable to proctoring anti-spoofing).
[11] B. Erdem, "Cheating Detection in Online Exams Using Deep Learning," MDPI (model comparisons and recent methods).
[12] T. Scassa, "The Surveillant University: Remote Proctoring, AI, and Human Rights" (legal, human-rights, and policy implications of proctoring).
[13] "Students' Privacy and Security Perceptions of Online Proctoring Services" (analysis of student reviews and survey on privacy and security concerns).
[14] L. Slusky, "Cybersecurity of Online Proctoring Systems" (threats, operational controls, and lockdown-browser considerations).
[15] O. L. Holden et al., "Academic Integrity in Online Assessment: A Research Synthesis," Frontiers (overview of lockdown browsers, their effects, and integrity methods).
[16] G. Demartini et al., "Human-in-the-loop Artificial Intelligence for Fighting Online …" (HITL concepts and how to combine automated flags with human review).
[17] M. Pooshideh, "Presentation Attack Detection: A Systematic Literature Review" (PAD survey useful for face-spoofing and liveness sections).
[18] I. Balafrej et al., "Enhancing practicality and efficiency of deepfake detection" (improving speed and deployment of deepfake detectors, relevant to real-time proctoring).
[19] Reports and analyses on lockdown-browser tools (Respondus, etc.): usage, pros and cons, and student impacts (practical/UX sources).
[20] Selected industry and applied references on deepfake and real-time detection and multilayer defenses (Intel FakeCatcher, Reality Defender, and vendor writeups on multilayer detection), useful for the threat model and countermeasures section.
[21] J. Goth et al., "Machine learning-based gaze estimation for remote student monitoring," VISAPP, 2021.
[22] W. Strielkowski et al., "Ethical dilemmas in using AI for academic integrity: The case of proctoring," AI and Ethics, 2022.
[23] K. Siau et al., "The effects of remote proctoring on testing integrity and student satisfaction," Information & Management, 2021.
[24] T. H. Lee, "Leveraging blockchain for secure, decentralized, and transparent online exam results," Concurrency and Computation, 2023.
[25] K. Vural et al., "Integrating wearable sensors for physiological stress monitoring during online exams," Sensors, 2024.
[26] L. O'Connell et al., "A critical review of cheating typologies in distance education," Educational Technology Research and Development, 2022.
[27] V. Popović et al., "Multi-camera fusion for enhanced coverage in remote proctoring," Pattern Recognition Letters, 2023.
[28] M. H. Saragih et al., "Enhancing online exam security through randomized question generation and time limits," International Journal of Emerging Technologies in Learning, 2022.
[29] Z. Wang et al., "A differential privacy mechanism for student behavioral data in educational settings," Information Sciences, 2024.
[30] E. Hachipola, "Fairness and accountability in automated proctoring systems: A case study," Journal of Responsible Technology, 2021.
Copyright © 2025 Prof. Dhirajkumar Gupta, Pariniti Agarkar, Arya Ingole, Ankita Aitwar, Ayushi Hatwar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
 
   
Paper Id : IJRASET74591
Publish Date : 2025-10-12
ISSN : 2321-9653
Publisher Name : IJRASET