Hit-and-run incidents involving heavy-duty trucks are a persistent and dangerous phenomenon on highways, especially in developing countries. Despite the rise in road safety technologies, the ability to track, identify, and respond to trucks that flee accident scenes remains insufficient. This paper proposes a real-time, AI-powered detection system that uses computer vision and vehicular telemetry to identify trucks involved in collisions and monitor their post-accident behavior.
The framework integrates visual input (license plate recognition, object tracking) with vehicular data (GPS trajectory, acceleration, CAN bus signals) to classify "escape behavior" patterns such as rapid acceleration, route deviation, and abnormal braking following collision-like events. A hybrid model combining YOLOv8, custom LPR modules, and route deviation analysis detects hit-and-run suspects with over 92% accuracy in simulated environments. Telemetry anomalies such as high jerk values (the time derivative of acceleration) and abrupt heading changes serve as core behavioral indicators. All processing occurs on edge hardware (e.g., an NVIDIA Jetson Nano), ensuring real-time performance and privacy preservation.
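The telemetry indicators above can be sketched directly from sampled sensor data: jerk is a finite difference of consecutive accelerometer readings, and heading change is a wrapped difference of consecutive GPS headings. The following Python sketch is illustrative only; the thresholds (8 m/s³ jerk, 25° heading change) are assumed values, not the tuned parameters of the deployed system.

```python
import numpy as np

def jerk_magnitude(accel: np.ndarray, dt: float) -> np.ndarray:
    """Per-sample jerk magnitude (m/s^3) from an (N, 3) accelerometer
    series sampled every `dt` seconds; jerk is d(acceleration)/dt."""
    d_accel = np.diff(accel, axis=0) / dt      # (N-1, 3) finite difference
    return np.linalg.norm(d_accel, axis=1)     # magnitude per interval

def heading_change(headings_deg: np.ndarray) -> np.ndarray:
    """Absolute per-sample heading change, wrapped into [0, 180] degrees."""
    diff = np.diff(headings_deg)
    return np.abs((diff + 180.0) % 360.0 - 180.0)

def flag_anomalies(accel, headings_deg, dt, jerk_thresh=8.0, turn_thresh=25.0):
    """Boolean mask of intervals whose jerk or heading change exceeds
    the (illustrative) thresholds used as behavioral indicators."""
    j = jerk_magnitude(accel, dt)
    h = heading_change(headings_deg)
    n = min(len(j), len(h))
    return (j[:n] > jerk_thresh) | (h[:n] > turn_thresh)
```

In practice such flags would be computed over a sliding window on the edge device and fused with the vision pipeline's collision signal.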
This system creates a new post-accident response paradigm, in which trucks are not only monitored for fatigue (as in previous research) but also held accountable for unsafe or evasive actions. Together, these intelligent modules form a comprehensive smart-truck safety suite for both accident prevention and accountability. The implementation is available in the accompanying GitHub repository: https://github.com/Paras-Vermaa/SmartTruck-HitAndRun-Detection-
Introduction
Background
Road accidents cause over 1.3 million deaths annually, with heavy trucks disproportionately responsible. In India, over 60% of truck-involved hit-and-run cases remain unresolved due to a lack of evidence and real-time tracking. Existing systems mainly focus on accident prevention, leaving a gap in post-collision tracking and accountability. This research proposes an AI-powered framework to detect, analyze, and track hit-and-run incidents involving commercial trucks, even in the absence of centralized surveillance.
Problem Statement
There is no scalable system that:
• Detects real-time truck collisions using in-cabin or external vehicle data.
• Analyzes escape behavior (e.g., sudden acceleration or route deviation).
• Matches fleeing trucks to license plates via dash or roadside cameras.
• Operates without cloud dependency, preserving privacy and enabling offline use.
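Route deviation, one of the escape behaviors listed above, can be quantified as the maximum cross-track distance between the observed GPS track and the planned route polyline. The sketch below is a minimal illustration in a local metric coordinate frame; projecting raw latitude/longitude into metres is assumed to happen upstream, and the function names are not taken from the implementation.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab; all points are (x, y) in
    metres in a local planar frame."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Clamp the projection of p onto the segment to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def route_deviation(track, route):
    """Maximum cross-track distance of a GPS track from a planned route,
    where each track point is matched to its nearest route segment."""
    return max(
        min(point_segment_dist(p, route[i], route[i + 1])
            for i in range(len(route) - 1))
        for p in track
    )
```

A threshold on this deviation (combined with the collision signal) is one simple way to flag a truck that leaves its expected corridor after an impact.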
Sample Python code is provided for both vision-based license plate recognition and EscapeScore calculation.
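A minimal sketch of the plate-reading stage is shown below. In the full system the detector would wrap a YOLOv8 plate model and the OCR step would call Tesseract; here both are injected as callables so that the cropping and normalization logic stands on its own. The regular expression for Indian plate formats and all function names are illustrative assumptions, not the paper's exact implementation.

```python
import re
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # x1, y1, x2, y2 in pixel coordinates

# Illustrative pattern for common Indian plates, e.g. "MH12AB1234".
PLATE_RE = re.compile(r"^[A-Z]{2}\d{1,2}[A-Z]{1,2}\d{4}$")

def normalize_plate(raw: str) -> str:
    """Strip OCR noise: keep only alphanumerics and uppercase them."""
    return re.sub(r"[^A-Za-z0-9]", "", raw).upper()

def read_plates(frame, detect: Callable[[object], List[Box]],
                ocr: Callable[[object], str]) -> List[str]:
    """Detect plate regions, crop them, run OCR, and keep only strings
    that match the expected plate format."""
    plates = []
    for (x1, y1, x2, y2) in detect(frame):
        crop = frame[y1:y2, x1:x2]            # numpy-style image slicing
        text = normalize_plate(ocr(crop))
        if PLATE_RE.match(text):
            plates.append(text)
    return plates
```

Filtering OCR output against a format pattern is a cheap way to reject spurious detections before a plate string is matched against registration records.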
Conclusion
Road safety continues to be one of the most pressing public health and infrastructure challenges of our era, and the threat posed by hit-and-run incidents involving commercial trucks is particularly acute. These events not only cause tragic loss of life but also expose massive gaps in accountability, enforcement, and system-level intelligence in the current transportation ecosystem.
In this research, we designed, implemented, and evaluated a Data-Driven Hit-and-Run Detection Framework—a real-time, AI-powered system capable of identifying and responding to post-accident escape behaviors in long-haul trucks. The proposed solution stands apart from traditional accident response tools by introducing sensor fusion, behavioral AI modeling, and edge deployment architecture designed specifically for the trucking industry.
The framework successfully merges three core pillars of intelligence:
• Computer vision for vehicle identification, collision confirmation, and license plate recognition;
• Telemetry analysis through GPS, accelerometer, and CAN bus data to capture behavioral deviations;
• Machine learning algorithms (CNN, GRU, rule-based models) to infer and classify escape behavior in real time.
Experimental results confirm that this system operates with high precision and low latency, detecting hit-and-run behaviors with an F1 score of 92.4%, and producing meaningful insights such as jerk spikes, sudden accelerations, and route deviations. The EscapeScore, a novel metric developed in this study, encapsulates complex multi-modal inputs into a single actionable value, enabling swift post-collision decisions.
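The EscapeScore described above can be sketched as a weighted fusion of capped, normalized post-collision features. The weights, caps, and the stop-override rule in this sketch are illustrative assumptions, not the calibrated values behind the reported F1 score.

```python
from dataclasses import dataclass

@dataclass
class PostCollisionFeatures:
    peak_jerk: float        # m/s^3, max jerk after the collision event
    peak_accel: float       # m/s^2, max longitudinal acceleration
    route_deviation: float  # metres, max deviation from the planned route
    stopped: bool           # did the truck stop within a grace window?

def escape_score(f: PostCollisionFeatures,
                 w_jerk=0.3, w_accel=0.3, w_dev=0.4,
                 jerk_cap=15.0, accel_cap=6.0, dev_cap=500.0) -> float:
    """Fuse post-collision telemetry into a single score in [0, 1].

    Each feature is clipped to an illustrative cap and normalized;
    a truck that stops as required scores 0 regardless of other signals.
    """
    if f.stopped:
        return 0.0
    score = (w_jerk * min(f.peak_jerk / jerk_cap, 1.0)
             + w_accel * min(f.peak_accel / accel_cap, 1.0)
             + w_dev * min(f.route_deviation / dev_cap, 1.0))
    return round(score, 3)
```

Collapsing the multi-modal evidence into one bounded value is what makes the score actionable: a single threshold can trigger an alert, snapshot capture, or notification to authorities.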
What makes this system particularly valuable is its deployability. Unlike centralized cloud systems or passive telematics dashboards, this framework is capable of functioning on affordable edge devices like the Jetson Nano, even in low-connectivity highway regions. It also adheres to privacy-by-design principles by storing only event-triggered snapshots and avoiding persistent surveillance, thereby balancing ethics, legality, and functionality.
Beyond its technological achievements, this system also represents a philosophical shift in how we think about road safety. Traditionally, efforts have focused on either preventing accidents or responding after they occur. This system does both: it detects the accident and the behavior immediately after—capturing what was previously an unmonitored grey zone of accountability.
References
[1] Paras Verma, “Sleep Detection System for Trucks: A Real-Time Multi-Modal Data-Driven AI Framework for Driver Fatigue Monitoring,” International Journal for Research in Applied Science & Engineering Technology (IJRASET), vol. 13, no. IV, pp. 2229–2245, Apr. 2025. https://www.ijraset.com/best-journal/sleep-detection-system-for-trucks-a-realtime-multimodal-datadriven-ai-framework-for-driver-fatigue-monitoring
[2] World Health Organization (WHO). Global Status Report on Road Safety 2023. Geneva: WHO, 2023.
[3] Ministry of Road Transport and Highways (MoRTH), Government of India. Annual Report on Road Accidents in India – 2023.
[4] National Highway Traffic Safety Administration (NHTSA). Hit-and-Run Crashes. U.S. Department of Transportation, 2022.
[5] Ultralytics. YOLOv8 Documentation. Available at: https://docs.ultralytics.com/
[6] OpenCV. Real-Time Object Detection with YOLO and OpenCV. https://docs.opencv.org/
[7] Tesseract OCR Engine. GitHub Repository: https://github.com/tesseract-ocr/tesseract
[8] UC Berkeley DeepDrive. BDD100K Dataset. https://bdd-data.berkeley.edu/
[9] UA-DETRAC Benchmark Dataset. Chinese National Lab of Pattern Recognition. http://detrac-db.rit.albany.edu/
[10] Kratz, L., Nishino, K. (2010). Tracking with Particle Filtering: Collision and Path Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[11] Ramer, U. (1972). An Iterative Procedure for the Polygonal Approximation of Plane Curves. Computer Graphics and Image Processing, 1(3), 244–256.