This project presents a real-time home hazard detection and alert system using computer vision and deep learning. YOLOv3 is employed to identify dangerous objects such as knives and guns in a live video feed. When a hazard is detected near a person, the system triggers visual overlays and audio alerts using text-to-speech. It processes frames continuously, drawing labeled bounding boxes to distinguish threats from safe objects. Pre-trained YOLOv3 weights and COCO dataset classes are used for accurate detection. This approach improves home safety by enabling continuous, automated monitoring and minimizing human error.
Introduction
Overview
At a time when smart technologies are increasingly embedded in daily life, this project presents a real-time object detection and proximity alert system designed to enhance safety within homes. The system uses computer vision to detect hazardous objects (e.g., knives, scissors) and issues visual and audio alerts when someone gets too close, aiming to prevent accidents before they occur. It acts as a virtual safety assistant, especially useful for children, the elderly, and individuals with limited awareness.
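The overall pipeline can be illustrated with a short Python sketch of the detection-and-overlay loop. This is a minimal example only, assuming OpenCV's DNN module, pre-trained YOLOv3 weight and configuration files (yolov3.weights, yolov3.cfg), a coco.names class list, and an assumed set of classes treated as hazards; it is not the project's actual code.

# Minimal sketch: YOLOv3 detection on a live webcam feed with labeled bounding boxes.
# File names and HAZARD_CLASSES are illustrative assumptions, not taken from the report.
import cv2
import numpy as np

HAZARD_CLASSES = {"knife", "scissors"}  # COCO classes treated as hazards (assumed)

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
output_layers = net.getUnconnectedOutLayersNames()
with open("coco.names") as f:
    classes = [line.strip() for line in f]

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(output_layers):
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] < 0.5:  # confidence threshold
                continue
            label = classes[class_id]
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            color = (0, 0, 255) if label in HAZARD_CLASSES else (0, 255, 0)
            cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), color, 2)
            cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    cv2.imshow("Hazard detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

In a full implementation, non-maximum suppression (cv2.dnn.NMSBoxes) would typically be applied to remove overlapping boxes before drawing.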
Literature Review
Prior research highlights:
Use of object detection models such as YOLO and SSD, typically deployed with libraries like OpenCV, for real-time recognition.
Integration of multi-sensory alert systems (visual + audio) in industrial and healthcare settings for accident prevention.
Existing home automation tools often focus on security rather than hazard-specific proximity detection.
This project builds on these foundations, focusing specifically on home safety through real-time hazard detection.
Existing Systems – Limitations
Traditional systems (e.g., fire alarms, motion detectors, CCTV) lack context awareness and real-time analysis of dangerous objects.
AI-based smart systems mostly perform basic detection, such as identifying people or movement, without assessing the nature or threat level of specific objects.
Alerts are typically triggered by motion or environmental changes, not proximity to hazards.
Surveillance systems require human monitoring and often fail to prevent accidents proactively.
There’s a clear gap for a system that detects and reacts to dangerous objects in real time, particularly for residential use.
Problem Statement
Hazardous household objects can be left in open areas and may go unnoticed.
Current smart systems don’t provide real-time alerts about proximity to these items.
Children and adults need an intelligent, proactive system that offers immediate feedback to prevent harm.
Proposed System
The proposed system is a real-time hazard detection and alert pipeline built around the YOLOv3 object detector. A live camera feed is processed frame by frame using pre-trained YOLOv3 weights and COCO class labels to recognize hazardous objects such as knives and scissors. Each detection is drawn as a labeled bounding box, visually distinguishing threats from safe objects. When a detected hazard comes within a defined proximity of a person, the system triggers an on-screen warning overlay and an audible text-to-speech alert, providing immediate, automated feedback without requiring active human monitoring.
Conclusion
This project has successfully developed an intelligent, real-time home hazard detection and alert system using state-of-the-art deep learning and computer vision technologies. By employing the YOLOv3 object detection model, the system is capable of accurately identifying a range of hazardous objects, such as knives, scissors, and firearms, within a live camera feed. The incorporation of proximity-based detection ensures that alerts are only triggered when dangerous objects come close to a person, minimizing unnecessary warnings and focusing on critical situations. The dual-mode alert system, combining visual overlays with audible text-to-speech warnings, ensures that users are immediately informed of potential threats, even if they are not actively observing the video feed.
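As a rough illustration of the proximity-triggered audio alert described above, the following sketch checks whether a detected hazard's bounding box lies close to a detected person's and speaks a warning. The pixel-distance threshold, the box format, and the use of the pyttsx3 library for text-to-speech are assumptions made for this example; the report does not specify the exact distance metric or library used.

# Illustrative proximity check and spoken alert (assumed threshold and pyttsx3 TTS).
import math
import pyttsx3

engine = pyttsx3.init()

def box_center(box):
    # Center of an (x, y, w, h) bounding box in pixel coordinates.
    x, y, w, h = box
    return x + w / 2, y + h / 2

def is_too_close(person_box, hazard_box, threshold_px=150):
    # True if the hazard's box center lies within threshold_px of the person's center.
    px, py = box_center(person_box)
    hx, hy = box_center(hazard_box)
    return math.hypot(px - hx, py - hy) < threshold_px

def alert(label):
    # Speak a warning for the detected hazard (blocking call).
    engine.say("Warning: " + label + " detected nearby")
    engine.runAndWait()

# Example usage with hypothetical detections taken from the video loop:
person = (200, 120, 80, 200)  # (x, y, w, h) in pixels
knife = (260, 180, 40, 40)
if is_too_close(person, knife):
    alert("knife")

In practice the warning would also be drawn as an on-screen overlay, and the speech call would usually run in a separate thread so that the video loop is not blocked.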
This approach overcomes the significant limitations of traditional home safety methods that rely on manual monitoring or simple sensor triggers, which often lack context or timely response capabilities. The automated and continuous nature of this system reduces human error and improves the overall safety of the home environment, making it particularly beneficial for vulnerable populations such as children, elderly individuals, and persons with disabilities.
Furthermore, the system’s use of pre-trained YOLOv3 weights and the COCO dataset enables efficient deployment without the need for extensive custom training, making it scalable and adaptable to various household settings. The project illustrates the practical application of AI in creating smarter and safer living spaces.
Looking ahead, this system can be enhanced by expanding its detection capabilities to include a wider variety of household hazards, such as electrical appliances, chemicals, or slippery surfaces. Integrating additional sensors, such as infrared or ultrasonic distance detectors, could improve proximity accuracy. Moreover, coupling the system with home automation platforms could enable automatic actions, such as locking cabinets or shutting off appliances, when hazards are detected. Personalized alert settings and remote monitoring features could further increase usability and convenience.
Overall, this project highlights the transformative potential of AI-driven hazard detection in promoting safer homes, reducing accident risks, and providing peace of mind to residents. It lays a strong foundation for future innovations that blend technology with everyday safety needs.