Object recognition is achieved through OpenCV. During training, YOLO matches each ground-truth bounding box in an image with the most appropriate anchor box: the anchor box with the highest degree of overlap with an object is responsible for predicting that object's class and location. The device uses a microcontroller with a built-in Wi-Fi module. The system is convenient and, through an easy-to-use interface, gives the user the information needed to move around in new surroundings, whether indoors or outdoors.
Vision is one of the most essential human senses and plays the central role in how we perceive our environment; unfortunately, many people are visually impaired. Even a century after sighted guides, guide dogs, and canes came into use, blind people still rely on them today. The present work aims to aid the blind through a wrist wearable. The wearable is designed to capture the user's environment through a camera and recognize the objects present in the image. The identified objects are announced to the user through an audio output.
In addition to object detection, the device can adapt to different environments and user preferences, offering personalized feedback. The system’s accuracy and reliability are bolstered by extensive use of diverse datasets and real-time processing capabilities, ensuring the wearable remains effective in a wide range of real-world scenarios. This innovation has the potential to enhance the independence of visually impaired individuals, providing them with more autonomy and a greater sense of security while navigating unfamiliar spaces.
Introduction
The document outlines a project designed to assist visually impaired individuals by using real-time object detection through a mobile or Raspberry Pi-based system. Visually impaired people face challenges due to inaccessible infrastructure and new environments. While existing assistive technologies exist, they often have limitations such as cost, size, or accuracy.
Key Components of the Project:
Objective:
To develop an affordable, lightweight, and real-time object detection system that aids visually impaired individuals in navigating both indoor and outdoor environments.
Technology Used:
Utilizes computer vision techniques, primarily the YOLO (You Only Look Once) and Haar Cascade algorithms.
Employs a Raspberry Pi 4B, camera module, and Bluetooth headset for capturing images and delivering audio feedback.
System processes image data locally and sends it to a server for further analysis or storage.
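As a rough illustration of the local-processing-then-upload step described above, the sketch below packages detected objects into a JSON body that such a server could accept. The field names and device identifier are assumptions for illustration, not part of the original design.

```python
import json

def build_detection_payload(device_id, detections):
    """Serialize locally detected objects as a JSON body for upload.

    `detections` is a list of (label, confidence, [x1, y1, x2, y2]) tuples.
    The field names and device id are illustrative assumptions.
    """
    return json.dumps({
        "device": device_id,
        "objects": [
            {"label": label, "confidence": round(conf, 3), "box": box}
            for label, conf, box in detections
        ],
    })

# Example: one detection from a hypothetical Raspberry Pi unit.
print(build_detection_payload("rpi4b-01", [("person", 0.91, [10, 20, 110, 220])]))
```

In practice the resulting string would be sent over the Pi's Wi-Fi link with any standard HTTP client; keeping serialization in a small pure function makes it easy to test independently of the network.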
System Functionality:
The camera captures the surroundings.
The image is divided into grids; each grid is analyzed for object detection.
Detected objects are labeled and enclosed in bounding boxes.
Detected information is converted into audio and relayed to the user.
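The grid-based analysis in the steps above can be sketched as follows: each detected box is assigned to the grid cell containing its center, as in grid-based detectors such as YOLO. The 7x7 grid size and the (x1, y1, x2, y2) coordinate convention are assumptions for illustration.

```python
def grid_cell(box, img_w, img_h, s=7):
    """Return the (row, col) of the grid cell containing the center of
    a bounding box given as (x1, y1, x2, y2), for an s-by-s grid."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2.0
    cy = (y1 + y2) / 2.0
    # Scale the center into grid coordinates and clamp to the last cell.
    col = min(int(cx / img_w * s), s - 1)
    row = min(int(cy / img_h * s), s - 1)
    return row, col

# A box centered in the middle of a 448x448 image falls in cell (3, 3).
print(grid_cell((200, 200, 248, 248), 448, 448))
```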
Data Handling:
Uses diverse datasets (like COCO, ImageNet) with annotations and variations in lighting, backgrounds, angles, and distances to improve model performance.
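A minimal sketch of consuming such annotations, assuming the standard COCO JSON layout (`images`, `annotations`, `categories`); the tiny inline dataset is invented purely for illustration.

```python
def boxes_per_image(coco):
    """Group COCO-style annotations by image: file name -> list of
    (category name, [x, y, w, h]) entries."""
    cats = {c["id"]: c["name"] for c in coco["categories"]}
    files = {im["id"]: im["file_name"] for im in coco["images"]}
    grouped = {}
    for ann in coco["annotations"]:
        grouped.setdefault(files[ann["image_id"]], []).append(
            (cats[ann["category_id"]], ann["bbox"]))
    return grouped

# Tiny invented COCO-style fragment for illustration only.
coco = {
    "images": [{"id": 1, "file_name": "street.jpg"}],
    "annotations": [
        {"image_id": 1, "category_id": 1, "bbox": [10, 20, 50, 80]},
        {"image_id": 1, "category_id": 2, "bbox": [100, 40, 30, 30]},
    ],
    "categories": [{"id": 1, "name": "person"}, {"id": 2, "name": "dog"}],
}
print(boxes_per_image(coco))
```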
Output Filtering:
Utilizes Non-Maximum Suppression (NMS) and confidence score thresholding to remove duplicate or low-confidence detections.
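A pure-Python sketch of this filtering step: confidence thresholding followed by greedy, per-class Non-Maximum Suppression based on intersection-over-union. The threshold values shown are common defaults, not values taken from this project.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def filter_detections(dets, conf_thresh=0.5, iou_thresh=0.45):
    """dets: list of (label, score, box). Drop low-confidence detections,
    then greedily keep the highest-scoring box and suppress same-class
    boxes that overlap it too much."""
    dets = [d for d in dets if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for d in dets:
        if all(d[0] != k[0] or iou(d[2], k[2]) < iou_thresh for k in kept):
            kept.append(d)
    return kept

# Two overlapping "person" boxes collapse to one; the 0.3 "dog" is dropped.
raw = [("person", 0.9, (0, 0, 100, 100)),
       ("person", 0.8, (10, 10, 110, 110)),
       ("dog", 0.3, (200, 200, 250, 250))]
print(filter_detections(raw))
```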
Results:
The system performs well in real time, in varied indoor and outdoor environments, at distances of up to 10 meters.
Provides clear audio feedback.
Effective, intuitive, and user-friendly for a broad audience.
Some challenges exist in cluttered scenes and bright lighting.
Comparison and Literature Survey:
Reviews several previous models using YOLO, CNN, RetinaNet, and Tesseract OCR.
Highlights improvements over existing systems in terms of cost-efficiency, portability, and real-time capability.
Conclusion
In this project we present a vision system for blind people based on images and video scenes. The system uses deep learning to identify objects under different conditions. Object detection deals with detecting objects within an image or video, and the TensorFlow Object Detection API makes it easy to create or use an object detection model. Blind people have very little information about the speed and direction of surrounding objects, which is essential for travel, and existing navigation systems are too costly for most of them to afford. The main aim of this project is therefore to help blind people with an affordable system.
References
[1] Chen X, Yuille AL. A time-efficient cascade for real-time object detection: With applications for the visually impaired. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops, 2005 Sep 21, pp. 28-28.
[2] Chi-Sheng, Hsieh. "Electronic talking stick for the blind." U.S. Patent No. 5,097,856, 24 Mar. 1992.
[3] Wafa M. Elmannai, Khaled M. Elleithy. "A Highly Accurate and Reliable Data Fusion Framework for Guiding the Visually Impaired." IEEE Access 6 (2018): 33029-33054.
[4] Ifukube, T., Sasaki, T., Peng, C., 1991. A blind mobility aid modelled after echolocation of bats. IEEE Transactions on Biomedical Engineering 38, pp. 461-465.
[5] Cantoni, V., Lombardi, L., Porta, M., Sicard, N., 2001. Vanishing Point Detection: Representation Analysis and New Approaches. 11th International Conference on Image Analysis and Processing.
[6] Balakrishnan, G. N. R. Y. S., Sainarayanan, G., 2006. A Stereo Image Processing System for Visually Impaired. International Journal of Information and Communication Engineering 2, pp. 136-145.
[7] C. S. Kher, Y. A. Dabhade, S. K. Kadam, S. D. Dhamdhere and A. V. Deshpande. "An Intelligent Walking Stick for the Blind." International Journal of Engineering Research and General Science, vol. 3, no. 1, pp. 1057-1062.
[8] G. Prasanthi and P. Tejaswitha. "Sensor Assisted Stick for the Blind People." Transactions on Engineering and Sciences, vol. 3, no. 1, pp. 12-16, 2015.