Abstract
Wearable assistive devices can help people with visual impairments perceive both their physical and social surroundings. This work presents a body-worn system built around a Raspberry Pi 5 and a connected camera that analyzes the scene in real time: it detects nearby obstacles, such as a chair in the user's path, and recognizes facial expressions using trained software, then conveys the results as spoken audio cues without delay. Because sound replaces sight, users who cannot rely on vision receive immediate warnings about barriers around them as well as estimates of the emotional state of nearby people. Performing all processing on the Raspberry Pi 5 keeps response latency low during use, although some detections are processed faster than others. The system was tested indoors and outdoors under live conditions and maintained steady performance without noticeably slowing its tasks. It does not restore vision, but it raises awareness through a lightweight, low-cost wearable aid for the visually impaired.
Introduction
Navigating the world is challenging for individuals with visual impairments, as traditional aids like white canes or guide dogs cannot convey detailed environmental or social cues. To address this, the paper proposes smart glasses that enhance situational awareness by translating visual information into audio feedback.
Proposed System
Hardware: A Raspberry Pi 5 serves as the processing unit, connected to a USB camera and audio output via headphones.
Functionality: The camera captures the surroundings in real time. Computer vision techniques detect objects, obstacles, human faces, and emotions. A trained neural network identifies emotions such as happiness, sadness, or anger.
Feedback: Processed information is delivered as audio cues through earpieces, allowing users to navigate safely and perceive social signals.
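The feedback step above can be illustrated with a small sketch. The paper does not specify how detections are ordered before being spoken, so the rule below, safety-critical obstacle warnings first (nearest first), then social cues, is an assumption; the `Detection` class and `cue_messages` function are hypothetical names introduced here for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    kind: str                          # "obstacle" or "emotion" (assumed categories)
    label: str                         # e.g. "chair", "happy"
    distance_m: Optional[float] = None # None when no range estimate exists

def cue_messages(detections):
    """Turn raw detections into an ordered list of messages to speak.

    Assumption: obstacle warnings are spoken before emotion cues,
    with the nearest obstacle announced first.
    """
    obstacles = sorted(
        (d for d in detections if d.kind == "obstacle"),
        key=lambda d: d.distance_m if d.distance_m is not None else float("inf"),
    )
    emotions = [d for d in detections if d.kind == "emotion"]
    messages = [f"{d.label} {d.distance_m:.0f} meters ahead" for d in obstacles]
    messages += [f"person looks {d.label}" for d in emotions]
    return messages
```

On the real device, each returned string would be passed to a text-to-speech engine and played through the earpieces.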
System Architecture
The device integrates sensing, processing, and feedback modules, creating a self-contained wearable assistive tool.
Audio guidance informs the user about nearby obstacles and the emotional state of people around them.
Results
Obstacle detection was effective up to 4 meters, providing timely audio warnings.
Facial and emotion recognition successfully identified expressions in indoor and outdoor settings, enhancing social awareness.
The system operates with low response time, demonstrating practical usability for visually impaired individuals.
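A single camera cannot measure range directly, so a detection range like the 4 meters reported above implies some distance-estimation step. One common approach, which may or may not match the paper's method, is the pinhole-camera model: distance = focal length (px) × real object width / apparent width (px). A minimal sketch, with an assumed focal length:

```python
def estimate_distance_m(real_width_m, width_px, focal_length_px=600.0):
    """Pinhole-camera range estimate for an object of known real-world width.

    focal_length_px is an assumed calibration value for the USB camera;
    a real camera must be calibrated (e.g. against a checkerboard target).
    """
    if width_px <= 0:
        raise ValueError("width_px must be positive")
    return focal_length_px * real_width_m / width_px

# Example: a 0.5 m wide chair spanning 75 px would be about
# 600 * 0.5 / 75 = 4.0 m away, near the reported detection limit.
```

Beyond a few meters the apparent width shrinks to a handful of pixels, so the estimate degrades quickly, which is consistent with an effective range on the order of 4 meters.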
Overall, the smart glasses combine real-time computer vision and audio feedback to improve both navigation safety and social interaction for users with impaired vision.
Conclusion
This paper presented an IoT-based smart glasses system for visually impaired people, designed to increase their awareness of the environment and their level of interaction in society. The system is based on a Raspberry Pi platform combined with a USB camera to detect obstacles and facial expressions. Computer vision and machine learning algorithms recognize facial emotions, and the resulting information is conveyed to the user audibly.
The experimental results confirmed that the proposed system can detect obstacles and recognize facial emotions reliably. The system can be improved in several ways: the accuracy of facial emotion recognition can be raised with more capable machine learning models, and additional sensors, such as a GPS receiver for navigation, can be integrated to broaden the system's understanding of its environment.
References
[1] H. Ali A., S. U. Rao, S. Ranganath, T. S. Ashwin, and G. R. M. Reddy, ”A Google Glass Based Real-Time Scene Analysis for the Visually Impaired.” IEEE Access, vol. 9, pp. 166351–166367, 2021.
[2] S. Ikram, I. S. Bajwa, A. Ikram, I. De La Torre D´?ez, C. E. Uc R´?os, and A´ . G. Kuc Castilla, ”Obstacle Detection and Warning System for Visually Impaired Using IoT Sensors.” IEEE Access, vol. 13, pp. 35309– 35321, 2025.
[3] M. N. Narayana, R. V. V. S. V. Prasad, K. Munni, V. D. Anandi, and B. Nikhita, ”Real-time Facial Recognition and Emotion Detection System.” International Journal of Scientific Research and Engineering Development, vol. 8, no. 2, pp. 1914–1919, 2025.
[4] W.-J. Chang, L.-B. Chen, M.-C. Chen, Y.-P. Su, C.-Y. Sie, and C.-H.
[5] Yang, ”Design and Implementation of an Intelligent Assistive System for Visually Impaired People for Aerial Obstacle Avoidance and Fall Detection.” IEEE Sensors Journal, vol. 20, no. 17, pp. 10199–10210, Sept. 2020.
[6] B. Jiang, J. Yang, Z. Lv, and H. Song, ”Wearable Vision Assistance System Based on Binocular Sensors for Visually Impaired Users.” IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1375–1383, Apr. 2019.
[7] P. Washington, C. Voss, N. Haber, S. Tanaka, J. Daniels, C. Feinstein, T. Winograd, and D. Wall, ”A Wearable Social Interaction Aid for Children with Autism.” Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 2348–2354, 2016.