The development of vehicles that operate without direct human control has significantly transformed the automotive industry, paving the way for safer and more efficient transportation systems. With road accidents rising and road safety still lacking a structured, technology-driven approach, there is an urgent need for intelligent solutions. In this project, we present a cost-effective autonomous driving prototype built on the Raspberry Pi platform. The system uses a Pi Camera Module for real-time image and video capture, enabling lane detection through classical image-processing algorithms; these techniques let the vehicle identify road lanes and maintain its position while navigating. By pairing lightweight deep learning models with affordable hardware, the solution is well suited to research, academic learning, and early-stage prototyping, and it highlights the practical potential of autonomous vehicles to operate independently and intelligently in real-world driving environments.
Introduction
This project presents a low-cost, vision-guided autonomous robot built using a Raspberry Pi 4, capable of real-time lane detection, obstacle avoidance, and autonomous navigation. It integrates camera vision, sensor fusion, and machine learning for self-driving on a fixed track.
Key Features:
1. Self-Driving System
Uses a Pi Camera for lane detection via Canny edge detection and Hough Transform.
Detects traffic lights using a machine learning model trained on Google Colab.
Supports basic movement: forward, backward, turns, and stops.
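The lane-keeping behaviour above can be reduced to a simple rule: classify detected Hough segments into left and right lane edges by slope, estimate the lane centre, and steer toward it. A minimal sketch of that decision step (the `steer_from_lines` helper, the slope cut-offs, and the pixel tolerances are illustrative assumptions, not the project's exact code):

```python
import numpy as np

def steer_from_lines(lines, frame_width):
    """Classify Hough segments as left/right lane edges by slope,
    estimate the lane centre, and return a movement command.
    `lines` is an iterable of (x1, y1, x2, y2) segments in image coords."""
    left_x, right_x = [], []
    for x1, y1, x2, y2 in lines:
        if x2 == x1:                    # vertical segment: no finite slope
            continue
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.3:                # left edge slopes up-left (y grows downward)
            left_x.append((x1 + x2) / 2)
        elif slope > 0.3:               # right edge slopes up-right
            right_x.append((x1 + x2) / 2)
    if not left_x or not right_x:
        return "forward"                # not enough information to correct
    lane_centre = (np.mean(left_x) + np.mean(right_x)) / 2
    offset = lane_centre - frame_width / 2
    if offset < -20:                    # lane centre is left of frame centre
        return "left"
    if offset > 20:
        return "right"
    return "forward"
```

Near-horizontal segments (|slope| below 0.3) are discarded as noise, which keeps shadows and track joints from producing spurious edges.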
2. Navigation & Obstacle Avoidance
Ultrasonic sensors detect nearby obstacles.
Combines sensor data using Bayesian inference for more accurate obstacle detection.
Applies a non-linear kinematic model (the bicycle model) for motion control.
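The Bayesian fusion step can be illustrated as a sequential update of the probability that an obstacle is present, given repeated ultrasonic readings. The hit and false-alarm rates below are assumed values for illustration, not measured sensor characteristics:

```python
def bayes_update(prior, reading_is_hit, p_hit=0.9, p_false=0.1):
    """One Bayesian update of P(obstacle) from a single ultrasonic reading.
    p_hit   = P(reading below threshold | obstacle present)
    p_false = P(reading below threshold | no obstacle)"""
    if reading_is_hit:
        num = p_hit * prior
        den = p_hit * prior + p_false * (1 - prior)
    else:
        num = (1 - p_hit) * prior
        den = (1 - p_hit) * prior + (1 - p_false) * (1 - prior)
    return num / den

def fuse_readings(readings, prior=0.5):
    """Fuse a sequence of boolean hit/miss readings into one belief."""
    p = prior
    for hit in readings:
        p = bayes_update(p, hit)
    return p
```

Three consecutive hits push the belief from an uninformative 0.5 prior to above 0.99, while a single spurious echo barely moves it, which is the practical benefit over a raw threshold on one reading.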
3. Real-Time Telecommunication
Integrates Blynk, Telegram, or direct Wi-Fi links for remote monitoring and control.
Supports alerts and commands, enabling integration into IoT-based smart transport systems.
System Architecture:
Vision Module: Pi Camera + OpenCV for real-time lane tracking.
Control Module: the Raspberry Pi sends commands to an Arduino Uno, which drives the motors through an L298N motor driver.
Sensor Module: Ultrasonic and optional IR sensors for obstacle and edge detection.
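The Pi-to-Arduino link can be sketched as a small line-oriented text protocol over serial. The single-letter command codes, the speed range, and the `encode_command` helper are assumptions of this sketch, not a fixed protocol from the project:

```python
def encode_command(direction, speed):
    """Build a one-line ASCII command for the Arduino, e.g. b'F,150\n'.
    The letter codes and the 0-255 speed range are illustrative choices."""
    codes = {"forward": "F", "backward": "B",
             "left": "L", "right": "R", "stop": "S"}
    if direction not in codes:
        raise ValueError(f"unknown direction: {direction}")
    speed = max(0, min(255, int(speed)))          # clamp to PWM range
    return f"{codes[direction]},{speed}\n".encode("ascii")
```

On the Pi these bytes would typically be written with pyserial, e.g. `serial.Serial("/dev/ttyACM0", 9600).write(encode_command("forward", 150))`, with the Arduino parsing up to the newline on its side.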
Hardware Components:
Raspberry Pi 4
Pi Camera
Arduino Uno
L298N Motor Driver
2 DC Gear Motors
HC-SR04 Ultrasonic Sensor
5V & 12V power supplies
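The HC-SR04 reports distance as the width of an echo pulse: sound travels to the obstacle and back, so the round-trip time is halved. The conversion is simple arithmetic (the helper names are illustrative; the 20 cm threshold mirrors the detection range reported under Implementation & Testing):

```python
def echo_to_distance_cm(pulse_width_s, speed_of_sound_cm_s=34300):
    """Convert an HC-SR04 echo pulse width in seconds to distance in cm.
    The pulse covers the round trip, so the product is halved."""
    return pulse_width_s * speed_of_sound_cm_s / 2

def obstacle_within(pulse_width_s, threshold_cm=20):
    """True when the measured distance falls inside the stop threshold."""
    return echo_to_distance_cm(pulse_width_s) < threshold_cm
```

On the real robot the pulse width would come from timing the sensor's ECHO pin (e.g. with RPi.GPIO or gpiozero); only the conversion is shown here because the timing loop is hardware-dependent.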
Software Tools:
Python 3 with OpenCV and NumPy
Image processing includes:
Grayscale conversion
Gaussian blur
Canny edge detection
Region of interest (ROI) masking
Hough line transform
Serial communication to Arduino for motor control
Implementation & Testing:
Tested on a custom track under varying lighting.
Achieved:
Lane detection with ~5 cm lateral accuracy
Obstacle detection within a 20 cm range
~15 FPS processing speed
<300 ms reaction time
Results & Limitations:
Strengths:
Reliable lane tracking under good lighting
Accurate obstacle avoidance
Real-time control on low-cost hardware
Limitations:
Struggles in low-light or poor weather
Limited processing power reduces frame rate
Sensitive to poor lane markings
Literature Survey Insights:
Highlights the rise of embedded AI and artificial neural networks (ANNs) in real-time autonomous systems.
References include successful deployments of lightweight models like Tiny-YOLOv3, YOLOv7-Tiny, and reinforcement learning methods on Raspberry Pi for vision-based navigation.
Conclusion
This project successfully demonstrates a cost-effective, modular autonomous vehicle using embedded AI and sensor fusion. It serves as an excellent platform for education, research, and experimentation in autonomous driving technologies, with potential for further enhancements like GPS, path planning, and improved ML models.
References
[1] X. Chen et al., “A Survey of Computer Vision-Based Autonomous Vehicle Systems,” 2024.
[2] A. Falaschetti, “Tiny-YOLOv3 Optimization for Embedded Vision,” 2023.
[3] “YOLOv7-Tiny for Real-Time Object Detection on Edge Devices,” Electronics (MDPI), vol. 12, no. 6, pp. 1–10, 2023.
[4] R. Girshick et al., “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” 2014.
[5] H. Hosseini and A. Etemad-Shahidi, “Machine Learning Algorithms for Autonomous Vehicles: A Review,” IEEE Access, vol. 9, pp. 123456–123470, 2021.
[6] A. Srivastava et al., “Design and Implementation of an Indoor Autonomous Robot Using ML and Sensors,” in Proc. Int. Conf. Intelligent Systems, 2021.
[7] R. Raju, “Reinforcement Learning on Raspberry Pi for Autonomous Navigation,” Journal of Embedded Systems and Robotics, vol. 3, no. 2, pp. 45–52, 2021.