Microplastics are emerging pollutants that threaten aquatic ecosystems and global water safety. This research introduces an integrated IoT and AI-based system designed for real-time detection of microplastic particles in water samples. The system employs high-resolution microscopy integrated with a Raspberry Pi platform, utilizing machine learning models trained to identify and classify microplastic types based on size and shape. Data captured by the camera is processed and transmitted via MQTT to a centralized dashboard, providing live visualization of contamination levels, particle types, and water quality parameters. Validation using simulated datasets demonstrates detection accuracy exceeding 95%, with potential to scale for environmental monitoring across multiple sites. This work highlights a cost-effective, scalable approach for continuous water quality assessment, contributing to environmental protection and pollution management efforts.
Introduction
Microplastics—tiny plastic particles less than five millimeters—pose a serious threat to aquatic ecosystems and human health. Traditional methods for detecting them rely on manual microscopy or advanced spectroscopic instruments, which are accurate but costly, slow, and unsuitable for continuous monitoring. This study introduces a low-cost, AI- and IoT-enabled system that automatically detects and classifies microplastics in water samples in real time.
The system integrates a Raspberry Pi 4, a high-resolution microscope camera, and environmental sensors to capture and analyze microscopic images of water samples. A YOLOv5n-based convolutional neural network (CNN), optimized with TensorFlow Lite, identifies and categorizes microplastic particles by type and size directly on the device. Results—including particle count, type, and size—are transmitted via the MQTT IoT protocol to a cloud-based interactive dashboard, where users can visualize contamination levels, monitor trends, and receive real-time alerts.
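As a rough illustration of the telemetry step, the sketch below assembles one frame's detection results into a JSON report and publishes it over MQTT. The topic name, broker address, field names, and device ID are illustrative assumptions, not the system's actual schema; the publish helper assumes the widely used paho-mqtt client (v1.x API).

```python
import json
import time

def build_payload(detections, device_id="pi-node-01"):
    """Serialize one frame's detections into a JSON report.

    `detections` is a list of dicts with keys: type, size_mm, confidence.
    Field names here are illustrative, not the paper's actual schema.
    """
    return json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "particle_count": len(detections),
        "detections": detections,
    })

def publish_report(payload, broker="broker.example.org", topic="water/microplastics"):
    """Publish a report to the dashboard's broker (hypothetical broker/topic;
    requires the paho-mqtt package, v1.x Client API assumed)."""
    import paho.mqtt.client as mqtt  # imported lazily so the builder stays stdlib-only
    client = mqtt.Client()
    client.connect(broker)
    client.publish(topic, payload, qos=1)  # QoS 1: at-least-once delivery
    client.disconnect()

if __name__ == "__main__":
    report = build_payload([
        {"type": "fragment", "size_mm": 0.8, "confidence": 0.92},
        {"type": "fiber", "size_mm": 2.3, "confidence": 0.87},
    ])
    print(json.loads(report)["particle_count"])  # prints 2
```

Keeping the payload as plain JSON keeps the dashboard side decoupled: any MQTT subscriber can parse counts and sizes without knowing the model internals.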
A review of existing methods shows major limitations:
Manual microscopy is simple but error-prone and subjective.
Spectroscopy (FTIR, Raman) provides chemical insight but is expensive and non-portable.
AI-based approaches are accurate but often lack real-time integration and require large datasets.
IoT-based water systems monitor parameters like pH and turbidity but rarely detect microplastics.
The proposed system combines the strengths of these methods—AI precision, IoT connectivity, and hardware affordability—into a scalable, field-deployable solution. It overcomes key challenges such as limited datasets, environmental noise, and real-time processing constraints. The device continuously collects, analyzes, and streams data, offering a practical approach for automated, real-time water quality monitoring that can support researchers, environmentalists, and policymakers in combating plastic pollution.
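To make the size-based categorization concrete, the sketch below converts a detected bounding box from pixels to millimeters using a microscope calibration factor and bins the particle by size. The calibration value and the bin thresholds are illustrative assumptions (only the 5 mm upper bound on microplastics comes from the text); a real deployment would measure the calibration with a stage micrometer.

```python
# Hypothetical calibration: millimeters spanned by one pixel at the
# microscope's magnification (would be measured with a stage micrometer).
MM_PER_PIXEL = 0.002

def particle_size_mm(box_px, mm_per_pixel=MM_PER_PIXEL):
    """Longest side of a detection box (x1, y1, x2, y2) in millimeters."""
    x1, y1, x2, y2 = box_px
    return max(x2 - x1, y2 - y1) * mm_per_pixel

def size_category(size_mm):
    """Bin a particle by size. Bin edges are illustrative; only the
    5 mm upper limit follows the standard microplastic definition."""
    if size_mm > 5.0:
        return "not_microplastic"
    if size_mm >= 1.0:
        return "large_microplastic"   # 1 mm - 5 mm
    if size_mm >= 0.001:
        return "small_microplastic"   # 1 um - 1 mm
    return "nanoplastic"

# Example: a 600 x 200 px detection at this calibration spans 1.2 mm.
print(size_category(particle_size_mm((100, 100, 700, 300))))  # prints large_microplastic
```

In the full pipeline this step would run on each box the detector emits, and the resulting counts per category would feed the MQTT payload described earlier.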
Conclusion
This study shows that a compact, affordable deep learning system built on the YOLOv5n model and a Raspberry Pi can effectively bring real-time microplastic detection within reach for both laboratory and field deployments. By assembling a carefully annotated dataset, emphasizing explicit negative samples, and optimizing for limited hardware, the system demonstrated that even small and clustered plastic fragments can be reliably identified at a fraction of the time and cost of conventional methods. The results confirm strong detection utility for rapid on-site screening and environmental monitoring.
Like all practical research, this effort also surfaced real challenges. The model sometimes missed very fine or hard-to-distinguish microplastics, was occasionally misled by debris or shadows, and the Raspberry Pi's processing speed set limits on heavy, sustained use. These findings, however, clearly map out improvements that are now within reach.
Looking ahead, several steps can push this approach further:
1) Growing and diversifying the dataset with many more samples from real-world water bodies and diverse lab environments, including those with complex backgrounds and higher turbidity, to help the model adapt and catch more subtle plastics.
2) Upgrading the algorithm to support multi-class detection—identifying various plastic types (like PET, PP, PE)—and pushing towards detecting even smaller nanoplastics or secondary pollutants.
3) Speeding up on-device processing through further model quantization within the existing TensorFlow Lite pipeline, making round-the-clock, large-area monitoring achievable on affordable edge hardware.
4) Enhancing dashboard and telemetry tools for clearer trend analysis, real-time alerts, and easier reporting—so any lab or agency can track local microplastic trends over time with less effort.
5) Validating and benchmarking this solution in a wider range of natural settings, directly comparing it to trusted lab techniques, and sharing results to help the wider scientific community improve automated microplastic detection.
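The quantization idea in step 3 can be illustrated with the basic affine INT8 scheme that TensorFlow Lite's post-training quantization applies: each tensor of real-valued weights is mapped to 8-bit integers via a scale and zero point, trading a bounded amount of precision for smaller models and faster integer arithmetic on the Pi. The code below is a minimal numeric sketch of that mapping, not the TFLite converter itself.

```python
def quantize_params(values, num_bits=8):
    """Compute an affine (scale, zero_point) mapping reals to signed ints."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)  # range must include 0
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, num_bits=8):
    """Map reals to clamped signed integers: q = round(v / scale) + zero_point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return [max(qmin, min(qmax, round(v / scale + zero_point))) for v in values]

def dequantize(q_values, scale, zero_point):
    """Recover approximate reals: v = (q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in q_values]

# Round-tripping a small weight vector loses at most one quantization step.
weights = [-0.8, -0.1, 0.0, 0.35, 1.2]
scale, zp = quantize_params(weights)
restored = dequantize(quantize(weights, scale, zp), scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale  # error bounded by one quantization step
```

Because the range is forced to include zero, an exact zero weight survives quantization exactly, which matters for zero-padded convolutions; this is the same design choice TFLite's scheme makes.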
In summary, this research lays a foundation for truly scalable, real-time monitoring of waterborne microplastics, a step that can empower researchers, regulators, and even local communities to better safeguard their environment. As other groups contribute new datasets and use-cases, and as feedback shapes new iterations, this approach should only grow more accurate, robust, and accessible for the fight against aquatic plastic pollution.