Rural populations and forestry workers face significant safety risks due to inadequate surveillance methods for detecting animal types and movements. To address this challenge, this project introduces a hybrid VGG-19 and Bi-LSTM network aimed at enhancing safety monitoring in forested areas. The VGG-19 model excels at feature extraction, while the Bi-LSTM network specializes in learning sequential data; together they enable precise detection of animal types and behaviours. This integrated approach not only improves accuracy in identifying wild animals but also provides a cost-effective alternative to traditional surveillance techniques.
The primary objectives of this project are to improve the accuracy of animal detection and activity classification in forested regions by leveraging the complementary strengths of the VGG-19 and Bi-LSTM algorithms, and to mitigate the high computational expense of existing surveillance methods through model optimization, ensuring both cost-effectiveness and superior detection performance. The efficiency and accuracy of the proposed VGG-19 + Bi-LSTM model are rigorously evaluated against traditional Convolutional Neural Network (CNN) approaches, offering valuable insights into the advancements and benefits of the new methodology.
By incorporating advanced deep learning techniques, the proposed system provides comprehensive monitoring of forest animal activity. Through careful comparison with existing CNN models, it identifies the most effective method for real-time animal activity detection, enabling an optimized surveillance deployment. The project aims to protect individuals in rural and forested areas, delivering peace of mind through accurate and reliable animal detection.
Introduction
Overview
Real-time detection of wild animal activity is a challenging task due to:
Constant data flow from surveillance
Diverse species and natural environments
High computational demands
Advanced deep learning models are essential for monitoring wildlife, preventing animal attacks, and generating location-based alerts for forest officials. These systems also help detect illegal hunting and track animal movements.
Objective
The paper proposes a Hybrid VGG-19 + Bi-LSTM model to:
Detect wild animals via surveillance cameras and drones
Send SMS alerts to forest officials
Achieve high performance: 98% classification accuracy, 77.2% mAP, and 170 FPS
Additionally, a CNN + BiGRU model further improves detection precision.
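For context on the mAP figure quoted above, average precision for one class is computed from a ranked list of detections; mAP is then the mean of AP over all animal classes. A minimal sketch, assuming every ground-truth animal is eventually retrieved so the ranked hit/miss list covers all positives:

```python
def average_precision(ranked_hits):
    """AP for one class: mean of the precision values at each correct detection.

    ranked_hits: detections sorted by confidence, 1 = true positive,
    0 = false positive. Assumes the list covers all ground-truth positives.
    """
    true_positives = 0
    precisions = []
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            true_positives += 1
            precisions.append(true_positives / rank)
    return sum(precisions) / true_positives if true_positives else 0.0

# Hits at ranks 1, 3, and 4: AP = (1/1 + 2/3 + 3/4) / 3
ap = average_precision([1, 0, 1, 1, 0])
```

The 77.2% mAP reported by the paper would be this quantity averaged over every species class in the dataset.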
Problem Statement
Rural communities and forest workers face dangers from wild animal attacks. Existing methods are often:
Expensive
Complex
Inaccurate
A cost-effective, efficient, and accurate solution is required to ensure public and ecological safety.
Existing System Limitations
Use of CNNs without specialized adaptation
High computing cost
Poor real-time performance and accuracy
Lack of effective alerting mechanisms
Proposed System
A five-phase system is introduced using:
Phase 1 & 2: Image preprocessing and object detection
Phase 3: Hybrid model (VGG19 + Bi-LSTM) for classification
Phase 4 & 5: SMS alerting and on-ground response (excluded from testing due to cost constraints)
Uses the Kaggle Wild Animal Dataset for training and evaluation.
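Phase 4's SMS alert can be illustrated by formatting a detection event into a gateway payload. The field names, recipient, and the idea of POSTing JSON to an SMS gateway are illustrative assumptions; the paper does not specify a provider:

```python
import json

def build_alert(species, activity, latitude, longitude):
    """Format a detection event as an SMS alert payload for forest officials.

    The "to"/"body" field names and the recipient below are hypothetical;
    any real deployment would follow its SMS gateway's API.
    """
    message = (f"WILDLIFE ALERT: {species} detected ({activity}) "
               f"at lat {latitude:.4f}, lon {longitude:.4f}")
    return json.dumps({"to": "+00-FOREST-DESK", "body": message})

payload = build_alert("elephant", "moving toward village", 11.4102, 76.6950)
# The payload would then be POSTed to the gateway (e.g. via urllib.request);
# the network call is omitted here.
```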
Algorithms
VGG19 + Bi-LSTM:
VGG19 extracts image features
Bi-LSTM captures temporal dependencies
Enables classification and activity-based alerts with location data
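To make this division of labour concrete, the sketch below runs a minimal Bi-LSTM in NumPy over a sequence of per-frame feature vectors (random stand-ins for VGG-19 outputs) and concatenates the final states of the two directions for classification. All dimensions and weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the four gates (input, forget, cell, output) are stacked."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
    c = f * c + i * g
    return o * np.tanh(c), c

def bi_lstm_encode(frames, params_fwd, params_bwd, hidden):
    """Run the frame sequence forward and backward; concatenate final states."""
    states = []
    for params, seq in ((params_fwd, frames), (params_bwd, frames[::-1])):
        h, c = np.zeros(hidden), np.zeros(hidden)
        for x in seq:
            h, c = lstm_step(x, h, c, *params)
        states.append(h)
    return np.concatenate(states)  # shape: (2 * hidden,)

rng = np.random.default_rng(0)
feat_dim, hidden = 16, 8          # stand-in for the VGG-19 feature size
make = lambda: (rng.normal(size=(4 * hidden, feat_dim)) * 0.1,
                rng.normal(size=(4 * hidden, hidden)) * 0.1,
                np.zeros(4 * hidden))
frames = [rng.normal(size=feat_dim) for _ in range(5)]   # 5 video frames
encoding = bi_lstm_encode(frames, make(), make(), hidden)
# A final softmax layer over `encoding` would produce the animal-class scores.
```

In the actual system the per-frame vectors come from VGG-19's convolutional features rather than random noise, and the weights are learned; the sketch only shows how the bidirectional recurrence consumes the frame sequence.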
CNN + BiGRU:
CNN for feature extraction
BiGRU for improved accuracy and real-time detection
Outperforms traditional methods across all metrics
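The BiGRU variant differs from the Bi-LSTM mainly in its cell: a GRU keeps a single hidden state with two gates (reset and update) instead of the LSTM's separate cell state and three gates, which reduces parameters and helps real-time throughput. A minimal NumPy sketch of one GRU step, with illustrative dimensions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step: reset/update gates stacked first, then a candidate state."""
    H = h.shape[0]
    z = W[:2 * H] @ x + U[:2 * H] @ h + b[:2 * H]
    r, u = sigmoid(z[:H]), sigmoid(z[H:])               # reset, update gates
    h_tilde = np.tanh(W[2 * H:] @ x + U[2 * H:] @ (r * h) + b[2 * H:])
    return (1.0 - u) * h + u * h_tilde                  # blend old and new state

rng = np.random.default_rng(1)
feat_dim, hidden = 16, 8          # stand-in for the CNN feature size
W = rng.normal(size=(3 * hidden, feat_dim)) * 0.1
U = rng.normal(size=(3 * hidden, hidden)) * 0.1
b = np.zeros(3 * hidden)
h = np.zeros(hidden)
for x in [rng.normal(size=feat_dim) for _ in range(5)]:  # 5 video frames
    h = gru_step(x, h, W, U, b)
# A second pass over the reversed frames, concatenated with h, gives the BiGRU.
```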
Future Work
Expand dataset to cover more species and environments
Collaborate with wildlife authorities for real-world deployment
Integrate thermal and acoustic sensors for multi-modal detection
Improve model architecture using newer deep learning techniques
Ensure ethical use by minimizing dataset biases
Conclusion
This paper presents a hybrid framework combining VGG-19 and Bi-LSTM for wild animal detection and activity monitoring. The proposed approach plays a crucial role in safeguarding both wildlife from poaching and humans from unexpected animal encounters by sending automated alerts to forest authorities. The model introduces innovative methods to enhance the efficiency of deep learning techniques for broader real-time applications. To assess its effectiveness, it has been tested on four benchmark datasets: the Camera Trap Dataset, Wild Animal Dataset, Hoofed Animal Dataset, and CDnet Dataset. Experimental evaluations demonstrate superior performance across multiple quality metrics: the hybrid VGG-19 + Bi-LSTM model achieves an impressive 98% average classification accuracy, surpassing previous approaches while maintaining lower computational complexity.