This study proposes a framework for semi-automated assessment of earthquake-induced building damage using remote sensing imagery datasets combined with advanced machine learning techniques. The framework uses high-resolution post-event InSAR images, and a machine learning approach is employed to classify the damage states of buildings in earthquake-affected areas. Multi-class damage classification is performed for earthquake events. In previous studies, binary classification successfully identified over fifty percent of damaged buildings. Multi-class damage classification using InSAR data remains a relatively novel application, and the case studies presented here are among the first efforts to leverage InSAR imagery for building-level damage assessment with a CNN model.
Introduction
Overview
Buildings are highly vulnerable to earthquakes, with recent disasters (e.g., Sichuan 2008, Chile 2010, Nepal 2015) demonstrating the need for rapid, reliable post-disaster damage assessment. Traditional ground surveys are slow and risky, motivating the use of satellite-based remote sensing techniques like:
LiDAR and optical imagery – useful for identifying structural variations.
While traditional remote sensing methods use machine learning (e.g., SVM, Random Forest, OBIA), they often require manual intervention, limiting speed and scalability.
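A traditional pipeline of this kind can be sketched as follows. The eight per-building texture/intensity features and the synthetic binary labels are assumptions for illustration only, not data from the studies discussed:

```python
# Sketch of a pre-deep-learning damage classifier, assuming handcrafted
# per-building features (e.g. texture statistics) have already been
# extracted from imagery; feature values and labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))     # 8 hypothetical texture/intensity features
y = rng.integers(0, 2, size=200)  # 0 = intact, 1 = damaged (binary case)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)        # per-building damage labels
```

The manual step such pipelines cannot avoid is the feature engineering itself, which is exactly what the deep learning approaches below replace.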
Shift to Deep Learning
Deep learning, especially Convolutional Neural Networks (CNNs), allows for automatic, scalable, and accurate damage classification. Models like VGG, GoogLeNet, and AlexNet outperform traditional approaches in image recognition and are being increasingly adopted in disaster response.
However, the use of CNNs for post-earthquake building damage classification is still in early stages and limited by lack of large, labeled datasets.
Objective
This research aims to:
Use SAR-based remote sensing data for large-scale damage assessment.
Apply CNNs (specifically VGG19) to classify building damage into:
Intact
Moderately Damaged
Heavily Damaged
Automate the process for faster, scalable, and accurate emergency response.
Literature Insights
Wang et al. (2023) – Reviewed 242 studies; CNNs effective, but limited by lack of multi-class datasets.
Hong et al. – Developed EBDC-Net for multi-level damage classification; performance drops as damage categories increase.
Sajitha et al. – Introduced EU-Net (a Siamese U-Net variant) for faster, more accurate classification using pre/post-event image pairs.
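The Siamese principle behind models such as EU-Net can be illustrated with a toy shared-weight encoder. This sketch is an assumption for illustration, not the published EU-Net architecture; it shows only the core idea of encoding pre- and post-event images with the same weights and comparing the embeddings:

```python
# Toy illustration of the Siamese pre/post-event comparison idea: one
# encoder (shared weights) processes both images, and a small head scores
# the difference of their feature maps per pixel.
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # shared encoder branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, 1)          # per-pixel change score

    def forward(self, pre, post):
        f_pre, f_post = self.encoder(pre), self.encoder(post)
        return self.head(torch.abs(f_pre - f_post))

net = SiameseChangeNet().eval()
with torch.no_grad():
    change_map = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```

Because both branches share weights, identical pre/post patches produce a zero difference map, which is what makes the pairwise formulation effective for change detection.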
Proposed Methodology
System Design
Uses the VGG19 CNN architecture with transfer learning for advanced feature recognition.
Web app interface for real-world usability.
Results (UI Snapshots):
Web interface includes:
Home page
Registration/login
User dashboard
Image upload and damage prediction results
Conclusion
The proposed CNN-based framework for classifying damaged buildings from remote sensing imagery demonstrates significant potential in enhancing the accuracy, speed, and reliability of post-disaster damage assessment. Unlike conventional approaches, which often rely on labor-intensive manual inspections or rule-based techniques with limited adaptability, the CNN model can automatically extract complex spatial and texture features from satellite images, enabling precise differentiation between varying levels of building damage.
By minimizing reliance on human interpretation, the system not only reduces processing time but also ensures consistency and objectivity in classification results. In the context of large-scale natural disasters, such as earthquakes, floods, or cyclones, this capability is particularly valuable, allowing emergency response teams to rapidly identify severely affected areas. Consequently, authorities can prioritize rescue operations, optimize resource allocation, and implement recovery strategies more effectively. Overall, this approach provides a robust, scalable, and practical solution for disaster management, supporting faster decision-making and improving resilience in post-disaster scenarios.