Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation
Access: Closed access
Date: 2021

Abstract
Detecting damaged buildings as quickly as possible after an earthquake is essential so that emergency teams can reach these buildings and save lives. Today, damaged buildings are typically located after an earthquake by survivors contacting the authorities or by aerial vehicles such as helicopters. In this study, AI-based systems that could be integrated into street camera systems were tested for detecting damaged or destroyed buildings after unexpected disasters. For this purpose, we used the VGG-16, VGG-19, and NASNet convolutional neural network models, which are widely applied to image recognition problems in the literature, to detect damaged buildings. To make these models more effective, we first segmented all the images with the K-means clustering algorithm. In the first phase of the study, segmented images labeled as damaged buildings or normal were classified, and VGG-19 was the most successful model, with 90% accuracy on the test set. In the second phase, we formulated a multiclass classification problem by labeling the segmented images as damaged buildings, less damaged buildings, or normal. The same three architectures were trained to obtain the most accurate classification results on the test set. VGG-19, VGG-16, and NASNet achieved considerable success, with test-set accuracies of about 70%, 67%, and 62%, respectively.
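The abstract describes a preprocessing step in which every image is segmented with K-means clustering before being fed to the CNNs. The sketch below illustrates one common way to do this: clustering pixel colours and replacing each pixel with its cluster centroid. The cluster count (k = 4), the image size, and the file names are illustrative assumptions, not values reported in the paper.

```python
# Hedged sketch of K-means colour segmentation as a preprocessing step.
# k, the target size, and the file names below are assumptions for illustration.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def kmeans_segment(image_path, k=4, size=(224, 224)):
    """Segment an RGB image by replacing each pixel with its cluster centroid."""
    img = Image.open(image_path).convert("RGB").resize(size)
    pixels = np.asarray(img, dtype=np.float32).reshape(-1, 3)

    # Cluster pixel colours; each pixel is assigned to one of k centroids.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    segmented = km.cluster_centers_[km.labels_].reshape(size[1], size[0], 3)
    return segmented.astype(np.uint8)

if __name__ == "__main__":
    seg = kmeans_segment("building.jpg")           # hypothetical input image
    Image.fromarray(seg).save("building_seg.jpg")  # segmented output for the CNN
```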
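For the classification stage, the study reports fine-tuning pretrained architectures (VGG-16, VGG-19, NASNet) on the segmented images. The following is a minimal Keras sketch of the binary case (damaged vs. normal) with a frozen VGG-19 backbone; the directory layout, input size, optimizer settings, and epoch count are assumptions, and the paper's exact training configuration is not specified in the abstract.

```python
# Hedged sketch: binary classification (damaged vs. normal) with a pretrained
# VGG-19 backbone. Paths, hyperparameters, and preprocessing are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

IMG_SIZE = (224, 224)

# Hypothetical directory layout: segmented/{train,test}/{damaged,normal}/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "segmented/train", label_mode="binary", image_size=IMG_SIZE, batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "segmented/test", label_mode="binary", image_size=IMG_SIZE, batch_size=32)

# Frozen VGG-19 convolutional base with a small classification head.
base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),          # simple [0, 1] scaling; exact preprocessing is assumed
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # for the 3-class phase, use Dense(3, activation="softmax")
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=10)
```

The multiclass phase (damaged, less damaged, normal) would follow the same pattern with a three-unit softmax output and categorical cross-entropy loss.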