Researchers develop tools to automatically detect natural disasters in social media images
The tools can help with damage assessment, rescue operations, and public alerts.
Researchers at the University of California, Los Angeles (UCLA) and the University of North Carolina at Chapel Hill (UNC) have developed tools that automatically detect natural disasters in images using deep learning and computer vision. Their work was presented at the Computer Vision and Pattern Recognition (CVPR) 2023 conference.
The tools, called DisasterNet and DisasterMapper, recognize various types of natural disasters, such as fires, floods, earthquakes, hurricanes, and tsunamis, and localize them in images with high accuracy. This can help with damage assessment, rescue operations, and public alerts.
DisasterNet is a neural network that classifies images by disaster type, or flags them as showing no disaster. It was trained on a large dataset of over 100,000 images from sources such as satellites, drones, social networks, and news sites, and it recognizes 12 disaster classes with 94% accuracy.
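The article does not detail DisasterNet's architecture, but the final step of any such classifier is to turn raw per-class scores into a prediction. A minimal sketch of that step, with an entirely hypothetical 12-class label set (only fires, floods, earthquakes, hurricanes, and tsunamis are named in the article), might look like:

```python
import numpy as np

# Hypothetical class list: the article names the first five; the rest
# are illustrative placeholders standing in for the remaining classes.
CLASSES = ["fire", "flood", "earthquake", "hurricane", "tsunami",
           "landslide", "drought", "tornado", "volcanic_eruption",
           "snowstorm", "sandstorm", "no_disaster"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw class scores into probabilities that sum to 1."""
    shifted = logits - logits.max()   # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

def classify(logits: np.ndarray) -> tuple[str, float]:
    """Return the most likely class label and its probability."""
    probs = softmax(logits)
    idx = int(probs.argmax())
    return CLASSES[idx], float(probs[idx])

# Made-up logit vector in which the "flood" score dominates.
logits = np.zeros(len(CLASSES))
logits[1] = 5.0
label, prob = classify(logits)
```

In a real system the logits would come from the trained network's last layer; here they are fabricated to keep the sketch self-contained.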
DisasterMapper is a semantic image segmentation tool: it divides an image into regions corresponding to different objects or categories. Building on DisasterNet's predictions, it generates a segmentation map showing where the disaster appears in the image. DisasterMapper locates natural disasters with 87% accuracy.
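A segmentation map of the kind described here is typically produced by assigning each pixel the class with the highest score. The following sketch, using a tiny fabricated score volume rather than a real model's output, shows that per-pixel decision:

```python
import numpy as np

# Hypothetical per-pixel class scores for a 4x4 image over three
# illustrative classes: 0 = background, 1 = flood, 2 = fire.
# A real segmentation model would produce such a score volume.
rng = np.random.default_rng(0)
scores = rng.random((3, 4, 4))      # shape: (classes, height, width)
scores[1, :2, :] += 2.0             # make "flood" win in the top half

# The segmentation map labels each pixel with its highest-scoring class.
seg_map = scores.argmax(axis=0)     # shape (4, 4), values in {0, 1, 2}
```

Overlaying such a map on the original image is what lets a tool highlight exactly where in the scene the disaster is visible.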
The researchers plan to improve their tools further and to apply them in real-world situations. They also hope their work will advance the use of computer vision for analyzing images of natural disasters and help those affected.