Self-Supervised Deep Learning for Urban Flood Mapping with Bi-temporal Satellite Imagery

Authors: Bo Peng*, University of Wisconsin - Madison, Qunying Huang, University of Wisconsin - Madison
Topics: Hazards, Risks, and Disasters, Remote Sensing, Urban Geography
Keywords: Urban, flood mapping, deep learning, satellite imagery, self-supervised learning
Session Type: Virtual Paper

Floods have brought catastrophic damage to human settlements. As the most frequent natural disaster, floods account for more than 75% of federally-declared disasters in the U.S. Optical remote sensing imagery offers unique advantages for identifying flooded areas owing to the abundant spectral information associated with floodwater. Patch-based image analysis, a special type of object-based image analysis, has received significant attention for mapping floods over heterogeneous urban areas by leveraging supervised deep learning approaches trained on massive amounts of human labels. However, this time-consuming manual labeling process poses challenges for near real-time emergency response, and such models often fail to generalize to other geographical locations. To address the challenges associated with supervised methods, this study proposes a fully automated self-supervised learning framework for patch-wise urban flood mapping with bi-temporal multispectral satellite imagery, requiring no manual labels. Patch-wise change vector analysis, applied to patch features learned by a self-supervised autoencoder, produces patch-wise change maps highlighting potentially flood-affected areas. Post-processing with spectral and spatial filtering then removes non-flood-related changes from these maps. The method was evaluated on two urban flood events: the Hurricane Harvey flood (2017) in Houston, Texas, and the Hurricane Florence flood (2018) in Lumberton, North Carolina. The final flood maps demonstrate strong performance of the proposed method for near real-time self-supervised urban flood mapping.
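The core idea of patch-wise change vector analysis can be illustrated with a minimal sketch. This is not the authors' implementation: here raw patch pixel vectors stand in for the features that the paper learns with a self-supervised autoencoder, and a simple percentile threshold stands in for the paper's decision rule and spectral/spatial post-processing. All function names and parameters are hypothetical.

```python
import numpy as np

def extract_patches(img, patch=8):
    """Split an (H, W, C) image into non-overlapping patch feature vectors.
    In the paper's framework these features would come from a
    self-supervised autoencoder; raw pixels are used here for brevity."""
    H, W, C = img.shape
    ph, pw = H // patch, W // patch
    img = img[:ph * patch, :pw * patch]                      # crop to a whole number of patches
    patches = img.reshape(ph, patch, pw, patch, C).swapaxes(1, 2)
    return patches.reshape(ph * pw, patch * patch * C), (ph, pw)

def patch_change_map(img_pre, img_post, patch=8, q=90):
    """Patch-wise change vector analysis: the change magnitude of each
    patch is the L2 norm of the difference between its pre- and
    post-event feature vectors; patches above the q-th percentile
    are flagged as potentially flood-affected (hypothetical rule)."""
    f_pre, grid = extract_patches(img_pre, patch)
    f_post, _ = extract_patches(img_post, patch)
    mag = np.linalg.norm(f_post - f_pre, axis=1)             # change vector magnitude per patch
    thr = np.percentile(mag, q)                              # simple global threshold
    return (mag > thr).reshape(grid)
```

With real bi-temporal multispectral images, the raw-pixel features would be replaced by the autoencoder's latent codes, and the thresholded map would still need the spectral and spatial filtering described above to suppress non-flood changes.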
