Variational Autoencoder for High-Resolution Remote Sensing Image Retrieval

Authors: Wenxuan Liu* (Wuhan University), Yi Fang (Santa Clara University), Huayi Wu (Wuhan University)
Topics: Remote Sensing
Keywords: Remote sensing image, Image retrieval, Variational autoencoder, Unsupervised learning
Session Type: Paper
Day: 4/14/2018
Start / End Time: 10:00 AM / 11:40 AM
Room: Grand Ballroom A, Astor, 2nd Floor


With the development of remote sensing sensors, a huge volume of high-resolution remote sensing (HRRS) images is now available, which makes detailed image interpretation possible. However, organizing and managing HRRS images effectively has always been a challenge. Recent advances in deep learning have demonstrated its capability to learn powerful feature representations for HRRS images. While large sets of labeled images have been assembled, there are far more images without labels in the remote sensing community. To leverage the vast quantity of these unlabeled images, it is necessary to employ a deep generative image model, which naturally combines the expressiveness of probabilistic generative models with the high capacity of deep neural networks. We propose a novel deep generative model for HRRS image retrieval, which can be interpreted as an encoder-decoder deep neural network and is thus capable of learning complex nonlinear distributed representations of remote sensing images. We conduct a comprehensive set of experiments on two public datasets. The experimental results demonstrate the effectiveness of the proposed unsupervised learning model for HRRS image retrieval.
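To make the encoder-decoder retrieval idea concrete, the following is a minimal sketch of an unsupervised VAE pipeline in PyTorch. The convolutional layer sizes, the 64x64 input resolution, the latent dimensionality, and the use of the posterior mean as the retrieval descriptor are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch: convolutional VAE trained without labels, whose latent
# codes are used as descriptors for nearest-neighbour image retrieval.
# All architectural choices below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: image -> convolutional features -> (mu, logvar)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder: latent code -> reconstructed image
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        x_hat = self.decoder(self.fc_dec(z).view(-1, 128, 8, 8))
        return x_hat, mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def retrieve(model, query, database, k=5):
    # After unsupervised training, the posterior mean mu serves as the image
    # descriptor; retrieval is nearest-neighbour search in latent space.
    with torch.no_grad():
        q_mu, _ = model.encode(query)         # (1, latent_dim)
        db_mu, _ = model.encode(database)     # (N, latent_dim)
        dist = torch.cdist(q_mu, db_mu)       # Euclidean distances, (1, N)
        return dist.topk(k, largest=False).indices.squeeze(0)
```

Because training needs only the reconstruction and KL terms, the model can exploit large unlabeled HRRS collections; the distance metric and value of k are further assumptions a practitioner would tune.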
