xNet+SC: Classifying Places Based on Images by Incorporating Spatial Contexts

Authors: Bo Yan*, University of California, Santa Barbara; Krzysztof Janowicz, University of California, Santa Barbara; Gengchen Mai, University of California, Santa Barbara; Rui Zhu, University of California, Santa Barbara
Topics: Geographic Information Science and Systems
Keywords: Spatial context, Image classification, Place types, Convolutional neural network, Recurrent neural network
Session Type: Paper
Day: 4/5/2019
Start / End Time: 5:00 PM / 6:40 PM
Room: Washington 6, Marriott, Exhibition Level


With recent advances in deep convolutional neural networks, researchers in geographic information science have gained access to powerful models for challenging problems such as extracting objects from satellite imagery. However, because the underlying techniques are essentially borrowed from other research fields, e.g., computer vision or machine translation, they are often not spatially explicit. In this paper, we demonstrate how utilizing the rich information embedded in spatial contexts (SC) can substantially improve the classification of place types from images of their facades and interiors. By experimenting with different types of spatial contexts, namely spatial relatedness, spatial co-location, and spatial sequence pattern, we improve the accuracy of state-of-the-art models such as ResNet, which are known to outperform humans on the ImageNet dataset, by over 40%. Our study raises awareness of the value of leveraging spatial contexts, and domain knowledge in general, in advancing deep learning models, thereby also demonstrating that theory-driven and data-driven approaches are mutually beneficial.
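
To make the idea of fusing image evidence with spatial context concrete, the sketch below combines a CNN's class probabilities with a co-location prior built from the place types observed nearby. This is a minimal illustration under assumed inputs, not the xNet+SC implementation described in the abstract; the place types, probabilities, fusion weight, and function names are all hypothetical.

```python
# Illustrative sketch only, not the authors' xNet+SC implementation: fuse an
# image classifier's class probabilities with a spatial co-location prior
# derived from the place types observed around the target location.
# All place types, probabilities, and weights below are hypothetical.
import numpy as np

PLACE_TYPES = ["restaurant", "bank", "school"]

def colocation_prior(nearby_types, smoothing=1.0):
    """Build a prior over candidate place types from smoothed relative
    frequencies of the types found nearby."""
    counts = np.full(len(PLACE_TYPES), smoothing)
    for t in nearby_types:
        counts[PLACE_TYPES.index(t)] += 1.0
    return counts / counts.sum()

def fuse(image_probs, prior, weight=0.5):
    """Combine image-based probabilities with the spatial prior in log space
    and renormalize; `weight` controls how much the context matters."""
    scores = (1 - weight) * np.log(image_probs) + weight * np.log(prior)
    scores = np.exp(scores - scores.max())
    return scores / scores.sum()

# Hypothetical case: the CNN slightly favors "bank" from the facade image,
# but the surrounding block is dominated by restaurants.
image_probs = np.array([0.40, 0.45, 0.15])
prior = colocation_prior(["restaurant", "restaurant", "bank", "restaurant"])
print(fuse(image_probs, prior))  # the spatial context tips the decision to "restaurant"
```

Judging from the keywords, the spatial sequence pattern variant in the paper involves a recurrent neural network over sequences of nearby place types rather than a simple frequency prior like the one above.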
