PlanetSense Lènz: Contextual Deep Learning Framework for Joint Object Detection & Scene Classification of Ground-Level Photos

Authors: Seema Chouhan*, Oak Ridge National Laboratory, Gautam Thakur, Oak Ridge National Laboratory
Topics: Quantitative Methods, Geographic Information Science and Systems
Keywords: Location-based social networks (LBSNs), Context-aware, Deep Learning, Scene Classification, Semantic Segmentation
Session Type: Paper
Day: 4/5/2019
Start / End Time: 8:00 AM / 9:40 AM
Room: Congressional B, Omni, West


The recent popularity of location-based social networks (LBSNs) and the wide availability of geotagged images open an opportunity to gather valuable location-related information across the entire world. We propose PlanetSense Lènz, a framework that uses ground-level photos to improve location intelligence, such as context-aware Points of Interest (POI) mapping, and to support the development of activity-based intelligence (ABI) applications. We perform multisource data fusion on information extracted from ground-level images obtained from different LBSNs and Google Street View. In our framework, we use deep convolutional neural network architectures to accurately detect the categories of the corresponding POIs. We extract relevant features from these ground-level photos and combine semantic image segmentation, scene classification for both indoor and outdoor scenes, and object and boundary detection. Objects detected in geotagged ground-level imagery from LBSNs will be compared with objects detected in georeferenced Google Street View images to correctly identify the categories of the corresponding POIs. PlanetSense Lènz will help improve the positional accuracy and mapping of POIs and reduce the occurrence of miscategorized POIs. The proposed framework has many uses, from POI/AOI (Area of Interest) discovery and urban planning to studying human activity patterns.
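
To make the cross-source fusion step more concrete, the sketch below shows one way the labels produced by an object detector (run on both the LBSN photo and the co-located Street View image) and an indoor/outdoor scene classifier could be combined into a single POI category by weighted voting. The abstract does not specify the actual fusion rule, category ontology, or weights, so the OBJECT_TO_POI mapping, the fuse_poi_category function, and the vote weights below are purely illustrative assumptions, not the authors' method.

```python
# Minimal, hypothetical fusion sketch: vote on a POI category from
# (a) object labels detected in an LBSN photo, (b) object labels detected
# in a georeferenced Street View image of the same location, and
# (c) the label produced by an indoor/outdoor scene classifier.
from collections import Counter

# Assumed mapping from detected object labels to candidate POI categories.
OBJECT_TO_POI = {
    "dining table": "restaurant",
    "wine glass": "restaurant",
    "bench": "park",
    "sports ball": "park",
    "book": "library",
    "traffic light": "street",
}


def fuse_poi_category(lbsn_objects, street_view_objects, scene_label,
                      scene_weight=2):
    """Return the POI category with the most votes across both image
    sources and the scene classifier."""
    votes = Counter()

    # Each mapped object in either source contributes one vote.
    for label in list(lbsn_objects) + list(street_view_objects):
        if label in OBJECT_TO_POI:
            votes[OBJECT_TO_POI[label]] += 1

    # Objects confirmed in both the LBSN photo and the Street View image
    # are treated as stronger evidence and get an extra vote.
    for label in set(lbsn_objects) & set(street_view_objects):
        if label in OBJECT_TO_POI:
            votes[OBJECT_TO_POI[label]] += 1

    # The scene classifier contributes a weighted vote of its own.
    votes[scene_label] += scene_weight

    return votes.most_common(1)[0][0] if votes else "unknown"


# Example: a geotagged check-in photo showing tables and glasses, a Street
# View image of the same location, and a scene classifier output.
print(fuse_poi_category(
    lbsn_objects=["dining table", "wine glass", "person"],
    street_view_objects=["dining table", "bench"],
    scene_label="restaurant",
))  # -> "restaurant"
```

In practice the framework's deep networks would supply the object and scene labels; the sketch only illustrates how agreement between an LBSN photo and the corresponding Street View image can be weighted as stronger evidence for a POI category.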
