Authors: Shihong Du*, Peking University; Xiuyuan Zhang, Peking University
Topics: Geography and Urban Health, Urban and Regional Planning, Remote Sensing
Keywords: Urban landscape, urban functional zones, object-based image analysis, multiscale image segmentation
Session Type: Paper
Start / End Time: 9:55 AM / 11:35 AM
Room: Balcony A, Marriott, Mezzanine Level
Geographic object-based image analysis (GEOBIA) has been widely used to analyze homogeneous patches in very-high-resolution (VHR) images. However, urban areas are dominated by heterogeneous, man-made patches characterized by diverse urban functional zones, ranging from relatively natural systems to human-dominated environments. Because GEOBIA can only analyze individual objects and ignores their spatial patterns, it cannot satisfy the demands of landscape-ecology investigation. Accordingly, a novel strategy of geoscene-based image analysis (GEOSBIA) is proposed in this report to model landscape patterns and analyze the functions of landscape-ecology zones (i.e., "geoscenes") at multiple scales. To investigate urban landscapes with geoscenes, we develop two techniques: geoscene segmentation and hierarchical semantic classification. First, a multi-level aggregation method is proposed to extract multiscale geoscenes from VHR images. Second, a hierarchical semantic classification method is presented to classify the generated geoscenes into different categories of urban functional zones using a hierarchical Bayesian approach that considers four semantic layers (i.e., visual features, object semantics, spatial object patterns, and geoscene categories). The presented methods are used to map urban functional zones in the cities of Beijing, Zhuhai, and Putian in China. The experimental results indicate that our methods can effectively extract urban functional zones from VHR imagery and POI data, producing more accurate classifications than Support Vector Machine (SVM) and Latent Dirichlet Allocation (LDA), with an overall accuracy of 90.8%.
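The hierarchical chain over the four semantic layers can be sketched as a sequence of conditional-probability tables that propagate a visual-feature distribution up to a posterior over functional-zone categories. The sketch below is a hypothetical toy illustration of that layered Bayesian idea, not the authors' trained model: all layer sizes, class names, and probability values are invented for demonstration.

```python
import numpy as np

# Toy conditional-probability tables linking the four semantic layers named
# in the abstract. All values are invented placeholders (assumptions), and
# the layer dimensions (3 visual clusters, 3 object classes, 2 spatial
# patterns, 2 zone categories) are chosen only to keep the example small.

# P(object semantics | visual features): rows = visual-feature clusters,
# cols = object classes (e.g., building, vegetation, road).
p_obj_given_vis = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.1, 0.7],
])

# P(spatial pattern | object semantics): rows = object classes,
# cols = pattern types (e.g., compact, dispersed).
p_pat_given_obj = np.array([
    [0.9, 0.1],
    [0.3, 0.7],
    [0.6, 0.4],
])

# P(geoscene category | spatial pattern): rows = pattern types,
# cols = functional-zone categories (e.g., residential, park).
p_zone_given_pat = np.array([
    [0.80, 0.20],
    [0.25, 0.75],
])

def classify_geoscene(p_visual):
    """Propagate a visual-feature distribution through the semantic
    hierarchy and return the posterior over geoscene categories."""
    p_obj = p_visual @ p_obj_given_vis   # marginalize out visual features
    p_pat = p_obj @ p_pat_given_obj      # marginalize out object semantics
    p_zone = p_pat @ p_zone_given_pat    # marginalize out spatial patterns
    return p_zone / p_zone.sum()         # renormalize for safety

# Example: a geoscene whose visual features lean toward the first cluster.
posterior = classify_geoscene(np.array([0.6, 0.3, 0.1]))
print(posterior)
```

In the real method each conditional table would be learned from labeled VHR imagery and POI data rather than specified by hand, but the matrix chain shows how evidence at the lowest layer flows upward to a geoscene-level decision.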