Deep Convolutional Neural Network Training Enrichment using Multi-View Object-based Analysis of Unmanned Aerial Systems Imagery for Wetlands Classification

Authors: Tao Liu*, University of Florida, Amr Abd-Elrahman, School of Forest Resources and Conservation, University of Florida
Topics: Remote Sensing, Land Use, Agricultural Geography
Keywords: Deep learning, convolutional neural network, object-based classification, random forest, support vector machine, small UAS, OBIA
Session Type: Paper
Day: 4/11/2018
Start / End Time: 8:00 AM / 9:40 AM
Room: Studio 7, Marriott, 2nd Floor
Presentation File: No File Uploaded


Deep convolutional neural networks (DCNNs) require massive training datasets to unlock their image classification power, while collecting training samples for remote sensing applications is usually expensive. When a DCNN is simply combined with traditional object-based image analysis (OBIA) to classify an Unmanned Aerial Systems (UAS) orthoimage, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that takes advantage of DCNNs by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on the multi-view UAS images instead of directly on the orthoimage and obtains the final result via a voting procedure. Ten-fold cross-validation results show the mean overall classification accuracy increasing substantially from 65.32%, when the DCNN was applied to the orthoimage, to 82.08%, when MODe was implemented. This study also compared the performance of support vector machine (SVM) and random forest (RF) classifiers with the DCNN under both the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the accuracy advantage of the DCNN over the traditional classifiers is more pronounced within the proposed multi-view OBIA framework than within the traditional OBIA framework.
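The per-object voting step described above can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the `vote` function and class labels are hypothetical placeholders.

```python
# Minimal sketch of the majority-voting step in a multi-view OBIA workflow:
# each image object receives one class prediction per overlapping UAS view,
# and the final label is the modal (most frequent) prediction.
from collections import Counter

def vote(per_view_predictions):
    """Return the majority class among one object's multi-view predictions."""
    counts = Counter(per_view_predictions)
    return counts.most_common(1)[0][0]

# Example: one wetland object classified in five overlapping UAS views.
views = ["marsh", "marsh", "open water", "marsh", "shrub"]
print(vote(views))  # -> marsh
```

Because each object typically appears in several overlapping UAS frames, the per-view classifications act as an automatically enriched training and prediction set, and voting suppresses errors made in any single view.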
