Note: All session times are in Mountain Daylight Time.

3D LiDAR Point-Cloud Classification using Deep Learning

Authors: Andong Ma*, Texas A&M University, Anthony M. Filippi, Texas A&M University
Topics: Remote Sensing, Geographic Information Science and Systems
Keywords: LiDAR, Classification, Deep Learning, PointNet, PointNet++
Session Type: Paper
Presentation File: No File Uploaded


Light Detection and Ranging (LiDAR) is a unique remote-sensing technology in which light in the form of a pulsed laser is used to measure ranges (variable distances) to objects or targets, such as features on the Earth's surface. These light pulses generate precise, three-dimensional information about the shape of the target objects and their surface characteristics. Moreover, by incorporating other data recorded by passive optical imaging systems, 3D model reconstruction can be implemented with higher precision. Over the past decade, successful applications of deep learning, especially in computer vision and autonomous driving, have attracted increasing attention from the remote-sensing and geoscience communities. An important research topic is the classification of individual 3D points into semantic labels. In this study, two deep-learning algorithms originally designed for 3D point-cloud classification (i.e., PointNet and PointNet++) were employed to investigate their classification performance on aerial LiDAR datasets. We conduct experiments on two benchmark LiDAR datasets: one provided by the ISPRS 3D Semantic Labeling Contest, and the other released for the 2019 IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest. Experimental results illustrate that these deep learning-based algorithms can achieve promising accuracies and robust classification performance at different scales.
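To make the per-point semantic-labeling idea concrete, the following is a minimal NumPy sketch of the PointNet data flow used for segmentation: a shared multilayer perceptron applied identically to every point, a symmetric max-pooling that produces an order-invariant global feature, and a per-point classifier that sees both local and global features. The weights here are random placeholders (this is an illustrative, untrained sketch, not the authors' implementation), and all dimensions and function names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def shared_mlp(points, w, b):
    # Applies the SAME weights to every point: (N, d_in) -> (N, d_out).
    # Because the weights are shared, the output is permutation-equivariant.
    return relu(points @ w + b)

def pointnet_segment(points, n_classes=4, d_feat=16):
    # Hypothetical, untrained PointNet-style segmentation pass.
    n, d_in = points.shape
    w1 = rng.standard_normal((d_in, d_feat));        b1 = np.zeros(d_feat)
    w2 = rng.standard_normal((2 * d_feat, n_classes)); b2 = np.zeros(n_classes)

    local = shared_mlp(points, w1, b1)     # (N, d_feat) per-point features
    global_feat = local.max(axis=0)        # (d_feat,) symmetric max-pool:
                                           # invariant to point ordering
    # Concatenate the global context back onto every point's local feature:
    combined = np.concatenate(
        [local, np.broadcast_to(global_feat, (n, d_feat))], axis=1
    )                                      # (N, 2 * d_feat)
    logits = combined @ w2 + b2            # (N, n_classes)
    return logits.argmax(axis=1)           # one semantic label per point

# 100 synthetic "LiDAR" points with (x, y, z) coordinates:
cloud = rng.standard_normal((100, 3))
labels = pointnet_segment(cloud)
print(labels.shape)  # (100,): one predicted class per point
```

PointNet++ extends this scheme by applying such shared-MLP units hierarchically over local neighborhoods, so features are learned at multiple spatial scales rather than from one global pooling.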
