Using deep learning and high accuracy regions for vehicle detection and traffic density estimation from traffic camera feeds

Authors: Yue Lin*, The Ohio State University, Ningchuan Xiao, The Ohio State University
Topics: Geographic Information Science and Systems, Transportation Geography, Cyberinfrastructure
Keywords: traffic count, object detection, deep neural network, quadtree, image segmentation
Session Type: Virtual Paper
Day: 4/10/2021
Start / End Time: 4:40 PM / 5:55 PM
Room: Virtual 43


The growing number of real-time camera feeds in many urban areas has made it possible to estimate traffic density, providing data support for a wide range of applications such as effective traffic management and control. However, reliable estimation of traffic density remains a challenge due to variable camera conditions (e.g., mounting heights and resolutions). In this work, we develop an efficient and accurate traffic density estimation method by introducing a quadtree segmentation approach that extracts the regions of an image where vehicle detection is most reliable, which we call High-Accuracy Identification Regions (HAIR). A state-of-the-art object detection model, EfficientDet, is applied to identify the motor vehicles in traffic images, and vehicles within the HAIR are then used for traffic density estimation. The proposed method is validated using images from traffic cameras of different heights and resolutions in Central Ohio. The results show that the HAIR extracted by our method supports high vehicle detection accuracy with a small amount of labeled data. In addition, incorporating the HAIR into deep learning-based object detection significantly improves the accuracy of density estimation, especially for cameras that are mounted high and have low resolution.
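The abstract does not give implementation details, but the core idea can be illustrated with a minimal sketch: recursively split the image into quadrants, keep only quadrants whose per-region detection accuracy (measured against a small labeled sample) exceeds a threshold, and then count only the detections whose centers fall inside the retained regions. The function names, the accuracy callback, and all thresholds below are hypothetical illustrations, not the authors' implementation.

```python
def quadtree_hair(x, y, w, h, accuracy_fn, threshold=0.8, min_size=32):
    """Return a list of (x, y, w, h) leaf regions judged high-accuracy.

    accuracy_fn(x, y, w, h) is assumed to report vehicle detection
    accuracy within that region, e.g. estimated from labeled samples.
    """
    if accuracy_fn(x, y, w, h) >= threshold:
        return [(x, y, w, h)]            # region is accurate enough: keep whole
    if w <= min_size or h <= min_size:
        return []                        # too small and still inaccurate: discard
    hw, hh = w // 2, h // 2              # otherwise, recurse into four quadrants
    regions = []
    for qx, qy, qw, qh in [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                           (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]:
        regions += quadtree_hair(qx, qy, qw, qh, accuracy_fn, threshold, min_size)
    return regions

def density_in_hair(detections, hair):
    """Count detections (given as center points) that lie inside any HAIR leaf."""
    def inside(cx, cy, region):
        x, y, w, h = region
        return x <= cx < x + w and y <= cy < y + h
    return sum(any(inside(cx, cy, r) for r in hair) for cx, cy in detections)

# Toy usage: pretend detection is accurate only in the lower half of a
# 256x256 frame (vehicles near the camera), so only those leaves survive.
hair = quadtree_hair(0, 0, 256, 256,
                     lambda x, y, w, h: 0.9 if y >= 128 else 0.5)
count = density_in_hair([(10, 10), (200, 200), (64, 140)], hair)
```

In this toy setup the two bottom quadrants are kept as HAIR and only the two detections inside them are counted, which mirrors how the paper's density estimate ignores detections from unreliable parts of the frame.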
