Authors: Xiao Huang*, University of South Carolina, Zhenlong Li, University of South Carolina, Cuizhen Wang, University of South Carolina
Topics: Geographic Information Science and Systems, Cyberinfrastructure, Hazards, Risks, and Disasters
Keywords: Social media, natural disaster, convolutional neural network, visual-textual fused classification
Session Type: Paper
Start / End Time: 5:00 PM / 6:40 PM
Room: 8228, Park Tower Suites, Marriott, Lobby Level
In recent years, social media platforms have played a critical role in mitigating a wide range of disasters. The highly up-to-date social responses and vast spatial coverage provided by millions of citizen sensors enable timely and comprehensive disaster investigation. However, automatically retrieving on-topic social media posts, especially when both their visual and textual information must be considered, remains a challenge. This paper presents a novel machine learning approach that labels on-topic social media posts using a visual-textual fused convolutional neural network (CNN) architecture. Two CNNs, an Inception-V3 CNN and a word-embedded CNN, extract visual and textual features, respectively, from geotagged social media posts. Once trained on our purpose-built training sets, the extracted visual and textual features are concatenated into a fused feature that feeds the final classification step. The results suggest that both CNNs perform remarkably well in learning visual and textual features, and that the fused feature yields a more robust classification than textual features alone. The on-topic posts, classified automatically by their text and images, provide timely documentation of an unfolding event. Coupled with the rich spatial context of geotagged posts, this approach could greatly aid a variety of disaster mitigation efforts that leverage social media data.
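The fusion step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature dimensions (2048 for the Inception-V3 pooled output, 300 for the text CNN), the random placeholder features, and the single softmax head are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholders standing in for the two feature extractors the abstract
# describes: Inception-V3's pooled visual feature (2048-d is its standard
# pooling width) and a word-embedded text CNN's feature vector (300-d is
# an illustrative choice, not from the paper).
visual_feat = rng.standard_normal(2048)
textual_feat = rng.standard_normal(300)

# Fusion by concatenation, as described in the abstract.
fused = np.concatenate([visual_feat, textual_feat])  # shape (2348,)

# A single softmax layer as the on-topic / off-topic classifier head
# (an assumption; the abstract does not specify the final classifier).
W = rng.standard_normal((2, fused.size)) * 0.01
b = np.zeros(2)
logits = W @ fused + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In a full model, `visual_feat` and `textual_feat` would come from the two trained CNN branches, and the classifier head would be trained jointly on the concatenated vector.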