Authors: Dylan Keon*, Oregon State University; Chris Daly, Oregon State University
Topics: Cyberinfrastructure, Geographic Information Science and Systems, Climatology and Meteorology
Keywords: Geocomputation, GIS, Time Series, Web Mapping, Automation
Session Type: Paper
Start / End Time: 9:55 AM / 11:35 AM
Room: Marshall North, Marriott, Mezzanine Level
Ongoing operational data collection and the creation of weather and climate data products at fine spatial and temporal scales place increasing pressure on hardware and software systems to effectively store, manage, analyze, explore, visualize, and distribute an ever-growing quantity of output datasets. The PRISM Climate Group at Oregon State University models several climate variables at multiple spatial scales across the conterminous US at daily, monthly, and annual temporal scales (from 1895 to the present), producing large volumes of processed data for external use, along with source data and intermediate products retained for internal use. A complex backend system (a computational cluster, dedicated data ingestion and processing servers, a fast network, etc.) drives the automated production of these data. Primarily web-based tools, both internal and external, distribute the external data products (the 4 km resolution products receive about 700,000 downloads per month) and allow users to retrieve detailed information for a particular location, explore anomalies and drought conditions, and compare average precipitation and temperature conditions over time. A web-based Data Explorer provides fast, real-time retrieval of time-series data drawn directly from the full range of spatiotemporal raster datasets; it runs on multiple dedicated servers configured specifically for this purpose, with software designed to exploit spatial consistency, data format, memory configuration, and filesystem organization to improve retrieval speed. Modeling and producing data at finer spatial and temporal scales will require hardware and software improvements, and possibly a move to GPU-based computing, particularly for areal requests.
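The abstract notes that point time-series retrieval can be made fast by exploiting spatial consistency and data layout. A minimal sketch of that idea (not PRISM's actual implementation; grid dimensions, file format, and names here are hypothetical): when every daily grid shares one grid definition and a fixed-width binary cell format, a cell's byte offset is identical in every file, so a location's full time series can be read with one seek and one small read per file, never loading whole rasters.

```python
import os
import struct
import tempfile

# One little-endian float32 value per grid cell (assumed format).
CELL = struct.Struct("<f")

def cell_offset(row, col, ncols):
    # Because all grids in the stack share one layout, this offset is
    # the same in every file -- the key to fast time-series reads.
    return (row * ncols + col) * CELL.size

def point_series(paths, row, col, ncols):
    # One seek + one 4-byte read per daily grid file.
    off = cell_offset(row, col, ncols)
    values = []
    for p in paths:
        with open(p, "rb") as f:
            f.seek(off)
            values.append(CELL.unpack(f.read(CELL.size))[0])
    return values

# Tiny demo: three "daily" 4x5 grids where cell i on day d holds d*100 + i.
ncols, nrows = 5, 4
tmpdir = tempfile.mkdtemp()
paths = []
for day in range(3):
    p = os.path.join(tmpdir, f"grid_{day}.bin")
    with open(p, "wb") as f:
        for i in range(nrows * ncols):
            f.write(CELL.pack(float(day * 100 + i)))
    paths.append(p)

series = point_series(paths, row=2, col=3, ncols=ncols)
# cell index = 2*5 + 3 = 13, so the series is [13.0, 113.0, 213.0]
```

Real systems layer more on top (compression, tiling, caching, memory-mapped files), but the constant-offset property is what lets a consistent grid and filesystem organization translate into real-time retrieval.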