Authors: Sam Lumley*, McGill University, Renee Sieber, McGill University
Topics: Geographic Information Science and Systems, Communication, Climatology and Meteorology
Keywords: climate change, visualization, evaluation, interview
Session Type: Virtual Paper
Start / End Time: 9:35 AM / 10:50 AM
Room: Virtual 16
A central challenge for developers of interactive climate change visualization tools is evaluating their impact in real-world settings. This is particularly difficult for web-based visualizations, where directly observing end users is often impractical. To better understand the evaluation metrics practitioners use, we conducted semi-structured interviews with 22 developers of climate visualization tools from government, journalism, science outreach and research. We asked developers about their intended outcomes, how they gauged success and the challenges they faced. Our findings reveal a wide range of success metrics in use, including number of users, theoretical efficacy, level of community engagement, amount of media coverage and policy impact. Developers often triangulate multiple evaluation methods to meet the challenge of evaluating outcomes directly. During development, evaluations included controlled user studies, expert consultation, market analysis and informal feedback. During dissemination, success was often measured through user feedback, web analytics and impact assessments. These findings suggest that creating interactive climate visualizations is an increasingly interdisciplinary process, requiring skills in design, web development, climate science, marketing, user research and outreach. To meet these challenges, we encourage wider discussion of the gaps between intended outcomes and real-world success.