Authors: Jing Gao*, University of Delaware
Topics: Spatial Analysis & Modeling, Geographic Information Science and Systems, Land Use and Land Cover Change
Keywords: Spatial Analysis and Modeling, Errors, Uncertainties, Monte Carlo Simulations, Land Cover/Land Use Change
Session Type: Paper
A Monte Carlo (MC) experiment is a commonly used method for propagating thematic mapping errors and investigating their potential impacts on the endpoint of geospatial analysis and modeling. An MC experiment uses known probability distributions (often not spatially explicit) of thematic mapping errors to simulate supposedly plausible error maps, adds these simulated errors to the original geospatial data, repeats the planned analysis and/or modeling procedures on the error-added data, and examines how the simulated errors affect the final results. This kind of uncertainty analysis has many advantages; however, we argue that if the probability distributions used as boundary conditions for simulating spatial error patterns are too crude (as is often the case in practice), the uncertainty space spanned by the repeated error simulations is likely too wide and can significantly bias the MC experiment’s conclusions about the output uncertainty range of the planned analysis and/or modeling. In this presentation, we demonstrate this idea with an empirical application of an MC experiment to simple land cover/land use change analyses (e.g., spatial change detection) for selected landscapes in the Midwest U.S., with the error maps simulated using commonly available information on land cover/land use mapping errors (i.e., confusion matrices). The empirical results strongly support the conceptual argument, suggesting that much more detailed information on the spatial and probabilistic distributions of thematic mapping errors (than what is typically available) is necessary for MC experiments to be useful.
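The MC procedure described above (simulate error maps from a confusion matrix, add them to the data, repeat the analysis, and examine the spread of results) can be sketched as follows. This is a minimal illustration, not the authors' actual experiment: the confusion matrix, the two land cover maps, the class count, and the 200-run setting are all hypothetical, and the simulated errors are drawn independently per pixel, i.e., without any spatially explicit error structure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-class confusion matrix (rows: mapped class, cols: reference
# class), given as counts. Row-normalizing yields, for each mapped class, the
# probability distribution over plausible true classes.
confusion = np.array([[90, 7, 3],
                      [5, 85, 10],
                      [2, 8, 90]], dtype=float)
p_true_given_mapped = confusion / confusion.sum(axis=1, keepdims=True)

def perturb(land_cover, probs, rng):
    """Simulate one plausible error-added map: each pixel's class label is
    redrawn from the confusion-matrix row of its mapped class (no spatial
    structure in the simulated errors)."""
    out = np.empty_like(land_cover)
    for c in range(probs.shape[0]):
        mask = land_cover == c
        out[mask] = rng.choice(probs.shape[1], size=mask.sum(), p=probs[c])
    return out

# Two hypothetical land cover maps (dates t1 and t2) on a 100x100 grid,
# with roughly 10% of pixels assigned a fresh class at t2.
t1 = rng.integers(0, 3, size=(100, 100))
t2 = t1.copy()
change_mask = rng.random(t1.shape) < 0.10
t2[change_mask] = rng.integers(0, 3, size=change_mask.sum())

# MC experiment: repeat a simple change detection (fraction of pixels whose
# class differs between dates) on error-added versions of both maps.
n_runs = 200
changed_fraction = np.empty(n_runs)
for i in range(n_runs):
    e1 = perturb(t1, p_true_given_mapped, rng)
    e2 = perturb(t2, p_true_given_mapped, rng)
    changed_fraction[i] = np.mean(e1 != e2)

print(f"change detected without simulated errors: {np.mean(t1 != t2):.3f}")
print(f"MC mean +/- std across {n_runs} runs:     "
      f"{changed_fraction.mean():.3f} +/- {changed_fraction.std():.3f}")
```

Because the simulated errors in the two dates are independent, spurious per-pixel disagreements inflate the detected change well above the error-free value, which is exactly the kind of overly wide (and biased) uncertainty range the abstract argues results from crude, non-spatial error distributions.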