In an era where computer vision classification accuracy is said to be approaching ‘human performance,’ it is important to view these claims with scientific skepticism, taking care to evaluate the methods they employ and the metrics against which they were assessed. Classification accuracy has long been a focus, and considerable principles and practices for measuring and assessing error and bias have solidified in the field. However, modern deep convolutional neural networks (CNNs) present a new challenge for measuring error and bias – deep CNNs excel at pattern detection, and bias is a powerful pattern. While the concept of optimistic bias, attributable in part to homogeneous sampling, is well known in remote sensing, modern classifiers can exhibit this bias in ways that traditional accuracy assessments may not accurately measure. The consequence of this homogeneous-sample-induced bias is that homogeneous areas tend to have very high classification accuracy, while the transition zones between classes exhibit lower accuracy rates, or worse, may be no more accurate than a random assignment of classes.
Specifically, this session will:
1) examine how modern classifiers exhibit bias;
2) quantify how this error manifests itself spatially in unique ways;
3) and discuss practical solutions to better measure and mitigate bias.
To present a paper in this session, please send your abstract and the Participation Identification Number (PIN) to Kevin Magee (firstname.lastname@example.org) or Eliza Bradley (email@example.com).
Classification accuracy assessment is arguably the most important step of the classification process – without it, the results are an uncertain collection of opinions. It is the measurement of error, both thematic and spatial, that provides confidence in the quality of the data, its relative strengths and weaknesses across classes, and its locational reliability, and gives scientists a means to communicate its value and limitations for particular applications not only to one another, but also to customers and policy makers. Working toward a greater understanding of how bias can differentially impact modern classification methods such as deep CNNs is key to advancing the state of this field and to ensuring that we properly assess and mitigate bias across the wider computer vision and remote sensing community.
Approved for public release, 20-069.
| Role | Presenter / Title | Minutes | Time |
|---|---|---|---|
| Introduction | Eliza Bradley, Penn State University | 15 | 9:35 AM |
| Presenter | Emily Evenden*, Clark University; Robert Gilmore Pontius Jr, Clark University – "Categorical independent variables should be encoded as Empirical Probability, not as Evidence Likelihood, for input to the Multi-Layer Perceptron neural network" | 15 | 9:50 AM |
| Presenter | Huijie Zhang*, San Diego State University – "Adaptive Convolutional Neural Network (A-CNN) Model for Small Urban Element Detection" | 15 | 10:05 AM |
| Discussant | Elizabeth Delgado, Pennsylvania State University | 15 | 10:20 AM |
| Discussant | Kevin Magee, NGA | 15 | 10:35 AM |