Authors: Jeremy Crampton*, Newcastle University; Renee Sieber, McGill University
Topics: Cyberinfrastructure, Spatial Analysis & Modeling, Transportation Geography
Keywords: AI, Counter-AI, GeoAI, Justice, Machine Learning
Session Type: Paper
Inspired by the conceptual and methodological advances achieved by counter-mapping and critical GIS, we propose a counter-AI that would be co-designed by the people and that would not treat them as data subjects. Today’s AI systems, including GeoAI, are often trained on labelled datasets (cat/not a cat; damaged/undamaged building) via supervised learning. Even if biases in the training data could be eliminated (unlikely), people are still treated as data subjects and exploited for machine learning. In unsupervised or adversarial learning, where the AI learns from unlabelled data, humans can likewise be treated as passive data subjects.
“Ethical AI”, which provides informed consent over data extraction, is a step in the right direction but falls short because it is confined to the ethics of data input/output. Researchers at the latest Fairness, Accountability and Transparency in Machine Learning conference noted the limitations of their own framing, calling instead for “justice, liberation and rights” in ML. In this presentation we propose and explore the implications of a people’s AI: a counter-AI that would be designed by the people. A key research challenge is how non-experts can genuinely design (not just test, or serve as “humans in the loop” for) a complex socio-technical object such as an AI. A second challenge is to determine how transferable a trained AI is from one place to another, or whether AI is in some sense always place-based. We explore these challenges in light of a proposed joint Canadian-UK research project on providing mobility-as-a-service, and describe how we propose to collaborate with citizens.