An Edgelands Institute/Urban AI Research Project
Surveillance technologies, including those powered by AI systems, are increasingly being deployed in cities around the world. Cities in China are the most surveilled, with 372 cameras per thousand inhabitants, compared to 8.7 per thousand in Los Angeles, the most surveilled US city in relative terms. Given the recent rise in violent crime and homicides in the United States, heightened security measures brought about by health restrictions during the pandemic, large-scale venues, and other security challenges faced by cities worldwide, city governments are responding by expanding their surveillance systems, including CCTV, live video streams, and AI-powered facial recognition software.
Even though surveillance systems are an essential tool for law enforcement and urban security strategies, AI surveillance technologies pose a set of risks and challenges for citizens and decision makers. For example, when used by police, government agencies, or private companies, facial recognition systems can compromise people's privacy, freedom of expression, and data security. This technology empowers governments to conduct mass surveillance in public spaces and limits our ability to remain anonymous and to be left alone. How, then, can cities balance the use of surveillance as a security tool with people's rights and with their own duty not to cause harm?
We believe a key part of the answer lies in the ability of cities to regulate AI technologies. After all, cities are producers and users of vast amounts of data; they are where the largest share of the population is concentrated and where violent crime is rising. Through a variety of local regulatory instruments, a handful of cities have begun to regulate surveillance systems: from outright bans on biometric and predictive policing systems in Oakland and Portland, to local AI registers in Helsinki and Amsterdam, to surveillance ordinances in Seattle (among others), cities are acting in this space to mitigate harms from algorithmic systems and to put safeguards in place that ensure accountability.
Together with Urban AI and the AI Localism Project, The Edgelands Institute is conducting a joint research/action project on local surveillance ordinances and tools for algorithmic accountability in the sphere of public safety. Through this effort, we aim to 1) better understand how to govern AI for safety and/or surveillance technologies in cities, and 2) provide a resource for international stakeholders to respond to the risks that come with adopting surveillance technologies.
This resource will take the shape of a “policy roadmap” and playbook that cities and local stakeholders can use to guide their efforts to mitigate risks and increase public trust as they implement automated decision-making systems for surveillance and/or safety applications. While taking into account best practices and standard-setting, the playbook will also highlight key context-specific considerations that actors must address. The project aims to build on work such as the ACLU’s model bill for Community Control over Police Surveillance (CCOPS) and the “Algorithmic Accountability for the Public Sector” report by the Ada Lovelace Institute, the AI Now Institute, and the Open Government Partnership. Our project aims to provide decision makers, policymakers, and advocacy groups with a set of practical tools and good practices for regulating and implementing AI surveillance technology. The playbook will address the challenges that municipalities face when implementing transparency principles, impact assessments, and auditing processes for algorithmic systems.
We will be conducting research over the next three months on approaches to algorithmic accountability and surveillance regulation, with a focus on the role of local actors. Through this, we aim to learn what has worked so far and what could be done better, as well as the challenges that arise in the implementation of such policies. We are now wrapping up the literature review phase, and our next step will be conducting 20-30 interviews with stakeholders in three case study cities. The case studies will allow us to dive deeper into the role that local context plays in algorithmic accountability.
The team brings together expertise on security, technology governance, and ethical use of AI in the urban context.
The Edgelands Institute
The Edgelands Institute is a multi-disciplinary organization that uses academic research, data and art to explore how the digitalization of urban security is changing the urban social contract — the often-unseen rules that govern our cities. We create pop-up spaces that bring citizens, policymakers, academics, and other stakeholders into dialogue about the way that digital tools are being used by city governments and transforming urban social fabric. Learn more at https://www.edgelands.institute/
Urban AI is a think tank that federates a global community of pioneers (CEOs, researchers, public stakeholders, and others) in the emerging field of Urban Artificial Intelligence (Urban AI). Together, we propose ethical modes of governance and sustainable uses of Urban AI. Urban AI is also a space for exchange and debate that welcomes a diversity of viewpoints on AI and the future of cities. Instead of creating "smart cities," our ambition is to urbanize artificial intelligence: to imagine and develop AI systems that preserve our social contract, empower people, embrace our cultures, and contribute to making cities vibrant and sustainable. For more information, visit urbanai.fr
The Governance Lab at the NYU Tandon School of Engineering
The Governance Lab's mission is to improve people's lives by changing the way we govern. Our goal at The GovLab is to strengthen the ability of institutions — including but not limited to governments — and people to work more openly, collaboratively, effectively, and legitimately to make better decisions and solve public problems. We believe that increased availability and use of data, new ways to leverage the capacity, intelligence, and expertise of people in the problem-solving process, combined with new advances in technology and science, can transform governance. We approach each challenge and opportunity in an interdisciplinary, collaborative way, irrespective of the problem, sector, geography, and level of government. For more information, visit thegovlab.org.
Have you been involved in the development or implementation of a policy or initiative aimed at governing the local use of AI for safety/surveillance?
Do you have a great example of a case study we should look into?
Would you be interested in being interviewed as part of this project?
Reach out to email@example.com
In this blog post, Sophia Sennett, a participant in Edgelands' Fellowship program, explores the relationships between climate change, climate intelligence, and the social contract.
If you are interested in researching how digital technologies are used in Nairobi to monitor security issues, we invite you to apply to “We are Recording you Nairobi”, a research sprint on security policies, coexistence, and digital surveillance in Nairobi.
Last month, at the International Museum of the Red Cross in Geneva, the first two photographers involved in this adventure presented their work. The discussion revealed two very different approaches and also highlighted what photography, as a medium, can bring to research.