March 27, 2024

Digital Integrity Bingo

Communication Team

For the first Edgelands digital intercity dialogue, participants discussed the uses of AI and facial recognition in the four cities where the institute has popped up.

"press play to start" sign. Photo: Anwar Hakim for Unsplash

What do Medellín databases, Geneva train stations, and Nairobi regulations have in common? It sounds like the start of a nonsensical joke. At the Edgelands Institute, however, these examples illustrate a common thread: cities around the world are using AI and facial recognition technologies to mitigate security threats, with little proven success or transparency, and often to the detriment of civil freedoms and safety.

This was the main topic of our Edgelands digital intercity dialogue, a consensus- and insight-generation space convening researchers, policy experts, and security experts across the four cities where Edgelands has popped up: Medellín, Geneva, Nairobi, and Houston. The first convening of this new project started with a simple task: finding common ground. When we talk about city tools, digital space, and infrastructure, what exactly are we talking about?

To set the stage for the dialogues coming later this year, we kickstarted the project with an intense game of digital-integrity bingo, pairing the four themes through which we analyze digital integrity (A.I., Transparency, Stakeholder Responsibilities, and Digital Sovereignty) with the three dimensions through which we explore them (City Tools, Digital Space, and Infrastructure).

Although these themes and dimensions are fundamental to structuring a complex conversation, at first glance they are all intertwined. It therefore seemed preferable to ground the concepts in concrete examples, to better understand how they play out in each of the Edgelands cities.

When talking about A.I., for instance, most of the examples related to facial recognition technologies. Our Medellín team reported that, even though the local government consistently uses it in the city's security system, there is no facial database in Colombia, so it is unclear what this data is being matched against. The Geneva representatives mentioned a recent public debate about installing facial recognition technology in Swiss train stations. In Nairobi, even though the public is aware that A.I. is being used in government infrastructure, there are no regulations or policies around it yet. There is also an ongoing debate in Medellín about using A.I. to regulate international apps such as Airbnb, both to ensure the safety of tourists visiting the city and to protect its citizens against predatory tourism practices. One more interesting topic for discussion was the recent collaboration between OpenAI and the newspaper Le Monde: what is this collaboration about, exactly, and how will it affect the news we consume?

Transparency is a more straightforward theme, but the Nairobi team raised an interesting point: what if citizens are not necessarily interested in data transparency because they feel safer leaving things as they are? The global team also noted that cities such as Helsinki and Amsterdam are creating transparency registers: public databases of the software and technologies used for surveillance in the city, so that people can easily access this information.

When it comes to stakeholder responsibilities, the most interesting discussions revolved around digital spaces: who is responsible for what happens online, for example in cases of cyberbullying or deepfakes? It is important to understand the roles and responsibilities of platforms and governments, as well as citizens, security forces, schools, tutors, and parents.

Finally, digital sovereignty is a complex issue in itself, and even concrete examples are not easy to wrap our heads around. Nairobi offers a telling case: tools such as e-citizen and e-teams are used by the government but privately owned, so who owns the data, and where is it stored? In the same vein, we discussed cloud technologies, where data is stored in privately owned data centers, usually located in a foreign country. Should governments rely on these services to store data about citizens and the functioning of the state? How can we discuss ethics and confidentiality when private entities hold information about entire populations, including health registries and identification documents? It is also worth asking what happens to informal forms of the social contract when digital technology takes over even small financial transactions, for example as a way to track tax payments and personal information.

It is clear that there are numerous approaches to questions of security, digital technologies, and digital integrity, as well as many concrete ways in which these topics affect our lives, perhaps far more than we realize. As pointed out during the conversation, and discussed in depth during our Medellín Symposium, the concept of trust seems to be at the heart of many of these issues and an essential component of the feeling of safety. We look forward to developing these conversations further throughout the year.

For our next dialogue, we will participate in the Explore Geneva festival to discuss the Use of A.I. to Provide Security as a Public Service, with special guests from each of the Edgelands cities. Stay tuned to watch our panel live on April 27th, and subscribe to our newsletter to keep up with the developments of our project, Intercity Virtual Pop-up Dialogues.