December 12, 2023

Announcing our Partnership: Digital Integrity Conversational Chatbot

Nicola Tenca

GENEVA –– Edgelands Institute seeks to bring opposing views to the table through local conversations about security and the urban social contract in cities. This is partly achieved by raising awareness of the impact surveillance technologies have on citizens and how these tools can create asymmetries of power.

We believe that local conversations should use whatever is available to increase their quality and vulnerability. Our method works in cycles: we start with an in-depth research process on a city's security tools, which produces a report. Artists, facilitators, photographers, and interested stakeholders then use this report to drive and deepen conversations across multiple areas of the city. These conversations frequently generate a new report that kicks off another cycle of conversations. Rinse and repeat.

In the second iteration of Edgelands Geneva, we are bringing together the Edgelands network for a series of six inter-city discussions on digital integrity and its implications for the urban social contract for security. We aim to bring together the diverse perspectives and experiences of different stakeholders in the Edgelands cities to build a collective understanding of this crucial issue. An artificial intelligence agent will also join us to help us catalyze some of these discussions.

In partnership with a Swiss-based AI firm, the creators of Argo, we are developing a series of AI chatbots trained on all Edgelands reports, artworks, field notes, communication memos, and a selection of meeting transcripts to inform and support these discussions.

This is by no means a substitute for local conversations; quite the opposite. We hope to use AI to support the parts of a conversation that require extensive summarizing, randomizing, surfacing of information, and contextualizing, always with a trained facilitator navigating the tool. In this way, we hope to help groups across divides access the knowledge produced so far and share it with a wider circle of interested parties and stakeholders.

Here's how it works: Argo uses various large language models (LLMs) to train specific "agents" with different organizational and conversational focuses. For instance, Edgar knows a ton about our reports, while Don knows very well how to share an idea in a story. When asked, "What is the main takeaway of the work of Edgelands in Medellin?", Edgar consults the list of documents it has been trained on to give a short, insightful answer while referencing the original documents for further consultation. While nothing substitutes the act of reading primary and secondary sources, this allows interested parties to access, at first glance, an "apparently objective" tertiary source that helps spark interest and start conversations.
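To make the idea concrete, here is a minimal, purely illustrative sketch of the pattern described above: an agent that grounds its answer in a document collection and cites its sources. Everything here is hypothetical (the document names, the text snippets, and the keyword-overlap scoring stand in for Argo's actual LLM-based pipeline, which we do not reproduce):

```python
import re

# Hypothetical mini-corpus standing in for the Edgelands reports and field
# notes the real agents are trained on. Names and contents are illustrative.
DOCUMENTS = {
    "medellin-report": (
        "Edgelands Medellin mapped the city's security tools and examined "
        "how surveillance reshapes the urban social contract."
    ),
    "geneva-field-notes": (
        "Field notes from Geneva workshops on digital integrity and "
        "everyday experiences of security."
    ),
}

def tokenize(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def answer(question: str) -> dict:
    """Pick the most relevant document by keyword overlap (a toy stand-in
    for LLM retrieval) and return a short answer plus a source reference."""
    q_words = tokenize(question)
    best_id, best_text = max(
        DOCUMENTS.items(),
        key=lambda item: len(q_words & tokenize(item[1])),
    )
    return {
        "answer": best_text,      # short, grounded response
        "sources": [best_id],     # pointer back to the original document
    }

result = answer("What is the main takeaway of the work of Edgelands in Medellin?")
```

The key design point this sketch mirrors is that the agent's reply always carries a reference back to the underlying document, so readers can move from the "tertiary source" to the primary material.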

We are currently training the agents. We hope they will soon help us catalyze these discussions, and that together we can explore the possibility of generating new ideas around the urban social contract and digital integrity.