
Online safety is often treated as a technical problem to be solved with better tools, stronger laws, or smarter algorithms. But what if safety cannot be engineered at all? What if it is something that must be negotiated collectively across the digital and physical spaces we inhabit?
Eight weeks ago, we launched a Research Sprint to explore how communities create, inhabit, and safeguard online spaces. The Sprint brought together scholars, designers, technologists, artists and activists from all over the world to reimagine the urban social contract for the digital age.
During the Sprint, we had the pleasure of engaging in conversations with six inspiring experts—Eriol Fox, Anhxela Maloku, Suzie Dunn, Qingyi Ren, Cade Diehm, and Caroline Sinders—whose experience and insight guided our collective inquiry.
A core idea that emerged across these conversations is that safety and security are relational and contextual. They are not fixed states or technical endpoints, but rather outcomes that are shaped by interactions, trust, environments, and shared norms.
Focusing on communities rather than individuals when discussing safe spaces prompted us to ask deeper, value-driven questions: Who defines what a community is? Whose needs take priority? Whose experiences influence design decisions? Who is listened to, and who is systematically excluded?
A key concept that emerged from these questions was trust. Not only trust in a particular platform, but also the trust that community members cultivate (or struggle to cultivate) through their participation in an online space.
In this context, trust is not just a buzzword to be added at the end of a project. It requires communities to be involved from the outset and to have the opportunity to express themselves creatively. They must also be given the space to shape the tools that will affect their lives. After all, tools developed by and for a particular community are much more likely to reflect its identity, culture, and values.
This is particularly important for communities that are at a higher risk. For these communities, privacy and security features are not just conveniences; they are lifelines. What some people see as optional can be a matter of survival for others. This reinforced a recurring insight from the Sprint: good design prioritises the safety of the most vulnerable because, when they feel secure, the whole community benefits.
The idea of trust also formed the basis of conversations about queer representation, digital permanence, and the fluidity of identity. These conversations reminded us that trust is also essential for inclusive design, which should accommodate evolving self-definitions rather than locking people into static categories or rigid profiles.
Grounding our discussion in communities also led us to a simple yet frequently overlooked reality: the digital and physical realms are inextricably linked.
Over the past four years at Edgelands, we have explored how the growing digitalisation of security is changing our *urban social contract*—the evolving web of rules and expectations that shape how people live, move around, and interact in urban spaces. The discussions during the Sprint made me realise that what we are actually negotiating is an urban-digital social contract: an evolving web of rules and expectations that shape how we live together not only in physical urban spaces, but also in digital spaces and in the hybrid realm between them (which one of the speakers referred to as the “para-real”).
These spheres are now so deeply intertwined that attempting to separate them feels artificial and often obscures the most important dynamics. The conditions that make safe spaces possible online are rooted in the cultural and social structures of the physical world, and the harm that people experience online spills over into their everyday lives.
Meaningful safety therefore depends on multiple, interlocking elements, and cannot be outsourced to any one of them: not law, not code, not design, and not individual responsibility.
Recognising this complexity reveals a recurring pattern: deep structural gaps.
Gaps between coders and users.
Gaps between social scientists and technologists.
Gaps between profit-driven design and community needs.
These gaps give rise to real tensions in the way safety is conceived and put into practice.
For example, even when platforms adopt privacy defaults or offer more accessible settings, the effectiveness of these safety measures still depends on whether people can understand and use them. All too often, these tools are communicated through opaque, highly technical language. When this happens, even well-intentioned designs fall short: users feel overwhelmed, coders feel misunderstood, and critical decisions become obscured by jargon.
Here, empathy was identified as essential. Taking an empathetic approach to design, from the earliest conceptual stages to the final interface, helps bridge the gap between what coders assume and what users actually need. This approach transforms safety from a technical feature into something that can be understood, navigated, and trusted.
Yet design alone cannot carry the full burden. The Sprint made it clear that even the most carefully crafted tools depend on cultural, social, and political foundations that can either strengthen or weaken their impact.
Although laws and regulations have their place, we were reminded that legal tools should be a last resort, reserved for the most serious harm. Everyday safety is shaped far more by cultural, educational, and design-driven strategies than by punitive bans, which rarely address the root causes of issues. A restorative and human-centred approach to moderation, one that encourages dialogue rather than resorting to punishment, builds trust and strengthens communities.
The emphasis on restorative approaches should not be mistaken for a call for inaction. The absence of regulation creates its own form of instability, fragmenting communities, enabling misinformation and exposing vulnerable people to greater risk. Inaction, therefore, is never neutral because it carries real consequences in both digital and physical spaces.
The inseparability of our digital and physical worlds also became evident in our discussion of what happens when the digital environment itself becomes unstable. In situations where connectivity is intentionally shut down, or unintentionally lost, certain platforms must retain essential capabilities, such as safety alerts, emergency contacts, and meeting points. Protesters and at-risk groups cannot afford to lose access to these tools the moment networks fail. Resilience must be intentionally designed, not assumed.
At the same time, the global infrastructure that supports our digital lives is both uneven and fragile. The hard work of moderators and users often goes unrecognised, and the physical systems that underpin digital platforms are vulnerable to natural disasters, political interference and a lack of human oversight, which is exacerbated by companies cutting staff. And yet, these vulnerabilities are rarely acknowledged in narratives of endless digital expansion.
As we reimagine our urban-digital social contract, one thing becomes clear: any sustainable, community-driven solution must move fluidly across the physical and digital. The two cannot, and should not, be separated.
Moderation is a clear example of this. While it is essential for creating safe digital environments, it cannot be reduced to automated systems or outsourced to underpaid workers without adequate cultural, linguistic, or emotional support. Effective moderation thrives when led by community members who understand local contexts, and when people have access to real human support. Regional help desks, culturally specific hotlines, and individuals who are already confronting violence within their own communities have emerged as promising approaches, provided that we also protect and support these frontline workers.
Similarly, while cybersecurity practices are crucial for safeguarding platforms against external threats, they cannot ensure safety on their own. They must be paired with human-centred mechanisms. One such approach is the use of “trusted flaggers”: designated individuals within a community or institution who can support users directly and act as intermediaries between people, technology, and governance structures. These roles help to ensure that feedback about safety concerns is heard and acted upon.
Taken together, these insights point towards an urban-digital social contract rooted in community values and empathy-led design, striking a careful balance between technological safeguards and human connection. This vision for a safer digital future is one that I hope will continue to evolve through ongoing research, collaboration, and shared responsibility. After all, in a world where digital and physical lives are inseparable, creating safer spaces is no longer optional. It is a collective responsibility and an ongoing negotiation.
If safety is something that we build together rather than something that can be imposed from above, then communities need more than just principles—they need tools, too. To this end, the Sprint Researchers have developed a toolkit shaped by lived experience and designed to help online communities create, reflect on, and sustain safer spaces on their own terms.
The toolkit is due to be published in early 2026, and I encourage readers to keep an eye out for its release as a continuation of the conversations that emerged from the Sprint.
I would like to extend my sincere thanks to all the speakers who kindly shared their knowledge and experience, and to all the participants of the Research Sprint, for their time, energy, and thoughtful contributions over the past eight weeks.

Eriol has been working as a designer for over 15 years, first in for-profits and then in NGOs and open-source software organisations, on complex problems such as sustainable food systems, peace-building, and crisis response technology. Eriol now works at Superbloom on design, research, open-source, and technology projects.
They are also part of the core teams at Open Source Design, the Human Rights Centred Design working group, and the Sustain UX & Design working group, and help host a podcast about open source and design.
Eriol is a non-binary, queer person who uses they/them pronouns.

Anhxela Maloku is a UX designer driven by the complexity behind what feels simple: how systems and stories shape the way people experience technology. Maloku’s work explores how design can make technical depth feel human and transparent.
Maloku has published research on privacy-preserving UX patterns, whose primary artifact is a UI/UX Privacy Pattern Catalog that helps designers embed such patterns into everyday interfaces.
Suzie Dunn is the Director of the Law and Technology Institute and an assistant professor at Dalhousie’s Schulich School of Law. Her research centers on the intersections of equality, technology and the law, with a specific focus on technology-facilitated gender-based violence, artificial intelligence, and deepfakes. She is a research partner on an SSHRC-funded research project on young people’s experiences with sexual violence online, DIY Digital Safety. She is also a Senior Fellow with CIGI, where she led phase one of CIGI’s Supporting Safer Digital Spaces project, and a member of the Women’s Legal Education and Action Fund technology-facilitated violence committee.

Qingyi Ren 任晴宜 is a non-binary digital artist and Science, Technology and Society (STS) researcher at the Critical Media Lab Basel whose work explores the intersection of art, technology, and social justice, with a passion for unravelling the complex web of gender theory, AI ethics, and digital identity. Currently based in Basel, they dedicate their artistic practice to unveiling the subtle biases ingrained within machine learning and its profound impact on marginalised identities. Through their thought-provoking artistry, they challenge the status quo and inspire critical conversations surrounding the ethics of artificial intelligence.

Cade is the founder of The New Design Congress, an international research organisation forging a nuanced understanding of technology's role as a social, political and environmental accelerant. He studies, writes, consults and speaks regularly on topics such as digital power structures, privacy, information warfare, resilience, internet economies and the digitisation of cities.
Caroline Sinders is a machine-learning-design researcher and artist. For the past few years, she has been examining the intersections of technology’s impact in society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focusing on the intersections of machine learning, user research, designing for public good, and solving difficult communication problems. As a designer and researcher, she has worked with Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others.