Walking and talking about AI and public safety in Lombardijen

Blogpost for the AI-MAPS project

What happens when technology marketed to "keep us safe" starts quietly shaping how we live together?

From facial recognition to violence detection software, AI systems are finding their way into policing and public spaces, sometimes as decision-making tools, sometimes just as "advisory" systems. But even when their role is described as minimal, their influence is anything but. And outside of official settings, the eyes of smart doorbells and smartphones are already everywhere. Together, these technologies are reshaping what it means to be and feel safe in public, sometimes fostering security, sometimes deepening division or distrust.

That's what Marlon's research as part of the AI MAPS project tries to understand. AI MAPS brings together researchers, policymakers, and citizens to explore the Ethical, Legal, and Societal Aspects of AI - not in theory, but in practice. For Marlon, that means asking: how do people think ethically about the AI systems entering their lives, often marketed as tools to improve safety? And how can we bring different perspectives, from residents to policymakers, into real, constructive dialogue about what "responsible use" should look like?

Grappling with ethics on the ground

It鈥檚 not enough to study AI as if we were neutral observers. Technologies like these don鈥檛 just affect our behaviour, they shape our moral and political sensibilities too. They change what we notice, what we tolerate, and even what we consider fair.

That's why his research aims to develop practical, normative tools: ways for communities and institutions to think and act ethically in the very specific contexts where AI for public safety is being used.

Listening to Lombardijen

Over the past year, he has been working in Lombardijen, a neighbourhood in Rotterdam-South that has already become something of a case study for AI MAPS. Through interviews and weekly walks with the local neighbourhood watch group, he's heard a mix of curiosity, fatigue, and scepticism about smart cameras and other AI systems.

Several residents said they "could have a stronger opinion" about these technologies, but what would be the point? "They'll do it anyway," one person said, "whether we agree or not." That kind of moral apathy, if it grows, could make it harder later on to have meaningful conversations about ethics or responsibility. It points to something deeper: a lack of trust, and a sense that decisions about technology are made elsewhere, by others.

Earning engagement

According to Marlon, if we want civil society to engage seriously with the ethics of AI, that engagement has to be earned. It means showing that those in government, academia, and industry are willing to share the steering wheel and to let community perspectives genuinely influence what gets built and how it鈥檚 used.

This is challenging work, but it's also deeply rewarding. Marlon says that the trust he's built by simply showing up - walking the streets, listening, joining community meetings - has been invaluable. Over time, he has become a familiar face at the local community centre. People wave when he arrives, and sometimes they insist he stay for a hot meal or a sweet treat.

It's those moments that keep him grounded. Marlon says "they remind me what this research is really about: not the technology itself, but the people it touches and the kind of community where safety is something we build together, not something imposed from above."
