
From sceptic to scholar, a PhD researcher navigates AI's accountability problem

Blogpost for the AI-MAPS project

If you want to understand the real-world impact of Artificial Intelligence (AI), don’t just look at the technology. Look at the context. This is the core philosophy driving Nanou van Iersel, a PhD researcher with the AI MAPS project.

AI MAPS is part of the Dutch network of ELSA labs - collaborative spaces where researchers, government, industry and citizens together explore the Ethical, Legal, and Societal Aspects of AI in practice. The goal is to learn from each other about the potential and limitations of AI, and whether and how accountability is affected by it. Nanou’s work digs into the messy middle ground between negative and positive effects, the specific, everyday contexts where AI is deployed, and the complicated question of who is accountable when things go right or wrong.

Accountability in action

“On a generic level, it’s easy to say we need accountability for AI,” Nanou explains. “But it’s only at the concrete, contextual level that you see the complexity.” What do we actually mean by accountability? When failures are caused by AI, can one person or organization reasonably be held responsible? How does existing law structure accountability, and does it match the reality of how these new technologies work?

This focus is vital because AI is no longer futuristic. “AI and digital technologies are affecting everyone,” she notes. “Public space is filled with them, from cameras with automatic number-plate recognition on highways to smart doorbells in neighbourhoods. We need to actively shape this future, rather than having it happen to us, led by tech companies projecting their ideals onto our society.”

From scepticism to nuance

One of the most surprising findings of her work has been her own evolving perspective.

“I started this project mostly anti-AI,” she admits. However, through countless stakeholder interactions, interviews, and observations, her view became more nuanced. “After seeing many real-life examples, I now see that there are also cases where AI and other digital technologies are desirable. I am still highly critical (and still kind of wishing we lived in a world with less surveillance), but I can understand the potential in particular contexts, like using AI to serve environmental protection.”

Accountability and “function creep”

Her research on camera surveillance shows how messy accountability becomes in practice.

Cameras are often installed for a narrow purpose, such as monitoring traffic. But once in place, they inevitably record much more. This phenomenon, known as “function creep,” creates legal and ethical grey zones.

“Cameras film everything within the scope of their lenses, not just the thing they were originally meant to record,” Nanou points out. “Is it permissible to act on a crime recorded by a camera that was not installed for that purpose? This is a problem inherent to cameras worldwide.”

Her work maps how function creep unfolds in practice, challenging law enforcement and policymakers to face the accountability gap between a camera’s original justification and its actual capabilities.

The value of “multiple ways of knowing”

What drew Nanou to AI MAPS wasn’t just technology, but the project’s commitment to epistemic inclusion - the idea that there are multiple, equally valid ways of knowing.

“There is knowledge from experience, from profession, from art, or Indigenous knowledge,” she says. “I believe in the equality of different knowledge practices. I like that AI MAPS takes this as a starting point by trying to learn from other stakeholders and having them shape our research questions.”

This philosophy ensures her research on accountability is grounded not just in law and theory, but in the lived experiences of those affected by AI.

A grounded perspective

When she’s not untangling the complexities of AI and the law, Nanou is likely untangling herself from a different kind of complexity: the jungle. She recently returned from a long holiday in Suriname and French Guiana, a stark contrast to the world of digital technology. “It was a shock after spending so much time in a tropical climate, away from my laptop,” she says, “but never away from books.”

This balance between deep critical thought and a connection to the wider world is perhaps what makes her approach so valuable. In a field dominated by either uncritical hype or blanket condemnation, Nanou offers a rare and essential perspective: that of the critical engager, asking "why" and "for whom" to ensure AI's future is accountable to us all.
