Project | June 16, 2025

Inside Amsterdam's 'Responsible' Algorithm

Since 2021, the city of Amsterdam has been conducting a high-stakes experiment to show that artificial intelligence can be deployed ethically—and effectively—to detect welfare fraud.

When testing revealed biased results, the city revised the algorithm. But when the system was deployed in a real-world pilot, some biases—including against women—remained, ultimately leading the city to scrap it.

This reporting project raises the question: Can a risk assessment algorithm ever really be fair?

Previous reporting has covered the worst-case deployments of this technology. Because many of those systems were poorly designed, these stories have avoided the more nuanced questions of whether AI can be designed for fairness and what that would look like.

MIT Technology Review, Lighthouse Reports, and the Dutch newspaper Trouw aim to tell the story behind Amsterdam's abandoned efforts to use AI to detect welfare fraud, while visually explaining the math behind algorithmic fairness—and what it says about the possibilities and limitations of responsible AI.


Image by Yutong Liu and Kingston School of Art / Better Images of AI (https://betterimagesofai.org), licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0).

RELATED INITIATIVES

AI Accountability Network
RELATED TOPICS

AI Accountability

Technology and Society