Last week, WIRED published a series of detailed, data-driven stories about the problematic algorithm used by the Dutch city of Rotterdam to root out benefit fraud.
In partnership with Lighthouse Reports, a European organization specializing in investigative journalism, WIRED obtained access to the inner workings of the algorithm under freedom of information laws and examined how it assesses who is most likely to commit fraud.
We found that the algorithm discriminates on the basis of ethnicity and gender, unfairly giving women and minorities higher risk scores that can trigger investigations with serious consequences for applicants' personal lives. The interactive article breaks down the algorithm, walking you through two hypothetical examples to show that while ethnicity and gender are not among the factors the algorithm uses, other data, such as a person's knowledge of Dutch, can act as a proxy for discrimination.
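To make that proxy mechanism concrete, here is a minimal, hypothetical sketch. It uses entirely synthetic data and made-up feature names (speaks_dutch, years_on_benefits), not anything from the actual Rotterdam system: a risk model trained only on seemingly neutral features, with labels shaped by biased historical flagging, still ends up scoring one group higher because language proficiency correlates with migration background.

```python
# Hypothetical sketch with synthetic data; NOT the Rotterdam model.
# Shows how a "neutral" feature that correlates with a protected attribute
# can carry bias into risk scores even when the protected attribute is
# never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (never shown to the model).
migration_background = rng.binomial(1, 0.3, n)

# Proxy feature: language proficiency correlates with the protected attribute
# (rates here are invented for illustration).
speaks_dutch = np.where(
    migration_background == 1,
    rng.binomial(1, 0.6, n),
    rng.binomial(1, 0.95, n),
)

# A second, genuinely neutral feature.
years_on_benefits = rng.integers(0, 10, n)

# Synthetic historical labels: in this made-up scenario, non-Dutch speakers
# were investigated more often, so "confirmed fraud" is recorded more often
# for them, regardless of actual behavior.
label_rate = np.where(speaks_dutch == 1, 0.03, 0.10)
labeled_fraud = rng.binomial(1, label_rate)

# Train only on the "neutral" features; group membership is excluded.
X = np.column_stack([speaks_dutch, years_on_benefits])
model = LogisticRegression().fit(X, labeled_fraud)
scores = model.predict_proba(X)[:, 1]

print("mean risk score, migration background:   ",
      round(scores[migration_background == 1].mean(), 3))
print("mean risk score, no migration background:",
      round(scores[migration_background == 0].mean(), 3))
```

Group membership is never shown to the model, yet the average risk scores diverge between the two groups. That gap, driven entirely by the correlated proxy feature, is the effect the interactive article illustrates.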
The project shows how algorithms designed to make governments more efficient, and often presented as fairer and more data-driven, can quietly reinforce existing biases. An investigation by WIRED and Lighthouse also found that other countries are experimenting with similarly flawed approaches to catching fraudsters.
“Governments have been building algorithms into their systems for years, whether it’s a spreadsheet or some fancy machine learning,” says Dhruv Mehrotra, an investigative data reporter at WIRED who worked on the project. “But when that kind of algorithm is applied to any type of punitive and predictive law enforcement, it becomes very powerful and very scary.”
The impact of an investigation triggered by the Rotterdam algorithm can be dire, as seen in the case of a mother of three who faced questioning.
But Mehrotra says the project was only able to highlight such unfairness because WIRED and Lighthouse had a chance to test how the algorithm worked—countless other systems operate with impunity under the cover of bureaucratic darkness. He says it’s also important to recognize that algorithms like the one used in Rotterdam are often built on inherently unfair systems.
“Algorithms often simply optimize an already punitive technology for welfare, fraud or control,” he says. “You don’t want to say that if the algorithm was fair, everything would be fine.”
It’s also important to recognize that algorithms are becoming increasingly common at all levels of government, and yet how they work is often completely hidden from the people most affected by them.
Another investigation, which Mehrotra conducted in 2021 before joining WIRED, showed how crime-prediction software used by some police departments unfairly targeted Black and Latino communities. In 2016, ProPublica uncovered shocking flaws in algorithms used by some US courts to predict which defendants are at greatest risk of reoffending. Other problematic algorithms determine which schools children attend, recommend whom companies should hire, and decide which families' mortgage applications are approved.
Of course, many companies also use algorithms to make important decisions, and these are often even less transparent than those used by governments. There is a growing movement to hold companies accountable for algorithmic decision-making, along with a push for legislation that requires greater transparency. But the problem is complex, and making algorithms fairer can sometimes make things worse.