The Suspicion Machine

wired.com

29 points by dthal 2 years ago · 31 comments

denton-scratch 2 years ago

There's something schizophrenic about constructing a system for choosing who to investigate, while simultaneously trying to avoid discrimination.

The entire purpose of the chooser system is to discriminate between people; they want to investigate only those people likely to be cheating. If they really want to avoid discrimination, then they should be choosing who to investigate by drawing straws.

They have laws against certain kinds of discrimination, e.g. on the basis of race or gender. If those attributes are used as input to the chooser, then race and gender discrimination is inevitable. There's not usually any protection against discrimination for, e.g., being short, having red hair, or speaking with a regional accent; I have no idea how such characteristics are correlated with cheating on welfare claims.

  • Doxin 2 years ago

    The problem there is that protected attributes can often be inferred from unprotected ones. The Dutch tax authority got into hot water for discriminating based on names. Names are not protected, but when you start marking people with foreign-sounding names as likely frauds, something has gone awry.
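
    A toy illustration of that inference problem (a sketch with synthetic data, not the Dutch system): drop the protected attribute entirely, keep a correlated "unprotected" feature such as a name-derived flag, and the protected attribute is still recoverable, so any model using the proxy is effectively using the protected attribute.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 10_000
        protected = rng.integers(0, 2, n)  # e.g. migration background; never fed to a model
        # "foreign-sounding name" flag: unprotected on paper, but highly correlated
        name_flag = np.where(rng.random(n) < 0.9, protected, 1 - protected)

        clf = LogisticRegression().fit(name_flag.reshape(-1, 1), protected)
        acc = clf.score(name_flag.reshape(-1, 1), protected)
        print(f"protected attribute recovered from the proxy with {acc:.0%} accuracy")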

rnk 2 years ago

This is a real problem. These algorithms are a way for us in the West to experience social-credit-type scores like the ones we read about in China. I'm sure there's someone here who was unfortunate enough to have a name that overlapped in some way with an identified "terrorist". Don't forget that when you buy an airplane ticket, there's that always slightly worrisome option to "add your special ID number if you are incorrectly listed as a terrorist", whatever they call that. The inability to sue to identify the problem or correct it is a real loss of autonomy and freedom. I've always wondered what the impact would be if I ran into that. And also, how come a 'terrorist' can't just find out someone's excuse-me number? I put terrorist in quotes not because there aren't any real terrorists, but because it is such a fraught, subjective identification; there must be mistakes.

  • 0xDEAFBEAD 2 years ago

    >The inability to sue to identify the problem or correct it is a real loss of autonomy and freedom.

    It's unclear if "inability to sue" applies in this case. For example:

    >Discriminating against people based on their gender and ethnic background is illegal in the Netherlands, but converting that information into data points to be fed into an algorithm may not be. This gray area, in part the result of broad legal leeway granted in the name of fighting welfare fraud, lets officials process and profile welfare recipients based on sensitive characteristics in ways that would otherwise be illegal. Dutch courts are currently reviewing the issue.

  • Biologist123 2 years ago

    In the west, I sometimes wonder if we already have a scoring system which ensures only good citizens can access housing, food and the rest.

    It’s called “money”, which together with credit scoring functions to make sure you abide by the rules society sets: no pay, no play.

    • lozenge 2 years ago

      Money in the US traces back to redlining, racist deeds, racist lending criteria, white areas enforced with mob violence etc. Money is just the modern legal way of carrying over past discrimination.

    • safety1st 2 years ago

      This is like equating dick-pic spam with sexual abuse: yes, the first is problematic, but no, it is not equal to the second; in fact it's a disservice to victims of genuine abuse to equate them.

      I can get money from almost anybody who's willing to give it to me for any reason, the government's influence and visibility into this are limited. I can earn money in some other currency/jurisdiction and convert it, too. Agree the US credit bureaus are problematic (they are not a "Western" thing, they're an American thing), but they're nothing like the social credit score.

      With China's social credit score there's one number in a database which the government can adjust as they see fit; if they decide to penalize you, you're no longer able to participate in a vast array of social services and functions. It is total control, and you become a pariah overnight.

      This comment was whataboutism, I guess. https://en.wikipedia.org/wiki/Whataboutism

      • Biologist123 2 years ago

        Your analogy is a bit of an odd one, as unsolicited dick pics are undoubtedly a form of sexual abuse.

        • PH95VuimJjqBqy 2 years ago

          Are you implying their point about equivalency is incorrect?

          The other poster is not saying random dick pics are OK; they're saying there's a gradation to the depravity.

          It's like calling a 25 y/o man who sleeps with a 17 y/o girl a pedophile. You're using the same word that you would if the girl was 10, but those are not nearly the same acts. One is inappropriate, the other is heinous.

          this "guilt inflation" just makes it so many of us stop trusting words.

          When I hear the word pedophile, I shouldn't have to ask clarifying questions to find out if the speaker is trying to manipulate my feelings or not. Likewise, if someone uses the phrase "sexual assault", I shouldn't have to ask clarifying questions to determine if it was some jackass sending unsolicited dick pics online or something more harmful (that may or may not also involve sending unsolicited dick pics).

          Because, just like the pedophile example, a dick pic from a rando online and a dick pic from your boss or coworker are two wholly different acts.

      • 15457345234 2 years ago

        > With China's social credit score there's one number in a database

        Isn't the consensus that this doesn't actually exist, though?

        It's one of those very well talked about things that in reality just... isn't... there.

0wis 2 years ago

I am not sure the data model is the problem here. I feel like the journalist tried really hard to make a case against scoring (which I do not like either), but overlooked the fact that the whole system in which it's embedded is bad. The case should be made not against the technology but against its usage.

It already looks like a bad piece of journalism in the first part:

“Being flagged for investigation can ruin someone’s life, and the opacity of the system makes it nearly impossible to challenge being selected for an investigation, let alone stop one that’s already underway. One mother put under investigation in Rotterdam faced a raid from fraud controllers who rifled through her laundry, counted toothbrushes, and asked intimate questions about her life in front of her children.”

Here the problem is not the algorithm, it's the investigators.

Another ethical problem for me: the flagging system as a whole relied partly on anonymous tips from neighbors. I am not an expert, but I feel more at ease with a system that relies on a selection algorithm plus randomness than with denunciation by neighbors.

I think the problem was the processes around the algorithm, not its existence. The journalist seems to assume throughout the piece that the algorithm will become the main or only way to identify fraudsters. If that's the case, it's terribly wrong, because how are you training your algorithm then?
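
A minimal sketch of the "selection algorithm plus randomness" idea from the previous paragraph (all names and numbers are illustrative, not how Rotterdam's system actually works): pick most cases by risk score, but reserve a random slice so that future training labels do not come only from people the current model already flags.

    import numpy as np

    def select_for_investigation(risk_scores, budget, random_share=0.2, seed=0):
        """Pick `budget` cases: mostly by score, plus a random slice for unbiased labels."""
        rng = np.random.default_rng(seed)
        n_random = int(budget * random_share)
        ranked = np.argsort(risk_scores)[::-1]          # highest risk score first
        scored_picks = ranked[:budget - n_random]
        remaining = np.setdiff1d(np.arange(len(risk_scores)), scored_picks)
        random_picks = rng.choice(remaining, size=n_random, replace=False)
        return np.concatenate([scored_picks, random_picks])

    scores = np.random.default_rng(1).random(1_000)     # hypothetical risk scores
    print(select_for_investigation(scores, budget=50)[:10])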

Most of the time, the piece tries to put the reader in an emotional state of fear and anger and does no real analysis, while faking it with a lot of numbers and graphs.

Sorry for the long rant, but I am surprised that this came from Wired, which I consider quite good on tech topics, and that it's on HN's second page.

I am against government scoring and algorithms for legal / police cases precisely because it can be badly used by powerful people.

Am I the only one who feels it's not a good article?

  • nonrandomstring 2 years ago

    > problem was the processes around the algorithm, not its existence...

    > the problem is not the algorithm, it's the investigators.

    Indeed. Going deeper, the 'problem' is a social/cultural belief that doing X at scale, using a computer, is somehow more ethical than having a bunch of people do the same immoral thing. Computer automation and algorithms become a moral justification (or at least a Hail Mary) for immoral acts.

    There is at once a diffusion of responsibility, a causal disconnection of consequential acts, a reassignment of responsibility, and - given that we bow down to machines as our masters rather than our tools - a change from choice to a belief in the "inevitability" of unstoppable processes.

    Together these make us no longer question whether:

    1) Computers are more reliable, consistent and accurate than people

    2) Computers are fairer/just

    3) Computers are more comprehensive/inclusive or selective/prejudicial

    4) Computers are actually economically more effective

    Of course, this has been going on since the 1960s at least and was part of "systems analysis for automation". I think we have regressed. Whereas it was commonplace to sceptically question technology in the 60s and 70s, today we start with the assumption that it must be "better" and then have to figure out how maybe it's not.

    • Jensson 2 years ago

      The major difference is embarrassment. People don't like when other people see them in private, but they don't care if computers see them in private, so computerized surveillance is a lot more acceptable to people.

      • nonrandomstring 2 years ago

        Good point. The benign-neutrality supposition is a powerful factor I forgot. It was Weizenbaum who first wrote about this, after noticing his secretary's relationship with the ELIZA program.

0xDEAFBEAD 2 years ago

The algorithm described in this article seems very bad. But I would argue that ML risk scores can, in principle, be better than human judgment.

Humans seem more subject to bias than algorithms are. Algorithms only look at data, but humans are additionally vulnerable to stereotypes and prejudices from society.

Furthermore, using an algorithm gives voters an opportunity to have a debate regarding how best to approach a problem like welfare fraud.

Human judgment relies on bureaucrats who are often biased and unaccountable. It's infeasible for voters to audit every decision made by a human bureaucrat. Replacing the bureaucrat with an algorithm and inviting voters to audit the algorithm seems a heck of a lot more feasible.

I give the city of Rotterdam a lot of credit for the level of transparency they demonstrated in this article. If they want to be successful with algorithmic risk scores, I think they should increase the level of transparency even further. Run an open contest to develop algorithms for spotting welfare fraud. Give citizens or representatives information about the performance characteristics of various algorithms, and let them vote for the algorithm they want.

In the same way politicians periodically come up for re-election, algorithms should periodically come up for re-election too. Inform voters how the current algorithm has been performing, and give them the option to switch to something different.
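
One way to make those "performance characteristics" concrete for such an audit would be a per-group report like the sketch below; this is purely illustrative, and the article does not describe Rotterdam publishing anything like it.

    import numpy as np

    def audit(y_true, y_flagged, group):
        """Per-group flag rate and false-positive rate for a candidate algorithm."""
        report = {}
        for g in np.unique(group):
            members = group == g
            innocent = members & (y_true == 0)
            report[int(g)] = {
                "flag_rate": float(y_flagged[members].mean()),
                "false_positive_rate": float(y_flagged[innocent].mean()),
            }
        return report

    rng = np.random.default_rng(2)
    y_true = rng.integers(0, 2, 1_000)                 # hypothetical ground-truth fraud labels
    group = rng.integers(0, 2, 1_000)                  # hypothetical demographic group
    y_flagged = (rng.random(1_000) < 0.3).astype(int)  # a candidate algorithm's flags
    print(audit(y_true, y_flagged, group))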

  • tagyro 2 years ago

    > Humans seem more subject to bias than algorithms are.

    One might think that, but algorithms are built by humans, so they (algorithms) automatically have the same biases as the humans that built them.

    • 0xDEAFBEAD 2 years ago

      That doesn't follow.

      If I'm a chemist, and I write an algorithm to do something related to chemistry, that algorithm does not "automatically" know everything I know about chemistry.

      Bias works the same way.

    • adammarples 2 years ago

      I think there's a case to be made that the data contains bias (e.g. previous arrests encoding the bias of the past arresting officer), but the algorithms don't contain bias. Unless you could explain how logistic regression is biased.
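
      A small synthetic sketch of that point, i.e. the bias living in the labels rather than in the math: true fraud rates are identical across two groups, but one group was historically investigated far more often, so more of its fraud was ever recorded. A plain logistic regression fit to those records then scores that group higher.

          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(3)
          n = 20_000
          group = rng.integers(0, 2, n)                    # 1 = historically over-investigated
          true_fraud = rng.random(n) < 0.05                # same 5% fraud rate in both groups
          investigated = rng.random(n) < np.where(group == 1, 0.6, 0.1)
          label = (true_fraud & investigated).astype(int)  # fraud only counts if it was found

          clf = LogisticRegression().fit(group.reshape(-1, 1), label)
          for g in (0, 1):
              p = clf.predict_proba(np.array([[g]]))[0, 1]
              print(f"predicted risk for group {g}: {p:.3f}")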

lozenge 2 years ago

Isn't the Dutch language requirement, which is codified as an eligibility criterion, already intended to create an underclass of residents?

I think it is morally justifiable as a residency requirement, but not justifiable to let people live there without being able to receive government support.

I think it's a situation where the government want to be racist or at least xenophobic, the citizens agree, but the law prevents them. Accenture was drafted in to get around the law.

  • 0xDEAFBEAD 2 years ago

    >I think it is morally justifiable as a residency requirement, but not justifiable to let people live there without being able to receive government support.

    Given the choice between accepting a small number of immigrants with government support, or a large number of immigrants without government support, the second option seems more humane to me.

    By choosing the first option, you are effectively creating a privileged class based on whatever criteria your country uses to accept immigrants. The underclass might be out of your sight, but it still exists.

    As long as you are using some criteria to accept immigrants, "willing to work without government support" (at least until they become fluent in the local language) seems like a perfectly reasonable criterion to me. And it is a criterion that gives your nation the capacity to accept a larger number of migrants without breaking the budget -- thereby helping more people from developing countries improve their economic situation.

friend_and_foe 2 years ago

https://archive.ph/9Ibjn

  • CommitSyn 2 years ago

    I desperately want to read this article. Is anyone else getting /constant/ Cloudflare CAPTCHAs while trying to access archive.ph? Android with mobile Firefox or Chrome, first on wifi, then mobile data, then VPN. No matter what, I can't access it, and it won't stop showing the "are you a human?" check page, even after trying the CAPTCHA, which stops appearing after the first attempt.

    Must be a cloudflare outage?

    • ImAnAmateur 2 years ago

      Have you tried reading the linked article? It's not behind a paywall. It works for me even with JavaScript on.

      IMO, the article is only good for better understanding the abysmal failure of the Rotterdam, Netherlands 2017-2021 government benefits random audit program. The authors allege that they contacted other cities that set up something similar but don't name any. Related reading: https://www.wired.com/story/welfare-algorithms-discriminatio...

      • CommitSyn 2 years ago

        Yes I always try reading articles before coming to the comments for the archive link. This is paywalled. https://ibb.co/x8T9xSh

        Thank you for the brief rundown, but that actually does interest me.

    • 15457345234 2 years ago

      You don't need to read the article, it seems like you understand the problem :)

      • CommitSyn 2 years ago

        From the title alone I certainly understand the issue, but this looks like news on that subject, which I would like to read.

    • thedailymail 2 years ago

      If you are using Cloudflare 1.1.1.1, try switching it off or using another DNS resolver. There are reported issues with archive.md blocking access from 1.1.1.1
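
      If you want to check whether the resolver is the culprit, comparing the A records returned by 1.1.1.1 and another resolver will show it. A rough sketch assuming the third-party dnspython package (archive.ph is reported to serve different answers depending on the resolver):

          import dns.resolver  # third-party package: dnspython

          def lookup(domain, nameserver):
              """Return the A records for `domain` as seen through a specific resolver."""
              resolver = dns.resolver.Resolver()
              resolver.nameservers = [nameserver]
              return sorted(r.address for r in resolver.resolve(domain, "A"))

          for ns in ("1.1.1.1", "8.8.8.8"):
              print(ns, lookup("archive.ph", ns))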

croes 2 years ago

Seems like it uses a simple equation:

Poor = suspicious

nicbou 2 years ago

This terrifies me.

Algorithms give the rank and file the option to defer all accountability to a machine. The algorithms make mistakes. No one gets blamed or fired for trusting them in the first place.
