This was a comment that got too long… but it's relevant to the issue of technology “learning” and institutional prejudice (and becoming more relevant daily).
Marissa Higgins featured “Colorado police detain, handcuff Black girls in mistaken traffic stop in viral video” (DK Aug04).
Police in Aurora, Colorado, detained a Black family after mistaking their vehicle, a blue SUV, for a stolen motorcycle (as reported by The Washington Post; also Reuters).
Pause to think about… a very large, six-passenger, um, motorcycle.
The kicker was to blame the tech…
Police Chief Vanessa Wilson has since apologized and blamed the license plate scanner.
Which reminded me of the NYTimes podcast Wrongfully Accused by an Algorithm (The Daily, Aug03): “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.”
Around noon the next day, he is taken to an interrogation room. And there’s two detectives there. And they have three pieces of paper face down in front of them. And they turn over the first sheet of paper. And it’s a picture from a surveillance video of a large Black man standing in a store, wearing a red Cardinals cap and a black jacket.
And the detectives ask, is this you?
Robert Williams: I laugh a little bit, and I say, no, that’s not me. So then he turns over another paper — which is just a close up of that same guy’s face.
Robert Williams: And he says I guess that’s not you either. And I said, no. This is not me.
So Robert picks the piece of paper up, holds it next to his own face…
Robert Williams: I was like, what, you think all Black men look alike??
So what happened, they ran a search on this, what they call a probe image, this picture from the surveillance video, which is really grainy, not a very good photo. And they have access not just to mug shots but also to driver’s license photos. You get a bunch of different results.
So the facial recognition algorithm basically created a lineup of potential suspects. And then from that lineup, someone picks the person that they think looks the most like the man in the surveillance video. So that is how they wound up arresting Robert Williams.
So then the detective kind of leaned back and said, “I guess the computer got it wrong.”
(hence the parallel to Marissa Higgins’s story)
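To make that “probe image in, lineup out” step concrete, here's a minimal sketch of how a one-to-many face search typically works. Everything in it is hypothetical: the embeddings, the gallery, and the function names are invented for illustration, since the podcast never details the vendor's actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def face_search(probe: np.ndarray, gallery: dict, k: int = 5) -> list:
    """One-to-many search: score the probe against every enrolled face
    (mug shots, driver's-license photos) and return the k best matches."""
    scores = {pid: cosine_similarity(probe, emb) for pid, emb in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical gallery of enrolled embeddings (in reality, millions of photos).
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # embedding of the grainy surveillance frame

# The "lineup": the k most similar enrolled faces, no matter how weak
# the best match actually is.
for person_id, score in face_search(probe, gallery):
    print(person_id, round(score, 3))
```

Note what the sketch makes obvious: the search always returns a lineup. Even a grainy, low-quality probe produces k ranked candidates; the system has no way of saying “nobody here matches,” so a human has to supply that judgment.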
So in this case, I tried to figure out exactly whose algorithms were responsible, and I had to really dig.
And I discovered the police had no idea.
They contract out to a company that contracts out to two other companies that actually supply the algorithm. It’s this whole chain of companies that are involved. And there is no standardized testing. There’s no one really regulating this. It’s just up to police officers, who, for the most part, seem to be just testing it in the field to see if it works.
Beta testing on the public, then. Did anyone volunteer?
But the really big problem is that these systems have been proven to be biased.
A few years ago, an M.I.T. researcher did a study and found that facial recognition algorithms recognized white men better than anyone else. NIST, the National Institute of Standards and Technology, decided to run its own study on this, and it found the same thing. It looked at over 100 different algorithms and found that they were biased. And actually, the two algorithms that were at the heart of this case, Robert Williams’s case, were in that study.
So the algorithm that was used by this police department was actually studied by the federal government and was proven to be biased against faces like Robert Williams’s.
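What “proven to be biased” means here is measurable: the NIST study compared error rates, in particular false match rates, across demographic groups. A minimal sketch of that kind of disaggregated evaluation (the trial records below are invented purely for illustration):

```python
from collections import defaultdict

# Each trial: (demographic_group, truly_same_person, algorithm_said_match).
# Invented records purely for illustration.
trials = [
    ("group_A", False, True),    # an impostor pair wrongly called a match
    ("group_A", False, False),
    ("group_B", False, True),
    ("group_B", False, True),
    # ... a real evaluation runs many thousands of comparisons per group ...
]

def false_match_rate_by_group(trials):
    """Of all impostor pairs (genuinely different people), what fraction
    did the algorithm wrongly call a match, broken out per group?"""
    impostors, false_matches = defaultdict(int), defaultdict(int)
    for group, same_person, said_match in trials:
        if not same_person:          # only impostor comparisons count here
            impostors[group] += 1
            if said_match:
                false_matches[group] += 1
    return {g: false_matches[g] / impostors[g] for g in impostors}

# An unbiased system would show roughly equal rates across groups.
print(false_match_rate_by_group(trials))
```

NIST's report described exactly this kind of differential: for many of the algorithms it tested, false match rates differed across demographic groups by factors of ten or more.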
But the story notes the police don't want to give it up, because it’s too convenient.
Just because their technology wrongfully identified this man, should he get more closely watched by the police without his knowledge?
Right. And this is actually what police asked the facial recognition vendors to do. They want more of what are called false positives, because they want to have the greatest pool of possible suspects.
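That request, “more false positives, a bigger pool,” comes down to a single knob: the match threshold. A hypothetical sketch (the scores and names are invented) of what turning that knob does:

```python
def candidate_pool(scores: dict, threshold: float) -> list:
    """Every gallery identity whose similarity score clears the threshold."""
    return [pid for pid, s in scores.items() if s >= threshold]

# Invented similarity scores from a face search.
scores = {"person_1": 0.91, "person_2": 0.74, "person_3": 0.62, "person_4": 0.41}

# A strict threshold yields few candidates, mostly genuine matches.
print(candidate_pool(scores, threshold=0.85))  # ['person_1']

# Lowering it sweeps in weaker and weaker matches: a bigger pool of
# "possible suspects," and with it, more false positives.
print(candidate_pool(scores, threshold=0.50))  # ['person_1', 'person_2', 'person_3']
```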
And that deliberate tilt toward false positives is the disturbing part, as the reporters note:
People trust computers, even when we know something is flawed. For a long time, when mapping technology was first being developed and it wasn’t that great, you know, people would drive into lakes. They would drive over cliffs.
Frankly, we know that large portions of the population would still drive off a cliff if an authority they trust told them to. The Republicans do it daily.
The scary thing is this story is rare only because usually people don’t find out about it.
Police don’t tell people that they’re there because of face recognition. Usually, when they charge them, they’ll just say they were identified through investigative means.
So relying on biased computer algorithms is bad enough… but more concerning was how the police (and prosecutor) dealt with it:
the prosecutor decided to drop the case. But they dropped it without prejudice, which meant that they could charge him again.
This just seems like a clear misfire and misuse of facial recognition. And everyone involved was pretty defensive and said, well, you know, there might be more evidence that proves that Robert Williams did it.
Guilty until proven innocent?
But after the story came out, everybody’s tune changed dramatically.
- The prosecutor’s office apologized and said that Robert Williams shouldn’t have spent any time in jail.
- The Detroit Police Department said this was a horrible investigation. The police officers involved just did this all wrong. [got caught] This isn’t how it’s supposed to work. [got caught]
- And they said that Robert Williams would have his information expunged from the system.
The real problem is that without the media publicity, this would (will) just continue. Frankly, despite those statements above, it probably is continuing… who is going to know? Who is going to verify?
Any bets that they are still going on in Detroit and other cities?
So the takeaway… probably little change, because, as they said, it’s too convenient, and both the prosecutors and police justify what they do by the worst “bad guy” scenario: “you don't want us to let a rapist or child abductor go, do you?” (as if, with any constraint on a technology they never had before, they can’t solve crime?)
A false dichotomy (and a dangerous “all or nothing” mindset)
To frame everything in extremes creates a false justification, which perpetuates a particular bias in the brain (it’s always right, because when it is… it’s important…), but that ignores the cumulative detrimental effects of everything else being false positives. It favors emotion over reason: public safety should be broad-minded… law and order is narrow-minded.
There are deep reasons for “innocent until proven guilty”
So for the tech, they need much clearer regulation: testing, validation, protocols, and training.
Also, when someone says, “we’ve expunged your files,” there needs to be a trusted independent watchdog, not only to ensure that the data was removed, but also to check whether peripheral storage needs to be scrubbed as well.
But (I suspect) the fundamental bias in the algorithms… is as much a part of the police departments themselves. There are many ways to de-bias an algorithm, and you can certainly improve facial recognition independent of police sources. But many algorithms can and do rely on police data, and, just like banks that practiced red-lining, they start with historical data from a distorted environment, so the best deep-learning algorithms are actually going to pick up all the prejudicial patterns from past behavior.
It’s not because people inherently act some way; it’s because people in authority imposed their assumptions. There is very little “default data” from society.
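On the “many ways to de-bias an algorithm” point: one of the simplest established techniques is reweighing the training data so that group membership and the historical label become statistically independent before any model trains on them. A minimal sketch, assuming a pandas DataFrame of historical records; the column names and the deliberately skewed data are invented:

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Kamiran-Calders style reweighing: weight each row by
    P(group) * P(label) / P(group, label), so that group and label are
    independent in the weighted data. Historical over-policing of one
    group shows up as a skewed joint distribution that these weights undo."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# Hypothetical, deliberately skewed "historical" data: group A is
# flagged far more often than group B.
df = pd.DataFrame({
    "group":   ["A"] * 80 + ["B"] * 20,
    "flagged": [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})
df["weight"] = reweigh(df, "group", "flagged")
# Over-represented (group, label) pairs get weights below 1,
# under-represented ones get weights above 1.
print(df.groupby(["group", "flagged"])["weight"].first())
```

This removes the group-label correlation that over-policing bakes into the data, though not indirect proxies for the group (ZIP code, prior contacts, and the like), which is exactly why the regulation, testing, and auditing above still matter.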
Self-confirmation is the most prevalent bias.