There’s Nothing New In Facial Recognition Systems

Nothing exposes the contradictions of leftism more than the growing demands that municipalities refrain from using facial recognition technology to monitor public spaces, record activity, and identify lawbreakers. The same people who want to give government the power to bankrupt a business because of a pattern of discrimination claim that the government cannot be trusted with accurate information about exactly what transpired in public spaces. They believe government should be starved of information, crippling its ability to identify violent criminals, while at the same time every business transaction must be scrutinized to identify patterns of discrimination against protected individuals. So government, when it protects individual lives, persons, or property, is seen as malevolent, but that same government, when it protects favored classes from discrimination, is seen as benevolent.

This essay was prompted by this article:

It makes an argument that is centered solely on potential bias:

The product we’re selling is a flawed technology that reinforces existing bias. Studies have shown that facial recognition is more likely to misidentify people with darker skin. This was clearly demonstrated by a recent test of Rekognition that ran pictures of every member of Congress against a collection of mugshots. There were 28 false matches, and the incorrect results were disproportionately higher for people of color. But even if these inaccuracies were fixed, it would still be irresponsible, dangerous, and unethical to allow government use of this software. The biases that produced these results exist within wider society and our justice system. The use of facial recognition will only reproduce and amplify existing systems of oppression.

Ahhh — systems of oppression! Bias, bias, . . . bias!

They used to have wanted posters in U.S. post offices, page after page of grainy mugshots and lists of the heinous crimes the men were wanted for. Wanted posters were the facial recognition systems of the nineteenth and twentieth centuries, relying on the analog technology of photography to gather images, printing to transfer images to paper, and railroads to distribute bundles of them to post offices far and wide. Systems like Amazon’s Rekognition are designed to analyse images and match them to faces, using measurements abstracted from digital images and pattern matching algorithms. Wanted posters were analysed by pattern recognition software as well, but that software resides in all of our brains. There are areas of the human brain that specialize in facial recognition, so that kin and tribe members can be recognized. To complete the analogy, some facial recognition systems have been developed from genetic algorithms and neural networks, software techniques inspired by observations of natural selection, the same evolutionary process that shaped the inheritable capabilities of the human brain.
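
To make the comparison concrete, here is a minimal sketch of the matching step, assuming the face embeddings have already been extracted by some neural network. The function names, the gallery structure, and the 0.8 threshold are my own illustrative choices, not Rekognition’s actual (proprietary) pipeline:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Higher values mean the two face embeddings are more alike.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Return the gallery identity most similar to the probe embedding,
    or None if no similarity clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Usage with made-up 128-dimensional embeddings:
gallery = {"suspect_a": np.random.rand(128), "suspect_b": np.random.rand(128)}
print(best_match(np.random.rand(128), gallery))
```

The threshold is where the accuracy trade-off lives: lower it and the system catches more fugitives but produces more false matches; raise it and the reverse.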

So there is no difference in nature between wanted posters and Amazon Rekognition. Both rely on technology. Both use pattern matching software. Both are biased, because everything human is biased. It’s easy to imagine that the low-fidelity pictures on wanted posters, combined with implicit bias, would likely have led to more false identifications of darker skinned people. And the same remains true for modern TV news shows that broadcast grainy video footage of violent attacks.

While wanted posters and facial recognition are the same in kind, they are vastly different in scale. Modern technology amplifies the power to identify people and hold them accountable for their actions. It makes it harder for criminals to escape detection and avoid punishment.

The inescapable conclusion is that the argument given for limiting police use of facial recognition rests on the belief that no common good can outweigh the presence of any bias. Extremist socialists, because they value only equality, prefer universal abject poverty to an alternative in which everyone is better off but a few are very, very rich. In the same way, the author of the linked essay values only the reduction of bias. The same reasoning could be used to argue against wanted posters, and even against eyewitness identifications in police lineups.

Consider the benefits of widespread use of facial recognition in public areas. Apprehending violent criminals is in everyone’s interest. If we’re unwilling to use surveillance technologies and facial recognition to identify predators, then they remain free to walk in public places and seek out new victims, secure in the knowledge they won’t be identified.

Rather than banning facial recognition, we should use it in all public places, together with processes that minimize the possibility of mistaken identification. With proper safeguards, everyone would benefit from safer, more orderly spaces.
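
What might such a safeguard look like? As one illustrative sketch (the threshold value and the policy itself are assumptions on my part, not any agency’s actual procedure), a deployment could enforce a strict match-confidence floor and mandatory human review, so that a machine match is treated as a lead rather than an identification:

```python
# Hypothetical safeguard: a machine match is never an identification
# on its own, only a lead that a human analyst must corroborate.
REVIEW_THRESHOLD = 0.99  # illustrative; far stricter than the 80% default
                         # reportedly used in the Congress mugshot test

def triage_match(candidate_id: str, confidence: float) -> str:
    """Decide what happens to a single candidate match."""
    if confidence < REVIEW_THRESHOLD:
        return "discard: below review threshold"
    return f"queue {candidate_id} for corroboration by a human analyst"
```

Under a rule like this, most of the false matches of the kind found in the Congress test would be filtered out before anyone was ever contacted, though where exactly to set the threshold is an empirical question.
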
Fear of more widespread bias is only one argument against facial recognition software, and it’s a losing argument in my view. The other argument is based on a feared loss of privacy. The two are actually related. In the article below, I argue that increased surveillance and scrutiny could actually create a society with less bias.
