No More Facial Recognition Software For American Cops!
The killing of George Floyd and the subsequent protests have proved to be a hallmark event, perhaps even a turning point, in the history of the United States. They have spurred developments such as calls to defund the police, overhauls of police training programs, officers leaving the force, and an upheaval in the US political scene.
However, in a more stunning development, IBM, Amazon, and Microsoft have all decided to stop supplying facial recognition software to law enforcement agencies. IBM’s CEO has called for a national dialogue on standard operating procedures for facial recognition technology and how the relevant authorities should employ it.
What’s wrong with Facial Recognition Systems?
It may seem unexpected in the context of the George Floyd protests, given that facial recognition software could help law enforcement capture suspects. But the truth is, this decision was a long time coming. Numerous studies have shown that current artificial-intelligence-based systems are far from perfect. False negatives cause identifications to be missed, while false positives lead to the wrong person being accused of a crime they didn’t commit.
Is it the right move in the direction towards establishing a racially just America?
The belief that police and broader government use of facial recognition technology can be regulated to ensure fair use is fundamentally flawed. According to IBM’s CEO, ending the research, development, and sale of facial recognition technology is essential to establishing racial justice. He added that IBM firmly opposes and will not condone the use of any technology for mass surveillance, racial profiling, or any other violation of human rights and freedoms.
Activism Is Bearing Fruit
Activism seems to be working, whether it’s people taking to the streets in huge numbers to protest racism or the many celebrities speaking up for civil liberties. The broader movement to ban facial recognition appears to be gaining momentum with time. The historic demonstrations that continued throughout June and the better part of this month show that the public is unwilling to compromise on privacy and security while tech companies profit from surveillance systems that harm society’s most vulnerable members.
The Dangers of Facial Recognition Technology
Amazon’s Rekognition program has been particularly problematic. In a 2018 study, the faces of sitting members of the US Congress were run through the program, and 28 of them were wrongly matched with people who had been arrested for crimes in the past. This was just one of many reasons why civil liberties organizations began demanding that Amazon stop supporting government surveillance infrastructure.
Critics argue for a national ban on government use of face surveillance of any sort, because even if the technology is heavily regulated, its deployment by the government will only worsen a policing crisis that predominantly affects Black people, immigrants, and homeless members of society.
Not The Only Tool
Although facial recognition is the most problematic tool the tech industry offers to policing, it is not the only one. Even though Amazon has stopped supplying Rekognition, the Amazon-owned Ring “smart doorbell”, a sophisticated surveillance tool, is still being provided to 1,300 police departments. The technology lets police send batch email requests for footage to all residents within an area of interest, making it easier to identify protesters. These partnerships intensify suspicion, worsen racial profiling, and perpetuate police harassment of minorities, especially Black Americans.
Not Just An American Issue
The problem of flawed facial recognition is not just an American one; it has been observed in other countries with strong civil liberties protections as well. A study of one such surveillance program in London found that, of the 42 matches suggested by the tool, only eight were verifiably correct.