Privacy and security are two things most Americans cherish, and a dilemma over which is more important has long plagued citizens. Many wonder where to draw the line in giving up privacy to attain solid security. Others feel privacy is less vital than safety and security, and prioritize the latter. As facial recognition technology becomes more of a reality, it poses an important question to the public: is more security worth losing the last of your privacy?
My answer is a resounding no. I do not feel comfortable knowing companies like Clearview AI, led by Hoan Ton-That, have scraped billions of photos from the internet to identify random people on the street. I do not feel comfortable knowing that hundreds of law enforcement agencies now have facial recognition technology in their hands, and will almost certainly use it to make arrests, even though facial recognition technology is far from 100 percent accurate.
For instance, facial recognition accuracy fluctuates across races and genders: the technology misidentifies women and people of color, especially black women, at higher rates than white people. According to the National Institute of Standards and Technology, facial recognition algorithms exhibit “higher rates of false positives for Asian and African American faces relative to images of Caucasians.” In testing, multiple facial recognition tools have demonstrated disproportionate error rates across races, including the disturbing 2015 Google Photos incident, in which the software misidentified two black people as gorillas.
The organization Fight for the Future, which advocates for protecting rights and freedoms in the digital age, ran faces from UCLA’s public records through Amazon’s commercially available facial recognition software, Rekognition, comparing them against a database of mugshots. It found that 58 of 400 photos of UCLA student athletes and faculty members were incorrectly matched with images from the mugshot database, and that “the vast majority of incorrect matches were of people of color.”
Even if facial recognition software provided more benefits than harm, which it has not yet proved to do, it seems absurd to champion such technology as advanced and accurate when it repeatedly mistakes people of color for individuals with whom they share practically nothing in common except skin color. Considering that minorities are already disproportionately targeted by law enforcement in the US, allowing police to use inaccurate facial recognition technology is dangerous and will lead to more racial profiling.
Now, if the inaccuracy of facial recognition doesn’t scare you, the accuracy should. The aforementioned company Clearview has acknowledged that this technology may become accessible to the general public in the not-so-distant future. Without any regulations to govern the use of facial recognition technology, the end of privacy for all seems inevitable. Instead of relying on security cameras, stores could automatically identify you the moment you walk in. Police using this technology to solve crimes could very well end up investigating an innocent person who merely resembles the perpetrator, with no justification beyond facial similarity.

And if – more likely, when – the public can use this technology, random strangers could hypothetically learn everything about you from a picture alone. No more walking down the street with your family unnoticed; with facial recognition software, people, some with malicious intentions, could instantly know almost anything about not only you but your family, and do anything with that information. Someone could find your social media accounts, your contact information, your salary, even your address, merely by seeing your face.
We should all fight for our privacy and refuse to allow corporations to sell our images without pushback. This technology is unregulated, dangerous, and vulnerable to hackers. According to CNN, Clearview AI was recently breached, exposing its entire client list to the intruder.
This technology is not something to brush off or worry about later. These startups are using your pictures today to promote hazardous technology under the guise of security. Many universities, including Duke and Ohio State, are considering using this technology, and some already use it, despite fears from students. As citizens, we sometimes feel comfortable giving up aspects of our privacy to feel secure. However, we should not look to facial recognition as an avenue to increase security at the expense of privacy, because supporting widespread use of this technology is essentially a call to dismantle privacy and security at the same time.
Ultimately, facial recognition is not only anti-privacy, it’s anti-security as well, with its inaccuracies and its looming potential to enable stalking, profiling, information selling, and more. And it’s not far from falling into the wrong hands.
(Sources: WSJ, NYT, CNN, NIST, Vox, The Guardian, The Atlantic)