“The amount of information that it can provide is almost priceless, due to the fact that you can’t get that kind of info from anywhere else,” Dean Papayoanou (’21) said, when speaking about the concept of facial recognition.
A new facial recognition company known as Clearview AI has gathered over three billion photos from the internet, sparking worldwide debate. The software scrapes publicly available photos and matches them to faces for law enforcement use; however, both the ethics and the effectiveness of the company are controversial.
Clearview AI is based in New York City and was founded in late 2017 by Hoan Ton-That. It gained widespread attention after a New York Times article, “The Secretive Company That Might End Privacy as We Know It,” published in early January, brought it into the spotlight.
According to Clearview, the technology is currently being used only by law enforcement and with caution. Its purpose is not to serve as evidence in court but to advance an investigation by identifying an unknown face. Over 600 law enforcement agencies in the U.S. and Canada, including the FBI, have used this technology in the past year.
Papayoanou recognizes the value and ease of the technology to identify criminals.
“With this kind of AI, it makes it so much easier for people to say, ‘since we have the technology, it would be a travesty if we didn’t use it,’” he said.
On the other hand, there are concerns surrounding just how reliable the information Clearview AI can provide on criminals.
Claire Wiest (’20) is worried that recognition technology can be less effective on people of color.
“Some facial recognition technology misidentifies black and brown people at twice the rate of white people,” she said. “That’s just in addition to the systematic bias, mainly in the US. It’s just making an existing problem worse.”
However, according to PBS, Ton-That claims that Clearview technology has a 99% success rate and has completed “independent testing to verify that we’re not biased and have no false positives across all races.”
Another controversial issue surrounding facial recognition technology in general is the preservation of personal privacy.
Ava Porter (’23) is uncomfortable with the idea of photos being stored in Clearview’s database.
“If they aren’t aware of this or if they didn’t give permission, I think that’s just not fair to the person because they’re completely unaware of that now,” she said.
Papayoanou agrees that facial recognition can endanger privacy.
“One of the benefits of the internet is that people have a sense of anonymity,” he said. “It brings up the entire question of just what rights [law enforcement] has to take away people’s rights to feeling safe and feeling that their own privacy is private.”
In a survey of 126 students conducted by The Standard, 75% agreed that facial recognition technology compromises personal privacy.
Clearview’s database includes public photos from millions of sites, including Facebook, YouTube and Instagram. It analyzes unique, identifying facial features to match photos of the same person. Though the database only collects publicly available photos, images that were once public but have since been deleted remain saved in its database.
For example, if a picture is uploaded to a public Instagram account and the account is later made private, or the picture is deleted, the photo remains in Clearview’s database.
Even though the software is only used by law enforcement, the idea of a database of images is a subject of concern for Wiest.
“Generally, I feel negative because the whole thing just feels like there’s not much privacy left,” she said.
Porter is also wary of infringements on privacy but recognizes the value of the technology for law enforcement.
“Law enforcement should be able to use it,” she said. “I think they’re within their rights to do so, for the protection of society.”
While Clearview AI currently provides its services only to law enforcement, Porter said she believes access will eventually be opened to the public.
“People need to better understand what is going on with the technology, set limits and find flaws that they can fix for everyone’s benefit,” she said.
However, Papayoanou does not think the technology will open to the public.
“Different types of events in history have shown that human beings, when they are capable of doing something, will often use it for both negative and positive things, but often negative,” he said. To avoid these negative consequences, Papayoanou believes facial recognition will not be released to the public.
Despite the controversy, development of facial recognition technology has continued to accelerate, even beyond Clearview AI. The London Met Police announced on Jan. 24 that they would use Live Facial Recognition (LFR) technology to help fight crime by identifying criminals.
While it relies on the same idea of facial recognition as Clearview AI, Live Facial Recognition scans live footage of public spaces in London and flags people on a watchlist, a list of people wanted by law enforcement. Police officers in the area are then alerted and can approach the flagged person.
There is debate over whether this technology is ethical or legal, but its development is ongoing. Of the surveyed students, 86% agreed that facial recognition technology will dramatically affect the future.
This story was originally published on The Standard on April 24, 2020.