8.28.24 – Biometric Update
Cautions come in wake of Detroit settlement
Across the U.S., policy blows are being exchanged over the use of facial recognition by state police. As law enforcement continues to trumpet the value of using biometric facial matching to investigate crime – even as it balks at having FRT turned in its direction – representatives from the American Civil Liberties Union (ACLU) are pushing for stronger regulations to address the potential for bias or misuse.
At the heart of the discussion is the issue of racial bias. Of the seven cases in which wrongful arrests have been made based on false biometric matches by facial recognition systems, six targeted Black people. “Supporters of police using face recognition technology often portray these failures as unfortunate mistakes that are unlikely to recur,” says an article from the ACLU, published in April 2024. “Yet, they keep coming.”
The latest round of critiques follows the settlement of a lawsuit filed by the ACLU of Michigan and Robert Williams, a Detroit man who was wrongfully arrested in front of his family after being misidentified by facial recognition software. Under the settlement, the city of Detroit will adopt the strongest policy in the U.S. governing police use of facial recognition, which prohibits police from arresting people based solely on facial recognition results, or on the results of photo lineups built from FRT matches – a move the ACLU hopes to see other jurisdictions follow.
Maryland ACLU warns against FRT vendors using ‘illegally collected faceprints’
The ACLU’s Maryland chapter has published a summary of a letter it sent to the Maryland State Police (MSP), highlighting “baseline protections that must be incorporated into the model statewide policy on facial recognition technology,” according to a release from the group.
“Facial recognition technology in the hands of police is dangerous,” says Nate Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project. “The technology generates higher rates of false matches for Black people, People of Color, and women, and we’ve seen multiple wrongful arrests of Black people because police let false matches from the technology taint subsequent witness identifications. Plus, the threat of police using FRT to engage in pervasive tracking and surveillance is chilling.”
“Although the best protection against abuse is to stop police from using this technology at all, there are serious steps the Maryland State Police must take now to minimize the room for abuse.”
The ACLU emphasizes three main points. First, echoing the Detroit decision, no arrests should be made using facial recognition technology and a photo array, because face biometrics are not enough to establish probable cause. “Even when FRT generates a false match,” says the letter, “it will almost always look so much like the actual suspect as to taint the reliability of the photo array.”
The second point holds that facial recognition technology should not be used for surveillance of live or recorded video.
Finally, “law enforcement agencies should not contract with private FRT matching databases containing non-consensually or illegally collected faceprints, including the abusively created database from Clearview AI.”
Mall of America FRT deployment could affect 40M annual visitors
In Minnesota, facial recognition has been deployed at Mall of America (MOA) in partnership with local law enforcement and provider Corsight AI. In a published statement, the state’s ACLU chapter says the deployment raises concerns about “the misleading nature of performance scores for this tech, the lack of transparency and proper government oversight, and the quick descent of our society becoming a surveillance state and the insidious effects that has on our democracy.”
The Minnesota chapter goes deeper than its Maryland counterpart in arguing that facial recognition is not accurate or reliable enough to be used to establish probable cause.
“MOA states in their rollout that the particular software they’ll be using has a 99.3 percent accuracy rate, tested by the DHS and NIST.” (The 99.3 percent figure comes from DHS testing.) But, says the blog, while the statistics “serve as great marketing for vendors hoping to sell to private companies and government agencies, these performance scores can hide deeper disparities and don’t account for real world scenarios.”
Corsight ranked at the top of NIST’s most recent facial recognition benchmarking test (FRVT 1:1 verification) and has been credited with proactively reducing bias. But the ACLU says such high accuracy scores are achieved under particular, optimized conditions: the labs conducting tests do not necessarily use the same databases as law enforcement, and live video facial recognition remains too unreliable. “And federal testing by NIST shows that even face recognition algorithms that have relatively high accuracy rates in testing can have much higher rates of false positives for Black men than white men.”
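The mechanics behind that warning are simple enough to sketch. In the minimal illustration below, every number is invented for demonstration (the group labels, search shares, and per-group false match rates are hypothetical, not measurements from NIST, DHS, or Corsight); the point is only that a single blended accuracy figure can look reassuring while one group fares ten times worse:

```python
# Illustrative only: hypothetical per-group false match rates, not
# measured figures from NIST, DHS, Corsight, or any other source.
groups = {
    # group label: (share of all searches, false match rate)
    "Group A": (0.70, 0.002),  # 0.2% of searches return a false match
    "Group B": (0.30, 0.020),  # 2.0% -- ten times higher
}

# A blended, population-wide score averages the groups together, so
# the headline number can hide a large disparity between them.
blended_fmr = sum(share * fmr for share, fmr in groups.values())
print(f"Headline accuracy: {1 - blended_fmr:.2%}")  # 99.26%

for name, (share, fmr) in groups.items():
    print(f"{name}: {fmr:.1%} false match rate")
```

Note how the blended figure lands near the 99.3 percent headline number even though the two groups’ error rates differ by an order of magnitude.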
Besides which, the Mall of America hosts 40 million visitors every year – more than the combined populations of North Dakota, South Dakota, Iowa and Canada. Its facial recognition system is intended to identify “Persons of Interest” – but that term is not clearly defined. “What we know for sure is that innocent civilians who shop at MOA will be surveilled and tracked throughout their visit with no reasonable suspicion given by the police or MOA. The immense amount of foot traffic within MOA should not be dismissed.”
“Remember that in order to catch the ‘bad guys’ with facial rec you need to surveil everyone. Misidentifying even a small percentage of those people would still mean thousands of wrongful identifications.”
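The arithmetic behind that last point is worth making explicit. A minimal sketch, treating each of MOA’s 40 million annual visits as a single screening (an assumption the article does not spell out) and using purely illustrative error rates, including the 0.7 percent implied by the advertised 99.3 percent accuracy figure:

```python
# Back-of-envelope scale of misidentification at the Mall of America.
# The 40 million annual visitors figure is from the article; the
# false positive rates below are illustrative assumptions, not the
# measured performance of Corsight or any other system.
annual_visitors = 40_000_000

for false_positive_rate in (0.001, 0.005, 0.007):
    misidentified = annual_visitors * false_positive_rate
    print(f"At a {false_positive_rate:.1%} error rate: "
          f"~{misidentified:,.0f} false identifications per year")
```

Even at the most charitable rate in the sketch (0.1 percent, or roughly 40,000 false matches a year), “thousands of wrongful identifications” reads as a conservative description.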
California ACLU calls for full ban on police use of facial recognition
In California, meanwhile, legislators continue to give a thumbs-down to government and police use of facial recognition. In mid-August, California’s Senate Appropriations Committee quashed a bill that would have sanctioned police use of FRT across the state – marking the third time in five years that legislators have declined to greenlight the biometric tech.
A blog from California’s ACLU says that, had Assembly Bill 1814 (AB 1814) passed, “the bill would have created one of the worst facial recognition laws in the country, blessing the use of government facial recognition under the pretense of reining it in.” Among its perceived offenses was allowing police to build “facial recognition databases from state photo records, placing anyone whose photograph is in the DMV or another database into a perpetual virtual lineup.”
While Maryland activists say facial recognition is not enough on which to base an arrest, their California counterparts don’t even buy the argument that additional evidence would solve the problem. “It would have been extremely easy for the police to find an additional reason to justify a person’s arrest. For example, we’ve seen police use facial recognition to create a photo lineup of doppelgängers of people who look like, but are not, the suspect – a recipe for disaster that has time and again resulted in the wrong person being chosen.”
The ACLU says the decision sends a clear message: “Californians do not support any law that would sanction the government’s use of facial recognition.” In the spirit of third times being a charm, it also recommends that regulators take the bill’s failure as “a mandate to protect their constituents from this privacy-eviscerating technology.”
“The only surefire way to do that is by prohibiting the police from using it” – as San Francisco did in 2019 (only to find that prohibition is far from a ‘surefire’ solution).
In conclusion, the group leaves no ambiguity about its position: “even perfectly accurate facial recognition is anti-democratic at its core.”
In July, the Northern California ACLU published a guide, “Seeing Through Surveillance: Why Policymakers Should Look Past the Hype,” which further addresses police use of technology for mass surveillance, arguing that “surveillance is not safety.”
NAIAC public meeting to discuss facial recognition in law enforcement
Expect to hear from the ACLU at an upcoming virtual public meeting of the National Artificial Intelligence Advisory Committee (NAIAC), at which the committee will receive a briefing from its Law Enforcement Subcommittee (NAIAC LE) on the benefits and drawbacks of using AI and facial recognition in law enforcement.
The NAIAC LE briefing looks set to extol the virtues of AI as a policing tool, while recognizing the associated risks. Using its own convenient statistics, a draft of the NAIAC LE document says that “while some communities and civil rights organizations oppose all use of FRTs by law enforcement,” public opinion is less settled, with 46 percent believing that “widespread use of facial recognition technology by police would be a good idea,” compared to 27 percent who say it would be a “bad idea.”
The ACLU is likely to contest such findings during the open meeting, which will be held via web conference on Wednesday, September 4, 2024 from 2:00 p.m. to 5:00 p.m. ET.