
Despite the objections of lawmakers and privacy advocates, facial recognition systems continue to be developed by the security industry at an exponential rate. (Image courtesy BigStockPhoto.com)

8.30.19 – SIW

Debate around the technology’s use continues to heat up as cities, states weigh legislation

The use of facial recognition by law enforcement authorities, school districts and others has been a hot-button issue of late. Concerns that the technology violates privacy rights and disproportionately misidentifies people of color have even led some cities, such as San Francisco and Oakland, to ban police and other government agencies from using it.

However, despite the objections of lawmakers and privacy advocates, facial recognition systems continue to be developed by the security industry at an exponential rate, aided by advancements in deep learning technology and video analytics. Some end-users, such as the Lockport School District in New York, have even continued to move forward with plans to implement facial recognition in the face of overwhelming opposition, believing that the security it provides far outweighs any potential threat to the privacy of students and staff members.

According to Kevin Freiberger, Director of Identity Programs at identity verification solutions provider Valid, facial recognition can be, and has been, successful at preventing fraud and identity theft. In fact, in 2018, he says one state that uses Valid’s facial recognition system identified 173 fraudulent transactions and was able to stop the issuance of driver’s licenses in those instances. It was later determined that 53 of those transactions involved residents whose identities had been stolen and someone else was trying to obtain a license in their name.

“The technology itself definitely adds value when used properly,” Freiberger says. “Where these bans are coming from, people don’t understand the technology very well and I think education would help with that. The fear is that the federal government might use this technology to try and mine state government license databases to track down people that have residence challenges in the U.S. That is a feared use case that is driving some of the bans of the technology across many markets.”

Others, like Sean McGrath, Digital Privacy Expert at ProPrivacy, a provider of privacy education and reviews of various privacy tools, feel the pitfalls of using facial recognition outweigh any potential good the technology could do. “As you look at the stories that are coming out, I think that the current focus by legislators and the media on the technology itself is a bit of a red herring,” he says. “We probably need to take a step back and take a macro view of the wider impact of an increasing use of surveillance on civilians. Once you look at the bigger picture and you look at the history of surveillance technology and how it has been used to surveil populations, it quickly becomes apparent, in our opinion, that legislation might not be enough and that there might not be a place for facial recognition in most verticals.”

Even with well-intentioned municipalities that put strong restrictions on the use of facial recognition by police and other government agencies, McGrath says that it will be hard for legislation to keep pace with the advancement of technology. “Legislation, in a conventional sense, is much too cumbersome and slow to be able to deal with these types of technologies, so by the time any meaningful legislation is passed, we are already on version 6.5,” McGrath warns. “If you take it at face value, yes, there are positive use cases, but as soon as you apply it to the real world and how the legislative branch works, the genie is out of the bottle by the time anything can be done to keep a handle on it, particularly for government agencies.”

Freiberger says one of the biggest misconceptions about facial recognition is the notion that hackers could wreak havoc if they were to somehow steal the biometric templates gathered by facial recognition systems, even without any accompanying identifying information.

“When you create a biometric template, you’re taking the source photo that you’ve taken of someone – a driver’s license photo, a frame captured from security camera footage, or whatever it is – you run it through a vendor’s algorithm and it outputs the template that gets matched,” he explains. “That data itself, after it goes through that algorithm, is just binary data that means nothing. It is literally just a bunch of ones and zeros, and if you store the template – the mathematical representation of the face – separately from the identifying information, such as the biographic information or source photo, that matching template data is basically worthless if it ever gets compromised. Good systems that are designed well and secure always take the match data and store it separately from the identifying information.”
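Freiberger’s point about separation is easiest to see as a data model. The sketch below is a minimal, hypothetical Python illustration – the vendor_algorithm function, the store names and the record fields are all assumptions for illustration, not any specific vendor’s product – showing a template kept in one store and the identifying information in another, linked only by an opaque ID.

```python
import hashlib
import secrets
from dataclasses import dataclass


def vendor_algorithm(photo_bytes: bytes) -> bytes:
    """Hypothetical stand-in for a vendor's proprietary template algorithm.
    A real system derives a mathematical representation of the face; here we
    just return opaque bytes to illustrate that the template alone carries
    no identifying meaning."""
    return hashlib.sha256(photo_bytes).digest()


@dataclass
class TemplateRecord:
    template_id: str   # opaque key shared by both stores
    template: bytes    # the "ones and zeros" - meaningless without the other store


@dataclass
class IdentityRecord:
    template_id: str   # the only link back to the template store
    name: str
    source_photo: bytes


# Stored and secured separately in a well-designed system
# (e.g., different databases with different access controls).
template_store: dict[str, TemplateRecord] = {}
identity_store: dict[str, IdentityRecord] = {}


def enroll(name: str, photo: bytes) -> str:
    """Create a template and file it apart from the biographic data."""
    template_id = secrets.token_hex(16)
    template_store[template_id] = TemplateRecord(template_id, vendor_algorithm(photo))
    identity_store[template_id] = IdentityRecord(template_id, name, photo)
    return template_id
```

In an arrangement like this, a breach of the template store alone yields only unlabeled byte strings – the scenario Freiberger describes as “basically worthless” to an attacker.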

Besides storing biometric and identifiable information separately from one another, Freiberger recommends that those thinking about leveraging facial recognition systems in their own applications educate their stakeholders and be transparent about what the technology will be used for.

“If I’m a school and I want to use this, I don’t want to do it in an ambiguous way,” he adds. “I want to make it very obvious to parents and students that, ‘hey, we’re using these photos so you can check out in the lunch line, it’s not used for any other purposes,’ and that there is a sharing policy for those photos. You want to make sure you don’t share this information with anyone else, it’s simply used for the point-of-sale system when you check out in the lunch line.”

However, McGrath says it is not hard to imagine government entities and others leveraging facial recognition beyond their stated purposes. “The biggest thing for me is function creep and the fact that these tools are largely invisible and that the scope for their use easily expands beyond what their original intent was,” McGrath adds. “In the case of a school, it is all well and good to say we’re implementing technology to improve safety at points of entry to ensure ‘person x’ and ‘person y’ are not coming onto the school grounds, but it is not a quantum leap to see how that starts to be used in different ways.”

McGrath says the secure storage of biometric and other data will become a paramount concern for schools and other government agencies that move forward with facial recognition. “It is almost a mathematical (certainty) that, at some point… some of this data is going to fall into the wrong hands,” he says. “So, it is really about bolstering how that data is stored and encrypted.”
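The article does not prescribe any particular tooling, but as a rough illustration of “bolstering how that data is stored and encrypted,” the snippet below encrypts a stored template at rest using the third-party Python cryptography package (Fernet, an authenticated symmetric scheme). This is one common approach, not the method of anyone quoted here, and the key handling is simplified for the sketch.

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would come from a secrets manager or HSM,
# never be generated and held alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)


def encrypt_template(template: bytes) -> bytes:
    """Encrypt a biometric template before it is written to storage."""
    return fernet.encrypt(template)


def decrypt_template(token: bytes) -> bytes:
    """Decrypt a stored template for matching; raises if the ciphertext was tampered with."""
    return fernet.decrypt(token)
```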

Moving forward, Freiberger believes that some of the fears currently surrounding facial recognition will be assuaged, much as early objections to fingerprint readers eventually faded; those readers have since become ubiquitous in smartphones and other applications.

“People said, ‘oh, there’s going to be latent fingerprints on windows and doors and when a crime is committed later they’re going to pick up latent fingerprints and identify the wrong individual.’ What the public didn’t realize is that the biometric data is used in a broader investigation,” he says. “Just because there is a facial match doesn’t mean it is the same person, it could be a false positive or you could get false negatives where you don’t catch the match. If it is a match, you use that with other tools like any good detective would – it is not a binary yes or no, it is a probability.”
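Freiberger’s point that a match “is a probability, not a binary yes or no” maps to how matching systems generally work: the comparison produces a similarity score that is checked against a tunable threshold. The sketch below is a generic illustration (cosine similarity over numeric templates and a made-up threshold value), not any specific vendor’s matcher.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two numeric face templates, in the range [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Hypothetical threshold; raising it trades false positives for false negatives.
MATCH_THRESHOLD = 0.85


def score_candidate(probe: list[float], candidate: list[float]) -> tuple[float, bool]:
    """Return the raw score and whether it clears the threshold.
    Even a "match" is only a lead to be weighed alongside other evidence,
    as Freiberger notes - not a definitive identification."""
    score = cosine_similarity(probe, candidate)
    return score, score >= MATCH_THRESHOLD
```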

McGrath warns, however, that if checks and balances are not placed on the technology in short order, the nation is headed toward a dystopian future. “If it is not checked quickly… we could be moving towards that 1984 scenario,” he concludes.

About the Author:

Joel Griffin is the Editor of SecurityInfoWatch.com and a veteran security journalist. You can reach him at joel@securityinfowatch.com.

 

Industry pushes back on facial recognition misconceptions

SIA’s Jake Parker discusses how the association is dispelling falsehoods about the technology in this Q&A interview

While privacy advocates and policy makers decrying the use of facial recognition have garnered numerous headlines in recent months, security industry advocates have been working hard to clear up many of the misconceptions surrounding the technology while pushing back against organizations like the ACLU that have been advocating for bans on the use of these systems.

Unlike the “Big Brother” surveillance tools that much of the mainstream media portrays them as, facial recognition systems today are not being used simply to gather biometric data on the masses, but rather to improve security and reduce fraud in a wide variety of applications.

To learn more about what the industry is doing to dispel some of the falsehoods being promulgated about facial recognition technology, Security Business magazine Editor-in-Chief Paul Rothman recently caught up with Jake Parker, Senior Director of Government Relations for the Security Industry Association (SIA), to discuss what is being done to educate lawmakers and make the industry’s voice heard on this controversial topic.

Rothman: Where does SIA stand on the government relations side of the facial recognition issue?

Parker: First of all, we certainly are encouraging policymakers to take a second look when they are being asked to consider legislation at whatever level that would prohibit the use of facial recognition technology or put a moratorium on it. The facts don’t justify that kind of action. Of course there are some concerns with the use of biometrics in general, as well as data privacy concerns around how the data is used. We support making sure the public is informed about how the technology is being used and that proper transparency and accountability measures are in place. We just think the concerns that are being threaded into the media narrative out there are misleading and shouldn’t be used to justify bans on the technology.

We are talking to members of Congress and state legislators…we are working with our members to help educate policymakers and the public about how the technology works because a lot of the misconceptions stem from technical things that are difficult to explain in everyday terms. That’s why, as a first step, we put out a document about what those misconceptions are as a starting point, but of course, there’s a lot more work to be done in this area. Overall, we need to do a better job of educating folks about how the technology works, but also how it applies to the real world here in the U.S.

How can security integrators help out in this effort?

Parker: We should be in a situation where integrators are advising their customers on best practices for using these technologies, and this should mitigate some of the concerns we are hearing. The security industry really needs to take up this issue because there are other security technologies that these same activists are trying to suppress the use of, including other video surveillance technologies.

Generally, the voice of organizations like the ACLU tends to be louder than others – how does SIA rise above that to get the message out?

Parker: This is a big challenge. The ACLU is extremely well funded and there are a number of activist organizations they are working with to get this narrative out there in the media regarding facial recognition that is really based on fear. These concerns are not some kind of grass-roots concerns that everyday Americans have – this is a very well-coordinated and funded campaign by activists to limit the use of this technology. There is a lot of angst about big tech and the use of data, and since there’s a lot of opposition to that, this is a way for their opponents to make it more difficult to collect our data. We are also hearing a lot about China and how they use facial recognition technologies…the reality is the types of things they are doing with technology would just not be possible in the United States under our constitutional system and existing laws.

Those who are opposing facial recognition and want to shut down its use are also purposely conflating the many different types of uses of the technology – for example, equating the use of facial recognition for visitor management and access control in a building with government use of the technology. They are lumping everything together into one. Activists have been unsuccessful for the most part in recent years, but they got a few successes this year in getting cities to ban facial recognition through ordinances. They are now trying to capitalize on that.

What would a ban of facial recognition impact?

Parker: We are trying to communicate that facial recognition is not new; in fact, in some applications, it is actually a fairly mature technology. Law enforcement, for example, has been using it for about 7-8 years. When you talk about banning technology, you are talking about taking tools off the table that law enforcement has been able to use successfully for years to fight crime. We would be worse off without it.

Is there a happy medium that can be reached between the people who are afraid and the rest of us who use the technology or see value in it?

Parker: Ultimately, there can be policies put in place to address these concerns. Whether they will please the activists is very much in doubt, but there are things we can do as an industry and at the policy level to make sure that everyone is comfortable with how the technology is being used. The challenge is that it is much easier to make an argument that you should be concerned or afraid of something using sensationalized stories or information than to explain in detail what is really happening.

Are there any facial recognition success stories that we should be touting as an industry?

Parker: We have included some examples in our Face Facts paper and we are working on more. It has been used to find missing children, which has been widely reported. The most common use is state driver’s license bureaus. Not all share their information with law enforcement, but they have been using facial recognition to fight identity theft and fraud for a long time. The City of New York has put out some stats on their use of facial recognition in 2018, and I believe it was used to assist law enforcement in identifying wanted criminals, resulting in about 1,000 arrests – most were cases where they wouldn’t have been able to identify them at all without the help that facial recognition provided. Of course, there’s also a huge counter-terrorism benefit.