1.22.24 – SIW – Timothy J. Pastore, Esq.
Europe leads the way, but the U.S. is sure to follow
This article originally appeared in the January 2024 issue of Security Business magazine. Don’t forget to mention Security Business magazine on LinkedIn and @SecBusinessMag on Twitter if you share it.
Nobel Peace Prize recipient Christian Lous Lange said: “Technology is a useful servant but a dangerous master.” That phrase is being put to the test today, as artificial intelligence (AI) and biometrics are just two of the many categories of useful technology changing the world across professions and in everyday life.
My law firm receives solicitations daily from companies seeking to sell us the latest legal technology, such as AI-generated legal research, drafting tools, or e-discovery platforms. In mid-2023, a New York lawyer used the AI tool ChatGPT for legal research. While that is unconventional, there is nothing inherently wrong with using it for research; however, that presumes the lawyer confirmed the accuracy of the research. He did not.
The case involved a man suing an airline over an alleged personal injury. The lawyer submitted a brief that cited several previous court cases. The defense lawyers for the airline alerted the judge that they could not find several of the cases cited in the brief, and the court determined that at least six of the cited cases were not real – ChatGPT had invented them, complete with fictitious quotes. Thus, the brief contained fabricated information that could not be relied upon in a court filing.
The lawyer claimed that he was unaware that the content generated by ChatGPT could be false. This was foolishly naïve and resulted in disciplinary proceedings against the lawyer for failing to meet professional standards.
EU Leads the Way on Regulation
The security industry – even more than the legal industry – is heavily reliant on technology. AI and biometrics present great opportunities, but also great risks. How will these technologies advance in the security industry? How can we regulate these technologies without hindering innovation?
The U.S. is lagging behind Europe, which has been working since 2021 to devise a regulatory framework for AI and biometrics.
In the U.S., regulation of AI has not taken shape; however, regulation of biometrics is developing, with various states and cities limiting the use of biometric technologies.
In December 2023, the European Union seemingly made some progress. Specifically, it passed a set of provisional rules governing the use of AI in biometric surveillance and AI systems, such as ChatGPT. The rules are not specific to the security industry, but they undoubtedly encompass it. While many details are yet to be resolved, this represents the first major world power to enact laws governing AI.
Editor’s Note: Read more about the potential implications of the EU’s AI Act in Jon Polly’s Tech Trends column from the October issue of Security Business at www.securityinfowatch.com/53073484.
The European Parliament is pushing for stricter restrictions on biometrics, stemming from fears that the technology could enable mass surveillance and infringe on citizens’ privacy and other rights. Consequently, the law bans some uses of AI that infringe on human rights and civil liberties. It also identifies certain “high-risk” AI systems (such as systems intended to monitor workers and those involving critical infrastructure, law enforcement, border control, etc.).
For those systems, the developers must provide details on training, ensure transparency for users, and offer opportunities for appeal and redress. The rules ban cognitive behavioral manipulation, the untargeted scraping of facial biometrics from the internet or CCTV footage, social scoring, and biometric categorization systems (i.e., systems that infer political, religious, or philosophical beliefs, sexual orientation, or race).
The EU regulatory framework allows consumers to bring complaints if they believe their rights were violated by the use of these systems. Violators are subject to fines – which can vary depending on the circumstances.
Possible Exceptions
Not all are pleased with these developments, as some EU member countries want to use AI to fight crime and terrorism. As a compromise, the rules allow real-time biometric surveillance in public spaces for certain crimes, for the prevention of genuine threats such as terrorist attacks, and for searches for people suspected of the most serious crimes. The rules also will not apply to systems used exclusively for military or defense purposes, to systems used for the sole purpose of research and innovation, or to people using AI for non-professional reasons.
Some believe these rules will stifle innovation and are yet another example of hyper-regulation. Others think the rules do not go far enough – claiming, for example, that public use of facial recognition violates the privacy rights of the public.
The legislation is expected to enter into force early in 2024 and should apply two years after that. That timetable is ironic, because the technology may be very different by then!
Although the EU regulations do not apply directly in the U.S., companies that do business with the EU will have to study and comply with them. Further, the laws could become the blueprint for other governments – including individual U.S. states, cities, or the U.S. federal government.
The regulators remain two steps behind the technology, but, in Europe, at least they are off the couch.