4.21.23 – SSI – Scott Goldfine
AI holds the key to realizing dreams like the elimination of false alarms, real-time response, and even predicting and preventing incidents.
Attending ISC West while SSI’s annual AI Issue was being produced could not have been more timely: the loudest buzz (cynics might say hype) at what appeared to be the show’s most robust turnout ever was unquestionably artificial intelligence. That makes sense, as AI is going to transform the way modern society lives, works and conducts business across industries. Electronic security — with its access control, video surveillance and other systems’ capacity to aggregate large datasets and its mission to safeguard people and assets — is near the top of that list. AI holds the key to realizing dreams like the elimination of false alarms, real-time response, and even predicting and preventing incidents. The prospects are astounding and game-changing.
No wonder that on the ISC West expo floor, to varying degrees, most of the largest and most familiar vendors, and especially newer entrants, promoted either AI-centric products or the incorporation of the technology into their offerings, often in concert with specialized partners. And while the advertised and, in some cases, demonstrated capabilities of many of these solutions are impressive, the rollout and application of AI-heavy security solutions are still in their infancy. In fact, many are still trying to get a handle on what constitutes AI.
According to IBM, artificial intelligence combines computer science and robust datasets to enable problem-solving. It also encompasses the subfields of machine learning and deep learning, which are frequently mentioned in conjunction with AI. While the possibilities are fascinating and exciting (some would say scary), we are on the brink of a bold new world that demands proceeding with great care, diligence and mindfulness.
As our industry advances AI’s real-world security capabilities and fleshes out its use cases, we must endeavor to avoid the premature, overhyped missteps that beset technologies like video analytics and biometrics (both now greatly improved and viable) following 9/11. Deep and ongoing communication and education with the general public and legislators are essential, lest we find ourselves in the kind of quagmire that has hampered the widespread adoption and deployment of a powerful technology like facial recognition.
“With AI, you don’t need to know anything about that individual, their identity, their demographic characteristics, all this stuff that is incorporated into facial recognition,” Phillipe Sawaya, co-founder of AI startup Percepta, told me. That business was acquired by ADT Commercial, where Sawaya was recently named the company’s first-ever director of artificial intelligence. “You don’t need to know any of that to determine what’s happening. Their current behavior, what’s going on around them, that’s all that should be needed to make informed analytic decisions. Frankly, the backlash against facial is a good indicator for the industry that our customers are rightfully critical and paying close attention to if we need that level of invasiveness to accomplish the end goal.”
Coming out of the gate with a baseline of reasonable perceptions and expectations, compelling applications, and well-informed, well-trained providers is key to successful acceptance, adoption and advancement. The overriding factor is ensuring the use of AI serves not only the best interests of clients but also the public good at large.
“We’ve pursued ethical AI since the start for just basic human ethical reasons. As AI becomes more commonplace and more capable, there are going to be a lot more reasons for companies to share our concern for ethical AI, be it the development of regulation or limitations on where and how AI can be used. More and more, ethical AI is going to become not so much an ethical aspirational goal, but really a practical must for the development of AI solutions,” Sawaya adds.
“What that means on the technology side is there’s going to be a lot more standardization of what it means to be ethical. Every analytics company on the planet today probably says, ‘Unbiased or privacy-preserving,’ or something like that on their website. But I’d wager every company has a different definition of what that means. As that definition becomes standardized, there’ll be a lot of catch-up in the methods and tools to measure bias and to what degree something preserves privacy, and how the AI models run themselves.”
About the Author
SCOTT GOLDFINE, Editor-in-Chief and Associate Publisher
Scott Goldfine is Editor-in-Chief and Associate Publisher of Security Sales & Integration. Well-versed in the technical and business aspects of electronic security (video surveillance, access control, systems integration, intrusion detection, fire/life safety), Goldfine is nationally recognized as an industry expert and speaker. Goldfine is involved in several security events and organizations, including the Electronic Security Association (ESA), Security Industry Association (SIA), Security Industry Alarm Coalition (SIAC), False Alarm Reduction Association (FARA), ASIS Int’l and more. Goldfine also serves on several boards, including the SIA Marketing Committee, CSAA Marketing and Communications Committee, PSA Cybersecurity Advisory Council and Robolliance. He is a certified alarm technician, former cable-TV tech, audio company entrepreneur, and lifelong electronics and computers enthusiast. Goldfine joined Security Sales & Integration in 1998.