6.12.19 – SIW –
If you attended ISC West in April, you were almost certainly bombarded with the industry buzz term and marketing tactic du jour: artificial intelligence (AI).
While the term itself is not new to the security industry, at some point in the last 12 months manufacturers far and wide have adopted it as a key selling point and major product differentiator. It seems as though every developer of video analytics, sensors, IoT devices, and smart widgets of every variety was claiming its product was “AI-enabled.”
It got me wondering about the definition of the term, whether some folks were using it a bit too liberally, and whether we were setting ourselves up for an all-too-familiar failure scenario: stretch the definition of a technology too broadly, overpromise capabilities, underdeliver on performance, and watch it become a difficult topic at industry cocktail parties.
To help ensure we are all defining AI correctly – myself included – I’ve included some definitions that help contextualize the use of AI in our industry, which generally fall into two categories:
Machine Learning: At a simple level, machine learning involves teaching computers to use algorithms to deconstruct a bunch of data, learn from that data, and use it to make helpful predictions. Video analytics does this well today – we teach a computer what an action looks like using algorithms and a bunch of sample data; the computer then looks for that action in a video feed and predicts when it has occurred. To improve the accuracy of the prediction, we need to provide the algorithms with more information about what to look for. Over time, manufacturers have done that, and the video analytics at our disposal have gotten more accurate.
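The loop described above – learn from labeled sample data, then predict on new data – can be sketched with a deliberately tiny nearest-centroid classifier. Everything here is invented for illustration: the feature vectors stand in for whatever numeric descriptors a video-analytics pipeline might extract from a frame, and real products use far more sophisticated algorithms.

```python
# Toy sketch of the machine-learning loop: learn from labeled
# samples, then predict a label for new, unseen data.
from math import dist

# 1. Sample data the algorithm learns from: (feature_vector, label) pairs.
#    Features and labels are invented stand-ins for illustration.
samples = [
    ((0.9, 0.1), "loitering"),
    ((0.8, 0.2), "loitering"),
    ((0.1, 0.9), "walking"),
    ((0.2, 0.8), "walking"),
]

def train(samples):
    """Learn one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for vec, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def predict(centroids, vec):
    """Predict the label whose learned centroid is nearest the new data."""
    return min(centroids, key=lambda label: dist(centroids[label], vec))

centroids = train(samples)
print(predict(centroids, (0.85, 0.15)))  # prints "loitering"
```

Feeding the algorithm more sample data moves the centroids closer to reality – which is exactly the "more information makes the prediction more accurate" dynamic the paragraph describes.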
Deep Learning: The next evolution of AI, deep learning effectively takes this a step further by teaching the computers to come up with their own rules and algorithms for improving accuracy. Deep learning machines use neural networks which pass data through several rounds of processing, allowing them to “think” about data on several levels before drawing final conclusions. Instead of thousands of data points involved in a decision, there are millions. This requires some serious computing power, but the outcomes can be much more accurate and the use cases more complex.
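The "several rounds of processing" idea can be made concrete with a minimal forward pass through a toy neural network. The weights below are hard-coded purely for illustration; in a real deep-learning system the network learns its own weights from millions of data points.

```python
# Minimal sketch of layered processing in a neural network: data
# passes through successive layers, each transforming the output of
# the previous one, before a final conclusion is drawn.
from math import exp

def sigmoid(x):
    """Squash a weighted sum into a value between 0 and 1."""
    return 1.0 / (1.0 + exp(-x))

def layer(inputs, weights, biases):
    """One processing round: weighted sums followed by a nonlinearity."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(inputs, layers):
    """Pass data through every layer in turn."""
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

# Two processing rounds: a 2-neuron hidden layer, then a 1-number output.
# Weights are invented for this example, not learned.
network = [
    ([[0.5, -0.6], [0.8, 0.2]], [0.0, 0.1]),   # layer 1: 2 inputs -> 2
    ([[1.0, -1.0]],             [0.0]),        # layer 2: 2 -> 1
]
score = forward([0.9, 0.1], network)[0]
print(score)  # a confidence-like number between 0 and 1
```

Stacking many such layers, each with thousands of weights, is what lets a deep network "think" about data on several levels – and what drives the serious computing requirements mentioned above.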
My suspicion, informed by conversations with industry professionals and manufacturers, is that the promise of deep learning has led many machine learning companies to newly adopt “AI” as a primary marketing term. While the label is technically correct, it is important to recognize the difference, as deep learning represents a large step forward in technology.
While deep learning is still in its early stages, it can potentially expand our computing capabilities to look for complex behaviors in streaming video feeds rather than simple actions. This may help reduce the strain on security staffing and make more effective use of video surveillance system investments; however, these systems will require additional education on how to design, specify and deploy them.
Deep Learning Implementation
I recently spoke with Jeff Gurulé, the Founder and Principal of Obsys Group, a security consulting firm based in California that has helped clients implement AI solutions. Gurulé is also on the Strategic Advisory Board for Ambient.ai, one of a small handful of firms developing true deep learning solutions in the security industry. This puts him at a unique intersection of technology provider and implementer.
“Deep learning AI is ready for adoption in our industry, but we need to first understand how artificial intelligence works, address privacy concerns, and how to implement it to solve specific safety, security or business risks,” Gurulé says. “We also need to make sure we choose leading companies that truly understand AI and how to operationalize it to get the desired outcomes. Further, we must distinguish between real-time and recording/analytics technologies. Ambient.ai is a real-time technology, others are ‘post-event’ search engines for recorded video with limited intelligence and offering limited utility in a narrower scope.”
As security practitioners, it is incumbent upon us to work with clients to define practical use cases and build AI solutions around them. “The more we learn about artificial intelligence and its application in safety and security, we quickly identify a wide range of use cases,” Gurulé says. “The challenge is prioritizing all the potential use cases for actual deployment. At its most fundamental level, threat detection and response can be achieved with the use of artificial intelligence-powered tools, saving real human labor while improving and normalizing outcomes on a vast scale.
“The key is to choose an artificial intelligence technology that is built by a qualified team,” he adds. “I would also choose a platform that can offer a foundational technology that can leverage your company’s data or knowledge for the long-term. Also, target specific use-cases that, when implemented, can be replicated easily within both security and business contexts.”
In terms of challenges, Gurulé says that “the processing power for data – such as video – will struggle with cloud-based models, as they will need the on-premises hardware to perform artificial intelligence in real time. For now, this will not be an issue for security, as the on-premises model is still preferred to address privacy and data governance.”
Another impediment to the adoption of AI technologies will be social and governmental acceptance. “As security practitioners, we should design and promote a responsible use and applications of artificial intelligence and understand that AI will not eliminate the need for human cognition and contextualization of events, data and information,” Gurulé says.
Brian Coulombe is Principal and Director of Operations at DVS, a division of Ross & Baruzzini. He can be reached at email@example.com, on LinkedIn at www.linkedin.com/in/brian-coulombe, or on Twitter @DVS_RB.