5.31.19 – SSI –
The alarm industry has been contending with false alarms for well over a hundred years. We have made great strides in improving detection equipment such that most newer installations, installed properly, rarely have a false alarm problem caused by the components.
We have also developed and implemented effective processes such as Enhanced Call Verification (ECV), seven-day soaks on new installations and the ANSI/SIA CP-01 control panel standard, along with prescriptive false alarm fines and fee structures. Together, these have significantly reduced “calls for service” — dispatches to first responders.
But, as with all successes, there are ancillary issues that can complicate our progress. We have yet to fully address the effects, good or bad, on monitoring, for instance, or the overriding issue of human error.
There are three key performance indicators (KPIs) to track for reducing false alarms/service calls. The first is the number of alarms a central station operator, as opposed to any automated systems, must process. At Rapid Response, we call this Operator Handled Events (OHE).
The second KPI is the number of OHEs that get cleared by an operator using various methods to determine a crime is not in progress. These could be CP01, ECV, Abort Delays, SMS replies, or they could be video or audio verification, but the result is a determination not to dispatch the event to a first responder.
The third KPI is the false alarm rate — the share of the dispatches first responders receive that turn out to be false — and this is critical. Even though we have significantly reduced calls for service, the false alarm rate is still 98% or higher. And because the subscriber base continues to grow, the sheer volume of false dispatches first responders receive keeps increasing even as our percentage of calls for service decreases.
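The relationship among these three KPIs can be sketched with a little arithmetic. The figures and the function below are purely illustrative assumptions for one hypothetical month, not Rapid Response data:

```python
# Illustrative KPI arithmetic for a monitoring center.
# All counts below are hypothetical examples, not actual industry data.

def false_alarm_kpis(operator_handled_events, cleared_by_operator, false_dispatches):
    """Derive the three KPIs described above from raw monthly counts."""
    # KPI 1 input: events a human operator must process (OHE)
    # KPI 2: share of OHEs cleared without dispatch (via ECV, abort delays, etc.)
    dispatches = operator_handled_events - cleared_by_operator  # calls for service
    clear_rate = cleared_by_operator / operator_handled_events
    # KPI 3: share of the dispatches that turned out to be false
    false_alarm_rate = false_dispatches / dispatches
    return dispatches, clear_rate, false_alarm_rate

# Hypothetical month: 10,000 OHEs, 9,000 cleared before dispatch,
# and 980 of the 1,000 resulting dispatches turned out to be false.
dispatches, clear_rate, far = false_alarm_kpis(10_000, 9_000, 980)
print(dispatches)           # 1000
print(f"{clear_rate:.0%}")  # 90%
print(f"{far:.0%}")         # 98%
```

Note how the first two KPIs can look excellent while the third stays at 98%: clearing 90% of events before dispatch says nothing about the quality of the dispatches that remain.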
So why is this not working better? We have better equipment, improved processes, more systems with video and/or audio verification, but we are still netting the same percentage of calls for service.
Well, the answer is pretty obvious: it’s the people who own and operate the systems. We have greatly improved within the industry, but we have been unable to curtail errant human behavior.
Solutions Require Shift Outside Comfort Zone
The use of video and/or audio verification was initially going to be the silver bullet to reduce false alarms, but most centers now follow a simple policy: if human activity is detected (heard or seen), a call for service is initiated.
I also found that video verification, with a typical eight-camera system or equivalent audio detection, nearly doubled the time it took an operator to determine whether the event indicated criminal activity or that no one was present when the alarm triggered.
Should widespread adoption of video/audio verification, as we know it today, take hold in a significant way, the cost of monitoring or processing these events will absolutely go up.
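The cost implication follows directly: if per-event handling time roughly doubles, so does the operator labor cost per event. A back-of-the-envelope sketch, with every figure a hypothetical assumption rather than actual monitoring-center economics:

```python
# Back-of-the-envelope cost per operator-handled event.
# Both figures below are illustrative assumptions, not real rates.
OPERATOR_COST_PER_MINUTE = 0.75   # loaded labor cost per minute (assumption)
baseline_handling_min = 1.5       # typical non-video event (assumption)

# Per the observation above, video/audio review "almost doubled" handling time.
video_handling_min = 2 * baseline_handling_min

print(baseline_handling_min * OPERATOR_COST_PER_MINUTE)  # 1.125 per event
print(video_handling_min * OPERATOR_COST_PER_MINUTE)     # 2.25 per event
```

Whatever the real numbers are, the doubling carries straight through to the per-event cost, which is why widespread adoption would push monitoring costs up.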
It’s important to remember: using video/audio, we cannot determine intent with 100% certainty, and we cannot know what’s in the hearts and minds of the people captured during an alarm condition. With newer technology and increased engagement by subscribers, however, we have tremendous opportunities to change the percentage of false alarms that first responders experience.
This will require a significant shift in how alarms are treated and processed. It will require us to step out of our comfort zones and begin to really focus on solutions to the human side of the issue.
While training will help, the reality is that we will never be able to train or influence everyone who could cause a false alarm. We are going to have to work on systems and artificial intelligence to assist in determining the likelihood of a crime in progress. Some companies use facial recognition to inform operators whether people in the video are friend, foe or unknown.
By the same token, if we have video of persons specifically not allowed on the premises (a terminated employee, an ex-spouse), we can be confident a crime is in progress. The technology is promising: I recently reviewed a system that accurately detected weapons and threatening behavior.
Still, we need to engage subscribers more than ever. Only the subscriber knows with certainty who is and isn’t allowed in protected areas. This again will require different methods and attitudes toward interacting with subscribers in order to determine intent. If we use A/V partnered with AI, and we engage the consumer as part of the process, we will finally flip the false alarm paradigm.
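The triage described in the last few paragraphs — AI classification first, subscriber confirmation for the cases AI cannot decide — can be sketched as simple decision logic. The categories, function name and threshold behavior below are illustrative assumptions, not any vendor’s actual system:

```python
# Hypothetical dispatch-triage sketch combining AI classification with
# subscriber engagement. Categories and rules are illustrative assumptions.

def should_dispatch(face_match, weapon_detected, subscriber_confirms_ok):
    """Return True if the event should go to a first responder.

    face_match: "friend", "foe", or "unknown" (per facial recognition)
    weapon_detected: True if AI flagged a weapon or threatening behavior
    subscriber_confirms_ok: True if the subscriber cleared the activity
    """
    if weapon_detected:
        return True    # detected threat: dispatch regardless of identity
    if face_match == "foe":
        return True    # known disallowed person, e.g. terminated employee
    if face_match == "friend":
        return False   # recognized allowed person: clear the event
    # Unknown person: only the subscriber knows who is allowed,
    # so their confirmation decides the outcome.
    return not subscriber_confirms_ok

print(should_dispatch("unknown", False, True))   # False: subscriber cleared it
print(should_dispatch("foe", False, True))       # True: disallowed person present
```

The design point is that the subscriber is consulted only for the genuinely ambiguous cases, keeping their involvement light while still letting their knowledge decide the events AI cannot.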