
8.9.24 – SIW – David Borish

Integrated solutions are effective in leveraging advanced technology systems

In recent years, the increasing incidence of school shootings in the United States has spurred a quest for effective solutions. Amidst this crisis, the power of artificial intelligence (AI) has been harnessed to develop innovative approaches to enhance school safety. These approaches, which focus on weapon detection and violence prediction, leverage the capabilities of computer vision, deep learning, and machine learning.

Visual AI for Weapon Detection

Iterate.ai, a leading figure in AI applications, has been at the forefront of enhancing school safety by developing a weapon detection system. This system is a testament to the power of artificial intelligence, specifically computer vision and deep learning.

The system integrates seamlessly with existing surveillance cameras, transforming them into smart detectors capable of identifying potential threats. The core of this system is computer vision, a branch of AI that enables machines to interpret and understand visual data. In this context, computer vision processes the images captured by the surveillance cameras, turning raw visual data into a format the AI model can analyze.
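As a rough illustration, the sketch below shows the kind of frame preprocessing such a pipeline might perform before handing images to a model. It is a minimal sketch, not Iterate.ai’s actual code; the camera URL, input size, and function names are assumptions.

```python
# Illustrative only: a generic preprocessing step for surveillance frames,
# not Iterate.ai's actual pipeline. Assumes OpenCV and NumPy are installed.
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray, size: int = 640) -> np.ndarray:
    """Convert a raw BGR camera frame into a normalized array for a vision model."""
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # cameras deliver BGR; most models expect RGB
    frame = cv2.resize(frame, (size, size))          # fixed input resolution
    frame = frame.astype(np.float32) / 255.0         # scale pixel values to [0, 1]
    return np.transpose(frame, (2, 0, 1))            # HWC -> CHW layout used by most frameworks

# Example: pull frames from an existing camera feed (the URL is hypothetical)
cap = cv2.VideoCapture("rtsp://camera.local/stream")
ok, raw = cap.read()
if ok:
    model_input = preprocess_frame(raw)
```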

The analysis is performed by a deep learning model, an artificial neural network with multiple layers. These layers enable the model to learn complex patterns and extract features from the processed visual data. The ‘deep’ in deep learning refers to the number of layers in the neural network, with more layers allowing for more complex pattern recognition.
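To ground the idea, here is a minimal PyTorch sketch of a layered convolutional classifier. The real system’s architecture, layer sizes, and class list are not public; everything below is illustrative.

```python
# A minimal PyTorch sketch of a multi-layer convolutional classifier.
# The real system's architecture is not public; layer sizes and classes are illustrative.
import torch
import torch.nn as nn

class WeaponClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):        # e.g. handgun, rifle, knife, no-weapon
        super().__init__()
        self.features = nn.Sequential(                # stacked ("deep") layers extract visual features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)        # final layer maps features to classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))
```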

In Iterate.ai’s weapon detection system, the deep learning model is trained on a large dataset of weapon images. Each image in this dataset is labeled with the correct identification, providing the model with examples of what each type of weapon looks like. This is a form of supervised learning, where the model learns from labeled data.
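A hedged sketch of that supervised training step, continuing the illustrative classifier above, might look like the following; the dataset path and folder-per-class labeling are assumptions, not the company’s actual data.

```python
# Supervised learning in miniature: every image carries a label, and the model's
# predictions are pushed toward those labels. The dataset path and folder-per-class
# layout are hypothetical; WeaponClassifier comes from the sketch above.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("weapon_dataset/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = WeaponClassifier(num_classes=len(train_set.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in loader:              # one pass over the labeled examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions with ground-truth labels
    loss.backward()                        # adjust the network to reduce the error
    optimizer.step()
```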

The model is trained to recognize various types of weapons, including handguns, semi-automatics, and knives. It can even identify partially visible guns, demonstrating its ability to recognize weapons in various real-world scenarios. Upon detecting a weapon, the system sends immediate alerts to prevent potential threats.
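The alerting step can be sketched as a simple confidence threshold over the model’s output. The class names, threshold, and print-based notification below are placeholders for whatever the deployed system actually uses.

```python
# Placeholder alerting logic: if any weapon class exceeds a confidence threshold,
# raise an alert. Class names, threshold, and the print call are stand-ins.
import torch

CLASSES = ["handgun", "rifle", "knife", "no_weapon"]   # hypothetical label set
ALERT_THRESHOLD = 0.85

def check_frame(model: torch.nn.Module, frame_tensor: torch.Tensor) -> None:
    with torch.no_grad():
        probs = torch.softmax(model(frame_tensor.unsqueeze(0)), dim=1)[0]
    for name, p in zip(CLASSES, probs.tolist()):
        if name != "no_weapon" and p >= ALERT_THRESHOLD:
            print(f"ALERT: possible {name} detected (confidence {p:.2f})")
```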

Challenges in Weapon Detection Systems

For weapon detection systems like the one developed by Iterate.ai, accuracy can be significantly influenced by environmental factors. Lighting conditions, for instance, play a crucial role in the system’s ability to identify weapons correctly: poor lighting or high-contrast scenes can lead to false negatives, where a real threat goes undetected. Similarly, the angle at which a weapon is held can affect recognition. If a weapon is held at an unusual angle or is partially obscured, the system might fail to identify it as a threat.
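One common mitigation, sketched below with illustrative parameters, is to augment the training images with lighting, rotation, and cropping variation so the model encounters these harder conditions during training.

```python
# One common mitigation (illustrative parameters): augment training images with
# lighting, rotation, and cropping variation so the model sees harder conditions.
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.6, contrast=0.6),   # poor or high-contrast lighting
    transforms.RandomRotation(25),                           # weapons held at unusual angles
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),     # partial occlusion / cropping
    transforms.ToTensor(),
])
```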

Predictive AI for Violence Risk Assessment

Cincinnati Children’s Hospital Medical Center has pioneered a predictive AI system that uses machine learning to assess the risk of school violence. This innovative approach aims to predict potential violent incidents before they occur, providing an opportunity for early intervention.

Machine learning, a subset of AI, is at the heart of this system. It involves teaching a computer model to make predictions or decisions based on data.

The system is trained on historical data, which includes past incidents of violence and the characteristics of the individuals involved. This data serves as the foundation for the AI model, providing it with the information it needs to learn and make predictions.

In this case, the machine learning model uses various algorithms to analyze the historical data and identify patterns that may indicate a risk of violence.

The model’s training process involves feeding it historical data, allowing it to learn from past incidents of violence. The characteristics of the individuals involved in these incidents are also considered, giving the model a more comprehensive picture of the factors that may contribute to school violence.

Once trained, the model can analyze new data and predict the likelihood of violence. These predictions can then inform preventative measures and interventions, potentially averting violent incidents before they occur.
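For illustration only, the sketch below trains a risk classifier on synthetic tabular data with scikit-learn. The hospital’s actual features, data, and model choice are not public; the features and labels here are synthetic and exist purely to show the mechanics.

```python
# Illustrative sketch only: risk prediction on tabular historical data with scikit-learn.
# The hospital's actual features, data, and model are not public; the data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))             # stand-ins for features such as prior-incident counts
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # synthetic "violent incident occurred" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)   # learn patterns from past cases

risk = model.predict_proba(X_test[:1])[0, 1]   # estimated probability of violence for a new case
print(f"Estimated risk: {risk:.2f}")
```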

Challenges in Predictive AI Systems

AI systems require large amounts of high-quality data for training. This data needs to accurately represent the different scenarios and populations the systems will be used on. However, obtaining such data can be challenging due to privacy concerns and logistical issues. Furthermore, AI systems can reflect and even amplify existing biases in the data on which they are trained. For instance, an AI system trained on data that includes biased policing practices could unfairly target certain groups of students. Additionally, AI systems can make errors, leading to false positives (flagging a threat where none exists) and false negatives (failing to identify a real threat). These errors can have serious consequences.
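Continuing the synthetic sketch above, a confusion matrix makes the two error types explicit:

```python
# Continuing the synthetic sketch above: a confusion matrix separates false positives
# (flagging a student who poses no threat) from false negatives (missing a real one).
from sklearn.metrics import confusion_matrix

preds = model.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
print(f"false positives: {fp}, false negatives: {fn}")
```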

Contextual Performance and Infrastructure Dependence

The performance of AI systems can vary based on the context they are used in. For example, a predictive AI system trained on data from urban schools may not accurately predict violence in rural schools. This highlights the importance of using diverse training data representing the various contexts in which the system will be used. Moreover, AI systems for school safety often rely on existing infrastructure, such as surveillance cameras. If this infrastructure is outdated or not maintained, the effectiveness of the AI system can be compromised.
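A simple safeguard, sketched below against the same synthetic example, is to evaluate the trained model separately on each deployment context rather than reporting a single overall score; the urban/rural split here is hypothetical.

```python
# Sketch: evaluate the same trained model separately per deployment context.
# The urban/rural split of the synthetic test set is hypothetical.
from sklearn.metrics import accuracy_score

subsets = {"urban": (X_test[:60], y_test[:60]), "rural": (X_test[60:], y_test[60:])}
for name, (X_sub, y_sub) in subsets.items():
    print(f"{name}: accuracy {accuracy_score(y_sub, model.predict(X_sub)):.2f}")
```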

Continuous Improvement

As we continue to develop these technologies, it is crucial to address these challenges and improve the systems’ accuracy and reliability. This involves continuous testing and refinement, ensuring the systems are not only effective but also fair and reliable. By addressing these challenges, we can harness the power of AI to enhance school safety while also considering the ethical, legal, and practical implications.


About the Author

David Borish | Co-founder and Chief AI Strategist at PLSAR, Inc.

David Borish is currently the Co-founder and Chief AI Strategist at PLSAR, Inc. During his career, he has navigated the intricacies of entrepreneurship and artificial intelligence, shaping a narrative of innovation and business growth. He formulated a comprehensive AI strategy and oversaw the development and integration of the company’s machine learning model, which reduces inefficiencies in the carbon reduction platform.

Over the past decade, David has had a lasting impact on multiple industries. He led the creation of the video production industry’s first high-speed 360-degree camera rig and developed a machine-learning model capable of handling over 8,000 content frames per second. Additionally, David pioneered and patented a voice search technology named PRIMO AI. This groundbreaking technology recommends top-performing Speech-to-Text (STT) and Natural Language Understanding (NLU) services tailored to specific datasets and regions.

As an AI integration expert, David seamlessly incorporates cutting-edge technologies into creative and operational workflows, offering a distinctive blend of strategic foresight, technical acumen, and entrepreneurial spirit.