How Northrop Grumman is Approaching and Enhancing Video Intelligence

A conversation with Mahasa Zahirnia, Chief Engineer, AI Lead, Northrop Grumman


Video content analysis and facial recognition are emerging as two fast-growing subsets of applied AI. At Northrop Grumman, these tools are being used to develop a wide variety of applications, from surveillance systems to COVID-19 detection, prevention, and tracking solutions.

To learn how Northrop Grumman is embracing video intelligence to solve real-world problems, we sat down with Mahasa Zahirnia, Chief Engineer and AI Lead at Northrop Grumman. Mahasa will also be presenting "Applied AI: Image Optimization For Problem Solving Insights" at the Applied AI event taking place June 29-30.

 

The Art of Video Streaming Analytics

“We’re able to stream video images of everyday environments - things like parks, shopping malls, and high-traffic areas - and dissect those images in a multi-layer neural network, using feature selection to catalog and categorize them accordingly,” Mahasa tells us, adding, “If you look at how we, as humans, process an image, we executed that via the code. Look at how neurons develop thought processes: they pick up the data, pick up the features of an image, store them, and execute them at the lower state diagram. That was how we were able to process thousands and thousands of images in a couple of minutes. That was how we were able to capture that. It's called neural networks. It's called feature selection. It's called image optimization, object detection.”
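The pipeline Mahasa describes - feeding image data through a multi-layer network, extracting features, and categorizing the result - can be illustrated with a minimal sketch. The toy frame, kernel bank, and category labels below are illustrative assumptions, not Northrop Grumman's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image, summing products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def extract_features(image, kernels):
    """One convolutional layer: each kernel yields one feature map,
    pooled down to a single scalar by global averaging."""
    return np.array([relu(conv2d(image, k)).mean() for k in kernels])

def categorize(features, weights, labels):
    """Linear read-out layer: pick the category with the highest score."""
    scores = weights @ features
    return labels[int(np.argmax(scores))]

# Toy 8x8 "frame" and a small bank of edge-detecting kernels.
frame = rng.random((8, 8))
kernels = [np.array([[1, -1], [1, -1]]),   # vertical-edge detector
           np.array([[1, 1], [-1, -1]])]   # horizontal-edge detector

feats = extract_features(frame, kernels)
category = categorize(feats, rng.random((3, 2)), ["park", "mall", "roadway"])
```

A production system would use a trained deep network rather than random read-out weights, but the stages are the same: convolutional feature extraction followed by classification.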



Combating Bias

It’s well documented that facial recognition technology has a major bias problem. For example, an MIT study of three commercial gender-recognition systems found they had error rates of up to 34% for dark-skinned women. In other words, they were up to 49 times more inaccurate for dark-skinned women than for white men. A separate NIST study of 189 facial recognition algorithms found that these technologies falsely identified Black and Asian faces 10 to 100 times more often than white faces.
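The disparities these studies report come down to measuring error rates separately for each demographic group and comparing them. A minimal sketch of that measurement (the groups, labels, and predictions below are synthetic, not data from either study):

```python
from collections import defaultdict

def per_group_error_rates(groups, y_true, y_pred):
    """Fraction of misclassified samples within each group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for g, t, p in zip(groups, y_true, y_pred):
        totals[g] += 1
        errors[g] += int(t != p)
    return {g: errors[g] / totals[g] for g in totals}

# Synthetic illustration: group B is misclassified far more often.
groups = ["A"] * 4 + ["B"] * 4
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]

rates = per_group_error_rates(groups, y_true, y_pred)
print(rates)  # {'A': 0.0, 'B': 0.75}
```

Auditing a model this way, rather than looking only at aggregate accuracy, is what surfaces the kind of gap the MIT and NIST studies documented.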

When it comes to combating bias in AI systems at Northrop Grumman, Mahasa explains, “We know bias... As engineers, we already have biases. As developers and designers, we already know that we look for certain features, but as intelligent engineers, we also know that we have our own biases, so we have to correct that constantly. If I only look at one specific feature, then I am not designing correctly. I have to look at a spectrum of features. We created new, improved models that captured thousands and thousands of images and faces, and we were able to model and train our agent to look at all those features. If I don't have the information as an engineer to know what kind of features I should look for, because I have a bias, then I have to bring in models that educate me and educate my agent to do so. Right? Yes, we have internal biases, but we also have people that resolve those problems for us, so we have to be smart engineers to build a smart agent.”
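One concrete way to let the data "educate the agent," as Mahasa puts it, is to reweight training samples so under-represented groups carry equal aggregate influence on the loss. The inverse-frequency weighting below is a common general technique and a hedged sketch with synthetic data, not Northrop Grumman's stated method:

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each sample by the inverse of its group's frequency,
    so every group's weights sum to the same total (n / n_groups)."""
    counts = Counter(groups)
    n = len(groups)
    n_groups = len(counts)
    return [n / (n_groups * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]  # group B is under-represented
weights = inverse_frequency_weights(groups)
# Each group now contributes a total weight of 2.0 to the loss.
```

Passed as per-sample weights to a training loss, these values keep the majority group from dominating the features the model learns.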

To learn more about how Northrop Grumman is leveraging Image Optimization For Problem Solving Insights, join us at Applied AI LIVE!

 

 

