Securing visual data models for object detection and facial recognition
Images encapsulate a wealth of visual cues: textures, colors, shapes, and contextual elements.
Object detection models share common underlying algorithms.
This car's doors don't open when a cyclist is passing by.
DeepKeep analyzes and secures the integrity of datasets used to detect people and street signs, and to automate the analysis of insurance-claim accident images when safer driving isn't enough.
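One basic form of dataset-integrity checking can be sketched in a few lines. The example below is illustrative only and is not DeepKeep's pipeline: the folder layout (dataset/images, dataset/labels.json) and the expected class set are assumptions. It flags images whose bytes have changed since a trusted manifest was recorded, and labels outside the expected class set.

```python
# Illustrative only: a simple integrity check for an object-detection dataset.
# The folder layout and class set are hypothetical, not DeepKeep's pipeline.
import hashlib
import json
from pathlib import Path

EXPECTED_CLASSES = {"person", "stop_sign", "speed_limit"}   # assumed label set

def build_manifest(image_dir: Path) -> dict:
    """Record a SHA-256 hash per image at a trusted point in time."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(image_dir.glob("*.jpg"))}

def verify(image_dir: Path, manifest: dict, labels: dict) -> list:
    """Return human-readable findings: missing or modified images, unknown labels."""
    findings = []
    for name, recorded in manifest.items():
        path = image_dir / name
        if not path.exists():
            findings.append(f"missing image: {name}")
        elif hashlib.sha256(path.read_bytes()).hexdigest() != recorded:
            findings.append(f"modified since manifest: {name}")
    for name, classes in labels.items():
        unknown = set(classes) - EXPECTED_CLASSES
        if unknown:
            findings.append(f"unexpected labels in {name}: {sorted(unknown)}")
    return findings

if __name__ == "__main__":
    image_dir = Path("dataset/images")                      # hypothetical layout
    manifest = build_manifest(image_dir)
    labels = json.loads(Path("dataset/labels.json").read_text())
    for finding in verify(image_dir, manifest, labels):
        print(finding)
```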
Safeguard ML pipelines for unbiased, error-free, and secure AI solutions
Adversarial Physical Attacks
Adversarial physical attacks can allow weapons to pass through airport screening unidentified. Object detection AI models are increasingly replacing human inspection to screen and interpret X-ray images of baggage at boarding gates. Automating the detection of prohibited items in screened luggage with machine learning promises faster, more accurate security checks for departing passengers, but it also gives attackers a model to target.
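The risk can be reproduced in the digital domain with a few lines of code; a physical patch attack works on the same principle but is printed and placed on or inside the bag. The sketch below is illustrative only: torchvision's general-purpose Faster R-CNN stands in for an X-ray screening model, the perturbation budget is an assumption, and a single FGSM-style gradient step pushes detection confidences down.

```python
# Illustrative only: an FGSM-style perturbation that suppresses detector confidences.
# A general-purpose Faster R-CNN stands in for an X-ray screening model (assumption).
import torch
import torchvision

# box_score_thresh=0.0 keeps all candidate detections so the loss is never empty.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT", box_score_thresh=0.0)
model.eval()

image = torch.rand(3, 480, 640)            # placeholder for a screened-baggage image
image.requires_grad_(True)

# In eval mode the model returns per-image dicts with 'boxes', 'labels', 'scores'.
scores = model([image])[0]["scores"]
loss = scores.sum()                         # push every detection confidence down
loss.backward()

epsilon = 2.0 / 255                         # perturbation budget (assumed)
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    clean = (model([image.detach()])[0]["scores"] > 0.5).sum().item()
    attacked = (model([adversarial])[0]["scores"] > 0.5).sum().item()
print(f"detections above 0.5 confidence: clean={clean}, adversarial={attacked}")
```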
Biometric Facial Recognition
Facial recognition platforms are increasingly replacing human authentication, but visual adversarial noise – embedded in eyeglasses, jewelry, makeup, or travel documents – amounts to a physical attack that enables evasion. A compromised airport system hinders the ability of immigration authorities to safeguard border crossings, potentially enabling crime, terrorism, and illegal immigration, as well as causing delays or denial of entry for legitimate passengers.
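The evasion mechanism can be sketched against a generic embedding-based matcher. The example below is illustrative only: a torchvision ResNet-18 stands in for a real face-embedding network, the match threshold and perturbation budget are assumptions, and a single gradient step lowers the probe's similarity to the enrolled template (a "dodging" attack).

```python
# Illustrative only: evasion against an embedding-based matcher.
# A torchvision ResNet-18 stands in for a real face-embedding network (assumption).
import torch
import torch.nn.functional as F
import torchvision

backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()          # use penultimate features as "embeddings"
backbone.eval()

def embed(x):
    return F.normalize(backbone(x), dim=1)

enrolled = torch.rand(1, 3, 224, 224)      # placeholder for the enrolled face photo
probe = enrolled.clone().detach().requires_grad_(True)

with torch.no_grad():
    template = embed(enrolled)

# Gradient step that *reduces* similarity to the enrolled template (dodging attack).
similarity = F.cosine_similarity(embed(probe), template).sum()
similarity.backward()
epsilon = 4.0 / 255                        # perturbation budget (assumed)
adversarial = (probe - epsilon * probe.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    before = F.cosine_similarity(embed(probe.detach()), template).item()
    after = F.cosine_similarity(embed(adversarial), template).item()
print(f"match score: clean={before:.3f}, adversarial={after:.3f} (threshold, e.g., 0.6)")
```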