DeepKeep's Blog

Multimodal AI Is Smarter. Unfortunately, So Are the Attacks.

AI has gotten good at understanding not just what we type, but what we show. This shift has made AI more powerful. Unfortunately, it has also made it more vulnerable.

You Can’t “Detect” a Jailbreak. Here’s What to Do Instead

Everyone is looking for an efficient way to detect and block jailbreaks, but here’s the uncomfortable truth: you can’t reliably detect every jailbreak, and trying to chase them all is a losing game.

Two Smart AI Models. Zero Common Sense.

AI is no longer a one-trick tool. It writes reports, analyzes photos, answers complex questions, and even kicks off real-world actions. Most of this power comes from two areas working side by side: Generative AI and Computer Vision.

Top Three Scenarios for PII Leakage in GenAI

Comprehensive PII detection combines data scanning, penetration testing, and a real-time AI firewall.

DeepKeep Launches GenAI Risk Assessment Module

Evaluating model resilience is paramount, particularly during the inference phase, as it provides insight into the model's ability to handle various scenarios effectively.

DeepKeep Comes out of Stealth to Safeguard GenAI with AI-Native Security and Trustworthiness

DeepKeep offers AI-Native security and trustworthiness that secure AI throughout its entire lifecycle.

Meta’s LlamaV2 7B LLM Is Susceptible to DoS and Data Leakage

DeepKeep's evaluation of LlamaV2 7B's security and trustworthiness found strengths in task performance and ethical commitment, with room for improvement in handling complex transformations, addressing bias, and defending against sophisticated threats.
DeepKeep safeguards ML pipelines, promoting unbiased, error-free, and secure AI solutions.