Investing in Lakera to help protect GenAI apps from malicious prompts

Matt Carbonara

Head of Enterprise Tech Investing, Citi Ventures

Avi Arnon

Senior Vice President, Citi Ventures

Blaze O’Byrne

Senior Vice President, Citi Ventures

Nick Sands

Senior Vice President, Citi Ventures

Logo

As enterprise adoption of large language models (LLMs) and GenAI applications ramps up — a May 2024 Altman Solon survey reported that 67% of large enterprise respondents are currently using or implementing GenAI — so too do concerns about how best to secure these tools.

Since LLM-based applications present fundamentally different user interfaces, application behaviors and security vulnerabilities from other applications, they also require new approaches to protection that traditional cybersecurity tools are ill-suited to address. And with the GenAI threat landscape just beginning to take shape, questions remain about which elements of the complex and fast-growing technology are most important to secure.

Prompt defense is the biggest enterprise GenAI security need

Given the current state of enterprise GenAI adoption, we at Citi Ventures believe that the most immediate security concern is prompt defense: protecting LLM-based apps from malicious prompts that aim to reveal sensitive data or cause the app to otherwise behave inappropriately.

At this early stage, most enterprises are embedding third-party LLMs like the ones that power OpenAI’s ChatGPT into custom-built applications using techniques like retrieval-augmented generation (RAG) to connect them to proprietary data. This leaves prompts as the main uncontrolled variable for bad actors to exploit — creating a new opportunity in the cybersecurity market for a solution that protects LLM-based apps at the prompt layer. That opportunity is set to grow exponentially as enterprises begin to adopt autonomous AI agents, which will be able to prompt one another at superhuman speeds without human oversight. As the number of prompts increases, so too will the risk of data exposure — and the need for a broker to sit between the agents and ensure secure prompts.

That's why we're so excited to invest in Lakera, the leading solution for securing AI applications at run-time.

Why we're impressed with Lakera

Based in San Francisco and Zurich, Lakera is the world’s leading real-time GenAI security company. Its flagship product, Lakera Guard, uses proprietary machine learning techniques to screen prompts before they reach the LLM, denying malicious prompts and keeping GenAI applications secure.

While Lakera Guard is ultra low-latency, fast and easy to implement, what truly differentiates it from other solutions in the AI application firewall segment of the LLM security market is the proprietary data on which it is trained. Lakera collects its training data from several sources, including:

  1. Novel attack vectors identified by Lakera’s research and development team
  2. Analysis of publicly available attack information, correlated with proprietary insights derived from the Lakera platform
  3. Its GenAI education game, Gandalf. With over one million players, Gandalf serves as the world's largest AI red team, generating real-time AI threat data that grows by tens of thousands of unique new attacks every day.

This early, substantial data moat has helped Lakera train its models to achieve over 97.6% true positive and 0.16% false positive rates for malicious prompt identification (according to user testing) — making Lakera the undisputed leader in the market.

New approaches to security require a new type of security team

As we noted above, securing GenAI apps requires stepping outside of traditional cybersecurity frameworks; thus, we believe that the best LLM security solutions are developed by people who are, first and foremost, profound experts in AI.

This is certainly the case for Lakera, which boasts an impressive roster of AI experts on its founding team. CEO David Haber has nine years of experience in AI, having previously worked at autonomous flight company Daedalean AI — where he rose through ranks to become Head of AI in under a year. Before that, he built AI across the healthcare and finance sectors. CTO Dr. Matthias Kraft worked with David at Daedalean AI, where he was Head of Visual Positioning, and CSO Dr. Mateo Rojas Carulla brings substantial industry experience to the table, having previously worked at Credit Suisse, Google and Facebook in various roles across AI.

Given Lakera's exceptional GenAI security platform and top-notch team — not to mention the growing tailwinds for the AI application firewall market — we’re thrilled to announce our investment in Lakera’s Series A round. We join Atomico, who led the round, existing investors redalpine and Fly Ventures, as well as fellow new investor Dropbox Ventures. Our deepest congratulations to David, Matthias, Mateo and the Lakera team! We look forward to helping them make enterprise-grade LLM applications as secure as possible in a GenAI-driven world.

For more information, email Matt Carbonara at matt.carbonara@citi.com, Avi Arnon at avi.arnon@citi.com, Blaze O’Byrne at blaze.obyrne@citi.com or Nick Sands at nick.sands@citi.com.

To see Citi Ventures’ full portfolio of companies, click here.