Adding AI To Cyber Products
Four Emerging AI Use Cases in Cybersecurity Startups
At Gula Tech Adventures, we’ve had the privilege of investing in over 35 cybersecurity startups—and increasingly, artificial intelligence is at the center of their innovation. As AI technologies continue to evolve, we’re seeing four major categories where AI is transforming cybersecurity products, operations, and risk management. Here’s a snapshot of the trends shaping the future of AI in cyber:
1. Enhancing Existing Cybersecurity Products with AI
Many startups are integrating AI into traditional cybersecurity tools to make them more effective and user-friendly. AI helps users unlock more value from complex products by simplifying workflows and uncovering features they might not otherwise use.
Take our portfolio company Halcyon, for example. They use AI to detect and stop ransomware with far greater accuracy than traditional endpoint detection tools. Automox added ChatGPT to help users write scripts directly in their patch management interface. Polarity uses AI to help SOC analysts automate threat searches across various platforms. And Conceal developed an AI-powered browser tool called Sherpa that warns users before they click on malicious links. These are all examples of AI making existing cybersecurity tools more accessible and more powerful.
2. AI-Powered Access Control and Rights Management
As enterprises begin to adopt large language models (LLMs) across departments, traditional access control systems fall short. AI systems don’t operate like users—they interact with all apps and datasets simultaneously, like an octopus with tentacles in every system.
This shift is spawning a new category of tools focused on enterprise rights management for AI. The challenge is enforcing who can ask what of an LLM. For example, can Larry from accounting only ask financial questions? Defining and enforcing that is far more nuanced than role-based access in a traditional database. While Gula Tech hasn’t invested in this area yet, we believe it’s a space that will rapidly evolve as LLMs become standard in enterprise environments.
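To make the idea concrete, here is a minimal sketch of what prompt-level rights enforcement could look like. Everything in it is illustrative: the role names, topic labels, and the naive keyword classifier are assumptions for the example, not any vendor's actual API. A production system would classify intent with a model rather than keywords.

```python
# Toy sketch of role-scoped gatekeeping for LLM prompts.
# Roles, topics, and keywords below are hypothetical examples.

ROLE_TOPICS = {
    "accounting": {"finance"},
    "security":   {"finance", "infrastructure"},
}

TOPIC_KEYWORDS = {
    "finance":        ["invoice", "budget", "revenue", "payroll"],
    "infrastructure": ["server", "firewall", "patch", "vpn"],
}

def classify_topics(prompt: str) -> set:
    """Naive keyword-based topic tagger (a real system would use a classifier model)."""
    text = prompt.lower()
    return {topic for topic, words in TOPIC_KEYWORDS.items()
            if any(w in text for w in words)}

def is_allowed(role: str, prompt: str) -> bool:
    """Forward the prompt to the LLM only if every detected topic is permitted
    for the caller's role; unrecognized prompts are denied by default."""
    topics = classify_topics(prompt)
    allowed = ROLE_TOPICS.get(role, set())
    return bool(topics) and topics <= allowed
```

So "Larry from accounting" can ask about the Q3 budget but gets blocked on firewall configuration. The deny-by-default choice for unrecognized prompts reflects the core difficulty: unlike a database role grant, the policy has to be evaluated against free-form natural language on every request.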
3. Provable and Auditable AI
As AI becomes a core part of business processes, companies need to audit AI systems just like they audit software and compliance frameworks today. We're seeing tools emerge to detect AI model drift and ensure predictions don’t degrade over time.
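One common way such tools quantify drift is the Population Stability Index (PSI), which compares the distribution of a model's scores today against a baseline window. The sketch below is a minimal self-contained implementation, not taken from any particular vendor; the bin count and the rule-of-thumb threshold of roughly 0.25 for "significant drift" are conventional assumptions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (expected)
    and a recent sample (actual) of model scores.
    Values above roughly 0.25 are conventionally read as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against identical constant samples

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp the max value into the last bin
            counts[i] += 1
        # Smooth so empty bins don't produce log(0)
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Run periodically against a frozen baseline, a check like this turns "predictions shouldn't degrade over time" into an auditable, alertable number.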
However, auditing commercial LLMs such as OpenAI's GPT models or Anthropic's Claude is still a black-box exercise. The best path forward for enterprises is to audit homegrown models using internal training data and known governance standards. While we haven’t directly invested in a company doing LLM audits yet, we are indirectly backing Arthur AI, a leader in this emerging space.
4. AI Operations and Data Provenance
AIOps is a growing field concerned with the secure and reliable operation of AI models. This includes data integrity, training process security, and runtime monitoring. One of our portfolio companies, Shard Security, focuses on controlling access to massive datasets used in AI training—especially in scenarios where encryption alone isn’t enough.
In this space, data provenance is critical. You must know where your training data came from, how it’s been processed, and whether it introduces vulnerabilities or bias. As companies train their own models, AI security will become as important as traditional application security.
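At its simplest, provenance tracking means tying each training dataset to a content hash and an origin record, then re-verifying the hash before training. The sketch below shows that idea using standard SHA-256 hashing; the manifest fields and function names are illustrative assumptions, not any portfolio company's implementation.

```python
import hashlib
import datetime

def record_provenance(name: str, data: bytes, source: str) -> dict:
    """Build a manifest entry tying a dataset to its content hash and origin."""
    return {
        "name": name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "source": source,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

def verify(entry: dict, data: bytes) -> bool:
    """Re-hash the data at training time and compare against the manifest,
    so silently modified or substituted training data is caught before use."""
    return hashlib.sha256(data).hexdigest() == entry["sha256"]
```

A real pipeline would also record how the data was transformed between collection and training, but even this minimal hash-and-verify step answers the first provenance question: is the data you are training on the data you think it is?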
Have an AI cybersecurity startup idea? Reach out to us at Gula Tech Adventures or connect on LinkedIn. The future of AI in cyber is just getting started.