BrainBlog for Crogl by Jason Bloomberg
Security Operations Center (SOC) automation has been a wish-list item for SOC professionals for years. However, given the inherently dynamic and unpredictable nature of threat hunting and mitigation, such automation has largely been out of reach.
The rise of generative AI and its underlying large language models (LLMs) has changed the game. In theory, adding LLMs to the SOC operations mix should have brought automation into the realm of practicality.
The LLM reality, however, has fallen short of its promise. LLMs have proven adept at ingesting massive quantities of data and returning understandable, natural language results. What has been missing, however, is an understanding of how those data relate to one another.
The missing piece: a knowledge graph
When it comes to available data, the SOC has an embarrassment of riches. Enterprises typically have well over a dozen tools that spit out never-ending streams of alerts and other potentially relevant data.
Not only do these quantities of data lead to alert fatigue, but they also lack the context necessary for SOC analysts to understand which data are important.
To address these challenges, Crogl complements LLMs (as well as smaller models) with a knowledge graph that maintains the context of the streams of security data flooding the SOC – leading to what Crogl calls an autonomous enterprise knowledge engine.
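Conceptually, a security knowledge graph links entities such as users, IP addresses, locations, and alerts through typed relationships, so an investigation can traverse context that a flat alert stream lacks. Here is a minimal adjacency-list sketch of the idea; the entity and relation names are illustrative, not Crogl's actual data model:

```python
from collections import defaultdict

# Adjacency-list knowledge graph: (subject) -[relation]-> (object).
# All entities and relations below are hypothetical examples.
edges = defaultdict(list)

def add_fact(subject: str, relation: str, obj: str) -> None:
    edges[subject].append((relation, obj))

add_fact("alert:4711", "concerns", "user:sam.smith")
add_fact("user:sam.smith", "logged_in_from", "ip:198.51.100.7")
add_fact("ip:198.51.100.7", "geolocated_in", "city:new_york")
add_fact("user:sam.smith", "uses", "service:corp-vpn")

def neighbors(entity: str, relation: str) -> list:
    """Entities reachable from `entity` via `relation`."""
    return [obj for rel, obj in edges[entity] if rel == relation]

# An investigation can pivot from an alert to its surrounding context:
user = neighbors("alert:4711", "concerns")[0]
context = neighbors(user, "uses")
```

The payoff is the traversal: starting from a single alert, the graph yields related facts (here, that the user routinely uses a corporate VPN) that a stream of isolated alerts could never supply.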
Crogl leverages this autonomous enterprise knowledge engine to normalize alerts and other security-related data, facilitating the analysis and remediation of security threats and other issues by leveraging the enterprise’s existing security tooling.
Unlike other security tools with LLMs bolted on, Crogl delivers a compound AI system that includes LLMs and smaller models, as well as agentic AI orchestration that leverages results from the knowledge engine to execute investigations and recommend when to escalate issues, rather than requiring human-generated prompts.
Every alert that comes into the platform (whether from an internal tool or an external threat advisory) enters the knowledge engine, where Crogl normalizes and transforms it into an input action for the appropriate security tool, following each tool's data input schema requirements.
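Normalization of this kind typically maps each tool's idiosyncratic field names onto one common shape. The sketch below illustrates the general pattern only; the source names, field mappings, and normalized schema are all hypothetical, not Crogl's implementation:

```python
# Two raw alerts from different (hypothetical) tools with different schemas
RAW_ALERTS = [
    {"src": "edr", "hostname": "wks-042", "sev": 3, "desc": "Suspicious process"},
    {"src": "siem", "host": "wks-042", "severity": "high", "message": "Brute-force attempt"},
]

# Per-source mapping from normalized field name -> raw field name
FIELD_MAPS = {
    "edr":  {"host": "hostname", "severity": "sev", "summary": "desc"},
    "siem": {"host": "host", "severity": "severity", "summary": "message"},
}

def normalize(alert: dict) -> dict:
    """Rewrite a raw alert into the common schema for its source tool."""
    mapping = FIELD_MAPS[alert["src"]]
    return {"source": alert["src"],
            **{norm: alert[raw] for norm, raw in mapping.items()}}

normalized = [normalize(a) for a in RAW_ALERTS]
```

Once both alerts share one schema, downstream logic can correlate them (here, both concern host `wks-042`) without caring which tool emitted them.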
Why context is so important
Context is especially important in SecOps, because the bad actors are always trying to be sneaky.
Here’s an example: let’s say one of your security tools sees that the user Sam Smith logged into the corporate network from an IP address in New York City at 2:00 PM. The same tool (or perhaps a different one) registers Sam Smith logging in from Prague at 2:04 PM.
A human SOC analyst who notices these two alerts might flag them as suspicious, but that conclusion is missing sufficient context to automate a response.
Does Sam Smith often use a VPN? Are they accessing unexpected assets during one of the two sessions? What else do we know about Sam or the two locations that might indicate that these interactions are benign or malicious?
Without such context, automating an effective response is impossible. However, Crogl’s knowledge engine will automatically answer these questions, thus supporting whatever automated response is warranted in this situation.
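The first half of that reasoning, flagging the two logins as suspicious at all, comes down to a simple "impossible travel" calculation: the implied speed between the two locations exceeds anything a human could manage. A back-of-the-envelope sketch (coordinates, threshold, and names are illustrative, and real tools would then apply the contextual checks above rather than act on speed alone):

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    lat: float
    lon: float
    ts: datetime

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6,371 km

def implied_speed_kmh(a: Login, b: Login) -> float:
    """Speed the user would have needed to travel between the two logins."""
    hours = abs((b.ts - a.ts).total_seconds()) / 3600
    return haversine_km(a, b) / hours if hours else float("inf")

# Sam Smith: New York at 2:00 PM, Prague at 2:04 PM
nyc = Login("sam.smith", 40.71, -74.01, datetime(2025, 1, 6, 14, 0))
prague = Login("sam.smith", 50.09, 14.42, datetime(2025, 1, 6, 14, 4))

MAX_PLAUSIBLE_KMH = 1000  # roughly airliner cruising speed
suspicious = implied_speed_kmh(nyc, prague) > MAX_PLAUSIBLE_KMH
```

New York to Prague is roughly 6,500 km, so four minutes between logins implies a speed far beyond any aircraft. That is enough to raise the flag, but as the questions above show, it is the knowledge engine's context (VPN usage, accessed assets, history) that determines what the automated response should actually be.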
The Intellyx take
There’s no question that AI – generative AI and agentic AI in particular – is transforming the role of the SOC analyst.
However, AI alone is insufficient for improving the automation so critical for making SecOps more efficient.
The missing element is a knowledge engine that leverages both knowledge graph technology and generative AI to provide the context necessary to implement meaningful, useful automations in the SOC.
Copyright © Intellyx BV. Crogl is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used to write this article. Image credit: Craiyon.


