AI Isn't New to Cybersecurity, But Some of Its Use Cases Are

Many state and local agencies already use endpoint detection tools that reduce mean time to detect. These tools have been around for a while, and they're undoubtedly getting better with time.

Increasingly, however, endpoint detection and response (EDR) vendors are building large language models into their tools to reduce mean time to respond. Security analysts can query these LLMs much as they would ChatGPT to make sense of what they're seeing.

For example, you can ask for more information about a particular threat or a specific MITRE ATT&CK ID. Sometimes it's as simple as right-clicking and requesting more information. This makes it possible to gather threat intelligence in a conversational way, and in real time, which can vastly improve the speed and quality of a response.
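To make the idea concrete, here is a minimal sketch of what such a conversational enrichment query might look like in code. It assumes an OpenAI-compatible chat API; the base URL, API key and model name are hypothetical placeholders, not any specific vendor's endpoint.

```python
# Minimal sketch: ask an LLM to explain a MITRE ATT&CK technique ID.
# Assumes an OpenAI-compatible chat API; the base_url and model name
# below are hypothetical placeholders, not a specific vendor's API.
from openai import OpenAI

client = OpenAI(
    base_url="https://edr.example.gov/v1",  # hypothetical vendor endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="security-assistant",  # hypothetical model name
    messages=[
        {"role": "system", "content": "You are a SOC analyst assistant."},
        {"role": "user", "content": "Explain MITRE ATT&CK technique T1059 "
                                    "and common detections for it."},
    ],
)
print(response.choices[0].message.content)
```

In a commercial EDR console, the same interaction is typically wrapped in the interface itself, such as the right-click enrichment described above, rather than exposed as raw API calls.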

Several major EDR tools now include these AI features.

These tools integrate AI to help security teams make sense of what they see inside the environment by enriching or simplifying information. This is highly beneficial to state and local agencies, especially those pressed for time and resources.

Stitching events together is another powerful use of AI in a security operations center. One breach or cyber incident can generate thousands or even tens of thousands of additional alerts. In the past, these alerts might have been extremely difficult to tie together. AI has the pattern recognition capability to understand how they relate to a single underlying event, which makes it much easier to address the crux of the problem rather than chase the echoes that come out of it.
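As a toy illustration of the underlying idea, the sketch below merges alerts that share any indicator (a host, a file hash, an IP) into a single incident using union-find. Real EDR correlation engines are far more sophisticated; the field names and data here are invented for the example.

```python
# Toy sketch of alert stitching: alerts that share any indicator
# (host, hash, IP) are merged into one incident via union-find.
# Field names and data are invented for illustration.
from collections import defaultdict

alerts = [
    {"id": 1, "host": "ws-042", "hash": "abc123"},
    {"id": 2, "host": "ws-042", "ip": "10.0.0.7"},
    {"id": 3, "host": "srv-01", "ip": "10.0.0.7"},
    {"id": 4, "host": "ws-099", "hash": "deadbeef"},
]

parent = {a["id"]: a["id"] for a in alerts}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Link every pair of alerts that share an indicator value.
by_indicator = defaultdict(list)
for a in alerts:
    for key in ("host", "hash", "ip"):
        if key in a:
            by_indicator[(key, a[key])].append(a["id"])
for ids in by_indicator.values():
    for other in ids[1:]:
        union(ids[0], other)

incidents = defaultdict(list)
for a in alerts:
    incidents[find(a["id"])].append(a["id"])
print(list(incidents.values()))  # [[1, 2, 3], [4]]
```

Alerts 1, 2 and 3 collapse into one incident because they chain together through a shared host and a shared IP, even though alerts 1 and 3 have no indicator in common. That transitive linking is exactly what is hard to do by hand at scale.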

WATCH: Virginia's CISO talks about how AI is affecting the state's cybersecurity efforts.

In some cases, generative AI's value is much simpler. EDR tools use AI chatbots to answer basic questions, such as how to access a certain feature of the tool. Cybersecurity experts moving from one EDR tool to another, or recently hired staff, can get up to speed more quickly.

Leveraging your EDR solution's existing AI integration -- or switching to an EDR that provides one -- is the most direct way to harness the power of AI for detection and response.

But larger agencies, such as those at the state level or in a very large city, can build a retrieval-augmented generation (RAG) solution. RAG pairs an LLM with a repository of data curated specifically for cybersecurity: when someone asks a question, the system retrieves the most relevant material from that repository and hands it to the model, so anyone using the LLM gets answers grounded only in the data that has been uploaded for this purpose.
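A highly simplified sketch of the retrieval step is shown below. TF-IDF similarity stands in for the embedding model a production RAG system would use, and the corpus, query and prompt format are invented for illustration.

```python
# Simplified RAG retrieval sketch: find the most relevant curated
# documents for a query, then place them into an LLM prompt.
# TF-IDF stands in for a real embedding model; the corpus is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Incident response runbook for ransomware on Windows endpoints.",
    "Approved firewall change procedure for agency data centers.",
    "Phishing triage checklist for the security operations center.",
]

query = "What are the first steps when ransomware is detected?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform([query])

# Rank the curated documents by similarity to the query; keep the top 2.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_docs = [corpus[i] for i in scores.argsort()[::-1][:2]]

# The retrieved context is prepended so the LLM answers only from
# the agency's own curated data.
prompt = ("Answer using only this context:\n" + "\n".join(top_docs)
          + "\n\nQuestion: " + query)
print(prompt)
```

The design choice that matters here is the constraint in the prompt: the model is instructed to answer from the retrieved context rather than from its general training data, which is what keeps answers tied to the agency's own material.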

With this custom solution, security personnel can ask questions specific to their own security environments and get very direct answers. This is ideal for larger state and local agencies that can fund the endeavor: a bespoke, highly secure LLM that caters to the idiosyncrasies of a particular environment.

EXPLORE: Agencies must consider security measures when embracing AI.

Organizations should avoid letting cybersecurity teams and the greater workforce use publicly accessible LLMs such as ChatGPT carte blanche. These tools are readily available and adept at analyzing and summarizing information, which makes unsanctioned, ungoverned use enticing. A clear, well-defined AI policy can prevent teams from sharing proprietary data with public LLMs.
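One way such a policy can be backed by tooling is a lightweight filter that redacts obviously sensitive values before a prompt ever leaves the network. The patterns below, covering internal IPs and an invented internal domain, are purely illustrative; a real deployment would rely on a proper DLP product and a far more complete ruleset.

```python
# Illustrative guardrail: redact internal indicators from a prompt
# before it is sent to a public LLM. The regexes below are examples
# only; real deployments would use a proper DLP ruleset.
import re

REDACTIONS = [
    (re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"), "[INTERNAL-IP]"),
    (re.compile(r"\b[\w-]+\.agency\.internal\b"), "[INTERNAL-HOST]"),
]

def scrub(prompt: str) -> str:
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Why is db01.agency.internal beaconing to 10.0.4.22?"))
# -> "Why is [INTERNAL-HOST] beaconing to [INTERNAL-IP]?"
```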

Still, simply roping off generative AI is also risky. The tools exist, and people want to use them because they're efficient, powerful and operate at machine speed. Pretending they don't exist can lead to people cutting corners or becoming dissatisfied with the working environment. I'll add that even agencies that rely on sanctioned, third-party LLMs built into EDR solutions must have a strong understanding of how the data they provide is governed by their vendors.

There is no need to rush into "AI-enabled" products, and it's important to take time to define what the AI actually is. Is it an LLM, machine learning, deep learning or simply conventional algorithms? Agencies must also investigate data governance before sending potentially proprietary data to a public LLM, or even to a vendor-specific one.

Finally, I recommend taking anything that people market as "next-gen" with a grain of salt, as "next-gen" has largely become a marketing term (with some exceptions).

But it would be a mistake to dismiss AI entirely. It's seeing rapid adoption and iteration, and it's not going away any time soon.

