What’s the Real Role of AI and ML in Cybersecurity?

Artificial intelligence (AI) and machine learning (ML) are being heralded as a way to solve a wide range of problems in different industries and applications, such as reducing street traffic, improving online shopping, making life easier with voice-activated digital assistants, and more.

The cybersecurity industry is no different. However, we need to be careful of the “hype” around AI and ML. And there is a lot of hype out there! A simple Google search of the term “artificial intelligence” yields about 630 million results, and AI continues to dominate the headlines and has even made its way into mainstream TV advertising. However, the cybersecurity industry needs to set the record straight – contrary to popular belief, AI and ML will not solve all of our problems.

The industry needs to separate what is real from what is simply hype when it comes to AI/ML in cybersecurity. In particular, a key issue that enterprises need to be aware of is that AI/ML cannot establish causation – it cannot tell you why something happened. Understanding why is a key component of cybersecurity, especially as it relates to security incident investigations and analysis.

Judea Pearl, an early pioneer in the field of AI and one of its leading experts, discusses the problems with AI in his latest book, “The Book of Why: The New Science of Cause and Effect.” He argues that the AI permeating the tech industry today has been handicapped by an incomplete understanding of what intelligence really is. Pearl explains how the field’s hyper-focus on probabilistic association has produced little more than increasingly sophisticated versions of the same simple reasoning that AI was doing in the early 1980s.

This problem is at the core of why AI is still not solving enough real problems for cybersecurity. Based on how AI is often marketed, many in the industry assume that AI-powered cybersecurity technology can simply replace humans. And while its ability to ingest and process vast amounts of information is important, AI’s lack of causal reasoning is why human intelligence – especially from experienced security analysts and incident responders – is still critical. Highly trained security teams play an important role in detecting, identifying, and protecting against a wide range of cybersecurity threats – and will continue to do so for a long time.

Other experts agree that misconceptions exist around AI. In a July 2018 article published by Elsevier, Dr. Gary Marcus, a Professor of Psychology and Neural Science at New York University and former CEO of the machine learning startup Geometric Intelligence (acquired by Uber in 2017), stated: “I think the biggest misconception around AI is that people think we’re close to it. We’re not anywhere near that…Humans can be super flexible – they can learn something in one context and apply it in another. Machines can’t do that.”

However, there are some important benefits of AI/ML, including its ability to correlate vast amounts of data from a variety of sources. This level of correlation is important for informing security teams about the incidents they are investigating, making teams better informed and more efficient at processing analytics. For example, AI/ML can surface potential incidents using anomaly detection and clustering, and it can assist with risk scoring of incidents that need investigation. This data can be used to better inform the humans who make decisions about security incidents. But AI/ML cannot make the decision for you.
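To make that concrete, here is a minimal sketch of the kind of anomaly detection and risk scoring described above, written in Python with scikit-learn. The telemetry fields (outbound bytes, failed logins, distinct ports) and the synthetic data are hypothetical illustrations, not anything from the article or a specific product; the point is that the model can rank hosts for review, but an analyst still has to determine whether an incident occurred and why.

```python
# Illustrative sketch only: unsupervised anomaly detection plus a simple risk score.
# Feature names and data are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-host features: outbound traffic (MB), failed logins, distinct ports contacted.
normal_hosts = rng.normal(loc=[50, 2, 10], scale=[15, 1.5, 4], size=(500, 3))
suspicious_hosts = np.array([
    [900.0, 40.0, 120.0],   # looks like possible data exfiltration
    [60.0, 55.0, 12.0],     # looks like possible brute-force activity
])
events = np.vstack([normal_hosts, suspicious_hosts])

# Anomaly detection flags statistical outliers; it cannot explain *why* a host is anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
anomaly_scores = -model.score_samples(events)   # higher value = more anomalous

# Scale anomaly scores to a 0-100 risk score used only to prioritize analyst attention.
risk = 100 * (anomaly_scores - anomaly_scores.min()) / (anomaly_scores.max() - anomaly_scores.min())

# Surface the top-ranked hosts for human investigation; the model does not decide the outcome.
for idx in np.argsort(risk)[::-1][:3]:
    print(f"host_{idx}: risk={risk[idx]:.1f}, features={events[idx].round(1)}")
```

Run as-is, the two injected outliers land at the top of the ranking – which is exactly the limit of what this kind of model offers: a prioritized queue for human analysts, not a verdict.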
