I was recently asked to give a talk about the state of AI in the field of Cyber Security. As I put together my comments, I found myself wondering, as I often do on this subject, why AI hasn’t made a bigger impact on my field. I’ve been thinking about how to use AI techniques to improve security outcomes for nearly two decades, and while the tooling and platforms have gotten bigger and better, the impact I have been expecting has not yet materialized. Why is that?
What I see in the vendor landscape today is mostly hype, and mostly focused on network security problems. There are vendors who sell behavioral anti-malware solutions, and vendors who sell network and device profiling solutions. All of these are sold to customers on the idea that they use some form of AI to learn what ‘good’ looks like, differentiate it from ‘bad’, and then take action when necessary. These are all defensive solutions, focused on detecting a threat, trying to stop it, and, if possible, reversing the damage done. They are not revolutionary, but rather evolutions of what we already see in network defense - tried and true, but with some additional learning capability that may or may not be truly deep learning AI. Maybe I’m cynical, but I often think AI is added by marketing to make a solution look new and advanced, the way ‘all natural’ labels have sprung up on products throughout the grocery store.
AI is a nebulous term anyway. Several decades ago it would have been applied to any algorithm that tried to do what humans do and was sufficiently advanced that we didn’t think a computer could do it. Like play chess. But as machines have gotten better at a wide variety of tasks that would have seemed impossible just a short while ago, the tide of what constitutes AI has receded, leaving behind a large collection of very useful algorithms that are no longer considered AI. A chatbot that uses a logical decision tree to respond to you? No longer AI. Alpha/beta pruning on a chess-playing search tree? No longer AI. A genetic algorithm for generating malicious fuzzing input? Still AI. A neural network that can decode x-ray imagery for diagnostics? Definitely AI.
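To make the fuzzing example concrete, here is a minimal sketch of a genetic algorithm evolving candidate inputs. The fitness function is a toy stand-in of my own invention (rewarding bytes that match a hypothetical `TARGET` pattern); a real fuzzer would instead score inputs by code coverage or crash signals from the program under test.

```python
import random

TARGET = b"FUZZ"  # hypothetical pattern the toy fitness function rewards

def fitness(candidate: bytes) -> int:
    # Toy fitness: count positions where the candidate matches the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: bytes, rate: float = 0.2) -> bytes:
    # Randomly replace each byte with probability `rate`.
    return bytes(
        random.randrange(256) if random.random() < rate else byte
        for byte in candidate
    )

def crossover(a: bytes, b: bytes) -> bytes:
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations: int = 200, pop_size: int = 40) -> bytes:
    # Start from a random population of byte strings.
    population = [bytes(random.randrange(256) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # perfect score reached
        parents = population[: pop_size // 4]  # keep the fittest quarter
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

The loop is the whole idea: select the fittest inputs, recombine and mutate them, and repeat until something interesting survives.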
The most impactful AI algorithms of today use machine learning techniques on enormous data sets to tune neural networks to perform meaningful tasks. I think we will, very soon, converge on a definition of AI in which the defining characteristic is that the algorithm is doing something sophisticated, and presumably useful, but no one understands how it is doing it. It doesn’t take many layers of a neural network before we lose the ability to understand the underlying algorithm. Algorithms that we cannot truly predict or ever fully understand represent the state of the art of AI today.
So how does that apply to cyber security? I think if you were to take this definition of AI and hold the supposedly AI-powered security solutions out there up against it, a very small percentage, and perhaps none, would pass the test. They may be interesting and innovative solutions, but for the vast majority, the AI moniker is purely marketing.
When the cyber security battlefield is dominated by autonomous bot attackers and autonomous bot defenders, at every layer from hardware to software, that will be the day AI has truly come to our industry.