It seems like everyone is struggling to build a scalable application security program these days. Budgets are small, internal politics and bureaucratic inertia are a real problem, and in the meantime the threat landscape isn’t waiting while your business figures things out.
I would love it if my conversations with customers could be focused on a holistic, risk-centered approach that combines improvements to people, process, and tools in order to reduce risk to manageable levels.
But that’s not how it normally goes.
The majority of my conversations are instead focused on security tools. Which tools should I buy? Why aren’t I getting the results I was promised from my tools? Why won’t my team use the tools I’ve purchased for them?
Everyone wants a silver bullet. Everyone succumbs to the siren call of a tools vendor that tells them they have the perfect solution for managing security risk.
Lost in translation is the fact that tools do not an application security program make. Tools can only support. You need to build the program first, and only then should you focus on acquiring the tools necessary to optimize your activities. It never works to start the other way around. That would be like hiring a contractor to build your house before you have architectural plans.
Sometimes my conversations are even more depressing. Sometimes what I'll hear from a CISO is something along the lines of, “You think I have time to worry about vulnerabilities in my software? I can’t even get my CEO to use multi-factor authentication!” Then the conversation turns to what they want to do when they retire from their god-awful job.
The more I think about it, the more I realize that these depressing conversations neatly capture the essence of the current security problem (as I see it).
Executives, even at savvy technology companies, don’t understand application security vulnerabilities well enough to manage inherent risk. It doesn't stop there. Even the engineers building software don’t understand application security well enough to prevent and mitigate the most common vulnerabilities. These are the people we are counting on to write correct code, and they will get it wrong more often than not. But it gets even worse. The majority of people in risk management, in the security teams whose very job is to understand and reduce IT risk, don’t understand application security well enough to manage risk effectively.
So where does this leave us?
In a nutshell, every security problem can be traced to a human being doing the wrong thing at the wrong time. Often repeatedly. Even after they’ve been trained.
Why is it that the majority of security solutions and methodologies are focused on technology instead of on people? Is it perhaps because that’s where security vendors are able to make the most money?
Let’s look at a few examples of human mistakes:
- A user clicks on a link in a phishing email and loses their login credentials to a hacker. User error.
- An executive is given a USB drive at a conference, plugs it into his laptop, and installs a rootkit. User error.
- An engineer doesn’t pick up the latest patch for a vulnerable library, resulting in a breach that affects millions of customer records. User error.
- A developer codes an unsafe SQL statement, allowing a hacker unauthorized admin access to the database. User error.
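To make that last mistake concrete, here is a minimal sketch (using Python's standard sqlite3 module and an invented `users` table) of the unsafe pattern next to its fix. The point is how small the difference is between the two: one concatenated string versus one placeholder.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# UNSAFE: string concatenation lets the input rewrite the query;
# the injected OR clause matches every row in the table.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query treats the input as data, not SQL,
# so the malicious string is just a name that matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe))  # 1 -- the injection succeeded
print(len(safe))    # 0 -- the input was treated as a literal name
```

One character of carelessness, and the query means something entirely different. That is the kind of simple mistake this whole list is about.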
These are all mistakes that didn’t need to happen. Each individual could have known better, and honestly, they probably did. But they made a simple mistake. One mistake in a sea of mistake-free actions. Yet that one mistake will result in huge personal, professional, and business liability due to the nature of our incredibly hostile modern cyber world.
What is the solution, you might ask? My knee-jerk reaction is to say every individual in your organization who is a security risk (which means everyone) should be trained so they know what not to do. My more considered reaction is that we’ve been training people for years, and knowing that humans are unalterably human, it hasn’t solved the problem. Don’t get me wrong, training does help, but if the security solution is to depend on every single individual in your organization to do the right thing 100% of the time… well… good luck, my friend.
My next reaction is that it is simply too easy to make catastrophic mistakes. I was talking to a new acquaintance on a chairlift while skiing a few weeks ago and he told me about how he had clicked on a link in an email and the end result was that someone had taken over his bank account. Many weeks later it was mostly resolved and he mostly had access to most of his money again. But talk about painful consequences for a simple mistake! He was down on himself and felt he’d been stupid. My reaction was different. We should not be satisfied to live in a world in which every user has to be so wary of making simple mistakes with such drastic consequences.
We have clearly made things too hard and we need to figure out how to fix that.
In the meantime, we need to teach people to do the best they can in the inhospitable environment we've thrown them into. That still means training. We can’t count on people to never make mistakes. But they will make fewer mistakes if we teach them what to do and what not to do. Then we need to figure out how to make the few mistakes they make less damaging.
A Response from Joe
When Joe read this article, he had a few comments that we both thought were worth sharing. In response to the idea that a simple mistake can result in catastrophe, he had this to say:
It is not only unreasonable to ask this, it is unfair. We design every other system we use to be fault tolerant. We know people will drink and drive or fall asleep at the wheel, so we build better seatbelts, airbags, and safety systems. We know people will forget to close and lock their doors, so we build in automatically closing and locking doors. In literally every other area that matters we’ve built in failsafes, except in software. Oh, you clicked on that link and didn’t recognize it said https://gmai1.com? Too bad, you have literally given up the keys to the castle for every system you have access to. You can’t figure out a password system? Too bad, you’re putting yourself at an unimaginable amount of risk after the next data breach with your IL0veLucy1$ password. It’s not fair that we’re putting that much pressure and responsibility on people, when we ask so little of them in every other aspect of their lives. We live in a risk-free world, and a risk-filled cyber landscape.
In response to what can we do to make things less hard, he had this to say:
I think we can come up with a handful of remediations. Some of them come down to training the users, and some are controls for the people building software. Having better 2FA options in place, leveraging breached and common password lists for registration forms, protecting authentication, doing a better job detecting fraud, and having better risk thresholds for critical systems, to name a few. Having email as a linchpin for every other system is super scary. Email providers like Gmail and iCloud should put controls in place to slow down risky behavior. If something comes in that looks risky (password reset, transfer, etc.), can we inject realtime training or force a second-factor verification then? Can we call upon the banks to do a better job of recognizing errant behavior? Many use adaptive authentication systems already; can we apply that to money transfers and account modifications?
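Of the controls Joe lists, the breached and common password check is one of the easiest to sketch. Here is a minimal, hedged illustration in Python: the small `COMMON_PASSWORDS` set and the `password_allowed` helper are stand-ins I've invented for this example; a real deployment would load a published breached-password corpus from disk or query it through an API.

```python
# Minimal sketch of rejecting known-breached or common passwords at
# registration time. The small set below stands in for a real
# breached-password corpus.
COMMON_PASSWORDS = {
    "password", "123456", "qwerty", "letmein", "il0velucy1$",
}

def password_allowed(candidate: str, min_length: int = 8) -> bool:
    """Reject passwords that are too short or appear on the deny list."""
    if len(candidate) < min_length:
        return False
    # Compare case-insensitively: trivial casing tweaks of a breached
    # password are still weak.
    return candidate.lower() not in COMMON_PASSWORDS

print(password_allowed("IL0veLucy1$"))               # False: on the list
print(password_allowed("correct horse battery staple"))  # True
```

The appeal of a control like this is exactly Joe's point: it catches a human mistake at the moment it's made, instead of depending on every user to know better.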