This is part of a series: Introduction · Awareness (you are here) · Enablement (coming soon) · Enforcement (coming soon). Last week I published a post introducing three important phases of AppSec Engineering: Awareness, Enablement, and Enforcement. Over the next three posts I will dive into each of these topics to share best practices and guidelines you can roll out to optimize your security engineering practice.
In my experience, the best AppSec programs start with AppSec awareness training.
Read more >>
This is part of a series: Introduction (you are here) · Awareness · Enablement (coming soon) · Enforcement (coming soon). In order for an AppSec team to collaborate effectively with development teams, it should think in three phases: Awareness, Enablement, and Enforcement. This month I'll be dedicating an article to each. These articles focus on the critically important area of application security and on the roles involved in building software: developers (DevOps), testers, and architects.
Read more >>
I recently talked with a CISO friend of mine who was struggling to scale his security team. He has fewer than 10 security people supporting an organization with over 500 developers and 2,000 employees. Responding to requests for development best practices, legal and compliance, security awareness, and IT security, all while trying to organize his team to perform scanning, testing, reviews, and more, left him underwater and stressed out!
Read more >>
In my last blog post, I wrote about what an application security program is and why it matters. In this post, I’ll cover what it takes to build and scale an effective application security program. I’ve seen many different ways that a well-intentioned program can fail to meet its objectives. While there may be many ways to fail, there are just a few key characteristics that lead to success.
The program must be:
Read more >>
In the late 1990s I worked on the security team for Internet Explorer. In fact, I was the first hire that Microsoft made in response to an influx of browser-based security vulnerabilities. I got to see what it looks like when a development team is bombarded by security problems that are serious enough to require a response and yet there’s no process to handle it. In the early days we would get at least one new vulnerability each week.
Read more >>
Zoom is an interesting case study in the various ways that software can fail. The Zoom team has had to learn a lot of lessons quickly, including the pitfalls of reusing components, how to make security engineering improvements to their SDLC and DevOps processes, and the need for a CISO leadership team.
In this article I want to walk you through some of the issues that were recently publicized.
Read more >>
It seems like everyone is struggling to build a scalable application security program these days. Budgets are small, internal politics and bureaucratic inertia are a real problem, and in the meantime the threat landscape isn’t waiting while your business figures things out.
I would love it if my conversations with customers could focus on a holistic, risk-centered approach that combines improvements to people, process, and tools in order to reduce risk to manageable levels.
Read more >>
A friend of mine just left the world of consulting. When I asked him about the biggest change in his thinking, he said:
Something is going to go wrong. It’s not a matter of if, it’s when. When that bad thing goes wrong everything hinges on how you detect and respond to it.
As consultants so much of our job is focused on reducing risk for our clients, but that’s all we can do.
Read more >>
I used to think I’d like to be a CISO. Now that I’ve spent the past few years speaking with CISOs, I’m not so sure I’d want the job. CISOs have targets on their backs and hold our data in their hands, and I, for one, am thankful for what they do.
I’m convinced that the CISO job is one of the hardest, most thankless, and most stressful jobs on the planet.
Read more >>
In my last post, I talked about the fact that none of us knows how to solve the problem of cybersecurity. It’s a tautology, so it shouldn’t be surprising: if we knew how to solve the problem, the problem would be solved. Therefore we don’t know how to solve the problem. But it is surprising, and so it feels like a ‘hard truth’ rather than ‘the truth’.
When confronted with a long-standing problem (like cybersecurity), it is typical to assume that if we had more will, more resources, more intelligence, or perhaps more of all of the above, we could solve the problem.
Read more >>
I want to let you in on a little secret. Nobody in the security profession, especially those of us in the software security profession, truly knows what we are doing. You’d never know it talking to security vendors. If you attend a security conference, you can walk the booths and talk to hundreds of security professionals selling solutions that cost tens or hundreds of thousands of dollars, each promising to solve your security problems with their technology platform, their professional services, and most importantly with their new deep-learning AI algorithms.
Read more >>
Back in the day, hackers hacked to see what was possible. Why did they do it?
Because it was there.
I’m pretty sure Robert Morris said that. A lot of the most interesting and epic hacks in the early days of software were about pushing boundaries or learning systems for the simple joy of understanding how they worked. There are still a couple of areas like that left: the incredible checkm8 research comes to mind.
Read more >>