In today's post, I'd like to introduce you to Loren Kohnfelder, an old friend of mine. I met Loren at Microsoft in the late '90s, when we faced the herculean task of improving the security of Internet Explorer.

It was an exciting and harrowing time, and while it is amazing to think about how far we've all come, it is also surprising to realize how many of the security challenges we struggled with twenty years ago are still with us today.

I wanted to interview Loren because he has one of the most illuminating and important security perspectives on the planet. In his 1978 bachelor's thesis he invented Public Key Infrastructure (PKI), including the concepts of certificates and certificate revocation lists, and laid the foundation for the model of trust underlying the Internet we use today. I feel like an old timer because I've been thinking about security for over two decades. But Loren has been thinking about security for over forty years!

What follows is a free-ranging discussion we had on a variety of security-related topics. I hope you enjoy it and find it as interesting as I did.

Jason:

Thanks for taking the time to talk with me today. I'm really excited to hear your thoughts. I'd like to start by touching on the incredibly long, deep history you have on the topic of software security. What inspired your bachelor's thesis? Did you know at the time how influential it would be?

Loren:

I was incredibly lucky at MIT to be able to hang around the Laboratory for Computer Science, where two of the three inventors of the RSA algorithm had their offices. This was right around the time their paper was going to appear in print, and both Len Adleman and Ron Rivest were very generous in taking the time to walk me through the math.

There is a technical problem with the RSA algorithm in the case of signing an encrypted message, because the two key pairs involved (the signer's and the recipient's) have different moduli, and the result of one operation might be too large to fit under the other modulus. (See their paper, Section X, Avoiding "Reblocking", for details.) As I recall, when Ron explained this I pointed out that by reversing the order (which you can always do because it's commutative) the problem goes away; he immediately called in Len to see if they had possibly missed that, and I had their attention. It was too late to amend the paper, so I wrote a letter to the Journal of the ACM that got printed with the endorsement of the RSA authors.
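To make the reblocking issue a little more concrete, here is a toy sketch in Python. The key sizes and values are made up and wildly insecure, and it shows only one direction of the problem (signing first under the larger modulus); it is meant purely as an illustration of the order-of-operations point, not a reconstruction of the original letter.

```python
# Toy illustration of the reblocking issue -- tiny, insecure, made-up parameters.
# Alice (the signer) happens to have a larger modulus than Bob (the recipient).

def make_key(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return e, d, n

eA, dA, nA = make_key(61, 53)           # nA = 3233
eB, dB, nB = make_key(47, 59)           # nB = 2773, smaller than nA

# Sign-then-encrypt: the signature is a value mod nA, so it can be as large as
# nA - 1 and may not fit under Bob's smaller modulus nB.  Whenever that happens,
# encrypting it mod nB loses information ("reblocking" is needed to recover).
oversized = sum(1 for m in range(2, 200) if pow(m, dA, nA) >= nB)
print(f"{oversized} of 198 signatures would not fit under Bob's modulus")

# Reversing the order sidesteps the problem: encrypt first (result < nB <= nA),
# then sign.  The receiver undoes the steps in reverse: verify, then decrypt.
m = 1234
c = pow(m, eB, nB)                      # ciphertext, guaranteed < nB <= nA
sig = pow(c, dA, nA)                    # signature over the ciphertext
assert pow(pow(sig, eA, nA), dB, nB) == m
```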

Len became my thesis advisor in order to determine a topic and we quickly settled on exploring practical applications for RSA. At the time, in the late 1970s, computers were big and expensive, so the only applications for RSA we could think of were scenarios like bank-to-bank transaction security or military communications. Special-purpose printed circuit boards to accelerate RSA computations cost thousands of dollars and were still so slow that only very modest key sizes were feasible. Plus, the NSA was actively discouraging cryptography research, and there were export restrictions on software implementations, requiring us to treat them like munitions.

Key distribution was the obvious next big problem to solve, so I focused on that. The idea of public keys was a game changer. First, we postulated that you could publish a "yellow pages" style directory of keys, but of course that wasn't a very good solution. To begin with, transcribing those long numbers was a nonstarter. At the time, I thought that digital certificates were a fairly obvious solution: digital instead of paper, issued by an authority instead of the phone company. Even though we were well aware of Moore's Law, nobody foresaw that the technology would someday be in everybody's pocket or purse. I was focused on graduating so it did not occur to me to patent the idea, and in hindsight I would say it ended up being better that way.
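As an editorial aside for readers who haven't seen the idea in its bare form: a certificate is essentially a statement binding a name to a public key, signed by an authority whose own key the verifier already trusts, so the binding can be checked offline without any directory lookup. The sketch below is purely illustrative, with toy RSA and made-up names and parameters; it is not Kohnfelder's original format, nor any real standard such as X.509.

```python
# Illustrative only: a certificate as a signed (name, public key) binding.
# Toy RSA with tiny made-up parameters; real certificates carry far more structure.
import hashlib

def toy_rsa_key(p, q, e=17):
    n = p * q
    return e, pow(e, -1, (p - 1) * (q - 1)), n   # (public exp, private exp, modulus)

def digest(data: bytes, n: int) -> int:
    # Hash the data and reduce it into the signer's modulus.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

e_ca, d_ca, n_ca = toy_rsa_key(1009, 1013)       # the authority's key pair
e_s,  d_s,  n_s  = toy_rsa_key(1031, 1033)       # the subject's key pair

# The authority signs the binding between the subject's name and public key.
binding = f"subject=alice;key={e_s},{n_s}".encode()
signature = pow(digest(binding, n_ca), d_ca, n_ca)
certificate = (binding, signature)

# Anyone who already trusts the authority's public key can verify the binding.
binding, signature = certificate
assert pow(signature, e_ca, n_ca) == digest(binding, n_ca)
```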

Jason:

That is a fascinating story. It is particularly exciting for me to hear about, as I don't believe you've ever told it publicly like this before. We are capturing a small, but important piece of computing history here!

Is there anything that surprises you about how the ideas you invented have been used in the real world?

Loren:

It took over twenty years from the idea of digital certificates until the HTTPS protocol was first proposed (RFC2818). Today — just under another twenty years later — we finally have over half of web traffic secured with HTTPS and are just beginning to see DNS over HTTPS roll out. I could say that the long timeline for securing digital communications tells us something about exactly how important the world has considered information security on the internet.

Jason:

I appreciate you pointing that out, because it's incredible to think about. Forty years have passed since the concept of certificates was proposed, and we still don't live in a world where all sensitive traffic (much less all web traffic) is encrypted. It wasn't that long ago we were arguing about whether HTTPS everywhere makes sense or not. Seems silly now. Of course, we should be optimizing for security.

Point being, we often still make the job of security harder than it should be, because much of the cost of doing it wrong is invisible until you are the victim of an attack. If you are never attacked, then perhaps favoring performance over security makes sense: the cost/benefit tilts in that direction. As soon as you've suffered an attack, the cost/benefit tilts rapidly in the other direction.

I'd love to hear, from your perspective: what else in the security landscape has not changed as much as you would have expected? I pointed out in an earlier post, for instance, that it surprised me that memory-related vulnerabilities are still at the top of the CWE Top 25 list of most dangerous software weaknesses.

When you look back on all that you've learned and experienced in this field, what else has surprised you?

Loren:

That's an excellent observation about the mixed adoption of HTTPS as an example of how progress evolves, much as it does in science. It's very easy to get attached to the status quo, so when new ideas are proposed, people split into camps and at first there are fierce challenges. These harsh criticisms can help refine and perfect the new idea, unless the pushback becomes overwhelming. The old guard typically over-defends their old ways far too long, until finally resistance collapses and the new idea quickly gains broad acceptance.

To your specific question: I can say, without too much exaggeration, that everything surprises me. The biggest lesson for me has been coming around to see that software is such a very human undertaking, and that those subjective factors almost entirely subsume the technical aspects of the work. Make no mistake: the technology is also crucially important, but we can only create or evaluate technology through the lens of our own experience and priorities. This is a big, hard topic, and I haven't found a good way of talking about it yet, but it is what I was alluding to when I questioned why it has taken so long to implement network connection security in practice.

Several years ago I learned about behavioral economics via Dan Ariely's work, and it was eye-opening. Economics long assumed that people in the market are rational actors maximizing their self-interest, but if you do the experiments it turns out this is almost never true. Advertising and many business strategies have been taking advantage of these quirks in human thinking for ages.

The most challenging aspect of this is that it is so hard for us to see these foibles in ourselves, even though science tells us that nobody is immune. I would say that software people, in the most logical job category imaginable, are particularly incapable of seeing their less-than-objective decisions and actions. So, while it would seem that making security a high priority makes perfect sense, it shouldn't surprise us that the reality is very different.

I surmise that people evolved to very precisely demand the minimum degree of security they can get away with (obviously degrees of risk aversion vary between individuals), and that is exactly what we have today. As security professionals, we place a high value on better security, inevitably clashing with others who don't see it the same way we do. This is akin to the endowment effect, in which we place a higher value on an object we own than on the same object when somebody else owns it. I haven't figured out how to apply this to improve the situation, but when I see something puzzling this is one of the first things I consider to explain it.

To bring this wild speculation down to earth, consider the extremes of how old some legacy computers and software are. According to the US Government Accountability Office (2016), numerous critical systems are over fifty years old, and for some of these there are no specific plans to update them. This "if it ain't broke, don't fix it" mentality is very strong. If a system is already 55 years old, for instance, going for 56 almost seems reasonable. If anything, it seems that the more crucial the system's function is, the more daunting it is to replace. This means it is often hardest to tackle the most important systems in need of modernization.

Security may well be viewed similarly: if the code we have has survived this long, why mess with it? Given our inability to find all the security bugs, we never know how much risk we are actually taking on, so one day at a time the status quo perpetuates, ad infinitum.

While some might consider the idea crazy, I am surprised that nobody (to my knowledge) has ever even attempted to sell software providing any semblance of a warranty of quality. Disavowal of merchantability is the lowest possible bar to set for a product: I have to believe it is at least possible to do better. You would think that somebody would try to sell security with some assurance — but no. I can only assume this means there is zero market for it. Perhaps, as with protection from wild bears, people are satisfied to simply run faster than the other potential victims.

Jason:

I have seen contracts requiring response times on vulnerabilities, so I know that is something that is done. In terms of security assurance, I'm not sure how one would do that. You can assure uptime, but how do you assure the absence of security problems? Or perhaps you could assure the privacy of data (which is already partly done in privacy statements), but companies already face liability for losing data. Would they want to add on more cost in the nearly inevitable case of a breach?

If we are talking in terms of economics, I think the problem can be summarized simply: sensitive data is worth more to thieves than it is to the companies protecting it. Think about that for a moment. Is that dynamic true in any other area of our lives besides cybersecurity?

I attended a talk recently given by Richard Rush, CISO of Motorola Mobility, in which he explained that your data can be worth anywhere from $22 (a credit card number) to $1,000 (medical records) on the dark web. Meanwhile, Facebook values your personal data at between $0.20 and $0.40. That's a huge disparity, and it shapes how much a company can spend on protecting your data compared to how much an attacker will be willing to spend to steal it.

The cost of cybercrime is $7 million per minute and is growing at an exponential rate, while cybersecurity budgets continue to increase at a linear rate. Every CISO I talk to feels they are behind and continuing to lose ground.

In this environment it is very hard for the good guys to win. Attacks are launched at scale, and it takes only a very small hit rate for an automated attack to have positive ROI for the attacker. As a result, successful breaches are becoming more and more common.

The big question is, what's next? The playing field is uneven and getting more so. We are spending more on security every year and still falling behind. Where can we look for solutions?

Loren:

Let me be clear that I have never seen such security-enhanced product contracts or assurance offerings, but I do find their absence from the mainstream market striking. I'm a big believer in the value of proactive security ("moving left"), and a model that begins with the OS maker disavowing responsibility does the opposite of providing incentives for investing in security upfront. From the customer perspective, if we are serious about building more secure systems then it's only fair that we should expect to pay for it.

If you offered to pay one of the big OS makers for a more secure version of their product, I'm guessing they wouldn't take you seriously. Yet governments do have tremendous weight, and apparently they haven't tried asking: so either they aren't trying very hard, or my idea is completely bonkers. Even if it is a crazy idea, I don't see why it isn't worth at least exploring. There are plenty of reports of far-out military R&D, so why not more secure software? I hope it's clear that I'm not talking about "perfect security" at all: that would be crazy.

The kind of approach I can envision would be an incremental negotiation between supplier and customer, and I would suggest basing premium secure features on threat-model-driven analysis. Consider a bid/ask pricing model using measurable security properties: what would it cost, and what would customers pay, for various protections? Just as we have a thriving market today in bug bounties, it seems we could also create markets in software components that have specific, demonstrable security properties. Technology providers could team up with insurance companies to offer restitution payments in the unlikely event of failure as added assurance. My point is not to design future security products here, but that there is real value in exploring new possibilities: even if only one out of many bears fruit, that's a big win.

What's next, I have no idea. Securing information systems is loaded with disadvantages for the good guys, as you have mentioned. One response to this fact is to continue to ask: are we doing everything we can to raise the bar? This includes basic as well as advanced mitigations, better metrics and analysis, more education, and more auditing. Small improvements can be more effective than we realize, since it is difficult to detect failed attacks that were proactively prevented.

Jason:

I'd like to cover the idea of human-centric security design. It's a term I've been hearing more often, and I find it encouraging because I think it reflects a growing understanding that humans are the root cause of all security breaches, whether through direct action (e.g., responding to a phishing mail) or indirect action (e.g., coding a SQL injection vulnerability into a database application). It seems the emphasis must be on making it easier for humans to avoid mistakes, because if our strategy is simply to count on people to do the right thing, we will continue to be disappointed. Even with perfect training, even if a person has been definitively taught the right thing to do in every situation, they will still make mistakes. And in a world where a single mistake can result in a breach costing hundreds of millions of dollars, that state of affairs is untenable.
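As a concrete picture of the "indirect" mistake mentioned above, here is a minimal sketch using Python's built-in sqlite3 module (the table and data are made up for illustration). The safer variant works because the driver treats the input purely as data, so doing the right thing is also the easy thing.

```python
# Minimal sketch: string-built SQL versus a parameterized query (sqlite3, in-memory DB).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input is spliced into the query text, so it can rewrite the query.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'").fetchall()
print("concatenated query returns:", leaked)      # leaks alice's secret

# Safer: the same query with a bound parameter; the input cannot change the query.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returns:", safe)       # returns nothing
```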

Loren:

I like the basic intention, certainly as a reaction to overemphasis on technology. There is so much to do on the human side, and we understand the human factor so little. For insight into "real users" I encourage non-technically inclined friends to ask me for technical help, and it's always eye-opening to glimpse how they think. For starters, it's important to keep in mind that you have no idea about the skill set, much less priorities, of your users.

Training has so very far to go, both in what the message is and in how it's delivered. For example, I searched [basic online safety tips] (over two billion results!) and found the top results quite a mixed bag in terms of what they say, and, in my opinion, none of them particularly good. As an industry, it seems we should come up with consensus-based user guidance and then line up behind that. Currently, users get separate advice from the government, various online services, PC and OS makers, anti-virus vendors, and more, and have long since given up taking it seriously.

I would also add that, as an industry, we need to keep working on practicing what we preach. Just last month my financial institution contacted me via an email sent by a third party, directing me to a different website where I was asked to enter my banking credentials to proceed. I checked the institution's own online help, found guidance never to provide my password anywhere other than the online banking website, and pointed that out to them. Customer support assured me this was legitimately outsourced and saw no conflict at all with the practice.

The one certainty is that humans will always make mistakes, so we need to evolve software to become more resilient so that the inevitable errors aren't quite so catastrophic.

Reducing fragility is very important, and that often means reducing complexity. The ability to undo is a great mitigator of human error; in text editing it has become essential, both as editing undo and as redline versioning. Imagine a world without those terrible dialog boxes asking, "Are you sure?" and instead offering, "Let's try it and see," with a guaranteed way of backing out later. That capability in itself would be extremely educational for any user. I'm talking about undo beyond the scope of one operation in one app: imagine a database that could undo updates, undo transactions between entities, or repair damage done after a malicious account takeover.

This isn't easy, but it can be done, and I think it would be a significant boon. We need to move beyond the status quo of software that is fragile, where you can lose everything in an instant with one false move. Instead, we should be able to make digital environments safer than the real world, complete with safety nets, so we can worry less about slipping up and focus on the work.
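To make the undo idea above slightly more concrete, here is a minimal sketch of one way to get that property: a tiny key-value store that records the inverse of every change in an append-only log, so a single update, or every surviving change made by a compromised account, can be rolled back later. The design and names are hypothetical, and a real system would also need durability, concurrency control, and conflict handling.

```python
# Illustrative sketch only: a tiny key-value store that logs the inverse of every
# change, so updates can be rolled back later -- including all surviving changes
# made by one (possibly compromised) account.  Hypothetical design, no durability
# or concurrency control; a logged value of None means "key did not exist".

class UndoableStore:
    def __init__(self):
        self.data = {}
        self.log = []                            # (actor, key, previous_value)

    def put(self, actor, key, value):
        self.log.append((actor, key, self.data.get(key)))
        self.data[key] = value

    def undo_actor(self, actor):
        """Roll back every surviving change made by one actor, newest first."""
        for i in range(len(self.log) - 1, -1, -1):
            if self.log[i][0] == actor:
                _, key, previous = self.log.pop(i)
                if previous is None:
                    self.data.pop(key, None)
                else:
                    self.data[key] = previous

store = UndoableStore()
store.put("alice", "balance", 100)
store.put("mallory", "balance", 0)               # damage after an account takeover
store.put("alice", "note", "hello")
store.undo_actor("mallory")                      # repair just mallory's changes
assert store.data == {"balance": 100, "note": "hello"}
```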

Jason:

I love the idea of reducing complexity in the name of reducing fragility. It is striking how complicated most enterprise environments are. It is hard enough to secure a single application exposed to the Internet and dealing with sensitive data. Now imagine multiplying that by a factor of 1,000, and add in the fact that nobody has visibility into all of the software that has been deployed. Just getting an accurate inventory is a huge challenge for most large companies.

So much software was developed without a true understanding of the threat environment, and unfortunately new software is developed every day without an adequate focus on security. The cold, hard fact is that building a truly hardened enterprise can be prohibitively expensive. The cost is simply not factored into the business model since the overall threats and security needs are so poorly understood at the executive and board levels.

Loren:

That's a nice summary of many of the big inherent challenges that software security faces, with so many unknowns, topped off by great complexity. The reality is that time and resources for building software are notoriously difficult to estimate; overruns are common and security usually gets done within shrinking constraints. It's easy to fault this underinvestment in security (true as that may be), but I think the roots of the problem go deeper.

Understanding the threat environment is key, yet it is so difficult that we typically fail to set a clear goal of how much security is needed. Absent goals, security work plans quickly trend toward "what everyone else does" as good enough. This approach is reinforced by strong tendencies to "do what we did last time" because revisiting security for existing systems is such a daunting challenge that few want to even contemplate it. While the "how much" question is indeed difficult, I would suggest that even partial answers are well worth the effort because they inform how to best approach security work, and perhaps most importantly, how to know when we are done.

Jason:

Thank you, Loren, for this important discussion; I really appreciate your time. I have one final set of questions for you.

What are you thankful for in the field of security? In other words, where do you think people are doing a great job?

If you could make one wish for what should change to improve the security landscape, what would it be?

Loren:

Excellent wrap-up questions, because it's important to acknowledge the good progress being made, which is all too easily taken for granted. No doubt I am only aware of a tiny fraction of all the great work we should be thankful for, but for this discussion I will offer a subjective observation: there is a broad and growing acceptance of the importance of understanding the threat landscape to guide security strategy. This is a big deal because it means, in terms of a well-known adage, that we will go beyond "looking for our keys under the streetlight" and start applying effort where it's most impactful (as opposed to where it's easiest).

My wish for change would be to see the software industry advance the level of transparency so that we can establish reliable community metrics. I'm aware that there are many hurdles making security disclosure difficult, but I don't think it's a Pollyannaish aspiration at all. Instead of routinely providing the absolute minimum of information about a security update, I would challenge software makers to tell us more, so long as it doesn't help the bad guys. Even transparency about attempted attacks or internal processes would be valuable. Grassroots efforts toward fuller disclosure would, over time, apply pressure on the laggards. Little by little, we could have a much better global view of the actual state of software security.

I'd like to close by thanking you for the opportunity to have this chat, and wish you a Happy New Year!
