Several quotes from ‘The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure Software’ (2006) by Michael Howard and Steve Lipner.
Design specifications miss important security details that appear only in code.
Don’t just say, “This is bad.” Instead, say, “This is the way you should do it.” In our experience, engineering staff are happy to adhere to security and privacy policies as long as you explain how to attain the desired objectives.
The software industry needs to change its outlook from trying to achieve code perfection to recognizing that code will always have security bugs.
Code with a large attack surface—that is, a large amount of code accessible to untrusted users—must be extremely high-quality code. It must be extensively hand-reviewed and tested.
The goal of a good development process that leads to quality software is to reduce the chance that a designer, an architect, or a software developer will insert a bug in the first place. A bug that is not entered during the development process is a bug that does not need removing and that does not adversely affect customers. Make no mistake, there is absolutely a critical need for code review, but simply “looking for bugs” does not lead to a sustainable stream of secure software. Throwing the code “over the wall” for others to review for security bugs is wasted effort. A goal of the Security Development Lifecycle (SDL) is to reduce the chance that someone will enter security bugs from the outset.
The belief that, over time, open source code quality will improve is a pretty typical view in the open source community. It may be true, but it is a naïve viewpoint: customers don’t want code that will be of good quality in due course; they want more secure products from the outset.
We have noticed that although deep security skills are important, it’s even more vital that the Security Advisor have good project and process management skills.
Never underestimate your enemy—if you don’t test your code, somebody else will do it for you and with harsher consequences.
New Kinds of Vulnerabilities Will Appear
If you follow the guidance in this book, you’ll build software that is as secure as you can make it at the point in time when you’re doing development. You and your team will do your best, and that might be very good indeed. But we can guarantee that the security researchers will keep trying, and they will find a class of vulnerability that neither you nor we knew about, and one (or many) vulnerabilities in that class will affect your software.
Back in the 1970s and 1980s, one of us (Lipner) believed that it would be possible to apply highly structured formal specification and design methods along with formal verification of specifications and programs to produce software that would be substantially free of vulnerabilities. A few projects attempted to follow this path, but all failed. (Lipner led a project that came close to releasing an operating system intended to reach Class A1—the highest level—of the U.S. Trusted Computer Systems Evaluation Criteria, or Orange Book [Karger et al. 1991].) The obvious cause of the failures was that by the time a team had executed the highly structured development process, the system they were producing was obsolete and no one wanted to buy it. But even if those highly formal processes had been efficient enough to produce competitive products, we don’t believe they would have achieved their ambitious security goals. The reason is that new classes of vulnerabilities continue to be discovered, and those vulnerabilities almost always result from errors that are below the level of detail addressed by the formal methods. To quote Earl Boebert, a security researcher whose experience goes back to the early 1970s, “Security is in the weeds.” Not only do you need to get the specifications and designs right, but any error in the machine code that is actually running can undo you.