Security in the SDLC

The Software Development Lifecycle (SDLC)

Regardless of your development methodology, and whether you create open source or proprietary software, writing software typically goes through four phases:

In the requirements phase, you decide what you are going to build and what the software will do. You can write this down as user stories, as a formal specification document, or as feature requests in your issue tracker.

In the architecture phase, you choose the technical components of your software. Think of the kind of application (web application, client/server application or stand-alone program), middleware, databases, file formats, protocols, etc.

The coding phase is where you create the software product by writing code.

In the testing phase, you verify that the result of the coding phase is correct (usually this means that it satisfies the requirements).

When and how you execute these phases differs per development methodology. A waterfall development style executes the phases in order, while agile development methods run many iterations of the above cycle. Note that the name requirements phase is somewhat awkward, since some security requirements can only be specified once enough technical details are available, which happens later in the process. However, it is the name used in most of the literature.

Security problems and remedies in the SDLC

The Dutch computer scientist Dijkstra once said: “If debugging is the process of removing software bugs, then programming must be the process of putting them in.” If we do not pay attention, we will put bugs into our software. This also holds for security bugs. In each phase of the SDLC, we must pay attention to security.

While Dijkstra opted to prevent bugs by mathematically deriving an algorithm from a formal specification, such methods are difficult to apply when we want to prevent security bugs. Even if our security requirements were crystal clear, formally specifying them would be very cumbersome, because the security requirements live at different abstraction levels. It is more practical to create safety nets that catch as many security mistakes as possible, in every phase of the SDLC [1]. Let’s look at what can go wrong in each phase, and what we can do about it.

In the requirements phase, we can fail to specify which security properties we want the system to have. As a result, we may forget to implement or test the desired security. We can derive security requirements from the functional requirements. A short and powerful technique for this is the STRIDE requirements analysis, discussed in Threats in the requirements phase.
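
As a minimal sketch of such a derivation (the six category names are the standard STRIDE ones; the functional requirement and the derived requirement texts below are invented examples, not a prescribed standard):

    # Illustrative sketch: deriving security requirements from one functional
    # requirement using the six STRIDE categories. The requirement texts are
    # example wording only.
    functional_requirement = "A user can log in with a username and password."

    stride_derived_requirements = {
        "Spoofing":               "Authenticate users before granting access to their account.",
        "Tampering":              "Protect credentials in transit against modification.",
        "Repudiation":            "Log login attempts so actions can be attributed to a user.",
        "Information disclosure": "Do not reveal whether the username or the password was wrong.",
        "Denial of service":      "Rate-limit login attempts to keep the service available.",
        "Elevation of privilege": "Grant a logged-in user only the permissions of their own role.",
    }

    for threat, requirement in stride_derived_requirements.items():
        print(f"{threat}: {requirement}")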

In the architecture phase, not knowing which technical threats to mitigate increases the likelihood that those threats materialize. Therefore, we need to know the system’s technologies and the threats that come with them. In an architectural risk analysis, we can apply a so-called threat library to identify where certain threats can occur (see Threats in the architecture phase).
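
A minimal sketch of what applying a threat library could look like (the component names and threat entries are made-up examples, not an established library):

    # Illustrative sketch: matching architecture components against a small
    # threat library. Components and threat texts are examples only.
    threat_library = {
        "web frontend": ["cross-site scripting", "cross-site request forgery"],
        "REST API":     ["broken authentication", "injection via request parameters"],
        "SQL database": ["SQL injection", "excessive privileges for the application account"],
        "file upload":  ["malicious file content", "path traversal on stored files"],
    }

    architecture = ["web frontend", "REST API", "SQL database"]

    for component in architecture:
        for threat in threat_library.get(component, []):
            print(f"{component}: consider a mitigation for '{threat}'")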

Furthermore, we must identify dependencies on third-party components and the known vulnerabilities in them (see Third party software components).
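
As a sketch, checking declared dependencies against a list of known-vulnerable versions could look like this (the package names, versions, and advisory identifiers are invented for illustration; in practice you would query a vulnerability database or use a dependency-scanning tool):

    # Illustrative sketch: flag third-party dependencies with known vulnerabilities.
    # The packages, versions and advisory IDs below are invented examples.
    known_vulnerabilities = {
        ("examplelib", "1.2.0"): "EXAMPLE-ADVISORY-001: remote code execution",
        ("parserkit",  "0.9.3"): "EXAMPLE-ADVISORY-002: denial of service on crafted input",
    }

    dependencies = [("examplelib", "1.2.0"), ("httpthing", "2.4.1")]

    for name, version in dependencies:
        advisory = known_vulnerabilities.get((name, version))
        if advisory:
            print(f"{name} {version} has a known vulnerability: {advisory}")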

In the coding phase, programming mistakes in a mitigation can cause it to provide less protection than intended. In addition, insecure programming constructs make the software vulnerable. A code review, guided by a secure coding standard, can catch both programming mistakes and insecure programming constructs (see Threats in the coding phase).
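
As a concrete example of an insecure programming construct, a secure coding standard would typically forbid building SQL queries by string concatenation and require parameterized queries instead. A small sketch using Python’s sqlite3 module (the table and data are illustrative):

    import sqlite3

    connection = sqlite3.connect(":memory:")
    cursor = connection.cursor()
    cursor.execute("CREATE TABLE users (name TEXT, role TEXT)")
    cursor.execute("INSERT INTO users VALUES ('alice', 'admin')")

    username = "alice' OR '1'='1"  # attacker-controlled input

    # Insecure construct: concatenating the input into the query allows
    # SQL injection and should be rejected in a code review:
    # cursor.execute("SELECT role FROM users WHERE name = '" + username + "'")

    # Secure construct: a parameterized query keeps data separate from code.
    cursor.execute("SELECT role FROM users WHERE name = ?", (username,))
    print(cursor.fetchall())  # [] -- the injection attempt matches no user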

In the testing phase, we can miss important security tests, or the tests may be incomplete. With a risk-driven security test plan, the most important security tests are done first. We can apply many verification methods (see Verification methods); which one to use depends on circumstances such as available expertise and time.
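
A minimal sketch of ordering security tests by risk, using a simple likelihood-times-impact score (the test names and scores are invented examples, and other scoring schemes are possible):

    # Illustrative sketch: order security tests by a simple risk score
    # (likelihood x impact, both on a 1-5 scale). Names and scores are examples.
    security_tests = [
        {"test": "SQL injection on search endpoint",    "likelihood": 4, "impact": 5},
        {"test": "password reset token reuse",          "likelihood": 2, "impact": 4},
        {"test": "verbose error messages leak details", "likelihood": 3, "impact": 2},
    ]

    for test in sorted(security_tests, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
        score = test["likelihood"] * test["impact"]
        print(f"risk {score:2d}: {test['test']}")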

Summary

We can make security mistakes in every phase of the Software Development Lifecycle (SDLC). Certain practices in each phase help us catch these mistakes. We have summarized this in the following table:

phase          problem                             remedy
requirements   incomplete security requirements    systematic specification and review
architecture   insecure technology use             architectural risk analysis
coding         insecure programming constructs     code review against secure coding standard
testing        incomplete testing                  risk-driven test plan

Footnotes

[1] Do not worry, formal methods can still be used.