
Software Design Security Principles

Confidentiality, Integrity and Availability (also referred to as the CIA triad) are considered the core concepts of information security. Basically, when building applications we need to:

  • not disclose information to unauthorized parties, such as individuals, entities or processes (Confidentiality)
  • not allow anyone to tamper with or modify the data during or after submission (Integrity)
  • ensure that the information is available to authorized parties when it's needed (Availability)

However, the growing complexity of modern applications increases the likelihood of software bugs that can lead to security issues. Furthermore, building applications nowadays requires knowledge of several different programming languages, frameworks and tools, as well as interactions with third-party software. For these reasons, building secure applications is quite challenging.

But fortunately, there are several Security Principles that can be applied to any project in order to minimize the security risks. It's important to deeply understand these principles so that they can be applied starting from the architectural design phase of the application.

Principle of Least Privilege

This principle is quite simple but really effective (and often underrated). It states that any entity performing a task should have only the minimum access privileges required to accomplish that task.

The classic example is a database user that only needs to read records from a few tables. Such a user does not require any administrative features to get the data, so when creating it we should grant only the minimum required privileges (likely just the ability to query a few specific tables). This way, if the account gets compromised and an attacker is able to connect to the database, they won't be able to move laterally or leak data from other databases.
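
As a concrete sketch (assuming a MySQL server; the account, password and table names are made up for illustration), an administrator could provision such a restricted account like this:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL driver (registers itself via init)
)

func main() {
	// Connect as an administrator only to provision the account;
	// the application itself will use the restricted user below.
	admin, err := sql.Open("mysql", "root:admin-secret@tcp(127.0.0.1:3306)/")
	if err != nil {
		log.Fatal(err)
	}
	defer admin.Close()

	// Create a user that can ONLY read the two tables it needs:
	// no INSERT/UPDATE/DELETE, no access to other schemas, no admin rights.
	stmts := []string{
		`CREATE USER IF NOT EXISTS 'report_ro'@'%' IDENTIFIED BY 'report-secret'`,
		`GRANT SELECT ON shop.orders   TO 'report_ro'@'%'`,
		`GRANT SELECT ON shop.products TO 'report_ro'@'%'`,
	}
	for _, s := range stmts {
		if _, err := admin.Exec(s); err != nil {
			log.Fatal(err)
		}
	}
}
```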

Minimize attack surface area

Attackers are always looking for new attack vectors. The more attack vectors an application exposes, the more likely they are to find a vulnerability.

For example, a web application with a huge number of API endpoints that manipulate tons of user input offers the attacker plenty of opportunities to find a vulnerability. Furthermore, applications sometimes rely on data that they trust because they (wrongly) assume it's not user controlled. This is the case of the Host header, which may be used by the web application and can lead to many dangerous vulnerabilities if not properly validated.

Reducing the attack surface is a security best practice: it leaves attackers fewer targets to probe and reduces the risk of a successful attack.
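
As a sketch of the Host header case above (hostnames are placeholders and port handling is simplified for illustration), a Go middleware can validate the header against an explicit allowlist instead of blindly trusting it:

```go
package main

import "net/http"

// allowedHosts is the explicit allowlist of Host header values we serve.
// (Hypothetical domains; port handling is omitted for brevity.)
var allowedHosts = map[string]bool{
	"www.example.com": true,
	"example.com":     true,
}

// requireKnownHost rejects any request whose Host header is not on the
// allowlist, instead of treating it as trusted, server-controlled data.
func requireKnownHost(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !allowedHosts[r.Host] {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Safe to use r.Host here: the middleware already validated it.
		w.Write([]byte("hello from " + r.Host))
	})
	http.ListenAndServe(":8080", requireKnownHost(mux))
}
```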

Establish secure defaults

This simple principle states that the default configuration of a system or application should enforce some minimal security policies before allowing entities to access the system itself.

The main example is a web application that allows users to register, but only if the minimum password requirements are satisfied (i.e. the password must be at least 8 characters long, with at least 1 uppercase letter, 1 lowercase letter, 1 digit and 1 special character).
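
A minimal Go sketch of such a default policy check (the rules simply mirror the example above) could look like this:

```go
package main

import (
	"errors"
	"fmt"
	"unicode"
)

// validatePassword enforces the default policy described above: at least
// 8 characters, with at least one uppercase letter, one lowercase letter,
// one digit and one special character.
func validatePassword(pw string) error {
	var upper, lower, digit, special bool
	for _, r := range pw {
		switch {
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsLower(r):
			lower = true
		case unicode.IsDigit(r):
			digit = true
		case unicode.IsPunct(r) || unicode.IsSymbol(r):
			special = true
		}
	}
	if len([]rune(pw)) < 8 || !upper || !lower || !digit || !special {
		return errors.New("password does not meet the minimum requirements")
	}
	return nil
}

func main() {
	fmt.Println(validatePassword("short"))       // rejected
	fmt.Println(validatePassword("Str0ng!Pass")) // accepted (<nil>)
}
```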

We can summarize this principle by saying: "It should not require an expert to make the system secure; it should require an expert to make it insecure" (a quote I found on the Internet, from some anonymous engineer).

Principle of Defense in Depth

We know that securing applications is hard, but we can enforce multiple security measures so that breaking the application is hard as well. That's the main goal of the defense in depth principle: even if an attacker is able to bypass one security check, an additional layer of protection can prevent them from succeeding in their attempt to break our system.

For example, let's assume we deployed a WAF (Web Application Firewall) in front of our web application, so that all the traffic coming from users is first checked by the WAF engine. Now suppose the WAF engine fails and the traffic is no longer inspected: if our application implements user input validation and other security measures, we are still able to detect and prevent attacks. If we had relied only on the WAF engine to protect our web application, we would now be vulnerable, and it would be easy for an attacker to do bad things to our application.
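
A sketch of that in-application layer in Go (the endpoint and validation rules are made up for illustration): the handler enforces its own strict allowlist, so malicious input is still rejected even if the WAF upstream fails open:

```go
package main

import (
	"net/http"
	"regexp"
)

// Even with a WAF in front of us, the application validates its own
// inputs: a second, independent layer of defense.
var usernameRe = regexp.MustCompile(`^[a-zA-Z0-9_]{3,32}$`)

func profileHandler(w http.ResponseWriter, r *http.Request) {
	user := r.URL.Query().Get("user")
	if !usernameRe.MatchString(user) {
		// Reject anything outside the strict allowlist, whether or not
		// the WAF upstream inspected the request.
		http.Error(w, "invalid user parameter", http.StatusBadRequest)
		return
	}
	w.Write([]byte("profile of " + user))
}

func main() {
	http.HandleFunc("/profile", profileHandler)
	http.ListenAndServe(":8080", nil)
}
```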

Avoid security through obscurity

Security through obscurity is based on the (wrong) assumption that a system is more secure if, for example, its implementation details are kept secret.

Even though it's reasonable to think that something is harder to break if you don't know how it works, that doesn't mean it's not going to happen. A real-life analogy shows why security through obscurity is not a good approach: your house is not more secure just because you hide the key under your welcome mat; sooner or later someone is going to find that key.

Another example (more related to computer science) is hiding sensitive data in your application by encoding it in a fancy or exotic way (e.g. base64 encoding does not protect your data).
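
To make the point concrete, a secret "hidden" with base64 can be recovered by anyone with a single standard-library call (the key below is obviously made up):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// A secret "hidden" with base64 encoding...
	hidden := base64.StdEncoding.EncodeToString([]byte("sup3r-s3cret-api-key"))
	fmt.Println(hidden) // the encoded form, readable by anyone

	// ...is recovered with one function call: encoding is not encryption.
	plain, _ := base64.StdEncoding.DecodeString(hidden)
	fmt.Println(string(plain)) // sup3r-s3cret-api-key
}
```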

Implementing security through obscurity doesn't add any meaningful layer of security to the system you are trying to protect, and it should be avoided as much as possible.

Fail securely

Error handling is an important programming skill required of any developer. For example, every time a function is called, we need to verify that it runs without errors; if an error occurs, we need to handle it properly in order to avoid exceptions that can cause unexpected program termination or other issues.

An example of insecure failure is a piece of software that needs to perform some data encryption but, when it fails, falls back to the weakest encryption algorithm. Another example is a firewall that fails and simply starts to allow any traffic (fail-open behaviour).
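
In code, failing securely means propagating the error instead of silently degrading. A minimal Go sketch using AES-GCM from the standard library:

```go
package secure

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"io"
)

// encrypt seals data with AES-GCM. If anything goes wrong it fails
// closed: it returns an error instead of falling back to a weaker
// algorithm or to plaintext.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // requires a 16/24/32-byte key
	if err != nil {
		return nil, errors.New("encryption unavailable, refusing to continue")
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, errors.New("encryption unavailable, refusing to continue")
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, errors.New("no secure randomness, refusing to continue")
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```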

Regarding error handling, it's also important to remember that any information produced by an error or exception (for example the stack trace of a failing function call) should not be sent to users. Using this information, an attacker can rapidly gain additional knowledge of the runtime system, which allows them to perform more specific and targeted attacks.

Generally speaking, the application should fail securely and should properly handle errors and exceptions without revealing debugging information.
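
A minimal Go sketch of this behaviour (the failing data-access function is made up for illustration): the details stay in the server logs, while the client only gets a generic message:

```go
package main

import (
	"errors"
	"log"
	"net/http"
)

// loadOrders is a hypothetical data-access function, used here only to
// simulate an internal failure.
func loadOrders(r *http.Request) ([]byte, error) {
	return nil, errors.New("db: connection refused to 10.0.0.5:5432")
}

func ordersHandler(w http.ResponseWriter, r *http.Request) {
	data, err := loadOrders(r)
	if err != nil {
		// The details stay in the server-side logs...
		log.Printf("loadOrders failed: %v", err)
		// ...while the client only sees a generic message, with no stack
		// traces, hostnames or other internals.
		http.Error(w, "internal server error", http.StatusInternalServerError)
		return
	}
	w.Write(data)
}

func main() {
	http.HandleFunc("/orders", ordersHandler)
	http.ListenAndServe(":8080", nil)
}
```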

Separation of duties

This is a very basic principle that states that each entity (person, group, role) should have distinct responsibilities and capabilities. For example, an online shopping web application should not allow a regular user to power off the application (an administrative task), but at the same time the administrator of the website should not be allowed to log in to the web application and order items on behalf of other users.

Another example is a developer working in a development environment who should not be able to read or modify the database behind the same application running in a production environment. That task must be done by a different role/person.
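
A tiny Go sketch of how such a separation could be enforced (the roles and action names are made up for illustration): each action is tied to exactly one role, so no role accumulates both administrative and user capabilities:

```go
package main

import (
	"errors"
	"fmt"
)

type Role string

const (
	RoleUser  Role = "user"
	RoleAdmin Role = "admin"
)

// permissions maps each action to the single role allowed to perform it.
var permissions = map[string]Role{
	"order.place":  RoleUser,  // only regular users place orders
	"app.shutdown": RoleAdmin, // only administrators stop the application
}

func authorize(role Role, action string) error {
	if permissions[action] != role {
		return errors.New("forbidden: action not allowed for this role")
	}
	return nil
}

func main() {
	fmt.Println(authorize(RoleAdmin, "order.place")) // forbidden
	fmt.Println(authorize(RoleUser, "order.place"))  // allowed (<nil>)
}
```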

Rely on known components

It's quite hard to build a robust and reliable security framework.

As an example, let's take a look at Cross-Site Scripting (XSS) vulnerabilities: due to the nature of this security issue, and the fact that almost all web applications accept user input, it's actually really hard to write a library that takes arbitrary user input as an argument and returns a string that is 100% "XSS free" (so that the web application can safely embed that user input within the page rendered by the browser).

So when it comes to protecting against XSS, instead of trying to build your own "XSS sanitizer" (which will likely fail at some point), always use a well-known, tested and maintained library (e.g. DOMPurify for JavaScript or bluemonday for Go).
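
For example, sanitizing user-generated HTML with bluemonday in Go (the malicious input is made up for illustration):

```go
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	// UGCPolicy allows the formatting tags typical of user-generated
	// content while stripping scripts and event handlers.
	p := bluemonday.UGCPolicy()

	dirty := `<a href="#" onclick="alert(1)">hi</a><script>steal()</script>`
	fmt.Println(p.Sanitize(dirty))
	// The sanitized output keeps the harmless markup and drops the
	// script tag and the onclick handler.
}
```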

More generally, always try to adopt third-party components that are considered industry standard and are proven to be the most secure choice.
