For a long time, software development worldwide has largely rested on the principle of 'as long as it works', and most applications (particularly web applications) built over the last two decades have neglected an aspect that is fundamental today: security.
While it is true that we now have at our disposal a wealth of libraries, frameworks and ecosystems that communicate with each other efficiently and quickly, this has also enlarged the so-called 'attack surface' exposed to potential malicious actors.
Although not the most widespread vector, attacks through web vulnerabilities remain one of the methods attackers exploit to get inside an infrastructure or to steal the data on which the application itself relies.
Outdated third-party libraries or frameworks, code developed without a 'security by design' mindset, poorly designed application architectures: these are just some of the factors that can make software products particularly vulnerable.
According to the latest OWASP Top 10 report, the top entries among the most exploited categories of software vulnerabilities are as follows:
- Broken Access Control: vulnerabilities arising from mismanaged permissions, at either the user level or the resource-access level.
- Cryptographic Failures: vulnerabilities involving weak, misapplied or only partially applied encryption.
- Injection: vulnerabilities that allow an attacker, by injecting code (e.g. SQL), to extract or modify data that should not normally be exposed or modifiable.
- Insecure Design: vulnerabilities rooted in a fundamentally insecure application design, which are difficult to correct short of a complete rewrite.
- Security Misconfiguration: vulnerabilities caused by incorrect configurations, unnecessary services/accounts left active, or insecure framework settings.
- Vulnerable and Outdated Components: vulnerabilities caused by outdated components at any level of the application (frontend, backend, database, etc.).
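To make the Injection entry concrete, here is a minimal Python sketch (using the standard-library sqlite3 module; the table, columns and payload are purely illustrative) contrasting a query built by string concatenation with a parameterized one:

```python
import sqlite3

# Illustrative in-memory database with a hypothetical "users" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"  # classic SQL injection payload

# VULNERABLE: user input concatenated straight into the query string,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(vulnerable)  # every user in the table leaks

# SAFE: parameterized query; the driver treats the input strictly as
# data, so it is compared literally and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)
```

The same principle applies regardless of database or language: never assemble queries from untrusted strings; always pass user input through the driver's placeholder mechanism.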
How to defend yourself?
From the point of view of those who develop software, it is fundamentally important that a new application is conceived from the start with an architecture that reflects the principle of security by design.
If we do not design the application to be secure from the outset, it is unlikely that, once it is finished and released to production, we will be able to make it equally robust through subsequent fixes.
Architecture aside, an effective way to manage security within the development cycle is to ensure that the process itself includes mandatory security validation steps.
Monitoring or assessment?
Just as build pipelines very often include steps that check the functional integrity of the code (e.g. unit tests or integration tests), there should similarly be equally automatic steps that check its soundness in terms of security.
These automated checks can, for instance, verify whether the third-party libraries in use are obsolete (flagging those that are vulnerable and need updating or replacement), or that the code as written does not permit the most common types of attack, such as Cross-Site Scripting (XSS) or SQL Injection.
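As an illustration of the class of defect such automated checks look for, here is a minimal Python sketch (the variable names and payload are purely illustrative) showing why raw user input must never be interpolated into HTML, and how escaping neutralizes it:

```python
import html

# Hypothetical comment submitted by a user, carrying an XSS payload.
user_comment = '<script>alert("xss")</script>'

# VULNERABLE: interpolating raw input into an HTML page lets the
# browser execute the injected <script> tag in the victim's session.
unsafe_page = f"<p>{user_comment}</p>"

# SAFE: escaping converts the markup characters into inert HTML
# entities, so the payload is displayed as text, not executed.
safe_page = f"<p>{html.escape(user_comment)}</p>"
print(safe_page)
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Static analysis (SAST) tools flag exactly this kind of unescaped interpolation; in practice a templating engine with auto-escaping enabled covers it by default.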
To borrow a term that is gaining currency, this is a shift from the SDLC (Software Development Life Cycle) paradigm to the SSDLC (Secure Software Development Life Cycle) paradigm.
The idea, then, is not merely to carry out occasional software assessments, but to constantly monitor the security of the code at every stage prior to its release to production, preferably in an automated manner.
Software security
It is also useful to approach software security from a more detached, agnostic point of view, so as to understand what a potential attacker might find and which weak points in the code may have been missed during development.
The objective is always to compensate for the complexity of an application (or an ecosystem of applications) with techniques and procedures that cover every security aspect, and to activate 24/7 monitoring and response systems that can nip any malicious attempt in the bud.
In addition to these processes, attention must be kept high by periodically carrying out Penetration Tests (PT) and Vulnerability Assessments (VA) on the software.
If these tests reveal vulnerabilities, fixes must naturally be implemented with a priority proportional to the severity of the problem detected.
This step is particularly important if the infrastructure has suffered an attack, in order to consolidate all potentially fragile points before the infrastructure is completely restored.
VA/PT activities may also, indeed especially, be required when the software is produced by one of your suppliers.
After all, this too is an example of a supply chain, an aspect that regulations such as NIS2 rightly insist must be adequately protected.
In conclusion…
Developing software is often a complex activity, and it can no longer be divorced from careful attention to the security of the code produced.
In summary, we can point to these as the three pillars of a good software project:
- Before writing code for a new product, design an architecture that is solid and secure by design.
- Include mandatory validation steps in the development cycle that cover the fundamental aspects of security.
- Periodically check the security of the software with VA/PT activities.
Analysis by Paolo Leoni – Incident Response Specialist, CYBEROO