The History and Future of Artificial Intelligence in Application Security
2018 was a record year for the amount of data stolen as a result of cyber security breaches. These breaches affected major brands both financially and reputationally. Given the sheer volume of breaches, you’d be forgiven for thinking that being compromised has become the norm rather than the exception.
The vast majority of cyber breaches occur at the application layer. Whilst there are a multitude of reasons why attackers prefer this route, the application layer is increasingly porous and open to attack for the following basic reasons:
- Put simply, developers make mistakes when writing code, regardless of their experience and how security-savvy and security-conscious they may be.
- The demand for software to be delivered fast has resulted in a lack of effective planning, writing and, more importantly, testing (including QA and security testing). Security testing methods can be very time-consuming, a drain on resources and, in most cases, an unrealistic and often insurmountable task. As a result, an overwhelming amount of testing is skipped, and companies are forced to take a view and release relatively secure, but not totally secure, applications in order to meet tight business timeframes.
- New methodologies (cloud, serverless, etc.) are still maturing and constantly evolving. It is increasingly hard to keep up to date with advances in these methodologies, and very few practitioners have the requisite technical expertise and experience to work with them properly. As a result, even when following best practices, mistakes are still made, and often.
In order to overcome these challenges, organisations must leverage a number of tools and methodologies to strengthen their applications as part of their IT Security.
Among the technologies developing to meet these challenges, new AI solutions are emerging as an important step in the evolution of automation, able to take on problems that regular, heuristics-based automation cannot solve.
But why the need for fancy, complicated AI tools to overcome these issues?
In an ideal world, we would be able to mould the perfect developer: one able to predict every possible scenario their software will encounter, including scenarios arising from communicating or integrated software written by others, and to write code that effectively covers and secures against all of them.
This is, unfortunately, a fantasy. The overriding factor is ‘scale’ across development and security: the inability of humans, whether individually or in groups working in tandem, to learn and master the multitude of different technologies and methodologies required to produce 100% secure applications.
Moreover, the ever-growing volume and complexity of the data that needs to be tested make it impossible to keep up using simple automation, which relies on manually coded, heuristic-based solutions.
As a result of this, we now need an “automated way of developing the automation”, which is achieved by Artificial Intelligence (AI).
AI is currently integrated into Application Security through two main, largely passive approaches:
1. The Defensive Approach
Owing to the substantial ineffectiveness of securing applications at the development stage, coupled with disappointment among WAF customers at the lack of automation and coverage of emerging threats, defensive Runtime Application Self-Protection (RASP) technology is commonly used. This approach relies on the assumptions that malicious activity within the application differs from normal activity, that it can be detected, and that threats can be blocked in real time. Whilst this defensive approach is theoretically effective and has been relatively successful, in reality it will never be able to fully protect an application, because:
- RASP cannot protect against all classes of vulnerabilities, as no AI model is 100% accurate; this leads to both false-positives and false-negatives. So while it provides a good deal of protection, it will not make an application as secure as one with security built in from start to finish.
- More complex problems, such as business-logic flow vulnerabilities and exploitations of valid features, go undetected because they ride on healthy requests. The same complexity issue was the downfall of the antivirus industry in the 90s, whose attempts to detect and block viruses by signature failed. RASP shows the same weakness: attack vectors can be randomised and arrive from multiple sources with multiple patterns, going undetected, and the defence mechanism lacks the dynamism to mitigate them effectively.
- RASP is a shield. As in any ‘fight’, being on the back foot and constantly absorbing blows is never advisable. It only takes one misjudged block for a penetration to occur, with crippling consequences!
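To make the pattern-matching limitation above concrete, here is a deliberately minimal sketch of a RASP-style request filter. The signatures, function names and example requests are invented for illustration and are not drawn from any specific RASP product:

```python
import re

# Toy RASP-style filter: block requests whose parameters match known
# attack signatures. These patterns are illustrative, not exhaustive.
ATTACK_SIGNATURES = [
    re.compile(r"(?i)<script\b"),           # reflected XSS attempt
    re.compile(r"(?i)\bunion\s+select\b"),  # SQL injection attempt
    re.compile(r"\.\./"),                   # path traversal attempt
]

def inspect_request(params: dict) -> bool:
    """Return True if the request should be blocked."""
    return any(
        sig.search(value)
        for value in params.values()
        for sig in ATTACK_SIGNATURES
    )

# A signature match is blocked...
assert inspect_request({"q": "1 UNION SELECT password FROM users"})
# ...but a business-logic abuse on a 'healthy' request (here, a
# negative quantity) sails straight through, exactly as described above.
assert not inspect_request({"item": "42", "quantity": "-5"})
```

The second assertion is the crux: no signature list, however long, flags a request that is syntactically indistinguishable from legitimate traffic.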
2. The Semi-Automated Approach
Utilising automation to go over a list of known vulnerabilities will always result in a substantial number of false-positives. Effectively and efficiently sifting through them to identify the real vulnerabilities is a massive task, and there are nowhere near enough human resources to tackle the volume of data produced, let alone to stay ahead of threats.
Currently, AI is used to ‘guess’ which vulnerabilities are real and which are false-positives, greatly reducing the manual time needed to identify the real ones. The most advanced Application Security Testing solutions claim to reduce the number of false-positives by up to 98%, saving valuable time and indeed money. This approach is, however, problematic:
a. Only known vulnerabilities are assessed; the method cannot find new ones.
b. The method relies on substantial manual updating of the vulnerability database, which is precisely the kind of problem AI should be solving.
c. The final filtering and assessment of false positives must still be completed manually, because this type of AI will never fully understand the complexities of many vulnerabilities and classify them correctly. The circa 2% of findings that still require triaging cause substantial delays and increased costs.
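The triage idea can be caricatured as weighted scoring of evidence attached to each scanner finding, with low-scoring findings routed to manual review. The feature names, weights and threshold below are invented purely for illustration:

```python
# Toy false-positive triage: each scanner finding carries boolean
# evidence features; a weighted score decides whether it is surfaced
# as "likely-real" or routed to manual review. Weights are invented.
WEIGHTS = {
    "payload_reflected": 2.0,  # injected payload visible in the response
    "error_signature": 1.5,    # DB error / stack trace in the response
    "behaviour_change": 1.0,   # response differs from the baseline
}
THRESHOLD = 2.0

def triage(finding: dict) -> str:
    """Score a finding's evidence and label it."""
    score = sum(w for feature, w in WEIGHTS.items() if finding.get(feature))
    return "likely-real" if score >= THRESHOLD else "manual-review"

findings = [
    {"id": 1, "payload_reflected": True, "behaviour_change": True},  # 3.0
    {"id": 2, "behaviour_change": True},                             # 1.0
]
labels = {f["id"]: triage(f) for f in findings}
# -> {1: 'likely-real', 2: 'manual-review'}
```

The residual "manual-review" bucket is exactly the circa 2% described above: the scoring never reaches certainty, so a human must finish the job.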
In all of the above limitations in Application Security, the root issues are similar: the ineffective, passive and reactive nature of the approach, and the human-factor bottleneck.
In order to overcome these issues, an all-encompassing solution is required that:
1. Is active in discovering, exploiting and correctly classifying both known and new vulnerabilities.
2. Can validate vulnerabilities found to produce false-positive free results.
3. Negates the requirement for manual updating of the knowledge base by learning on its own.
4. Facilitates secure software development from the ground up.
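Requirement 2 above, producing false-positive-free results, can be sketched as validation by exploitation: a candidate finding is reported only if replaying its payload produces observable impact. The mock target, marker and function names below are invented for illustration:

```python
# Toy validation-by-exploitation: a candidate finding counts as a real
# vulnerability only if replaying its payload demonstrably affects the
# target. The mock target below stands in for a real HTTP call.
def mock_send(payload: str) -> str:
    # An unbalanced quote leaks a database error: observable impact.
    return "error: unterminated string" if "'" in payload else "200 OK"

def validate(candidate: dict) -> bool:
    """Replay the payload; report only confirmed, impactful findings."""
    response = mock_send(candidate["payload"])
    return candidate["impact_marker"] in response

# Confirmed by real impact -> reported; no impact -> silently dropped.
assert validate({"payload": "x'", "impact_marker": "unterminated string"})
assert not validate({"payload": "x", "impact_marker": "unterminated string"})
```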
NeuraLegion’s AIAST™ solution, NexPloit, does exactly that.
Launched in 2018, NexPloit was designed specifically to address the aforementioned Application Security challenges, where other solutions have repeatedly come up short.
As the world’s first AI-based Application Security Testing solution, NexPloit was designed to completely replace a security expert’s critical thinking. It does this by using a proprietary Machine Learning algorithm to understand complex software architectures and APIs, leveraging this knowledge to detect security vulnerabilities both in the logical-flow of the software and in single-point failures, such as in an API itself.
Using state-of-the-art Evolutionary Strategies, NexPloit’s major differentiator from its competitors is its revolutionary ability to generate its own sophisticated and ever-changing malicious scenarios, performing genuinely new attacks rather than simply working through manually updated databases of known vulnerabilities. This enables NexPloit to actively stay ahead of the competition, whether other AST solutions or indeed malicious hackers.
The Reinforcement Learning nature of NexPloit enables it to use the impact of each attack to learn from and adapt to each target. By only considering real impacts as real vulnerabilities, false-positives are completely eliminated, removing the need for security experts to recheck and filter every scan report. In addition, each new vulnerability is automatically stored within its own knowledge base, improving the efficacy of future scans.
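As an illustration only, and not NexPloit’s actual algorithm, the evolutionary idea can be sketched as a loop that mutates payloads against a mocked target and keeps only those mutations that produce observable impact. Every name, the character set and the mock target here are invented:

```python
import random

random.seed(0)  # deterministic for the illustration

CHARSET = "'\";<>()="  # characters a mutation may introduce

def mutate(payload: str) -> str:
    """Apply one random insert, delete, or replace to the payload."""
    i = random.randrange(len(payload)) if payload else 0
    op = random.choice(("insert", "delete", "replace"))
    if op == "insert" or not payload:
        return payload[:i] + random.choice(CHARSET) + payload[i:]
    if op == "delete":
        return payload[:i] + payload[i + 1:]
    return payload[:i] + random.choice(CHARSET) + payload[i + 1:]

def mock_target(payload: str) -> str:
    # Stand-in for the application under test: an unbalanced quote
    # leaks a database error, which counts as observable impact.
    if payload.count("'") % 2 == 1:
        return "500 syntax error near \"'\""
    return "200 OK"

def fitness(response: str) -> int:
    """Score observable impact; only real impact counts as a finding."""
    return 1 if response != "200 OK" else 0

def evolve(seed: str, generations: int = 500) -> str:
    """Keep mutations that increase observable impact; discard the rest."""
    best, best_score = seed, fitness(mock_target(seed))
    for _ in range(generations):
        child = mutate(best)
        score = fitness(mock_target(child))
        if score > best_score:
            best, best_score = child, score
    return best

winner = evolve("hello")
```

Because fitness is defined purely in terms of observed impact, a payload that survives selection is, by construction, a validated finding rather than a pattern match, which is the property the paragraph above describes.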
Unlike the passive approaches, NexPloit’s active ability to constantly learn and improve, at a rate far beyond human capability, makes it completely autonomous and thus more efficient than the current manual, human-enhanced alternatives.
The business case for a solution like NexPloit is apparent. With autonomous, AI-driven solutions, the future of Application Security is an exciting one, stepping up to make the world a safer place, faster.