Author: Nir Gaist, Co-Founder and CTO, Nyotron
As we head into the New Year, let’s bid “good riddance” to 2018, which will go down as one of the worst years in cybersecurity history. Dark Reading’s Jai Vijayan reports that while the number of breaches between Jan. 1 and Sept. 30 was down 8% compared with the same period last year, those numbers still translate to 3,676 breaches and 3.6 billion records compromised. Why are we in this situation? Beyond cybercrime being a lucrative business for organized crime (with very little downside), and in some cases government-sponsored, we can attribute malware’s asymmetric advantage over defenders to several factors, including the sheer volume of attacks bombarding organizations and the ineffectiveness of their traditional endpoint security solutions.
AV-Test.org registered 118 million new malware samples in 2017. Even at a 99.9% detection rate, 118,000 threats would still go undetected, and that is just for file-based malware. It is relatively easy to create new malware, and even more straightforward to modify or obfuscate existing malicious code in order to evade antivirus detection.
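The arithmetic behind that residual risk is worth spelling out. A quick back-of-the-envelope sketch, using only the AV-Test.org figure quoted above:

```python
# Back-of-the-envelope math on the AV-Test.org 2017 figure.
new_samples = 118_000_000   # new malware samples registered in 2017
detection_rate = 0.999      # an optimistic 99.9% catch rate

missed = new_samples * (1 - detection_rate)
print(round(missed))  # 118000 threats still slip through
```

Even a detection rate that sounds near-perfect leaves a six-figure count of undetected threats at this volume.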
With fileless malware, there is nothing to obfuscate or scan. Attackers can easily create it by leveraging legitimate administrative tools, such as PowerShell, WMI, WScript, CScript, etc.
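Because there is no file to scan, defenders are left with heuristics over how those legitimate tools are invoked. The toy sketch below is purely illustrative (the patterns and the sample command lines are assumptions, not any vendor’s detection logic); it flags command lines that resemble common living-off-the-land invocations:

```python
import re

# Toy heuristic, for illustration only: flag command lines that invoke
# script hosts with patterns commonly abused in fileless attacks.
SUSPICIOUS = re.compile(
    r"(powershell.*-enc\w*\s|wscript|cscript|wmic\s+process\s+call)",
    re.IGNORECASE,
)

def looks_fileless(cmdline: str) -> bool:
    """Return True if the command line matches a suspicious pattern."""
    return bool(SUSPICIOUS.search(cmdline))

# The encoded-command payload below is a made-up placeholder.
print(looks_fileless("powershell.exe -EncodedCommand SQBFAFgA"))  # True
print(looks_fileless("notepad.exe report.txt"))                   # False
```

The obvious weakness is the same one the article describes: administrators use these tools legitimately, so pattern-matching on invocations produces both misses and false alarms.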
The majority of endpoint security products approach the malware challenge by attempting to “enumerate badness” (i.e., applying the Negative Security model).
But building and maintaining an infinite list of badness is impossible. Human ingenuity is limitless, and bad guys will always find a never-before-seen way of getting in. So it’s no surprise that IT security professionals realize their traditional defenses are growing less effective. A recent Ponemon Institute survey found that 70 percent of IT security professionals are very concerned about new and unknown threats, yet only 29 percent believe their AV products provide all the protection they need.
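To see why “enumerate badness” is so brittle, consider a minimal sketch of signature matching by file hash (the database contents are placeholders, not real signatures): any attacker who changes a single byte produces a never-before-seen hash.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest used as a file 'signature'."""
    return hashlib.sha256(data).hexdigest()

# Toy "enumerate badness" database: hashes of known-bad files.
# (The sample bytes are placeholders, not real malware.)
known_bad = {sha256(b"known-bad sample")}

def is_known_bad(payload: bytes) -> bool:
    """Negative Security: flag only what matches the badness list."""
    return sha256(payload) in known_bad

exact_copy = b"known-bad sample"
variant = exact_copy + b"\x00"   # a trivial one-byte modification

print(is_known_bad(exact_copy))  # True:  the catalogued sample is caught
print(is_known_bad(variant))     # False: the unseen variant walks right in
```

The defender must enumerate every variant in advance; the attacker only has to produce one that isn’t on the list.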
Understanding the current state of endpoint security requires studying the three key eras in the evolution of traditional antivirus solutions.
1. Let’s Add More Gates
The first iteration of AV was a signature-based AV engine. Today, the typical solution contains between five and ten different technologies that work as gates: AV, Host Firewall, Application and Device Control, Heuristics/Behavior Monitoring, Host Intrusion Prevention, Memory Exploit Mitigation, Reputation Analysis, and Emulation/Sandboxing. These “gates” have significantly increased product efficacy compared with an AV engine alone.
Trouble is, the more technologies you add, the more heavyweight your agent becomes, and the more your users complain about the performance impact on their systems. This “agent bloat” has become a persistent problem as endpoint security products with legacy architectures have stuffed more countermeasures into their agents.
More importantly, if attackers manage to bypass all of these gates (and they will), they have a free pass to the system, since traditional AV provides only transient security, as opposed to persistent security that never stops looking for threats.
2. Endpoint Detection and Response—Let’s Go Hunting!
The “we are all doomed” attitude from a few years ago resulted in the rise of Endpoint Detection and Response (EDR) products. The objective is to track every single event on all endpoints and continuously hunt for threats that have slipped through our defenses. EDR strives to provide security practitioners with better visibility into malicious attacks that have evaded endpoint-blocking measures and spread through the network.
An obvious drawback to this approach is that by the time EDR finds anything, you are already infected and the malware has likely done its damage. Perhaps the most important downside is that threat hunting requires more security staff, and new hunters are hard to find. According to the (ISC)² 2018 Cybersecurity Workforce Study, the global shortage of cybersecurity experts stands at 2.93 million and rising.
3. Next-Generation Antivirus—Machines Will Save Us All
Now, next-generation antivirus (NGAV) has become security’s shiny new object. Security tools powered by machine learning, deep learning and artificial intelligence are supposed to save us. Since approximately 2014, early adopters and parts of the early majority have been using these technologies. Yet the most advanced attacks are still able to slip past NGAV.
Vendors train AI-powered security tools on known malware samples, so they are not fully effective against truly new, unknown malware. Just look at the NSS Labs Advanced Endpoint Protection results: the full per-vendor test reports show efficacy against “unknown threats” under 50% for several of them.
Because AI tools focus on static file analysis, they are not necessarily effective against fileless attacks, which are responsible for a significant percentage of modern-day attacks. Moreover, AI tools tend to produce significant false positives; research has found rates above 20% in some tests. And don’t forget that the bad guys are now using AI to beat AI-powered tools. After all, they have access to the same tools and technologies as the good guys.
“Put simply, we have reached the point of AI fatigue.”
2019: The Year of Positive Security
In 1987, Fred Cohen’s study of computer viruses showed that no algorithm can perfectly detect all possible viruses. The industry is therefore searching for a better approach. The option that holds the most promise is the Positive Security model, which blocks by default everything that isn’t known to be “good” (e.g., unknown files or behaviors).
For example, OS-Centric Positive Security protects endpoints from fileless and unknown threats by focusing on the damage stage (i.e., the intentions or outcomes of an attack) and on OS system calls rather than applications, user behavior or reputation. This reduces the management overhead that plagued prior-generation Positive Security approaches such as Application Control/Whitelisting.
While threats are infinite, good OS behavior is finite and static, so it is possible to map the good behavior of the OS. The key is to remain threat-, application- and user-behavior-agnostic, so the system does not need to “know” anything about a particular application or threat. Whatever the application, it uses the operating system, and the way the OS reacts to that application is always the same.
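The inversion of the model can be sketched in a few lines. This is a deliberately simplified illustration of the allowlisting idea, not Nyotron’s implementation; the process types and call names are invented for the example:

```python
# Toy Positive Security check: a finite map of known-good OS-level
# behavior per process type. (Process types and call names are
# illustrative, not a real product's behavior map.)
ALLOWED_CALLS = {
    "word_processor": {"open", "read", "write", "close"},
}

def check_behavior(process_type, observed_calls):
    """Return the observed calls that fall outside the known-good set."""
    allowed = ALLOWED_CALLS.get(process_type, set())
    return [call for call in observed_calls if call not in allowed]

# A document editor that suddenly writes to the raw disk deviates from
# its finite good-behavior map, no matter how the attack got in.
print(check_behavior("word_processor", ["open", "read", "raw_disk_write"]))
# ['raw_disk_write']
```

Note that nothing here depends on recognizing the threat itself: the deviation is flagged whether it came from a known file, an unknown file or a fileless script, which is exactly the threat-agnostic property the model claims.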
Negative and Positive Security—Better Together
Endpoint protection solutions that include static file analysis (whether based on AV definitions or machine learning models), and hence rest on the Negative Security model, are, and will continue to be, an important layer in an organization’s defense-in-depth strategy. In some industries, compliance requirements drive their usage as well. The prudent course of action, therefore, is for organizations to improve their defenses by pairing traditional AV (or NGAV) with a Positive Security model-based solution.
With endpoint security products remaining at the tip of the spear of cyber defenses for years to come, the question is how to ensure the best possible security posture. No matter which way an endpoint security buyer turns, there is no single magic bullet; a layered approach, with multiple technologies working together, is most likely required.
Nir Gaist, Founder and CTO of Nyotron, is a recognized information security expert and ethical hacker. Nir has worked with some of the largest Israeli organizations, wrote the cybersecurity curriculum for the Israel Ministry of Education, and holds patents for Behavior Pattern Mapping (BPM), a programming language that monitors the integrity of operating system behavior to deliver threat-agnostic protection.