Cloud Security: Three Common Mistakes and How to Avoid Them
Author: Tzury Bar Yochay, CTO and co-founder of Reblaze
Migrating to the cloud remains a major trend in tech. For most organizations, the advantages of doing so are numerous and compelling. Nevertheless, the decision to migrate carries a few risks. Foremost among them is security.
When the cloud is not used properly, organizations can unwittingly expose sensitive data, open themselves up to breaches, or suffer other unpleasant consequences.
Fortunately, the most common mistakes in cloud security are easy to avoid:
- Failure to properly secure cloud data storage.
- Relying exclusively on the cloud providers’ security products.
- Not using effective cloud-based bot mitigation.
Let’s discuss each of these errors.
Failure to properly secure cloud data storage
This is the most common mistake, at least as evidenced by the number of high-profile security incidents that it has produced. AWS S3 in particular has been a popular method for organizations to accidentally publish sensitive data on the web. Fortunately, this mistake is easy to avoid. Cloud providers have improved their storage offerings and are making their default configurations more private and secure. Plus, storage access control has become more granular and easier to administer.
The best practices for cloud storage security consist of three basic steps. All should be followed whenever possible.
- By default, make all storage inaccessible from the public web. This should be true for everything except data that is clearly meant to be public (for example, cached media and graphics for a marketing website).
- On an individual basis, grant access only to the users who genuinely need it (the principle of least privilege).
- Automate this process.
Different cloud platforms have different terms and somewhat different capabilities for these activities, but here’s an example based on AWS.
- Unless otherwise necessary, all buckets should be set to private. The objects within the buckets will inherit these permissions (or more accurately, the lack of permissions) for users on the public web.
- Individual buckets and/or objects should then be assessed. Where appropriate, some can have less restrictive permissions set. The best way to do this is to allow access on a case-by-case basis, and only for specific resources and/or users.
- Finally, as new storage is created, infrastructure as code (IaC) can be used to drive this process automatically, reducing the chance of human error. Existing buckets can also be scanned, periodically and automatically, to ensure they are not exposing any private data to the public.
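The periodic scan described above can be sketched in a few lines of Python. This is a minimal illustration, assuming boto3 is installed and credentials are configured; the decision logic is kept in a pure helper so it can be reasoned about (and tested) without touching AWS.

```python
# Minimal sketch of an automated S3 bucket audit. The helper below is
# pure logic: given a bucket's PublicAccessBlock configuration (as
# returned by get_public_access_block), decide whether it is locked down.

def is_locked_down(config):
    """Return True only if all four public-access-block flags are enabled."""
    required = (
        "BlockPublicAcls",
        "IgnorePublicAcls",
        "BlockPublicPolicy",
        "RestrictPublicBuckets",
    )
    return all(config.get(flag, False) for flag in required)

def audit_buckets(s3_client):
    """Yield names of buckets that are not fully locked down."""
    for bucket in s3_client.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            resp = s3_client.get_public_access_block(Bucket=name)
            config = resp["PublicAccessBlockConfiguration"]
        except s3_client.exceptions.ClientError:
            # No public-access-block configuration at all: treat as exposed.
            config = {}
        if not is_locked_down(config):
            yield name

# Usage (requires AWS credentials):
#   import boto3
#   for name in audit_buckets(boto3.client("s3")):
#       print("needs attention:", name)
```

Running this on a schedule (for example, from a Lambda function) turns the "scan periodically and automatically" step into a concrete control rather than a manual checklist item.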
Relying exclusively on built-in security products
Most cloud-using organizations are customers of one of the Big Three platforms: AWS, GCP, and Azure.
Each of the Big Three offers built-in security products: AWS WAF and Shield, Google Cloud Armor, and Azure Application Gateway (which includes a WAF). These products are tightly integrated into their respective platforms, are free or low-cost, and are very easy to deploy and use. Enabling them is usually as easy as clicking a few buttons in an admin console. As a result, it’s very tempting to use these products. Nor is it wrong to do so—for most organizations, each of these products can be one part of a good overall cloud strategy.
However, it is a common mistake to rely exclusively on these products for web security. Each product does its job well, but none of them are designed (or intended) to provide comprehensive security for the organizations using them.
First, each product works only on its vendor's platform. This makes them of little use to organizations that run a multi-cloud or hybrid architecture.
Second, these products have limited scope. Although the Big Three's security products have varying capabilities, there is one action that all of them perform well: blocking the attackers they can identify. However, none of them can detect anywhere near one hundred percent of the attackers that need to be blocked. Although they can identify simple attacks such as volumetric DDoS, they cannot detect more sophisticated threats. Instead, they are meant to be supplemented by rulesets that are manually created and maintained by the users, which requires a level of expertise that few organizations have on staff.
The best approach for these products is to use them as an enforcement mechanism. There are third-party dedicated security solutions which are fully integrated with the Big Three and can act as the “security engine” for these products. In this configuration, the third-party solution identifies threats, and the Big Three product enforces these decisions and blocks the hostile traffic. This is the recommended way to gain robust and comprehensive security from these products. Organizations which try to use them as stand-alone solutions are making a serious mistake, because these products cannot provide full security on their own.
Using an inadequate bot mitigation solution
This mistake is not limited to cloud security (it also occurs frequently within organizations that are still using data centers and security appliances). However, this mistake is more regrettable for organizations which have moved to the cloud, because the cloud makes it easy to avoid this error.
Almost forty percent of Internet traffic today consists of hostile bots. [Source: the four billion HTTP/S requests processed daily by the Reblaze web security platform.] These bots wage a wide variety of attacks: DDoS, vulnerability scans (which are soon followed by breaches), credit card fraud, credential stuffing and account hijacking, spam, inventory denial and hoarding, API abuse, and much more.
Clearly, effective bot detection and control is a vital part of web security today. Unfortunately, although most security solutions can identify common bot attacks (such as simple volumetric DDoS), few can detect the most advanced forms of automated traffic in use today. The latest generations of malicious bots are quite sophisticated, and they are designed to evade detection by mimicking human identity and behavior.
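To make the gap concrete, here is a toy sliding-window rate check, the kind of simple heuristic that most products can implement. It catches crude volumetric bots, but a sophisticated bot that paces its requests to mimic human behavior sails straight through it, which is precisely the limitation described above. All thresholds are arbitrary illustrations.

```python
# Toy rate-based bot detection: flag any client that exceeds
# max_requests within a sliding window of window_seconds.
# Advanced bots evade this by simply slowing down.

from collections import defaultdict, deque

class RateFlagger:
    """Flag clients exceeding max_requests within window_seconds."""

    def __init__(self, max_requests=100, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> request timestamps

    def observe(self, client_id, timestamp):
        """Record a request; return True if the client now looks automated."""
        q = self.history[client_id]
        q.append(timestamp)
        # Drop timestamps that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

Real advanced bot mitigation instead relies on signals such as behavioral biometrics and environment fingerprinting, which simple per-client counters cannot provide.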
Only a few security solutions offer advanced bot detection and mitigation that can defeat these bots. The good news is that all of these solutions are cloud-based. If your organization has already moved to the cloud (even if only partially), then it is a simple matter to deploy one of these solutions and take advantage of its capabilities.
Ignoring the need for robust bot detection is, in effect, an acceptance of an inadequate security posture. It will leave holes in your defenses that hostile bots will eventually penetrate. And you probably will not know about it until it’s too late.
The cloud presents many opportunities, but also a few risks. Fortunately, these risks are easy to mitigate. Securing cloud storage, supplementing the Big Three's built-in security products with third-party capabilities, and adding a dedicated bot-detection solution will go a long way toward cloud usage that is safe and secure.