The 2017 Deep Root Analytics incident, which exposed sensitive data on 198 million Americans (nearly every registered voter at the time), should remind us of the risks of storing information in the cloud. Perhaps the most alarming part is that this leak of 1.1 terabytes of personal data was avoidable. It was simple negligence: the data sat in an AWS S3 bucket configured for public access, so anyone who navigated to the right Amazon subdomain could find it and download much of it.

Misconfigured S3 buckets are a common mistake, largely because organizations often overlook the security of IaaS platforms like AWS. That negligence isn't defensible over the long term. Indeed, the Deep Root Analytics leak underscores why organizations need a strategy for properly configuring their AWS assets and avoiding this kind of costly misstep.

The AWS platform itself is well secured thanks to Amazon's extensive investments. Even so, the strongest defenses can be strained by resourceful bad actors; the 2016 DDoS attack on DNS provider Dyn showed how a large-scale attack can knock even well-defended services offline, including some running on AWS. With that in mind, let's set the record straight on the shared responsibility model and clarify what organizations and cloud service providers (CSPs) are each responsible for protecting under this framework.
Understanding the Shared Responsibility Model
Under a shared responsibility model, the vendor and the customer share responsibility for securing the cloud. The vendor, Amazon, is responsible for security "of" the cloud: the underlying infrastructure, including hosting facilities, hardware, and software. Amazon's responsibility covers protecting that infrastructure against intrusion and detecting fraud and abuse. The customer, in turn, is responsible for security "in" the cloud: the organization's own content, its applications that use AWS, identity and access management, and its own network controls such as firewalls.
How to Secure Your Data on the AWS Platform
Now that we understand the shared responsibility model, let's look at what organizations can do to fulfill their responsibility for security "in" the cloud. The best practices below are a starting point; after the list, several of them are paired with short boto3 (AWS SDK for Python) sketches showing what the configuration might look like.
- Enable CloudTrail across all AWS regions and turn on CloudTrail log file validation. CloudTrail records API call history, giving you visibility into changes to your resources. With log file validation turned on, you can detect whether log files were altered after delivery to the S3 bucket (see the first sketch after this list).
- Enable access logging on the CloudTrail S3 buckets. These buckets hold the log data that CloudTrail captures. Access logging lets you track requests to the buckets and spot potential unauthorized access attempts (sketched below).
- Enable flow logging for your Virtual Private Cloud (VPC). Flow logs let you monitor network traffic crossing the VPC and alert you to anomalous activity such as unusually high volumes of data transfer (sketched below).
- Provision access to groups or roles using identity and access management (IAM) policies. Attaching IAM policies to groups or roles instead of individual users minimizes the risk of unintentionally granting a user excessive permissions and privileges, and it makes permission management more efficient (sketched below).
- Restrict access to the CloudTrail log buckets and require multi-factor authentication (MFA) for object deletion. Unrestricted access, even for administrators, increases the risk of unauthorized access if credentials are stolen in a phishing attack. If the AWS account is compromised, MFA Delete makes it harder for attackers to erase the evidence of their actions and conceal their presence (see the bucket-protection sketch below).
- Encrypt log files at rest. Grant decryption permission only to those users who already need access to the log buckets, so reading the CloudTrail logs requires both permissions (covered in the bucket-protection sketch below).
- Regularly rotate IAM access keys. Rotating keys and enforcing a standard password expiration policy help prevent unauthorized access through a lost or stolen key (see the rotation sketch below).
- Restrict access to commonly targeted service ports, such as FTP, MongoDB, MSSQL, and SMTP, to only the entities that require them (sketched below).
- Don’t use access keys with root accounts. A lost or stolen root access key compromises the entire account and opens access to every AWS service. Create role-based accounts instead and avoid using the root user altogether.
- Deactivate unused keys and disable inactive users and accounts. Both unused access keys and dormant accounts enlarge the attack surface and increase the risk of compromise (covered in the rotation sketch below).
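To make these steps concrete, the sketches below use boto3, the AWS SDK for Python. First, a minimal sketch of creating a multi-region trail with log file validation. The trail and bucket names are placeholders, and the bucket must already have a policy that lets CloudTrail write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that covers every region and signs its log files,
# so tampering after delivery can be detected later.
# "org-audit-trail" and "org-cloudtrail-logs" are placeholder names;
# the bucket must already grant CloudTrail write permission.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="org-cloudtrail-logs",
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```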
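Access logging for the CloudTrail bucket is a single put_bucket_logging call. A sketch, assuming a separate target bucket that already grants the S3 log delivery service write access:

```python
import boto3

s3 = boto3.client("s3")

# Write access logs for the CloudTrail bucket into a separate bucket,
# so attempts to read or tamper with the trail are themselves recorded.
s3.put_bucket_logging(
    Bucket="org-cloudtrail-logs",               # placeholder bucket name
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "org-access-logs",  # must permit log delivery
            "TargetPrefix": "cloudtrail-bucket/",
        }
    },
)
```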
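Flow logging is enabled per VPC. A sketch, assuming a CloudWatch Logs group and an IAM role that permits flow logs to publish to it; the VPC ID, group name, and role ARN are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all accepted and rejected traffic for the VPC and send it to
# a CloudWatch Logs group for monitoring and anomaly alerting.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```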
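For group-based provisioning, a sketch that creates a hypothetical "auditors" group, attaches the AWS-managed SecurityAudit policy, and adds a user. The group and user names are placeholders:

```python
import boto3

iam = boto3.client("iam")

# Grant permissions at the group level, then place users in the group.
# Users inherit exactly the group's policies, which keeps grants
# consistent and avoids one-off permissions attached to individuals.
iam.create_group(GroupName="auditors")                  # placeholder group
iam.attach_group_policy(
    GroupName="auditors",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",  # AWS managed policy
)
iam.add_user_to_group(GroupName="auditors", UserName="jsmith")  # placeholder user
```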
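To protect the log bucket itself, one approach is to enable versioning with MFA Delete and set default KMS encryption. A sketch, noting that the MFA Delete call must be made with the bucket owner's root credentials plus a current MFA code, and that the device ARN and key alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Require MFA to permanently delete log objects. This call must be made
# with the bucket owner's (root) credentials; the MFA argument is the
# device ARN followed by a current code (placeholder values shown).
s3.put_bucket_versioning(
    Bucket="org-cloudtrail-logs",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)

# Encrypt new log objects at rest with a KMS key by default, so reading
# the logs requires kms:Decrypt on that key in addition to s3:GetObject.
s3.put_bucket_encryption(
    Bucket="org-cloudtrail-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/cloudtrail-logs",  # placeholder alias
            }
        }]
    },
)
```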
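Key rotation and cleanup of stale keys can be partially automated. A sketch that deactivates access keys older than an assumed 90-day window; a real rotation flow would first issue and confirm a replacement key, and this loop ignores result pagination for brevity:

```python
import datetime
import boto3

iam = boto3.client("iam")

MAX_AGE = datetime.timedelta(days=90)  # assumed rotation window
now = datetime.datetime.now(datetime.timezone.utc)

# Deactivate (rather than delete) any active access key older than the
# rotation window, so the change is reversible if something breaks.
for user in iam.list_users()["Users"]:
    keys = iam.list_access_keys(UserName=user["UserName"])
    for key in keys["AccessKeyMetadata"]:
        if key["Status"] == "Active" and now - key["CreateDate"] > MAX_AGE:
            iam.update_access_key(
                UserName=user["UserName"],
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )
```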
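Finally, port restrictions are expressed as security group rules. A sketch that allows MongoDB's default port only from an internal subnet instead of the whole internet; the group ID and CIDR are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MongoDB (27017) only from an internal application subnet rather
# than 0.0.0.0/0. The same pattern applies to FTP (21), SMTP (25),
# MSSQL (1433), and other commonly targeted service ports.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 27017,
        "ToPort": 27017,
        "IpRanges": [{"CidrIp": "10.0.1.0/24",
                      "Description": "app subnet only"}],
    }],
)
```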
If you’re using custom applications in AWS, you also need to follow best practices for custom application security. Don’t leave any loopholes for bad actors to exploit or for your IT team to overlook. Organizations don't need to make mistakes when it comes to securing their AWS assets. Moreover, in the wake of GDPR and other data protection regulations, no organization can afford the consequences of neglecting its security policies and practices.

Editor's note: Tripwire has announced that it has joined the global partner program for Amazon Web Services (AWS). As a new Advanced Technology Partner of the AWS Partner Network (APN), Tripwire has made its vulnerability management solution, Tripwire® IP360™, available on the AWS Marketplace. Learn more here.