Justin Boyer

The AWS Shared Responsibility Model: 3 Areas of Improvement to Make Today, Part 1

Migrating your digital assets to the cloud can seem overwhelming at times. But you’re not alone: AWS meets you halfway on security through what it calls the Shared Responsibility Model. Under this model, you and AWS each take responsibility for part of the health and security of your systems. AWS doesn’t expect you to secure everything yourself; on the other hand, you shouldn’t expect AWS to secure everything for you.

Let’s look at what the AWS Shared Responsibility Model is and how it relates to keeping your secrets stored in AWS private.

The AWS Shared Responsibility Model

AWS places its services into three categories:

  • Infrastructure services
  • Container services
  • Abstract services

Knowing which category of services you’re using will help you understand which responsibilities are yours and which belong to AWS.

Infrastructure services include compute services such as EC2 and supporting services like Elastic Block Store (EBS), Auto Scaling, and Amazon Virtual Private Cloud (VPC). Essentially, infrastructure services give you complete control of your compute resources, with little difference from running your own data center. You install your own operating systems and make sure they’re up to date. Your application code runs on the infrastructure AWS provides. In cloud terminology, these are the infrastructure-as-a-service (IaaS) offerings.

In a nutshell, you are responsible for much more when using infrastructure services. Check out Amazon’s diagram of what the customer is responsible for versus what AWS is responsible for (note: all diagrams in this section come from Amazon’s Security Best Practices white paper).

Shared Responsibility Breakdown for Infrastructure Services. Source: AWS Security Best Practices


In this model, AWS is responsible for the security of the infrastructure, which includes physical servers, the virtualization layer, and the physical network. The customer manages everything else necessary to run their workloads.

Container services run on the same infrastructure, such as EC2 instances, but you don’t manage the operating system or platform. When EC2 instances or other resources are required to run your workload, the container service provisions and manages them for you. Examples include the Relational Database Service (RDS) and Elastic Beanstalk. In cloud terms, container services are Amazon’s platform-as-a-service (PaaS) offerings.

The security model for container services shifts more responsibility onto the shoulders of AWS. For example, if you are using Oracle in RDS, AWS is responsible for keeping the Oracle database software up to date, as well as for the security of the underlying operating system and the EC2 instance the database is running on. The customer is responsible for some firewall configuration and for data encryption. This diagram shows how the responsibility is shared:

Shared Responsibility Breakdown for Container Services. Source: AWS Security Best Practices

Finally, we’ll consider abstract services. Abstract services provide a complete service to the customer and require very little work on the customer’s side to secure. These services, which include S3 and DynamoDB, fall under the software-as-a-service (SaaS) umbrella. Misconfiguration can still leave you open to attack, but the customer is responsible for much less overall.

The shared responsibility model for abstract services is represented in the diagram below. AWS is responsible for the infrastructure, the operating system, the platform, and network security. Customers are required to protect their data properly, using IAM and encryption services as needed.

Shared Responsibility Breakdown for Abstract Services. Source: AWS Security Best Practices

Now that you know what the shared responsibility model is, let’s discuss three important pieces of security the customer must manage: keeping your private keys (and other secrets) private, avoiding common mistakes with firewall configuration, and setting up proper logging and monitoring within AWS.

In this post, we’ll cover how to keep your secrets private. We’ll cover the other two in subsequent posts.

Keeping your private keys private

You’ve now accepted a new position: AWS customer. What are your responsibilities? First, we’ll look at what you need to keep secret. Many employers have their employees sign a non-disclosure agreement, barring the employee from sharing secrets about how the company does business. If the employee breaks this contract, they’re likely to be fired.

Similarly, there are key secrets that are required to run your workloads but must never be shared with others. These include:

  • Encryption keys
  • Private keys
  • Sensitive credentials (such as AWS secret keys used for authentication)

Businesses have run into trouble when these secrets are exposed. Verizon and Dow Jones both had major data breaches due to wide-open S3 buckets. In the rush to get to the cloud, businesses have at times misconfigured services and made simple mistakes with huge consequences.

Individuals can make mistakes as well. Some developers have put their AWS secret keys into Git repositories. If exposed on a site such as GitHub, secret keys used to authenticate to AWS services can be compromised within minutes. Bad guys can then use the keys to create EC2 instances for crypto-mining on your company’s dime, if not for other nefarious purposes.

Some of HackerOne’s clients have fallen victim to disclosing private information as well. Our hackers have found cases where metadata servers on EC2 can be used to leak sensitive data, such as passwords, AWS keys, and source code. Check out the SSRF: Private Key Disclosure report and SSRF Vulnerability (EC2 Metadata) report for more details.

Practical Steps To Take for Protecting Your Secrets

We’ve seen that private keys and other secrets can be leaked if customers don’t take the shared responsibility model seriously. If you want to protect your secrets and not become the next headline, here’s what you can do: create a walled garden within AWS.

A walled garden is an environment designed to control a user’s access to resources. It guides navigation and prevents unwanted data from entering or leaving. A walled garden approach to security within AWS gives you the ability to complete the work you need to complete without leaving your secrets in the open. Let’s discuss the steps.

First, you need a build pipeline. All changes to AWS infrastructure should go through a build pipeline; no manual changes should be allowed. Using a build pipeline with tools such as Jenkins or Atlassian Bamboo allows you to make security checks part of the process, so any violations can be discovered and reported, or even blocked before they reach production. A simple check of this kind is sketched below.
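For example, here is a minimal sketch of a secret-scanning step you could run as a pipeline stage. It is purely illustrative: the file extensions and regular expressions are assumptions you would tune for your own repositories, and dedicated tools such as git-secrets or truffleHog do this job more thoroughly.

```python
"""Minimal sketch of a pipeline secret-scanning step (illustrative only)."""
import re
import sys
from pathlib import Path

# Heuristic patterns: AWS access key IDs typically start with "AKIA";
# secret access keys are 40-character base64-like strings. These patterns
# are examples, not an exhaustive secret-detection engine.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"""(?i)aws_secret_access_key\s*[:=]\s*['"]?[A-Za-z0-9/+=]{40}"""),
]

# Assumed set of file types worth scanning; adjust for your project.
SCAN_SUFFIXES = {".py", ".json", ".yml", ".yaml", ".tf", ".env", ".cfg"}


def scan(root: str = ".") -> int:
    """Walk the checkout and count anything that looks like a hard-coded AWS secret."""
    findings = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in SCAN_SUFFIXES:
            text = path.read_text(errors="ignore")
            for pattern in PATTERNS:
                for match in pattern.finditer(text):
                    findings += 1
                    print(f"Possible secret in {path}: {match.group(0)[:12]}...")
    return findings


if __name__ == "__main__":
    # A non-zero exit code is enough for Jenkins or Bamboo to fail the stage.
    sys.exit(1 if scan() else 0)
```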

Second, good IAM policies are essential. IAM is a powerful tool and will help your security immensely when used correctly. Apply the principle of least privilege to make sure no one has more access than necessary. Services can be assigned IAM roles, so use IAM roles to link services together. For example, make sure your S3 buckets are only accessible by EC2 instances with a certain IAM role, as in the sketch below. Your workloads running on EC2 will have access to the data they need, but your S3 buckets won’t be exposed to the Internet or to anyone who happens to hold AWS credentials.
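As an illustration, the following sketch applies a bucket policy that allows object reads only from a specific instance role. The bucket name and role ARN are hypothetical, and the blanket deny statement is aggressive; you would add exceptions for administrators before using anything like this in practice.

```python
"""Minimal sketch: lock an S3 bucket down to a single EC2 instance role (hypothetical names)."""
import json
import boto3

BUCKET = "example-backend-data"                              # assumed bucket name
ROLE_ARN = "arn:aws:iam::123456789012:role/app-server-role"  # assumed instance role

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow the application role to read objects in the bucket.
            "Sid": "AllowAppRoleRead",
            "Effect": "Allow",
            "Principal": {"AWS": ROLE_ARN},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            # Explicitly deny reads from any other principal. Careful: this also
            # blocks administrators unless you add exceptions to the condition.
            "Sid": "DenyEveryoneElse",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringNotEquals": {"aws:PrincipalArn": ROLE_ARN}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```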

EC2 instance metadata can be used to retrieve the temporary credentials for the IAM role attached to an EC2 instance. This is a good practice, but there is a “gotcha”: the instance metadata service rate-limits API calls. If your workloads query it frequently to fetch role credentials, you may hit this limit. One solution is to use code or a script to locally cache the credentials for some period of time after retrieving them from the metadata service, as sketched below. Don’t keep them too long, as they expire; keep them for a few minutes, then refresh them. More details around IAM best practices can be found at this open source resource.
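A minimal caching sketch might look like the following. It uses the IMDSv1 paths for brevity and only works on an EC2 instance with a role attached; in practice the AWS SDKs already fetch and cache role credentials for you, and newer instances should use IMDSv2, so treat this purely as an illustration of the caching idea.

```python
"""Minimal sketch: cache instance-role credentials from the EC2 metadata service."""
import json
import time
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
CACHE_SECONDS = 300  # refresh every few minutes; the credentials expire on their own

_cache = {"creds": None, "fetched_at": 0.0}


def _fetch_from_metadata():
    """Ask the metadata service for the role name, then for that role's credentials."""
    role = urllib.request.urlopen(IMDS, timeout=2).read().decode().strip()
    raw = urllib.request.urlopen(IMDS + role, timeout=2).read().decode()
    return json.loads(raw)  # includes AccessKeyId, SecretAccessKey, Token, Expiration


def get_credentials():
    """Return cached role credentials, hitting the metadata service only occasionally."""
    if _cache["creds"] is None or time.time() - _cache["fetched_at"] > CACHE_SECONDS:
        _cache["creds"] = _fetch_from_metadata()
        _cache["fetched_at"] = time.time()
    return _cache["creds"]
```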

Exposing S3 buckets certainly isn’t a good practice, so protect your S3 data. S3 buckets are private by default, which means someone has to consciously make a bucket public. Don’t do this. Instead, treat your S3 data as a “backend” system and give access only to the services that require it. Use CloudFront to serve content from your S3 bucket instead of exposing the S3 URL publicly. As mentioned previously, give EC2 instances the roles they need, when they need them, to access S3 data. S3 data should never be exposed to anyone or anything without proper authorization. AWS Config rules allow you to find non-compliant S3 buckets more easily; a quick spot check is sketched below.
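Alongside AWS Config (which offers managed rules such as s3-bucket-public-read-prohibited for continuous monitoring), a short script can flag buckets whose ACLs grant access to everyone. This is an illustrative sketch, not a replacement for Config rules or S3 Block Public Access.

```python
"""Minimal sketch: list S3 buckets whose ACLs grant access to "everyone"."""
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    # Collect any permissions granted to the public "AllUsers" group.
    public_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") == ALL_USERS_URI
    ]
    if public_grants:
        print(f"WARNING: bucket {name} grants {public_grants} to everyone")
```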

Protect Your Secrets

You’ve signed a “non-disclosure agreement” with your data and your customers’ data. AWS has some responsibility for securing the cloud infrastructure you use. However, all shared responsibility models place the safety of data firmly in the customer’s hands.

Therefore, it is up to you to keep your secrets safe. Create a walled garden within your environment. Rely on build pipelines, sound IAM policies, and careful handling of S3 resources to make sure your secrets don’t get out.

Stay tuned for part 2, where we’ll cover common mistakes with firewall configuration.

To learn more about how hacker-powered security helps secure your cloud environment with continuous testing from expert hackers, contact us today.

HackerOne is the #1 hacker-powered security platform, helping organizations find and fix critical vulnerabilities before they can be criminally exploited. As the contemporary alternative to traditional penetration testing, our bug bounty program solutions encompass vulnerability assessment, crowdsourced testing and responsible disclosure management. Discover more about our security testing solutions or Contact Us today.
