AWS Practitioner
- AWS Free tier tracking and Billing widget
- Billing Dashboard
- Shows spend on AWS services
- Shows Free Tier usage and the quota consumed for each tool
- Optimizing the Billing
- Preferences
- Check the box to receive Free Tier usage alerts.
- You will receive Free Tier quota reports by email
- You can customize the email address used for alerts
- AWS Global Infrastructure and Compliance
- Regions
- Largest Organizational unit in AWS.
- Houses 2 or more independent data centers
- Data stored in a particular region is not duplicated to another region by default.
- Our organization may have a compliance regulation requiring AWS data to be stored in a particular region of the world.
- We can use regions to cut down network latency
- Regions should be as close to the end user as possible.
- Regions in AWS diagram are represented with a dotted line
- There were 16 regions as of Oct 2017, with more announced.
- The regions on the AWS dashboard are shown next to your name; a region can be selected from the drop down and used to deploy assets.
- Certain regions, like China and the US GovCloud, may not show in the list as they require special permissions to be used.
- Select your region from the drop down, go to EC2, and launch an instance; it will show in that particular region.
- Next select a different region, let us say India, and launch an instance there
- Each instance you launch takes a part from your EC2 service quota.
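As a rough CLI sketch of the same idea (the AMI ID and key pair name below are placeholders, not values from these notes), a region can be passed explicitly with every call:
# Launch a t2.micro in the Mumbai (ap-south-1) region.
aws ec2 run-instances \
    --region ap-south-1 \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-key
# List the instances visible in that region only.
aws ec2 describe-instances --region ap-south-1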
- Availability Zones
- These are independent data centers in the regions.
- They are interconnected with high-speed links, on the order of 10 to 100 gigabit.
- There are at least 2 availability zones in a region and that is for the fault isolation redundancy purposes
- If one availability zone goes down there is a way to architect the application so it will not be affected.
- This is managed by application and not AWS.
- In EC2 dashboard we can see the region as well as the availability zones.
- Each availability zone is named after a region and then a letter in sequential order for example
- us-east-1a
- us-east-1b
- us-east-1c
- In the above example "us-east-1" is the region and a, b, c are the availability zones.
- Our application should be architected such that if 1a is down, it survives on 1b.
- End Points
- These are used by end users and customers to connect to AWS environments.
- Users can use AWS services using endpoints.
- Users can connect using the following methods
- AWS Console (web console launched via a browser)
- AWS CLI (Command Line Interface, connects through a terminal program)
- AWS APIs (Application Programming Interfaces)
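For example, the CLI method can be sketched as follows (the stored access keys are whatever you generated in IAM):
# One-time setup: store the access key, secret key, and default region.
aws configure
# Subsequent calls are sent to the relevant service endpoint.
aws s3 ls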
- Customers connect to AWS-hosted content using Content Delivery Network (CDN) endpoints.
- The CDN in AWS is CloudFront, and edge locations serve as those endpoints.
- This allows us to cache web resources near the customer so they load faster with less latency
- In AWS certain services inside regions also have endpoints
- S3- for storage
- DynamoDB - can be accessed via web URLs
- There are services inside availability zones that use endpoints
- EC2(public subnet)
- When we use EC2 as a web server, it has a public IP address and accepts direct web traffic.
- Elastic load balancer
- Has DNS endpoints which have public accessibility.
- VPC Endpoints
- Previously, if an EC2 instance needed to get objects from an S3 bucket or use a DynamoDB table, it went via the public endpoints of those services.
- Traffic from the EC2 instance went out to the internet and then came back to the endpoints on the services.
- This made traffic unnecessarily public
- AWS now uses VPC endpoints to address the issue, so that traffic in such cases does not go public.
- Allows for a private connection to AWS services without going through the internet.
- Traffic does not leave the VPC network.
- VPC endpoints are virtual devices that are scalable, redundant, and highly available.
- 2 types of VPC endpoint
- Interface (using the AWS PrivateLink service)
- An Elastic Network Interface(ENI) with a private address serves as the endpoint.
- An ENI is like an elastic network card, with the difference that we can move the entire card wherever we want it.
- The ENI is placed in our VPC so that we can send traffic to it.
- Interface endpoints work with the following services
- Kinesis streams
- Elastic load balancing
- EC2 API
- EC2 Systems Manager(Centralized Management for Instances)
- Service Catalog (central management of IT services in your organization)
- Gateway
- A target for a route table in your environment.
- Supported Services
- DynamoDB
- S3
- To setup VPC
- On the AWS Dashboard go to VPC
- Go to Endpoints
- Create an endpoint for your EC2 Elastic Load Balancing service.
- Then we pick what subnets we want
- Enable DNS
- Create the Endpoint
- Next enable DNS resolution and DNS hostnames for our VPC.
- Next wait for the endpoints to be available
- Once available, we have an Elastic Load Balancer that is using an endpoint inside our VPC.
- Below in details we also get the various DNS names for subnets that we can use for it.
- Each availability zone has a network interface.
- Similarly we can create a gateway endpoint for services like S3 & DynamoDB, much like we create a route for a NAT gateway or internet gateway.
- Let's select S3 service now for endpoint.
- Select Subnets
- Give full access and create endpoint
- Since it does not have to propagate a DNS name, it is up and running instantly.
- Now in your route tables you will have a local route, a NAT gateway route, and a route which points to the VPC endpoint
- We can use this private link to send information directly to S3.
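A minimal CLI sketch of the gateway endpoint step (the VPC and route table IDs are placeholders):
# Create a gateway endpoint for S3 and attach it to a route table.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-xxxxxxxx \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-xxxxxxxx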
- VPC endpoints only work in some regions
- Support only IPv4
- An interface endpoint cannot be accessed through a VPN or VPC peering connection
- Only via Direct Connect from a hybrid environment.
- Identity and Access Management
- Helps us to grant users access and create policies for different types of access
- Has a global scope across all of AWS
- Allows for granularity at a large scale
- We can grant a user EC2 access in all regions, one region, multiple regions, or even down to an availability zone.
- Entities that we can control via IAM and centrally managed are
- Users
- Passwords
- Access Keys
- Permissions
- Groups
- Roles
- Compliance
- Security practices are compliant with whatever security standards are required.
- Documents that AWS meets regulatory, audit and security standards.
- HIPAA - healthcare
- ISO Standards- Quality
- Various regulatory and security agencies around the world
- Only applies to services and infrastructure that AWS is responsible for.
- Shared Responsibility Model
- Describes what AWS is responsible for and what you, the user or customer, are responsible for when it comes to security
- Has 3 different components
- Infrastructure Services
- Includes services like VPC, EC2, EBS and Auto Scaling
- Amazon is responsible for the "security of the cloud"
- The global infrastructure (Regions, AZs, Edge Locations)
- The foundation services (Compute, Storage, Database, Networking)
- Hypervisors
- Backend Network Traffic
- The hardware that it all runs on.
- The user/customer is responsible for "security in the cloud"
- Customer Data
- Configure and Deploy
- Platforms and Applications
- OS and Network Configurations (patching, security groups, network access control)
- Customer IAM(passwords,access keys,permissions)
- Additional Concerns
- Data Encryption
- Data integrity
- Container Services
- Services like RDS,EMR,ECS
- Middleware between what your architecture is managing and what AWS is managing.
- AWS is responsible for
- Platforms and Applications
- patching of RDS instances
- Deployment of EMR
- OS and Network Configurations
- The Global Infrastructure(Region,AZ,Edge Location)
- The Foundation Services (Compute, Storage, Database and Networking)
- Customer is responsible for
- Customer Data
- Customer IAM
- Data Encryption
- Data Integrity
- Abstracted Services
- Includes AWS services like DynamoDB, S3 and Lambda
- AWS is responsible for
- Network Traffic protection
- Platforms and Applications
- OS and network Configurations
- The Global Infrastructure
- The Foundation Services
- The User is responsible for
- Customer IAM
- Data in transit and Client side
- Data Encryption
- Data Integrity
- Trusted Advisor
- Allows an AWS customer to get reports on their environment.
- Cost Optimization
- Performance
- Security
- Fault Tolerance
- Available to all customers
- There are 7 core checks
- General users have access to 6 core checks
- Security (security groups, IAM use, MFA on root account, EBS and RDS public snapshots)
- Performance (service limits)
- Business and Enterprise customers have access to all checks and recommendations
- Access to full set of checks
- All four categories above
- Notification
- Weekly Updates
- Programmatic access
- Retrieve results from the AWS Support API and use them with customized reporting tools.
- From the console go to Trusted Advisor.
- It tells us the status of the various parameters in our environment mentioned before.
- We can check the limits on the services we are allowed to have
- for example, we can run 20 instances per region by default
- Number of active snapshots allowed for EBS
- The security status in IAM is based on Trusted Advisor checks.
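The Support API access mentioned above can be sketched from the CLI; this requires a Business or Enterprise support plan, and the check ID below is a placeholder:
# List all Trusted Advisor checks (the Support API lives in us-east-1).
aws support describe-trusted-advisor-checks --language en --region us-east-1
# Fetch the result of one check by its ID.
aws support describe-trusted-advisor-check-result --check-id XXXXXXXX --region us-east-1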
- Root User
- This is created when the AWS account is created; its credentials are the email and password used when signing up for the AWS account.
- The root user has full administrative rights and access to every part of the account.
- The root user should not be used for daily work and administration.
- The root user account should not have access keys; delete them if they exist
- The root user should always use Multi-Factor Authentication (MFA)
- Users and Groups
- All the users belong to IAM
- A new user has an implicit deny for all AWS services. A policy needs to be added to grant them access to something.
- Users receive unique credentials (username, password, and possibly access keys)
- Users can have IAM policies applied directly to them, or they can be members of a group that has policies attached.
- With policies, an explicit deny always overrides an explicit allow from attached policies.
- A single "deny all" policy effectively overrides all other policies attached to the user.
- We should never store or pass our access credentials to an EC2 instance
- We should use SSH agent forwarding instead.
- MFA can and should be used for user accounts
- Access credentials are unique and should never be shared.
- In IAM dashboard go to users and click add user
- add username
- Select management console access and/or programmatic access for the user.
- With programmatic access the user is given access keys so that they can use the CLI, APIs, and SDKs.
- Select a password, which can be auto-generated or custom.
- The password depends on the password policy set by the root user
- Next provide the user with permissions, which can be granted by
- Attach existing policies directly
- Copying permissions from an existing user.
- Adding the user to a group.
- Policies in AWS are JSON documents
- Once a user is created with programmatic access, we will see a screen with their Access Key ID and Secret Access Key.
- This is the only time we can see these 2 keys; they cannot be retrieved later.
- We should download them at that moment; the option is available on screen.
- We should keep the file in a safe place.
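A hedged CLI sketch of the same flow (the user name and password are illustrative):
# Create the user and give it console access with a custom password.
aws iam create-user --user-name alice
aws iam create-login-profile --user-name alice --password 'S0me-Strong-Pass!'
# Programmatic access: generate the access key pair.
# The secret key appears only once in the output, so save it immediately.
aws iam create-access-key --user-name alice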
- Groups allow for a policy assignment to multiple users at the same time.
- It is a more organized and efficient way to manage users and policies.
- We can assign users to a group and a policy can be applied to all of them at once.
- Users can be organized by function (i.e. DB admins, developers, architects), etc.
- Assign policies to the group, not the individual users.
- In IAM dashboard go to groups
- create a group and give it a name
- Attach policy to it
- Add user to the group.
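The same steps from the CLI (the group name and user are illustrative; the policy ARN is an AWS managed policy):
# Create a group, attach a managed policy, and add a user to it.
aws iam create-group --group-name developers
aws iam attach-group-policy \
    --group-name developers \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam add-user-to-group --group-name developers --user-name alice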
- Roles
- Temporary security credentials in AWS, managed by the Security Token Service (STS)
- Another entity can "assume" the specific permissions defined by the role.
- These entities include
- AWS Resources(such as an EC2 instance)
- A User outside of our AWS account who needs temporary access.
- Roles are needed because policies cannot be directly attached to AWS services.
- Services can only have 1 role attached to them at a time.
- We should never pass or store credentials in or to an EC2 instance so roles are used instead.
- If an EC2 instance needs to be able to read data from an S3 bucket, it requires a read-access role
- Instance assumes a role with S3 read-only permissions from IAM
- Instance can then read objects from the bucket specified in the role.
- We can change the role on a running EC2 instance through the console and CLI.
- Roles can also be given to entities external to our AWS account, in the following ways.
- Cross account access(Delegation)
- Provides access to an AWS user from another account.
- Developer users have delegated access to Stage and Production.
- Identity Federation
- Users outside AWS can assume a "role" for temporary access to AWS accounts and resources.
- These users assume an "Identity Provider Access" role.
- Example of Identity Providers are
- Active Directory
- Single sign-on providers like Facebook, Google, Amazon, etc.
- Click on roles and click on create role
- Create a role for EC2 by selecting EC2 under AWS service
- Select AmazonS3ReadOnlyAccess from the policies.
- Give a name and description to role
- Click Finish to create a role
- On the role page we can provide authentication and authorization to another AWS account, a web identity provider, or users via SAML 2.0
- Now we can assign this role to an EC2 instance, which it will use to talk to S3.
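A rough CLI equivalent of the role setup (the trust policy file and role name are illustrative):
# trust.json must allow ec2.amazonaws.com to assume the role.
aws iam create-role --role-name s3-reader \
    --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name s3-reader \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# EC2 picks the role up through an instance profile.
aws iam create-instance-profile --instance-profile-name s3-reader
aws iam add-role-to-instance-profile \
    --instance-profile-name s3-reader --role-name s3-reader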
- Identity Federation
- Providing a non-AWS user temporary AWS access by linking that user's identity across multiple identity systems.
- Federation with third party providers
- Most Commonly used in web and mobile applications
- AWS Cognito allows for
- Creation of unique identities for users
- Use identity providers to federate them
- Example providers
- Facebook, Google, Amazon, etc.
- Establishing SSO using SAML(Security Assertion Markup Language) 2.0
- Most commonly used in enterprise environments with an existing directory system like Active Directory etc.
- Federated Users can access AWS resources using their corporate domain accounts.
- Federation also aids user management by allowing central management of accounts.
- Establishing SSO without SAML
- AWS directory service for Microsoft Active directory
- Allows for a Windows trust relationship to be built between an on-premises Microsoft AD and your AWS Microsoft AD in the cloud.
- Security Token Service (STS)
- A service in AWS that allows for management of temporary security credentials.
- It allows for granular control of how long access remains active.
- Fifteen minutes to 1 hour (default is 1 hour)
- A duration parameter is sent along with the API call to determine the amount of time.
- Credentials are not stored with the user or service that is granted temporary access.
- A token is attached to the access request
- Risk is low as credentials are not exposed
- Do not have to create an IAM identity for every user.
- Because they are temporary in nature, there is no need to rotate keys.
- STS uses a single endpoint
- sts.amazonaws.com
- resides in us-east-1(N.Virginia)
- Latency can be reduced by making STS API calls to regions that support them.
- Temporary Credentials have global scope just like IAM.
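An assume-role call can be sketched as follows (the account ID and role name are placeholders):
# Request temporary credentials for one hour (3600 seconds).
# The response contains an AccessKeyId, SecretAccessKey, and SessionToken.
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/s3-reader \
    --role-session-name demo-session \
    --duration-seconds 3600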
- Policies
- Policies are JSON-formatted documents that state one or more permissions
- An explicit deny overrides an explicit allow
- Allows for the use of a deny all policy to quickly restrict ALL access that a user may have.
- IAM provides pre-built policy templates to assign to users and groups, examples include
- Administrator access
- full access to all AWS resources
- Power User access
- Admin access, except it does not allow user/group management
- Read Only Access
- Can only view AWS resources, i.e. the user can only view what is in an S3 bucket
- We can also create custom IAM permission policies using the policy generator or write them from scratch
- More than one policy can be attached to a user or group at the same time.
- Policies cannot be directly attached to AWS resources(such as an EC2 instance)
- We use roles for this.
- In the console, the policies section shows the built-in AWS policies.
- To create a policy
- We can copy an AWS managed policy and customize it to fit our needs.
- We can generate a new policy using the policy generator
- Helps us create policies at a granular level.
- We can select conditions from drop down for different resources.
- Create our own policy
- We can paste our own json
- We must validate policy in this case.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:*",
      "Resource": "*",
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "us-east-2"
        }
      }
    }
  ]
}
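Assuming the JSON above is saved as policy.json, it can be created from the CLI (the policy name is illustrative); IAM rejects malformed documents, which doubles as validation:
aws iam create-policy --policy-name ec2-us-east-2-only \
    --policy-document file://policy.json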
Custom Time Bound Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "efs:*"
      ],
      "Resource": "*",
      "Effect": "Allow",
      "Condition": {
        "Bool": { "aws:MultiFactorAuthPresent": "true" },
        "DateGreaterThan": { "aws:CurrentTime": "2020-07-01T00:00:00Z" },
        "DateLessThan": { "aws:CurrentTime": "2020-12-31T23:59:59Z" }
      }
    }
  ]
}
- Visual Editor for Policy Creation
- This is a newer feature added by AWS for creating policies
- Along with JSON, we can now also create a policy using the Visual Editor
- When we click on create policy we are presented with both options; the JSON option was covered above.
- In Visual Editor here is the process
- Select the service for which we want to define conditions in the policy
- Next select actions available in that service
- Select resources
- Select Conditions from checkbox like MFA, Source IP etc
- Name the Policy
- Put Description
- Access Advisor
- Applies the principle of least privilege
- User should have as few permissions as possible
- These permissions include group membership and assumed roles to other accounts.
- Access advisor allows unused permissions to be identified
- It's a way of auditing permissions
- We can also look at permissions per
- User
- Group
- Role
- Provides a list of all the services that a user used recently, with the last-used time
- This information is necessary to apply the principle of least privilege
- This information is on the dashboard of each user and group.
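The same data is also exposed programmatically; a sketch (the user ARN and job ID are placeholders):
# Start a report of when each service was last accessed by this user.
aws iam generate-service-last-accessed-details \
    --arn arn:aws:iam::123456789012:user/alice
# Retrieve the report using the JobId returned by the call above.
aws iam get-service-last-accessed-details --job-id XXXXXXXX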
- Encryption
- Encryption is used to mask code/text so that humans and computers can't read it
- It scrambles the code/text
- The basic component of encryption is the key
- The cipher used to encrypt and decrypt data
- Keys have gotten longer as computing power has grown
- Keys can also be used to encrypt other keys(Master Key)
- This process is known as enveloping
- The key is put through a mathematical scrambler along with the text, and the result is unreadable.
- Keys of 16, 32, or 64 bits can be easily broken nowadays, so the standard is now a minimum of 256-bit keys; 512- and 1024-bit keys are also used.
- The cipher should be complex enough for better encryption.
- Server side Encryption
- Data is encrypted as it is written to the disk, then decrypted as it is read from the disk.
- Often referred to as encryption "at rest".
- Client side Encryption
- Data is encrypted by the client before it is sent to the server, then decrypted when the client receives data from the server.
- Often referred to as encryption "in transit".
- Symmetric Encryption
- Uses the same key to encrypt and decrypt.
- Example: Advanced Encryption Standard (AES) - 128, 192, 256 bit
- Asymmetric Encryption
- Uses different keys to encrypt and decrypt: a public and a private key
- The private key cannot be derived from the public key
- The public key is available to any entity
- Examples: Secure Sockets Layer (SSL), Transport Layer Security (TLS), SSH
- Client sends request to server
- Server sends back a public key to client
- The client encrypts the data with the public key and sends it to the server, which decrypts it using the private key.
- This is encryption in transit.
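A minimal OpenSSL sketch of this asymmetric pattern (file names are illustrative):
# Generate a 2048-bit RSA private key and derive its public key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
openssl rsa -in private.pem -pubout -out public.pem
# Anyone can encrypt with the public key...
openssl rsautl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.enc
# ...but only the private key holder can decrypt.
openssl rsautl -decrypt -inkey private.pem -in secret.enc -out decrypted.txt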
- SSH
- SSH uses both types of encryption, depending upon the purpose
- For symmetric encryption, the initial connection gets encrypted with both sides using an agreed-upon session key.
- This process is much faster for data transmission.
- For asymmetric authentication, two keys are generated, viz. public and private.
- These are industry standard RSA key pairs.
- The public key gets copied to the server (~/.ssh/authorized_keys)
- This happens automatically when we launch an instance with an associated key pair.
- The private key is downloaded to the user's computer and its permissions are updated
- chmod 400 <keyname>.pem
- The server sends a challenge message to the client, encrypted with the public key; it gets decrypted using the client's private key
- This string is then sent back to the server, and if it matches, access is granted.
- Windows servers using the ec2config service will use their private key to decrypt the administrator password.
- HSM(Hardware Security Module)
- Used in Data Center Environments
- Physical device for secure key storage and management.
- AWS has developed a service that allows us to have a cloud HSM (CloudHSM).
- CloudHSM connects to a VPC and can be separated from other networks for latency and security reasons
- The keys are controlled and managed by user.
- CloudHSM can be placed in multiple availability zones and clustered
- Load balancing and key replication are handled within the cluster, so we only need to add keys in one place
- As such, keys can be kept on dedicated hardware
- Asymmetric handshakes can increase processing time, but this work can be offloaded to CloudHSM, taking the load off our application.
- HSM clustering helps ensure that our keys remain available if one of the HSMs in an availability zone goes down.
- Key Management Service(KMS)
- AWS KMS is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses FIPS 140-2 validated hardware security modules to protect the security of your keys.
- AWS KMS is also integrated with AWS CloudTrail and S3 to provide logs of all key usage, to help meet your regulatory and compliance needs.
- If we do not have a compliance need to store our keys on dedicated hardware we can use KMS.
- A service that allows us to create and control our encryption keys.
- Advantages over HSM are
- We can use IAM policies for KMS access.
- AWS services integrate directly with KMS.
- KMS stores customer master keys (CMKs).
- This process follows symmetric encryption with a change
- In some HSMs there can be any number of Key Encryption Keys (KEKs).
- This process is called enveloping
- KMS envelopes one layer and stores the top key, the CMK
- The encrypted data key is stored with data.
- KMS encrypts the data key used to encrypt the data with a master key, and stores the encrypted key along with the data, which can be on S3, EBS, or EFS
- Thus the key stored with the data is safe, as it is itself encrypted with a master key, and the data cannot be decrypted using the stored key alone.
- In KMS we can define granular access control for keys.
- On the dashboard go to IAM; in IAM, under the KMS section, go to Encryption Keys.
- We will get master keys related to our services.
- Now let's try encrypting a file using KMS
- First create an S3 bucket with the following settings
- Enable Versioning
- For encryption we can choose from following keys
- Default encryption is AES-256 server-side encryption with S3-managed keys
- Does symmetric encryption of our data; the key used in the symmetric encryption is stored in S3 as well.
- AWS-KMS encryption allows us to use symmetric encryption that encrypts the data with a data key, and KMS encrypts that key with a master key
- The master key is stored in KMS and the encrypted data key is stored in S3 along with the data.
- We select the AWS-KMS option and will encrypt using the stock aws/s3 key
- We can also choose from one of our pre-generated KMS master keys
- We can create our own Custom KMS master key ARN
- You can find your KMS keys in IAM > Encryption keys.
- Note the key is not created until a file is uploaded to the S3 bucket, i.e. a file must be present in the bucket to be encrypted.
- S3 must have a file for this key to be generated.
- You will not find the key in IAM if it has not been generated.
- If you have a custom key you can find that in your KMS custom keys
- We can check which key a document is encrypted with by matching the last few characters of the document's KMS key ID with our KMS key ID.
- If a key policy is set on a bucket, it applies to uploaded files by default.
- For example, with one system key and one custom key, if the system key is set on the bucket then by default an uploaded file will also use the system key.
- To use a different key for a file we have to set it explicitly while uploading the file
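A CLI sketch of that upload step (the bucket name and KMS key ARN are placeholders):
# Upload a file encrypted server-side with a specific KMS master key.
aws s3 cp report.txt s3://my-bucket/ \
    --sse aws:kms \
    --sse-kms-key-id arn:aws:kms:us-east-1:123456789012:key/xxxxxxxx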
- OS-Level Access
- EC2 OS Level Security
- For EC2 least privilege principle is followed
- To access our instances securely
- Linux/Mac OS accessing Linux instances(cloud-init)
- use built in terminal to SSH to the instance
- Authenticate with EC2 key pair
- For any OS connecting to windows(ec2config)
- Use the EC2 key pair to decrypt the Administrator Password
- Use Remote Desktop
- For windows connecting to Linux Instances
- Use PuTTY
- The key pair is created while creating a new EC2 instance
- We need to store the .pem key on our local system and give its path in our ssh command
- Bastion Host
- Helps to secure SSH in our environment
- Functions like a "Jumpbox"
- Allows us to securely access instances in private subnets without making those instances public in any way.
- We access the Bastion Host, and from the Bastion Host we access instances in private subnets.
- Best practices for Bastion Hosts
- Deploy in 2 availability zones
- Use Auto Scaling to maintain the desired number of Bastion Hosts
- Deploy in public subnets(DMZ)
- Access is locked down and is only allowed from known CIDR Ranges
- Our office IP Range
- Specific Administrative Machine IP Range
- Ports are limited to only those the Bastion Host needs
- Do not copy keys or any other access information to the Bastion Host or any other instance.
- Creating a Bastion Host in Linux
- We need a VPC.
- Subnets 2 public and 1 or many private
- Private Subnets are for instances.
- Public subnets are for Bastion Hosts
- Route Tables with private subnets assigned to them.
- Internet Gateways attached to VPC
- NAT gateways, preferably one in each availability zone, limited to private subnets.
- Assign NAT gateways to each route table.
- Lock down network ACLs
- Set up a public security group for Bastion Hosts
- with inbound rules for SSH and RDP
- RDP is needed for Windows; if we are using Linux we don't require RDP.
- In the source we can set the IP range of our corporate office, the IP range of admins, or the IP range of admin workstations.
- Set up a private security group similarly
- Create the instances that will be accessible via the Bastion Host.
- Create an Auto Scaling group.
- Click on create launch configuration if you are creating one for the first time
- Use Amazon AMIs
- Use t2.micros
- Give it a name, IAM roles
- Assign a public IP to every instance
- Let storage be what is given by default.
- Add the Bastion security group
- Review and create Launch configuration.
- Now create an Auto Scaling group using the above launch configuration and give it a name.
- Set it to 2 instances
- Select VPC network.
- Select public subnets, since we need to access our Bastion from the public network.
- Configure load balancer if needed.
- Next configure scaling policies
- This is only required if we need to keep two similar instances in one availability zone, i.e. we need to add them to more than 2 public subnets.
- Now in our instances, rename them Bastion 1 and Bastion 2 (we created 2 instances, so you will see 2 entries there) in different availability zones.
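A rough CLI equivalent of the launch configuration and Auto Scaling group (all IDs and names are placeholders):
# Launch configuration for the bastion instances.
aws autoscaling create-launch-configuration \
    --launch-configuration-name bastion-lc \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-key \
    --security-groups sg-bastion \
    --associate-public-ip-address
# Keep two bastions alive across the two public subnets.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name bastion-asg \
    --launch-configuration-name bastion-lc \
    --min-size 2 --max-size 2 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaa,subnet-bbbb"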
- To access a Bastion Host we use SSH agent forwarding, unlike regular instances where we used plain SSH.
- Because we do not want to copy EC2 key pairs onto the instances in our environment, we need to keep them outside our environment.
- So using SSH agent forwarding we first hop onto our Bastion server, and then we hop onto our application servers.
- Setting up ssh-agent bash
- ssh-agent bash
- ssh-add key.pem
- ssh -A ec2-user@publicIp
- Using SSH this way we can only hop across 2 instances.
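End to end, the two hops look roughly like this (the IPs are placeholders):
# On the workstation: load the key into an agent; never copy it anywhere.
ssh-agent bash
ssh-add bastion-key.pem
# Hop 1: forward the agent (-A) to the bastion's public IP.
ssh -A ec2-user@203.0.113.10
# Hop 2: from the bastion, reach the private instance; the forwarded agent
# answers the key challenge, so no .pem file ever lands on the bastion.
ssh ec2-user@10.0.2.25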
- Securing ssh communication
- In security groups, add rules for the protocols the application servers need.
- SSH access should be allowed from the Bastion server only.
- Add a rule for HTTP protocol if instance is hosting a web application.
- Now anyone outside our Bastion Host security group cannot access these instances.
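This rule can be expressed in the CLI by referencing the bastion's security group as the source (the group IDs are placeholders):
# Allow SSH into the app servers only from members of the bastion SG.
aws ec2 authorize-security-group-ingress \
    --group-id sg-appservers \
    --protocol tcp --port 22 \
    --source-group sg-bastion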
- Next, try shutting down your Bastion Host; you will find that due to Auto Scaling another Bastion instance is created automatically, without a name.
- Creating a Bastion Host in Windows
- For the security group, change the protocol from SSH to RDP with the source being the Bastion Host.
- Click on the instance, generate the password, and download the key file.
- Copy the password, click on the RDP file, and paste in the password; you are connected to the Bastion Host.
- Now repeat similar steps from the Bastion Host to connect to the application server.
- Data Security
- Things to take care of
- Accidental information disclosure
- Data integrity compromised
- Accidental deletion
- Availability
- Securing Data at Rest in S3
- Permissions
- Bucket level and object level permissions along with IAM policies
- Rule of least privilege
- MFA delete
- Versioning
- Enable to store a new version on every modification or delete.
- Helps with accidental deletion by creating a version for deleted objects.
- Replication
- Objects are replicated across Availability Zones automatically.
- Standard and Reduced redundancy options at different price points.
- Backup
- Replication and Versioning make backups unnecessary.
- Can write applications to back up objects to another region or to on-prem storage.
- Server side Encryption
- Use either S3 master key or KMS master key.
- Assists with accidental data exposure as long as the keys are not compromised.
- VPC Endpoint
- Can use data inside the VPC without making it public.
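The versioning and MFA delete settings above can be sketched in the CLI (the bucket name and MFA device serial are placeholders):
# Turn on versioning for the bucket.
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled
# MFA delete can only be enabled by the root user, passing a current MFA code.
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"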
- Securing Data at Rest in Glacier
- We can archive data present in S3 to Glacier
- Server side encryption
- All data is encrypted using AES-256
- Each archive gets a unique key.
- A master key is then created and stored securely.
- Securing data at Rest in Elastic Block Storage(EBS)
- Replication
- EBS stores 2 copies of each volume in the same availability zone
- Helps with hardware failures, but is not intended to improve availability.
- Backup
- Snapshots(Point in time captures).
- Can use IAM to control access to these snapshot objects.
- Server Side Encryption
- AWS KMS master-key
- Microsoft Encrypting File System (EFS)
- Microsoft BitLocker
- Linux dm-crypt
- Third Party Solutions
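Creating a KMS-encrypted volume might look like this (the KMS key ARN is a placeholder):
# Create an encrypted 100 GiB gp2 volume in one availability zone.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 100 --volume-type gp2 \
    --encrypted \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/xxxxxxxx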
- Securing data at Rest in Relational Database Service(RDS)
- Permissions
- Use IAM policies on users, groups and roles to limit access.
- Rule of least privilege
- Encryption
- Key Management Service (KMS) is integrated for most instance sizes (not t2.micro).
- MySQL, Oracle, and Microsoft SQL Server have cryptographic functions at the platform level
- Keys are managed at the application level.
- Must reference the encryption and key in queries on encrypted database files.
- Securing data at rest in DynamoDB
- DynamoDB is a managed service in AWS, i.e. we do not have access to a lot of things we can modify.
- Permissions
- Use IAM policies on users, groups and roles to limit access.
- Rule of least privilege
- Encryption
- Same as RDS: can encrypt at the application layer (affects the query process).
- Best practice is to use raw binary or base64-encoded fields when storing encrypted fields.
- VPC Endpoint
- Can use data inside VPC without making it public.
- Securing data at Rest in Elastic Map Reduce
- Amazon Managed Service
- AWS provides the AMIs (no custom AMIs)
- EMR instances do not encrypt data at rest
- Data Store
- S3 or DynamoDB
- HDFS(Hadoop Distributed File System)
- If HDFS, AWS defaults to Hadoop KMS
- Techniques to improve data security
- S3 server side encryption(if not using HDFS)
- Application level encryption
- Hybrid
- Decommission data and media securely
- Different from on-prem decommissioning
- When a delete request is made, AWS does not decommission the underlying hardware.
- Storage blocks are marked unallocated
- Secure mechanisms reassign the blocks elsewhere
- Reading and writing to blocks
- When an instance writes to a block of storage
- The previous data is wiped
- Then the block is overwritten with our data
- If instance reads from the block previously written
- Previous stored data is returned
- If there is no previous data from that instance then the hypervisor returns a zero.
- End of Life
- AWS follows techniques in
- DoD 5220.22-M ("National Industrial Security Program Operating Manual")
- NIST SP 800-88 ("Guidelines for Media Sanitization")
- If a device is unable to adhere to these two standards, it is physically destroyed.
- Securing Data in Transit
- Concerns with communicating over public links(Internet)
- Accidental Information Disclosure
- Compromised data Integrity
- Identity Spoofing(man-in-the-middle)
- Approaches for protecting data in transit
- Use HTTPS whenever possible for web applications(SSL/TLS)
- Can offload HTTPS processing to an Elastic Load Balancer if processing overhead is a concern
- HTTPS is an asymmetric encryption strategy, which requires a bit more communication upfront.
- Remote Desktop protocol accessible servers should have X.509 certificates to prevent identity spoofing.
- SSH is preferred for administrative connections to Linux servers
- Database server traffic should use SSL/TLS as well
- AWS console and AWS API's use SSL/TLS for connection to clients
- Example services are S3, RDS, DynamoDB, and Elastic MapReduce (EMR)
- X.509 certificates are used by the client browser to authenticate identity
- Carries public key and binds that key to an identity
- AWS Certificate Manager
- Allows AWS users to easily create and manage SSL/TLS certificates
- Works With
- Elastic Load Balancer
- Amazon CloudFront
- API Gateway
- Elastic Beanstalk/CloudFormation
- Automatic Certificate Renewal
- Import 3rd party certificates as well
- It's Free
- To Request a certificate
- In security and compliance click on certificate manager.
- In the domain name field, add the domain which needs to be secured with an SSL/TLS certificate.
- Select the type of validation, i.e. DNS or email.
- Validate
- Now whenever we use the specific domain name the certificates will be automatically attached.
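The request step can also be sketched from the CLI (the domain is a placeholder):
# Request a public certificate validated via DNS.
aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS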
- OS Security
- Recommendations by AWS
- Disable root user API access keys
- Use limited source IPs in security groups.
- Password-protect .pem files on user machines
- Keep the authorized_keys file up to date on your instances
- Rotate Credentials(access keys)
- Use Access Advisor to identify and remove unnecessary permissions.
- Use Bastion Hosts.
- Develop Configuration standards for all resources and regularly review them
- Securing Custom AMI's(Amazon Machine Images)
- AMIs can be public or private.
- We should define a base configuration to be deployed on instances
- Operating System
- Applications
- Security Settings(authorized_key,local accounts,file and directory permissions)
- Cleanup/hardening tasks to perform before publishing
- Disable insecure applications like telnet.
- Minimize exposure - disable all ports except management and those necessary for the application itself.
- Protect credentials
- Access keys, certificates, or third-party credentials should be deleted.
- Software should not be using default accounts.
- SSH keys must not be published
- Disable guest account(windows)
- Protect Data
- Delete shell history and logs (Event Log on Windows)
- Remove printer and file sharing, or any other sharing service that is on by default (Windows).
- Make sure your systems do not violate the AWS acceptable use policy
- Examples: open SMTP relays or proxy servers
- Make sure there are no .pem files in the AMI home directory. These may contain private keys used by other systems
- If you have run aws configure on the machine, then under the hidden .aws directory there is a credentials file containing the access key and secret access key of the user created on the instance. This needs to be taken care of.
- Take care of the bash history, which may contain information such as .pem file names and the names of servers that used those .pem files
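A minimal cleanup sketch before imaging, assuming a default Amazon Linux user (adapt paths to your AMI):
# Remove credentials and history before creating the AMI.
rm -f ~/.ssh/authorized_keys
rm -rf ~/.aws
history -c && rm -f ~/.bash_history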
- Bootstrapping AMI's
- cloud-init, cfn-init, and tools like Puppet and Chef
- Considerations
- Patching/Updates
- Dependencies should be considered
- Security software updates might update beyond the patch level of the AMI.
- Application updates might patch beyond the build in the AMI.
- The solution for this is to update the AMI frequently
- Bootstrapping applications should take into account differences in
- Production Environment
- Test Environment
- DMZ/Extra net environment
- Instance updates might break external management and security monitoring.
- Test on non-critical resources.
- AWS System Manager features - Patching/Automation
- Resource Groups: allow you to group your resources logically (Prod, Test, DMZ, web servers, instances running a local DB, etc.)
- Insights: aggregates monitoring from CloudTrail, CloudWatch, Trusted Advisor, and more into a single dashboard for each resource group.
- Inventory: a listing of your instances and the software installed on them.
- Can collect data on applications,files,network configs,services and more.
- Automation: Automate IT operations and management tasks through scheduling,triggering from an alarm or directly.
- Start stop instances of a group together.
- Run Command: Secure remote management replacing need for Bastion hosts or SSH.
- Run Shell Scripts on a bunch of instances remotely.
- Patch Manager: Helps to deploy OS and software patches across EC2 or on-prem.
- Run security patches on all instances depending on their type by creating a patch baseline.
- Maintenance Window: Allows for scheduling administrative and maintenance tasks.
- State Manager and Parameter Store: Used for configuration management.
- To access Systems Manager, first create a role.
- Roles can be created for EC2; choose "EC2 Role for Simple Systems Manager".
- Select Policy
- Give name to the role.
- Now we can attach the role to groups and perform the actions above.
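Run Command, for example, can be sketched from the CLI (the instance ID is a placeholder):
# Run a shell command remotely on a managed instance via SSM.
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids i-0123456789abcdef0 \
    --parameters commands="uptime"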
- Mitigating Malware
- Malware
- Executing untrusted code on a system can result in rootkits, botnets, and more
- That system is no longer yours
- Combat malware by:
- Use only trusted AMIs, software, and software depots
- Use the principle of least privilege
- Keep patches up to date (which means updating AMIs regularly as well)
- Use antivirus/antispam software
- Host-based IDS (can detect rootkits and check file integrity)
- Suggested resolution
- Antivirus may be able to "clean" the system.
- Best practice: save the data and reinstall the system, applications, and data from trusted sources.
- Could be as simple as terminating the instance (Auto Scaling replaces it)
- Mitigating Abuse and Compromise
- Abuse activities
- Externally observed behavior of an AWS customer's instances or resources that is malicious, offensive, illegal, or could harm other internet sites.
- AWS will shut down malicious abusers, but many abuse complaints are about customers conducting legitimate business on AWS.
- Causes of abuse that are not intentional
- Compromised Resource- EC2 instance becoming a botnet
- Unintentional abuse - web crawlers can sometimes register as a DoS attack
- Secondary abuse - an end user of your service posts an infected file
- False complaints - internet users mistake legitimate activities for abuse.
- Best Practices for response to abuse
- Do not ignore AWS abuse communications, and make sure AWS has the most effective email address on file.
- Follow security best practices
- Mitigate unidentified compromises.