AWS Solution Architect

AWS Global Infrastructure

Components of AWS Global Infrastructure are as follows. Components marked in bold are those which need to be studied for the AWS Solutions Architect exam. The components marked in italics are the ones which must be mastered thoroughly.

  • Compute
    • EC2
    • Lambda
  • Storage
    • S3
    • EFS
  • Databases
    • Relational
      • RDS
    • Non Relational
      • DynamoDB
  • Migration and Transfer
    • Snowball
  • Network and Content Delivery
    • VPC, CDN, DNS
  • Developer Tools
  • Robotics
  • Blockchain
  • Satellite
  • Management and Governance
  • Media Services
  • Machine Learning
    • SageMaker
  • Analytics
  • Security Identity and Compliance
    • IAM
  • Mobile
  • AR & VR
  • Application Integration
  • AWS Cost Management
  • Customer Engagement
  • Business Application
  • Desktop and App Streaming
  • IoT
  • Game Development
Availability Zones
  • An Availability Zone is like a data center: a facility where hardware is placed.
  • A data center is a building filled with servers.
  • An Availability Zone may contain several data centers, but because they are close together they are counted as one Availability Zone.
Regions

  • A region is a geographical area. Each region consists of two (or more) Availability Zones.
Edge Locations
  • Edge Locations are endpoints for AWS which are used for caching content. Typically this consists of CloudFront, Amazon's content delivery network (CDN).
    • There are many more edge locations than regions. Currently there are over 50 edge locations.
  • Files from the origin are cached at edge locations, which makes them available closer to users.
Identity and Access Management(IAM)
  • IAM allows you to manage users and their level of access to the AWS console.
  • It is important to understand IAM and how it works for administrating a company's AWS account in real life.
  • IAM allows you to set up users, groups, permissions, and roles.
  • Features are as follows
    • Centralized control of your AWS account
    • Shared access to your AWS account
    • Granular Permissions
    • Identity Federation (Active Directory, Facebook, LinkedIn)
      • Log in to AWS using Windows Active Directory authentication or Facebook/LinkedIn credentials.
      • Useful, for example, for games and web applications where users sign in with social identities.
    • Multifactor Authentication
      • Provides an additional layer of authentication beyond user name and password, such as a software token (TOTP) generated by Google Authenticator, Microsoft Authenticator, etc.
    • Provides temporary access for users, devices, and services where necessary by assuming roles.
    • Allows you to set up your own password rotation policy
    • Integrates with many different AWS services
    • Supports PCI DSS compliance
      • A compliance framework for handling credit card and other financial information.
  • Key Terms
    • User
      • End users, such as the employees of an organisation, who operate from the AWS console.
      • A user can also represent an application accessing AWS programmatically from the CLI rather than a person using the console.
      • Users can be organized into groups (see below), which are collections of users under one set of permissions.
      • We can create roles that carry access policies for different resources and assign them to users or AWS services, which upstream/downstream applications can then use to access AWS resources.
    • Groups
      • A collection of users. Each user in the group will inherit the permissions of the group.
      • We can create a group by giving it a name and assign policies to it.
    • Policies
      • An IAM policy is a document that defines one or more permissions.
      • An IAM policy can be attached to a user, group, or role.
      • Policies are made up of documents, called policy documents. These documents are written in JSON and describe what a user/group/role is allowed to do.
      • A policy is a set of permissions.
      • A policy is written in JSON.
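A policy document can also be created and attached programmatically, not just through the console. Below is a minimal sketch using Python and boto3; the policy name, user name, and bucket name are placeholders, and the statement simply grants read-only access to one bucket.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical read-only policy for a single bucket (names are placeholders).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Create the customer-managed policy, then attach it to a user.
resp = iam.create_policy(
    PolicyName="ExampleS3ReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName="example-user",
    PolicyArn=resp["Policy"]["Arn"],
)
```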
    • Roles
      • You create roles and then assign them to AWS resources.
      • Roles are used to grant permissions to entities outside IAM which we trust, for example:
        • Users in another AWS account or upstream systems.
        • Application code running in an EC2 instance that needs to perform actions on AWS resources.
        • An AWS service that needs to access resources.
        • Users from a corporate directory who use identity federation with SAML.
      • Select the service which will use the role (the service you are creating the role for).
      • Select a policy for the role.
      • Give the role a name.
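Those console steps can also be expressed with boto3. The sketch below (role name and policy choice are illustrative) creates a role that the EC2 service is trusted to assume and attaches the AWS-managed S3 read-only policy to it.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: allow the EC2 service to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="example-ec2-s3-role",          # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach an AWS-managed policy that defines what the role may do.
iam.attach_role_policy(
    RoleName="example-ec2-s3-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```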
    • Root Account
      • An account with all privileges.
      • It is the account created when you first set up your AWS account.
      • It has complete admin access.
      • MFA sign in types
        • Virtual MFA device
        • U2F security key
        • Other hardware MFA device
  • IAM is global; it is not tied to a region and is the same across all regions.
  • The AWS managed job function policies are as follows
    • Administrator
    • Billing
    • Database administrator
    • Data scientist
    • Developer power user
    • Network administrator
    • Security auditor
    • Support user
    • System administrator
    • View-only user
  • New users have no permissions when first created
  • New users are assigned an Access Key ID & Secret Access Key when first created
    • These are used to programmatically access the AWS ecosystem.
    • These are not the same as a password. You cannot use the Access Key ID and Secret Access Key to log in to the console.
    • They are only used to access AWS via the APIs and command line.
    • We can only view these once. If we lose them we need to regenerate them, so we must save them in a secure location.
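As a sketch of what "programmatic access" means, the access key pair can be used to create a boto3 session. The key values below are obviously placeholders; in practice they are normally stored in ~/.aws/credentials or environment variables rather than in code.

```python
import boto3

# Placeholder credentials: never hard-code real keys in source code.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEEXAMPLE",
    aws_secret_access_key="examplesecretkeyexamplesecretkey",
    region_name="us-east-1",
)

# Quick check that the keys work: ask STS who we are.
print(session.client("sts").get_caller_identity()["Arn"])
```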
  • Add a user with a username and an access type, which can be programmatic or console access.
    • Programmatic access can be used by an upstream system to access resources using the CLI or SDKs.
    • We can add the user to a group, copy permissions from an existing user, or attach policies directly to the user.
    • We can change account settings and change/create a password policy.
  • Setup Multiple Factor Authentication on your root account.
    • Hook up with an MFA client application like Google Authenticator, Microsoft Authenticator, etc.
    • We can also hook up to a U2F security key such as Yubikey or any other compliant U2F device.
    • Any other hardware MFA device.
  • We can create and customize our own password rotation policies.
  • Power user has access to all AWS services except the management of groups and users within IAM
  • IAM policy Simulator
    • Helps us to test the effects of IAM policies before committing them to production.
    • Validate that policy works as expected.
    • Test policies already attached to existing users - great for troubleshooting an issue which you suspect is IAM related.
    • Access using https://policysim.aws.amazon.com
    • We can test policies for various users, groups, and roles.
      • Select the user/group/role.
      • Select the attached policies.
      • Select Services.
      • Select Action.
      • Select run simulation.
      • Under the results section it shows whether the concerned action is allowed or denied in the current scenario.
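The same checks can also be run through the IAM API rather than the web simulator. A minimal boto3 sketch (the user ARN, action, and bucket are placeholders) looks like this:

```python
import boto3

iam = boto3.client("iam")

# Simulate whether a given user may perform s3:GetObject on a bucket.
resp = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/example-user",  # placeholder
    ActionNames=["s3:GetObject"],
    ResourceArns=["arn:aws:s3:::example-bucket/*"],
)

for result in resp["EvaluationResults"]:
    # EvalDecision is "allowed", "implicitDeny", or "explicitDeny".
    print(result["EvalActionName"], "->", result["EvalDecision"])
```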
AWS Create a Billing Alarm
  • CloudWatch is used to monitor (watch) your cloud resources.
  • The CloudWatch service is listed under Management and Governance.
  • Click the Create Alarm button under the Billing section to create a billing alarm.
  • If the Billing metric is not preselected for the alarm, you may see a Select Metric button instead.
    • Click Select Metric and choose Billing as the metric.
    • Select Total Estimated Charges
    • Select Currency as USD.
  • You will be at Billing Alarm Screen where you will see a graph and parameters like currency,status and period.
  • Next we need to select whether our condition uses a range or a static value; for that, select between Anomaly Detection and Static threshold.
  • Next select a SNS(Simple Notification Service) topic
    • Name the topic something like Billing Alarm
    • Add Email address
    • Click create topic.
  • The Address given will receive an email to subscribe to the topic.
    • Confirm Subscription
  • Give Alarm a name and description.
    • Click create Alarm
  • CloudWatch is used to create alarms in AWS.
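The same alarm can be created programmatically. The sketch below assumes billing alerts are already enabled on the account; billing metrics live in us-east-1, and the threshold, topic name, and email address are placeholders.

```python
import boto3

# Billing metrics are only published in us-east-1.
sns = boto3.client("sns", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# SNS topic plus an email subscription (must be confirmed from the inbox).
topic_arn = sns.create_topic(Name="BillingAlarm")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="me@example.com")

# Alarm when estimated charges exceed 10 USD.
cloudwatch.put_metric_alarm(
    AlarmName="billing-over-10-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # 6 hours
    EvaluationPeriods=1,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```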
S3
  • S3 stands for "Simple Storage Service"
  • Provides developers and IT teams with secure, durable, highly scalable object storage. Amazon S3 is easy to use, with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
  • S3 is a safe place to store your files.
  • It is object-based storage
    • Allows us to upload files.
    • Objects can be from 0 bytes to 5 TB in size.
    • An object is just like a file; it has the following attributes.
      • Key
        • This is the name of the object
      • Value
        • This is the data which is made up of a sequence of bytes
      • Version ID
        • Important for versioning
      • Metadata
        • Data about data we are storing
      • Sub resources
        • Access Control List
        • Torrent 
  • The data is spread across multiple devices and facilities.
    • There is unlimited storage
    • Files are stored in buckets
      • Buckets are like folders
    • S3 is a universal namespace i.e. names of buckets must be unique globally.
    • When we upload a file to S3
      • if the request returns an HTTP 200 code, the upload was successful.
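A minimal upload with boto3 illustrates the HTTP 200 check (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

resp = s3.put_object(
    Bucket="example-bucket",       # placeholder bucket
    Key="hello.txt",
    Body=b"hello from S3",
)

# A 200 status code means the upload succeeded.
print(resp["ResponseMetadata"]["HTTPStatusCode"])
```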
  • Data Consistency in S3
    • Read after Write consistency for PUTS of new objects
      • If we write a new file and read it immediately afterwards, we will be able to view that data.
    • Eventual Consistency for overwrite PUTS and DELETES(Can take some time to propagate)
      • If we update an existing file or delete a file and read it immediately we may get the older version or may not. Basically changes to objects may take a little time to propagate.
  • Amazon guarantees 99.9% availability of S3
  • Amazon guarantees 99.999999999% durability of S3 information(11 X  9's).
  • S3 has tiered storage available.
  • S3 has life cycle management i.e. we can move them between tiers based on time based events.
  • S3 provides versioning of objects
  • S3 provides object encryption
  • S3 provides MFA delete
  • We can secure our data using Access Control Lists and Bucket Policies
    • We can lock our objects at bucket level or object level using ACL's
  • S3 storage classes/tiers
    • S3 standard
      • 99.99% availability
      • 99.999999999% durability
      • Stored redundantly across multiple devices in multiple facilities, and is designed to sustain the loss of 2 facilities concurrently.
    • S3-IA (Infrequently Accessed)
      • For data that is accessed less frequently but requires rapid access when needed.
      • Lower fee than S3, but we are charged a retrieval fee.
    • S3 One Zone - IA
      • When we want a lower cost option for infrequently accessed data, but do not require the multiple Availability Zone data resilience.
    • S3 Intelligent-Tiering
      • Designed to optimize costs by automatically moving data to the most cost effective access tier without performance impact or operational overhead.
    •  S3 Glacier
      • S3 Glacier is a secure, durable, and low cost storage class for data archiving.
      • We can store any amount of data at costs that are competitive with or cheaper than on-premises solutions.
      • We may need to archive data because of regulations.
      • Retrieval times on first byte latency is configurable from minutes to hours.
    • S3 Glacier Deep archive
      • This is amazon's lowest cost storage class where a retrieval time of 12 hours is acceptable.
      • Retrieval time on first byte latency is configurable only in hours.
    • S3 RRS (Reduced Redundancy Storage)
      • This is one of the deprecated/decommissioned classes.
      • It predates S3 One Zone-IA and has been replaced by it.
  • Billing in S3
    • Billing in S3 is based on the following parameters
      • Storage
      • Requests
      • Storage Management Pricing
      • Data Transfer Pricing
      • Transfer Acceleration
        • S3 Transfer Acceleration enables fast, easy, and secure transfer of files over long distances between your end users and an S3 bucket.
        • Transfer acceleration takes advantage of Amazon CloudFront's globally distributed edge locations.
        • As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
      • Cross Region Replication Pricing
  • Files are stored in buckets.
  • Once created, a bucket is given a DNS name, for example:
    • gauravm.s3.amazonaws.com
      • means the bucket is in North Virginia (us-east-1), which is the default region.
      • if we create it in another region, the region appears in the subdomain, for example
        • gauravm.s3.eu-west-1.amazonaws.com
  • S3 is not suitable to install operating system or Database on
    • For that we need block based storage and S3 is object based storage
  • S3 is a global service and does not require any region selection.
    • When we select S3 from Services, the region indicator automatically changes to Global.
    • The buckets in S3 may belong to a region though
  • To Create a Bucket we require
    • Bucket Name
    • Bucket Region
    • By default, public access to the bucket is blocked.
    • Bucket Versioning i.e. version control on files in bucket
    • Tags for bucket in key value pairs
    • Encryption type i.e. server side or manual
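A sketch of creating a bucket with those settings via boto3 (the bucket name, region, and tag are placeholders; note that for us-east-1 the CreateBucketConfiguration argument must be omitted):

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
bucket = "example-unique-bucket-name"   # must be globally unique

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Block all public access (the default behaviour described above).
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enable versioning, default server-side encryption, and a tag.
s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
s3.put_bucket_tagging(Bucket=bucket, Tagging={"TagSet": [{"Key": "env", "Value": "dev"}]})
```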
  • Once a bucket is created it has
    • Objects
      • These are the files that we put in a bucket
      • Each object has a public url which is accessible depending on permissions provided
      • We can also set a separate storage class for each object; by default an object inherits the storage class of the bucket.
      • If public access for a bucket is blocked we cannot enable public access for its objects
        • Even if we make a public bucket the objects in it are not public by default
  • We can replicate the contents of one bucket to another bucket automatically by using cross region replication.
  • Restricting Bucket access
    • Bucket Policies
      • Applied across the whole bucket
    • Object Policies
      • Applied to individual files
    • IAM policies to Users and Groups
      • Applies to Users and Groups
    • Access Control Lists
  • S3 Security and Encryption
    • S3 buckets can be configured to create access logs which record all requests made to the S3 bucket.
    • These logs can be sent to another bucket, including a bucket in another account.
    • Encryption in Transit 
      • HTTPS protocol encrypts data in transit
      • Achieved by SSL/TLS
    • Encryption at REST(Server Side) is achieved by
      • S3 managed keys - (SSE-S3)
      • AWS Key Management Service,Managed keys - (SSE-KMS)
      • Server Side Encryption with Customer provided keys - (SSE-C)
      • In order to enable encryption at rest using EC2 and elastic block store we must configure encryption when creating the EBS volume.
    • Client Side Encryption
      • We create the encrypted object and upload to S3
        • Example Encrypted PDF,Word File
  • S3 Versioning
    • Stores all versions of an object (including all writes, even if we delete the object).
    • Once enabled, versioning cannot be disabled, only suspended.
    • Integrates with lifecycle rules.
    • Comes with MFA Delete capability.
    • Each version, once uploaded, must be made public individually.
      • Making the latest version public does not make earlier versions public.
    • When we delete an object it is marked as deleted i.e. its a soft delete.
    • To retrieve an object just remove the delete marker.
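The soft-delete behaviour can be seen with boto3: deleting an object in a versioned bucket only adds a delete marker, and deleting that marker restores the object. The bucket and key names below are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-versioned-bucket", "report.txt"

s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

# A plain delete on a versioned bucket just inserts a delete marker.
s3.delete_object(Bucket=bucket, Key=key)

# Find the latest delete marker and remove it to "undelete" the object.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for marker in versions.get("DeleteMarkers", []):
    if marker["IsLatest"]:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=marker["VersionId"])
```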
  • Lifecycle Management with S3
    • In Bucket go to management
      • In management we will find life cycle rules
    • Create a life cycle rule
      • add prefix to rule
        • prefix can be anything like folder name or images etc
      • or select if rule applies to all objects in a bucket.
    • Lifecycle Rule Actions
      • Transition current versions of objects between storage class.
        • We can transition the current version of objects between storage classes for example if we want to transition current version to storage class Standard IA we can do so using this option.
      • Transition previous versions of objects between storage classes.
        • Similarly we can transition previous versions after 30 days or any specified time limit.
      • Expire current version of object.
      • Permanently delete previous version  of objects.
      • Delete expired object delete markers or incomplete multipart uploads.
    • We can add as many transitions like this for example we can transition previous versions to a storage class after 30 days and then to another class after 60 days.
    • We are also shown a time line summary below according to our selection
    • Automates moving our objects between the different storage tiers.
    • Can be used in conjunction with versioning.
    • Can be applied to current version and previous versions as well.
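A lifecycle configuration equivalent to the rules described above can also be applied with boto3. The prefix, day counts, and target storage classes here are illustrative only.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},   # apply only to this prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 60, "StorageClass": "GLACIER"},
                ],
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```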
  • S3 Object Lock and Glacier Vault Lock
    • S3 Object Lock is used to store objects using a write once, read many (WORM) model. It can help you prevent objects from being deleted or modified for a fixed amount of time or indefinitely.
    • You can use S3 Object Lock to meet regulatory requirements that require WORM storage, or add an extra layer of protection against object changes and deletion.
    • Modes of object lock
      • Governance Mode
        • In Governance mode users can't overwrite or delete an object version or alter lock settings unless they have special permissions.
        • With governance mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the retention settings or delete the object if necessary.
      • Compliance Mode
        • A protected object version can't be overwritten or deleted by any user, including the root user in your AWS account.
        • When an object is locked in Compliance mode its retention mode can't be changed and its retention period can't be shortened.
        • Compliance mode assures an object version can't be overwritten or deleted for a duration of the retention period.
    • Retention period protects an object version for a fixed amount of time.When you place a retention period on an object version, Amazon S3 stores a timestamp in the object version metadata to indicate when the retention period expires.
    • After the retention period expires the object version can be overwritten or deleted unless you also placed a legal hold on the object version.
    • A legal hold by S3 object lock on an object version prevents an object version from being overwritten or deleted.
      • However a legal hold doesn't have an associated retention period and remains in effect until removed.
      • Legal holds can be freely placed and removed by any user who has the s3:PutObjectLegalHold permission.
    • Glacier Vault lock 
      • S3 Glacier vault lock allows you to easily deploy and enforce compliance controls for individual S3 Glacier Vaults with a Vault lock policy.
      • We can specify controls, such as WORM, in a Vault Lock policy and lock the policy from future edits.
      • Once locked the policy can no longer be changed.
    • Use S3 object lock to store objects using a write once , read many model
    • Object locks can be on individual objects or applied across the bucket as a whole.
  • S3 performance
    • prefix is the folder location after bucket name i.e if a file is in mybucket/folder1/subfolder1/myfile.jpg
      • Then prefix in this case is folder1/subfolder1
    • S3 has extremely low latency. We can get the first byte out of S3 within 100-200 milliseconds.
    • We can also achieve a high number of requests: 3500 PUT/COPY/POST/DELETE and 5500 GET/HEAD requests per second per prefix.
    • We can get better performance by spreading our reads across different prefixes
      • If we are using 2 prefixes you can achieve 11000 requests per second.
    • S3 Limitations when using KMS
      • If you are using SSE-KMS to encrypt your objects in S3, you must keep in mind the KMS limits.
      • When we upload a file, we will call Generate Data Key in KMS API
      • When we download a file we will call Decrypt in the KMS API.
      • Uploading/Downloading will count towards the KMS quota.
      • Currently we cannot request a quota increase for KMS.
      • The quota is region-specific; it is either 5,500, 10,000, or 30,000 requests per second.
    • Multipart uploads
      • Recommended for files over 100MB.
      • Required for files over 5GB.
      • Parallelize uploads(increases efficiency)
    • S3 Byte Range fetches
      • Parallelize downloads by specifying byte ranges.
      • If there is a failure in the download, it is only for a specific byte range.
      • Can be used to download only partial amounts of the file (e.g. header information).
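Both ideas are easy to sketch with boto3: upload_file switches to multipart automatically above a configurable threshold, and get_object accepts a Range header for byte-range fetches. The file, bucket, and key names below are placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Multipart upload: 100 MB parts, uploaded in parallel.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=8)
s3.upload_file("backup.tar", "example-bucket", "backups/backup.tar", Config=config)

# Byte-range fetch: read only the first kilobyte of the object.
resp = s3.get_object(Bucket="example-bucket", Key="backups/backup.tar", Range="bytes=0-1023")
header_bytes = resp["Body"].read()
```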
  • S3 select & Glacier Select
    • S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions.
    • By using S3 select to retrieve only the data needed by your application we can achieve drastic performance increases.
      • In many cases you get as much as a 400% improvement.
    • For example, if our data is stored in zipped CSV files, normal retrieval requires downloading, decompressing, and processing the whole CSV to get the data.
    • Using S3 Select we only need a simple SQL expression to return just the data we are interested in, instead of retrieving the entire object.
    • This means we are dealing with an order of magnitude less data which improves the performance of our underlying application.
    • Glacier Select
      • Some companies in highly regulated industries, e.g. financial services, healthcare, and others, write data directly to Amazon Glacier to satisfy compliance needs like SEC Rule 17a-4 or HIPAA.
      • Many S3 users have lifecycle policies designed to save on storage costs by moving their data into Glacier when they no longer need to access it on a regular basis.
      • Glacier select allows us to run SQL queries against Glacier directly.
    • Using S3 & Glacier Select we retrieve data by rows or columns using simple SQL expressions.
    • Save money on data transfer and increase speed.
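A boto3 sketch of S3 Select against a gzipped CSV object (the bucket, key, column names, and filter are all placeholders):

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="example-bucket",
    Key="data/users.csv.gz",
    ExpressionType="SQL",
    Expression="SELECT s.name, s.email FROM S3Object s WHERE s.country = 'IN'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; Records events carry the matching rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```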
  • AWS organization and Consolidated Billing
    • AWS organization is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.
    • When using AWS organization the billing is done on aggregation basis.
    • Advantages of Consolidated Billing
      • One consolidated bill covering all AWS accounts in the organization.
      • Very easy to track and allocate costs.
      • Volume pricing discount.
    • From AWS Console go to AWS organizations under Management and Governance
      • Click create organizations
      • It will add your current account to the organization and make it the management (root) account.
      • Next we can create a new account or invite an account.
        • New account is created in master account
      • Accept invite from your newly created/added account
      • You will be added to the organizational unit.
      • There may be organizational units in an organization depending on departments like finance, marketing etc.
        • We can group these accounts under Organizational Units using Organize Accounts.
    • We can also apply policies to AWS organizational accounts such as 
      • Service Control Policies
      • Tag Policies
    • Best practices with AWS organizations are as follows 
      • Always enable MFA and a complex password on the root account.
      • The paying account should be used for billing purposes only. Do not deploy resources into the paying account.
      • Enable/Disable AWS services using Service Control Policies(SCP) either on OU or individual account
  • Sharing S3 buckets across Accounts
    •  3 different ways to share S3 buckets across accounts
      • Using Bucket policies & IAM(applies across entire bucket)
        • Programmatic access only.
      • Using Bucket ACL's and IAM(individual objects)
        • Programmatic access only.
      • Cross account IAM Roles
        • Programmatic and Console access.
        • In IAM create a Role for the other AWS account and attach policies related to S3 to that role.
  • Cross Region Replication
    • First create a destination bucket in destination region
    • Next in S3 source bucket under management go to life cycle rules and create a replication rule.
      • In the rule, first add a rule name
      • select an IAM role
      • select a filter for the source bucket: whether the rule applies to all objects or a limited set of objects.
    • Next in destination bucket select the bucket which can be from this account or another account.
      • Versioning must be enabled on both the source and destination buckets; if not, a warning is shown and we can enable it directly from the rule page.
      • We can change the storage class of our replicated objects
      • We have option to add a time control to replication.
      • We have option to replicate metrics and events
      • We have option to replicate objects encrypted with KMS
      • We have the option to replicate delete markers.
    • Objects already in bucket will not be replicated.
    • Only new versions/objects added will be replicated.
    • If we change the permissions of an object in the source bucket, the change will not be applied to that object in the destination bucket.
    • We can replicate prod log files to another aws account S3
    • Delete markers are not replicated by default.
    • Deleting individual versions or delete markers will not be replicated.
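A replication rule like the one configured in the console can also be applied with boto3. This sketch assumes versioning is already enabled on both buckets and that an IAM role allowing replication already exists; all ARNs and names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},                     # all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::example-destination-bucket",
                    "StorageClass": "STANDARD_IA",            # optional class change
                },
            }
        ],
    },
)
```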
  • S3 Transfer Acceleration
    • S3 Transfer Acceleration utilises the CloudFront Edge Network to accelerate your uploads to S3.
    • Instead of uploading directly to your S3 bucket you can use a distinct URL to upload directly to an edge location which will transfer that file to S3.
    • A distinct url of edge location can be like 
      • mine.s3-accelerate.amazonaws.com
    • Amazon has built a tool to compare speed across different regions.
    • Files are transferred from the edge location to the bucket in the region that we specify.
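Transfer acceleration can be switched on per bucket and then used by pointing the client at the accelerate endpoint; a small boto3 sketch (the bucket and file names are placeholders):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# A client configured to use the s3-accelerate endpoint for uploads/downloads.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-file.zip", "example-bucket", "big-file.zip")
```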
  • AWS data sync
    • Allows us to move large amounts of data into AWS.
    • We install the AWS data sync agent on our server that connects to our NAS or file system
      • This will then copy data to AWS or write data from AWS
    • Automatically encrypts our data and accelerates transfer over wide area network.
    • Performs data integrity checks in transit and at rest as well.
    • Connects to Amazon S3, EFS, and FSx to copy data and metadata to and from AWS.
    • Way of syncing our data to AWS
    • Used with NFS and SMB compatible file systems.
    • Replication can be done hourly,daily and weekly.
    • Can be used to replicate EFS to EFS.
  • Cloud Front
    • It is a CDN or content delivery network
      • A CDN is a system of distributed servers (network) that deliver webpages and other web content to a user based on the geographic locations of the user, the origin of the webpage, and a content delivery server.
    • Edge locations is a location where content is cached. This is separate to an AWS Region/Availability zone
    • Origin is the origin of all the files that the CDN will distribute. This can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53.
    • Distribution - This is the name given to the CDN which consists of a collection of edge locations.
    • Amazon CloudFront can be used to deliver your entire website, including dynamic, static, streaming, and interactive content, using a global network of edge locations.
      • Requests for your content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.
    • Two different types of distribution are
      • Web distribution
        • Typically for websites
      • RTMP
        • Used for media streaming using Adobe Flash
    • Edge locations are not just read-only; we can write to them too (i.e. put an object on them).
    • Way of caching large files to nearest edge locations.
    • Objects are cached for the life of the TTL(Time to Live) which is always in seconds
    • We can clear cached objects early too but we will be charged for that.
  • Create a Cloud Front Distribution
    • Go to the CloudFront service dashboard.
    • Create a distribution
      • Create web distribution
      • Select the origin domain name, which can be an S3 bucket, an Elastic Load Balancer, a MediaPackage origin, or a MediaStore container.
        • Select S3 bucket here
      • Add origin path i.e. specific directory in your origin.
      • Origin id can be left as default generated.
      • We can restrict the S3 bucket so it can only be accessed via the CloudFront URL and not the S3 URL.
      • TTL can also be set by setting values for minimum TTL and maximum TTL.
      • We can restrict access so that users must use signed URLs or signed cookies to access content.
        • For example, Netflix wants only paid users to access content.
      • We can add an AWS WAF web ACL (web application firewall) in front of the distribution before content can be accessed.
      • Click "Create distribution" to create a distribution
        • This may take up to an hour.
      • Once a distribution is deployed we can use the domain name to access the bucket added to cloudfront.
      •  Once created under settings we can add Invalidations.
        • We can invalidate various individual objects or entire directory or sub directory
        • "/*" will invalidate everything
        • Invalidation means its no longer going to be on edge location.
      • If we want to delete a distribution we must disable it first
  • Cloud Front Signed Urls and Cookies
    • Use signed Url for individual files
      • 1 file=1 url
    • Use signed cookie for multiple files
      • 1 cookie = multiple files
    • When we create a signed url or signed cookie, we attach a policy
      • Policy includes
        • URL expiration
        • IP ranges
        • Trusted signers(which AWS accounts can create signed url's)
    • A signed URL uses an Origin Access Identity (OAI) to connect to S3.
    • The application generates a signed URL for authenticated users using the CloudFront API.
    • Using the signed URL the client can access the files in the bucket.
    • Cloud Front Signed Url
      • Can have different origins; it does not have to be EC2.
      • The key pair is account-wide and managed by the root user.
      • Can utilize caching features
      • Can filter by date,path,IP address, expiration etc
    • S3 signed url
      • Issues a request as the IAM user who created the presigned URL
      • Limited lifetime
    • Use signed url's/cookies when you want to secure content so that only the people you authorize are able to access it.
    • If the origin is EC2 then use CloudFront signed URLs; otherwise we can use S3 presigned URLs.
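For the S3 side, a presigned URL with a limited lifetime is a one-liner in boto3 (the bucket, key, and expiry are placeholders); CloudFront signed URLs work similarly but are signed with a CloudFront key pair instead of the caller's IAM credentials.

```python
import boto3

s3 = boto3.client("s3")

# URL valid for one hour; the request is made as the IAM identity that signed it.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "private/report.pdf"},
    ExpiresIn=3600,
)
print(url)
```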
  • Snowball
    • Snowball is a petabyte scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS.
    • Snowball addresses common challenges with large scale data transfer, including high network costs, long transfer times, and security concerns.
    • Transferring data with Snowball is simple, fast, secure, and can be as little as one fifth the cost of high speed internet.
    • Snowball is either 50TB or 80TB storage
    • Snowball uses multiple layers of security designed to protect your data including tamper resistant enclosures, 256 bit encryption and an industry standard Trusted Platform Module(TPM) designed to ensure both security and full chain-of-custody of your data.
    • Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance.
    • AWS Snowball edge is a 100 TB data transfer device with onboard storage and compute capabilities.
      • We can use snowball edge to move large amounts of data into and out of AWS, as a temporary storage tier for large local datasets or to support local workloads in remote or offline locations.
      • For example, airline testing is done at remote and offline locations, so aircraft can be equipped with Snowball Edge devices.
      • Snowball edge connects to our existing application and infrastructure using standard storage interfaces, streamlining the data transfer process and minimizing setup and integration.
      • Snowball edge can cluster together to form a local storage tier and process your data on-premises, helping ensure your applications continue to run even when they are not able to access the cloud.
    • AWS snowmobile is an exabyte scale data transfer service used to move extremely large amounts of data to AWS.
      • We can transfer up to 100 PB per Snowmobile, a 45-foot long rugged shipping container pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is secure, fast, and cost effective.
    • Snowball can import and export data from S3
    • We can order a snowball from Amazon from snowball dashboard under migration and transfer.
      • Select please send me a snowball.
      • A job is created and snowball is delivered to you.
      • Then snowball is sent back to AWS once it is uploaded with data.
      • We need to install Snowball CLI to access snowball on our system.
      • To unlock snowball we will find credentials on job details page itself.
  • Storage Gateway
    • AWS storage gateway is a service that connects an on premises software appliance with cloud based storage to provide seamless and secure integration between an organization's on premises IT environment and AWS's storage infrastructure.
    • The service enables you to securely store data to the AWS cloud for scalable and cost effective storage.
    • AWS Storage gateway's software appliance is available for download as a Virtual Machine(VM) image that you install on a host in your data center. 
    • Storage Gateway supports either VMWare ESXi or Microsoft Hyper-V.
    • Once you have installed your gateway and associated it with your AWS account through the activation process, you can use the AWS Management Console to create the storage gateway option that is right for you.
    • There are 3 different types of gateway
      • File Gateway(NFS & SMB)
        • Used to store files in S3 based on filesystem
        • Files are stored as objects in your S3 buckets and accessed through a Network File System(NFS) mount point
        • Ownership,permissions and timestamps are durably stored in S3 in the user metadata of the object associated with the file.
        • Once objects are transferred to S3 they can be managed as native S3 objects, and bucket policies such as versioning, lifecycle management, and cross-region replication apply directly to objects stored in your bucket.
        • It is used for flat files stored directly on S3.
      •  Volume Gateway(iSCSI)
        • Used to store copies of our HDD as virtual HDD in S3
        • The volume interface presents your applications with disk volumes using the iSCSI block protocol.
        • Data written to these volumes can be asynchronously backed up as point-in-time snapshots of your volumes and stored in the cloud as Amazon EBS snapshots.
        • Snapshots are incremental backups that capture only changed blocks.
        • All snapshot storage is also compressed to minimise your Storage charges.
        • It is of 2 types
          • Stored Volumes
            • Stored volumes let you store your Primary data locally while asynchronously backing up that data to AWS.
            • Stored volumes provide your on-premises applications with low latency access to their entire data sets while providing durable offsite backups.
            • We can create storage volumes and mount them as iSCSI devices from our on-premises application servers.
            • Data written to your stored volumes is stored on your on-premises storage hardware. This data is asynchronously backed up to Amazon Simple Storage Service (Amazon S3) in the form of Amazon Elastic Block Store (Amazon EBS) snapshots.
            • 1 GB-16TB in size for stored volumes.
            • Entire data set is stored on site and is asynchronously backed up to S3
          • Cached Volumes
            • Cached Volumes let us use Amazon S3 as our primary data storage while retaining frequently accessed data locally in our storage gateway.
            • Cached volumes minimise the need to scale our on-premises storage infrastructure while still providing our applications with low latency access to their frequently accessed data.
            • We can create storage volumes up to 32 TB in size and attach them as iSCSI devices from our on-premises application servers.
            • Entire data set is stored on S3 and the most frequently accessed data is cached on site.
      • Tape Gateway
        • Uses Virtual Tape Library(VTL)
        • Tape Gateway offers a durable, cost-effective solution to archive our data in the AWS cloud. The Virtual Tape Library (VTL) interface it provides lets us leverage our existing tape-based backup application infrastructure to store data on virtual tape cartridges that we create on our tape gateway.
        • Each Tape Gateway is pre-configured with a media changer and tape drives which are available to our existing client backup applications as iSCSI devices, to which we can add tape cartridges to archive our data.
        • Supported by NetBackup, Backup Exec, Veeam, etc.
  • Athena and Macie
    • Athena is an interactive query service which enables you to analyse and query data located in S3 using standard SQL.
      • Serverless, nothing to provision, pay per query/per TB scanned.
      • No need to set up complex extract/transform/load(ETL) processes.
      • Works directly with data stored in S3
      • Interactive query service.
      • It allows you to query data located in S3 using standard SQL.
      • Serverless.
      • Commonly used to analyse log data stored in S3
      • Athena can be used
        • To query log files stored in S3, e.g. ELB logs, S3 access logs, etc.
        • Generate business reports on data stored in S3.
        • Analyse AWS cost and usage reports.
        • Run queries on click stream data.
    • Macie
      • Macie is a security service which uses machine learning and NLP (natural language processing) to discover, classify, and protect sensitive data such as PII stored in S3.
      • PII, or Personally Identifiable Information, is used to establish an individual's identity.
        • This data can be exploited by criminals used in identity theft and financial fraud.
        • Example Home address, email address, SSN.
        • Passport number, drivers license number.
        • Date of birth, phone number, bank account, credit card number.
      • Uses artificial intelligence to recognise if your S3 objects contain sensitive data such as PII.
      • Dashboard reporting and alerts
      • Works directly with data stored in S3
      • Can also analyse CloudTrail logs.
      • Great for PCI DSS and preventing ID theft.
      • Uses AI to analyse data in S3 and helps identify PII.
      • Can also be used to analyse CloudTrail logs for suspicious API activity.
      • Includes dashboards, reports and alerting.
      • Great for PCI DSS compliance and preventing ID theft.
EC2
  • Stands for Elastic Compute Cloud (EC2).
  • Amazon Elastic Compute Cloud(Amazon EC2) is a web service that provides secure,resizable compute capacity in the cloud.
  • It is like a virtual machine which is hosted in AWS instead of your own Data Center.
  • We are in complete control of our owned instances.
  • Designed to make web-scale cloud computing easier for developers.
  • Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
  • Pay only for what you use.
  • Public IP changes each time we stop and start the Instance but does not change when we restart the instance.
  • The capacity you want when you need it.
    • Select the capacity that you need right now, grow and shrink when you need.
  • EC2 instances are provisioned in availability zones.
  • Pricing models of EC2 Are as follows
    • On demand
      • Allows you to pay a fixed rate by the hour (or by the second) with no commitment.
      • Charges depend on the instance you run.
      • Useful for users who want the low cost and flexibility of Amazon EC2 without any upfront payment or long term commitment.
      • Applications with short term, spiky, or unpredictable workloads that cannot be interrupted.
      • Applications being developed or tested on Amazon EC2 for the first time.
    • Reserved
      • Provides you with a capacity reservation and offers a significant discount on the hourly charge for an instance.
      • Contract terms are one year or three years.
      • Up to 72% discount on the hourly charge.
      • Operates at regional level.
      • Applications with steady state or predictable usage.
      • Applications that require reserved capacity.
      • Users able to make upfront payments to reduce their total computing costs even further.
      • Reserved pricing types
        • Standard reserved instances
          • These offer up to 75% off on-demand instances.
          • The restriction is that we cannot change the attributes of the reservation based on the application requirements.
          • The more you pay upfront and the longer the contract, the greater the discount.
          • Standard Reserved Instances cannot be moved between regions.
        • Convertible Reserved Instances
          • These offer up to 54% off on-demand, with the capability to change the attributes of the reserved instances as long as the exchange results in the creation of reserved instances of equal or greater value.
        • Scheduled Reserved Instances
          • These are available to launch within the time windows you reserve.
          • This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, week, or month.
    • Spot
      • Enables you to bid whatever price you want for instance capacity, providing even greater savings if your applications have flexible start and end times.
      • Spot instances are used for urgent capacity needs, flexibility, and cost sensitivity.
      • Purchase unused capacity at a discount of up to 90%.
      • Prices, fluctuate with supply and demand.
      • Applications that have flexible start and end times.
      • Applications that are only feasible at very low compute prices.
      • Users with urgent computing needs for large amounts of additional computing capacity on a temporary basis.
        • Workloads like image processing or parallel workloads, like genome sequencing, or even running calculations for algorithmic trading engines.
      • If the Spot instance is terminated by Amazon EC2 we will not be charged for a partial hour of usage.
      • However if we terminate the instance ourself we will be charged for any hour in which the instance ran.
    • Dedicated hosts
      • Physical EC2 Server Dedicated for your use.
      • This is the most expensive option.
      • Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses.
      • Useful for regulatory requirements that may not support multi-tenant virtualization.
      • Great for licensing which does not support multi-tenancy or cloud deployments.
      • Can be purchased on demand (hourly).
      • Can be purchased as a reservation for up to 70% off the on-demand price.
      • Uses
        • If you have software licenses tied to physical hardware.
        • If you have compliance requirements that mean you cannot use multi-tenant hardware for your application.
  • Savings Plans
    • Save up to 72%.
      • Applies to all AWS compute usage, regardless of instance type or region.
    • Commit to one or three years.
      • Commit to use a specific amount of computing power, measured in dollars per hour, for a one year or three year period.
    • Super flexible.
      • Not only EC2; also includes serverless technologies like Lambda and Fargate.
  • AWS pricing calculator
    • Search for service, configure it and prepare a cost estimate for you application.
    • Link is calculator.aws
  • Remember to turn off the instances that are no longer needed and the resources they are using.
  • Termination protection
    • Prevents accidental termination of instance.
    • Useful for production instances.
    • If you want to terminate your instance, you must first disable this.
Different EC2 instance types
Family   | Specialty                                      | Use case
F1       | Field Programmable Gate Array (FPGA)           | Genomics research, financial analytics, real-time video processing, big data, etc.
I3       | High speed storage                             | NoSQL DBs, data warehousing
G3       | Graphics intensive                             | Video encoding / 3D application streaming
H1       | High disk throughput                           | MapReduce-based workloads, distributed file systems such as HDFS and MapR-FS
T3       | Lowest cost, general purpose                   | Web servers / small DBs
D2       | Dense storage                                  | File servers / data warehousing / Hadoop
R5       | Memory optimized                               | Memory intensive apps / DBs
M5       | General purpose                                | Application servers
C5       | Compute optimized                              | CPU intensive apps / DBs
P3       | Graphics / general purpose GPU                 | Machine learning / Bitcoin mining, etc.
X1       | Memory optimized                               | SAP HANA / Apache Spark, etc.
Z1D      | High compute capacity and high memory          | Electronic design automation (EDA) and certain relational database workloads with high per-core licensing costs
A1       | ARM-based workloads                            | Scale-out workloads such as web servers
U-6tb1   | Bare metal                                     | Bare metal capabilities that eliminate virtualization overhead

Mnemonic to Remember
FIGHT DR MCPXZAU
  • EC2 instance types are categorized based on the following parameters.
    • Hardware
      • When we launch an EC2 instance, the instance type determines the hardware of the host computer used for your instance, i.e. where your instance is running.
    • Capabilities
      • Each instance type offers different compute, memory, and storage capabilities. These types are grouped into instance families.
    • Application requirements
      • Select an instance type based on the requirements of the application that you plan to run on your instance.
  • Instance types are optimized to fit different use cases and give you the flexibility to choose the appropriate mix of resources for your application.
  • The latest instance types are listed in the AWS documentation.
  • All instance types are organized into instance families.
  • When we start an EC2 Instance it goes through various status checks
    • System status checks
    • Instance status checks
  • Xen and Nitro are the two underlying hypervisors of EC2.
  • Launch EC2 instance
    • Select your region
    • Select EC2 service and on the EC2 service dashboard click on launch instance.
    • Select your machine image.
    • Select your instance type.
    • Configure instance details.
      • Number of instances.
      • Request for spot instance.
      • Network configuration.
      • Placement group.
      • Capacity reservation.
      • IAM role.
      • Shut down behaviour.
      • Enable termination protection.
      • Monitoring.
      • Tenancy.
      • Elastic inference.
      • T2/T3 Unlimited.
    • Add storage
      • A default root EBS volume is already provided.
      • We can add additional volumes too.
    • Add tags.
    • Configure security groups
      • Configure different types of communication protocols and ports.
    • Click on launch to launch instance.
    • Select an existing key pair or create a new key pair.
      • The private key (.pem file) will be used to SSH into our instance.
    • We can use Connect at the top of the dashboard to connect to our instance; this opens a terminal in another tab.
    • We can now encrypt our root device volume.
    • Termination Protection is turned off by default you must turn it on.
    • On an EBS Backed instance the default action is for the root EBS volume To be deleted when the instance is terminated.
    • EBS root volumes of our default AMIs can be encrypted. We can also use a third-party tool (such as BitLocker) to encrypt the root volume; this can be done when creating AMIs in the AWS console or using the API.
    • Additional volumes can be encrypted too.
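The same launch steps can be driven through the EC2 API. A minimal boto3 sketch is shown below; the AMI ID, key pair, subnet, and security group are placeholders that depend on your account and region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="example-key-pair",
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    DisableApiTermination=True,             # termination protection
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 8, "Encrypted": True, "DeleteOnTermination": True},
    }],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-instance"}],
    }],
)
print(resp["Instances"][0]["InstanceId"])
```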
  • Security groups
    • On EC2 dashboard click on security groups
      • We are able to create security groups, edit security groups, delete security groups.
      • We have description of our security group.
      • Inbound protocol rules.
      • Outbound protocol rules.
    • Every time we make a rule change in our security group the effect of change is immediate.
    • Because security groups are stateful, the response to a request we send out is allowed back in without a separate rule.
    • In a VPC we also have network access control lists (NACLs); these are stateless, so when we create an inbound rule we also need to create an outbound rule.
    • Security groups are stateful: when we create an inbound rule, the corresponding outbound response traffic is allowed automatically.
      • So if we allow HTTP in, it is already allowed out as well; the same applies to other protocols like RDP, SSH, MySQL, etc.
    • We can't blacklist an individual port or IP address with security groups, but we can do this with network access control lists in a VPC.
    • When we create a security group by default everything is blocked and we need to allow access to each and every port/protocol.
    • Actions ->Networking ->Change Security groups
      • We can add more than one security group to an instance from here.
    • In security groups
      • All inbound traffic is blocked by default.
      • All outbound traffic is allowed.
      • Changes to security groups take effect immediately.
      • We can have more than one EC2 instances in the security group.
      • We can have multiple security groups attached to EC2 instances.
      • Security groups are stateful.
        • If we create an inbound rule allowing traffic in, that traffic is automatically allowed back out again.
        • You cannot block a specific IP address using security groups; instead use network access control lists.
        • We can specify allow rules but not deny rules.
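A small boto3 sketch of creating a security group and adding inbound (allow) rules; the VPC ID and CIDR ranges are placeholders. Note there is no API for deny rules, matching the point above.

```python
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="example-web-sg",
    Description="Allow SSH from the office and HTTP from anywhere",
    VpcId="vpc-0123456789abcdef0",           # placeholder VPC
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},     # office range (placeholder)
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
# Because security groups are stateful, the response traffic for these
# connections is allowed back out automatically.
```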
  • Elastic Block Storage(EBS)
    • Amazon EBS (Elastic Block Store) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS cloud.
    • We can attach a volume to an EC2 instance and use it the same way as we use our system disk:
      • Create a file system.
      • Run a database.
      • Run an operating system.
      • Store data.
      • Install applications.
    • Each Amazon EBS volume is automatically replicated within its Availability zone to protect you from component failure, offering high availability and durability.
    • It can be used for production workloads i.e. designed for mission critical workloads.
    • It is highly scalable i.e. we can dynamically increase the capacity and change the type of volume with no downtime or performance impact to our live systems.
    • Termination protection is turned off by default you must turn it on.
    • If an Amazon EBS volume is attached as additional disk we can detach it without stopping Instance.
    • On an EBS backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.
    • EBS root volumes of your default AMIs can be encrypted. You can also use a third-party tool such as BitLocker to encrypt the root volume, or this can be done when creating AMIs in the AWS Console or using the API.
    • Additional Volumes can be encrypted as well.
    • The different types of EBS storage are
      • General Purpose (SSD)
        • General Purpose SSD volume that balances price and performance for a wide variety of transactional workloads.
        • Used in most workloads.
        • Its API name is gp2.
        • Volume sizes 1GB – 16 TB
        • Max IOPS/volume is 16,000
      • Provisioned IOPS(SSD)
        • Highest performance SSD volume designed for mission critical applications.
        • Used for databases
        • API Name is IO1
        • Volume sizes 4 GB – 16 TB
        • Max IOPS/volume is 64,000
      • Provisioned IOPS SSD io2 block express.
        • Storage Area Network(SAN) in the cloud.
        • Highest performance, sub millisecond, latency.
        • Uses EBS block express architecture.
        • 4 X throughput, IOPS, and capacity of regular io2 volumes.
        • Up to 64 TB and 256,000 IOPS per volume.
        • 99.999% durability.
        • Great for the largest, most critical, high-performance applications like SAP HANA, Oracle, Microsoft SQL Server, and IBM DB2.
      • Throughput optimised hard disk drive(st1)
        • Low-cost HDD volume designed for frequently accessed throughput-intensive workloads.
        • Used for big data and data warehouses.
        • API name is ST1.
        • Volume size 500 GB– 16 TB
        • Max throughput of 500 MB/s per volume.
        • Frequently-accessed, throughput intensive workloads.
        • Big data, data warehouses, ETL and log processing.
        • A cost-effective way to store mountains of data.
        • Baseline throughput of 40 MB/s per TB.
        • Ability to burst up to 250 MB/s per TB.
        • Cannot be a boot volume.
      • Cold hard disk drive(sc1)
        • Lowest cost HDD volume designed for less frequently accessed workloads.
        • Baseline throughput of 12 MB/s per TB.
        • Ability to burst up to 80 MB/s per TB.
        • Maximum throughput of 250 MB/s per volume.
        • A good choice for colder data requiring fewer scans per day.
        • Good for applications that need the lowest cost and performance is not a factor.
        • Cannot be a boot volume.
        • Used for file servers.
        • API name is SC1.
        • Volume size 500 GB– 16 TB
        • IOPS/volume 250
      • Magnetic hard disk drive
        • Previous generation HDD.
        • Workloads where data is infrequently accessed.
        • Volume size 1 GB – 1 TB
        • IOPS/volume Max 40–200
    • Cold(sc1) and throughput optimized(st1) are least expensive EBS options.
    • Types of General Purpose SSD
      • gp2
        • 3 IOPS per GB, up to a maximum of 16,000 IOPS per volume.
        • gp2 volumes smaller than 1 TB can burst up to 3000 IOPS.
        • Good for boot volumes, or development and test applications which are not latency sensitive.
      • gp3
        • The latest generation.
        • Baseline of 3000 IOPS for any volume size(1GB – 16 TB).
        • Delivering up to 16,000 IOPS
        • 20% cheaper than gp2.
        • Like GP2, they are good for boot volumes or development and test applications which are not latency sensitive.
    • Types of provisioned IOPS SSD
      • io1
        • The high-performance option and the most expensive.
        • Up to 64,000 IOPS per volume. 50 IOPS per GB.
        • Use if you need more than 16,000 IOPS.
        • Designed for I/O intensive applications, large databases, and latency sensitive workloads.
      • io2
        • Latest generation.
        • Higher durability and more IOPS.
          • io2 is the same price as io1
        • 500 IOPS per GB
          • Up to 64,000 IOPS
        • 99.999% durability, instead of up to 99.9%.
        • I/O intensive apps, large databases, and latency sensitive workloads. Applications which need high levels of durability.
  • Types of encryption in EBS volume.
    • Default encryption
      • If the encryption by default is set on your account by your account admin, you cannot create unencrypted EBS volumes.
    • Encrypted snapshots
      • If you create an EBS volume from an encrypted snapshot, then you will get an encrypted volume.
    • Unencrypted snapshots
      • If you create an EBS volume from an unencrypted snapshot, then encryption is optional, provided default encryption has not been set at the account level by your account admin.
  • EBS volumes and snapshots
    • Go to the EC2 dashboard, then go to Volumes, and we can see the volumes which are running.
    • The volume is in the same availability zone as of the EC2 instance.
    • If we create our first Snapshot, it may take some time to create.
    • If we terminate the instance the concerned volume is removed automatically.
    • When we add additional volumes delete on termination is not checked automatically.
    • To move your volume from one availability zone to another we must create a snapshot.
      • Select that volume
      • Click on actions and select create a snapshot.
      • The snapshot will then be shown in your snapshot section of the EC2 under EBS.
      • Next from that snapshot create an image.
      • We can deploy this image into other availability zone.
        • Give the image a name and make sure to select the right virtualisation type.
        • There are two types of virtualisation
          • Hardware assisted virtualisation(HVM)
            • HVM virtualisation uses hardware-assisted technology provided by the AWS platform.
            • With HVM virtualisation the guest VM runs as if it were on a native hardware platform, except that it still uses PV network and storage drivers for improved performance.
            • Some instance types support both PV and HVM while others support only one of them.
            • HVM is supported by most EC2 instance types.
          • Para virtual(PV)
      • We can see our images in the AMI section.
      • Select the image and click launch and we will see all the available EC2 instances that can be launched for that image.
        • Select an instance click next and configure the instance.
        • Select the availability zone for instance from the subnet option.
        • Select next to add additional storage, tags, security groups and launch the instance.
      • We can also move our images into different Regions by copying the AMI into another region.
      • To remove an AMI just deregister that AMI.
    • Volumes exist on EBS. Think of EBS as a virtual hard disk.
    • Snapshots exist on S3. Think of a snapshot as a photograph of the disk.
    • Snapshots are point-in-time copies of volumes.
    • Snapshots are incremental, that is, only the blocks that have changed since your last snapshot are moved to S3.
      • So if we create a new snapshot after changing the volume, only the changed blocks are replicated to S3.
    • To create a snapshot for Amazon EBS Volumes that serve as root devices you should stop the instance before taking the snapshot.
      • We can take the snapshot while instance is running too.
    • We can create AMI’s from snapshots.
    • We can change EBS volume sizes on the fly, including changing the size and storage type.
    • It is possible to perform API actions on existing Amazon EBS snapshot through AWS APIs, CLI, and AWS Console.
    • To move an EC2 volume from one availability zone to another, take a snapshot of it, create an AMI from the snapshot, and then use the AMI to launch an EC2 instance in the new availability zone.
    • To move an EC2 volume from one region to another, take a snapshot of it, create an AMI from the snapshot, and then copy the AMI from one region to the other. Then use the copied AMI to launch the new EC2 instance in the new region.
    • We can create AMI’s from both volumes and snapshots.
    • Snapshots of encrypted Volumes are encrypted automatically.
    • Volumes restored from encrypted snapshots are encrypted automatically.
    • We can share snapshots but only if they are unencrypted. 
    • The snapshots can be shared with other AWS accounts or made public.
    • aws ec2 create-snapshot is the CLI command to create a snapshot of an EBS volume.
    • Root Device Volumes can now be encrypted.
    • If you have an unencrypted root device volume that needs to be encrypted, do the following (a CLI sketch follows these steps).
      • Create a snapshot of the unencrypted root device volume.
      • Create a copy of the snapshot and select the encrypted option.
      • Create an AMI from encrypted snapshot.
      • Use that AMI to launch new encrypted instances.
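    • The same steps can be scripted with the AWS CLI. A minimal sketch (the volume ID, snapshot ID and region are placeholders, not real values):
      aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "unencrypted root volume snapshot"
      aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --encrypted --description "encrypted copy"
      # Then create an AMI from the encrypted copy and launch new instances from it.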
  • AMI types
    • EBS Vs Instance store
    • We can select AMI based on
      • Region
      • Operating system
      • Architecture (32 bit or 64 bit)
      • Launch permissions
      • Storage for root device(Root Device Volume)
        • Instance store(Ephemeral Storage)
        • EBS backed volumes
    • All AMIs are categorised as either backed by Amazon EBS or backed by instance store.
      • For EBS volumes
        • The root device for an instance launched from the AMI is an Amazon EBS volume created from an Amazon EBS snapshot.
      • For instance store volumes
        • The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.
    • From the EC2 dashboard, launch an instance store-backed AMI by selecting from community AMIs and filtering by instance store.
      • Select an instance type and configure the same.
      • We can add instance store volumes only here, before launching the instance; once the instance is launched we can only attach EBS volumes.
      • Add Tags, security groups and launch instance.
    • We can only reboot or terminate an instance store-backed instance; we cannot start or stop it.
      • This is because it is instance store backed.
      • So if we have an instance store volume sitting on top of a hypervisor and the hypervisor fails, the system status check will say impaired.
      • For an EBS-backed instance, we would stop the EC2 instance and start it again.
        • The instance will start off with a new hypervisor.
      • For an instance store-backed volume we cannot do this, so if our hypervisor fails we will not be able to get our instance back.
    • EBS-based storage is persistent, whereas instance store-backed storage is ephemeral.
    • Instance store volumes cannot be stopped. If the underlying host fails, you will lose your data.
    • EBS Backed instances can be stopped. You will Not lose the data on this instance if it is stopped.
    • We can reboot both, we will not lose data.
    • By default both root volumes will be deleted on termination however with EBS volumes you can tell AWS to keep root device volume.
  • ENI versus ENA versus EFA
    • ENI
      • Elastic network interface – essentially a virtual network card.
      • An ENI is simply a virtual network card for your EC2 instances. It allows:
        • A primary private IPv4 address from the IPv4 address range of your VPC.
        • One or more secondary private IPv4 addresses from the IPv4 address range of your VPC.
        • One Elastic IP address (IPv4) per private IPv4 address.
        • One public IPv4 address.
        • One or more IPv6 addresses.
        • One or more security groups.
        • A MAC address.
        • A source/destination check flag.
        • A description of the ENI.
      • ENI can be used in following scenarios
        • Create a management network.
        • Use network and security appliances in your PC.
        • Create dual-homed instances with workloads/roles on distinct subnets.
        • Create a low budget, high availability solution.
        • For basic networking: perhaps you need a management network separate from your production network, or a separate logging network, and you need to do it at low cost (see the CLI sketch below).
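      • A minimal CLI sketch of creating and attaching an additional ENI (the subnet, security group and instance IDs are placeholders):
        aws ec2 create-network-interface --subnet-id subnet-0123456789abcdef0 --groups sg-0123456789abcdef0 --description "management network interface"
        aws ec2 attach-network-interface --network-interface-id eni-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device-index 1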
    • EN (Enhanced Networking)
      • Enhanced networking uses single root I/O virtualisation (SR-IOV) to provide high-performance networking capabilities on supported instance types.
      • SR–IOV is a method of device virtualisation that provides higher I/O performance and lower CPU utilisation when compared to traditional virtualised network interfaces.
      • Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower instance latencies. There is no additional charge for using enhanced networking.
      • Use it where you want good network performance.
      • Depending upon your instance type enhanced networking can be enabled using:
        • Elastic Network Adapter (ENA), which supports network speeds of up to 100 Gbps for supported instance types.
        • Intel 82599 Virtual Function (VF) interface, which supports network speeds of up to 10 Gbps for supported instance types. This is typically used in older instances.
        • ENA is preferred over VF in most scenarios.
        • Use enhanced networking when you need speeds between 10 Gbps and 100 Gbps, or anywhere you need reliable, high throughput.
    • EFA(Elastic fabric adapter)
      • An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate high-performance computing (HPC) and machine learning applications.
      • EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems.
      • EFA can use OS bypass, which enables HPC and machine learning applications to bypass the operating system kernel and communicate directly with the EFA device.
        • It makes it a lot faster with a lot lower latency 
        • Not supported with Windows, Currently only Linux.
      • For when you need to accelerate high-performance computing(HPC) and machine learning application or if you need to do an OS bypass.
      • Mostly for HPC or machine learning scenarios EFA is used.
  • Encrypted root device volumes and snapshots
    • Earlier we couldn’t encrypt the root EBS volume during initial provisioning.
    • We had to provision our EC2 instance with an unencrypted root device volume, take a snapshot, copy the snapshot, and encrypt the copy.
      • From the encrypted snapshot we can create an AMI and launch instances with encrypted root device volumes.
    • Now we can encrypt root device volumes directly while creating an EC2 instance.
    • Launch an instance from EC2 – while selecting storage we now have an option to encrypt the root device.
    • We cannot delete a snapshot off the root device of an EBS Volume used by a registered AMI.
    • If you did not encrypt during the initial provisioning, follow these steps to encrypt:
      • Create a snapshot of the root volume.
      • Create a copy of this snapshot and while creating a copy we can encrypt the new copied snapshot.
      • Create image of this snapshot and using this image we can launch an EC2 instance.
      • While launching an EC2 instance from an encrypted snapshot, the storage should also be encrypted.
        • If we select the storage as non-encrypted it will give us a warning.
      • Snapshots of encrypted volumes are encrypted automatically.
      • Volumes restored from encrypted snapshots are encrypted automatically.
      • Snapshots can be shared with other AWS accounts or made public only if they are unencrypted.
      • We can now encrypt root device volumes upon creation of the EC2 instance.
      • To encrypt an unencrypted root volume.
        • Create a snapshot of the unencrypted Root device volume.
        • Create a copy of the snapshot and select encrypt option.
        • Create an AMI from the encrypted snapshot.
        • Use that AMI to launch new encrypted instances.
  • Spot Instances and Spot Fleets
    • Amazon EC2 spot instances let you take advantage of unused EC2 capacity in the AWS cloud.
    • Spot instances are available at up to a 90% discount compared to on-demand prices.
    • You can use spot instances for various stateless, fault-tolerant, or flexible applications, such as big data, containerised workloads, CI/CD, web servers, high-performance computing (HPC), and other test and development workloads.
    • To use spot instances we must first decide on a maximum spot price.
    • The instance will be provisioned as long as the spot price is below your maximum spot price.
      • The hourly spot price varies depending on capacity and region.
      • If the spot price goes above your maximum spot price you have two minutes to choose whether to stop or terminate your instance.
    • We can also use a spot block to stop our spot instances from being terminated even if the spot price goes over your maximum spot price. You can currently set spot blocks for between 1 and 6 hours.
    • Spot instances are useful for following tasks
      • Big data and analytics
      • Containerised workloads
      • CI/CD and testing
      • Web services
      • Image and media rendering 
      • High-performance computing
    • Spot instances are not useful for the following workloads:
      • Persistent workloads
      • Critical jobs
      • Databases
    • To launch a spot instance we create a request (a CLI sketch follows this section).
      • The request contains our maximum price
      • Desired number of instances
      • Launch specifications
      • Request type: one time/persistent
      • Valid from, Valid until
    • The request may fail, or it may launch an instance (i.e. become active).
    • Next, once the spot price rises above our maximum price we get an interruption; if the request is one-time, the instance is closed, and if it is persistent, it is stopped temporarily and relaunched once the spot price falls back below our maximum price.
    • Spot instances can save up to 90% of the cost of on demand instances.
    • Useful for any type of computing where you don’t need persistent storage.
    • We can prevent spot instances from terminating by using a spot block.
    • Spot fleets is a collection of spot instances and optionally on demand instances.
      • A spot fleet attempts to launch the number of spot instances and on-demand instances needed to meet the target capacity you specified in the spot fleet request.
      • The request for spot instances is fulfilled if there is available capacity and the maximum price you specified in the request exceeds the current spot price. The spot fleet also attempts to maintain its target capacity if your spot instances are interrupted.
      • Spot fleet will try and match the target capacity with your price restraints.
        • Set up different launch pools. Define things like EC2 instance type, operating system and availability zone.
        • We can have multiple pools and the fleet will choose the best way to implement depending on the Strategy you define.
        • Spot fleets will stop launching instances once you reach your price threshold or capacity desired.
        • Different strategies in spot fleets are as follows
        • Capacity optimised
          • The spot instances come from the pool with optimal capacity for the number of instances launching.
        • Diversified
          • The spot instances are distributed across all pools.
        • Lowest price
            • The spot instances come from the pool with the lowest price. This is the default strategy.
          • Instance pools to use count
            • The spot instances are distributed across the number of spot instance pools you specify. This parameter is valid only when used in combination with lowest price.
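    • A minimal CLI sketch of a one-time spot request, as referenced above (the price, AMI and instance type are example values):
      aws ec2 request-spot-instances --spot-price "0.05" --instance-count 1 --type "one-time" \
        --launch-specification '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t3.medium"}'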
  • EC2 hibernate
    • We have learned so far we can stop and terminate EC2 instances.
    • If we stop the instance, the data kept on the disk (with EBS) will remain on the disk until the EC2 instance is started again.
    • If the instance is terminated, then by default the root device volume is also terminated.
    • When we start our EC2 instance the following happens.
      • Operating system boots up.
      • User data script is run(bootstrap scripts)
      • Applications start(can take some time)
        • SQL Server
        • Apache tomcat etc.
    • EC2 Hibernate tells the operating system to perform hibernation (suspend-to-disk).
    • Hibernation saves the contents of the instance memory (RAM) to your Amazon EBS root volume.
    • We persist the instance’s Amazon EBS root volume and any attached Amazon EBS data volumes.
    • When we start our instance out of hibernation:
      • The Amazon EBS root volume is restored to its previous state.
      • The RAM contents are reloaded.
      • The processes that were previously running on the instance are resumed.
      • Previously attached data volumes are reattached and the instance retains its instance ID.
    • We do not need to restart the operating system or the applications.
    • With EC2 Hibernate, the instance boots much faster. The operating system does not need to reboot because the in-memory state (RAM) is preserved.
    • This is useful for:
      • Long running processes 
      • Services that take time to initialize
    • Go to EC2 dashboard and launch an instance
      • While configuring instance select “Enable hibernation as an additional stop behavior”.
      • To use hibernation root volume must be encrypted so in storage page encrypt the root volume.
      • Connect to your EC2 instance and from the shell run:
        • The uptime command
          • This shows how long the instance has been running.
      • Next, on the EC2 dashboard go to Instances, select the instance, go to “Actions”, then “Instance State”, and select “Stop - Hibernate”.
      • Start the instance once again and SSH into it. Run the “uptime” command again; we will see that the uptime has carried on from before hibernation, showing the operating system did not reboot.
    • EC2  hibernate preserves the in-memory RAM on persistent storage(EBS)
    • Much faster to boot up because we do not need to reload the operating system.
    • Instance RAM must be less than 150 GB.
    • Instance families include C3, C4, C5, M3, M4, M5, R3, R4 and R5.
    • Available for windows, Amazon Linux2 AMI, and Ubuntu.
    • Instances can’t be hibernated for more than 60 days.
    • Available for on-demand Instances and Reserved Instances.
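    • The hibernate behaviour described above can also be driven from the CLI. A minimal sketch (the AMI and instance IDs are placeholders); --hibernation-options at launch and --hibernate at stop are the relevant flags:
      aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m5.large --count 1 --hibernation-options Configured=true
      aws ec2 stop-instances --instance-ids i-0123456789abcdef0 --hibernate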
  • Cloud Watch
    • Amazon CloudWatch is a monitoring service for your AWS resources and the applications that you run on AWS.
    • Cloud watch monitors performance.
    • Cloud watch can monitor things like
      • Compute
        • EC2 instances
        • Autoscaling Groups
        • Elastic Load Balancers
        • Route 53 health checks
      • Storage and content delivery
        • EBS Volumes
        • Storage gateways
        • Cloud front
    • Host level metrics consist of
      • CPU
      • Network
      • Disk
      • Status checks
        • The system status check verifies that the underlying hypervisor/host is running.
        • The instance status check verifies the EC2 instance itself.
    • Cloud watch monitors performance
    • Cloud watch is used for monitoring performance
    • Cloud watch can monitor most of the AWS as well as your applications that run on the AWS.
    • Cloud watch with EC2 will monitor events every five minutes by default.
    • You can have one minute intervals by turning on detailed monitoring.
    • You can create cloud watch alarms which trigger notifications.
    • Cloud watch is all about performance.
    • To Check what network throughput is or Disk I/O In EC2 Instance that is checked via Cloud Watch.
    • While creating an instance, we must enable CloudWatch detailed monitoring if we want detailed monitoring from CloudWatch. This involves an additional cost.
    • We can see our system status check and instance status check under status checks.
    • Under monitoring we can see host level metric like
      • CPU utilization
      • Disk reads
      • Network 
      • Status checks
    • To set an alarm for CPU usage, go to CloudWatch under Management and Governance (a CLI sketch follows these steps).
      • Go to alarms
      • Select your metric from Cloud Front,EBS, EC2, S3.
      • Select per instance metric.
      • Select your instance ID with metric name to monitor (CPU utilization in this case).
      • Give the alarm a name and description.
      • Give the alarm a threshold to compare the metric against and a time duration in terms of data points.
      • Specify an action when alarm triggers.
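    • The same alarm can be created from the CLI with put-metric-alarm. A minimal sketch (the instance ID, threshold and SNS topic ARN are example values):
      aws cloudwatch put-metric-alarm --alarm-name high-cpu --namespace AWS/EC2 --metric-name CPUUtilization \
        --statistic Average --period 300 --evaluation-periods 2 --threshold 80 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alert-topic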
    • We can create cloud watch dashboards based on parameters such as regions etc.
    • Logs
      • Allows us to do performance logging essentially.
    • Events
      • Delivers a near real-time stream of system events that describe changes in AWS resources.
    • Standard Monitoring time is five minutes.
    • Detailed monitoring is one minute.
    • Cloud watch has following functions
      • Dashboards
        • Creates awesome dashboards to see what is happening with your AWS environment.
      • Alarms
        • Allows you to set alarms that notify you when particular thresholds are hit.
      • Events
        • Cloud watch events helps you to respond to state changes in your AWS resources.
      • Logs
        • Cloud watch logs helps you to aggregate, monitor and store logs.
    • Cloud watch monitors performance.
    • Monitoring the performance of an EC2 instance is done with CloudWatch.
  • Cloud Trail
    • AWS CloudTrail increases visibility into user and resource activity by recording AWS Management Console actions and API calls.
    • You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.
    • CloudTrail monitors API calls made against the AWS platform.
    • CloudTrail is all about auditing.
    • To check who provisioned which resources in AWS, such as S3 buckets or EC2 instances, we check CloudTrail.
    • Information on who set up an S3 bucket or who provisioned an EC2 instance is provided by CloudTrail.
  • AWS Command Line
    • Create a user and give it programmatic access from console.
    • Get its access key ID and secret access key.
    • Under the user’s Security Credentials tab, we can generate a new access key if we have lost the previous one.
    • Launch an instance, preferably a t2.micro, which is a general-purpose type.
      • Create a key pair for the instance before launching and download the .pem file.
      • Move the .pem file to your SSH directory and change its permissions to 400.
    • SSH into your instance and elevate your privileges to root.
    • Next, run commands for AWS services, for example:
      • aws s3 ls
        • Used to list buckets
      • aws s3 mb s3://bucket_name
        • Used to create/make a bucket
    • We must configure credentials before using AWS services
      • Run aws configure
      • Enter your access key ID, secret access key, and default region name (e.g. us-east-1)
    • There is a hidden directory in your home directory called “.aws”
      • It contains our configuration and our credentials.
      • From the home directory just run “cd .aws” and it will navigate into this directory.
    • We can interact with AWS from anywhere in the world just by using command line.
    • You will need to set up access in IAM.
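    • Putting the pieces together, a typical first CLI session looks like this (the bucket name and region are examples; the keys come from the IAM user created above):
      aws configure                              # prompts for access key ID, secret access key, region, output format
      aws s3 ls                                  # list all buckets in the account
      aws s3 mb s3://my-example-bucket-12345     # make a new bucket (names must be globally unique)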
  • Identity and access management roles
    • Go to IAM dashboard
    • Click on roles and we will see our list of roles
    • Click create a role to create a new role.
    • Choose the service that will use the role.
    • Attach a policy.
    • Give Role name.
    • Click create role to finish creating the role.
    • Next go to EC2 dashboard.
    • Select your EC2 instance
    • Go to Actions -> Instance Settings -> Attach/Replace IAM Role.
    • Select your new role name and click apply.
    • SSH into your instance and delete the “.aws” folder from the home directory.
      • This will delete any previously stored security credentials.
    • Next, try to list buckets using the command:
      • aws s3 ls
      • We see that the command runs despite the fact that no stored credentials are present.
      • This is because we have attached a role with an admin access policy to the instance.
    • Roles are more secure than storing your access key and secret access key.
    • Roles are easier to manage.
    • Roles can be assigned to an EC2 instance after it is created, using both the console and the command line (see the CLI sketch after this list).
    • Roles are Universal – you can use them in any region.
    • When you update the policy used by an IAM role attached to an EC2 instance, the changes take effect immediately.
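    • Attaching a role from the command line uses an instance profile of the same name as the role. A minimal sketch (the instance ID and role/profile name are placeholders):
      aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=MyS3AdminRole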
  • Using Bootstrap scripts
    • Automate AWS EC2 deployments.
    • Bootstrap Scripts run when EC2 instance first boots.
    • Can be a powerful way of automating software installs and updates.
    • Running commands when our EC2 boots.
      • Helps to install updates
      • Helps to install software like Apache, etc.
    • Provision an EC2 instance for testing; we can choose type t2.micro.
      • While provisioning, under Configure Instance click on Advanced Details and enter the script (as text or as a file), starting with #!/bin/bash (see the example script below).
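      • An example bootstrap (user data) script that patches the instance and installs Apache. This is a common illustration assuming an Amazon Linux 2 style AMI, not a script taken from these notes:
        #!/bin/bash
        yum update -y                 # install the latest updates
        yum install -y httpd          # install the Apache web server
        systemctl start httpd         # start Apache now
        systemctl enable httpd        # start Apache on every boot
        echo "<html><h1>Hello from the bootstrap script</h1></html>" > /var/www/html/index.html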
  • Instance Metadata
    • Used to get information about an instance such as public IP etc.
    • SSH into your EC2 instance
    • Elevate to root privileges
    • Curl http://169.254.169.254/latest/user-data and we will be able to see the script we passed in Advanced Details while creating the instance.
    • Curl http://169.254.169.254/latest/meta-data and we will get the available metadata endpoints.
      • These are end points to metadata
    • Append any of these endpoints after meta-data/ to get the respective metadata information.
    • Meta-data is used to get information about an instance such as public IP.
    • curl http://169.254.169.254/latest/meta-data/
      • To get user data run
      • curl http://169.254.169.254/latest/user-data/
    • To retrieve the private and public IP addresses of the EC2 instance, query:
      • http://169.254.169.254/latest/meta-data/local-ipv4
      • http://169.254.169.254/latest/meta-data/public-ipv4
  • EFS
    • Amazon EFS (Elastic File System) is a file storage service for Amazon Elastic Compute Cloud (Amazon EC2) instances.
    • Amazon EFS is easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily.
    • With Amazon EFS, storage capacity is Elastic, growing and shrinking automatically as you add and remove files so our applications have the storage they need when they need it.
    • Two EC2 instances can share an EFS volume, whereas they cannot share an EBS volume.
    • Go to EFS under storage
      • Provision an EFS file system.
      • Select the availability zones it will be spread across and select the security group for each zone.
      • Next configure tags, lifecycle policies, throughput mode, performance mode, and whether encryption is enabled or disabled.
      • Review and create your file system
    • Provision two new EC2 instances and add a user data script if needed.
      • Under Advanced Details add script to install Apache server.
    • Web servers communicate with EFS via the NFS protocol, so we need to allow NFS in the inbound rules of the security group selected for EFS.
      • Allow NFS traffic from the security group of our EC2 instances.
    • Once the mount target states of EFS are available we can use it or mount it on our EC2.
    • Install the Amazon EFS utilities on the EC2 instance using the following command:
      • yum install -y amazon-efs-utils
    • SSH into your first EC2 instance, elevate privileges to root, and check for the /var/www directory to ensure the Apache server is installed.
    • SSH into your second EC2 instance, elevate privileges to root, and make the same checks.
    • Next, on the EFS dashboard there is a link called “Amazon EC2 mount instructions (from local VPC)”.
      • We have commands to mount our EFS here
    • If we want Encryption in transit of our data we should use the TLS mount option.
      • For this, make sure encryption at rest is turned on for EFS.
      • mount -t efs -o tls fs-98190582:/ /path/to/directory
        • /path/to/directory is the path where EFS will be mounted.
    • Use the above command on both instances to mount the EFS, preferably at “/var/www/html”, so that we can add files and test from the Apache server using the public IP address (a condensed sketch follows this walkthrough).
    • Create an HTML file in the mount directory with some HTML content.
    • We will find the file in the other instance too; if we change the file in one instance, the change is reflected in the other.
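    • A condensed sketch of the commands run on each instance (the file system ID fs-98190582 is the example ID used above; replace it with your own):
      sudo yum install -y amazon-efs-utils                  # EFS mount helper
      sudo mount -t efs -o tls fs-98190582:/ /var/www/html  # mount with encryption in transit
      echo "hello from instance 1" | sudo tee /var/www/html/test.html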
    • EFS is a way of having common file systems of storage using NFS between different EC2 instances.
    • Supports the network file system version 4(NFSv4) Protocol.
    • We only pay for storage we use there is no pre provisioning required.
    • Can scale up to the petabytes.
    • Can support thousands of concurrent NFS connections.
    • Data is stored across multiple availability zones with in a region.
    • Read after Write consistency.
    • Amazon does not support EC2 instances running Windows connecting to EFS; it is Linux-only.
    • Use EFS when you need distributed, highly resilient storage for Linux instances and Linux-based applications.
    • For a NAS-style, file-based system we use EFS.
  • Amazon FSx for Windows and Amazon FSx for Lustre
    • Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system, so you can easily move your Windows-based applications that require file storage to AWS.
    • Amazon FSx is built on Windows Server.
    • Designed for use with Microsoft applications like SQL Server, Active Directory, IIS, SharePoint, etc.
    • A managed Windows Server that runs Windows Server Message Block (SMB)-based file services.
    • Designed For windows and windows application.
    • Supports Active Directory users, access control lists, groups, and security policies, along with Distributed File System (DFS) namespaces and replication.
    • It is different from EFS in following terms
      • EFS is a managed NAS filer for EC2 instances based on network file system (NFS) Version 4.
      • One of the first network file sharing protocols native to UNIX and Linux.
    • For SMB-based (Server Message Block) storage we use FSx for Windows.
    • Use it when you need centralized storage for Windows-based applications such as SharePoint, Microsoft SQL Server, WorkSpaces, IIS web server, or any other native Microsoft application (SMB storage).
  • Amazon FSX for Lustre
    • Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high-performance computing, machine learning, media data processing workflows, and electronic design automation (EDA).
    • With Amazon FSx, you can launch and run a Lustre file system that can process massive data sets at up to hundreds of gigabytes per second of throughput, millions of IOPS, and sub-millisecond latencies.
    • Designed specifically for fast processing of workloads such as machine learning, high-performance computing (HPC), video processing, financial modeling, and electronic design automation.
    • Lets you launch and run a file system that provides sub-millisecond access to your data and allows you to read and write data at speeds of up to hundreds of gigabytes per second of throughput and millions of IOPS.
    • Use it when you need high-speed, high-capacity distributed storage, for applications that do high-performance compute (HPC), financial modeling, etc.
    • FSx for Lustre can store data directly on S3.
  • EC2 placement groups
    • EC2 placement group is a way of placing your instances.
    • Three types of placement groups are there
      • Clustered placement group
        • A cluster placement group is a grouping of instances within a single availability zone.
        • This Placement group is recommended for applications that need low network latency, high network throughput, or both.
        • Only certain instances can be launched into a clustered placement group
        • A clustered placement group is used for cases where low network latency / high network throughput is required.
        • A clustered placement group can’t span multiple availability zones.
      • Spread placement group
        • A spread placement group is a group of instances that are each placed on distinct  underlying hardware.
        • Spread placement groups are recommended For applications that have a small number of critical Instances that should be kept separate from each other. 
        • Think of individual Instances.
        • They may or may not be in the same availability zone but are configured on different racks.
        • Spread placement group is where we have individual critical EC2 Instances.
        • Spread and partitioned placement groups can span multiple availability zones.
        • A spread placement group can be deployed across multiple availability zones.
      • Partitioned placement group
        • When using partitioned placement groups Amazon EC2 divides each group into logical segments called partitions.
        • Amazon EC2 ensures that each partition within a placement group has its own set of racks.
        • Each rack has its own network and power source.
        • No two partitions within a placement group share the same racks, allowing you to isolate the impact of hardware failure within your application.
        • Think multiple instances
        • In partitioned placement group we have multiple instances on the rack as compared to spread placement group where we have only one instance on a rack.
        • Partitioned placement group for multiple instances on a rack
          • HDFS, HBase, Cassandra
    • The name you specify for a placement group must be unique within your AWS account.
    • Only certain types of instances can be launched in a placement group
      • Compute optimized
      • GPU
      • Memory optimized
      • Storage optimized
    • AWS recommends homogeneous instances within clustered placement groups.
    • You can’t merge placement groups.
    • You can’t move an existing instance into a placement group via the console. You can create an AMI from your existing instance and then launch a new instance from the AMI into the placement group.
    • Alternatively, you can move or remove an existing instance using the AWS CLI or an AWS SDK (not the console yet), but the instance must be in the stopped state before you move it (see the CLI sketch below).
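    • A minimal CLI sketch of creating a cluster placement group and launching instances into it (the group name, AMI and instance count are example values):
      aws ec2 create-placement-group --group-name my-hpc-group --strategy cluster
      aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type c5.large --count 2 \
        --placement GroupName=my-hpc-group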
  • HPC on AWS
    • HPC is easily achievable on AWS machines.
    • We can create a large number of resources in almost no time.
    • We only pay for resources we use and once finished we can destroy the resources.
    • HPC is used for industries such as genomics, finance and financial risk modeling, machine learning, weather prediction, and even Autonomous driving.
    • Different services we can use to achieve HPC on AWS are as follows
      • Data transfer
        • Data transfer can be done using
          • Snowball and Snowmobile (terabytes/petabytes worth of data).
          • AWS DataSync to store on S3, EFS, FSx for Windows, etc.
          • Direct connect
            • Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS.
            • Using AWS Direct Connect, we can establish private connectivity between AWS and our data center, office, or colocation environment, which in many cases can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
      • Compute and networking
        • Compute and Networking services that allow us to achieve HPC on AWS are as follows
        • Compute services include
          • EC2 Instances that are GPU or CPU optimized
          • EC2 Fleet spot instances and spot fleets.
          • Placement Groups (cluster placement groups)
        • Network services include
          • Enhanced Networking single root I/O Virtualization SR-IOV.
            • Enhanced Networking Uses single root I/O Virtualization (SR-IOV) to provide High-performance networking capabilities on supported instance types.
            • SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces.
            • Enhanced Networking provides higher bandwidth,Higher packet per second performance (PPS), and consistently lower inter-instance latencies.
            • There is no additional charge for using enhanced networking.
            • Use where you want good network performance.
            • Depending on our instance type enhanced Networking can be enabled using
              • Elastic network adapters
                • Elastic Network Adapter (ENA), which supports network speeds of up to 100 Gbps for supported instance types.
              • Intel 82599 Virtual Function (VF) interface, which supports network speeds of up to 10 Gbps for supported instance types.
                • This is typically used on older instances (Legacy).
            • We mostly choose ENA over VF in most scenarios.
          • Elastic Fabric adapters.
            • An Elastic Fabric Adapter (EFA) is a network device which you can attach to your Amazon EC2 instance to accelerate HPC and machine learning applications.
            • EFA provides lower, more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems.
            • EFA can use OS bypass, which enables HPC and machine learning applications to bypass the operating system kernel and communicate directly with the EFA device. It makes it a lot faster with much lower latency. It is not supported on Windows, currently only Linux.
      • Storage
        • Storage services that allow us to achieve HPC on AWS are as follows
          • Instance attached storage
            • EBS
              • Scale up to 64,000 IOPS with Provisioned IOPS (PIOPS).
            • Instance store
              • Scale millions of IOPS; lower latency
          • Network storage
            • Amazon S3
              • Distributed object based storage, not a file system.
            • Amazon EFS
              • Scale IOPS based on total size, or use provisioned IOPS.
            • Amazon FSx for Lustre
              • HPC-optimized distributed file system.
              • Millions of IOPS, which can also be backed by S3.
      • Orchestration and automation
      • Orchestration and automation services that allow us to achieve HPC on AWS
        • AWS batch
          • AWS batch enables developers, scientists and engineers to easily and Efficiently Run hundreds of thousands of batch computing jobs on AWS. 
          • AWS batch supports multi node Parallel jobs which allows you to run a single job that spans multiple EC2 instances.
          • You can easily schedule jobs and launch EC2 instances according to your needs.
        • AWS parallel cluster
          • Open source cluster management tool that makes it easy for you to deploy and manage HPC clusters on AWS
          • Parallel cluster uses a simple text File to model and provision all resources needed for your HPC applications in an automated and secure manner.
          • Automate creation of VPC, subnet, cluster type, and instance types.
  • AWS WAF
    • AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon Cloud Front, an Application Load Balancer or API Gateway.
    • AWS WAF also lets you control access to your content.
    • Since HTTP and HTTPS requests occur at the application layer (layer 7 of the OSI model), WAF is also called a layer 7 firewall.
    • It can see the query string parameters or various other parameters that are passed in a web service or HTTP request.
    • Physical hardware firewalls typically only operate up to layer 4.
    • Web application firewalls can see more information than a typical firewall, so they offer finer-grained protection.
    • We can configure conditions such as what IP addresses are allowed to make this request or what query string parameters need to be passed for the request to be allowed.
    • Then the application load balancer or Cloud front or API Gateway will either allow this content to be received or to give a HTTP 403 status code.
    • AWS WAF allows three different behaviours:
      • Allow all requests except the ones you specify.
      • Block all requests except the ones you specify.
      • Count the requests that match the properties you specify.
    • To provide extra protection we can define conditions by using characteristics of web requests such as
    • IP address that request originate from
    • Country that request originate from
    • Values in request headers
    • Strings that appear in requests, either specific strings or strings that match regular expression (regex) patterns.
    • Length of request
    • Presence of SQL code that is likely to be malicious (known as SQL injection).
    • Presence of a script that is likely to be malicious (known as cross-site scripting)
  • To block malicious IP address, prevent SQL attacks or SQL injection, prevent Cross site scripting, block individual countries we use AWS WAF.
  • We also use network ACL to block malicious IP addresses.
AWS Caching
  • Elastic Cache
    • ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches instead of relying entirely on slower disk-based databases.
    • ElastiCache supports two open-source, in-memory caching engines:
      • Memcached
      • Redis
Requirement                        Memcached   Redis
Simple cache to offload DB         Yes         Yes
Ability to scale horizontally      Yes         Yes
Multithreaded performance          Yes         No
Advanced data types                No          Yes
Ranking/sorting data sets          No          Yes
Publish/subscribe capabilities     No          Yes
Persistence                        No          Yes
Multi-AZ                           No          Yes
Backup and restore capabilities    No          Yes

    • Use ElastiCache to increase database and web application performance.
    • If a database is overloaded, two steps we can take to make it perform better are:
      • Add a read replica.
      • Add ElastiCache.
        • Redis is Multi-AZ capable.
        • You can do backups and restores of Redis.
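    • A minimal sketch of standing up a small Redis cache with the CLI (the cluster ID and node type are example values):
      aws elasticache create-cache-cluster --cache-cluster-id my-redis-cache --engine redis \
        --cache-node-type cache.t3.micro --num-cache-nodes 1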
  • Caching strategies on AWS
    • The following services have caching capabilities:
      • Cloud front(caches at edge location)
      • API Gateway
      • Elastic Cache-Memcached and Redis
      • Dynamo DB Acceleration (DAX)
    • Caching may happen at different levels, or at just one level, depending on the architecture.
      • Cloud front may directly Cache from S3, EC2, etc.
      • CloudFront may get information from API Gateway, which may get information from Lambda/EC2, which may get information from ElastiCache (Memcached or Redis) or DynamoDB/RDS.
        • In this case we can have caching at CloudFront, API Gateway, Lambda/EC2, ElastiCache, or RDS/DynamoDB.
        • The deeper in the architecture the cache sits, the more latency we are going to have.
    • Caching is a balancing act between up-to-date, accurate information and latency. We can use the following services to cache on AWS:
      • Cloud front
      • API Gateway
      • Elastic cache-Memcache/Redis
      • Dynamo DB Accelerator (DAX)
  • Elastic map Reduce(EMR) overview
    • Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto.
    • With EMR, you can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions, and over three times faster than standard Apache Spark.
    • The central component of Amazon EMR is the cluster. A cluster is a collection of Amazon Elastic Compute Cloud (Amazon EC2) instances. Each instance in the cluster is called a node. Each node has a role within the cluster, referred to as the node type.
    • Amazon EMR also installs different software components on each node type, giving each node a role in a distributed application like Apache Hadoop.
    • The node types In Amazon EMR are as follows
      • Master node
        • A node that manages the cluster. The master node tracks the status of tasks and monitors the health of the cluster. Every cluster has a master node.
      • Core node
        • A node with software components that runs tasks and stores data in the Hadoop Distributed File System (HDFS) on your cluster. Multi-node clusters have at least one core node.
      • Task node
        • A node with software components that only runs tasks and does not store data in HDFS. Task nodes are optional.
    • If we lose our master node, we lose the log data stored on it.
      • We can configure a cluster to periodically archive the log files stored on the master node to Amazon S3.
      • This ensures the log files are available after the cluster terminates, whether through a normal shutdown or due to an error.
      • Amazon EMR archives the log files to Amazon S3 at five-minute intervals.
      • We can only set this up when we first create the cluster.
    • EMR is used for big data processing.
    • Consist of A master node, A core node, and (optionally) a task node.
    • By default, log data is stored on the master node.
    • We can configure replication to S3 at five-minute intervals for all log data from the master node. However, this can only be configured when creating the cluster for the first time.
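    • A minimal sketch of launching an EMR cluster from the CLI (the release label, key pair, bucket and instance details are example values):
      aws emr create-cluster --name "spark-demo" --release-label emr-6.3.0 \
        --applications Name=Spark Name=Hive --instance-type m5.xlarge --instance-count 3 \
        --use-default-roles --ec2-attributes KeyName=my-key --log-uri s3://my-bucket/emr-logs/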
Advanced IAM
  • AWS Directory Service
    • Family of managed services
    • Connect AWS resources with on-premises Microsoft Active Directory.
    • Standalone directory in the cloud.
    • Use existing corporate credentials to access AWS services.
    • SSO to any domain joined EC2 instance.
    • Active directory
      • On premises directory service
      • Hierarchical database of users, groups, and computers, organised into trees and forests.
      • Group policies.
      • Based on protocols LDAP and DNS
      • Supports Kerberos, LDAP, and NTLM authentication.
      • Highly available.
    • If we have our fleet of EC2 instances joined to an active directory domain we need not configure credentials on each and every instance.
    • AWS Managed Microsoft Active Directory
      • Managed services
      • Active directory domain controllers (DC’s) running Windows server
      • By default we get 2 domain controllers, each in a separate availability zone.
      • Reachable by applications in your VPC.
      • Add domain controllers for high availability and performance.
      • Exclusive access to domain controllers.
      • Extend existing Active Directory to on-premises using active directory trust.
      • AWS is responsible for following services
        • Multi availability zone deployment
        • Patch, monitor and recover
        • Instance rotation
        • Snapshot and restore
      • Customer is responsible for
        • Users, groups, GPO’s (Group Policy Objects)
        • Standard active directory tools.
        • Scale out domain controllers.
        • Trusts(Resource forest)
        • Certificate authorities (LDAPS)
        • Federation
    • Simple active directory
      • Standalone managed directory
      • Basic active directory features.
      • Small supports fewer than 500 users; large supports fewer than 5,000 users.
      • Easier to manage EC2.
      • Linux workloads that need LDAP.
      • Does not support trusts (can’t join on premises Active Directory)
    • Active directory connector
      • Directory gateway (proxy) for on-premises Active Directory.
      • Avoid caching information in the cloud.
      • Allows on-premises users to log in to AWS using Active Directory.
      • Join EC2 instances to your existing active directory domains.
      • Scale across multiple active directory Connectors.
    • Cloud directory
      • Directory Based store for developers.
      • Multiple hierarchies with hundreds of millions of objects.
      • Use cases
        • Organizational charts
        • Course catalog
        • Device registries
      • Fully managed services.
    • Amazon Cognito user pools.
      • Managed user directory for SaaS applications.
      • Sign-up and sign in for web or mobile.
      • Works with social media identities.
        • We can log in to a SaaS application using our Facebook, Google, or Amazon credentials.
    • Active Directory compatible Services
      • Managed Microsoft Active Directory (i.e., AWS Directory Service for Microsoft Active Directory).
      • Active directory connector.
      • Simple active directory.
        • We can sign in to Amazon services like Amazon WorkSpaces and QuickSight with Active Directory credentials.
    • Non-Active directory compatible services
      • Cloud directory 
      • Cognito user pools
  • IAM policies
    • Amazon resource name (ARN)
      • An ARN uniquely identifies any resource in AWS.
      • All ARNs begin with:
        • arn:partition:service:region:account-id:
          • Partition identifies the AWS partition the resource lives in, for example aws or aws-cn.
          • Service is any of the AWS services, like S3, EC2, RDS, DynamoDB, etc.
          • Region, like us-east-1 or eu-central-1, etc.
          • Then we have our 12-digit account ID.
      • ARN ends with
        • Resource
        • Resource-type/resource
        • Resource-type/Resource/Qualifier
        • Resource-Type/resource:qualifier
        • Resource-Type:Resource
        • Resource-Type:resource: qualifier
      • Examples of ARN are as follows
        • arn:aws:iam::123456789112:user/gaurav
        • arn:aws:s3:::my-bucket/image.png
        • arn:aws:dynamodb:us-east-1:123456789012:table/orders
        • arn:aws:ec2:us-east-1:123456789012:instance/*
      • Since IAM is global, there is no region value, so we have two consecutive colons (::).
      • Similarly, S3 buckets don’t need a region or account ID to uniquely identify them, so we have three consecutive colons (:::).
      • In the last example, /* is a wildcard that represents all EC2 instances.
    • IAM Policies
      • A JSON document that defines permissions.
      • Identity policies are attached to an IAM user, group, or role.
      • Resource policies are attached to a resource, for example S3 buckets, SQS queues, KMS encryption keys, and so on. Resource policies help us specify who has access to the resource and what actions they can perform on it.
      • No effect until attached.
      • List of statements
        • Each statement is enclosed in curly braces.
        • Each statement matches an AWS API request.
      • Example of IAM policy is as follows
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListAndDescribe",
            "Effect": "Allow",
            "Action": [
                "dynamodb:List*",
                "dynamodb:DescribeReservedCapacity*",
                "dynamodb:DescribeLimits",
                "dynamodb:DescribeTimeToLive"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SpecificTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGet*",
                "dynamodb:DescribeStream",
                "dynamodb:DescribeTable",
                "dynamodb:Get*",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:BatchWrite*",
                "dynamodb:CreateTable",
                "dynamodb:Delete*",
                "dynamodb:Update*",
                "dynamodb:PutItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/MyTable"
        }
    ]
}
      • Statements are identified and grouped by Sid (statement ID), as shown above.
      • Effect is either allow or deny.
      • Statements are matched based on their action.
      • And on the resource the action is against.
      • Go to IAM from AWS console
        • Go to policies
        • We have AWS managed policies and customer managed policies.
          • AWS managed policies are created by AWS for our convenience and are denoted by an orange box icon. These are not editable by us.
          • Customer managed policies are created by us.
        • We can create a policy using a visual editor or directly input them using JSON.
        • Some actions operate on a resource and some operate on objects within the resource; for example, ListBucket operates on a bucket while PutObject and GetObject operate on objects in a bucket, so we must specify the Resource accordingly.
      • Next we attach this policy to a role.
      • We attach the role to the AWS service or instances.
      • We can attach multiple policies of different resources to a role.
      • If we do not want to define an external policy we can also define an in-line policy to a role.
        • The scope of an inline policy is limited to the role in which it was created.
      • Any permissions that are not explicitly allowed are implicitly denied.
      • An explicit deny overrides everything else in any other policies.
      • Only attached policies have an effect.
      • AWS joins all applicable policies together when it performs its evaluation.
      • Permission boundaries
        • Used to delegate Administration to other users.
        • Prevent privilege escalation or unnecessarily broad permissions.
        • A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity.
        • Controls the maximum permissions an IAM policy can grant.
      • Use cases:
        • Developers creating roles for lambda functions.
        • Application owners creating roles for EC2 Instances.
        • Admins creating ad hoc users.
      • A permissions boundary can be set on a user from IAM (see the CLI sketch below).
        • We attach a managed policy to the user as its permissions boundary.
      • So even if the user has an admin access permissions policy, if the permissions boundary is set to DynamoDB only, they will not have admin access; their effective permissions are limited to the boundary.
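      • A minimal CLI sketch of applying a permissions boundary to a user (the user name is a placeholder; the boundary here is the AWS-managed DynamoDB full-access policy):
        aws iam put-user-permissions-boundary --user-name dev-user \
          --permissions-boundary arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess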
  • AWS Resource Access Manager(RAM)
    • Account isolation in AWS
      • Different accounts for different purposes like administration, billing.
      • Multi account strategy in AWS
      • Creates a challenge to share resources across accounts.
    • Resource access manager helps to overcome this challenge.
    • AWS resource access manager allows resource sharing between accounts.
      • We create resources centrally.
      • We can reduce operational overhead because we won’t be duplicating resources in each of our accounts, which can be a real pain to manage.
    • Following AWS resources can be shared using RAM.
      • App Mesh
      • Aurora
      • Code Build
      • EC2
      • EC2 image builder
      • License Manager
      • Resource Groups
      • Route 53
    • If account 2 is able to access a private subnet of account 1, then it can also create resources in that subnet.
    • Go to the RAM console in the account that owns the resources and click on “Create a resource share” (a CLI sketch follows this walkthrough).
      • Give it a name
      • Select resource type
      • Select resource
      • Add the account number of the account with which the resource needs to be shared.
      • Add tags if needed and create share
      • Now, clicking on the resource, we see that the shared resource status is “associated”.
      • But the shared principals status is “associating”.
      • Next, go to RAM in the second account and click on resource shares; there you will see a pending invitation.
        • This invitation is sent by account 1 to account 2 for sharing resource.
      • Click on pending invitation and click accept resource share.
      • Now the resource is shared from account 1 and will be visible on that resource dashboard.
      • We can create a clone of the resource and start working on it in our new account.
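    • The same share can be created from the CLI. A minimal sketch (the subnet ARN and the principal account ID are placeholders):
      aws ram create-resource-share --name shared-subnet \
        --resource-arns arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0 \
        --principals 444455556666
      # In the receiving account, list and accept the pending invitation:
      aws ram get-resource-share-invitations
      aws ram accept-resource-share-invitation --resource-share-invitation-arn <invitation-arn-from-previous-call>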
  • AWS Single Sign On
    • Manage user Permissions for all individual accounts.
    • The SSO service helps us to centrally manage access to AWS accounts and business applications.
    • Applications can be third-party apps such as
      • Office 365
      • Sales force
      • Dropbox
      • Github
      • Slack, etc.
    • We can sign into the apps using our AWS credentials
      • Centrally manage accounts
      • Use existing corporate identities
      • SSO access To business applications.
    • SSO Integrates with any SAML 2.0 security provider.
      • Security Assertion Markup Language (SAML) is used for logging users in to applications based on their sessions in another context.
      • All sign-on activity is recorded in AWS CloudTrail, which helps to meet audit and compliance requirements.
      • SAML is closely related to single sign-on (SSO).
  • Summary
    • Active directory
      • Connect AWS resources with on-premises Active Directory.
      • Use SSO to log into any domain joined EC2 instance.
      • AWS managed Microsoft active directory
        • Real active directory domain controllers running Windows server inside AWS.
      • Active directory trust
        • Extend the AWS-managed Active Directory to your on-premises Active Directory.
      • AWS versus customer responsibility
      • Simple Active Directory 
        • Does not support trusts (it can’t be joined to an on-premises Active Directory).
        • To connect to an on-premises Active Directory instead, use the Active Directory Connector.
        • The AD Connector is a directory gateway or proxy for your on-premises Active Directory.
      • Cloud directory
        • Services for developers looking to work with hierarchical data.
      • Cognito user pools
        • Managed user service that works with social media identities.
      • IAM policies
        • ARN
        • IAM policy structure
        • Effect/Action/Resource
        • Identity versus resource policies
        • Policy evaluation logic
        • AWS managed Versus customer managed policies
        • Permission boundaries
      • Resource Access Manager
        • Allows resource Sharing Between accounts.
        • Works on individual accounts or accounts in AWS organisation.
        • Types of resources we can share
      • Single sign-on(SSO)
        • Centrally manage access
        • Used to sign into third-party apps like G suite, office 365, salesforce.
        • Use existing identities.
        • Account level permission
        • SAML
Route 53
  • DNS
    • DNS is actually on port 53 and that is where Route 53 gets its name.
    • Route 53 is Amazon’s DNS service.
    • DNS is used to convert human-friendly domain names (such as http://cloudimplant.com) into an Internet Protocol (IP) address (such as 82.154.58.1).
    • IP addresses are used by computers to identify each other on the network.
    • IP addresses commonly come in two different forms: IPv4 and IPv6.
    • DNS is used to get IP address for a corresponding domain name.
    • Allows you to map a domain name that you own to:
      • EC2 instances.
      • Load Balancers
      • S3 buckets
    • IPv6 addresses were created because IPv4 addresses were running out.
      • IPv4 is a 32-bit field and has over 4 billion different addresses.
      • IPv6 was created to solve this depletion issue and has an address space of 128 bits, which in theory gives 340 undecillion addresses.
    • Top level domains
      • If we look at common domain names such as google.com, bbc.co.uk, cowin.gov.in, etc., we notice a string of characters separated by dots (periods).
      • The last word in a domain name represents the “top level domain”.
      • The second word in a domain name is known as a second level domain.
      • Some top level domains are
        • .com
        • .edu
        • .gov
        • .co.uk
        • .co.in
        • .gov.in
    • These domain names are controlled by the Internet assigned numbers authority or IANA in a root zone database which is essentially a database of all available top level domains.
    • We can view this database at
      • www.iana.org/domains/root/db
    • Domain registrars
      • Because all of the names in a given domain name have to be unique, there needs to be a way to organize this all so that domain names aren’t duplicated. This is where domain registrars come in.
      • A registrar is an authority that can assign domain names directly under one or more top-level domains. These domains are registered with InterNIC, a service of ICANN, which enforces uniqueness of domain names across the Internet. Each domain name becomes registered in a central database known as the WHOIS database.
    • Popular domain registrars are
      • Amazon, GoDaddy, Hostinger, Bluehost, etc.
    • Start of authority (SOA)
      • The SOA Record stores information about:
        • The name of the server That supplied the data for the zone.
        • The administrator of the zone. 
        • The current version of the data file.
        • The default time to live (TTL), in seconds, for resource records.
    • NS Records
      • NS stands for Name Server Records
      • They are used by top-level domain servers to direct traffic to the content DNS server which contains the authoritative DNS records.
      • So when we hit cloudimplant.blogspot.com in a browser, the browser goes to the top-level domain server, which returns the name server record corresponding to our domain, something like 191010 IN NS ns.blogspot.com.
      • Next we query the name server records for the record which gives us the SOA, i.e. the Start of Authority record.
      • Inside the Start of Authority we have the DNS records.
    • A Records
      • An “A” record is the fundamental type of DNS record.
      • The “A” in A record stands for address. The “A” record is used by a computer to translate the name of the domain to an IP address.
      • For example, http://cloudimplant.blogspot.com may point to http://132.9.11.70.
    • TTL
      • The length of time that a DNS record is cached on either the resolving server or the user’s own local PC is equal to the value of the “Time To Live” (TTL) in seconds. The lower the time to live, the faster changes to DNS records propagate throughout the Internet.
      • The default time to live is 48 hours.
      • So if the IP address behind a DNS record changes, it may take up to 48 hours for browsers to pick up the new IP address.
    • CName
      • A Canonical Name(CName) Can be used to resolve one domain name to another.
      • For example, a mobile website like m.cloudimplant.blogspot.com has a domain name that is used when users browse our domain via mobile devices. We may want mobile.cloudimplant.com to resolve to this same address.
    • Alias Records
      • Alias Records are used to map Resource record sets in your hosted zone to Elastic Load balancers,Cloud front distributions or S3 buckets that are Configured as websites.
      • Alias records work like a CNAME record in that you can map one DNS name to another “target” DNS name.
    • Understand the difference between Alias record and CNAME
      • A CNAME Can’t be used for naked domain names (Zone Apex Record).
      • You can’t have a CNAME for http://cloudimplant.blogspot.com, It must be either an “A record” or an Alias.
    • ELB’s (Elastic Load Balancer) do not have predefined IPv4 addresses,You resolve them using a DNS name.
    • Mostly we prefer an Alias record over a CNAME.
    • Common DNS types
      • SOA records
      • NS records
      • A records
      • C Names
      • MX records
        • Used for mail
      • PTR records
        • It’s the reverse of an “A” record: it is a way of looking up a DNS name against an IP address.
  • Register a domain name
    • Go to route 53 dashboard.
      • Start domain registration.
      • Write the domain name and check for its availability.
      • Add to cart and fill in registrant information.
      • Accept the terms and conditions and complete your purchase.
      • May take from 2 hours to 3 days to complete.
    • Once domain registration is successful you will see it in hosted Zones.
    • Next provision an EC2 instance.
    • We can buy domain names directly with AWS.
    • Route 53 routing policies available
      • Simple routing policy
        • If you choose the simple routing policy you can only have one record with multiple IP addresses.
        • If we specify multiple values in a record,Route 53 returns all values to the user in a random order.
        • Create a record set in your hosted zone in route 53
          • Add IP addresses of your EC2 Instances
          • Change the TTL to 1 minute.
          • We can also reset our TTL by flushing our DNS.
          • So when we try to hit our website it may resolve to any of the IP addresses specified.
        • Simple routing policy does not support health checks.
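        • As a rough sketch, the record set above can also be created with the AWS CLI; the hosted zone ID, domain name and IP addresses below are placeholders:
          aws route53 change-resource-record-sets \
            --hosted-zone-id Z0000000EXAMPLE \
            --change-batch '{
              "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                  "Name": "cloudimplant.com",
                  "Type": "A",
                  "TTL": 60,
                  "ResourceRecords": [
                    {"Value": "203.0.113.10"},
                    {"Value": "203.0.113.20"}
                  ]
                }
              }]
            }'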
      • Weighted routing
        • Allows you to split your traffic based on different weights assigned.
        • For example we can set 30% of our traffic to go to US-East-1 and 70% to go to EU-West-1.
        • Create a Record Set, add an IP address, change the routing policy to weighted, and give it a weight and an ID.
        • Create another Record Set, add the IP address desired for this traffic weight, add a weight, and give it an ID.
        • Like this we can add multiple Record Sets.
        • If we associate a Record Set with a health check and the health check fails, it will remove the Record Set from the list.
        • In health checks we can monitor various parameters of an IP address like the endpoint, CloudWatch alarms, etc.
        • We can set health checks on individual record sets.
        • If a record set fails a health check it is removed from Route 53 until it passes the health check.
        • You can set SNS Notifications to alert you if a health check failed.
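        • A CLI sketch of the two weighted record sets described above (zone ID, names, weights and IPs are example values): a 30/70 split is expressed as relative weights on two record sets that share the same name but different SetIdentifiers:
          aws route53 change-resource-record-sets \
            --hosted-zone-id Z0000000EXAMPLE \
            --change-batch '{
              "Changes": [
                {"Action": "UPSERT", "ResourceRecordSet": {
                  "Name": "cloudimplant.com", "Type": "A", "TTL": 60,
                  "SetIdentifier": "us-east-1", "Weight": 30,
                  "ResourceRecords": [{"Value": "203.0.113.10"}]}},
                {"Action": "UPSERT", "ResourceRecordSet": {
                  "Name": "cloudimplant.com", "Type": "A", "TTL": 60,
                  "SetIdentifier": "eu-west-1", "Weight": 70,
                  "ResourceRecords": [{"Value": "203.0.113.20"}]}}
              ]
            }'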
      • Latency based routing
        • Allows you to route your traffic based on the lowest network latency for your end-user (i.e which region will give them the fastest response time).
        • To use latency-based routing, you create a latency resource Record Set for the Amazon EC2 or (ELB) Resource in each region that hosts your website.
        • When Amazon route 53 receives a query For your site, it selects The latency Resource Record set For the region that gives the user the lowest latency.Route 53 then responds with the value associated with that record set.
          • Create a Record Set for each location routing policy type Latency.
          • The region will automatically populate based on location of EC2 Instance we can change it too.
          • Set ID and click create.
        • So if a user has lower latency to EU-West-2 than to AP-SouthEast-2 (both of which have a latency routing policy Record Set configured), the user will be redirected to EU-West-2.
      • Failover routing
        • Failover routing policies are used when you want to create an active/passive setup.
        • For example, you may want your primary site to be in EU-West-2 and your secondary site in AP-SouthEast-2.
        • Route 53 will monitor the health of your primary site using a health check.
        • Health check monitors the health of your endpoints.
        • We define two sites in records set active and passive.
        • When active fails request is routed automatically to passive website.
        • Create a Record Set in your hosted zone from Route 53.
        • Select the routing policy as failover.
        • Select if it’s a primary or secondary.
        • Similarly create a record set With secondary site.
        • We have 2 Record sets one with primary site and Other with secondary site.
      • Geo location routing
        • Geolocation routing lets you choose where your traffic will be sent based on the geographic location of your users, that is, the location from which DNS queries originate.
        • For example, you might want all queries from Europe to be routed to a fleet of EC2 instances that are specifically configured for European customers.
        • These servers may have the local language of your European customers and all prices displayed in Euros.
        • If we want users from a particular region to be redirected to a particular instance, we can use geolocation routing. For example, on a shopping website, European customers must be routed to the European version and US customers must be routed to the US version.
        • We can test this by using a VPN and redirecting our service request from different locations.
        • Create a record set with routing policy as Geo location.
        • Select location
        • Create another record set with another location.
        • Routes traffic based on your users location.
      • Geo proximity routing(traffic flow only)
        • Available in traffic flow mode only.
        • Geoproximity routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources.
        • You can also optionally choose to route more traffic or less to a given resource by specifying a value known as bias.
        • A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.
        • To use geo-proximity routing You must use route 53 traffic flow.
        • Go to traffic policies under traffic flow and create a traffic policy.
          • Give it a name and description.
          • Select the DNS type
          • Create a Geo proximity rule to connect to.
            • Enter end Point location coordinates.
            • Select a bias
            • Select health checks
          • We can add this for multiple regions based on coordinates and other parameters.
        • Next we can connect to endpoints based on conditions in these regions.
      • Multivalue answer routing
        • Multivalue answer routing lets you configure Amazon Route 53 to return multiple values, such as IP addresses for your web servers, in response to DNS queries.
        • You can specify multiple values for almost any record, but multivalue answer routing also lets you check the health of each resource, so Route 53 returns only values for healthy resources.
        • This is similar to Simple routing however it allows you to put health checks on each Record Set.
        • So in simple routing there are no health checks on record sets.
        • Create a record set from hosted zones under route 53.
        • Select routing Policy as multi-value answer
          • Click create
          • We See we can add health checks to multi-value answer
        • We can check this By terminating the EC2 Instance and we will see that it goes to other web server.
        • It is basically Simple routing with health checks.
    • DNS Summary
      • Elastic Load Balancing does not have predefined IPv4 addresses; you resolve to them using a DNS name.
      • Understand the difference between an alias record and a CNAME
        • Alias Record Is mostly used over CNAME
      • Common DNS types
        • SOA records
        • NS records
        • A records
        • C Names
        • MX records
        • PTR records
      • The following policies are available with Route 53
      • simple routing
      • Weighted routing
      • Latency based routing
      • Fail over routing
      • Geo location routing
      • Geo proximity routing(Traffic Flow Only)
      • Multivalue answer routing
      • Health checks
        • You can set Health checks on individual Record Sets
        • If a Record set fails a health check it will be removed from Route 53 until it passes the health check.
        • We can set SNS notifications to alert us if a health check fails.
VPC
  • Overview
    • Think of VPC as a virtual data center in the cloud.
    • Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services cloud where you can launch AWS resources in a virtual network that you define.
    • You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route Tables and Network gateways.
    • We can easily customize the network configuration for your Amazon virtual Private cloud.
    • For example, you can create a public facing Subnet for your web servers that has access to the Internet and place your backend systems such as Databases or application servers in a private facing Subnet with no Internet access.
    • You can leverage multiple layers of security including security group and network access control lists, to help control access to Amazon EC2 Instances in each Subnet.
    • We can create a hardware virtual private network connection between your corporate data center and your VPC and leverage AWS cloud as an extension of Your corporate data center.
    • An Internet gateway or virtual private gateway connects to a router, which checks the route table and then the network ACL, which in turn connects with the security group.
      • Network ACLs act like firewalls. They are stateless and allow us to have allow rules as well as deny rules.
      • They are the first line of defense.
      • Security groups are stateful and sit in front of the instances.
      • We have two subnets: a public subnet and a private subnet.
      • In a private subnet, EC2 instances can’t connect to the Internet on their own.
    • The Internet Assigned Numbers Authority has defined three sets of IP address ranges for private networks.
      • 10.0.0.0–10.255.255.255(10/8 prefix)
      • 172.16.0.0–172.31.255.255(172.16/12 prefix)
      • 192.168.0.0-192.168.255.255(192.168.1/16 prefix)
    • VPC uses
      • Launch Instances into a Subnet of your choosing.
      • Assign custom IP address ranges in each Subnet.
      • Configure route tables between subnets.
      • Create Internet Gateway and Attach it to our VPC.
      • Much better security control over your AWS resources.
      • Instance security groups.
      • Subnet Network access control lists(ACL’s).
    • Default VPC versus custom VPC
      • Default VPC is user-friendly, allowing you to immediately deploy Instances.
      • All subnets in Default VPC have a route out to The Internet.
      • Each EC2 Instance has both public and private IP address.
    • VPC peering
      • Allows you to connect one VPC to another via a direct network route using private IP addresses.
      • Instances behave as if they were on the same private network.
      • You can peer VPC’s With other AWS accounts As well as with other VPC’s In the same account.
      • Peering is in a star configuration, that is, one central VPC peers with four others. No transitive peering.
      • We can peer Between VPC and regions too
      • So if we have a star formation and VPC A is in the center, then to peer any two VPCs at its edges we need a new connection between them. That is, we can’t peer VPC B and VPC C via VPC A; we need a direct connection between VPC B and VPC C.
        • Transitive peering means to transit through one VPC to reach another, which is not allowed.
    • VPC Is a logical data center in AWS.
    • Consists of IGW’s or virtual private gateways, Route tables, network access control list, subnets, and security groups.
    • One Subnet can be a part of only one availability zone.
      • We can have multiple subnets in Availability Zones.
    • Security groups are stateful; network access control lists are stateless.
    • No transitive peering.
  • Create your own VPC
    • Go to VPC dashboard.
    • When we click on VPC dashboard there is always a default VPC.
    • Click on create a VPC
    • Give it a name
    • IPv4 CIDR block
    • IPv6 CIDR block
      • Preferably use Amazon provided IPv6 CIDR Block.
    • Tenancy can be on dedicated hardware, but it is costly, so for learning purposes use default.
      • Click create
      • A new VPC Is created.
    • Once the VPC is created we need to create a subnet, which is not created automatically.
    • An Internet gateway is also not created automatically.
    • A network ACL is created when we create a VPC.
    • A security group is also created once we create a VPC.
    • A route table is also created when we create a VPC.
    • So we have a router which checks the route table, which in turn checks the network ACL, which in turn checks the security group in our VPC stack.
    • Now to use this VPC we need to create some subnets.
      • Go to Subnet Dashboard and click create Subnet.
      • Give you subnet a name.
      • Select your VPC.
      • Select Availability zone.
      • Give an IPv4 address range in the IPv4 CIDR block.
      • Assign IPv6 if needed.
      • Click Create to create a Subnet.
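    • The same VPC and subnet creation can be sketched with the AWS CLI (the CIDR blocks, Availability Zones and IDs below are examples, not values from these notes):
      aws ec2 create-vpc --cidr-block 10.0.0.0/16
      aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
        --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
      aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
        --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
      # Make one subnet auto-assign public IPv4 addresses (covered again below)
      aws ec2 modify-subnet-attribute --subnet-id subnet-0aaa0aaa0aaa0aaa0 --map-public-ip-on-launch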
    • We can create more than one subnet as per requirements.
    • Use the website cidr.xyz to get the count of IP addresses available to us in a range.
    • Amazon reserves certain IP addresses which cannot be used from a range.
      • 10.0.0.0: Network address.
      • 10.0.0.1: Reserved by AWS for the VPC router.
      • 10.0.0.2: Reserved by AWS for the DNS server. This is the base of the VPC network range plus two, and AWS also reserves the base of each subnet plus two.
      • 10.0.0.3: Reserved by AWS for future use.
      • 10.0.0.255: Network broadcast address. AWS does not support broadcast in a VPC, therefore this address is reserved.
    • Next we need to make one of our subnets publicly accessible.
      • Select one of them, click on Actions, select Modify auto-assign IP settings, and enable auto-assign public IPv4 addresses.
    • Now we have a public subnet and a private subnet in our VPC stack.
    • Next to configure our route tables we need to add a network Gateway
      • Create Internet Gateway by Clicking on create Internet gateway on Internet Gateway dashboard.
      • Give it a name and click create.
      • We see its state is detached.
        • Next select this gateway go to Actions and select attach to VPC.
        • Select The VPC and click attach.
      • We can only Have one Internet Gateway per VPC.
      • Next, in our route tables we need to configure the routes associated with our VPC. Select the associated route table.
        • Our subnets inside the VPC can communicate with each other over IPv4.
        • We can check this in routes.
    • If we look at subnet associations we see that subnets which have not been explicitly associated with any route table are associated with the main route table. That means if the main route table has a route out to the Internet, any subnet we create is public by default, which is a security concern.
      • So we need to have our main route table as private and have a separate route table for our public subnets.
    • Create a New route table
      • Give it a name.
      • Select a VPC To associate with it and click create.
      • Click on edit routes and create a route Out to the Internet.
        • 0.0.0.0/0 as destination
        • Target is our Internet gateway.
        • ::/0 Is a Destination route out for IPv6 if needed.
        • Click on save routes. 
      • This has given a route out for IPv4 and IPv6 to our Internet gateway.
      • Any subnet associated with this route table will automatically become public for both IPv4 and IPv6.
      • Next go to Subnet association
        • Edit Subnet association and add Subnet to our public route table.
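      • A CLI sketch of the Internet gateway and public route table flow above (all IDs are placeholders):
        aws ec2 create-internet-gateway
        aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
        aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
        # Route all IPv4 and IPv6 traffic out through the Internet gateway
        aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
          --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
        aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
          --destination-ipv6-cidr-block ::/0 --gateway-id igw-0123456789abcdef0
        # Associate the public subnet with this route table
        aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 --subnet-id subnet-0aaa0aaa0aaa0aaa0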
      • Next we will provision two EC2 instances, one in the public subnet and the other in the private subnet.
      • When we create a VPC a Default Route table, network access control list (NACL) and a default security group is automatically created.
      • It won’t create any subnets, nor will it create a default Internet Gateway.
      • US-East-1A in your AWS account can be a completely different availability zone to US-East-1A in another AWS account. The Availability Zones are randomized.
      • Amazon always reserves 5 IP addresses within your subnets
      • You can only have one Internet Gateway per VPC.
      • Security groups can’t span VPC’s.
    • Create a new security group for non-public EC2.
    • Assign inbound rules so that we can communicate with it.
    • Use the IP address range of your VPC.
  • NAT Instances and NAT gateways
    • NAT stands for network address translation.
    • Enables EC2 instances in a private subnet to reach out to the Internet for software downloads.
    • They need a way to communicate with Internet Gateway.
    • We use NAT Instances and NAT gateways for this.
    • Mostly we use NAT gateways as they are spread across multiple Availability Zones.
    • NAT instances are just single EC2 instances.
    • To create a NAT Instance
      • Go to the EC2 dashboard and launch an instance; when choosing an Amazon Machine Image, go to Community AMIs and search for NAT; it will display NAT instance types, so select any one of these.
      • Select your Custom VPC and public Subnet.
      • Add Storage and Name and that’s it you are ready to go.
    • We must disable Source/Destination checks in NAT Instance as it should be able to send and receive traffic when source and destination is not itself.
      • Select your instance go to actions, Networking and select Change source and destination Checks  and disable them.
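      • The same check can be disabled from the CLI (the instance ID is a placeholder):
        aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check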
    • Create a route so that your EC2 Instances can talk to your NAT Instances.
      • In your main route table add this route.
      • Select Destination IP as 0.0.0.0/0 to make it public 
      • Select your NAT Instance as target,
      • Save This route.
    • SSH into your EC2 instance and try to update it using “yum update” as the root user; if this works, your EC2 instance can now talk to the Internet.
    • This NAT EC2 instance has a very limited scope and is only a single instance.
    • We use NAT gateways to resolve this limitation.
      • On VPC dashboard click on NAT gateways
      • Click create a NAT gateway
      • Select public Subnet
      • Select Elastic IP allocation ID
      • Click create NAT Gateway
      • Edit main Route table to point to this gateway.
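      • Roughly the same NAT gateway flow with the CLI (subnet, allocation and route table IDs are placeholders):
        aws ec2 allocate-address --domain vpc
        aws ec2 create-nat-gateway --subnet-id subnet-0aaa0aaa0aaa0aaa0 --allocation-id eipalloc-0123456789abcdef0
        # Point the private (main) route table's default route at the NAT gateway
        aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
          --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0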
    • When creating a NAT Instance, disable Source/Destination Check on the Instance.
    • NAT Instances must be in a public subnet.
    • There must be a route out of the Private Subnet to the NAT Instance, in order for this to work.
    • The amount of traffic that NAT instances can support depends on the instance size. If you are bottlenecking, increase the instance size.
    • You can create high availability in NAT Instances using auto scaling groups, multiple subnets, in different Availability Zones and a script to automate failover.
    • NAT Instances Are behind a security group.
  • NAT gateways
    • Redundant inside the availability zone.
    • Preferred by the enterprise. 
    • Starts at 5 Gbps and currently scales up to 45 Gbps.
    • No need to patch
    • Not associated with security groups
    • Automatically assigned a public IP address.
    • Remember to update your route tables.
    • No need to disable Source/Destination Checks.
    • If you have resources in multiple Availability Zones and they share one NAT gateway, then in the event that the NAT gateway’s Availability Zone is down, resources in the other Availability Zones lose Internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
  • Network access control lists vs security groups
    • Let’s create a new ACL “myWebNACL”
      • Give it a name.
      • Give it a VPC.
    • We can associate a Subnet to network ACL.
    • Once we associate a subnet with this network ACL, it is no longer associated with the default network ACL.
    • Any EC2 instances connected to that subnet will no longer be publicly accessible.
    • We need to allow inbound rules in network ACL so that associated EC2 can communicate with Internet.
    • Similarly we need to allow outbound rules as well for that network ACL for ports 80, 443 and the ephemeral ports 1024–65535.
      • An ephemeral port is a short-lived transport port for Internet Protocol (IP) communications. Ephemeral ports are allocated automatically from a predefined range by the IP stack software.
      • A NAT gateway uses ports from 1024–65535; that is why we have to select ephemeral ports from that range.
    • The rules in a network access control list are evaluated in numerical order of their rule number, and rule numbers are usually created in multiples of 100.
      • So if we want to deny specific traffic, the deny rule should come before (have a lower number than) the allow rule.
    • If we do not allow the inbound ephemeral port range, we cannot update our system.
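    • A sketch of adding equivalent NACL rules with the CLI (the ACL ID and rule numbers are examples):
      # Allow inbound HTTP and return traffic on ephemeral ports
      aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
        --ingress --rule-number 100 --protocol tcp --port-range From=80,To=80 \
        --cidr-block 0.0.0.0/0 --rule-action allow
      aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
        --ingress --rule-number 300 --protocol tcp --port-range From=1024,To=65535 \
        --cidr-block 0.0.0.0/0 --rule-action allow
      # Matching outbound rules are needed as well because NACLs are stateless
      aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
        --egress --rule-number 100 --protocol tcp --port-range From=80,To=80 \
        --cidr-block 0.0.0.0/0 --rule-action allow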
    • Our VPC automatically comes with the Default network ACL, and by default it allows all outbound and inbound traffic.
    • You can create custom network ACLs. By default, each custom network ACL denies all inbound and outbound traffic until we add rules.
    • Each Subnet in your VPC must be associated with a network ACL.If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
    • Block IP addresses using network ACLs, not security groups.
    • We can associate a Network ACL with multiple subnets, however a subnet can be Associated with only one network ACL at a time.When you associate a network ACL with a Subnet, the previous association is removed.
    • Network ACL’s contain a numbered list of rules that is evaluated in order,Starting with the lowest numbered rule.
    • Network ACL’s Have separate inbound and outbound rules, and each rule can either allow or deny traffic.
    • Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
  • Custom VPC’s and Elastic load balancer’s
    • Create a load balancer; there are 3 types:
      • Application load Balancer 
      • Network load balancer
      • Classic load balancer
    • Let’s create an application load balancer
      • Give it a name 
      • Give it scheme
        • Internet facing
        • Internal
      • IP address type
      • Listener port and protocol.
      • Availability Zones
        • When provisioning a load balancer we need at least two public subnets.
  • VPC flow logs
    • VPC flow logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
    • Flow Log data is stored using Amazon cloud watch logs
    • After we have created a flow log we can view and retrieve data in Amazon Cloud watch logs.
    • Flow logs can be created at three levels
      • VPC
      • Subnet
      • Network Interface Level
    • Goto VPC Dashboard select your custom VPC
      • Go to actions and create Flow log.
      • Choose to log only accepted, rejected or accepted and rejected traffic.
      • Select destination it can be S3 bucket Or Cloud watch logs.
      • Select Destination Group
        • We create different log groups for different EC2 instances i.e. two similar EC2 instances can share a log group.
      • Select IAM role.
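      • The same flow log can be created with the CLI; the VPC ID, log group name and role ARN below are placeholders:
        aws ec2 create-flow-logs --resource-type VPC \
          --resource-ids vpc-0123456789abcdef0 --traffic-type ALL \
          --log-group-name my-vpc-flow-logs \
          --deliver-logs-permission-arn arn:aws:iam::111111111111:role/flowlogsRole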
    • We cannot enable flow logs for VPC’s that are peered with your VPC unless the peer VPC is in your account.
    • You can tag flow logs.
    • After you have created a flow log, you cannot change its configuration, for example you cannot associate a different IAM role with the flow log.
    • Not all IP traffic is monitored
      • Traffic generated by instances when they Contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged.
      • Traffic generated by a Windows Instance for Amazon windows license activation.
      • Traffic to and from 169.254.169.254 for instance Metadata.
      • DHCP traffic
      • Traffic to the Reserved IP addresses for the VPC router.
  • Bastion Host
    • A bastion host is a special purpose computer on a network specially designed and configured to withstand attacks.
    • The computer generally hosts a Single application, For example a proxy server and all other services are removed or Limited to reduce threat to the computer.
    • It is hardened in this manner primarily due to its location and purpose, which is either on the outside of a firewall or In a demilitarized zone (DMZ) and usually involves access from untrusted networks or computers.
    • It’s a way to SSH or RDP into our private Instances.
    • A NAT gateway or NAT instance is used to provide Internet traffic to EC2 instances in private subnets.
    • A Bastion is used to securely administer EC2 Instances using (SSH or RDP).
    • Bastions are called as jump boxes in Australia.
    • We cannot use a NAT Gateway as a bastion host.
  • Direct Connect
    • AWS direct connect is a cloud service solution that makes it easy to establish a dedicated Network connection from your premises to AWS.
    • Using AWS Direct Connect you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
    • Direct connect directly connects your data center to AWS.
    • Useful for high throughput workloads(I.e Lots of network traffic)
    • Or if you need a stable and reliable secure connection.
    • Setting up direct connect
      • Create a virtual interface in the direct connect Console. This is a public Virtual Interface.
      • Go to the VPC Console and then to VPN connections. Create a customer Gateway.
      • Create a virtual private Gateway
      • Attach the Virtual private Gateway to desired VPC.
      • Select VPN connection and create new VPN connection.
      • Select the virtual private Gateway and The customer gateway.
      • Once the VPN is available, set up the VPN on the customer Gateway or firewall.
  • Global Accelerator
    • AWS Global Accelerator is a service in which we create accelerators to improve the availability and performance of your applications for local and global users.
    • Global accelerator directs traffic to optimal endpoints over the AWS global network.
    • This improves the availability and performance of your internet applications that are used by a global audience.
    • By default , Global Accelerator provides You with two static IP addresses that you associate with your Accelerator
      • Alternatively you can bring your own
    • AWS global Accelerator includes the following components
      • Static IP addresses
        • By default,Global Accelerator provides you with two static IP addresses that you associate with your accelerator or, you can bring your own.
        • 1.2.3.4
        • 5.6.7.8
      • Accelerator
        • An Accelerator directs traffic to optimal Endpoints over the AWS global network to improve the availability and performance of your Internet applications.
        • Each accelerator includes one or more listeners.
      • DNS Name
        • Global Accelerator assigns each accelerator a default Domain Name System(DNS) name similar to a1234567890abcdef.awsglobalaccelerator.com That points to the static IP addresses that global Accelerator assigns to you.
        • Depending on the use case, you can use your accelerator’s Static IP addresses or DNS name to Route traffic to your Accelerator, or set up DNS records to Route traffic using your own custom domain name.
      • Network zone
        • A network zone services the static IP addresses for your Accelerator from a unique IP Subnet.
        • Similar to an AWS Availability zone, a Network zone is An isolated unit with its own set of physical infrastructure.
        • When you configure an Accelerator by default, Global Accelerator allocates two IPv4 Addresses for it.
        • If one IP address from a network zone becomes unavailable due to IP address blocking by certain client networks or a network disruption, client applications can retry on the healthy static IP address from the other isolated network zone.
      • Listener
        • A Listener processes Inbound Connections from Clients To global Accelerator, based on the port(or port range) and protocol that you  configure. Global Accelerator supports both TCP and UDP protocols.
        • Each listener has one or more Endpoint groups associated with it, and traffic is forwarded to endpoints in one of the groups.
        • You can associate Endpoint Groups with listeners by specifying the regions that you want to distribute traffic to.
        • Traffic is distributed to optimal Endpoints within the Endpoint Groups associated with a listener.
      • Endpoint Group
        • Each Endpoint group is associated with a specific AWS region.
        • Endpoint Groups include one or more endpoints in the region.
        • You can increase or decrease the percentage of traffic that would be otherwise directed to an Endpoint Group by adjusting a setting called a traffic dial.
        • The traffic dial lets you easily do performance testing or blue/green deployment testing for new releases across different AWS regions.
      • Endpoint
        • Endpoints can be Network Load Balancers, Application Load Balancers, EC2 instances, or Elastic IP addresses.
        • An Application Load Balancer endpoint can be Internet-facing or internal.
        • Traffic is routed to endpoints based on configuration options that you choose, such as endpoint weights.
        • For each endpoint, you can configure weights, which are numbers that you can use to specify the proportion of traffic to route to each one. This can be useful, for example, to do performance testing within a region.
        • Let’s now create an endpoint
          • Create an EC2 Instance. Once Instance is launched we have an endpoint to our instance.
        • Next go to global Accelerator in networking and content delivery section.
          • Create an accelerator
          • Configure Listeners
          • Add Endpoint Groups
          • Add endpoints (EC2 instance)
          • Click create Accelerator.
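        • A rough CLI equivalent of the steps above (names, ARNs and the instance ID are placeholders; at the time of writing the Global Accelerator API is served from the us-west-2 region):
          aws globalaccelerator create-accelerator --name my-accelerator --region us-west-2
          aws globalaccelerator create-listener --accelerator-arn <accelerator-arn> \
            --protocol TCP --port-ranges FromPort=80,ToPort=80 --region us-west-2
          aws globalaccelerator create-endpoint-group --listener-arn <listener-arn> \
            --endpoint-group-region us-east-1 \
            --endpoint-configurations EndpointId=i-0123456789abcdef0,Weight=128 \
            --region us-west-2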
        • AWS Global Accelerator is a service in which you create accelerators to improve the availability and performance of your applications for local and global users.
        • You are assigned two static IP addresses (or alternatively you can bring your own).
        • You can control traffic using traffic dials. This is done within the endpoint group.
        • We can control the weighting to individual endpoints using weights.
  • VPC Endpoints 
    • A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
    • Instances in your VPC do not require public IP addresses to communicate with resources in the service.
    • Traffic between your VPC and the other service does not leave the Amazon network.
    • Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
    • There are two types of VPC Endpoints
      • Interface Endpoints
      • Gateway Endpoints
    • An Interface Endpoint is an Elastic network Interface with a private IP address that Serves as an entry point For traffic destined to a supported service.The following services are supported.
      • Amazon API Gateway
      • AWS cloud formation
      • Amazon Cloud watch
      • Amazon cloud watch events
      • Amazon cloud watch logs
      • AWS code Build
      • AWS config
      • Amazon EC2 API
      • Elastic load balancing API
      • AWS key management service
      • Amazon Kinesis data streams
      • Amazon sage maker and Amazon sage maker runtime
      • Amazon sage maker notebook instance.
      • AWS secrets manager
      • AWS security token service
      • AWS service catalog
      • Amazon SNS.
      • Amazon SQS
      • AWS systems manager
      • Endpoint services hosted by other AWS accounts.
      • Supported AWS market place partner services.
    • Gateway Endpoints only support
      • Amazon S3.
      • Dynamo DB.
    • To create a VPC Endpoint
      • In your VPC dashboard go to VPC Endpoints and click create Endpoint.
      • Select the service name to create Endpoint to.
      • Example select S3 for S3 gateway endpoint.
      • Select VPC
      • Select route table
      • Select policy
      • Click create to create an endpoint.
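    • The same S3 gateway endpoint via the CLI (the VPC ID, route table ID and region are examples):
      aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 \
        --vpc-endpoint-type Gateway \
        --service-name com.amazonaws.us-east-1.s3 \
        --route-table-ids rtb-0123456789abcdef0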
  • AWS Private Link
    • Used to open up our services in VPC to another VPC.
      • Open VPC up to the Internet
        • Security consideration, everything in public subnet is public.
        • A lot more to manage 
      • Use VPC peering.
        • You will have to create and manage many different peering relationships.
        • The whole network will be accessible. This isn’t good if you have multiple applications within your VPC
      • Use private link
        • The best way to expose a service VPC to Tens, hundreds, or thousands of customer VPCs
        • Doesn’t require VPC peering; no route tables, NAT, IGWs, etc.
        • Requires a network load balancer on the service VPC and an ENI on the customer VPC.
  • AWS transit gateway
    • Allows you to have transitive peering between thousands of VPC’s and on premises data centers.
    • Works on a hub-and-spoke model.
    • Works on a regional basis, but you can have it across multiple regions.
    • You can use it across multiple AWS accounts using RAM (Resource Access Manager).
    • We can use route tables to limit how VPCs talk to one another.
    • Works with direct connect as well as VPN connections.
    • Supports IP multicast(Not supported by any other AWS service).
  • AWS VPN cloud hub
    • Helps to connect VPC using VPN connections.
    • If we have multiple sites, each with its own VPN connection, we can use AWS VPN cloud hub to connect those sites together.
    • Hub-and-spoke model.
    • Low-cost, easy to manage
    • It Operates over the public Internet, but all traffic between the customer gateway and the AWS VPN cloud hub is encrypted.
  • AWS network costs
    • Use private IP addresses over public IP addresses to save on costs.
    • This then utilizes the AWS backbone Network.
    • If you want to cut all network costs, group your EC2 instances in the same Availability Zone and use private IP addresses. This will be cost-free, but keep the single point of failure issues in mind.
  • Practical Exercises
    • Learn how to make a VPC from memory and make sure your instances in public subnet can communicate with your Instances in private Subnet.
    • Instances in the private subnet must be able to do “yum update”.
    • Networking components are most important for exam.
Highly Available(HA) Architecture
  • Elastic load balancer
    • A load balancer distributes network traffic across a group of servers.
    • We can easily increase capacity when needed.
    • A physical or virtual device that is designed to help you balance network load across several web servers.
    • We can use it for applications too; it does not necessarily have to be an Internet-facing load balancer.
    • Basically used to balance load across web servers.
    • Common load balancer error
      • Error 504: Gateway Timeout.
        • The target failed to respond.
        • The Elastic Load Balancer could not establish a connection to the target, for example the web server, database or Lambda function.
          • Your application is having issues.
          • Identify where the application is failing and fix the problem.
    • Types of load balancer
      • Application load balancer
        • Application load balancers are best suited for load balancing of HTTP & HTTPS traffic.
        • Used for load balancing HTTP/HTTPS traffic.
        • They operate at layer 7 and are application-aware.
        • They are intelligent, and you can create Advanced Request routing, sending specified request to specific web servers based on the HTTP header.
        • For example, on an electronics website, sales go to one portal, loans go to another portal, and service and repairs go to a third portal.
        • If we change the currency or language of our web application, the load balancer can redirect us to the specific website accordingly.
      • Network load balancer
        • Network load balancers are best suited For load balancing of TCP traffic where extreme performance is required. Operating at the connection level (Layer 4) i.e. Transport Layer, Network load balancer are capable of handling millions of requests per second, while maintaining ultra-low latencies.
        • Used for Extreme performance.
      • Classic Load Balancer
        • Classic Load Balancers are the legacy Elastic Load Balancers. You can load balance HTTP/HTTPS applications and use Layer 7-specific features, such as X-Forwarded-For headers and sticky sessions.
        • You can also use strict Layer 4 load balancing for applications that rely purely on the TCP protocol.
        • If your application stops responding, the ELB (classic load balancer) Responds with 504 error.
        • This means that the application is having issues.
        • This could be either at the web server layer Or at the Database layer.
        • Identify where the application is failing, and scale it up or out where possible.
      • Gateway load balancer
        • Allows you to load balance workloads for third-party virtual appliances running in AWS, such as
          • Virtual appliances purchased using AWS Marketplace
          • Virtual firewalls from companies like Fortinet, Palo Alto, Juniper, Cisco.
          • Intrusion detection and prevention systems.
          • IDS/IPS systems from companies like Check Point, Trend Micro, etc.
  • X-forwarded-For Header
    • A user hits our Elastic Load Balancer from their public IP address.
    • The load balancer forwards the request to our EC2 instance from its own address, so the instance sees the load balancer’s IP instead of the user’s IP address.
    • So our EC2 instance does not know the user’s real IP address. This can be obtained from the X-Forwarded-For header.
    • If you need IPv4 address of your end-user, look for the X-forwarded-For header.
  • Launch two different EC2 Instances in two different availability zones.
    • Install Apache, start Apache and create a webpage in both Instances.
      • Use a bootstrap script for this for quick setup (a minimal example follows below).
    • Navigate to the public IP address just to ensure web servers are working.
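    • A minimal bootstrap (user data) script of the kind referred to above, assuming an Amazon Linux 2 AMI; the page text is just an example:
      #!/bin/bash
      yum update -y
      yum install -y httpd
      systemctl start httpd
      systemctl enable httpd
      echo "<html><h1>Hello from web server 1</h1></html>" > /var/www/html/index.html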
  • Next go to load balancer dashboard and create a load balancer.
    • Create a classic load balancer
    • Give it a name
    • Give it a VPC
    • We can make it internal, which means it will sit inside our private subnets (optional).
    • Enable Advanced VPC configuration (optional)
      • Lets us select subnets in Which our elastic load balancer will be deployed into.
      • Should be minimum 2 subnets.
    • Configure Listener configuration
    • Leave the default configuration of HTTP on port 80; we can also use HTTPS on port 443.
    • Next select security groups
    • Configure Health Check
    • Add EC2 instances
      • Enable Cross – zone load balancing
        • Evenly distributes traffic across targets in all enabled Availability Zones.
      • Enable connection draining
        • The number of seconds to allow existing traffic to continue flowing.
    • Add tags
    • Review and create load balancer
    • Load balancers are outside the free tier in AWS, so you will be charged for them.
    • We are never given an IP address for Elastic load balancer we are always given a DNS name.
    • Check that our instances are in-service.
    • Once an instance stops the load balancer marks the instance as unhealthy and takes it out of the load balancer.
      • The Instances tab in the load balancer shows the health of the instances.
    • Next create a target group 
      • Your Load balancer routes request to the targets in a target group using the target group settings that you specify, and performs all the health checks on the targets using the health check settings that you specify.
      • Give it a name
      • Give it a target type which can be an instance or IP or Lambda Function.
      • Give it a protocol, port and VPC
      • Set health check Settings on protocol HTTP and path.
      • Click create to create a target group.
      • Next, add the target instances web01 and web02.
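      • An equivalent CLI sketch of the target group steps above (the name, VPC ID, health check path and instance IDs are placeholders):
        aws elbv2 create-target-group --name my-web-targets \
          --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0 \
          --target-type instance --health-check-protocol HTTP --health-check-path /healthcheck.html
        aws elbv2 register-targets --target-group-arn <target-group-arn> \
          --targets Id=i-0aaa0aaa0aaa0aaa0 Id=i-0bbb0bbb0bbb0bbb0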
    • Create another load balancer of type Application load balancer.
      • Give it a name
      • Make it Internet facing
      • Select Availability Zones
      • Select security group
      • Select target group for routing
      • Review and create application load balancer.
      • Add registered targets on port 80.
      • Wait for the targets status to change to healthy.
    • We see In our application load balancer we find listeners and can view and edit rules for these listeners.
      • These can be used to perform intelligent routing.
    • Instances monitored by an Elastic Load Balancer are reported as:
      • InService or OutOfService
    • Health Checks Check the Instance Health by talking to it.
    • Load balancer have their own DNS name. You are never given an IP address.
    • Read the ELB FAQ for Classic Load Balancers.
  • Advanced Load balancers
    • Sticky sessions
      • Classic load balancer routes each request independently To the registered EC2 Instance With the smallest Load.
      • Sticky sessions allow you to Bind a user’s Session Onto specific EC2 Instance.This ensures that all requests from the user during the session are sent to the same instance.
      • You can enable sticky sessions for Application Load Balancers as well, but the traffic will be sent at the target group level.
      • Sticky sessions enable your users to stick to same EC2 instance, can be useful if you are storing information locally to that instance.
    • Cross zone load balancing 
      • Load balancer can work across zones.
      • Cross-zone load balancing enables you to load balance across multiple Availability Zones.
    • Path Patterns
      • You can create a Listener with rules to forward requests based on the URL path.
      • This is known as path-based routing.
      • If you are running micro services you can Route traffic to multiple backend services using path-based routing.
      • For example, you can route general requests to one target group and requests to render images to another target group.
      • Path patterns allow You to direct traffic to different EC2 instances based on the URL contained in the request.
  • Auto scaling
    • Auto scaling has three components
      • Groups
        • Logical component, web server group or application Group or database group etc.
      • Configuration templates 
        • A group uses a launch template or a launch configuration as a configuration template for its EC2 instances.
        • You can specify information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances.
      • Scaling options
        • Scaling option provides several ways for you to scale your auto scaling groups.
        • For example, you can configure a group to scale based on the occurrence of a specified condition (dynamic scaling) or on a schedule.
        • Following scaling options are provided
          • Maintain current Instance levels at all times.
            • You can configure your auto scaling Group to maintain specified number of running Instances at all times.
            • To maintain the current instance levels, Amazon EC2 Auto Scaling performs a periodic health check on running instances within an Auto Scaling group.
            • When Amazon EC2 Auto Scaling finds an unhealthy instance, it terminates that instance and launches a new one.
          • Scale manually
            • Manual scaling is the most basic way to scale your resources, where you specify only the change in the maximum, minimum, or desired capacity of your Auto Scaling group.
            • Amazon EC2 Auto Scaling manages the process of creating or terminating instances to maintain the updated capacity.
          • Scale based on schedule
            • Scaling by schedule means that scaling actions are performed automatically as a function of time and date.
            • This is useful if you know exactly when to increase or decrease the number of instances in your group, simply because the need arises on a predictable schedule.
          • Scale based on demand
            • A more advanced way to scale your resources; using scaling policies lets you define parameters that control the scaling process.
            • For example, let’s say you have a web application that currently runs on two instances and you want the CPU utilization of the Auto Scaling group to stay at around 50% when the load on the application changes. This method is useful for scaling in response to changing conditions, when you don’t know when those conditions will change. You can set up Amazon EC2 Auto Scaling to respond for you.
          • Use predictive scaling
            • You can also use Amazon EC2 Auto scaling in combination with AWS auto scaling to scale resources across multiple services.
            • AWS auto scaling can help you maintain optimal Availability and Performance by combining predictive scaling and dynamic scaling (Proactive and reactive approaches, respectively) To scale your Amazon EC2 capacity faster.
  • Launch configuration and Auto Scaling groups
    • Under Auto scaling go to launch configuration 
    • Click Create a launch configuration
    • Select the Linux AMI
    • Select the instance type 
    • Give it a name
    • Give it an IAM role
    • Add a Bootstrap script if required
      • Add to start Apache service and add file to server.
    • Select IP address configuration For launch configuration
    • Add Storage
    • Add security group
    • Click review and create launch configuration.
    • Next Create an auto scaling group Which will deploy our EC2 instances.
    • Give it a name
    • Give it a group size.
    • Give it a VPC and subnet.
    • Configure auto scaling groups for scaling policies.
    • Configure notifications
    • Add tags
    • Review and create auto scaling group
    • Next in our instances we will see the number of instances created by this group 
      • We see that when we terminate one of the instances, a new replacement instance is automatically created.
    • When we delete an Auto Scaling group, the corresponding instances will also be automatically terminated.
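    • A minimal CLI sketch of the launch configuration and Auto Scaling group described above (the AMI ID, security group, subnets and sizes are placeholders):
      aws autoscaling create-launch-configuration \
        --launch-configuration-name my-web-lc \
        --image-id ami-0123456789abcdef0 --instance-type t2.micro \
        --security-groups sg-0123456789abcdef0 \
        --user-data file://bootstrap.sh
      aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name my-web-asg \
        --launch-configuration-name my-web-lc \
        --min-size 2 --max-size 4 --desired-capacity 2 \
        --vpc-zone-identifier "subnet-0aaa0aaa0aaa0aaa0,subnet-0bbb0bbb0bbb0bbb0"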
  • The following points must be considered while creating a highly available architecture
    • Plan For failure
    • Simian Army project by Netflix
      • Chaos monkey
      • Chaos gorilla
      • Chaos Kong
      • Janitor monkey
      • Doctor monkey
      • Compliance monkey
      • Latency monkey
      • Security monkey
    • An example of highly available architecture is failover in case a region or Availability Zone goes down.
      • Fail over from one region to another and from one Availability Zone to another.
      • We have a website that requires a minimum of six instances and it must be highly available. We must be able to tolerate the failure of one Availability Zone. What is the ideal architecture for this environment while also being the most cost-effective?
        • Two availability zones with two instances in each Availability zone.
          • Two availability zone with two instances is wrong because we require six instances and here we have only four.
        • Three Availability Zones with three instances in each Availability Zone.
          • This is the correct option, which gives us failure tolerance of one Availability Zone and the minimum required instances in all cases.
        • One Availability Zone with six instances.
          • We can eliminate the one Availability Zone option: if we lose that Availability Zone our application will go down.
        • Three Availability Zones with two instances in each Availability Zone.
          • Three Availability Zones with two instances will give us six instances, but if one of the Availability Zones fails we will have only four instances, and we require a minimum of six, so the fault tolerance of one Availability Zone fails here.
    • Always design for failure.
    • Use multiple Availability Zones and multiple regions wherever we can.
    • Know the difference between multi availability zone and read replicas for RDS.
    • Know the difference between scaling out and scaling up.
    • Read the question carefully and always consider the cost element.
    • Know the different S3 storage classes.
  • Building a fault tolerant website - setup
    • Network Diagram
    • Setup S3 Buckets First
      • Create two buckets for code and media
    • EC2 uses the media bucket fronted by CloudFront.
    • Create a Cloud front Distribution
      • Go to CloudFront in the Networking & Content Delivery section.
      • Create a distribution
        • Select web distribution
        • Select the origin domain name as the media bucket, since the distribution will be fronting our media bucket.
        • Select Origin path
        • Select Origin ID
        • Bucket access restriction
        • Origin custom headers
        • Configure Cache behavior settings
        • Click create distribution
      • Go to VPC’s and create security groups.
        • Create separate security groups for WEB and RDS i.e. HTTP and Database ports.
        • Open up your database security port for web application security group.
      • Next provision your RDS instance
        • Go to RDS In Databases
        • Click create database
        • Select database type
        • Select Template type
        • Give Database Instance name
        • Give username, master password and confirm master password.
        • Select the Burstable class and set the DB instance size to t2.micro
        • Configure storage
        • Configure availability and durability.
        • Configure connectivity which involves VPC’s selection, public accessibility and security group selection.
        • Give your database an initial name.
        • Click create database to create database.
      • Next go to IAM And create a role for EC2 Instance to connect to S3.
        • EC2 service will use a role to connect to S3.
        • Only EC2 instances with the role will be able to connect to S3.
      • Next provision EC2 Instances
        • Select the IAM role created above.
        • Add a bootstrap script which installs Apache, PHP and MySQL (a sketch of such a script is shown at the end of this setup section).
          • Adds a health-check file to our application server directory.
          • Gets the latest version of WordPress, unzips it and copies it to our application server directory.
          • Removes the WordPress folder and zip from their original location.
          • Changes permissions (755) and ownership on the wp-content directory.
          • Renames the .htaccess file.
          • Starts the Apache server.
        • Add storage
        • Add tags
        • Assign the web security group configured before.
        • Hit review and launch.
      • Check everything created and configured is ready to go.
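      • The following is a minimal sketch of what such a user data (bootstrap) script might look like, assuming an Amazon Linux 2 AMI; the package names, paths and WordPress download URL are illustrative, not the course's exact script.
        #!/bin/bash
        # Install Apache, PHP, the MariaDB client and wget
        yum update -y
        yum install -y httpd php php-mysqlnd mariadb wget
        # Health-check file for the load balancer
        echo "healthy" > /var/www/html/healthy.html
        # Get the latest WordPress, unzip it and copy it to the application server directory
        wget https://wordpress.org/latest.tar.gz -P /tmp
        tar -xzf /tmp/latest.tar.gz -C /tmp
        cp -r /tmp/wordpress/* /var/www/html/
        # Remove the WordPress folder and archive from their original location
        rm -rf /tmp/wordpress /tmp/latest.tar.gz
        # Permission (755) and ownership changes on the wp-content directory
        chmod -R 755 /var/www/html/wp-content
        chown -R apache:apache /var/www/html/wp-content
        # (the course also renames a pre-staged .htaccess file at this point)
        # Start the Apache server and enable it on boot
        systemctl start httpd
        systemctl enable httpd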
  • Building a fault tolerant website - setting up EC2
    • SSH into your EC2 Instance.
    • "ls" your /var/www/html directory and check for the WordPress files.
    • Check contents of .htaccess file.
    • Start your Apache server
    • Navigate to your EC2 public IP from browser and you will see WordPress set up to start.
    • Set up WordPress using the DB credentials that we created earlier.
    • Since we are running MySQL on an RDS Instance we will enter “RDS Endpoint” in our host. If we are running RDS on our EC2 instance it will be “localhost”.
    • Create a wp-config.php file manually if the WordPress setup is unable to create it.
    • Click submit. If the setup hangs here, check that the RDS security group allows access from the web (EC2 HTTP) security group on the database port.
    • Run the installation: give a site title, a username and password for the admin panel, and your email to which alerts will be sent. Click install WordPress.
    • Login to your admin panel And create a post.
    • Add images to your post; we see that those images are currently saved on the EC2 instance.
    • Next we want those images to be replicated to our S3 bucket for redundancy.
    • We will also use CloudFront to serve the files from S3 via our CloudFront distribution rather than from the EC2 instance. This will make the site load faster.
    • List S3 buckets from the terminal and check that your bucket is listed; if not, check that the EC2 instance has been assigned the role to access S3.
    • Next copy files from the EC2 instance to your S3 media bucket using the command
      • aws s3 cp --recursive source_directory_path s3://destination_bucket
    • Next copy the website code into your S3 code bucket using the same command, changing the bucket and directory parts.
    • In the .htaccess file add a rewrite rule to fetch files from the CloudFront distribution instead of the EC2 instance (a consolidated sketch of these steps appears at the end of this section).
      • Replace the old CloudFront URL with the new CloudFront distribution we have created.
      • Save the .htaccess file.
    • Use the aws s3 sync command to sync the new code to the S3 bucket.
      • aws s3 sync EC2_directory_to_be_synced s3://s3_bucket_to_sync_to
    • In Apache configuration enable URL rewriting.
    • Restart Apache Service 
    • Next we need to make our media bucket public.
      • Add a bucket policy to enable this.
    • Now when we load our post we see that images are coming from the CloudFront URL instead of the static IP.
    • Next we will create an application load balancer and move our EC2 instance behind it.
      • Go to EC2 Load balancer and create a new application load balancer.
      • Give it a name
      • Select the internet-facing scheme.
      • Select Availability Zones
      • Select security groups
      • Configure routing.
        • Add a target Group.
        • Add the Health check file to the health checks.
      • Register targets
      • Review and create application load Balancer.
    • Next go to Route 53
      • Configure Route 53 only if you want to use a registered domain name.
      • Go to your hosted zone and create a record set.
      • Set the alias target to your application load balancer.
      • Click create to create the record set.
    • Next go to target Groups in your EC2 and place EC2 instance in the target group we created while creating application load balancer.
      • Wait for status of Instance to change to healthy.
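    • A consolidated sketch of the copy/sync, CloudFront rewrite and bucket-policy steps above; the bucket names, CloudFront domain and paths below are placeholders, not the course's actual values.
      # Copy media and code from the instance to the S3 buckets
      aws s3 cp --recursive /var/www/html/wp-content/uploads s3://my-media-bucket
      aws s3 cp --recursive /var/www/html s3://my-code-bucket
      # Rewrite rule so uploads are served from the CloudFront distribution instead of the instance
      echo 'RewriteEngine On' >> /var/www/html/.htaccess
      echo 'RewriteRule ^wp-content/uploads/(.*)$ https://dxxxxxxxxxxxxx.cloudfront.net/$1 [R=301,NC,L]' >> /var/www/html/.htaccess
      # Sync the updated code to the code bucket
      aws s3 sync /var/www/html s3://my-code-bucket
      # Make the media bucket publicly readable with a bucket policy
      aws s3api put-bucket-policy --bucket my-media-bucket --policy '{
        "Version": "2012-10-17",
        "Statement": [{
          "Sid": "PublicRead",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::my-media-bucket/*"
        }]
      }'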
  • Building fault tolerant website – adding redundancy and auto scaling.
    • Architecture
      • When a user hits the instance's IP address directly they reach the write node, where they can write posts; when they come in via Route 53 they reach the read node, where they can only read posts.
    • Let's first create a cron job (scheduled task) for the read node which will scan the S3 bucket, look for changes and copy those changes to our EC2 instance.
      • */1 * * * * root aws s3 sync --delete s3://s3_code_bucket_name /var/www/html
      • We are syncing our S3 code bucket to our server directory.
    • Next let’s create an amazon machine image of our running Instance.
      • This will serve as the boot image for our WordPress servers in the auto scaling group.
      • Go to EC2 Instances Select Instance, go to options, create image.
      • give image a name
      • Give image description and click create image.
      • Go to AMI and wait for image status to be available and now we can use this in our auto Scaling groups.
    • Next create cron jobs for our write node. There will be two cron jobs for this: one for the code S3 bucket and one for the media S3 bucket.
      • */1 * * * * root aws s3 sync --delete /var/www/html s3://s3_code_bucket
      • */1 * * * * root aws s3 sync --delete /var/www/uploads s3://s3_media_bucket
    • Now we will use the AMI in an auto scaling group. The auto scaling group will sit behind our application load balancer.
    • So people using our route 53 will be sent to Application load balancer and then to EC2 Instances behind it. These Instances will be pulling latest data from our S3 buckets.
    • Let's now create a launch configuration and then an auto scaling group (a CLI sketch of both appears at the end of this section).
      • If we create an auto scaling group first, it initially takes us to the launch configuration step anyway.
    • Create a launch configuration using readNode AMI
      • Select Instance type t2.micro 
      • give it a name
      • Give it S3 access IAM role which we created before.
      • A bootstrap script which syncs S3 to our /var/www/html directory.
      • Add storage
      • Configure security group for web.
      • Review and create launch configuration.
    • Next we will create auto Scaling group.
      • Give a group name
      • Group size
      • VPC and subnets
      • Select receive traffic from one or more load balancers.
      • Target group which we created before.
      • Health checks
      • Configure auto Scaling Policies 
      • configure notifications
      • Add tags
      • Review and create auto Scaling group.
    • Make sure you remove your write node from target groups.
    • Now as per our launch configuration we will see two new Instances in our EC2.
    • Next go to your route 53 URL and we see our website running.
      • When we go to Admin it uses the IP address.
    • Now to check the availability of this website
      • Stop one of the instances; we find that our website is still available via the other instance.
      • The auto scaling group will detect the failure and bring up another instance in a couple of moments.
      • So loss of an Availability zone doesn’t impact our website.
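    • A rough CLI equivalent of the launch configuration and auto scaling group steps above; the AMI, subnet, security group and target group identifiers are placeholders.
      # Launch configuration built from the read-node AMI
      aws autoscaling create-launch-configuration \
        --launch-configuration-name wp-read-lc \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro \
        --iam-instance-profile S3AccessRole \
        --security-groups sg-0123456789abcdef0 \
        --user-data file://sync-from-s3.sh
      # Auto scaling group of two instances registered with the existing target group
      aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name wp-read-asg \
        --launch-configuration-name wp-read-lc \
        --min-size 2 --max-size 2 \
        --vpc-zone-identifier "subnet-aaa111bbb,subnet-ccc222ddd" \
        --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/wp-tg/abc123def456 \
        --health-check-type ELB \
        --health-check-grace-period 300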
  • Building fault tolerant site – RDS fail over in Availability zone
    • Go to your RDS server and reboot the same using reboot with failover.
    • We find that our website is down momentarily during the RDS failover and then comes back up.
    • So it is fault tolerant from RDS too.
  • Building fault tolerant website - cleanup
    • Delete RDS
    • Delete EC2 Instances(write node)
    • Delete load balancers 
    • Delete target Groups
    • Delete auto Scaling Groups
    • Delete S3 buckets
    • Disable cloud front distribution.
  • Building fault tolerant website - cloud formation
    • Under management and governance go to cloud formation.
    • Click create Stack
    • click sample template
    • Select sample template as WordPress blog.
    • We will get a S3 link to the Cloud formation template.
    • Click next and give a stack name
    • Give Dbname, password and user.
    • Select Instance type
    • Key name
    • SSH location
    • Configure stack options
      • Tags 
      • IAM permissions
      • Policies
      • Rollback configuration
      • Notification options
    • Click next, review, and click create stack.
    • Wait for template to be completed
    • In the output tab we will get a link to our WordPress site.
    • We also have an EC2 Instance created which is running this site.
    • To delete the stack select stack and hit delete.
      • The EC2 instance is also terminated once we delete the stack.
    • There are a lot of preconfigured CloudFormation templates in AWS Quick Starts.
    • We can select any template and click launch Quick Start.
      • This will start a CloudFormation setup with the template preselected.
      • CloudFormation is a way of completely scripting your cloud environment.
      • Quick Start is a bunch of CloudFormation templates already built by AWS solution architects, allowing you to create complex environments very quickly.
  • Elastic beanstalk
    • Cloud formation is very scripted, using json we can create templates and deploy resources at scale. It is for advanced AWS users.
    • Elastic Beanstalk is for developers who are new to AWS and want to quickly provision a website.
    • Go to elastic beanstalk under compute.
    • Click on get started.
    • Give an application name.
    • Select platform.
    • We can use a sample application Or upload our own code.
    • Click create application.
      • This will start creating your environment like security groups, S3 storage bucket etc.
    • Once your environment is deployed it will reflect in your dashboard.
    • We can check our environment by going to the URL provided on top once our environment is all set.
    • Next we can change the different configuration of our environment
      • Modify Instance types
      • Add load balancer
      • Change capacity
    • We can use autoscaling with elastic beanstalk
    • Elastic Beanstalk is a way of deploying applications in the cloud without needing to know anything about the underlying infrastructure.
    • With elastic beanstalk, we can quickly deploy and manage applications in the AWS Cloud without worrying about The infrastructure that runs those applications. 
    • You simply upload your application, and elastic beanstalk automatically handles the details of capacity provisioning, load balancing, scaling and Application health monitoring.
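    • For reference, roughly the same flow can be driven from the EB CLI; the application and environment names below are made up.
      # Initialise an Elastic Beanstalk application in the current project directory
      eb init my-app --platform php --region us-east-1
      # Create an environment (this provisions the instances, security groups, S3 bucket, etc.)
      eb create my-app-env
      # Deploy new versions of the code and open the environment URL in a browser
      eb deploy
      eb open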
  • High-availability with Bastion hosts
    • Two EC2 Instances, two availability zones, Network load balancer with static IP address.
      • When we SSH into our bastion hosts it is a layer 4 connection, and a network load balancer is a layer 4 load balancer, so we use it. We can't use an application load balancer here since it works at layer 7.
      • We SSH into our network load balancer on a static IP address which load balances between two different Bastions.
      • If one Bastion fails then load balancer will reroute traffic to other bastion.
      • The Bastions will then connect to Instances inside our private Subnet.
      • We can also have autoscaling groups so that if one Bastion fails it could be replaced with another bastion.
      • A network load balancer is expensive, and since we are also running two EC2 instances in two different subnets, this is the more expensive option.


    • One EC2 instance, Two availability zones, Auto Scaling Group.

      • We have an EC2 instance in a private subnet; then we have a public subnet with a bastion in it which has an Elastic IP address.
      • We have an auto scaling group with a minimum and maximum of one.
      • We RDP or SSH into our bastion host and connect to the private instances from there.
      • If we lose the bastion host, auto scaling will detect that and, since we have a minimum of one, it will provision another bastion in another subnet.
      • We can use a user data script to attach the existing Elastic IP address to the new bastion so we can SSH into it (see the sketch after this list).
      • This is a much cheaper way of doing it, but we will have some downtime while the new bastion is being provisioned in the other subnet.
      • One host in one Availability Zone behind an auto scaling group with health checks and a fixed Elastic IP address. If the host fails, the health check fails and the auto scaling group will provision a new EC2 instance in a separate Availability Zone. You can use a user data script to attach the same Elastic IP address to the new host. This is the cheapest option but is not 100 percent fault tolerant.
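    • A minimal sketch of such a user data script, assuming the bastion's IAM role allows ec2:AssociateAddress and that the allocation ID is a placeholder for the fixed Elastic IP.
      #!/bin/bash
      # Re-attach the fixed Elastic IP to whichever bastion the auto scaling group just launched
      INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
      REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
      aws ec2 associate-address \
        --region "$REGION" \
        --instance-id "$INSTANCE_ID" \
        --allocation-id eipalloc-0123456789abcdef0 \
        --allow-reassociation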
  • On premises strategies with AWS
    • Following high-level AWS services can be used on premises
      • Data migration service (DMS)
        • Allows you to move Databases to and from AWS
        • Might have your DR environment in AWS and your on premises environment as your primary.
        • Works with most popular database technologies, such as Oracle, MySQL, Dynamo DB etc.
        • Supports homogeneous migration
          • Oracle to Oracle
        • Supports heterogeneous migration
          • SQL Server to Amazon Aurora.
      • Server migration service (SMS)
        • Server migration service supports incremental Replication of your on premises servers into AWS.
        • Can be used as a back up tool, multisite Strategy (on premises and off premises), and a DR tool.
      • AWS application discovery service
        • AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers.
        • You install the AWS Application Discovery agentless connector as a virtual appliance on VMware vCenter.
        • It will then build a server utilization map and dependency map of your on-premises environment.
        • The collected data is retained in encrypted format in an AWS Application Discovery Service data store. You can export this data as a CSV file and use it to estimate the total cost of ownership (TCO) of running on AWS and to plan your migration to AWS.
        • This data is also available in AWS Migration hub, where you can migrate the discovered servers And track their progress as they get migrated to AWS.
      • VM import/export
        • Migrate existing application to EC2.
        • Can be used to create a DR strategy on AWS or use AWS as a second site.
        • You can also use it to export your AWS VM’s to your on premises Data Center.
      • Download Amazon Linux 2 as an ISO
        • Works with all major Virtualization Providers, such as VMware, hyper-V, KVM, virtual box (oracle) etc.
  • Cloud formation templates
    • CloudFormation templates may be written in JSON or YAML.
    • It has following sections
      • AWS template format version
        • This is optional but it’s a way by AWS for future proofing the product if format changes.
      • Description
        • We provide description for the template here.
        • We can provide any free form text here.
        • It is optional, but if we include it, it must come immediately after the "AWS template format version" section.
      • Metadata
        • It is used for controlling how template works while creating a Stack
        • If we apply template inside AWS management console we can control what information will be presented to the user.
        • It can contain information that the whole template can use, so all the resources in the template are able to use that information.
        • This section is also optional
      • Parameters
        • We define information that we want template to ask for like name of things, size of instances, type of database etc.
        • We can specify defaults for the information here as well
        • Parameters are also entirely optional.
      • Mappings
        • Lets us include data which can be used conditionally.
        • Example parameters to deploy in different environments.
        • This is also optional.
      • Conditions
        • Conditions control whether certain resources are created or certain resource properties are assigned, based on values.
        • We can conditionally create a resource depending on the environment.
        • This is optional.
      • Transform
        • Transform section is used for serverless applications
        • This is also optional
      • Outputs
        • Allows us to return values once the stack completes.
        • We can return “URL to sites” etc. as output.
        • This is optional
      • Resources
        • This is the required part.
        • Anything inside the resources section of the template is a logical resource.
        • Each section in a template uses one or more key-value pairs.
        • We take a template full of these logical resources And other supporting elements and create a Stack while using cloud formation.
        • Every logical resource has a “Type” which is defined.
        • A stack turns the logical resources defined in the template into physical resources in AWS.
        • If we change the logical resource corresponding physical resource also changes.
    • A resource created by cloud formation for which we do not provide a physical resource ID is given a generic resource ID automatically.
      • The name of the resource is given as stack name_logical resource name_physical resource ID.
    • Properties is the section in a logical resource where we can define options for the resource.
      • For example, we can use the "BucketName" property to define the physical resource ID of a bucket.
    • Examples of templates can be found at
      • https://cloudimplant.blogspot.com/2021/08/cloud-formation-template.html
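    • A minimal sketch of a template showing the required Resources section plus Parameters and Outputs; the bucket name, stack name and file name are made up.
      # demo-template.yml
      AWSTemplateFormatVersion: "2010-09-09"
      Description: Minimal demo template - a single S3 bucket
      Parameters:
        BucketName:
          Type: String
          Default: my-demo-bucket-12345
      Resources:
        DemoBucket:
          Type: AWS::S3::Bucket
          Properties:
            BucketName: !Ref BucketName
      Outputs:
        BucketArn:
          Value: !GetAtt DemoBucket.Arn
      # Create a stack from it using the CLI:
      # aws cloudformation create-stack --stack-name demo-stack --template-body file://demo-template.yml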
  • Summary 
    • 3 different types of Load Balancer
      • Application Load Balancer
      • Network load balancer 
      • Classic load balancer
    • A 504 error means the gateway has timed out. This means that the application is not responding within the idle timeout period.
      • Troubleshoot the application: is it the web server or the database server?
    • If you need the IPv4 address of your end-user, look for the X-forwarded-for header
    • Instances monitored by the elastic load balancer are reported as in service or out of service.
    • Health checks check the instance health by talking to it.
    • Load balancers have their own DNS name; you are never given an IP address.
    • Advanced Load Balancer
      • Sticky sessions enable your users to stick to the same EC2 Instance. Can be useful if you are storing information locally to that instance.
      • Cross zone load balancing enables You to load balance across multiple Availability Zones.
      • Path Patterns allow you to direct traffic to different EC2 Instances based on the URL contained in the request.
    • Cloud formation
      • Is a way Of Completely scripting your Cloud environment
      • Quickstart is a bunch of cloud formation templates already built by AWS solution architects allowing you to create complex environments very quickly.
      • With elastic beanstalk, we can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. You simply upload your application and elastic beanstalk automatically handles details of capacity provisioning , load balancing, Scaling and Application health monitoring.
Applications 
  • SQS
    • Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them.
    • It's a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component.
    • A queue is a temporary repository for messages that are awaiting processing.
    • SQS is one of the oldest services of AWS.
    • It is a way of storing messages independently from your EC2 instances.
    • Using Amazon SQS, you can decouple the components of an application so they can run independently, easing message management between components.
    • Any component of a distributed application can store messages in a fail-safe queue.
    • Messages can contain up to 256 KB of text in any format. Any component can later retrieve the messages programmatically using the Amazon SQS API.
    • For example, while a user searches for flights on a travel website, the query can be saved in SQS, later fetched by an EC2 instance, processed, and the results returned to the client.
    • As per Amazon if you want to decouple your infrastructure, decouple your services think of SQS.
    • If our message goes above 256 KB it will be stored in S3 instead.
    • The queue acts as a buffer between the component producing and saving data, and the component receiving the data for processing.
    • This means that the queue resolves issues that arise if the producer is producing work faster than the consumer can process it, or if the producer or consumer are only intermittently connected to the network.
    • There are two types of queues
      • Standard queues
        • Amazon SQS offers standard as the default queue type. A standard queue lets you have a nearly unlimited number of transactions per second. Standard queues guarantee that a message is delivered at least once.
        • Occasionally (because of highly-distributed Architecture that allows high throughput), more than one copy of a message might be delivered out of order.
        • However, standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they are sent.
        • Your application should be able to cope With two things when using standard queues
          • Your message being delivered out of order.
          • Multiple copies of the same message being delivered.
      • FIFO queues
        • The FIFO queue complements the Standard queue.
        • The most important features of this queue type are FIFO (first-in, first-out) delivery and exactly-once processing.
        • The order in which messages are sent and received is strictly preserved and a message is delivered once and remains available until a consumer processes and deletes it, duplicates are not introduced into the queue.
        • FIFO queues also support message groups that allow multiple ordered message Groups within a queue.
        • FIFO queues are limited to 300 transactions per second (TPS), but have all the capabilities of standard queues.
        • FIFO queues are not as fast as Standard queues.
    • SQS is pull based, not push based.
    • We need to have an EC2 Instance pulling the messages out of the queue.
    • Messages are 256KB or less in size.
    • Messages can be kept in the queue from 1 minute to 14 days. The default retention period is four days.
    • Visibility time out is the amount of time that the message is invisible in the SQS queue after a reader picks up that message. Provided The job is processed before the visibility time out expires the message will then be deleted from the queue. If the job is not processed within that time, the message will become visible Again and another reader will process it. This could result in the same message being delivered twice.
    • If we are getting same message being delivered twice then our visibility time out is not long enough as compared to job processing time. In this case we should increase the visibility time out.
    • Visibility time out maximum is 12 hours.
    • SQS guarantees that your messages will be processed at least once.
    • Amazon SQS long polling is a way to retrieve messages from your SQS queues. While regular short polling returns immediately (even if the message queue being polled is empty), long polling doesn't return a response until a message arrives in the message queue, or the long poll times out.
      • Long polling helps you reduce your bill, as there is a charge every time a queue is short polled.
    • Any time you see a scenario-based question about decoupling your infrastructure, think SQS, i.e. decoupling microservices or decoupling a monolithic architecture (a small CLI sketch follows below).
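    • A small sketch of the basic SQS lifecycle from the CLI (the queue name is a placeholder), including long polling via a wait time.
      # Create a standard queue and capture its URL
      QUEUE_URL=$(aws sqs create-queue --queue-name demo-queue --query QueueUrl --output text)
      # Producer: send a message
      aws sqs send-message --queue-url "$QUEUE_URL" --message-body "flight search: DEL to BLR"
      # Consumer: long poll for up to 20 seconds, then delete the message once it has been processed
      aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 20
      aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "<ReceiptHandle from the previous call>"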
  • Simple workflow Service(SWF)
    • Amazon simple workflow service (Amazon SWF) Is a web service that makes it easy to coordinate work across Distributed Application components.
    • SWF enables Applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a Coordination of tasks.
    • Tasks represent invocations of various processing steps in an application which can be performed by executable code, web service calls, human actions, and Scripts.
    • SWF is a way of coordinating our application with automated and human tasks.
    • Wherever a human element is required we can't use SQS; we use SWF.
    • SWF versus SQS
      • SQS has a retention period of up to 14 days; with SWF, workflow executions can last up to one year.
      • Amazon SWF presents a task-oriented API, whereas Amazon SQS offers a message-oriented API.
      • Amazon SWF ensures that a task is assigned only once and is never duplicated. With Amazon SQS, you need to handle duplicated messages and may also need to ensure that a message is processed only once.
      • Amazon SWF keeps track of all the tasks and events in an application. With Amazon SQS, you need to implement your own application-level tracking, especially if your application uses multiple queues.
    • SWF actors
      • Workflow starters
        • An application that can initiate (start) a workflow; this could be your e-commerce website following the placement of an order, or a mobile app searching for bus times.
      • Deciders
        • Control the flow of activity tasks in a workflow execution. If something has finished (or failed) in a workflow, a decider decides what to do next.
      • Activity workers
        • Carry out activity tasks.
  • Simple Notification Service
    • Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate and send notifications from the cloud.
    • It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.
    • We can send push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.
    • Besides pushing cloud notifications directly to mobile devices, Amazon SNS can also deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint.
    • SNS allows you to group multiple recipients using topics. A topic is an "access point" that allows recipients to dynamically subscribe for identical copies of the same notification.
    • One topic can support deliveries to multiple endpoint types - for example, you can group together iOS, Android and SMS recipients.
    • When you publish once to a topic, SNS delivers appropriately formatted copies of your message to each subscriber.
    • To prevent messages from being lost, all messages published to Amazon SNS are stored Redundantly across multiple Availability Zones.
    • Instantaneous, push Based delivery (no polling)
    • Simple APIs and easy integration with application.
    • Flexible message delivery over multiple Transport protocols.
    • Inexpensive, pay as you go model with no upfront costs.
    • Web-based AWS Management Console offers the simplicity of point and click Interface.
    • SNS versus SQS
      • Both are messaging service in AWS
      • SNS push
      • SQS - poll (pull) based, e.g. by an EC2 instance.
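    • A quick sketch of the topic/subscribe/publish flow from the CLI; the topic name and email address are placeholders.
      # Create a topic and capture its ARN
      TOPIC_ARN=$(aws sns create-topic --name demo-topic --query TopicArn --output text)
      # Subscribe an email endpoint (the recipient must confirm the subscription)
      aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email --notification-endpoint user@example.com
      # Publish once; SNS delivers a copy to every confirmed subscriber
      aws sns publish --topic-arn "$TOPIC_ARN" --subject "Demo" --message "Hello from SNS"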
  • Elastic Transcoder
    • Media transcoder in the cloud
    • Convert media files from their original source format into different formats that will play on smart phones, tablets, PCs etc. 
    • Provides transcoding Presets for popular output formats, which means that you don’t need to guess about which settings work best on particular devices.
    • Pay based on the minutes that you transcode and the resolution at which you transcode.
    • Elastic transcoder is a media transcoder in the cloud. It converts media files from the original source format into different formats that will play on smart phones, tablets, PC’s etc.
  • Amazon API Gateway 
    • Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor and secure APIs at any scale.
    • With a few clicks in the AWS management console, you can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your back-end services, such as applications running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any web application.
    • Expose HTTPS endpoints to define a RESTful API.
    • Serverlessly connect to services like Lambda and DynamoDB.
    • Send each API endpoint to a different target.
    • Run efficiently with low cost.
    • Scale effortlessly.
    • Track and control usage by API key
    • Throttle request to prevent attacks.
    • Connect to Cloud watch to log all request for monitoring.
    • Maintain multiple versions of your API.
    • To configure API gateway 
      • Define an API (container)
      • Define resources and nested resources(url paths)
      • For each resource
        • Select Supported HTTP methods (verbs).
        • Set Security.
        • Choose target such as (EC2, lambda, dynamo DB) etc.
        • Set request and response transformation.
    • How do I deploy Gateway?
      • Deploy the API to a stage.
        • Uses the API Gateway domain by default.
        • Can use a custom domain.
        • Now supports AWS Certificate Manager free SSL/TLS certificates.
    • Remember what API Gateway is at high-level.
    • API Gateway has caching capabilities to increase performance.
    • API Gateway is low-cost and scales automatically.
    • You can throttle API Gateway to prevent attacks.
    • You can log results to cloud watch.
    • If you are using JavaScript/Ajax that uses multiple domains with API Gateway, ensure that you have enabled CORS on API Gateway.
    • CORS is enforced by the client.
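    • For reference, a rough CLI outline of the configure-and-deploy flow above; every ID shown is a placeholder you would copy from the previous command's output, and the Lambda ARN is assumed to exist.
      # Define an API (container) and look up its root resource id
      aws apigateway create-rest-api --name demo-api
      aws apigateway get-resources --rest-api-id abc123
      # Define a resource (URL path) and a supported HTTP method
      aws apigateway create-resource --rest-api-id abc123 --parent-id root456 --path-part flights
      aws apigateway put-method --rest-api-id abc123 --resource-id res789 --http-method GET --authorization-type NONE
      # Choose a target, e.g. a Lambda proxy integration
      aws apigateway put-integration --rest-api-id abc123 --resource-id res789 --http-method GET \
        --type AWS_PROXY --integration-http-method POST \
        --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:demo-fn/invocations
      # Deploy the API to a stage
      aws apigateway create-deployment --rest-api-id abc123 --stage-name prod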
  • Kinesis
    • Streaming data is data that is generated continuously by thousands of data sources, which typically send in the data records simultaneously, and in small sizes.(order of kilobytes)
      • Purchases from online store.
      • Stock prices
      • Game data
      • Social network data
      • Geo spatial data
      • IOT sensor data
    • Amazon kinesis is a platform on AWS to send your streaming data to.
    • Kinesis makes it easy to load and analyze streaming data, and also provides the ability for you to build your own custom applications for your business needs.
    • Three different types of Kinesis
      • Kinesis streams
        • Kinesis Streams is a place to store data streamed by various devices. It can retain data from 24 hours up to seven days.
        • Data is contained in shards.
        • We may have a shard for different purposes like geospatial data, stock data, social media data, or IoT data.
        • This data is then consumed by EC2 instances called consumers.
        • The EC2 instances, after processing the data, store it in DynamoDB, S3, EMR, Redshift, etc.
        • Kinesis streams consist of shards. Shards allow five transactions per second for reads, up to a maximum total data read rate of 2 MB per second and up to 1000 records per second for writes, up to a maximum total data write rate of 1 MB per second(including partition keys).
        • The data capacity of your stream is a function of the number of shards that you specify for the stream. The total capacity of the stream is the sum of the capacities of its shards.
      • Kinesis firehose
        • There is no persistent storage; the data has to be analyzed as it comes in from the producers.
        • The producers can be EC2, mobile, or IoT devices which stream data.
        • Kinesis Firehose has an optional Lambda function which processes the data.
        • Once data is processed, it may be output to S3, to Redshift (via S3), or to an Elasticsearch cluster.
      • Kinesis Analytics
        • Kinesis Analytics works with Kinesis Streams and Kinesis Firehose and can analyze the data on the fly inside either service; it can then store the data in S3, Redshift or an Elasticsearch cluster.
        • If we want to analyze our data inside Kinesis then we use Kinesis Analytics.
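    • A tiny sketch of writing to and reading from a Kinesis stream via the CLI; the stream name and record contents are made up.
      # One-shard stream; read/write capacity scales with the number of shards
      aws kinesis create-stream --stream-name demo-stream --shard-count 1
      # Producer: write a record, routed to a shard by its partition key
      # (with AWS CLI v2 add --cli-binary-format raw-in-base64-out, or pass the data base64-encoded)
      aws kinesis put-record --stream-name demo-stream --partition-key user-42 --data "page_view"
      # Consumer: get a shard iterator, then read records with it
      aws kinesis get-shard-iterator --stream-name demo-stream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
      aws kinesis get-records --shard-iterator "<ShardIterator from the previous call>"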
  • Web identity Federation and Cognito
    • Web identity federation lets you give your users access to AWS resources after they have successfully authenticated with a web-based identity provider like Amazon, Facebook, or Google. Following successful authentication the user receives an authentication code from the web ID provider, which they can trade for temporary AWS security credentials.
    • Amazon Cognito provides web identity Federation with the following features.
      • Sign up and sign in to your apps
      • Access for guest users
      • Acts as An identity broker between your application and web ID providers, so you don’t need to write any additional code.
      • Synchronizes user data for multiple devices.
      • Recommended for all mobile applications that call AWS services.
    • Cognito is the recommended approach for web identity federation using social media accounts like Facebook.
    • Cognito brokers between the app and Facebook or Google to provide temporary credentials which map to an IAM role allowing access to the required resources.
    • There is no need for the application to embed or store AWS credentials locally on the device, and it gives users a seamless experience across all mobile devices.
    • We can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period in seconds. API Gateway then responds to requests by looking up the endpoint response from the cache instead of making a request to your endpoint.
    • In computing, the same-origin policy is an important concept in the web application security model. Under the policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin.
      • This is done to prevent cross-site scripting (XSS) attacks and is enforced by the web browser.
      • It is ignored by tools like Postman and curl.
    • CORS is one way the server at the other end (not the client code in the browser) can relax the same-origin-policy.
      • Cross-Origin Resource Sharing (CORS) is a mechanism that allows restricted resources (e.g. fonts) on a web page to be requested from another domain outside the domain from which the first resource was served.
      • The browser makes an HTTP OPTIONS call for a URL (OPTIONS is an HTTP method like GET, PUT, and POST).
      • The server returns a response that says "these other domains are approved to get this URL".
      • Error - "Origin policy cannot be read at the remote resource"? You need to enable CORS on API Gateway.
    • Cognito user pools are user directories used to manage sign-up and sign-in functionality for mobile and web applications. Users can sign in directly to the user pool, or via Facebook, Amazon, or Google; Cognito acts as an identity broker between the identity provider and AWS. Successful authentication generates a JSON Web Token (JWT).
    • Identity pools provide temporary AWS credentials to access AWS services like S3 or DynamoDB.
    • Cognito tracks the association Between user identity and the various different devices they sign in from. In order to provide seamless user experience for your application, Cognito uses push synchronization to push updates and synchronize user data across multiple devices.Cognito uses SNS to send a notification to all the devices associated with a Given user Identity whenever data stored in the cloud changes. 
      • User pools contain users' names, passwords, etc.
      • Identity pools grant permissions to access AWS resources.
    • Federation Allows Users to authenticate with a web identity provider (Google, Facebook, Amazon).
    • The user authenticates first with the web ID provider and receives an authentication token, which is exchanged for temporary AWS credentials allowing them to assume an IAM role.
    • Cognito is an identity broker which handles the interaction between your applications and the web ID provider (you don't need to write your own code for this).
    • User pool is user based. It handles things like registration, authentication and account recovery.
    • Identity pools authorize access to your AWS resources.
  • Event processing patterns
    • One or more AWS services perform work in response to events triggered by other AWS services.
    • This architectural pattern makes services more reusable, interoperable, and scalable.
    • Event Driven Architecture
      • In modern cloud architecture, applications are decoupled into smaller independent building blocks that are easier to develop, deploy and maintain.
      • Publish/subscribe (pub/sub) messaging provides instant event notifications for these kinds of distributed applications.
      • The pub/sub model allows messages to be broadcast to different parts of a system asynchronously.
      • An SNS message topic is central to this architecture: it provides a mechanism to broadcast asynchronous event notifications and endpoints that allow other AWS services to connect to the topic in order to send or receive those messages.
      • To broadcast a message, a component called the publisher pushes the message to the topic. The publisher can be a service or another application that publishes messages to SNS topics.
      • All services that subscribe to the topic will instantly receive the messages that are broadcast. Each subscriber processes the messages in parallel.
        • For example, if we want to send supplier information to different downstream systems, each subscribing system will process the information accordingly.
      • The publisher and subscriber are unknown to each other.
    • Dead Letter Queue (DLQ)
      • used for undelivered or unclaimed messages.
      • Three services that use DLQ's are as follows
        • SNS
          • Messages published to a topic that fail to deliver are sent to an SQS queue, where they are held for further analysis or reprocessing.
          • Messages may not be delivered due to client errors or server errors.
        • SQS
          • Messages sent to SQS that exceed the maximum receive count are sent to a DLQ (another SQS queue).
        • Lambda
          • Results from failed asynchronous invocations; Lambda will retry twice and then send the event to either an SQS queue or an SNS topic.
          • Messages from failed asynchronous executions of our Lambda functions.
    • Fan out pattern
      • The publisher sends a message to an SNS topic, which pushes it to more than one SQS queue; for example, an order system may send a message to the fulfillment system and the data warehouse at the same time.
      • The queues subscribed to topic will receive the messages.
    • S3 event notification
      • We can be notified as objects arrive in an S3 bucket or when other events are performed on them in the bucket.
      • Notifications can be sent to an SQS queue, an SNS topic, a Lambda function, or all three of them.
      • We can apply filters to notifications; for example, we can limit them to PNG files only.
      • To avoid missing notifications, enable versioning on the S3 bucket to ensure proper delivery of notifications.
      • Events in S3 that can send notifications
        • Object created
        • Object removed
          • Supports deletes of versioned and un-versioned objects
        • Object restored
          • Restoration of object in glacier.
        • RRS object lost
          • Detects that a reduced-redundancy storage object has been lost.
        • Replication
          • The replication failed.
          • Replication exceeds 15 minutes.
          • An object is no longer tracked by replication metrics.
      • pub/sub pattern is facilitated by SNS.
      • DLQ is supported by SNS, SQS, Lambda
      • Fan out pattern is supported by SNS.
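    • A sketch of wiring an S3 bucket's events to a Lambda function; the bucket name and function ARN are placeholders, and the function must already permit s3.amazonaws.com to invoke it.
      # notification.json - send ObjectCreated events for .png keys to a Lambda function
      {
        "LambdaFunctionConfigurations": [{
          "Id": "png-created",
          "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-image",
          "Events": ["s3:ObjectCreated:*"],
          "Filter": {
            "Key": { "FilterRules": [{ "Name": "suffix", "Value": ".png" }] }
          }
        }]
      }
      # Apply it to the bucket from the CLI:
      # aws s3api put-bucket-notification-configuration --bucket my-media-bucket --notification-configuration file://notification.json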
  • Application summary
    • SQS
      • SQS is a way to decouple your infrastructure.
      • SQS is pull based, not push based.
      • Messages can be up to 256 KB in size.
      • Messages can be kept in a queue from one minute to 14 days, the default retention is four days.
      • Standard SQS and FIFO SQS
        • For standard order is not guaranteed and messages can be delivered more than once.
        • FIFO order is strictly maintained and messages are delivered only once.
        • Visibility timeout is the amount of time that the message is invisible in the SQS queue after a reader picks up the message. Provided the job is processed before the visibility timeout expires, the message will be deleted from the queue. If the job is not processed within that time, the message will become visible again and another reader will process it. This could result in the same message being delivered twice.
          • Visibility timeout maximum is 12 hours.
        • SQS guarantees that your messages will be processed at least once.
        • Amazon SQS long polling is a way to retrieve messages from your Amazon SQS queues. 
        • While the regular short polling returns immediately (even if the message queue being polled is empty). Long polling doesn't return a response until a message arrives in the message queue, or the long poll times out.
    • SWF vs SQS
      • SQS has a retention period of up to 14 days, with SWF workflow executions can last up to 1 year.
      • Amazon SWF presents a task-oriented API, whereas Amazon SQS offers a message-oriented API.
      • Amazon SWF ensures that a task is assigned only once and never duplicated.
      • With Amazon SQS you need to handle duplicated messages and may also need to ensure that a message is processed only once.
      • Amazon SWF keeps track of all the tasks and events in an application. With Amazon SQS, you need to implement your own application-level tracking, especially if your application uses multiple queues.
    • SWF actors
      • Workflow Starters - An application that can initiate(start) a workflow. Could be your E commerce website following the placement of an order or a mobile app searching for bus times.
      • Deciders - control the flow of activity tasks in a workflow execution. If something has finished or failed in a workflow, a decider decides what to do next.
      • Activity workers - carry out the activity tasks.
    • SNS benefits
      • Instantaneous, Push based delivery (no polling).
      • Simple APIs and easy integration with applications.
      • Flexible message delivery over multiple transport protocols.
      • Inexpensive, pay as you go model with no upfront costs.
      • Web based AWS management console offers the simplicity of a point and click interface.
    • SNS versus SQS
      • Both messaging services in AWS.
      • SNS - push
      • SQS - polls(Pulls)
    • Elastic Transcoder is a media transcoder in the cloud. It converts media files from their original source format into different formats that will play on smartphones, tablets, PCs, etc.
    • API gateway
      • Remember what API Gateway does at a high level.
      • API gateway has caching capabilities to increase performance.
      • API gateway is low cost and scales automatically.
      • You can throttle the API gateway to prevent attacks.
      • You can log results to CloudWatch.
      • If you are using JavaScript, Ajax that uses multiple domains with API gateway, ensure that you have enabled CORS on API gateway.
      • CORS is enforced by client browser.
    • Kinesis
      • Kinesis Streams are persistent; they store your data for 24 hours (up to 7 days).
      • Kinesis Firehose has no persistence; data must be processed as it comes in and then delivered to a destination.
      • Kinesis Analytics helps you analyze data inside Streams and Firehose.
    • Cognito 
      • Cognito provides web identity federation, which allows us to authenticate with a web identity provider (Google, Facebook, Amazon).
      • The user authenticates first with the web ID provider and receives an authentication token, which is exchanged for temporary AWS credentials allowing them to assume an IAM role.
      • Cognito is an identity broker which handles interaction between your applications and the web ID provider.
      • User pool is user based. It handles things like registration, authentication and account recovery.
      • Identity pools authorize access to your AWS resources.
Security
  • Reducing Security Threats
    • Various bad actors that may impact application performance or steal data are as follows.
      • Typically automated processes
      • Content scrapers
      • Bad bots
      • Fake user agents
      • Denial of service (DoS)
    • Benefits of preventing bad actors
      • Reduce security threats
      • Lower overall costs
    • We can use network access control lists to block traffic from a suspected IP address.
    • We can also run a host based firewall on EC2 instance which can provide an additional layer of security.
      • Only allow application load balancer security group access to the EC2 security group.
      • We can also use a NACL to block that IP address on a network load balancer.
      • While using application load balancer we can also use a web application firewall.
        • AWS provides web application firewall service.
        • If we want to block attacks like SQL injection or cross-site scripting, then we use WAF.
        • WAF operates at layer 7 and can block such attacks.
      • If we want to block an IP or a range of IPs, then we should operate at layer 4 and use a NACL (see the sketch below).
      • For a public web application we should use WAF.
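    • For example, blocking a single suspect IP at layer 4 with a network ACL deny rule might look like this (the ACL ID and address are placeholders).
      # Deny all inbound traffic from 203.0.113.25; rule 90 is evaluated before higher-numbered allow rules
      aws ec2 create-network-acl-entry \
        --network-acl-id acl-0123456789abcdef0 \
        --ingress \
        --rule-number 90 \
        --protocol=-1 \
        --rule-action deny \
        --cidr-block 203.0.113.25/32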
  • Key management service(KMS)
    • Regional secure key management and encryption and decryption.
    • Manages customer master keys (CMK's).
    • Ideal for S3 objects, database passwords and API keys stored in systems manager parameter store.
    • Encrypt and decrypt up to 4 kb size.
    • Integrated with most AWS services.
    • Pay per API call.
    • Audit capability using CloudTrail - logs delivered to S3.
    • FIPS 140-2 Level 2.
    • Level 3 is cloud HSM.
  • Three types of CMK's
    • AWS managed CMK
      • Free; used by default if you pick encryption in most AWS services; only that service can use them directly.
    • Customer managed CMK
      • Allows key rotation; controlled via key policies; can be enabled or disabled.
    • AWS owned CMK
      • Used by AWS on a shared basis across many accounts; you typically won't see these.
  • Encryption types that apply to CMKs
    • Symmetric CMKs
      • Same key used for encryption and decryption
      • AES-256
      • Never leaves AWS unencrypted
      • Must call the KMS APIs to use
      • AWS services integrated with KMS use symmetric CMKs
      • Encrypt, decrypt and re-encrypt data
      • Generate data keys, data key pairs, and random byte strings
      • Import your own key material
    • Asymmetric CMKs
      • Mathematically related public/private key pair
      • RSA and elliptic curve cryptography (ECC)
      • Private key never leaves AWS unencrypted
      • Must call the KMS APIs to use the private key
      • Download the public key and use it outside AWS
      • Used outside AWS by users who can't call the KMS APIs
      • AWS services integrated with KMS do not support asymmetric CMKs
      • Sign messages and verify signatures
    • Default key policy grants AWS account root user full access to CMK.
    • If we have an object in one region encrypted with a CMK and we need to move it to another region, then we must decrypt it, move it to the other region, and encrypt it again there.
    • To create a CMK from command line use the following commands
      • aws kms create-key --description "Demo CMK"
        • copy the keyid from the values returned
      • Next create an alias for the key using the following command
        • aws kms create-alias --target-key-id {{keyid}} --alias-name "alias/mydemo"
      • To check your key use
        • aws kms list-keys
    • With the help of alias we can rotate keys periodically.
    • To encrypt a file use the following command
      • aws kms encrypt --key-id "alias/mydemo" --plaintext file://filename.txt --output text --query CiphertextBlob
    • To decrypt, use
      • aws kms decrypt --ciphertext-blob fileb://filename.txt.encrypted --output text --query Plaintext
    • We can encrypt and write the binary ciphertext to a file using
      • aws kms encrypt --key-id "alias/mydemo" --plaintext file://filename.txt --output text --query CiphertextBlob | base64 --decode > filename.txt.encrypted
    • We can now decrypt and decode using
      • aws kms decrypt --ciphertext-blob fileb://filename.txt.encrypted --output text --query Plaintext | base64 --decode
    • A CMK can only be used to encrypt or decrypt data up to 4 KB in size; if we want to encrypt a data file larger than 4 KB we use a data encryption key (DEK).
      • aws kms generate-data-key --key-id "alias/mydemo" --key-spec AES_256
      • We get plain text key and ciphertext blob in output.
      • The ciphertext blob is used to identify which CMK generated this data key.
      • The plaintext key can be used to encrypt any amount of data.
      • We must discard the plaintext data key once data is encrypted and store the cipher Blob with data for reference.
      • The safety is that an attacker would need to decrypt the ciphertext blob (which requires KMS permissions on the CMK) before they could decrypt the data.
        • This is also called envelope encryption.
  • Cloud HSM
    • Dedicated hardware security module(HSM)
    • FIPS140-2 Level 3
    • Level 2 is KMS
    • Manage your own keys
    • No access to the AWS-managed component.
    • Runs within a VPC in your account.
    • Single tenant, dedicated hardware, multi-AZ cluster.
    • Industry-standard API's - no AWS API's
    • PKCS # 11
    • Java Cryptography Extensions(JCE)
    • Microsoft Crypto NG(CNG)
    • Keep your keys safe - irretrievable if lost.
    • Regulatory compliance requirements.
    • FIPS 140-2 Level 3.
  • AWS Shield
    • Protects against distributed denial-of-service (DDoS) attacks.
    • There are two types of Shields
      • AWS shield standard
        • Automatically enabled for all customers at no cost.
        • Protects against common layer 3 and 4 attacks
          • SYN/UDP floods
          • Reflection attacks
        • Stopped a 2.3 Tbps DDoS attack for three days in February 2020.
      • AWS shield advanced
        • $3000 per month, per organization.
        • Enhanced protection for EC2,ELB,cloudfront,Global Accelerator, Route 53.
        • Business and Enterprise support customers get 24x7 access to the DDoS Response Team (DRT).
        • DDOS cost protection.
  • Web Application Firewall
    • A web application firewall that lets you monitor HTTP(S) requests to CloudFront, an ALB, or API Gateway.
      • Control access to content.
      • Configure filtering rules to allow/deny traffic.
        • IP addresses.
        • Query string parameters.
        • SQL query injection.
      • Blocked traffic returns HTTP 403 Forbidden.
    • Web application firewall allows three different behaviors.
      • Allow all requests, except the ones you specify.
      • Block all requests, except the ones you specify.
      • Count the requests that matched the properties you specify.
    • Request properties
      • originating IP address
      • Originating country
      • Request size
      • Values in request headers
      • Strings in the request matching regular expression (regex) patterns.
      • SQL code (injection)
      • Cross site scripting(XSS).
    • AWS firewall manager
      • Centrally configure and manage firewall rules across an AWS organization.
      • We can apply WAF rules for our
        • ALB
        • API gateway
        • Cloudfront distributions
      • AWS shield advanced protection
        • ALB
        • ELB classic
        • EIP
        • Cloudfront distributions
      • Enable security groups for EC2 and ENI's
Serverless
  • Lambda
    • History of cloud
      • Data Center.
      • IAAS
      • PAAS
      • Containers
      • Serverless
    • Lambda is the ultimate abstraction layer.
      • Data centers
      • Hardware
      • Assembly code/Protocols
      • high level languages
      • Operating systems
      • Application layer / AWS API's
      • AWS Lambda
    • AWS Lambda is a compute service where you can upload your code and create a Lambda function. AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don't have to worry about operating system, patching, scaling etc.
    • You can use Lambda in the following ways.
      • As an event-driven compute service where AWS Lambda runs your code in response to events. These events could be changes to data in an Amazon S3 bucket or an Amazon Dynamo DB table.
      • As a compute service to run your code in response to HTTP requests using Amazon API gateway or API calls made using AWS SDK's
    • Traditional versus serverless architecture.
      • The user sends a request that hits Route 53, then goes to our elastic load balancer, which sends the request to our web server; the web server gets data from the RDBMS server, processes it, and sends the response back to the user.
      • Using a serverless architecture we eliminate the use of a virtual machine or operating system.
        • The request goes to API Gateway, which invokes Lambda, which reads from and writes to DynamoDB or Amazon Aurora.
        • If we have many users hitting our API gateway, it will scale automatically.
    • Languages that Lambda Support
      • Node Js
      • Java
      • Python
      • C#
      • Go
      • Powershell
    • How Lambda is priced
      • Number of Requests
        • First 1 million requests are free. $0.20 per 1 million requests there after.
      • Duration
        • Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
    • Advantages of Lambda
      • No Servers
      • continuous scaling
      • Super cheap.
      • Alexa uses Lambda
      • Lambda scales out automatically
      • Lambda functions are independent
        • One event equals one function
      • Lambda functions can trigger other Lambda functions, one event can trigger X functions if functions trigger other functions.
      • Services of AWS which supports serverless.
        • Dynamo DB
        • Aurora
        • S3
      • Architecture can get extremely complicated, AWS X-ray allows you to debug what is happening.
      • Lambda can do things globally, you can use it to backup S3 buckets to other S3 buckets etc.
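    • A bare-bones sketch of creating and invoking a function from the CLI, assuming a zipped Python handler and an existing execution role (the names and ARN are placeholders).
      # Package a single-file handler and create the function
      zip function.zip lambda_function.py
      aws lambda create-function \
        --function-name demo-fn \
        --runtime python3.9 \
        --role arn:aws:iam::123456789012:role/lambda-exec-role \
        --handler lambda_function.lambda_handler \
        --zip-file fileb://function.zip
      # Invoke it once and print the response payload
      aws lambda invoke --function-name demo-fn out.json && cat out.json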
  • Building an Alexa skill
    • A hardware enabled device with Alexa service enabled and then it communicates with AWS technologies like Automatic Speech Recognition, Natural Language Understanding, Text to Speech, Skills, Learning etc.
      • This helps us create Alexa skills using Amazon developers library.
        • developer.amazon.com
    • create an S3 bucket and make it public
    • Create a Polly service instance
      • Select language and region
      • Voice type
      • Synthesizes the voice to your bucket
    • create a Lambda function
      • We need to create a function in region where Alexa trigger is available.
      • Create AWS serverless application repository function.
        • Select Alexa-skills-kit-node JS-fact skill.
        • Click deploy and it will deploy our Alexa skill to Lambda
      • Under Lambda functions you can see when the skill has been deployed.
      • Click on the function and we can see our code and customize it
      • Next sign into Amazon developer account
      • Click on Amazon Alexa and go to my skills and create your fact skill.
      • In invocation, give it a name and an endpoint, which is our function's ARN.
      • We can then test our function.
  • Serverless application model (SAM)
    • open source framework that allows us to build serverless applications easily.
    • Cloud formation extension optimized for serverless applications.
    • New types: functions, APIs, tables.
    • Supports anything cloud formation supports.
    • Run serverless applications locally
    • Package and deploy using code deploy.
    • Log in to terminal and install SAM.
      • run "sam init".
      • Select template
      • Select language
      • Give project name
      • Select example template
      • Your Project will be created
      • "sam build" to build your application
      • "sam deploy --guided" to deploy your application.
  • Elastic container service (ECS)
    • Container and Docker
      • A container is a package that contains an application, libraries, runtime, and tools required to run it.
      • Runs on a container engine such as Docker.
      • Provides the isolation benefits of virtualization with less overhead and faster starts than virtual machines.
      • Containerized applications are portable and offer a consistent environment.
    • ECS
      • Managed container orchestration service
      • Create clusters to manage fleet of container deployments
      • ECS manages EC2 or Fargate instances.
      • Schedules containers for optimal placement.
      • Defines rules for CPU and memory requirements.
      • Monitors resource utilization.
      • Deploy, Update, Rollback.
      • The ECS service itself is free; you pay only for the underlying EC2 or Fargate resources.
      • Integrates with VPC, security groups, EBS volumes.
      • Integrates with ELB.
      • CloudTrail and CloudWatch integration.
    • Components of ECS
      • Cluster
        • Logical collection of ECS resources, either ECS EC2 instances or Fargate instances.
      • Task Definition
        • Defines your application. Similar to a Dockerfile, but for running containers in ECS; it can contain multiple containers (a registration sketch follows this list).
      • Container definition
        • Inside a task definition it defines the individual containers a task uses. Controls CPU and memory allocation and port mappings.
      • Task
        • A single running copy of the containers defined by a task definition; one working copy of an application (for example, the DB and web containers).
      • Service
        • Allows task definitions to be scaled by adding tasks. Defines minimum and maximum values.
      • Registry
        • Storage for container images (e.g. Elastic Container Registry (ECR) or Docker Hub). Used to download images to create containers.
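    • To tie these terms together, here is a hedged boto3 sketch that registers a task definition containing a single container definition; the family name, image, and port are hypothetical:

      import boto3

      ecs = boto3.client("ecs")

      # Register a task definition with one container definition.
      response = ecs.register_task_definition(
          family="web-app",                    # hypothetical family name
          requiresCompatibilities=["FARGATE"],
          networkMode="awsvpc",
          cpu="256",                           # task-level CPU (0.25 vCPU)
          memory="512",                        # task-level memory in MiB
          containerDefinitions=[
              {
                  "name": "web",
                  "image": "nginx:latest",     # hypothetical image
                  "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
                  "essential": True,
              }
          ],
      )

      print(response["taskDefinition"]["taskDefinitionArn"])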
    • Fargate
      • serverless container engine
      • Eliminates need to provision and manage servers
      • Specify and pay for resources per application
      • works with both ECS and EKS
      • each workload runs in its own kernel.
      • Isolation and security
      • Choose EC2 instead if
        • Compliance requirements
        • Require broader customization
        • Require GPUs
    • Elastic Kubernetes Service (EKS)
      • Kubernetes is open-source software that lets you deploy and manage containerized applications at scale.
      • Same tool set on premises and in cloud
      • Containers are grouped into pods.
      • Like ECS, EKS supports both EC2 and Fargate.
      • Use EKS if
        • already using Kubernetes.
        • Want to migrate to AWS.
    • ECR
      • Managed Docker container registry.
      • Store, manage, and deploy images.
      • Integrated with ECS and EKS.
      • Works with on premises deployments.
      • Highly available.
      • Integrated with IAM.
      • Pay for storage and data transfer.
    • ECS + ELB
      • Distributes traffic evenly across tasks in your service.
      • Supports ALB, NLB, CLB.
      • Use ALB to route HTTP/HTTPS (Layer 7) traffic.
      • Use NLB or CLB to route TCP (Layer 4) traffic.
      • Supported by both EC2 and Fargate launch types.
      • ALB allows
        • Dynamic host port mapping
        • Path based routing
        • Priority rules
      • ALB is recommended over NLB or CLB (a service-creation sketch follows this list).
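      • As a sketch of the wiring: when creating an ECS service you point it at an existing target group, and ECS registers each task's container port with that group. All names, ARNs, and IDs below are hypothetical placeholders.

        import boto3

        ecs = boto3.client("ecs")

        # Create a service whose tasks are registered behind an ALB target group.
        ecs.create_service(
            cluster="my-cluster",                 # hypothetical cluster name
            serviceName="web-service",
            taskDefinition="web-app",             # task definition family registered earlier
            desiredCount=2,
            launchType="FARGATE",
            loadBalancers=[
                {
                    "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web/abc123",  # placeholder ARN
                    "containerName": "web",
                    "containerPort": 80,
                }
            ],
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],      # placeholder subnet
                    "securityGroups": ["sg-0123456789abcdef0"],   # placeholder security group
                    "assignPublicIp": "ENABLED",
                }
            },
        )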
  • Serverless Summary
    • Lambda scales out, not up, automatically.
    • Lambda functions are independent, one event equals one function.
    • Lambda is serverless.
    • Lambda functions can trigger other Lambda functions; one event can fan out and trigger many functions if functions trigger other functions.
    • Architectures can get extremely complicated; AWS X-Ray allows you to debug what is happening.
    • Lambda can do things globally, you can use it to back up S3 buckets to other S3 buckets.
