
AWS Certified Cloud Practitioner Crash Course


I studied for my AWS Certified Cloud Practitioner exam for a fortnight, and I want to share with you what I have learnt. Writing it down helps me reinforce the concepts, and it might help you with your exam as well. None of the information in this article is my own invention; it is a summary of what I learned from ACloudGuru’s course, the official Amazon training and the tutorial on the freeCodeCamp YouTube channel. 

The information in this article is up to date, drawn from official AWS documentation and white papers, ACloudGuru, and freeCodeCamp. Be assured that you are getting a lot of valuable information for free. This article alone is enough to pass the exam if you have a few weeks of experience with AWS. If not, pair it up with the labs from the sources mentioned in the article, and you are good to go.


I would recommend that you stick to this article and learn all the concepts and information presented here. Once you are comfortable with it, I suggest you go to the freeCodeCamp YouTube channel and do the AWS labs. I advise you to read all the links and extra information embedded in the article as well. 

If things are still unclear and you would like more guidance, I recommend you create an account on ACloudGuru and do their course (which is fantastic, by the way). 

Let’s jump straight into it then!


Let’s start with some fundamental cloud concepts. The first question you might ask yourself is “what is cloud computing?”. In layman’s terms, cloud computing is simply using someone else’s computer. Instead of owning your own server, you rent one from somebody else, like AWS. More formally, cloud computing is the on-demand delivery of compute power, database storage, applications and other IT resources over the internet on a pay-as-you-go basis.

What is the difference between cloud computing and on-premise? The most significant difference is that with cloud computing you are not concerned with the underlying physical infrastructure. You only care about managing your virtual servers (e.g. applying security patches) and the applications deployed on them. 

What are the benefits of cloud computing? There are six significant benefits of cloud computing. These are:

  1. Variable expense versus capital expense => This means that you pay only when you consume resources, and only for the resources you consume. 
  2. No capacity guessing => You avoid having underutilized or over-utilized resources. Either you pay for idle capacity, or your applications go down because capacity is exhausted. With cloud computing, you can quickly scale up or down as business needs change. 
  3. Increased speed and agility => Resources can be created or stopped within minutes with cloud computing. You do not have to wait for your IT team for weeks to implement on-premise solutions. 
  4. Benefit from massive economies of scale => You are sharing the cost with other customers to get significant discounts.
  5. Go global => With minimal effort and within a few clicks, you can deploy your application in multiple locations around the world. That means lower latency and better user experience. 
  6. Stop spending money on running and maintaining data centres => Avoid the headache, money, time and other resources needed to build your infrastructure. Let someone else do it, and focus on your applications. 

Now you know what cloud computing is and what its six significant benefits are. The next step is to learn the different types of cloud computing.

What are the different types of cloud computing? There are three different types, as follows:

  1. IaaS (Infrastructure as a Service) (For admins) – You are responsible for managing your servers (either physical or virtual). An example is AWS EC2. 
  2. PaaS (Platform as a Service) (For developers) – There is no need for you to manage the underlying architecture. You are only concerned with deploying and running your applications. An example would be Heroku (where you deploy and run web applications).
  3. SaaS (Software as a Service) (For customers) – Here we are talking about a finished product that is run and managed by the service provider. An example would be Google’s Gmail. You do not have to worry about anything, other than using the service.

Besides the three different cloud computing services, there are three types of cloud computing deployments as well. These are as follows:

  1. Public => Fully utilizing cloud computing. Examples are AWS and Azure. 
  2. Hybrid => Using a mix of public and private deployments. Sensitive and critical information might be stored in a “private” cloud, whereas other information is stored on the “public” cloud. 
  3. Private => Deploying resources on-premise, and using virtualization and resource management tools. 

This is it with the introduction to cloud concepts. Now it is time to dig deep down into the AWS infrastructure. 


At the moment, there are 69 availability zones within 22 geographic regions. There are over 200 edge locations, and the number is likely to increase. 

What is a region? A region is a geographic area. A region consists of AT LEAST two availability zones, in case one of the data centres goes down. An example of a region is eu-west-1 (Ireland). Every region is independent of the others, and us-east-1 (N. Virginia) is the largest region. As a consequence, almost all new services become available in this region first. 

What is an availability zone (AZ)? An availability zone is a data centre (a building containing lots of physical servers). An availability zone might consist of several data centres, but because they are close to each other, they are counted as one AZ. 

What is an edge location? An edge location is an AWS endpoint for caching content. They typically consist of CloudFront, which is AWS’s content delivery network. The purpose of these edge locations is to provide low latency for the end-users.

There are more edge locations than availability zones, and more availability zones than regions: Edge locations > Availability Zones > Regions

There is a special region which is not available to everyone. This region is called GovCloud, and it is only accessible to companies from the US and US citizens. You also have to pass a screening process. GovCloud allows users to host sensitive Controlled Unclassified Information such as military information. 


One of the downsides of AWS is how easy it is to run up an enormous bill. If you do not pay attention and do not make the most of budgets and billing alarms, you may rack up a bill of a few thousand dollars or even more. 

A billing alarm allows you to set spending limits to make sure you do not overspend. Once you cross a certain threshold and get close to the set limit, you are notified. 

The billing metrics in CloudWatch are only available in us-east-1.

This is all for the time being. We will dig deeper into budgets, and more, in the next section called Billing and Pricing.


IAM is one of the essential tools in AWS. Another vital thing to remember is that IAM is global, which means you do not have to select a region to access it. A company has different groups of people – some are developers, some are from human resources, and so on – and they need different types of access. IAM gives you the ability to do just that: it allows you to create users, groups and roles. It also allows you to apply a password policy, which specifies what a password needs to contain (numbers, special characters, and so on). All users and groups are created GLOBALLY.

Based on AWS best practices, you should not use the root account for everyday tasks or give anyone root access. Once someone gets hold of the root account, they have access to absolutely everything in that account. You should also activate multi-factor authentication (MFA) on it.

By the way, you can set permissions for a group by applying a policy to it. A policy is just a JSON document made of key-value pairs. 
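To make that concrete, here is a minimal policy document built as a Python dictionary and serialized to JSON. The bucket name is hypothetical; the structure (Version, Statement, Effect, Action, Resource) is the standard IAM policy shape:

```python
import json

# A minimal IAM policy document granting read-only access to one
# (hypothetical) S3 bucket. A policy is a JSON object of key-value
# pairs: an Effect, a list of Actions, and a Resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching a policy like this to a group gives every user in the group exactly those permissions and nothing more.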


This section comprises the different AWS technologies such as computing services, storage services, logging services and many more.


AWS Organizations is an account management service that allows you to consolidate multiple AWS accounts into an organization that you create and centrally manage. It provides the ability to centrally manage billing, control access, security and compliance, and share resources across your AWS accounts. For example, you can simplify billing by setting up a single payment method for all your AWS accounts. 

Organizational units are groups within an organization that can contain other organizational units. AWS Organizations allows you to isolate different departments in the company – for instance, separating developers from human resources. 

The purpose of creating organizations for your teams is that you can attach policies and specify the access for each team. Service control policies define the rules for each organizational unit, ensuring that your accounts stay within your organizational unit guidelines.


There are several AWS Compute Services. For this exam, we are only looking at EC2, ECS, Elastic Beanstalk, Fargate, EKS, Lambda and Batch. 


The first in line is EC2, which belongs to the computing services. EC2 stands for Elastic Compute Cloud, and it is just a virtual server (or servers) in the cloud. EC2 makes it easy to scale up or down, depending on how your requirements change. 

There are different types of pricing for EC2 instances. They are as follows:

  1. On-demand 
    • It is low cost and provides greater flexibility, as it does not require any up-front payment or long-term commitment.
    • Pay a fixed amount per hour of usage.
    • Suitable for applications with short term, spiky and unpredictable workloads that cannot be interrupted.
  2. Spot 
    • The price moves all the time, and you have to bid a price. Your instance runs when your bid exceeds the spot price.
    • This type of pricing is the best for flexible applications, where the start and end times are irrelevant. It is suitable for data analysis, batch jobs, background processing and optional tasks.
  3. Reserved 
    • The best option for the long term.
    • You are tied to a contract: you can sign a one-year or a three-year contract.
    • The longer the contract, and the more you pay upfront, the cheaper it is. 
    • It allows you to resell unused reserved instances.
    • It is suitable for applications with predictable usage and with a steady-state.
    • You can pay all upfront, partial upfront, and no upfront.
  4. Dedicated
    • The most expensive of all these pricing models. 
    • They are physical EC2 servers dedicated to you only.
    • Can be purchased on-demand (per hour basis) or as reserved instances for up to 70% off the on-demand price.
    • Useful when there are regulatory requirements that might not support multi-tenant virtualization, or for licensing which does not support tenancy cloud deployments. 

It is important to note that if Amazon terminates your EC2 instance, you are not billed for the partial hour of usage. However, if you terminate your EC2 instance, you are charged for any hour in which the instance ran. 
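A rough cost sketch makes the on-demand versus reserved trade-off concrete. The hourly rates below are made-up placeholders, not real AWS prices:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

# Hypothetical hourly rates (NOT real AWS prices) for one instance type.
on_demand_rate = 0.10   # pay a fixed amount per hour of usage
reserved_rate = 0.06    # cheaper in exchange for a one-year commitment

# Running one instance 24/7 for a year (a steady-state workload):
on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_rate * HOURS_PER_YEAR

savings = 1 - reserved_cost / on_demand_cost
print(f"On-demand: ${on_demand_cost:.2f}, Reserved: ${reserved_cost:.2f}, "
      f"saving {savings:.0%}")
```

This is why reserved instances only pay off for predictable, steady-state usage: the discount assumes the instance actually runs for the whole contract.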


ECS is a highly scalable, high-performance container orchestration service that supports Docker containers. It allows you to deploy and run containerized applications on AWS quickly. You choose the type of EC2 instance you want, and it comes pre-configured with Docker. 

You can quickly start or stop an application; you can access other services and resources such as IAM, CloudFormation templates, load balancer, CloudTrail logs or CloudWatch events. With ECS, you have to pay for the EC2 instances it uses.


When you think of Fargate, I want you to associate it with the buzz-word serverless. Fargate gives you the ability to run containers without having to manage servers or clusters. Basically, you deploy applications without worrying about the infrastructure. There is no need anymore for you to choose server types and to decide how and when to scale your clusters.

ECS comes with two launch types: Fargate and EC2. For the Fargate launch type, all you have to do is to pack your application in a container, specify the CPU, the memory, define the network and IAM policies. Once you have done all of the above, your application is ready to be deployed.

With Fargate, you pay per task and for CPU and memory utilization. That means you do not pay for EC2 instances. Fargate is suitable for applications that have consistent workloads and are packaged as Docker containers.


Doesn’t EKS sound similar to ECS? Of course it does. They have the same purpose, with one distinct difference: EKS allows you to deploy, manage, scale and run microservices using open-source Kubernetes. 

One cool thing about EKS is that it runs the Kubernetes management infrastructure for you across multiple AWS availability zones. Why is that? To eliminate the single point of failure.


Lambda functions are serverless functions: they take care of everything after you have uploaded your code. Basically, AWS Lambda allows you to run your code without provisioning or managing servers. 

You pay for the compute time you consume. When the Lambda is not running, there is no charge. A use case for Lambda functions would be unpredictable and inconsistent workloads. 


AWS Elastic Beanstalk provides a quick and easy way of deploying your application on AWS. This service automatically handles capacity provisioning, load balancing, autoscaling and health monitoring. 

More on Elastic Beanstalk later in the “AWS Provisioning Services” section.


AWS Batch allows you to plan, manage and execute your batch processing jobs. This service plans, manages and runs your batch processing workloads across the full range of AWS Compute Services such as EC2 and spot instances. 

You can read more about the AWS Compute Services in the official AWS documentation.


We also need to store our data somewhere, right? Not to worry, AWS allows us to do just that with a wide range of services. Let’s jump straight in!


The first in line is one of the oldest and most fundamental AWS services – S3. S3 allows users to store and retrieve any amount of data from anywhere in the world. It provides highly scalable, secure and durable object storage. In simpler words, S3 is a safe place to store your flat files (e.g. videos, images, etc.). By flat, I mean that the content does not change (e.g. you cannot run a database in S3, as its contents change continuously). The data in your S3 buckets is spread across multiple facilities and devices, in case of failures.

But wait, what do you mean by “object storage”? Data is stored in buckets, and each bucket consists of key-value pairs. The key is the name of the file, and the value is the contents of the file.
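Conceptually (this is just a mental model, not the S3 API), a bucket behaves like a key-value map from object names to object contents:

```python
# A toy model of an S3 bucket: keys are object names, values are contents.
bucket = {}

# "Uploading" two objects (a successful real upload returns HTTP 200).
bucket["images/logo.png"] = b"\x89PNG..."   # key: file name, value: bytes
bucket["videos/intro.mp4"] = b"\x00\x00..."

# "Retrieving" an object is a lookup by key.
assert bucket["images/logo.png"].startswith(b"\x89PNG")
print(sorted(bucket))
```

Note that keys like "images/logo.png" only look like folder paths; the namespace inside a bucket is flat.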

Some essential quick points about S3 are:

  • It is object-based
  • Files can range from 0 Bytes to 5 TB
  • You have unlimited storage
  • Files are stored in buckets
  • Buckets must have unique names because the S3 namespace is universal – that means, there cannot be two buckets with the same name in the world. 
  • When an object is uploaded successfully to a bucket, S3 returns an HTTP 200 status code

What are the features of the S3 service?

  • Tiered storage available – different types of storage for different use cases
  • Versioning – it keeps multiple versions of the same file. This allows recovering files in the event of failure or unintended user actions.
  • Lifecycle management – It represents a set of rules to decide what to do with your data stored. For example, you could define when a group of objects should be transferred to another storage class – e.g. for archiving data. Or set a rule to delete the files after they expire.
  • Encryption – It allows you to set necessary encryption behaviour for your S3 buckets. For instance, encrypt the files before they are uploaded and decrypt them when they are downloaded. 
  • You secure your data through Access Control Lists (on an individual file basis) and Bucket Policies (applied across entire buckets).

S3 data consistency is of vital importance as well. What about it, though?

  • Read after Write consistency for PUTS of new objects. You might ask yourself “Whaaat?”. That means you can access the data uploaded to the S3 buckets as soon as the data is uploaded. You can access and view the new file immediately.
  • Eventual consistency for overwrite PUTS and DELETES. That means after deleting a file, you might still be able to access it for a little while. It also means that when you update an existing file, you might get the old version if you access it straight after updating. Why is that? It takes time for the changes to propagate; as we have seen above, the data in S3 buckets is spread across multiple devices and facilities. 

How does S3 charge you? S3 charges you based on:

  • Storage
  • Requests 
  • Storage management pricing 
  • Data transfer pricing
  • Transfer acceleration
  • Cross-region replication

The last thing that remains is to look at the different S3 storage classes. They are as follows:

  1. S3 Standard
    • This storage class comes with 99.99% availability and 99.999999999% durability.
    • The data is stored on multiple systems across multiple facilities to sustain the loss of two facilities at the same time
  2. S3 IA (Infrequently Accessed)
    • This storage class is for data that is infrequently accessed but requires quick access when it is needed.
    • Even though it is cheaper than the standard storage, it charges you per file retrieval.
  3. S3 One Zone IA
    • Basically, it is the same thing as S3 IA with the only difference being that your data is stored in one place only – no multiple AZs. 
  4. S3 Intelligent Tiering
    • This storage class automatically moves your data to the most cost-efficient storage tier. E.g. it could push your data from S3 Standard to S3 One Zone IA to reduce costs.
    • It does not impact performance. 
  5. S3 Glacier
    • S3 Glacier is suitable for data archiving where retrieval times between minutes to hours are accepted.
    • It is the second-lowest-cost storage class. 
  6. S3 Glacier Deep Archive
    • Basically, it is the same as S3 Glacier, with one significant difference: data retrieval takes up to twelve hours. 
    • It is also the lowest-cost storage class. 

The figure below compares the S3 storage classes.

S3 Storage Classes Comparison

There are multiple database services, split into two categories: NoSQL and SQL (relational) databases. The NoSQL databases available on AWS are:

  • DynamoDB – It is Amazon’s product
  • DocumentDB

The SQL (relational) databases are:

  • Aurora – It is Amazon’s product (5 times faster than MySQL)
  • MySQL
  • PostgreSQL
  • MariaDB
  • Oracle
  • Microsoft SQL Server

The relational databases have two key features:

  1. Multi-AZ => They are deployed in multiple availability zones for disaster recovery.
  2. Read replicas => Data is read from replicas, instead of being read from the database itself. The writes are done to the database, but the data is read from replicas.

There are three other database services, as follows:

  • Neptune => Graph database developed by Amazon
  • Redshift => Columnar database developed by Amazon, used for data warehousing
  • ElastiCache => Redis, or Memcached database

Provisioning refers to the creation and setup of resources and services for a customer. Basically, it is how you create the AWS resources your applications need. The AWS provisioning services are:

  1. CloudFormation
  2. Elastic Beanstalk
  3. OpsWorks 
  4. AWS QuickStart
  5. AWS Marketplace 

Let’s start with CloudFormation, which is one of the most powerful and helpful tools in AWS. CloudFormation lets you describe your infrastructure as code in JSON or YAML templates, which are deployed as stacks. You might ask, “What do you mean by turning infrastructure into code?”. It means you can programmatically specify all the resources needed by your application, and they will be created automatically. You do not have to manually create resources in the AWS console and then link them together. 

The AWS documentation includes example CloudFormation templates (in both JSON and YAML format), including one that creates an EC2 instance with security groups. If you plan on advancing in your AWS knowledge and career, you should get familiar with CloudFormation.
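To give you a flavour of what a template looks like, here is a minimal YAML sketch that declares a single EC2 instance; the AMI ID is a placeholder you would replace with a real one for your region:

```yaml
# Minimal CloudFormation template sketch: one EC2 instance.
# The ImageId below is a placeholder, not a real AMI.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example - a single EC2 instance
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-00000000000000000
```

Everything under Resources is created (and deleted) together as one stack, which is what makes infrastructure reproducible.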

The next service is Elastic Beanstalk. Elastic Beanstalk allows you to upload your application code, and it automatically creates all the resources for you (it provisions your EC2 instances, your security groups, your application load balancers; all with the click of a button). It automatically handles the details of capacity provisioning, load balancing, scaling, and application monitoring.  

It is an excellent service for quickly deploying and managing applications in the cloud without worrying about the infrastructure if you are not familiar with AWS. It automates everything for you. If you want to associate this service with something more familiar, Elastic Beanstalk is AWS’s own Heroku.

As always, you can read more about Elastic Beanstalk on the official AWS website. 

Moving to the next service, AWS QuickStart allows you to quickly deploy applications in the cloud by using existing CloudFormation templates built by experts. Let’s say you want to deploy a WordPress blog on AWS. You can go to AWS QuickStart and use a template that does just that so you do not have to build it yourself. 

When it comes to AWS Marketplace, Amazon describes this service perfectly, which does not leave much for me to explain: “AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.” You could use AWS Marketplace to buy a pre-configured EC2 instance for your WordPress blog. 

The last provisioning service is OpsWorks. OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. It gives you the ability to use code to automate the configuration of your servers. You can find more OpsWorks information in the official documentation.


One important area we need to cover is logging. If your services go down, you surely want to know why that happened. Thankfully, AWS does a great job, and it provides two logging services:

    1. AWS CloudTrail => CloudTrail is a service that monitors all the API calls within the AWS platform. It is useful to discover who did what. For instance, we can use this service to find out who stopped an EC2 instance, or who created a new S3 bucket. 
    2. AWS CloudWatch => CloudWatch is a service that monitors the resources and applications on AWS and on-premises. It can monitor things like CPU, memory and network utilization, for example. 
      • You can use CloudWatch to detect abnormal behaviour in your environments, set alarms, visualize logs and metrics, take automated actions, troubleshoot issues, and discover insights about your application to keep them running smoothly. 


This section is of essential importance. Why is that? First of all, you do not want to incur unwanted charges (which is very easy to do with AWS), and secondly, it is an important part of the exam.

When using AWS, you pay only for the services you are using; once you stop using them, there are no further fees. No contract is required, although you can enter into one (with reserved instances, for example). 


You must remember the AWS paying principles. These are as follows:

  • You pay as you go (reduces the risks of under-provisioning or over-provisioning)
  • You pay less when you reserve
  • You pay even less per unit by using more services/resources
  • You pay even less as AWS grows

Also, on AWS you pay for:

  • Compute capacity
  • Storage 
  • Outbound data

You never pay for inbound data. AWS is clever; to attract you to their services, they do not charge you for migrating your data to their services. However, they charge you when you transfer data out from their cloud.

You must remember the above information. Two other important terms you should know are CAPEX and OPEX. CAPEX stands for Capital Expenditure, and it means paying upfront; it is a fixed cost. OPEX stands for Operational Expenditure, and it means paying only for what you use. You can think of OPEX like a utility bill (an electricity bill, for example).
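A toy numerical sketch of the CAPEX versus OPEX difference (all figures are hypothetical, chosen only to illustrate the shape of each cost):

```python
# CAPEX: pay upfront for servers sized for peak load (fixed cost).
capex_servers = 10
capex_cost_per_server = 5000
capex_total = capex_servers * capex_cost_per_server   # paid on day one

# OPEX: pay per hour, only for the capacity actually used.
hourly_rate = 0.50          # hypothetical rate per server-hour
hours_used = 4 * 730        # 4 servers on average, for one month (~730 h)
opex_month = hourly_rate * hours_used

print(f"CAPEX upfront: ${capex_total}, OPEX this month: ${opex_month}")
```

The CAPEX figure is sunk whether the servers are busy or idle; the OPEX figure shrinks automatically when average usage drops.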


There are four key pricing policies. These are:

  1. Understand the fundamentals of pricing
    • Basically, this is what we have talked about above. As we have already seen, in AWS we pay for compute capacity, for storage, and for outbound data transfer. 
  2. Start early with cost optimization
    • This policy says to put your cost controls in place before your environments grow massive.
    • Managing costs effectively from the beginning ensures that managing cloud investments does not become an obstruction as you grow and scale.
  3. Maximize the power of flexibility
    • Because you pay for something only as you need it, you can focus on the environment itself rather than the infrastructure.
    • You maximize the power of flexibility by using your environment only when you need it.
    • A key advantage is that you do not pay for resources when they are not running, which enables you to be cost-efficient while still having all the power you need when workloads are active. 
  4. Use the right pricing model for the job
    • AWS offers several pricing models depending on the product. The pricing models are as follows:
      • On-demand
      • Dedicated Instances
      • Spot Instances
      • Reserved Instances

These are the key pricing policies, folks. As with everything in this article, none of it is my own invention; it is all available in the AWS documentation, where you can read more about the key pricing policies. 


Now, let’s ease in with the free services from AWS. The free AWS services are as follows: 

  • OpsWorks
  • IAM
  • Organizations & Consolidated Billing
  • VPC
  • Elastic Beanstalk 
  • CloudFormation 
  • Auto Scaling
  • AWS Cost Explorer 

However, there is a catch: the services themselves are free, but the resources they create are NOT. Even though CloudFormation is free, for example, you pay for the EC2 instances and everything else it provisions. Always be aware of this fact. 


There are currently four support plans with different features. The AWS support plans are Basic, Developer, Business, and Enterprise. Let’s see how they differ and what they offer.


This is the most basic support plan, with actually no support (huh). This plan could be used for testing AWS or very small applications.

  • Cost: Free
  • Tech support: None. You have to use only forums such as the AWS forum.
  • Who opens cases: Nobody. 
  • Case severity/response times: None, as you cannot open cases.
  • Technical Account Manager: No.


With the developer support plan, things get better. We have more benefits, which means that this service is paid.

  • Cost: $20/month
  • Tech support: Business hours via email
  • Who opens cases: One person only. Can open unlimited cases. 
  • Case severity/response times
    • General guidance in less than 24 business hours
    • System impaired in less than 12 business hours 
  • Technical Account Manager: No.

This service is better than the basic plan. 


This support plan is even better. 

  • Cost: $100/month
  • Tech support: 24/7  email & chat & phone
  • Who opens cases: Unlimited persons/unlimited cases
  • Case severity/response times
    • General guidance in less than 24 business hours
    • System impaired in less than 12 business hours 
    • Production system down in less than 1 hour
  • Technical Account Manager: No.

The response times are very good with this support plan. If your production system is down, you get an answer in less than 1 hour. That is admirable.


This plan is the best support plan. However, it comes with a hefty price tag.

  • Cost: $15,000/month
  • Tech support: 24/7  email & chat & phone
  • Who opens cases: Unlimited persons/unlimited cases
  • Case severity/response times
    • General guidance in less than 24 business hours
    • System impaired in less than 12 business hours 
    • Production system down in less than 1 hour
    • Business-critical system down in less than 15 minutes 
  • Technical Account Manager: Yes

The key point with this support plan is that you get a Technical Account Manager (TAM) – a person from Amazon who deals with your account exclusively. 

The main takeaway from the AWS support plans is to remember the case severities and response times. Also, remember which support plan comes with a Technical Account Manager (not very hard). In the exam, you get a scenario, and you have to choose a support plan.
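Since exam questions ask for the cheapest plan that meets a scenario, the response-time tables above can be sketched as a simple cheapest-first lookup (prices and times taken from the lists in this section):

```python
# Support plans ordered cheapest-first; "response" is the tightest
# response time (in hours) each plan guarantees, per the lists above.
plans = [
    ("Basic",      0,     None),   # no case support at all
    ("Developer",  20,    12),     # system impaired < 12 business hours
    ("Business",   100,   1),      # production system down < 1 hour
    ("Enterprise", 15000, 0.25),   # business-critical down < 15 minutes
]

def cheapest_plan(max_response_hours):
    """Return the cheapest plan meeting the required response time."""
    for name, cost, response in plans:
        if response is not None and response <= max_response_hours:
            return name
    return None

print(cheapest_plan(1))  # need a production-down case answered within 1 hour
```

For a scenario demanding a one-hour response to a production outage, Business is the answer: Developer is too slow and Enterprise is more than you need.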


Looking at the official AWS Marketplace page, we can see that “AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.”

What does that mean? It means you can go to the marketplace and buy a configured WordPress blog that runs on AWS, for example. You can buy stuff like CloudFormation templates, Amazon Machine Images, AWS Web Application Firewall rules, and so on.

Be aware that the service might be free, but it also might have associated charges. The charges are deducted from your account, and then AWS pays the vendor. 


AWS offers you the possibility of creating a paying account, where you can consolidate the bills from all your AWS accounts. In simpler terms, you can pay all your bills from one account. 

One thing to note is that the paying account is independent from all other accounts, and it cannot access the resources of the other accounts. 

What are the benefits of this service? The benefits are as follows: 

  • One bill for all your accounts
  • Very easy to track charges and allocate costs
  • Volume pricing discount (the more you use, the less you are paying)
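The volume pricing discount is easiest to see with numbers. Here is a sketch with a made-up two-tier price (the tier boundary and rates are hypothetical, not real AWS prices):

```python
# Hypothetical tiered pricing: the first 50 TB costs $0.023/GB per month,
# anything beyond that costs $0.021/GB.
TIER_LIMIT_GB = 50_000

def monthly_cost(usage_gb):
    first = min(usage_gb, TIER_LIMIT_GB) * 0.023
    rest = max(usage_gb - TIER_LIMIT_GB, 0) * 0.021
    return first + rest

# Three accounts billed separately never reach the cheaper tier...
separate = sum(monthly_cost(gb) for gb in (30_000, 25_000, 20_000))
# ...but their consolidated usage does.
consolidated = monthly_cost(30_000 + 25_000 + 20_000)

print(f"Separate: ${separate:.2f}, Consolidated: ${consolidated:.2f}")
```

Because usage is pooled before the tiers are applied, the consolidated bill comes out lower than the sum of the separate bills.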

In this section, we are going to talk about two services called AWS Budgets, and AWS Cost Explorer respectively. What are they?

AWS Budgets provides you with the ability to create custom budgets that alert you when you are close to exceeding the set limit, or when you actually exceed it. An easy way to remember AWS Budgets is to think of it this way (props to ACloudGuru for this):

  • AWS Budgets is used to explore costs BEFORE they have been incurred. 

AWS Cost Explorer is a service that allows you to visualize and manage your AWS costs over time. An easy way to remember this is to think this way:

  • AWS Cost Explorer is used to explore costs AFTER they have been incurred.

First of all, TCO stands for Total Cost of Ownership. This service is straightforward: all the calculator does is compare the cost of running your infrastructure on the AWS cloud versus running it on-premises.  

All you need to remember is that it shows how much you could save by moving from on-premise to the AWS cloud. It is important to note that it is an approximation only, and the costs can vary in reality. 


You might ask what tags are. Tags are just key-value pairs that can be attached to AWS resources. They are metadata (data about the data), and they can be inherited. Tags can include specific information such as EC2 public & private addresses, ELB port configuration, or RDS database engines. 

What are resource groups then? Resource groups make it easier for you to group your resources using tags assigned to them. Resource groups contain information such as the region, name, employee id, or department. 

In simpler words, tags and resource groups give you the ability to organize your resources. 
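Tags as key-value metadata can be pictured like this; the resource IDs and tag names below are invented for illustration:

```python
# Each resource carries key-value tags; a resource group is effectively
# a saved query over those tags.
resources = [
    {"id": "i-001", "tags": {"Department": "Engineering", "Env": "prod"}},
    {"id": "i-002", "tags": {"Department": "HR",          "Env": "prod"}},
    {"id": "i-003", "tags": {"Department": "Engineering", "Env": "dev"}},
]

def resource_group(resources, **wanted_tags):
    """Return IDs of resources whose tags match all given key-value pairs."""
    return [r["id"] for r in resources
            if all(r["tags"].get(k) == v for k, v in wanted_tags.items())]

print(resource_group(resources, Department="Engineering"))
```

Querying by Department="Engineering" picks out i-001 and i-003 – exactly the kind of grouping (by department, environment, or cost centre) that resource groups give you.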


One last step is to look at what determines prices for different services such as EC2, Lambda, S3, and others. This section is going to be only bullet points, so it is not going to take you long.


EC2 pricing is determined by:

  • Clock hours of server time
  • Instance type
  • Number of instances
  • Load Balancing
  • Detailed Monitoring
  • Auto Scaling
  • Elastic IP Addresses
  • Operating Systems and Software Packages


Lambda pricing is determined by:

  • Compute Time (duration)
  • Number of Invocations (requests)
  • Additional charges if it uses other AWS services or transfers data
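To see how these factors combine, here is a worked example. The rates and the helper name are illustrative only; check the AWS pricing page for current values:

```python
# Worked example of the Lambda pricing factors above. Rates are
# illustrative, not guaranteed to match current AWS pricing.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # e.g. $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667     # e.g. per GB-second of compute

def lambda_monthly_cost(invocations, avg_duration_s, memory_gb):
    """Requests charge plus compute (GB-seconds) charge."""
    gb_seconds = invocations * avg_duration_s * memory_gb
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 3M invocations/month, 200 ms average duration, 512 MB of memory:
print(round(lambda_monthly_cost(3_000_000, 0.2, 0.5), 2))
```

Note how the compute charge (duration times memory) dominates the per-request charge here, which is typical for Lambda bills.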


EBS pricing is determined by:

  • Volumes (per GB)
  • Snapshots (per GB)
  • Data Transfer


S3 pricing is determined by:

  • Storage Class
  • Amount of data stored 
  • Number of Requests
  • Type of Requests
  • Data Transfer


Glacier pricing is determined by:

  • Amount of Data Stored
  • Data Retrieval Time


Snowball pricing is determined by:

  • Service Fee Per Job (50 TB – $200, 80 TB – $250)
  • Daily Charge (10 days free, then $15 per day)
  • Data Transfer (data transfer into AWS is free, data out is charged)
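Using the numbers quoted above, a sample job cost can be worked out like this (the helper function is invented for illustration):

```python
# Worked example with the Snowball figures quoted above: a 50 TB job
# kept for 14 days, i.e. 10 free days plus 4 billed days at $15/day.

def snowball_job_cost(service_fee, days_kept, free_days=10, daily_rate=15.0):
    """Service fee plus the daily charge for days beyond the free period."""
    billed_days = max(0, days_kept - free_days)
    return service_fee + billed_days * daily_rate

print(snowball_job_cost(200.0, 14))  # $200 + 4 * $15 = $260
```

Data transferred into AWS adds nothing; only data transferred out of AWS would be charged on top.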


CloudFront pricing is determined by:

  • Number of Requests
  • Outbound Data
  • Traffic Distribution


DynamoDB pricing is determined by:

  • Number of Writes
  • Number of Reads
  • Indexed Data Storage


RDS pricing is determined by:

  • Clock Hours of Server Time
  • Database Characteristics
  • Database Purchase Type
  • Number of Database Instances
  • Provisioned Storage
  • Additional Storage
  • Number of Requests
  • Deployment Type
  • Data Transfer


This section is the last one for this certification. Quite a lot, huh? Luckily, this section is not that long. 


We start this section with the SRM (Shared Responsibility Model). The Shared Responsibility Model states that AWS is responsible for the security of the cloud, whereas the customers are responsible for the security in the cloud.

What do they mean by “security of the cloud“? They say that AWS is responsible for the infrastructure on which the services run. The infrastructure includes the physical servers, the location where they are stored, the networking and the facilities that run the AWS cloud services.

What do they mean by “security in the cloud“? They say that the customers are responsible for things like patching their EC2 instances, protecting their customer data, making sure they are compliant with various legislations, using IAM (Identity Access Management) tools and so on. Shortly, the customers’ responsibilities are determined by the AWS service they are using. You are directly responsible for the data you put on AWS, and for enabling monitoring tools. 

AWS Shared Responsibility Model

That is it for the Shared Responsibility Model. The figure illustrates the shared responsibilities between the customers and AWS.


First of all, let’s define what compliance programs are. Compliance programs are sets of internal policies and procedures that companies follow to comply with laws and regulations. For example, if you are a hospital that uses AWS services, you must be HIPAA compliant. Another example is when you process card payments; you need to be PCI DSS compliant. To make sure that is the case, we have AWS Artifact.

AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. AWS Artifact allows you to find, accept and manage agreements with AWS for either an individual account or for all accounts that are part of your organization. It also allows you to terminate any agreement previously accepted if it is no longer required. (link here)

Click here to see the full range of compliance programs. 

That’s all about AWS Artifact. 


This section is all about security in the cloud. Here comes AWS Inspector, which is an automated security assessment service that assesses your applications deployed on AWS to improve their security and compliance. 

What do they mean by “assesses”? AWS Inspector checks your applications to see if they deviate from the existing best practices and if they have any security vulnerabilities. Once the assessment is done, it will generate a report with all the findings organized based on the level of severity. 

Its goal is to eliminate as many security vulnerabilities as possible. For more information on this service, head to this link


I bet you have all heard about web attacks such as SQL injection, Cross-Site Scripting (XSS), sensitive data exposure and many more. As the name implies, the purpose of the AWS WAF (Web Application Firewall) service is to protect your applications from common web exploits such as the ones mentioned earlier. 

This service provides you with the ability to filter the traffic based on the contents of the HTTP requests. That is, you can DENY or ALLOW traffic to your application based on what the incoming HTTP requests contain. You can also use an existing ruleset from the AWS WAF Rules marketplace. 

AWS WAF can be attached to CloudFront, to your Application Load Balancer or the Amazon API Gateway. 

The pricing of AWS WAF depends on how many rules you deploy and the number of requests your applications receive.
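The filtering idea can be sketched in a few lines. Note that real WAF rules are configured in AWS (or taken from the marketplace), not written as Python, and the patterns and function names below are deliberately naive inventions:

```python
# Toy version of WAF-style filtering: inspect the contents of an HTTP
# request and ALLOW or DENY it. The patterns are simplistic examples,
# nowhere near what real managed rule sets cover.

import re

DENY_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),  # naive SQL-injection check
    re.compile(r"(?i)<script\b"),           # naive XSS check
]

def waf_decision(request_body):
    """Return 'DENY' if any rule matches the request body, else 'ALLOW'."""
    for pattern in DENY_PATTERNS:
        if pattern.search(request_body):
            return "DENY"
    return "ALLOW"

print(waf_decision("name=alice"))                        # ALLOW
print(waf_decision("q=1 UNION SELECT password FROM t"))  # DENY
```

The key takeaway for the exam is simply that WAF inspects the contents of HTTP requests and applies ALLOW/DENY rules.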


AWS WAF does not protect your applications from all attacks and exploits, though. The applications need protection from Distributed Denial-of-Service (DDoS) attacks as well. What is that? A DDoS attack is an attempt to make an application unresponsive by sending an enormous number of requests. That is, the server will most likely not be able to serve all the requests because there are too many, and the application crashes. As a result, users are not able to access the application anymore.

This is where AWS Shield comes to the rescue. AWS Shield is a security service that safeguards the applications deployed on AWS. It is always active, and it continuously monitors the applications. Its purpose is to minimize downtime and latency. Basically, AWS Shield protects your application from DDoS attacks. You are using AWS Shield by default when you route your traffic through Route53 or CloudFront.

This service, AWS Shield, comes in two flavours – basic and advanced. The basic version is free and enabled by default. The advanced version will cost you $3,000 per month. However, it is worth the money. The reason is that you are NOT charged for the extra usage incurred during a DDoS attack. It does not matter if your resources were maxed out during the attack; you will not pay anything extra. With the basic version, that is not the case, and a DDoS attack can cause significant charges. 

AWS Shield protects an application against three layers of attack:

  • Layer 3 => The Network Layer
  • Layer 4 => The Transport Layer
  • Layer 7 => The Application Layer

For more information, it is worth having a look at the official documentation.


Another security service? You might ask. Well, yes. AWS GuardDuty is a threat detection service that continuously monitors the applications deployed on AWS for malicious and suspicious activity and unauthorized behaviour. 

This service uses machine learning, anomaly detection and integrated threat intelligence to scan CloudTrail, VPC, and DNS logs. If it finds issues, it will automatically alert you.

That’s all that is. As per usual, if you want, you can read more about this service on their website


AWS Macie is a security service that exclusively scans S3 buckets using Machine Learning and Natural Language Processing to discover, classify and protect sensitive information. The confidential information refers to data like credit card details, for example. 

Once it finds anomalies, it generates detailed alerts for you to see. You can read about Macie on the AWS website


AWS Athena is a tool that allows you to query the data stored in S3 buckets using SQL. It is a serverless service, which means that you do not have to provision anything. There is also no need for you to set up complex Extract/Transform/Load (ETL) processes.

With AWS Athena, you pay per query, based on the amount of data scanned (per TB). If you want to learn more about querying your S3 data, or you just want to read more about Athena, you can do so by clicking this link.


Security groups act as a firewall at the instance level, and they implicitly deny all traffic. You can create allow rules to permit traffic to your EC2 instances. For example, you can enable HTTP traffic to your EC2 instances through port 80 by adding a specific rule.

The NACLs (Network Access Control Lists) act as a firewall at the subnet level. You can create ALLOW and DENY rules for the subnets. What does that mean? For example, you could restrict access to a specific IP address known for abuse. 
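The behavioural difference can be sketched like this. The rule shapes and function names are heavily simplified inventions (real NACL rules are evaluated in rule-number order):

```python
# Sketch of the difference: a security group only holds ALLOW rules and
# implicitly denies everything else, while a NACL can hold explicit
# ALLOW and DENY rules.

def security_group_allows(port, allowed_ports):
    """Instance level: allowed only if an explicit allow rule exists;
    everything else is implicitly denied."""
    return port in allowed_ports

def nacl_allows(ip, denied_ips, allowed_ips):
    """Subnet level: an explicit DENY rule blocks traffic outright."""
    if ip in denied_ips:
        return False
    return ip in allowed_ips

print(security_group_allows(80, {80, 443}))  # True: HTTP allow rule exists
print(security_group_allows(22, {80, 443}))  # False: implicit deny
print(nacl_allows("203.0.113.9", {"203.0.113.9"}, {"198.51.100.7"}))  # False
```

The exam-relevant point: security groups are allow-only at the instance level; NACLs support explicit DENY at the subnet level.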


AWS CLOUDFRONT is Amazon’s Content Delivery Network (CDN). A CDN is just a system of distributed servers around the world that serves web content to the users based on their geographical location, and the webpage origin.

  • Origin => This represents the origin of all the files that the CDN distributes. The origin can be an S3 bucket, EC2, Elastic Load Balancer or Route53.
  • Distribution => The name of the CDN that consists of a collection of edge locations.
  • Edge locations are already explained, but I will explain them again. An edge location is a location where the content is cached.
  • A file is cached for a period specified by the TTL (time-to-live) (usually 48 hours). You can clear the cached objects, but you will be charged.
  • There are two types of CloudFront distributions:
    • Web distributions – for websites
    • RTMP distributions – for media streaming 
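The TTL idea behind edge-location caching can be sketched as follows (the 48-hour figure is the default mentioned above; the function name is invented):

```python
# Toy edge-location cache check to illustrate TTL: an object is served
# from cache until its time-to-live expires. All times are in seconds.

TTL_SECONDS = 48 * 3600  # the 48-hour default mentioned above

def is_cached(cached_at, now, ttl=TTL_SECONDS):
    """True while the cached copy is still within its time-to-live."""
    return (now - cached_at) < ttl

print(is_cached(cached_at=0, now=24 * 3600))  # True: 24h old, still fresh
print(is_cached(cached_at=0, now=50 * 3600))  # False: past the 48h TTL
```

Once the TTL expires, the edge location fetches a fresh copy from the origin; clearing objects early (invalidation) is what incurs the extra charge.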

AWS ELB (ELASTIC LOAD BALANCER) is used to balance the traffic between your resources. For instance, if one EC2 instance is down, the traffic is redirected to a healthy one. The same happens if one of your resources is overloaded with traffic. That means your application is always available to users, instead of being “down”. (Launching new EC2 instances in response to load is the job of Auto Scaling, not of the load balancer itself.) There are three types of load balancers:

  • Classic Load Balancer, which is being phased out. It is useful for dev/test environments.
  • Application Load Balancer
  • Network Load Balancer 

The critical difference between these Elastic Load Balancers is that the Application Load Balancer can “look” into your code, and make decisions based on that. In contrast, the Network Load Balancer is used when you need extremely high performance and static IP addresses.
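A minimal sketch of what a load balancer does, assuming simple round-robin over healthy instances (real ELBs support several routing algorithms; the names here are invented):

```python
# Round-robin sketch of load balancing: spread requests across healthy
# instances and skip unhealthy ones entirely.

from itertools import cycle

def route_requests(instances, health, n_requests):
    """Assign each request to the next healthy instance, round-robin."""
    healthy = [i for i in instances if health[i]]
    rr = cycle(healthy)
    return [next(rr) for _ in range(n_requests)]

health = {"i-a": True, "i-b": False, "i-c": True}  # i-b is down
print(route_requests(["i-a", "i-b", "i-c"], health, 4))
# ['i-a', 'i-c', 'i-a', 'i-c'] — i-b never receives traffic
```

This is the availability point from the paragraph above: traffic keeps flowing to the surviving instances, so users never notice the failed one.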

AWS TRUSTED ADVISOR is a tool that allows users to reduce costs, increase performance and improve security by implementing its recommendations. That is, the Trusted Advisor advises the users on cost optimization, performance, security and fault tolerance. It also ensures the users are following AWS best practices by providing real-time guidance. 

There are two tiers of Trusted Advisor: free and business/enterprise. With the free tier, you get seven Trusted Advisor checks, whereas with the business/enterprise tier, you get all Trusted Advisor checks. 

AWS CONFIG provides a detailed view of the configuration of your AWS resources. That is, it shows how the services are related to each other, what their configurations were in the past, and how those configurations changed over time. It allows you to compare the settings at different points in time. 

AWS STORAGE GATEWAY is hybrid cloud storage with local caching that allows your on-premises applications to access and use the AWS cloud. This service can be used to reduce on-premises storage with cloud-backed file shares, to provide low-latency access to data in AWS for on-premises applications, and for migration, archiving, processing and disaster recovery. 

AWS VPN gives you the ability to create a secure and private connection to your AWS network. There are two types of VPNs:

  1. AWS Site-to-Site VPN => Allows you to connect your on-premises network to the AWS cloud
  2. AWS Client VPN => Allows you to connect your machine (e.g. a user) to the AWS cloud. 

AWS QUICKSTART is a service that allows you to quickly deploy environments into the cloud by using existing CloudFormation templates built by experts. Most Quickstart reference deployments give you the ability to deploy a whole infrastructure in less than an hour. 

AWS LANDING ZONE is a tool that allows you to quickly set up a secure, multi-account AWS environment based on AWS best practices. 

AWS EBS (ELASTIC BLOCK STORE) is just a virtual hard drive that gets attached to your EC2 instances. Once an EBS volume is attached to an EC2 instance, you can use it in any way you would use an HDD. The EC2 instance needs to be in the same Availability Zone as the EBS volume. EBS comes in two flavours: SSD and Magnetic.

AWS SYSTEMS MANAGER is a tool that allows you to manage your EC2 instances at scale. That is, if you have multiple EC2 instances, you can manage all of them in one go, instead of logging into each one. A scenario would be when you want to update them: instead of logging into each instance individually, you can update them all at once. It is one handy service. 


These services are global, not tied to a single region:

  • IAM
  • Route53
  • CloudFront
  • Some services give global views but are regional:
    • S3


These services can be used on-premises (hybrid):

  • Snowball => A gigantic disk delivered to your office for bulk data transfer
  • Snowball Edge => A Snowball with on-board compute. Allows you to run Lambda on-premises (where you cannot use AWS online services but still need them)
  • Storage Gateway => Physical or virtual (a way of caching your files)
  • CodeDeploy => Deploy code to your on-premise servers
  • OpsWorks 
  • IoT Greengrass
  • Deploy applications on premise:
    • CodeDeploy
    • OpsWorks


That is all folks. Congratulations on making it to the end! I started this article thinking that I would only touch on subjects, and provide something like review points. However, as I kept on writing, I began to consult other sources of information such as AWS documentation and AWS white papers. Therefore, you can be assured that the information from this article is detailed, up-to-date, and insightful. If I had added labs as well, I think this article would be worth paying money for. If you read everything and understood what you read, you are most likely to pass your exam. 

It took me a fortnight to put all the information together and write the article. Thus, if you found it of any help, all I ask is to share it with other people. If it helped you, it would most likely help other people too. Sharing is caring!

Do not forget to go over the other courses mentioned in the Introduction. Read all the links embedded in the article as well. If you do all this, it is impossible to fail the exam. Good luck! 


