SAP-C02 Amazon AWS Certified Solutions Architect Professional – Exam Preparation Guide Part 5


5. Exam Preparation – Domain 4

Hey everyone and welcome back. In today’s video we will be discussing the important pointers for the exam for Domain 4. So the first thing here is that you should know about tagging strategies, about resource groups, about the EC2 pricing models, then an overview of the S3 storage classes, EC2 tenancy attributes, as well as consolidated billing. So let’s discuss. The first point here is about the tagging strategy. It is really recommended that whatever resources you create, those resources should have appropriate tags. All right? There are various benefits of having tags.

Not only do they help you during the billing aspects, they also help you with overall access control as well as automation. So when you have tags associated with, let’s say, EC2 instances, at the end of the month you can calculate the total cost of a specific EC2 instance based on tags. So tags prove to be very important here. The next important part is the EC2 pricing model. You should know when to use On-Demand, when to use Spot, when to use Reserved, as well as when you should use a Dedicated Instance or a Dedicated Host. So basically, when there is a mention within the exam question of a steady workload, then Reserved Instances are the right choice.
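
As a quick illustration, here is a minimal sketch of applying cost allocation tags to an EC2 instance with boto3. The instance ID and tag values are placeholders, and note that cost allocation tags still need to be activated in the Billing console before they show up in cost reports:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical instance ID and tag values, for illustration only.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Project", "Value": "payments"},
        {"Key": "Environment", "Value": "production"},
        {"Key": "CostCenter", "Value": "cc-1234"},
    ],
)
```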

So let’s say that you have an application which will be running continuously, let’s say for a year. That is an example of a steady workload, so Reserved Instances are the best choice there. If you need a workload for only a few hours, then On-Demand is the right choice. And if a workload is needed where the data is part of something like an SQS queue, so that you need a huge amount of compute for a small period of time, Spot Instances can be the right choice.

So do remember that if the exam question states that the application is fine with unexpected start and stop times, then Spot Instances are the right fit. Basically, if you have data within the SQS queue which will be processed by the application, and suddenly the EC2 instance gets terminated, then whatever new EC2 instance gets launched can again take that data from the SQS queue and process it. So for such kinds of use cases, a Spot Instance can be a good option, provided cost is a consideration there.
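
To make this concrete, here is a minimal sketch of the interruption-tolerant worker pattern described above, assuming a hypothetical queue URL and a placeholder processing function. Because a message only disappears from the queue once it is explicitly deleted after processing, a replacement Spot instance can pick up any work a terminated instance never finished:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

def process(body: str) -> None:
    """Placeholder for the actual work each message represents."""
    print("processing:", body)

while True:
    # Long polling (20s) cuts down on empty responses and API cost.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after successful processing; if the Spot instance is
        # interrupted mid-task, the message becomes visible again for another worker.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```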

Now, for an organization which needs hardware isolation to make sure no other company is running on the same physical host, a Dedicated Instance is the right choice. Along with that, you should have a good understanding of Reserved Instances. Do make sure that you know you can switch between Availability Zones within the same region. You can also change the instance size within the same instance family.

So here, before you do that, make sure you understand the normalization factor. Now, instance size modification is only supported for Linux, and not for Red Hat or SUSE. You also have RDS Reserved Instances. For RDS Reserved Instances you need to specify the region, the DB engine, the DB instance class, the deployment type, and the term length. All right, so all of these factors need to be taken care of and selected before you opt for the RDS Reserved Instance, and they cannot be changed later. Now, when you are speaking about Reserved Instances, make sure you also know the Reserved Instance types: you have Standard, you have Convertible, you even have Scheduled. Also make sure you understand capacity reservations as part of Reserved Instances; we have already discussed that in the Reserved Instance video. Generally, if you talk about the older exam blueprint (not the February 2019 new exam blueprint), understanding Reserved Instances was a key factor, and there were a lot of questions on normalization factors, the Reserved Instance types, and others. In this exam you will still find some, but not as extensively as in the previous exam blueprint.
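
Since normalization factors come up repeatedly, here is a tiny sketch of the arithmetic: each size within an instance family maps to a fixed number of units (per the AWS documentation), and a size-flexible Reserved Instance simply covers that many units:

```python
# Normalization factors per AWS documentation (subset shown).
FACTORS = {"nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
           "large": 4, "xlarge": 8, "2xlarge": 16}

def units(instance_type: str) -> float:
    """Return normalization units for a type string such as 't3.large'."""
    _, size = instance_type.split(".")
    return FACTORS[size]

# A t3.large RI (4 units) fully covers two t3.medium instances (2 units each).
print(units("t3.large") == 2 * units("t3.medium"))  # True
```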

So just make sure you understand this before you sit for the exam. The next thing is consolidated billing. Make sure that you know it can be enabled via AWS Organizations. Now, the paying account is independent and cannot access the resources of the linked accounts. So let’s say you have AWS Organizations and you have linked ten accounts. Whichever the paying account is, although it receives the billing-related aspects, it cannot access the resources of the linked accounts. You will also get volume discounts across all the resources.

Also, Reserved Instances which are unused can be applied across the group of AWS accounts. So let’s say that you have bought Reserved Instance capacity for m4.large in AWS account one, and you have an m4.large instance running in AWS account three, but no m4.large instance running in AWS account one. If both of these accounts are linked with AWS Organizations, then that Reserved Instance discount can be applied there. Now, you should also know about resource groups.

Resource groups basically allow us to see all the resources which have similar tags across all the regions within your AWS accounts. You should also know that you can set alarms for when you exceed an actual or forecasted budget. This is generally achieved with the help of AWS Budgets. Speaking about cost management, you have AWS Budgets, which basically allows you to set a custom budget. So let’s say I have a budget of $200; you get an alert when your actual cost or forecasted cost exceeds $200. So this proves to be quite useful.
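
A minimal sketch of creating such a budget with boto3, assuming placeholder account ID, budget name, and email address; this alerts when the forecasted cost crosses 100% of a $200 monthly limit:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # hypothetical account ID
    Budget={
        "BudgetName": "monthly-200-usd",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "FORECASTED",   # or ACTUAL
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 100.0,                  # percent of the limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "team@example.com"}],
    }],
)
```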

Along with that, be familiar with the terms ROI, which is Return on Investment, and TCO, which is Total Cost of Ownership. These are generally part of the Cloud Practitioner exam, but you might find these terms discussed within the exams. Now, along with that, be aware of the EC2 tenancy attributes: you have shared, you have dedicated, and you have host. Shared basically means EC2 instances run on shared hardware, so there can be multiple customers’ EC2 instances running on the same physical host.

Then you have dedicated. A Dedicated Instance is basically an EC2 instance that runs on hardware which will only be shared with instances from the same AWS account. And then you have the Dedicated Host, which means instances run on a dedicated physical host, and this provides a granular level of hardware access to customers.

Now, along with that, be aware of the S3 storage classes. There are multiple storage classes: you have Standard, you have Intelligent-Tiering, you have Standard-IA, One Zone-IA, Glacier, Glacier Deep Archive, and Reduced Redundancy. Again, these are actually part of the Cloud Practitioner certification itself. But from a cost allocation perspective, it is important for you to understand them, because when you store your data in Standard and your data is not really being accessed, you can move it to Glacier, which can reduce the overall cost. So from that perspective, understanding the storage classes matters.
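
For instance, here is a minimal sketch of a lifecycle rule that transitions objects under a hypothetical logs/ prefix to Glacier after 90 days (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            # Move rarely accessed objects to Glacier to cut storage cost.
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```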

6. Exam Preparation – Domain 5

Hey everyone, and welcome back to the important pointers for exams. In today’s video, our focus will be on Domain 5. Domain 5 is continuous improvement for existing solutions. Now, if you look into the subsections there, in total you have six subsections within the domain. This domain is essentially the second largest domain of the SA Pro certification, and it constitutes 29% of the examination. So let’s go ahead and get started. Now, the very first subsection, if you look here, is troubleshooting solution architectures. During troubleshooting, logs become one of the most important things, and this is the reason why within this specific slide we have logs related to S3. You have ELB, you even have CloudTrail, which proves to be very important during troubleshooting. You have VPC Flow Logs, you have CloudWatch Logs, as well as AWS Config. All of these help a lot, specifically in the troubleshooting area.

So let’s start with the first one, which is the S3 server access log. The S3 server access log basically provides records of the requests that are made to a specific bucket. This typically proves to be useful if you want to troubleshoot certain access issues and also want to monitor the requests that might be coming to your S3 bucket. So let’s say that you have an S3 bucket which has a lot of sensitive data. If you want to monitor what requests are coming in, then the S3 server access log can prove to be important there. Second is the ELB access log. The ELB access log basically captures detailed information about requests being made to a specific Elastic Load Balancer. This also helps a lot, typically, if you want to troubleshoot as well as analyze access patterns. So let’s say that you have an ELB in a public subnet and you want to see from which part of the world the requests are coming.

So if you want to capture the source of the requests, you can enable the ELB access log and then build various geolocation-based dashboards out of that. Third is CloudTrail. CloudTrail, as we know, provides the history of the API calls made to the AWS account. So if you want to audit who did what, when, and from where, CloudTrail is the best choice there. Then you have VPC Flow Logs, which basically capture information from the network interfaces. Again, this helps a lot when troubleshooting connectivity aspects, and it plays a very important role in the security area. Then you have AWS CloudWatch Logs, where you can go ahead and forward your system logs and your application logs, and this can also prove to be important at troubleshooting time.
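
Looping back to the first source in this list, here is a minimal sketch of turning on S3 server access logging with boto3. The bucket names are placeholders, and the target bucket must already grant the S3 log delivery service permission to write to it:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="sensitive-data-bucket",           # hypothetical source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",  # hypothetical log destination
            "TargetPrefix": "access-logs/sensitive-data-bucket/",
        }
    },
)
```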

The last one is AWS Config. AWS Config records the changes made to AWS resources, and it can also monitor them for compliance. So it helps a lot during security reviews as well as while troubleshooting outages. Let’s say that tonight everything is working perfectly, and tomorrow morning suddenly there is a big outage. You want to see if any specific change was made to a resource, like an EC2 instance; AWS Config is one of the quickest ways in which you will be able to find that out. The next important pointer here is Trusted Advisor. Trusted Advisor analyzes your environment and provides best practice recommendations based on various categories: you have cost optimization, performance, security, fault tolerance, as well as service limits. So this is also one of the great services. Now, as far as S3 is concerned, you can definitely improve things with the help of encryption. There are three ways in which you can encrypt data in S3. One is server-side encryption with S3-managed keys, which is also referred to as SSE-S3.

You can also encrypt it with server-side encryption with AWS KMS managed keys, which is SSE-KMS. And the last one is server-side encryption with customer-provided keys, which is also referred to as SSE-C, where you can bring your own keys. Now, the next important part is CloudWatch Logs. We already know that we can send various system data, like system logs and application logs, to CloudWatch Logs. So you should know the steps which are required. First, you need to attach an appropriate IAM role to the EC2 instance so it is able to push the logs to CloudWatch Logs. This is important because if you do not have this and you have the agent installed, the agent will not be able to push anything. The second step is to install the CloudWatch agent, and the third is to set up the appropriate configuration and start your CloudWatch agent.
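
For reference, here is a minimal sketch of the kind of inline policy that first step implies, attached with boto3 to a hypothetical role name. The CloudWatch agent needs at least these logs permissions to push data:

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogStreams",
        ],
        "Resource": "arn:aws:logs:*:*:*",
    }],
}

iam.put_role_policy(
    RoleName="ec2-cloudwatch-agent-role",  # hypothetical role attached to the instance
    PolicyName="cloudwatch-logs-push",
    PolicyDocument=json.dumps(policy),
)
```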

So in exams you might sometimes get a question which gives you a use case where the CloudWatch agent is up and running; however, the logs are not arriving in CloudWatch Logs. One of the ideal solutions there would be to assign an appropriate IAM role with a policy so that the agent is able to push to CloudWatch Logs. The next important part here is Elastic MapReduce. EMR is a service that basically makes use of the Spark and Hadoop open-source frameworks to quickly and cost-effectively process and analyze vast amounts of data. So for any use case you see in exams that deals with analysis of a huge amount of data, as well as use cases where ETL is required, EMR is generally the right solution. Also understand the architecture of EMR; this architectural understanding will prove to be useful.

Sometimes in exams you might be asked: can you use Spot Instances for a task instance group, or a Spot Instance for a core instance group? You should be able to answer that, so make sure you go through the video again if this is something that might be confusing for you.
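
As one way to picture that question, here is a minimal sketch of launching an EMR cluster where the task instance group runs on Spot while the master and core groups stay On-Demand. The cluster name, instance types, counts, and the release label are all illustrative:

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="analytics-cluster",    # hypothetical cluster name
    ReleaseLabel="emr-6.10.0",   # illustrative release
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            # Task nodes hold no HDFS data, so Spot interruptions are tolerable.
            {"Name": "task", "InstanceRole": "TASK",
             "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```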

The next thing is the Amazon Elasticsearch Service. Amazon Elasticsearch Service, again, is a fully managed service that is based on Elasticsearch. It also provides Kibana for visualization. Since this is a managed service, you don’t really have to worry about the underlying infrastructure. So if you have a use case in exams where you want to analyze logs and you want to build dashboards out of them, the Elasticsearch Service proves to be important here. You can push the logs to the Elasticsearch Service and then you can go ahead and build the dashboards out of them. In fact, if you have the data in CloudWatch Logs, you can even push it directly from CloudWatch Logs to the Elasticsearch Service. Now, along with that, at a high-level overview, be aware of what Systems Manager is and some of the subset services of Systems Manager. One of the very important ones is Run Command, which basically allows you to run a specific command document against target instances. So you should know what a command document is here.
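
As a quick illustration, here is a minimal sketch of invoking Run Command against a hypothetical instance using the AWS-managed AWS-RunShellScript command document:

```python
import boto3

ssm = boto3.client("ssm")

resp = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance ID
    DocumentName="AWS-RunShellScript",    # AWS-managed command document
    Parameters={"commands": ["uptime", "df -h"]},
)
print(resp["Command"]["CommandId"])
```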

The next thing is patch compliance. Patch compliance basically allows us to check the compliance status of EC2 instances with respect to patching activity. The third important service here is Parameter Store. Parameter Store is a centralized place to store configuration data, which also includes credentials. So it can include a DB username and password, and various other things.
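
Here is a minimal sketch of storing and retrieving a credential as an encrypted SecureString parameter; the parameter name and value are placeholders:

```python
import boto3

ssm = boto3.client("ssm")

# Store a credential encrypted with the account's default KMS key.
ssm.put_parameter(
    Name="/prod/db/password",  # hypothetical parameter name
    Value="s3cr3t-value",      # placeholder secret
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it at application start-up.
param = ssm.get_parameter(Name="/prod/db/password", WithDecryption=True)
print(param["Parameter"]["Value"])
```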

Now, the next important pointer here is DynamoDB performance optimization. We have already discussed some of the important performance optimization aspects. You can increase the RCUs and WCUs, and you can also move to on-demand capacity. If cost is a factor here, you can also put an SQS-based setup in front of the table; this will definitely introduce latency and move you to an asynchronous setup, but SQS is something that will help you tune the cost. Now, as far as performance is concerned, you can also make use of global secondary indexes and local secondary indexes for your DynamoDB table. Again, these can prove to be an important performance factor.
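
For example, here is a minimal sketch of the two capacity-related levers mentioned above, applied to a hypothetical table: raising provisioned throughput, or switching the table to on-demand billing instead:

```python
import boto3

ddb = boto3.client("dynamodb")

# Option 1: raise provisioned RCUs/WCUs on a hypothetical table.
ddb.update_table(
    TableName="orders",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)

# Option 2: switch the table to on-demand capacity instead
# (mutually exclusive with option 1 in a single call).
# ddb.update_table(TableName="orders", BillingMode="PAY_PER_REQUEST")
```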

The next important point here is Step Functions. Step Functions basically coordinates multiple AWS Lambda functions or other AWS resources based on user-defined workflow steps. So you can define a specific workflow, and it will be executed based on that workflow. Prior to Step Functions, SWF was one of the services which was used quite extensively. But SWF has a steep learning curve; it is not as simple as Step Functions, and you cannot really build things through a GUI. So Step Functions is a great service, and in fact AWS recommends that you go with Step Functions if it fulfills your use case. Considering that, you should have an overview of Step Functions versus AWS SWF. Step Functions basically outshines SWF for most workflows. One of the great benefits is that you do not need to maintain the infrastructure, both for the workflows as well as for the tasks. A Step Functions state machine can be defined with CloudFormation as well as SAM, and the workflow can be designed with a GUI. As opposed to that, for SWF, users are responsible for infrastructure management, only the EC2 instances can be defined via CloudFormation, and whatever workflow you create in SWF can only be defined in application code.
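
A minimal sketch of creating a one-step state machine with boto3, where the Amazon States Language definition invokes a hypothetical Lambda function (the role and function ARNs are placeholders):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "Comment": "Single-task workflow sketch",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/stepfunctions-exec-role",  # placeholder
)
```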

So SWF is typically good for long-running workflows which involve many human tasks, like approvals, reviews, maybe investigations. For workflows where multiple manual human tasks are involved, SWF is still a better choice. Now, you also have the AWS Batch service, which basically allows customers to easily and efficiently run batch jobs on AWS. With AWS Batch, the only things that we typically have to configure are the configuration settings for Batch and the job definition. AWS Batch can then automatically and dynamically provision the optimal quantity and type of compute resources based on the volume and specific resource requirements of the batch jobs that you have submitted.
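
Here is a minimal sketch of submitting a job once a job queue and job definition already exist; both names here are hypothetical:

```python
import boto3

batch = boto3.client("batch")

batch.submit_job(
    jobName="nightly-report",
    jobQueue="default-queue",      # hypothetical, created beforehand
    jobDefinition="report-job:1",  # hypothetical registered job definition
    containerOverrides={"environment": [{"name": "RUN_DATE", "value": "2023-09-05"}]},
)
```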

You also have EC2 auto recovery. Basically, you can create a CloudWatch alarm that monitors an EC2 instance, and in case there is something wrong with the instance, it can automatically recover that instance for you. For example, if there is a loss of network connectivity, a software issue on the physical host, or one of the other conditions mentioned in the documentation, auto recovery can automatically recover the instance for you, so you don’t really have to do it manually.
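
A minimal sketch of such an alarm, using the StatusCheckFailed_System metric with the EC2 recover action; the region and instance ID are placeholders:

```python
import boto3

cw = boto3.client("cloudwatch")

cw.put_metric_alarm(
    AlarmName="auto-recover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    # The automate action asks EC2 to recover the instance onto healthy hardware.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```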

The next thing is the Data Lifecycle Manager. The DLM is basically used for EBS snapshots. Here you can define a backup and retention schedule for EBS snapshots by creating a lifecycle policy, and it will automatically take regular snapshots of your EBS volumes. All right, so before the DLM, a lot of customers used to create their own scripts that would take automated backups in the form of EBS snapshots, but that is no longer required with the introduction of the Data Lifecycle Manager.
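
A minimal sketch of such a lifecycle policy, snapshotting every volume carrying a hypothetical Backup=true tag daily and retaining seven snapshots; the execution role ARN is a placeholder:

```python
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots with 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],  # hypothetical tag
        "Schedules": [{
            "Name": "daily-3am",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)
```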

Now, during the exams you will also see terms related to RTO and RPO, so make sure your understanding of RTO and RPO is pretty clear. RTO is the Recovery Time Objective, and it is basically the amount of time that it takes for you to recover your infrastructure and business operations after a disaster has struck. All right, and the Recovery Point Objective is concerned more with the data: it basically looks at the maximum tolerable period for which data can be lost.

Then there are certain miscellaneous pointers that you should remember. First is Service Catalog. You should know what Service Catalog is. It basically allows organizations to create and manage catalogs of IT services that are approved for use on AWS. Typically in organizations you will see that one developer will launch Ubuntu, some will launch Red Hat, some will launch CentOS, some will launch Amazon Linux, so there is no common platform there. With Service Catalog you can specify what exactly would be launched and from which AMI it would be launched. So everything remains common there, and it helps tremendously during patching activity.

You should also know about IDS and IPS. An IDS/IPS can basically look into the network traffic and block or detect it depending upon the rules that you set. For IDS and IPS, typically if you install them in a data center, then promiscuous mode is something that is used. However, AWS does not support promiscuous mode or SPAN ports, so an agent needs to be installed within the EC2 instances for IDS and IPS. Now, along with that, you should know what API Gateway is. API Gateway is basically a fully managed service that allows customers to create, publish, maintain, monitor, and secure APIs at scale. The API backend can be a Lambda function, and it can also be various HTTP API endpoints in AWS or elsewhere.

For API Gateway, at a high level, just understand the throttling functionality, the caching functionality, and the request validation aspect. For the older exam, these three were some of the very important topics that were commonly asked. In this exam you will not see such frequent questions, but it is important to have an overview before you sit for the exams. Now, the next important part is Athena. Athena is generally used for use cases where, let’s say, you want to analyze logs stored in S3, like CloudTrail or maybe VPC Flow Logs, with simple SQL statements in a serverless manner. Serverless is very important here. So within a use case in your exams, if you see that log analysis needs to be done, and you don’t really want to configure any log monitoring solution or any infrastructure, and you want to opt for serverless, then Athena is a straightforward answer there.
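
A minimal sketch of kicking off such a serverless query with boto3; the table, database, and result bucket are hypothetical and would need to be defined in the Glue/Athena catalog first:

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="""
        SELECT eventname, count(*) AS calls
        FROM cloudtrail_logs          -- hypothetical table over S3 data
        GROUP BY eventname
        ORDER BY calls DESC
        LIMIT 10
    """,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
print(resp["QueryExecutionId"])
```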

The next important part is S3 cross-region replication. Cross-region replication allows you to replicate objects between S3 buckets which are in different regions altogether. For cross-region replication, enabling versioning on the buckets is mandatory.
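
A minimal sketch of a replication configuration on an already-versioned source bucket; the bucket names and the replication role ARN are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",  # hypothetical, versioning already enabled
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",  # empty prefix = all objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }],
    },
)
```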

Now, the last important pointer for today’s video is ElastiCache performance. Do remember one important point here: many times what happens is that the application keeps on adding data to your cache, and that data stays there for a long time while no requests ever produce a cache hit on it. So after some amount of time, you will see that there is a lot of clutter of data within your cache. What you can do is add a time to live (TTL) there. This will allow you to clear whatever clutter might be there within your cache: after a certain TTL, the objects which are part of the cache will expire. This allows the objects which are not really accessed frequently to be removed from your cache memory.
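
A minimal sketch of setting a TTL on writes, assuming a Redis-based ElastiCache cluster and the open-source redis Python client; the endpoint is a placeholder:

```python
import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

# Write with a one-hour TTL; Redis evicts the key automatically on expiry,
# so rarely accessed entries no longer clutter the cache.
r.set("user:42:profile", '{"name": "example"}', ex=3600)
print(r.ttl("user:42:profile"))  # seconds remaining until expiry
```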
