AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course
AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course includes 10 lectures that provide in-depth knowledge of all key concepts of the exam. Pass your exam easily and learn everything you need with our AWS Certified Solutions Architect - Associate SAA-C02 Certification Training Video Course.
Curriculum for Amazon AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course
AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course Info:
The complete course from ExamCollection's industry-leading experts helps you prepare and provides the full 360° solution for self-prep, including the AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course, practice test questions and answers, a study guide, and exam dumps.
So this is just a quick lecture to touch base on what's meant by scalability and high availability. This is quite beginner-level, so if you feel very confident about these concepts, feel free to skip this lecture. Scalability means that your application or system can handle a greater load by adapting. There are two kinds of scalability: vertical scalability, and horizontal scalability, which is also called elasticity. Scalability is different from high availability; they're linked, but different. So what I want to do is deep dive into this distinction, and we'll use a call centre as a fun example to really put into practice how things work. Let's talk about vertical scalability first. Vertical scalability means that you increase the size of your instance. Take a phone operator, for example. We have a junior operator we just hired. He's great, but he can only take five calls per minute. Now we have a senior operator, who is much better; he can take up to ten calls per minute. So we've basically scaled up our junior operator into a senior operator: faster and better. This is vertical scalability; as you can see, it goes up. For example, in EC2, our application runs on a t2.micro, and we want to scale up the application; that means maybe we want to run it on a t2.large instead. So when do we use vertical scalability? It's very common when you have a non-distributed system such as a database. As a result, it's quite common for services such as RDS or ElastiCache, which you can scale vertically by upgrading the underlying instance type. There are usually limits to how much you can vertically scale, though, and that's a hardware limit. But still, vertical scalability is fine for a lot of use cases. Now let's talk about horizontal scalability. Horizontal scalability means that you increase the number of instances or systems for your application.
So let's take our call center again. We have an operator and he is being overloaded. I don't want to vertically scale him; I want to hire a second operator, and I've just doubled my capacity. Actually, I'll hire a third operator. You know what, I'll hire six operators. I've horizontally scaled my call center. When you have horizontal scaling, that implies you have a distributed system, and this is quite common when you have a web application or a modern application. But remember that not every application can be a distributed system. It's easy nowadays to horizontally scale thanks to cloud offerings such as Amazon EC2, because we can just right-click on the web page and, boom, all of a sudden we have a new EC2 instance and we can horizontally scale our application. Now, let's talk about high availability. High availability usually goes hand in hand with horizontal scaling, but not all the time. High availability means that you're running your application or system in at least two data centers, or two Availability Zones in AWS. The goal of high availability is to be able to survive a data center loss: in case one data center goes down, we're still running. So let's go back to our phone operators. Maybe I'll have three of my phone operators in a first building in New York, and three of my phone operators in a second building on the other side of the United States, in San Francisco. Now, if my building in New York loses its Internet connection or its call connection, those operators can't work. But my second building in San Francisco is still fine, and they can still take the phone calls. So in that case, my call center is highly available. High availability can also be passive: for example, with RDS Multi-AZ we have a passive kind of high availability. But it can also be active, and this is when we have horizontal scaling.
This is where, for example, I have my phone operators in both of my buildings, all taking calls at the same time. So for EC2, what does all this mean? Well, vertical scaling means increasing the instance size; it's called "scale up" or "scale down". For example, the smallest instance you can get in AWS today is a t2.nano, and the largest is a u-12tb1.metal, with 12.3 terabytes of RAM and 450 vCPUs. So this is a significant range, and I'm sure these things will get bigger as time goes along. You can vertically scale from something very, very small to something extremely large. Horizontal scaling means you increase the number of instances, and in AWS terms it's called "scale out" or "scale in": out when you increase the number of instances, in when you decrease the number of instances. This is used for auto scaling groups and load balancers. And finally, high availability is when you run instances of the same application across multiple AZs, for example an auto scaling group or a load balancer with multi-AZ enabled. So that's it for a quick rundown of the terms "high availability" and "scalability". They're necessary for you to understand when you look at the exam questions, because the questions can trick you sometimes. So make sure you're very confident with those; they're pretty easy when you think about them. Keep the call centre in mind when you get these questions. Okay, that's good. I will see you in the next lecture.
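To recap the terminology with the call-centre numbers from this lecture, here is a minimal sketch (the calls-per-minute figures are just the lecture's example values): scale up makes one unit faster, scale out adds more units.

```python
# Call-centre analogy: one operator = one instance.
JUNIOR_CALLS_PER_MIN = 5    # a "small instance"
SENIOR_CALLS_PER_MIN = 10   # a "large instance" (scaled up)

def vertical_scale(calls_per_min_per_operator, operators=1):
    """Scale up/down: change how fast each operator is."""
    return calls_per_min_per_operator * operators

def horizontal_scale(calls_per_min_per_operator, operators):
    """Scale out/in: change how many operators there are."""
    return calls_per_min_per_operator * operators

# Scale up: replacing a junior with a senior doubles capacity (5 -> 10).
assert vertical_scale(SENIOR_CALLS_PER_MIN) == 10
# Scale out: six juniors give 30 calls/min without faster "hardware".
assert horizontal_scale(JUNIOR_CALLS_PER_MIN, 6) == 30
```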
Now let's learn about load balancing. First of all, you may be asking: what is load balancing? Load balancers are servers that forward internet traffic to your multiple back-end EC2 instances. So as a diagram, what does it look like? Assume we have three EC2 instances running our application, and our load balancer, in the middle, will redirect traffic from users to some of these EC2 instances. For example, my first user goes to my first EC2 instance through the load balancer; the EC2 instance responds to the load balancer, and the load balancer responds back to the user. The same happens for another user: user two may use the same request mechanism, but this time the user will be served by the second EC2 instance in the back end, and user three will be served by the third EC2 instance. As we can see here, the users just interface with a single point of entry, which is our load balancer, and on the back end we can scale our EC2 instances and have many of them serve the traffic. And because the load is balanced, it's called a load balancer. So why should we use a load balancer? Well, as I just said, we can spread load across multiple downstream instances, and we can expose a single point of access, via DNS, to your application. We don't need to know about all the back-end EC2 instances; we just need to know about the point of access, which is the hostname of your load balancer. It's also going to seamlessly handle failures of downstream instances through health checks: it performs regular health checks on your instances, so it knows when not to send traffic to an instance. It will also provide SSL termination (HTTPS) for your website directly on the load balancer side — we'll see this in detail in this section — and help you enforce stickiness with cookies.
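The user-1 / user-2 / user-3 behaviour described above can be sketched as a tiny round-robin simulation (the instance names are hypothetical): one entry point, each request forwarded to the next back-end instance in turn.

```python
from itertools import cycle

class LoadBalancer:
    """Conceptual sketch only: real ELBs also do health checks, TLS, etc."""

    def __init__(self, instances):
        self._targets = cycle(instances)   # rotate through targets forever

    def handle(self, request):
        instance = next(self._targets)     # pick the next back-end instance
        return f"{instance} served {request}"

lb = LoadBalancer(["i-aaa", "i-bbb", "i-ccc"])
assert lb.handle("user1") == "i-aaa served user1"
assert lb.handle("user2") == "i-bbb served user2"
assert lb.handle("user3") == "i-ccc served user3"
assert lb.handle("user4") == "i-aaa served user4"  # wraps around
```

The users only ever talk to `lb` (the single DNS name); which instance answers is invisible to them.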
We'll see what stickiness means later in this section. The load balancer also gives you high availability across Availability Zones: your load balancer and EC2 instances can be spread across multiple AZs. And it can cleanly separate public-facing traffic, from your users to your load balancer, from private traffic, between your load balancer and your EC2 instances. So you may be asking: why use an Elastic Load Balancer (ELB) from AWS rather than running your own load balancer on EC2? Well, an ELB is a managed load balancer, so AWS guarantees that it will keep working and takes care of all the upgrades, the maintenance, and the high availability; we don't have to worry about any of that. It provides us a few configuration knobs to make sure the behaviour we get is the one we expect. Running your own load balancer would cost less, but it would require a lot more effort and would most likely not scale as well. So using a managed AWS load balancer is the way to go, and because it's also integrated with so many AWS offerings and services, it's a no-brainer: you use load balancers all the time on AWS. Now let's talk about health checks. Why are health checks so important for load balancers? Because they enable your load balancer to know whether the instances it sends traffic to are healthy — and by healthy, we mean available to reply to requests in a good way. The health check is done by the load balancer, and you have to specify a port and a route; in the real world, /health is quite common. If the response is not 200 OK, the instance is deemed unhealthy. So that's the idea: our load balancer — here a Classic Load Balancer — will perform health checks on your EC2 instance on, say, port 4567, and if the EC2 instance responds with 200 OK, meaning "I'm healthy", the load balancer will mark that instance as healthy and will be able to send traffic to it.
If it's not healthy, the load balancer will stop sending it traffic. And the nice thing with health checks is that they happen regularly — every 5 seconds, for example, and you can configure that interval — so your load balancer continuously tries to ensure that your EC2 instances are available to respond to requests. Now, there are three types of load balancers on AWS, and you must understand the distinctions between them. They're all managed. The first one is the Classic Load Balancer; it's called v1, it's the older generation, it was first created by AWS in 2009, and it supports HTTP, HTTPS, and TCP traffic. Then we have the v2 load balancers; they're newer generation, and the exam will make sure that you more often choose the newer-generation load balancers. The first of these is the Application Load Balancer; it's from 2016 and it supports HTTP, HTTPS, and WebSockets — you have to remember these things. The Network Load Balancer, also newer generation, from 2017, supports TCP, TLS (secure TCP), and the UDP protocol. We'll do a deep dive on all these load balancers, so don't worry too much. Overall, it is now advised to use the newer-generation load balancers because they offer more features and are more integrated with AWS; they are really the way forward. You can set up two kinds of load balancers on AWS: an internal load balancer, which is private within your account and cannot be accessed from the public internet, or an external, public load balancer. An external load balancer allows your users to access — for example, if it's your website — your website through that load balancer.
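The health-check behaviour described above — mark a target unhealthy after a run of failed checks, and healthy again after a run of successes — can be sketched as a small state machine. The threshold values here are just example settings, not fixed ELB defaults.

```python
class TargetHealth:
    """Sketch of per-target health tracking, as a load balancer might do it."""

    def __init__(self, unhealthy_threshold=2, healthy_threshold=5):
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy_threshold = healthy_threshold
        self.failures = 0      # consecutive failed checks
        self.successes = 0     # consecutive 200 OK checks
        self.healthy = True

    def record_check(self, status_code):
        if status_code == 200:             # "I'm healthy"
            self.successes += 1
            self.failures = 0
            if self.successes >= self.healthy_threshold:
                self.healthy = True
        else:                              # anything else counts as a failure
            self.failures += 1
            self.successes = 0
            if self.failures >= self.unhealthy_threshold:
                self.healthy = False       # stop sending traffic here
        return self.healthy

t = TargetHealth()
t.record_check(500)
assert t.record_check(500) is False   # two consecutive failures -> unhealthy
for _ in range(5):
    t.record_check(200)
assert t.healthy is True              # five consecutive 200s -> healthy again
```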
One last thing on security before we get started with some deeper diving. The setup is that your users communicate with your load balancer, which communicates with your EC2 instances, and your users may reach your load balancer via HTTP or HTTPS from anywhere. As such, a reasonable security group for the load balancer allows HTTP on port 80 from source 0.0.0.0/0, which means anywhere, and HTTPS on port 443 from anywhere. This is a very classic and easy security group. Now the more interesting part is between the load balancer and your EC2 instances. There's going to be HTTP traffic between these two, because your load balancer has to talk to your EC2 instances over HTTP, but this time we want that traffic to be strictly restricted to the load balancer. That means your EC2 instance expects only your load balancer to send traffic to it. For that, we can give the application a security group that allows only traffic from the load balancer. And here's the interesting thing: the rule says, okay, HTTP on port 80, and the source is a security group — you see, the rule references a security group ID, and that ID is the load balancer's security group ID. So the load balancer has a security group, your EC2 instance has a security group, and the EC2 instance's security group references the load balancer's, whose rules allow users to connect from anywhere on HTTP and HTTPS. I hope you see that this setup is the most secure one. Okay? Before we get started, there are a couple more things you should know. Load balancers can scale, but not instantaneously; so if you need to handle a huge spike in scale, you should contact AWS for a "warm-up".
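The two-security-group pattern just described can be sketched like this (the group IDs and the `allowed` helper are made up for illustration; real security-group evaluation also handles CIDR matching, protocols, and more): the load balancer's group is open to the world, while the instance's group only accepts traffic whose source is the load balancer's security group ID.

```python
# Load balancer SG: HTTP/HTTPS from anywhere.
LB_SG = {"id": "sg-lb-1234", "inbound": [
    {"port": 80,  "source": "0.0.0.0/0"},
    {"port": 443, "source": "0.0.0.0/0"},
]}

# Application SG: HTTP only from the load balancer's SG, not from a CIDR.
APP_SG = {"id": "sg-app-5678", "inbound": [
    {"port": 80, "source": "sg-lb-1234"},   # reference the LB security group
]}

def allowed(sg, port, source):
    """Simplified check: 0.0.0.0/0 matches any source; otherwise the
    source must equal the rule's security-group ID exactly."""
    return any(r["port"] == port and r["source"] in (source, "0.0.0.0/0")
               for r in sg["inbound"])

assert allowed(LB_SG, 80, "some-ip-on-the-internet")  # users reach the LB
assert allowed(APP_SG, 80, "sg-lb-1234")              # LB reaches the instance
assert not allowed(APP_SG, 80, "1.2.3.4")             # direct access is blocked
```

Blocked traffic doesn't get a rejection; it simply times out, which is exactly what we'll observe in the hands-on.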
Now, for troubleshooting: if you get a 4xx type of error, these are client-induced errors; a 5xx error means an application-induced error; and the load balancer error 503 means that the load balancer is at capacity or that there is no registered target. And if the load balancer cannot connect to your application, you need to check your security groups — we've seen on the previous slide how to set them up. For monitoring, the ELB access logs will log all the access requests, so you can debug every single request that reaches your load balancer. And we have CloudWatch metrics, which give you aggregate statistics on your load balancers — for example, how many connections are currently ongoing. Alright, that's it. So far we've covered the theory, but in the next lectures we'll dive deeper into classic, application, and network load balancers, as well as do some hands-on work. Alright, I will see you in the next lecture.
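Before moving on, the troubleshooting rules above can be summarized in a tiny classifier (a sketch, not anything the ELB exposes): 4xx is client-side, 5xx is application-side, and 503 specifically means no healthy registered target or a load balancer at capacity.

```python
def classify(status):
    """Map an HTTP status code to the troubleshooting bucket from the lecture."""
    if status == 503:
        return "load balancer: at capacity or no registered target"
    if 400 <= status < 500:
        return "client-induced error"
    if 500 <= status < 600:
        return "application-induced error"
    return "ok"

assert classify(404) == "client-induced error"
assert classify(500) == "application-induced error"
assert classify(503).startswith("load balancer")
assert classify(200) == "ok"
```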
Let's first start with our Classic Load Balancer. These may not be as prominent in the exam, but they may still be mentioned, and they're still very valid load balancers. They support two things: TCP, which is called layer 4, and HTTP and HTTPS, which are called layer 7. I'm not going to get too deep into what these layers mean; just know that TCP is layer 4 and the other two are layer 7. The health checks are either TCP-based or HTTP-based, and what you get out of a Classic Load Balancer is a fixed hostname — we'll see this in the hands-on as well. So what we'll set up in this lecture is a client talking to our Classic Load Balancer via an HTTP listener, and internally, that CLB will redirect the traffic to our EC2 instances. Let's have a look at how we can do this. In the console, we have our instance already created and running our HTTP server, and if we go to its IP and press Enter, we get back "Hello, world" from our instance. So this is great. Now let's go ahead and create our Classic Load Balancer. For this, I'm going to scroll down, go to Load Balancers, and create a load balancer. We have three choices, and we're going to choose the Classic Load Balancer, which is the previous generation — but don't worry, we will get to the application and network load balancers. As you can see from the screen, AWS is always kind of pushing us to use the ALB and the NLB rather than the CLB, but for this exercise we'll just use a Classic Load Balancer. Click on Create, and then we have to give it a name — so that's "my-first-CLB" — and then we need to choose the VPC; we'll choose my default VPC. Is it an internal load balancer or an external one? If it's internal, we're not going to be able to publicly access it, so we'll leave this unticked. And do we want advanced VPC configuration?
We will not tick this box for the time being. The listener configuration says: what is our CLB going to listen on? It will listen on port 80, which is HTTP, and it will talk to our instances on HTTP port 80 as well. This looks good, this is classic. Next I'm going to assign a security group: I'll create a new one and call it "my-first-load-balancer-sg". What it will allow is anyone, on port 80, from anywhere, to access our load balancer, which is what we need, because we want to access our newly created load balancer. Then for security settings, there's nothing on HTTPS, so we get a warning, but this is fine. Then for health checks, we need to set up a health check for our instances. It's saying, okay, use the HTTP protocol on port 80, and then ping a specific path. If I leave this as /index.html, will this work? Let's add /index.html to our instance URL and press Enter — as you can see, this still works, so we can keep this ping path as is. But if we wanted to, we could also remove the index.html part; that's up to you, it's the exact same thing in this example. Now for the advanced details: how long are we willing to wait for a response? That's the response timeout, 5 seconds. How frequently do we want to check our instance? That's the interval — maybe we want to check it every 5 seconds. After how many failed health checks is this instance going to be deemed unhealthy? We'll set it as two. And after how many successful health checks is it healthy again? We'll set it as five. Then click on "Add EC2 Instances", and here we're able to add our instance directly to our Classic Load Balancer. Click on Add Tags, then Review and Create, and Create. Now the load balancer is being created — but we get an error we need to resolve: the health check is not valid, because we need an interval of maybe 10 seconds so that it is greater than the response timeout.
So I'll just fix that real quick and click on Create again, and now everything should be working. Okay, so our CLB is now being created and we have to wait for it to be provisioned. As you can see, it is being provisioned. Our Classic Load Balancer is now created, and if you go to Instances, one of them is in service. It took a little while to get into service, but now it is. That means it's passing the health checks, and therefore my load balancer is ready. So if I take this DNS name right here, copy the entire DNS name, open a new tab, and paste it: as we can see, using this DNS name, we get "Hello World" from the same IP as before. If I go back to my EC2 instance and refresh its page, I get the exact same message. So the CLB is working, because it responds the exact same way as my EC2 instance — it is actually my EC2 instance responding behind my load balancer. Okay, but the thing is, now we have a load balancer in front of our instance, and we're able to access both our instance and our load balancer at the same time. This is kind of problematic: we want to expose only our load balancer and not expose our EC2 instances directly. For this, we need to go back to security groups and make a little change. So let's go to Security Groups. Here we have my first load balancer security group and my first security group, which was attached to our EC2 instance. I'm going to look at my first load balancer's inbound rules, and this looks good: we're saying that any HTTP traffic on port 80 can access our load balancer. But if we go to my first security group, this is too open; we're saying that anyone, anywhere, can access HTTP. What we want to do is change this and say that only traffic coming from my first load balancer's security group can access my EC2 instance. So instead of allowing HTTP traffic from anywhere, we allow it from my load balancer only.
Once we've completed this and saved it, we're saying: okay, only traffic from my load balancer can go into my EC2 instances. So if I go back to my CLB and refresh, as we can see, this still works, and I still get the response "Hello World". But now if I go to the instance's IP directly and press refresh, you can see the page spinning at the top of my screen. This looks like a timeout. And as you remember, whenever there's a timeout issue, that means a security group is doing its job and not providing access to our instance. So our EC2 instance right now is not accessible directly; it can only be accessed through our CLB, and it will time out if we try to access the IP directly. So here we have set up a much better security mechanism, because the only entry point into our EC2 instances is through our load balancer. So this is quite cool. Something we can do now is add more instances. If we go back to our instances right here, I can right-click and choose "Launch More Like This", and if I look at the details, the same user data is there — perfect, it's going to reuse the same user data. We say okay, we're going to launch one of them, so we have one more. And maybe we should launch another one: I'll launch this instance again — right-click, Launch More Like This — but this time I will edit the instance details, so let's click "Previous" a few times to go back through the wizard. What I want to change is the Availability Zone: right now it's going into eu-west-3c, but I want to launch it in a different one, 3a for example. Then go back, review, and launch it — okay, everything looks good. So now our three instances are launching, and what I have to do is go to my load balancer on the left-hand side, edit my instances, and add these two new instances right here. They'll all be added, and then I can save. And now we have three instances attached to our load balancer.
Two of them are out of service because they're still launching, so let me wait a little bit until the health checks pass. Now our three instances are in service: we have two in eu-west-3c and one in eu-west-3a. If I go to my load balancer's DNS name and refresh the page, I should see the IP changing every single time. As you can see, because the IP changes on every refresh, our load balancer is getting responses from our three EC2 instances one at a time. And this is really cool, because we're demonstrating what load balancing actually looks like. Alright, so that's it for this lecture. I will not be using the CLB in the future, so what you can do is right-click it and delete it, and then we'll be done with the CLB. However, we will keep our EC2 instances for the next hands-on. I'll see you in the next lecture.
Now let's get into the second kind of load balancer we'll see: the Application Load Balancer. It's a layer-7-only load balancer, so that means HTTP, and it allows you to route to multiple HTTP applications across machines. These machines are going to be grouped into something called a target group, and that will make a lot of sense once we get into the hands-on. It also allows you to load-balance multiple applications on the same EC2 instance — for example, when using containers and ECS, as we'll see. It supports HTTP/2 and WebSockets, as well as redirects: if you want to redirect traffic from HTTP to HTTPS automatically, it can be done at the load balancer level. It also supports routing to different target groups based on rules. For example, you can route based on the target path of your URL: example.com/users and example.com/posts — users and posts are different routes, or paths, in your URL — and you can direct these two paths to different target groups; we'll see what that means in a second. You can also route based on the hostname of the URL: if your load balancer is accessed using one.example.com or other.example.com, each can be routed to a different target group. And you can also route based on query strings and headers: for example, example.com/users?id=123&order=false could be routed to a different target group. ALBs — short for Application Load Balancers — are great when you have microservices and container-based applications. Once we've learned what Docker and Amazon ECS are, the ALB will be the go-to load balancer, because it has a port mapping feature that allows it to redirect to a dynamic port on the ECS instance — more on that in the ECS section. In comparison, if we wanted to have multiple applications behind classic load balancers, we would need multiple classic load balancers.
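The path, host, and query-string conditions just described can be sketched as a simple routing function (all the hostnames, paths, and target-group names here are illustrative, matching the lecture's examples):

```python
def route(host, path, query):
    """Pick a target group the way ALB listener conditions do:
    path-based, host-based, then query-string-based, else default."""
    if path.startswith("/users"):
        return "users-target-group"
    if path.startswith("/posts"):
        return "posts-target-group"
    if host == "one.example.com":
        return "one-target-group"
    if query.get("order") == "false":
        return "no-order-target-group"
    return "default-target-group"

assert route("example.com", "/users", {}) == "users-target-group"
assert route("example.com", "/posts", {}) == "posts-target-group"
assert route("one.example.com", "/", {}) == "one-target-group"
assert route("example.com", "/", {"order": "false"}) == "no-order-target-group"
```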
With classic load balancers, we would actually need one per application, whereas with an Application Load Balancer, we're able to have one load balancer in front of many applications. So maybe a diagram will help. We have our external Application Load Balancer, which is public-facing. Behind it, we have our first target group, made up of two EC2 instances, and this one handles routing for the /user route. And we have a second target group, made of EC2 instances again; this one is our search application, with a health check as well, and traffic reaches it through rules for the /search route. So as you can see here, we have two independent microservices that do different things — the first one is a user application, the second one is a search application — but they're both behind the same Application Load Balancer, which knows how to intelligently route to the target groups based on the route being used in the URL. So the target groups — what can they be? First, they can be EC2 instances, which can be managed by auto scaling groups, as we'll see very soon. They can be ECS tasks, and we'll see this in the ECS section. They can be Lambda functions — this is something that's not very well known: Application Load Balancers can sit in front of Lambda functions, and we'll see what Lambda functions are in a future section, but they're the basis of everything called "serverless" in AWS. And finally, they can be IP addresses, which must be private IP addresses. So an ALB can route to multiple target groups, and the health checks are done at the target group level. A couple of things that are good to know before we go into the hands-on. First, you also get a fixed hostname with your Application Load Balancer, just like the classic one. Second, the application servers don't see the IP of the client directly; the true IP of the client is inserted in a header called X-Forwarded-For.
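The X-Forwarded-* headers work like this on the back end: the application reads them from the request the load balancer forwarded, since the TCP connection itself comes from the load balancer's private IP. A minimal sketch (the header values are example data):

```python
def client_info(headers):
    """Recover the real client details from the headers the ALB inserts.
    X-Forwarded-For may contain a comma-separated chain of proxies;
    the original client is the first entry."""
    return {
        "client_ip": headers.get("X-Forwarded-For", "").split(",")[0].strip(),
        "client_port": headers.get("X-Forwarded-Port"),
        "scheme": headers.get("X-Forwarded-Proto"),
    }

info = client_info({
    "X-Forwarded-For": "12.34.56.78",
    "X-Forwarded-Port": "443",
    "X-Forwarded-Proto": "https",
})
assert info == {"client_ip": "12.34.56.78", "client_port": "443", "scheme": "https"}
```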
Likewise, you can obtain the client port from X-Forwarded-Port and the protocol from X-Forwarded-Proto. What that means is that our client talks directly to our load balancer, which performs something called connection termination, and when your load balancer talks to your EC2 instance, it uses the load balancer's private IP to connect to your instance. So for the instance to know the client's IP, port, and protocol, it has to look at those extra headers in the HTTP request: X-Forwarded-For, X-Forwarded-Port, and X-Forwarded-Proto. All right, that's it for the theory. Now let's go to the hands-on part and create our first Application Load Balancer. Okay, so let's go back to Load Balancers and create a load balancer; this time it is the Application Load Balancer, which is for HTTP and HTTPS traffic only. Let's click on Create, and then I'll call it "my-first-alb". It's going to be internet-facing, and the IP address type is IPv4. For the listeners — which are what is exposed on your load balancer — it's going to be HTTP on port 80. And then for the AZs, I'm going to say yes, I want it in eu-west-3a, 3b, and 3c. Excellent; click on Next. Now, for the security settings, again we get the same warning because we haven't configured HTTPS, but this is fine. Then click on Security Groups, and we're going to choose an existing security group: we'll reuse the "my first load balancer" security group, which we know is already preconfigured to allow port 80 and works well with my first security group attached to the EC2 instances. Next, click on Configure Routing, and we're going to create a new target group; I'll call it "my-first-target-group". As you can see, we have three target types: EC2 instances, IP addresses, or Lambda functions. In our case, we'll use instances. The protocol is HTTP and the port is 80.
The protocol for the health check is HTTP again, and the path is slash (/), which makes sense. We could also override the advanced health check settings: the healthy threshold is five, the unhealthy threshold is two — that's good; the timeout is five, the interval we're going to set as 10, and the success code is 200. So as you can see, we have slightly more options here. Now click on Register Targets, and we have to register some targets into our target group. Right now I'm going to register only two targets: the instances in 3c and 3a. Press the Review button — everything looks good — and click on Create. So now we can go back into our console and wait for our first Application Load Balancer to be provisioned. Back in my ALB, it is now active, and if I copy the DNS name and open a new page, I get a 503. That means something is misconfigured, but that's okay, we'll figure this out. Our ALB itself is fine — we're able to access it, it's just returning an error — so that means something is happening at the target group level. We click on Target Groups on the left-hand side, and we have my first target group right here, and as far as configuration goes, it looks good. But if we go to Targets — yes, okay, here is the problem; I probably missed that step. We don't have any targets registered in our target group, so it's just saying: hey, there are no instances to send traffic to; you need to register your instances first. So I'll edit this, add my 3a and 3c instances, click on "Add to registered", and save. Now that my two instances have been registered in my target group, the ALB will go and check their health. Let's wait a little bit. Now my two instances are healthy because they passed a health check. So if I go back to the page and refresh now, we can see "Hello World" from both of our instances.
As I refresh, the "Hello World" message alternates from one instance to the other, so this is perfect. Now, I said that we can have multiple target groups, so I'm able to create my second target group. This is not going to be a fully functional demo, but at least I will show you what I mean. We have a second target group, and in this one I'm able to register another target: I'll register this instance right here and add it to the list of registered instances. So now we have two different target groups, and they both have different instances. If I go back to my load balancer, I'm able to go to Listeners and edit the rules. Remember how I told you about the rules? This is the place where you go to edit them. So let's go ahead and view and edit the rules. See here: we're saying that by default, the action is to forward to my first target group. But I can click on the edit icon here to insert a new rule, matching based on the host, the path, an HTTP header, the query string, the source IP, and so on. For example, we can say: if the path is /test, then the action is to forward to my second target group. And the cool thing is that we're now able to set up a bunch of rules. I'll click on Save — although it's not going to fully work, because /test doesn't exist on my second target group — but we're saying: if the path is /test, forward to my second target group; if the path is anything else, forward to my first target group. So this is a rule that is going to work, and you can also add tons more rules. For example, if the path is /constant, we can add an action to return a fixed response, say a 404 with the body "we can't find this page", and our ALB will return that fixed response. So let's go ahead and test these and see how that works.
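Conceptually, the listener rules we just configured behave like an ordered list where the first matching rule wins and the last rule is the default action. A sketch (the target-group names follow this lecture's naming; the rule structure is simplified):

```python
RULES = [
    {"path": "/test",     "action": ("forward", "my-second-target-group")},
    {"path": "/constant", "action": ("fixed-response",
                                     "404: we can't find this page")},
    {"path": None,        "action": ("forward", "my-first-target-group")},
]  # None = default rule, always matches last

def evaluate(path):
    """Return the action of the first rule whose condition matches."""
    for rule in RULES:
        if rule["path"] is None or path == rule["path"]:
            return rule["action"]

assert evaluate("/test") == ("forward", "my-second-target-group")
assert evaluate("/constant")[0] == "fixed-response"
assert evaluate("/anything-else") == ("forward", "my-first-target-group")
```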
So this is my first ALB, and if I refresh, I get my two EC2 instances. If I go to /constant, then I get the "we can't find this page" fixed message we just set, and if I go to /test, I get a 404 Not Found because the target group is not well configured and we'd need to set up the EC2 instance a bit further to make this work. But what I wanted to show you through the listeners is that we're able to have many, many rules, and these rules allow you to direct traffic not just to one target group but to multiple target groups, and that's the whole power of Application Load Balancers. Okay? So just to make things simpler, I'm going to go ahead and remove some rules. I'll remove these rules right here, click on delete, and now they're gone. For our target groups, I can delete the second target group, and what I'm going to do in the first target group is edit it and add the missing instance. I can't remember which one it is, so I'll just add the third one. Here we go, and click on Save. And now we have three instances in my first target group. So what that means is that once my instance passes the health checks, if I go back to the URL, it will all be good and all set. Okay, so that's it for this lecture. I hope you liked it, and I will see you in the next lecture.
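The cleanup step, adding the missing instance back into the first target group, also has an API equivalent. A sketch of the `register_targets` payload is below; the target group ARN and instance IDs are placeholders, since the real IDs are only visible in the console during the demo.

```python
# Adding instances to a target group, as in the demo's cleanup step,
# expressed as a boto3 elbv2 register_targets payload.
# The ARN and instance IDs below are placeholders.
register_payload = {
    "TargetGroupArn": "arn:first-target-group-placeholder",
    "Targets": [
        {"Id": "i-0aaa111122223333a"},  # the two originally registered
        {"Id": "i-0bbb111122223333b"},  # instances (placeholders)
        {"Id": "i-0ccc111122223333c"},  # the missing third instance
    ],
}
# With credentials: elbv2.register_targets(**register_payload)
# The ALB then runs health checks before routing traffic to each target.
```

Deregistering works the same way through `deregister_targets`, which mirrors removing an instance from the registered list in the console.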