2V0-21.20: Professional VMware vSphere 7.x Certification Video Training Course
2V0-21.20: Professional VMware vSphere 7.x Certification Video Training Course includes 100 lectures that provide in-depth knowledge of all key concepts of the exam. Pass your exam easily and learn everything you need with our 2V0-21.20: Professional VMware vSphere 7.x Certification Training Video Course.
Curriculum for VMware 2V0-21.20 Certification Video Training Course
And I'll be performing all of these demos in my home lab that I created using VMware Workstation. However, if you do want to do this and you don't have your own home lab environment, you can try these tasks out at hol.vmware.com using a hands-on lab kit that you can get there for free. And so I'm just going to browse to the networking view here. We see a distributed switch that's already built, and I'm going to go to Configure. And under Configure, we have a menu option for private VLAN. So what I'm going to do is utilise this screen to edit, modify, and create private VLANs. But before I do that, I'm actually going to create some new distributed port groups. So I'm going to add a new distributed port group, and I'm just going to call it Isolated. I'm going to stick with all of the defaults here, including the VLAN type, for the moment, and I'll go ahead and hit Finish. I'm going to create a second port group called Community One, again using all of the default settings. I'm going to create a third port group called Community Two, again choosing all the default settings. And then finally I'll create a port group called Promiscuous, again using all of the default port group settings. So now I've created four different port groups here inside of my vSphere distributed switch. And I want you to think of these port groups that I've created sort of like this. So Community One is maybe one part of my organization, and anything that I connect to the Community One port group should be able to communicate with anything else that's in that Community One port group. The same can be said for Community Two. It's a certain department of my company, and any machines that are connected to the Community Two port group should be able to communicate amongst themselves. However, Community Two and Community One should not be able to communicate with each other. And then I've got a port group here called Isolated.
So with Isolated, these are virtual machines that I'm going to connect to this port group, and they should not be able to communicate with each other. So I may have 50 virtual machines connected to this isolated port group. None of those virtual machines can communicate with each other. The only things that they can communicate with are machines connected to this promiscuous secondary VLAN and its port group. The same is true for Communities One and Two. All of these port groups can communicate with anything connected to the Promiscuous port group. So let's go back to our vSphere distributed switch and start to establish some of this private VLAN structure. And I'm going to hit Edit here, and I'm going to establish a primary VLAN. I'm going to call it VLAN 50. So this is a VLAN that my community and my Isolated and my Promiscuous port groups will all be a part of. So basically, any machine in any of those port groups that I just mentioned is going to be on the same IP address range. Any machine connected to either of those community port groups, the isolated or the promiscuous port group, is all going to be part of the same VLAN, and they're all going to have IP addresses in the same IP address range. But within VLAN 50, I'm going to create some secondary VLANs. So I'm going to create a secondary VLAN called 150 that's going to be a community secondary VLAN, a secondary VLAN called 250 that's going to be another community secondary VLAN, and a secondary VLAN 350. And I'm going to set that as an isolated secondary VLAN. So the type of secondary VLANs that I choose here is going to impact the behaviour of the virtual machines connected to them. Anything connected to the promiscuous secondary VLAN can communicate with anything connected to any of these other secondary VLANs. Secondary VLAN 150 is marked as community. As a result, anything connected to that secondary VLAN can communicate with each other as well as with the promiscuous VLAN.
Same thing with community secondary VLAN 250. Anything within that community secondary VLAN can communicate. They can also communicate with the promiscuous secondary VLAN. And then there's our isolated secondary VLAN. Anything connected to this cannot communicate with each other, but it can communicate with the promiscuous secondary VLAN. And so now I can go to these port groups that I've created. I can click on "Configure." So from my Isolated port group, I can click on Configure, I can click on Edit, and I can establish the VLAN membership. And I'm going to establish a private VLAN here. So now I've associated this port group with that isolated secondary VLAN, and I can go through to Community One and Community Two and do the same thing. On each of these, I'm just going to click on the port group, edit the VLAN settings, configure a private VLAN, and associate them with the appropriate secondary VLAN. And so what I'm basically doing here is having this one big VLAN, VLAN 50. That's my primary VLAN. All of these virtual machines connected to all four of these port groups are going to be part of primary VLAN 50, but they also each have a secondary VLAN associated with them that is basically there to segment traffic within VLAN 50. So I'm taking VLAN 50 and further segmenting it by establishing VLANs within a VLAN. The final port group is the Promiscuous port group. I kind of think of this like a shared services port group. So this is something that contains virtual machines that all of the devices need to be able to communicate with. Whether they're in community or isolated VLANs, it doesn't matter. They can all communicate with this promiscuous secondary VLAN.
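To make those reachability rules concrete, here's a small Python sketch. This is just a model of the private VLAN behaviour described above, not a vSphere API call, using the same secondary VLAN IDs from the demo. In vSphere, the promiscuous secondary VLAN ID is the same as the primary VLAN ID, so 50 stands in for the Promiscuous port group here:

```python
# Model of private VLAN reachability (illustrative only, not vSphere code).
# Secondary VLAN IDs and types match the demo: 150/250 community,
# 350 isolated, 50 promiscuous (same ID as the primary VLAN).
PVLAN_TYPES = {
    150: "community",    # Community One
    250: "community",    # Community Two
    350: "isolated",     # Isolated
    50:  "promiscuous",  # Promiscuous (primary VLAN 50)
}

def can_communicate(svlan_a: int, svlan_b: int) -> bool:
    """Return True if ports on these secondary VLANs can exchange traffic."""
    type_a, type_b = PVLAN_TYPES[svlan_a], PVLAN_TYPES[svlan_b]
    # Anything can talk to the promiscuous secondary VLAN.
    if "promiscuous" in (type_a, type_b):
        return True
    # Community ports talk only within their own community.
    if type_a == "community" and svlan_a == svlan_b:
        return True
    # Isolated ports cannot talk to anyone else, not even each other.
    return False
```

So `can_communicate(150, 150)` is true, `can_communicate(150, 250)` is false, and `can_communicate(350, 350)` is false, which is exactly the behaviour of the four port groups in the demo.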
First off, the purpose of mirroring is to duplicate the traffic from a distributed port or maybe even multiple distributed ports and send that traffic as a copy to some sort of destination. And so I'm going to start out by going to the networking view here and choosing my vSphere distributed switch that's built right into this lab environment. And then here under the Configure tab, you can see that we have an option for port mirroring. So if I click here, this gives us the ability to go ahead and create a new port mirroring session. So I'll click on "New," and first off we're going to choose the session type, and the first session type is distributed port mirroring. Now you can click on the little information icon if you want more information on any of these types of sessions. And the first session type is pretty straightforward. We're basically just mirroring traffic from one set of distributed ports to another set of distributed ports. So we're probably using a virtual machine to monitor that traffic. Maybe on the destination port we'll have a virtual machine running some sort of sniffer. The second option is a remote mirroring source, and what we're doing here is mirroring traffic from a set of distributed ports to a set of physical uplinks. So this is a session in which we will have something in the physical network, and the mirrored traffic will go out the uplinks onto the physical network and be captured by something there. If I want to do that in reverse, I can have a remote mirroring destination, meaning that traffic is coming from a set of VLANs and is being received by one of my distributed ports. And then I can also have an encapsulated remote mirroring Layer 3 source. So we're going to take traffic that's going to a set of distributed ports and send a copy of it to a particular IP address, and that IP address could be a virtual machine or it could be a physical server somewhere.
So in the case of this demo, we're going to go with the first option, distributed port mirroring. So I'll go ahead and choose that session type, and then I'll give my port mirroring session a name; I'm just going to call it RickCreashy Demo, and then I'll choose whether I want to enable or disable the port mirroring session. At the moment I'm going to leave it disabled, and there are some advanced properties here. So number one, I'm going to have a destination distributed port where I'm sending all of this mirrored traffic. Do I want to allow the virtual machine that's connected to that distributed port to be able to generate normal traffic? Do I want it to be able to operate normally as a virtual machine, or should it strictly receive a copy of this data? So I'm going to go ahead and allow normal I/O on the destination ports, but I can modify that if I want to disallow it. I can also change things like the sampling rate. If I don't want to mirror all of the traffic, I can change the sampling rate if I'd like to do that. So I'm just going to leave these advanced settings as you see them here, and I'm going to hit Next. And what we can now do is choose the source and destination ports. And so in order to choose the source port, there has to be a virtual machine actually connected to that port. I'm just going to pick this one here and go ahead and hit OK, so that's the virtual machine that's connected to that port, and I want to capture all of the traffic from this port and send a mirror copy to a destination port. And this destination port is where, let's say, my sniffer virtual machine is going to be connected. So I'll choose a destination port and hit OK, and then I'll hit Next. And now I've established a port mirroring session. So that's how you configure a port mirroring session.
Now, there are a couple of things I want to mention when configuring a port mirroring session, and we kind of rushed through this, but let's go back a few steps and look at the properties. And one of the properties that we can actually modify is the mirrored packet length. So we have the ability to basically say, what is the length of each packet that we actually want to duplicate and send to the destination? As you can see here, it's currently configured for 60 bytes. And if I leave this as is, only the first 60 bytes of each packet are actually going to be mirrored. So why would you want to do that? Why wouldn't you just mirror the entire packet? Well, most of the time when you're setting up a mirroring session, you're not trying to capture all of the data that the source port is actually sending and receiving. What you're trying to do is analyse those packet headers. And so you don't necessarily have to capture the entire packet to do that. But if you're doing something like deep packet inspection or intrusion prevention, in those scenarios, you may want to mirror the entire packet. So if you're planning on taking a VMware exam, the main thing that I would focus on in preparation for that exam is these four different types of port mirroring sessions and exactly what each of them does. I would click on this little information icon and make sure you're very clear about these four different types of port mirroring sessions and what the differences are between them prior to taking a VMware certification test.
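As a rough illustration of how those two advanced properties interact, here's a minimal Python sketch. This is a model of the behaviour, not VMware code, and the function and parameter names are my own:

```python
def mirror_session(packets, mirrored_packet_length=60, sampling_rate=1):
    """Yield a truncated copy of every Nth packet.

    sampling_rate=1 mirrors every packet; mirrored_packet_length=60 keeps
    only the first 60 bytes of each packet, which is enough for the
    headers but not the full payload.
    """
    for i, pkt in enumerate(packets):
        if i % sampling_rate == 0:
            yield pkt[:mirrored_packet_length]
```

With the default 60-byte length, a 1,500-byte packet arrives at the sniffer as just its first 60 bytes, which is why header analysis works fine but deep packet inspection would need the full packet length.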
So here you can see I've clicked on a vSphere distributed switch that I've already created. And what I'm trying to do with NetFlow is capture traffic patterns in order to figure out what kind of traffic is passing through this vSphere distributed switch. And so what NetFlow is going to do is summarise these traffic flows and then send those little summaries to a NetFlow collector like SolarWinds, for example. That way, I have the ability to do a historical analysis and troubleshoot traffic patterns occurring on this vSphere distributed switch. So in order to make that work, there are certain things that I need to set up. So I'm going to click on my vSphere distributed switch, and I'm actually just going to right-click it and go to Edit NetFlow. So we'll go to Edit NetFlow, and what we're going to do is put in the IP address of whatever system we're using to collect NetFlow data. So maybe this is my SolarWinds server in my environment. I'll go ahead and specify the IP address and the port of that server that I want to send all of this NetFlow data to. And then I also have to give my virtual switch an IP address, because the virtual switch itself is going to be communicating with this NetFlow server. So it's got to have an IP address. They can't talk without an IP address. So what we're doing here is actually assigning an IP address to this vSphere distributed switch. So what's going to happen is that we'll have this vSphere distributed switch that spans many ESXi hosts. But as far as my NetFlow server is concerned, it looks like one switch with one IP address is sending all of these traffic flows to that collector. And then I can also modify some of the more advanced settings here. Like, for example, do I want to modify the sampling rate? So this represents the number of packets that NetFlow drops after every collected packet.
So if I don't necessarily want to capture all of my traffic, maybe I only want to capture a certain percentage of the traffic, then I can modify this, and by default it's set to zero. When set to zero, NetFlow will sample every single packet sent over this vSphere distributed switch. I can also choose to only process traffic that's flowing within this distributed switch and not capture anything flowing to the physical network. I'm going to leave that disabled, because I want to capture all of the traffic flowing through this distributed switch and send all of that NetFlow data to my collector. So now let's go to our port group. I'm going to right-click this port group and I'm going to go to Edit Settings. And then under Monitoring, I can pick and choose on a per port group basis whether I want to enable NetFlow on that particular port group. So the more port groups I enable NetFlow on, obviously, the more traffic that's going to be generated and the more data that's going to be sent to my NetFlow collectors. So nothing is going to happen with NetFlow unless I go in here and enable it on some of my port groups. So I'll pick and choose which port groups I want to enable NetFlow on in order to determine which port groups should actually be monitored.
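The sampling rate semantics are easy to get backwards, so here's a short Python sketch of the rule described above. This is my own model, not a vSphere API: the value is the number of packets dropped after each collected packet, so 0 collects everything and 3 collects one packet in every four:

```python
def sampled_packets(packets, sampling_rate=0):
    """Return the packets NetFlow would actually process.

    sampling_rate is the number of packets dropped after every collected
    packet; 0 means sample every single packet.
    """
    collected = []
    skip = 0
    for pkt in packets:
        if skip == 0:
            collected.append(pkt)
            skip = sampling_rate  # drop this many before the next collection
        else:
            skip -= 1
    return collected
```

For eight packets, a sampling rate of 0 collects all eight, while a rate of 3 collects only packets 1 and 5.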
When we think about a virtual machine, there are really two parts to a virtual machine. We've got the running state of a virtual machine that exists on our hypervisor, and we've got the files that make up our virtual machine, and those exist on some sort of storage system. So a virtual machine is similar to a physical machine in many ways. We have an operating system, and we have to give this virtual machine resources. So our operating system has no idea that this VM is running as a VM. Windows doesn't have any awareness that it's running on top of ESXi. So we basically have to trick Windows into thinking, "Hey, you've got memory, you've got a CPU." Even though we're not actually dedicating any physical hardware to this VM, we want it to appear as if it has its own memory and its own CPU. We want it to think it has its own SCSI controller. And this is really the critical part of storage. So what we're going to do is present drivers to the virtual machine that it can use to access storage, just as a physical machine would. A physical machine would have some sort of SCSI controller. And so we're going to trick our virtual machine. We're going to give it a driver. We're going to allow it to use something like a Paravirtual SCSI controller or an LSI Logic SCSI controller. We're going to trick that machine into thinking it has access to physical storage when really it's simply going to take those storage commands, dump them into the hypervisor, and the hypervisor will take it from there. The same goes for our virtual NIC. We're tricking the virtual machine into thinking it has a network interface card when in reality it just has a driver for a VMXNET3 network interface or an E1000 network interface. It's not real hardware. It's just taking that network traffic and dumping it into the hypervisor, and the hypervisor takes it from there. And so the beautiful thing about this is that the hypervisor then gives us a layer of abstraction.
It doesn't matter to the virtual machine what the actual underlying hardware is. So in the case of storage, the virtual machine never sees whether it's dealing with an NFS storage device, whether it's dealing with vSAN, whether it's dealing with iSCSI, Fibre Channel, or local physical storage. It has no awareness of any of that. Windows sees a virtual SCSI controller; it sends those storage commands, those SCSI commands, to that virtual SCSI controller, and that's all it really knows. So let's take a look at this process. We have a few different options when it comes to our virtual disks. Just like the CPU and memory, the VM doesn't actually have any physical storage hardware. It's accessing a shared resource, and in this case, our resource is called a datastore. So Windows needs to see the storage hardware. Windows needs to think, "Hey, I've got a SCSI controller. I've got something that I can send the SCSI commands to." So we're going to trick it. We're going to give it a virtual SCSI controller. We're going to give it a driver. The SCSI commands are then sent to the virtual SCSI controller whenever Windows needs to read or write data. And from there, those storage commands hit the hypervisor, and they are redirected to the appropriate VMDK file for this virtual machine. This is what gives me a lot of the possible storage features that we have with vSphere. I can do something like a Storage vMotion; I could move this VMDK to some other datastore, and the virtual machine will be completely unaware that that has happened because all the virtual machine really sees is that SCSI command going out on the virtual SCSI controller. What the hypervisor does with that SCSI command after the fact is completely hidden from the virtual machine. Now, there are also some options for the type of virtual disk that we create for our VMs, and the most common choice is a thinly provisioned disk.
So let's assume that we have a VM created with an 80-gig virtual disk, but the VM only has 40 gigs of actual data. This means that only 40 gigabytes of actual storage capacity are consumed on the datastore itself. The VM thinks it has an 80-gig disk, but on the actual physical storage system, it's only consuming 40 gigs of space. And we have to recognise that that comes with a little bit of risk. If I create too many thinly provisioned disks on a datastore, it's possible that as those virtual machines continue to add more data and their virtual disks expand and take up more space, I could accidentally fill up that datastore. So you want to make sure that you have your datastore usage alarms and free space alarms in vCenter configured properly and that they are actually set up to either send you an email or an SNMP trap or something like that. If you use thin provisioned disks, you must take these kinds of precautions to avoid overfilling a datastore. Because when a datastore becomes full, all the virtual machines on it are impacted. They'll be unable to write data, and those virtual machines will halt. So it is service-impacting when a datastore becomes full, and that's one of the risks you run with thinly provisioned disks. And that's one of the reasons why some customers choose to leverage a thickly provisioned disk. So when you create a thickly provisioned disk, all of the space for that virtual disk is immediately set aside. So my VM has an 80-gig virtual disk; 100% of that storage capacity is immediately consumed on the datastore, and so it's not as efficient with its space usage. But you're not really running the risk of accidentally filling up a datastore. And if you choose to do thick provisioning, there are two ways you can do it: either lazy zeroed or eager zeroed. So, when I create a VMDK, I can select thin provisioned, thick provisioned lazy zeroed, or thick provisioned eager zeroed.
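To put some numbers on the space accounting above, here's a back-of-the-envelope Python sketch. This is a hypothetical helper of my own, not a vSphere API, just illustrating why thin disks save space but make overcommit possible:

```python
def datastore_usage_gb(disks):
    """Return datastore capacity consumed by a list of VMDKs.

    disks: list of (provisioned_gb, written_gb, kind) tuples, where kind
    is 'thin' or 'thick'. A thick disk consumes all of its provisioned
    space up front; a thin disk consumes only what the guest has written.
    """
    used = 0
    for provisioned, written, kind in disks:
        used += provisioned if kind == "thick" else written
    return used

# The 80-gig disk with 40 gigs of data from the example:
# thin consumes 40 GB on the datastore, thick consumes the full 80 GB.
```

So ten of those 80-gig thin disks fit on a 500-gig datastore today, but if the guests keep writing data, the combined provisioned size of 800 gigs can eventually overrun the datastore, which is exactly why the usage and free space alarms matter.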
Most of the time, the best option is either thickly provisioned eager zeroed or thinly provisioned, one of those two. I strongly advise against using thick provisioned lazy zeroed. So here we see a thickly provisioned, eagerly zeroed VMDK. You can see zeros have been written to the entire VMDK. So we know that a thinly provisioned disk saves space and is very efficient, and a thickly provisioned disk allocates 100% of the space up front. But in either case, before space can be used, the blocks must have zeros written to them, and this can impact applications that write a lot of data, like, for example, a database virtual machine. Thinly provisioned disks zero out blocks on demand. Lazy zeroed disks are the same: they will zero out as the application is trying to write to them. And so for things like write-intensive databases, this can really slow down the way that they perform. But an eager zeroed thick provisioned disk is zeroed out up front. So when you create that virtual disk, it's going to take a little longer because zeros are going to be written to every single block of that virtual disk. But now when your application tries to write to new blocks on that disk, it's not going to have to wait for them to be zeroed. So this is a good option for things like Exchange database availability groups or SQL databases. Now let's take a moment to look at the overall storage architecture and see where problems can occur and where latency can occur. And so here you see a nice little storage diagram. And on the left, I've got my virtual machine, and I've got a virtual machine with a virtual SCSI controller. So if I'm troubleshooting some sort of problem with storage, number one, I want to know the scope of the problem that I'm having. Is it on every VM, or is it specific to one particular VM? And if it's just one VM, maybe I can kind of zero in on that VM and say, hey, is there something wrong inside the operating system of the VM?
Is there something going on with this virtual SCSI controller or something else within Windows that could potentially be causing that problem? If I find out the problem is local to my ESXi host and all the VMs on a host are impacted, what could it be? Well, I could have kernel latency. Kernel latency means the hypervisor itself is introducing latency to my storage commands. And this is expressed in esxtop as KAVG, kernel average latency. So if the kernel average latency is over one millisecond, you've got a problem on the ESXi host. If the ESXi host is taking more than a millisecond to process these storage commands, that's a problem, and we can really focus our troubleshooting efforts on the host itself. Another thing that could potentially be harming the performance of a host in an NFS environment is some sort of problem with the physical network adapter. So with NFS, all my storage commands are being funnelled through a virtual switch. And so, as such, if I'm having a problem at that virtual switch level, or maybe one of these physical adapters is having some sort of problem, or maybe these connections to the Ethernet switch are having some sort of problem, in those cases, that'll cause a host-level slowdown in storage as well. So that's my next step for troubleshooting. And the way I've diagrammed this is really not ideal. Hopefully, if I'm setting up NFS, maybe the second adapter is connected to a second physical switch. That would really be ideal. So in this diagram, I've got both adapters going to the same physical switch. In real life, I probably wouldn't do that. I'd probably have the adapters going to two different physical switches for failover purposes. But the other problem that can occur here is at the Ethernet switch level, right? So I could have some kind of problem here with the Ethernet switch. I could have incorrect MTU settings. I could have a switch that's just plain overwhelmed. Maybe the CPU and memory are overwhelmed.
There's something along those lines. That's another potential spot where my problem might exist. I could have a congested link between the switch and my actual storage device. The processors on my NAS device could be overwhelmed, and then I could have physical storage issues in the device itself. Now, these sorts of problems are typically going to manifest on multiple ESXi hosts. So maybe I don't have a high enough spindle count. Spindle count reminds me of bottles of ketchup. If I'm squirting ketchup out of one bottle, I'm limited to the throughput of one bottle. But if I squirt ketchup from four bottles at once, I can squirt four times as much ketchup at the same time. I know it's kind of a weird analogy, but that's sort of like disks. If I'm pulling data from one disk versus pulling it from four, I can pull it four times as fast. That's why spindle count is important. So maybe I don't have a high enough spindle count. Maybe my disks aren't fast enough. Maybe the cache is inadequate. Whatever the case may be, there are a bunch of potential problems that you could have with the actual storage device itself as well. So having a diagram like this is really useful. Let's take a look at one more diagram. This one's for iSCSI. So again, the diagram doesn't really change that much, but there are some different pieces to the puzzle. Again, if the problem is isolated to a VM, look at the operating system of that VM. When the virtual machine generates a SCSI command, the SCSI command is relayed to the storage adapter of the ESXi host. So in this case, we're using iSCSI, and we've got some kind of storage adapter here. It could be a dedicated hardware iSCSI initiator. It could be a software iSCSI initiator. It could be a dependent hardware iSCSI initiator. In this case, I've chosen to make it a software iSCSI initiator.
So the job of the storage adapter is to receive SCSI commands in their raw format from the virtual machine and then take those SCSI commands and prepare them as iSCSI packets so that they can traverse the physical network. So what do I want to potentially troubleshoot here? That storage adapter is only going to perform well if there are adequate CPU resources on the ESXi host. It's a software component of ESXi, along with the virtual switch and the VMkernel port. Those things will only work well if the host has enough processing power. And then there's the virtual switch, or potentially multiple virtual switches. So with iSCSI, what I could do is create a VMkernel port on one virtual switch and a VMkernel port on a second virtual switch, and I can have traffic on both of those VMkernel ports round robin. That's a great way to ensure that, number one, you're spreading your traffic out across multiple physical adapters. But number two, if one of these switches fails or if one of these network connections fails, our traffic still continues to flow. So if I'm having performance problems and they're local to this host, I'm looking at the CPU of this host, and I'm looking at the physical network connections for this host. Are they overwhelmed? And if those are not the scenarios that we're experiencing, if those are not the problems, then I can make my way into the network itself. So are the physical switches being overwhelmed with CPU or memory, or is there just too much traffic hitting them? Has one of these connections to a storage processor failed? Are all my connections actually up and running? Is there some kind of network problem that's creating latency? And then I can look at my storage array. Now, again, if I'm having storage array or network issues, I'm probably seeing problems on multiple ESXi hosts. So if the CPUs of my storage processor are overwhelmed, that's going to cause all sorts of latency. I might even see storage commands being aborted.
There are counters for this in esxtop as well, like ABRTS/s, the number of aborts per second. Aborts are bad news. If you have storage commands that are being aborted, they're sitting out there for so long that they're basically just being dropped. So aborts are bad, and they can happen if the storage array is overwhelmed. Or again, maybe I have an inadequate spindle count. Maybe I don't have fast enough disks. Maybe I'm using 7,200 RPM SATA when I should really be using 15,000 RPM SAS, or I should be using flash or SSD. So in troubleshooting and problem solving with storage, root cause determination always starts with a solid understanding of my storage network topology so that I can kind of systematically work my way through it and determine what the root cause of my storage problem is.
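The triage flow in this lesson can be summed up in a few lines of Python. This is a sketch of the rule of thumb, not a VMware tool: in esxtop, the latency the guest sees (GAVG) is the kernel latency (KAVG) plus the device latency (DAVG). The 1 ms KAVG threshold is the one quoted above; the roughly 20 ms DAVG threshold is a commonly cited rule of thumb I've added for illustration, not a number from this lecture:

```python
def diagnose_latency(kavg_ms: float, davg_ms: float) -> str:
    """Rough storage latency triage based on esxtop counters.

    KAVG over ~1 ms points at the ESXi host itself; high DAVG points at
    the array or the storage network (the ~20 ms figure is an assumed
    rule of thumb, not from the lecture).
    """
    gavg_ms = kavg_ms + davg_ms  # what the guest actually experiences
    if kavg_ms > 1.0:
        return "investigate the ESXi host (kernel latency)"
    if davg_ms > 20.0:
        return "investigate the array / storage network (device latency)"
    return f"latency looks healthy (GAVG {gavg_ms:.1f} ms)"
```

So a KAVG of 1.5 ms sends you to the host, while a KAVG of 0.2 ms with a DAVG of 30 ms sends you out to the network and the array, which matches the systematic walk through the topology described above.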