VMware 1V0-21.20 Exam Dumps & Practice Test Questions

Question 1:

Which option is not mandatory when setting up a new virtual machine through the vSphere Client?

A. Data center
B. Compute resource
C. Guest OS
D. Folder

Answer: C

Explanation:

Deploying a new virtual machine (VM) within the vSphere Client requires several critical configurations to ensure the VM is properly instantiated and ready for use. However, not every configuration option is compulsory during the initial deployment.

Let’s analyze each choice:

Data center (A): Selecting a data center is essential because it determines the logical container where the VM will reside. The vSphere environment is organized into data centers, and every VM must belong to one, making this selection mandatory.

Compute resource (B): This selection specifies the physical host or cluster that will run the VM. Without choosing a compute resource, the VM has no physical or logical host to operate on, which makes this step non-negotiable.

Guest OS (C): Choosing the guest operating system is not strictly required at the time of VM creation. While selecting a guest OS helps vSphere optimize VM settings such as virtual hardware compatibility, it’s possible to deploy the VM without confirming this information initially. The guest OS can be specified or adjusted after deployment, which makes this choice optional during setup.

Folder (D): Folders serve organizational purposes, helping administrators group VMs logically. Although useful for managing large environments, selecting a dedicated folder during VM deployment is optional; VMs can be placed directly into a data center without a folder assignment.

In summary, the data center and compute resource selections are foundational and required to create a VM, while the guest OS and folder can both be deferred. Of the two, the guest OS is the intended answer here: although it directly influences VM optimization, it can be specified or changed after deployment without blocking VM creation, making option C the correct choice.
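The required-versus-optional split above can be sketched as a simple validation check. This is a Python illustration only, not actual vSphere tooling; the field names are hypothetical:

```python
# Hypothetical sketch of the New VM wizard's inputs. Only the data center
# and compute resource must be supplied up front; guest OS and folder can
# be deferred (illustrative model, not the real vSphere Client).
REQUIRED = {"datacenter", "compute_resource"}
OPTIONAL = {"guest_os", "folder"}

def missing_required(selections: dict) -> set:
    """Return the required fields the administrator has not yet chosen."""
    provided = {k for k, v in selections.items() if v}
    return REQUIRED - provided

# A VM with no guest OS or folder chosen is still deployable:
print(missing_required({"datacenter": "DC1", "compute_resource": "Cluster-A"}))  # set()
```

With only a data center selected, the check would instead report that the compute resource is still missing.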

Question 2:

Which alarm action should an administrator choose to execute a batch file on a specified virtual machine?

A. Send SNMP traps
B. Run a PowerCLI command
C. Run script
D. Send email notifications

Answer: C

Explanation:

When configuring alarms in a vSphere environment, it’s important to select an action that matches the desired response to specific events. In this case, the goal is to automatically execute a batch file on a virtual machine when an alarm triggers.

Let’s review each alarm action option:

Send SNMP traps (A): This action sends alerts to external network management systems via the SNMP protocol. It is purely informational and cannot trigger script execution on a VM. Hence, it is not suitable for running batch files.

Run a PowerCLI command (B): PowerCLI is a powerful scripting tool based on PowerShell used for vSphere automation. Although administrators can write scripts in PowerCLI to manage VMs, the vSphere alarm framework does not natively support triggering PowerCLI commands as an automatic alarm action. Using PowerCLI would require external orchestration outside the alarm system, making this option impractical for direct batch execution.

Run script (C): This is the most appropriate choice. The “Run script” alarm action enables execution of a specified script or batch file on the vCenter Server when an alarm condition occurs. Although the script itself runs on the vCenter Server host, it can be written to remotely execute commands on the targeted VM—via PowerShell remoting, SSH, or other remote execution methods. This option provides the flexibility to automate batch file execution in response to alarms effectively.

Send email notifications (D): Email alerts serve only as notifications to administrators. They do not perform any executable actions, so this option cannot fulfill the requirement to run a batch file.

In conclusion, to automate batch file execution upon alarm triggering within vSphere, Run script is the only native alarm action that supports this functionality. It provides the capability to run custom scripts that can remotely control virtual machines, making option C the correct and best choice.
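As a rough illustration, a script configured as the alarm action runs on the vCenter Server and receives alarm context through environment variables (names such as VMWARE_ALARM_NAME come from VMware's alarm-script documentation; the remote-execution hop to the VM is only sketched in a comment, and the batch path is hypothetical):

```python
"""Minimal sketch of a vCenter alarm-action script (assumptions noted inline)."""
import os

def handle_alarm() -> str:
    # vCenter passes alarm context to the script via environment variables;
    # VMWARE_ALARM_NAME and VMWARE_ALARM_TARGET_NAME are documented names.
    alarm = os.environ.get("VMWARE_ALARM_NAME", "unknown-alarm")
    target = os.environ.get("VMWARE_ALARM_TARGET_NAME", "unknown-target")

    # The script itself executes on the vCenter Server, so reaching into the
    # VM requires a remote-execution step. A PowerShell-remoting call might
    # look like the (hypothetical) line below -- adjust to your environment:
    #   subprocess.run(["pwsh", "-Command",
    #       f"Invoke-Command -ComputerName {target} -FilePath C:\\jobs\\cleanup.bat"])
    return f"alarm={alarm} target={target}"

if __name__ == "__main__":
    print(handle_alarm())
```

The key point the sketch captures is that the alarm framework hands the script enough context to identify the triggering object before any remote command is issued.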

Question 3:

A system administrator needs to configure a virtual machine to ensure it always receives guaranteed resources, even when the host system is heavily loaded or overcommitted. 

Which configuration should the administrator apply to the virtual machine to achieve this?

A. vSphere High Availability Admission Control
B. High Performance Power Policy
C. CPU and Memory Shares set to High
D. CPU and Memory Reservations

Answer: D

Explanation:

To guarantee that a virtual machine (VM) receives a dedicated amount of CPU and memory resources regardless of the overall system load, the most effective method is to configure CPU and Memory Reservations. This ensures that the specified portion of physical host resources is reserved exclusively for that VM, protecting it from resource contention during periods of overcommitment.

When reservations are set, VMware vSphere allocates the exact amount of CPU and memory resources to the VM, guaranteeing availability even if the ESXi host has over-allocated resources to multiple VMs. This is critical for business-critical or latency-sensitive workloads that cannot tolerate resource starvation or performance degradation.

Let’s examine the other options and why they don’t meet the requirement:

Option A: vSphere High Availability (HA) Admission Control is designed to ensure that sufficient resources are available across the cluster to restart VMs in case of host failure. It does not guarantee resource availability under normal or high-load conditions. Its focus is on failover readiness rather than real-time resource allocation.

Option B: High Performance Power Policy relates to the host’s CPU power management settings. Enabling this policy may keep the CPU running at maximum frequency but does not allocate or guarantee resources to any particular VM.

Option C: CPU and Memory Shares influence resource allocation priorities only when the host is under contention. Shares represent relative importance but do not guarantee minimum resources. Even with high shares, if total demand exceeds physical availability, a VM may still be starved.

Thus, CPU and Memory Reservations are the only settings that provide a hard guarantee of resource availability, making them essential when consistent VM performance is required despite host load or overcommitment. This makes Option D the correct and most reliable choice.
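The difference between shares and reservations can be made concrete with a small allocation model. This is an illustrative simulation, not VMware's actual scheduler: each VM first receives its reservation, and only the remaining capacity is split in proportion to shares.

```python
def allocate(capacity_mhz: int, vms: list) -> dict:
    """Toy CPU allocator: reservations are guaranteed first, then leftover
    capacity is divided by share ratio (simplified model, not ESXi's)."""
    # Step 1: every VM is guaranteed its reservation.
    alloc = {vm["name"]: vm.get("reservation", 0) for vm in vms}
    remaining = capacity_mhz - sum(alloc.values())
    # Step 2: leftover capacity is distributed proportionally to shares.
    total_shares = sum(vm.get("shares", 1000) for vm in vms)
    for vm in vms:
        alloc[vm["name"]] += remaining * vm.get("shares", 1000) // total_shares
    return alloc

# An overcommitted host: only 3000 MHz available for two hungry VMs.
result = allocate(3000, [
    {"name": "critical", "reservation": 2000, "shares": 1000},
    {"name": "best-effort", "reservation": 0, "shares": 1000},
])
print(result)  # {'critical': 2500, 'best-effort': 500}
```

Even though both VMs hold equal shares, the reserved VM never drops below its 2000 MHz floor, which is exactly the guarantee shares alone cannot provide.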

Question 4:

Which storage protocol is natively supported and compatible for use with datastores in VMware vSphere environments?

A. NFS
B. SMB
C. CIFS
D. FTP

Answer: A

Explanation:

In VMware vSphere environments, selecting the correct storage protocol is crucial for reliable, high-performance shared storage access. Of the options listed, only NFS (Network File System) is officially supported and widely used as a native datastore protocol by vSphere.

NFS is a file-level network storage protocol that allows ESXi hosts to mount shared storage over a network, treating remote storage similarly to local storage. This shared access is essential for enabling vSphere features like vMotion, Distributed Resource Scheduler (DRS), and High Availability (HA), which require multiple hosts to access the same datastore simultaneously.

VMware supports both NFS version 3 and version 4.1, with v4.1 providing enhancements such as improved security through Kerberos authentication and multipathing support for redundancy and performance.

Let’s review why the other protocols are unsuitable for vSphere datastores:

  • SMB (Server Message Block) is primarily used in Windows environments for file sharing. While excellent for Windows clients, SMB is not supported by vSphere as a datastore protocol. It lacks integration with hypervisor-level storage management.

  • CIFS (Common Internet File System) is an older variant of SMB (mainly SMB 1.0). It suffers from performance and security issues, making it inappropriate and unsupported in enterprise virtualization contexts.

  • FTP (File Transfer Protocol) is designed for transferring files between systems but is not a storage protocol. FTP lacks the concurrency, file locking, and integration features needed for a hypervisor to use it as a datastore.

In conclusion, NFS is the only protocol among these options that is fully compatible with VMware vSphere datastores, supporting key virtualization features while providing a straightforward and efficient storage solution. Therefore, Option A is the correct choice.

Question 5:

Which two capabilities are exclusively available on VMware distributed switches and are not supported on standard switches? (Select two.)

A. Port Mirroring
B. VLAN IDs
C. NetFlow
D. NIC Teaming
E. Load Balancing

Answer: A and C

Explanation:

In VMware vSphere networking, it’s important to distinguish the features supported by standard switches (VSS) and distributed switches (VDS) when designing your virtual environment. Both switch types handle basic networking tasks such as VLAN tagging and NIC teaming, but distributed switches offer advanced capabilities required for enterprise environments.

Two notable features exclusive to distributed switches are Port Mirroring and NetFlow.

Port Mirroring (option A) is a powerful network diagnostic tool that duplicates network traffic from one port to another for monitoring and analysis purposes. It is widely used for troubleshooting, intrusion detection, and traffic analysis with external tools. Standard switches operate on a host-by-host basis and lack the centralized architecture needed to mirror traffic across hosts or virtual networks. Distributed switches, however, operate at the data center level, giving them a global view of traffic, enabling port mirroring functionality.

NetFlow (option C) is a network protocol originally developed by Cisco that collects detailed information about IP traffic flows. It provides network administrators insight into bandwidth usage, traffic patterns, and potential security issues. This requires a centralized collection of flow data from multiple hosts, which only distributed switches can facilitate due to their aggregation capability across the entire virtual infrastructure.

On the other hand, features like VLAN IDs (B), NIC Teaming (D), and Load Balancing (E) are available on both standard and distributed switches, though distributed switches often offer enhanced policy controls and performance optimizations.

In summary, distributed switches enable enterprise-level features such as Port Mirroring and NetFlow, which require a centralized, cross-host network view. Standard switches provide basic network segmentation and redundancy but do not support these advanced monitoring functions. Therefore, the correct answers are A and C.

Question 6:

If an administrator wants to migrate a powered-off virtual machine to another cluster that does not share storage, which migration option should be selected for success?

A. Change compute resource only
B. Change both compute resource and storage
C. Migrate VM(s) to a specific datacenter
D. Change storage only

Answer: B

Explanation:

Migrating virtual machines within a VMware vSphere environment requires understanding the relationship between compute resources (hosts or clusters) and storage locations (datastores). When migrating a powered-off VM to a different cluster that lacks shared storage with the source, special considerations apply.

Option A, changing compute resource only, involves moving the VM’s execution host or cluster while keeping the VM’s files on the original datastore. This works only if both clusters share storage so that the destination host can access the VM’s disk files. Without shared storage, this method will fail because the new cluster cannot access the VM’s disks.

Option B is the correct approach. It involves migrating both the compute resource and storage simultaneously. This means the VM’s configuration and disk files are copied from the source datastore to a datastore accessible to the destination cluster. Since the VM is powered off, this is known as a cold migration. The VM’s files are physically moved to the new storage location, ensuring that the destination cluster can fully operate the VM without storage access issues.

Option C, migrating VM(s) to a specific datacenter, relates more to organizational placement within vSphere’s hierarchy rather than the technical movement of VMs between clusters and storage. It is not typically used for migrating powered-off VMs between clusters when storage is not shared.

Option D, changing storage only, means moving the VM’s files between datastores while keeping the same compute resource. This does not solve the problem of relocating the VM to a different cluster and is therefore insufficient.

In conclusion, when migrating a powered-off VM to another cluster without shared storage, selecting Change both compute resource and storage is necessary. This ensures the VM’s files are copied to an accessible datastore, and the VM runs on the new cluster without storage or access conflicts, making option B the correct choice.
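The decision logic described above can be expressed as a small helper. This is an illustrative decision table only; the option strings are shorthand for the vSphere Client choices:

```python
def migration_type(shared_storage: bool, change_cluster: bool) -> str:
    """Pick the migration option for a powered-off VM
    (simplified decision table, illustrative labels)."""
    if change_cluster and not shared_storage:
        # Destination hosts cannot see the source datastore, so the
        # disk files must move along with the compute placement.
        return "change compute resource and storage"
    if change_cluster:
        return "change compute resource only"
    return "change storage only"

# The scenario in this question: new cluster, no shared storage.
print(migration_type(shared_storage=False, change_cluster=True))
# -> change compute resource and storage
```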

Question 7:

Which two block storage protocols are supported in vSphere environments? (Select two.)

A. SAN
B. SMB
C. NFSv4.1
D. iSCSI
E. NFSv3

Answer: A, D

Explanation:

In vSphere environments, choosing the right storage protocols is vital for performance, reliability, and compatibility. Storage protocols are categorized into block-level and file-level, based on how they handle data. Block storage protocols allow direct access to raw storage blocks, which is essential for VMware’s VMFS datastores.

Let’s analyze the options:

A. SAN (Storage Area Network) is a broad term describing a high-speed network architecture that provides block-level storage access. While SAN itself isn’t a protocol, it commonly uses Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE) protocols. These Fibre Channel SANs are fully supported in vSphere, allowing ESXi hosts to directly interact with block storage devices, making SAN a valid choice.

B. SMB (Server Message Block) is a file-based protocol mainly used for file sharing in Windows environments. It doesn’t provide block-level access, and VMware does not support SMB shares as datastores for VMs. Therefore, SMB is not applicable for block storage in vSphere.

C. NFSv4.1 is an advanced file-level protocol supported by vSphere but it only offers file-level access, not block-level. Despite improvements in features, it remains a file protocol and not a block storage protocol.

D. iSCSI (Internet Small Computer Systems Interface) encapsulates SCSI commands in IP packets, providing block-level access over Ethernet networks. It is widely supported in vSphere and commonly used to connect ESXi hosts to SANs or standalone targets.

E. NFSv3, like NFSv4.1, is a file-level protocol and not a block protocol. It is supported by vSphere for file-based datastores but does not provide block storage access.

In summary, SAN (using Fibre Channel) and iSCSI are the block storage protocols compatible with vSphere, enabling direct block-level access necessary for VMFS datastores. File-based protocols such as NFSv3, NFSv4.1, and SMB do not qualify as block protocols, although some are supported for file storage.
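The block-versus-file split among these options can be summarized in a small lookup table (a simplified illustration of the classification above):

```python
# Simplified classification of the storage protocols discussed above.
PROTOCOL_ACCESS = {
    "Fibre Channel": "block",   # the usual transport behind "SAN"
    "iSCSI": "block",           # SCSI commands encapsulated in IP packets
    "NFSv3": "file",
    "NFSv4.1": "file",
    "SMB": "file",              # not supported as a vSphere datastore at all
}

def block_protocols() -> list:
    """Protocols that give ESXi raw block access (required for VMFS)."""
    return sorted(p for p, kind in PROTOCOL_ACCESS.items() if kind == "block")

print(block_protocols())  # ['Fibre Channel', 'iSCSI']
```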

Question 8:

If a vSphere administrator sees the error message: “Permanently inaccessible device: naa.1234567890 has no more open connections,” what conclusion should they draw?

A. The storage device is disconnected but can be restored after rebooting the host.
B. The storage device has failed but is expected to become available again.
C. The storage device has failed and is not expected to recover.
D. The storage device is disconnected but the issue is temporary.

Answer: C

Explanation:

This error message indicates a Permanent Device Loss (PDL) condition within the VMware vSphere environment. PDL signals that the ESXi host has lost access to a storage device permanently and has been explicitly informed by the storage system that the device is gone for good. This is a critical situation that differs substantially from transient connectivity problems.

In a PDL scenario, the storage device is marked as inaccessible, and VMware’s multipathing software stops all I/O operations toward it. The device is considered dead, and associated datastores are unmounted to protect data integrity. Virtual machines relying on those datastores will experience errors or crashes.

Option C is correct because it captures the essence of PDL: the storage device is permanently lost and not expected to return. Rebooting the ESXi host or retrying the connection will not fix the issue unless the underlying hardware or array problem is resolved.

The other options are incorrect because they describe temporary or recoverable states:

  • A describes a transient disconnect where a reboot might help, which matches an All Paths Down (APD) condition, not PDL.

  • B contradicts PDL because expecting the device to return conflicts with the permanent loss status.

  • D also refers to transient disconnects like APD, where the device might become available again; this does not apply to PDL.

To summarize, the PDL message clearly means the device is permanently unreachable and requires administrative action such as replacing hardware or adjusting storage configurations. Thus, the accurate interpretation is that the storage device has failed and will not become available again, making C the correct answer.
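A toy state handler makes the PDL/APD contrast explicit. This is a sketch of the behavior described above, not VMkernel code:

```python
def handle_storage_state(state: str) -> str:
    """Simplified contrast of PDL vs. APD handling (illustrative only).
    PDL: the array has declared the device permanently gone -> stop I/O.
    APD: all paths are down but may return -> keep retrying for a while."""
    if state == "PDL":
        return "stop all I/O; unmount datastore; device will not return"
    if state == "APD":
        return "queue and retry I/O; device may become available again"
    return "healthy; continue normal I/O"

print(handle_storage_state("PDL"))
```

The practical takeaway matches the exam answer: once a device reports PDL, no amount of retrying or rebooting restores access until the underlying hardware or array issue is fixed.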

Question 9:

When vSphere Distributed Resource Scheduler (DRS) chooses a host for placing a virtual machine, which resource factor does it primarily consider?

A. Storage bandwidth available on the host
B. Network usage generated by the virtual machine
C. Total network bandwidth capacity of the host
D. Disk space utilization of the virtual machine

Answer: B

Explanation:

vSphere Distributed Resource Scheduler (DRS) is a critical feature in VMware environments that automatically balances computing workloads across multiple ESXi hosts in a cluster. Its main goal is to optimize resource usage and maintain performance by intelligently distributing virtual machines (VMs) based on their resource demands.

When DRS makes decisions about where to place a VM—either at initial power-on or during live migration (vMotion)—it evaluates several resource metrics. Among these, network usage by the virtual machine plays a significant role. This means DRS looks at the actual network traffic generated or consumed by individual VMs to prevent network bottlenecks on any single host.

The logic behind this is straightforward: if a VM is consuming heavy network resources, DRS will try to avoid placing or migrating that VM to a host already handling high network traffic. This helps balance network load within the cluster and ensures that no single host becomes a network performance bottleneck.

Let's consider why the other options are less applicable:

  • A. Storage bandwidth on the host: Although storage performance is important, DRS does not use storage bandwidth as a primary factor when selecting a host. Storage concerns are handled separately by features like Storage DRS and Storage I/O Control, which focus on datastores rather than host placement.

  • C. Network bandwidth on the host: While overall host network capacity matters, DRS does not prioritize total host bandwidth directly. Instead, it focuses on per-VM network activity to make more precise placement decisions.

  • D. Disk space utilization by the virtual machine: Disk usage impacts storage provisioning but is not a key metric for DRS when distributing workloads across hosts.

In conclusion, vSphere DRS’s decision-making process involves analyzing per-VM CPU, memory, and network consumption to optimize load balancing across the cluster. Of the options listed, network usage by the virtual machine is the most relevant factor DRS considers for host selection, making B the correct answer. Understanding these distinctions is vital for effective VMware resource management and exam success.
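A simplified placement pass in the spirit described above can be written as a scoring function. This is illustrative only; real DRS evaluates far richer CPU, memory, and network metrics:

```python
def pick_host(vm_net_mbps: int, hosts: list) -> str:
    """Toy network-aware placement: choose the host whose projected network
    load after adding the VM is lowest (illustrative, not the DRS algorithm)."""
    def projected_load(host):
        return (host["net_used_mbps"] + vm_net_mbps) / host["net_capacity_mbps"]
    return min(hosts, key=projected_load)["name"]

hosts = [
    {"name": "esx01", "net_used_mbps": 8000, "net_capacity_mbps": 10000},
    {"name": "esx02", "net_used_mbps": 2000, "net_capacity_mbps": 10000},
]
print(pick_host(1500, hosts))  # -> esx02
```

The heavily loaded esx01 is avoided even though both hosts could physically carry the VM, mirroring how per-VM network demand steers placement away from saturated hosts.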

Question 10:

In VMware vSphere 7, when creating a Distributed Virtual Switch (DVS), which feature allows network administrators to centrally manage network configurations across multiple ESXi hosts, and what are the key benefits of using this feature over standard virtual switches?

A. Network I/O Control
B. Distributed Virtual Switch (DVS)
C. vSphere Standard Switch (VSS)
D. Network Health Check

Answer: B

Explanation:

This question centers on a fundamental concept tested in the VMware 1V0-21.20 exam: understanding vSphere networking components and their capabilities. Distributed Virtual Switches (DVS) are a crucial feature in VMware vSphere environments, especially for enterprise data centers requiring centralized network management.

Distributed Virtual Switch (DVS) (option B) is a virtual switch that spans multiple ESXi hosts in a vSphere cluster. Unlike a vSphere Standard Switch (VSS), which is configured independently on each host, a DVS allows network administrators to manage network configurations from a single interface in vCenter Server. This centralized control greatly simplifies network management by applying consistent network policies, port groups, and settings across all participating hosts.

Key benefits of using DVS include:

  1. Centralized Management: With a DVS, changes such as VLAN configuration, traffic shaping, and security policies only need to be configured once in the vCenter Server. These settings automatically propagate to all ESXi hosts connected to the DVS, reducing configuration errors and administrative overhead.

  2. Consistent Network Policies: Ensures uniformity across the environment, which is vital for maintaining compliance and troubleshooting.

  3. Advanced Features Support: DVS supports features like Network I/O Control (NIOC), Private VLANs, and Health Check, which are not available on Standard Switches. This allows for better network performance and security.

  4. Seamless VM Mobility: Since the virtual networking configuration remains consistent across hosts, virtual machines (VMs) can migrate using vMotion without network disruption. This is essential for maintaining uptime and operational flexibility.

Incorrect options:

  • A (Network I/O Control) is a feature used to prioritize network traffic but is not itself a switch type.

  • C (vSphere Standard Switch) is the legacy method of managing virtual networking, host-specific rather than centralized.

  • D (Network Health Check) is a monitoring tool for network connectivity but does not provide centralized network management.

Understanding DVS and its advantages over VSS is critical for the 1V0-21.20 exam, as it demonstrates proficiency in managing scalable and efficient virtual networks within VMware environments.
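The centralized-management benefit can be sketched with a toy model: a policy set once on the distributed switch is visible to every attached host, whereas each standard switch would need its own copy. These are illustrative classes, not the vSphere object model:

```python
class DistributedSwitch:
    """Toy model of a DVS: one policy store shared by all member hosts
    (illustrative only; not the real vSphere API)."""
    def __init__(self):
        self.policy = {}
        self.hosts = []

    def add_host(self, name: str):
        self.hosts.append(name)

    def set_policy(self, key: str, value):
        # Configured once in "vCenter"; every member host sees the change.
        self.policy[key] = value

    def effective_policy(self, host: str) -> dict:
        assert host in self.hosts, "host is not attached to this switch"
        return self.policy

dvs = DistributedSwitch()
for h in ("esx01", "esx02", "esx03"):
    dvs.add_host(h)
dvs.set_policy("vlan", 120)           # one change...
print(dvs.effective_policy("esx03"))  # ...applies everywhere: {'vlan': 120}
```

With standard switches, the equivalent change would be three separate per-host edits, each a chance for configuration drift.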

