VMware 5V0-23.20 Exam Dumps & Practice Test Questions

Question 1:

An administrator is managing a vSphere with Tanzu setup and wants to ensure that all persistent volumes created within a particular namespace are provisioned only on a specific group of datastores. Tags have already been applied to the target datastores using the vSphere Client. 

What should the administrator do next to achieve this objective?

A. Create a storage policy containing the tagged datastores and apply it to the vSphere Namespace
B. Create a storage class containing the tagged datastores and apply it to the Supervisor Cluster
C. Create a persistent volume claim using the tagged datastores and apply it to the vSphere Namespace
D. Create a storage policy containing the tagged datastores and apply it to the Supervisor Cluster

Correct Answer: A

Explanation:

In vSphere with Tanzu, managing storage for Kubernetes workloads requires coordination between vSphere-defined storage policies and Kubernetes-native resources. To control which datastores back the persistent volumes created in a particular namespace, the administrator should create a tag-based storage policy that matches the tagged datastores, and then assign that policy to the vSphere Namespace.

When a storage policy is assigned to a vSphere Namespace, vSphere with Tanzu automatically surfaces it inside that namespace as a Kubernetes storage class. Developers reference that storage class in their Persistent Volume Claims (PVCs), and any volume provisioned through it is placed only on datastores that comply with the policy. This is the supported mechanism for scoping persistent volume placement to a namespace.

Option B is incorrect because storage classes in vSphere with Tanzu are generated automatically from assigned storage policies; they are not created manually or applied to the Supervisor Cluster.
Option C misunderstands the function of Persistent Volume Claims; PVCs request storage but do not control datastore selection directly. They rely on the backing storage policy through the associated storage class.
Option D is incorrect because, although storage policies are also selected during Supervisor Cluster enablement (for control plane VMs and image caching), scoping persistent volume placement for a particular namespace is done by assigning the policy to that vSphere Namespace, not to the cluster as a whole.

In contrast, Option A outlines the correct sequence: define a storage policy that selects the tagged datastores, assign it to the vSphere Namespace, and let developers consume the resulting storage class in their PVCs. This model provides centralized storage governance and consistent provisioning behavior for all workloads in the namespace.
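As a rough sketch of how this surfaces to developers, assuming the assigned policy appears in the namespace as a storage class named `tagged-gold` (a hypothetical name), a PVC would pin provisioning to the tagged datastores like this:

```shell
# Hypothetical sketch: once the storage policy is assigned to the namespace,
# it appears there as a storage class (name "tagged-gold" is assumed).
kubectl get storageclass          # verify the policy surfaced as a class

# A PVC that constrains provisioning to the tagged datastores via that class:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: tagged-gold   # assumed class derived from the policy
  resources:
    requests:
      storage: 10Gi
EOF
```

Any pod in the namespace that mounts this claim will receive a volume placed only on datastores matching the policy.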

Question 2:

What are three core responsibilities handled by the Spherelet component within a vSphere with Tanzu environment? (Choose three)

A. Decides where to place vSphere Pods
B. Configures and manages node setup
C. Launches vSphere Pods
D. Acts as a distributed key-value store
E. Communicates with the Kubernetes API server
F. Creates new Tanzu Kubernetes clusters

Correct Answers: B, C, E

Explanation:

The Spherelet is a vital component in the vSphere with Tanzu architecture, functioning similarly to the kubelet in a standard Kubernetes environment. It runs as an agent on the ESXi hosts that are part of the Supervisor Cluster and is designed specifically to bridge Kubernetes workload requirements with the vSphere infrastructure.

One of the key roles of the Spherelet (B) is managing the node configuration on the ESXi hosts. This includes preparing the environment to support Kubernetes workloads, ensuring networking, storage, and resource configurations are correctly established for the containers and pods that will be hosted.

Another critical function (C) is to start and manage vSphere Pods, which are Kubernetes-native pods implemented through the vSphere Pod Service. These pods run directly on the ESXi hypervisor for enhanced performance and isolation. The Spherelet handles the lifecycle of these pods, ensuring that they are deployed, monitored, and restarted as needed.

The Spherelet also communicates with the Kubernetes API server (E), which allows it to report the status of the workloads running on its host, retrieve specifications for desired states, and receive instructions for creating or modifying pods. This bidirectional communication ensures seamless orchestration between the Kubernetes control plane and the underlying vSphere infrastructure.

Option A is incorrect because the Kubernetes scheduler is responsible for pod placement decisions, not the Spherelet.
Option D is also incorrect as the key-value store function, essential to storing cluster state, is handled by etcd, not the Spherelet.
Option F refers to the Tanzu Kubernetes Grid (TKG) infrastructure, where cluster provisioning is managed by higher-level orchestration tools such as Tanzu CLI or Kubernetes Cluster API, not the Spherelet.

Therefore, the Spherelet's responsibilities are specifically limited to node configuration, pod lifecycle management, and Kubernetes API integration, making B, C, and E the correct answers.

Question 3:

Why might a development team choose to deploy their application using a vSphere Pod rather than creating a new Tanzu Kubernetes cluster?

A. They require the ability to run containers with privileged access
B. The application handles confidential customer data and demands high levels of security and resource separation
C. They want full administrative access to both the control plane and worker nodes of Kubernetes
D. Their application depends on a newer Kubernetes version than the one installed on the supervisor cluster

Correct Answer: B

Explanation:

When deciding how to deploy containerized applications on a VMware infrastructure, developers must weigh the benefits and constraints of different deployment models. Two primary options within the VMware Tanzu ecosystem are deploying as a vSphere Pod or within a Tanzu Kubernetes cluster. Each model serves distinct use cases, particularly when it comes to performance, control, and isolation.

The correct choice—B—focuses on security and resource isolation, which are key reasons developers may opt for deploying their application in a vSphere Pod. vSphere Pods offer tighter integration with the vSphere hypervisor and run natively on the supervisor cluster, giving them direct access to vSphere's security and resource controls. This results in higher degrees of workload isolation, both in terms of compute resources and security domains. Applications that process sensitive customer data or require dedicated, protected environments benefit most from this setup, as the infrastructure provides built-in isolation and protection mechanisms enforced by the hypervisor.

Option A, which mentions the need to run privileged containers, actually points away from vSphere Pods: privileged mode is among the pod features not supported by the vSphere Pod Service, so a team that requires it would deploy to a Tanzu Kubernetes cluster instead. It is therefore not a reason to choose a vSphere Pod.

Option C involves wanting root-level access to the control plane and worker nodes. While this might be necessary in some scenarios, this level of access is not offered in vSphere Pod deployments. Developers needing deep control of the Kubernetes environment typically opt for Tanzu Kubernetes clusters, where such access is permitted.

Option D references using a newer Kubernetes version than what’s running on the supervisor cluster. Since vSphere Pods are tightly coupled to the supervisor cluster’s Kubernetes version, they cannot operate on newer versions. In contrast, Tanzu Kubernetes clusters are more flexible, allowing users to deploy different Kubernetes versions.

To summarize, the unique advantage of vSphere Pods lies in their hypervisor-level integration, which provides strong isolation and enhanced security—exactly what’s required when handling confidential or sensitive workloads. This makes B the most appropriate answer.

Question 4:

A company is seeking centralized oversight and unified policy enforcement across various Tanzu Kubernetes clusters, spanning multiple namespaces and cloud platforms. Which VMware product should they implement?

A. vSphere with Tanzu Supervisor Cluster
B. vCenter Server
C. Tanzu Mission Control
D. Tanzu Kubernetes Grid Service

Correct Answer: C

Explanation:

Organizations that operate multiple Kubernetes clusters—especially across hybrid or multi-cloud environments—often face challenges in maintaining consistency in management, security, and compliance. In such cases, a platform that provides a centralized management layer becomes essential for visibility, policy enforcement, and lifecycle control across all clusters and namespaces.

The correct answer, C, is Tanzu Mission Control, a VMware SaaS-based solution built specifically to manage multiple Kubernetes clusters, whether they are running on-premises, in VMware environments, or in public clouds like AWS, Azure, or Google Cloud. Tanzu Mission Control allows administrators to enforce global security policies, control access, monitor compliance, and perform cluster lifecycle operations, such as provisioning and upgrades—all from a single control plane.

Option A, vSphere with Tanzu Supervisor Cluster, is a powerful platform for enabling Kubernetes on VMware vSphere infrastructure. However, its scope is limited to managing Kubernetes workloads within a vSphere environment. It doesn't offer multi-cloud or multi-cluster centralized management capabilities.

Option B, vCenter Server, is traditionally used to manage vSphere environments, including virtual machines and ESXi hosts. While it plays a critical role in managing the underlying infrastructure that supports Tanzu, it does not provide Kubernetes-native management features or cross-cluster visibility.

Option D, Tanzu Kubernetes Grid Service, facilitates the deployment and lifecycle management of Tanzu Kubernetes clusters within a specific environment. However, it is more focused on provisioning and maintaining clusters, not on global policy enforcement or visibility across clouds.

In essence, Tanzu Mission Control offers the capabilities the company is looking for: cross-cloud and cross-cluster governance, policy automation, and compliance management for Kubernetes environments at scale. Whether the clusters are on AWS, Azure, or in a private data center, Tanzu Mission Control provides a unified platform to oversee them all. Thus, the correct answer is C.

Question 5:

When a developer uses the kubectl vsphere login command to access a Tanzu Kubernetes Cluster, which two elements must they provide in addition to the cluster name and the Supervisor Cluster Control Plane IP?

A. The path to an existing kubeconfig file and the SSO username
B. The path to an existing kubeconfig file and the token ID for SSO credentials
C. The Supervisor Namespace name and the token ID for SSO credentials
D. The Supervisor Namespace name and the SSO username

Correct Answer: D

Explanation:

Connecting to a Tanzu Kubernetes Cluster (TKC) using the kubectl vsphere login command requires more than just the cluster name and control plane IP. Two critical pieces of information that must be included are the Supervisor Namespace name and the SSO username. These components ensure proper authentication and identification within the vSphere with Tanzu environment.

The Supervisor Namespace is a dedicated logical space within the Supervisor Cluster where Kubernetes resources are provisioned and managed. This namespace serves as a boundary for workload placement and permissions. Without specifying the correct namespace, the login process will not be able to direct the user to the intended cluster environment.

The SSO (Single Sign-On) username is also essential. It’s used to authenticate the user against the vSphere Identity Provider (such as vCenter SSO or an external directory). This ensures secure access and proper role-based permission mapping within the Tanzu environment.

Now let’s consider why the other options are incorrect:

  • Option A mentions the path to an existing kubeconfig file. This is not required when using the kubectl vsphere login command, as this command is designed to generate or update the kubeconfig file dynamically during the login process.

  • Option B refers to a token ID for SSO credentials. However, in typical vSphere SSO environments, a username (and password, or sometimes an authentication provider) is used rather than a manually entered token ID. Token-based login is more relevant in API automation scenarios, not in interactive login sessions.

  • Option C partially gets it right by including the Supervisor Namespace name but incorrectly adds the requirement for a token ID, which, again, is not typically used for kubectl vsphere login.

Thus, Option D correctly identifies both required elements for establishing a secure connection to the Tanzu Kubernetes Cluster: the Supervisor Namespace and the SSO username.
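A minimal sketch of the login command follows; the server IP, namespace, cluster name, and SSO username are assumed placeholder values:

```shell
# Hypothetical sketch of logging in to a Tanzu Kubernetes Cluster with the
# kubectl vsphere plugin (all values below are assumed placeholders):
kubectl vsphere login \
  --server=192.168.10.2 \
  --tanzu-kubernetes-cluster-namespace=dev-namespace \
  --tanzu-kubernetes-cluster-name=tkc-01 \
  --vsphere-username=administrator@vsphere.local

# The plugin prompts for the SSO password and generates/updates the kubeconfig,
# after which the cluster context can be selected:
kubectl config use-context tkc-01
```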

Question 6:

To scale a Tanzu Kubernetes Cluster horizontally, which value should be adjusted?

A. Namespaces
B. etcd instance
C. Worker node count
D. ReplicaSets

Correct Answer: C

Explanation:

Horizontal scaling in a Tanzu Kubernetes Cluster (TKC) refers to the process of increasing or decreasing the number of worker nodes within the cluster. These worker nodes are responsible for running the actual container workloads (pods). Adjusting the worker node count directly impacts the cluster's compute capacity, enabling it to handle more applications, users, and resource-intensive processes.

By increasing the worker node count, an administrator can provide additional CPU, memory, and storage to accommodate growing workloads. This kind of scalability is crucial in dynamic cloud-native environments where demand can fluctuate rapidly. Likewise, reducing the number of worker nodes helps optimize resource utilization and cost when demand decreases.

Let’s examine the other options and why they are incorrect:

  • Option A – Namespaces: Namespaces are organizational units within Kubernetes that help segment and manage resources. While they assist in managing resource policies and access control, modifying the number of namespaces has no effect on the actual resource availability or workload capacity of the cluster. Therefore, namespaces do not contribute to horizontal scaling.

  • Option B – etcd instance: etcd is the distributed key-value store that holds cluster configuration and state. Scaling etcd might be important for control plane performance and availability, especially in very large clusters. However, this is unrelated to increasing the capacity to run more workloads. etcd scaling affects cluster metadata operations, not compute capacity.

  • Option D – ReplicaSets: Adjusting a ReplicaSet does scale the number of application pods, which is a form of horizontal scaling at the application level. However, this assumes that the cluster has enough resources to accommodate the additional pods. If the cluster is already resource-constrained, increasing the ReplicaSet size won't help unless more worker nodes are added. Therefore, ReplicaSet scaling is different from cluster-level (infrastructure) scaling.

In conclusion, worker node count is the correct metric to adjust when horizontally scaling a Tanzu Kubernetes Cluster. It increases the cluster's ability to deploy and manage more workloads efficiently, making Option C the most accurate answer.
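As an illustration, assuming a TanzuKubernetesCluster named `tkc-01` in namespace `dev-namespace` using the v1alpha1 API, the worker count can be raised in place; newer TKGS releases organize workers into node pools, so check the API version in your environment first:

```shell
# Hypothetical sketch: scale a Tanzu Kubernetes Cluster horizontally by raising
# the worker node count in its spec (cluster and namespace names assumed).
kubectl patch tanzukubernetescluster tkc-01 -n dev-namespace \
  --type=merge \
  -p '{"spec": {"topology": {"workers": {"count": 5}}}}'

# The Supervisor Cluster reconciles the spec and provisions the extra node VMs.
kubectl get tanzukubernetescluster tkc-01 -n dev-namespace
```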

Question 7:

Which two container network interfaces (CNIs) are supported with Tanzu Kubernetes clusters created by the Tanzu Kubernetes Grid Service? (Choose two.)

A. NSX-T
B. WeaveNet
C. Flannel
D. Antrea
E. Calico

Answer: D, E

Explanation:

Tanzu Kubernetes Grid Service (TKGS) supports a specific pair of CNIs for the guest clusters it provisions, and these CNIs are critical for ensuring secure, scalable, and manageable Kubernetes networking.

Antrea is the default CNI for Tanzu Kubernetes clusters. It is a VMware-sponsored open-source project built on Open vSwitch (OVS) and provides pod networking together with Kubernetes network policy enforcement, aligning closely with the rest of the VMware stack.

Calico, the other supported CNI, is popular in the Kubernetes community for its robust network policy management and routing capabilities. It provides a scalable and efficient networking solution, and VMware includes it in the Tanzu portfolio as an alternative to Antrea because of its flexibility and widespread adoption.

The remaining options are not CNI choices for clusters created by the Tanzu Kubernetes Grid Service:

  • NSX-T is VMware's enterprise network virtualization platform. It can supply the networking, load balancing, and security services for the Supervisor Cluster and vSphere Pods, but it is not a CNI that can be selected for Tanzu Kubernetes (guest) clusters.

  • WeaveNet offers a simple networking solution, but it is not integrated with or supported by TKGS.

  • Flannel is another lightweight option, but it likewise is not a supported CNI for TKGS-provisioned clusters.

Therefore, Antrea and Calico are the two CNIs supported by the Tanzu Kubernetes Grid Service.
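For illustration, the CNI for a guest cluster is chosen in the TanzuKubernetesCluster spec; the names, versions, and VM classes below are assumed placeholders:

```shell
# Hypothetical sketch: selecting the CNI when defining a Tanzu Kubernetes
# cluster (v1alpha1 API; all names and sizes are assumed placeholders).
kubectl apply -f - <<'EOF'
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-01
  namespace: dev-namespace
spec:
  distribution:
    version: v1.21
  topology:
    controlPlane:
      count: 3
      class: best-effort-small
      storageClass: gold-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: gold-policy
  settings:
    network:
      cni:
        name: calico   # selects the CNI; the cluster default applies if omitted
EOF
```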

Question 8:

Where are the virtual machine images stored that are used to deploy Tanzu Kubernetes clusters?

A. Content Library
B. Supervisor Cluster
C. Harbor Image Registry
D. Namespace

Answer: A

Explanation:

The virtual machine images (OVA templates) used to deploy Tanzu Kubernetes cluster nodes are stored in a vSphere Content Library. When enabling the Tanzu Kubernetes Grid Service, the administrator associates a Content Library—typically one subscribed to VMware's published repository—with the Supervisor Cluster. The library syncs the node templates for the supported Kubernetes releases, and the Supervisor Cluster uses them to instantiate the control plane and worker node VMs of each Tanzu Kubernetes cluster.

Let’s clarify the incorrect options:

  • Supervisor Cluster (Option B): The Supervisor Cluster orchestrates the deployment and lifecycle of Tanzu Kubernetes clusters, but it does not store the VM images itself—it pulls the node templates from the associated Content Library.

  • Harbor Image Registry (Option C): Harbor is used for container images, not virtual machine images. It is suitable for storing Kubernetes application images but not the base VM templates used by Tanzu to deploy cluster nodes.

  • Namespace (Option D): In Kubernetes, namespaces are logical partitions for workloads and resource quotas within a cluster. They are not storage locations for VM images.

Thus, the correct answer is the Content Library, which holds the node VM templates that the Supervisor Cluster consumes when deploying and upgrading Tanzu Kubernetes clusters.
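Assuming a session logged in to the Supervisor Cluster, the node VM images available for cluster deployment can be listed; the exact resource names available depend on the vSphere release:

```shell
# Hypothetical sketch: inspect the node VM images available to the Supervisor
# Cluster (resource availability varies by vSphere version).
kubectl get virtualmachineimages        # synced node VM templates
kubectl get tanzukubernetesreleases     # Kubernetes releases built on them
```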

Question 9:

What function do persistent volumes serve for applications deployed in containers?

A. Automatically archive data to disk
B. Enable support for in-memory databases
C. Facilitate temporary workload storage
D. Preserve application data and state across container lifecycles

Correct Answer: D

Explanation:

In Kubernetes and other container orchestration platforms, persistent volumes (PVs) provide a mechanism for maintaining application data beyond the lifecycle of individual containers or pods. This is especially important because containers are inherently ephemeral—when a container is terminated or crashes, any data stored within it is typically lost. Persistent volumes solve this problem by decoupling storage from the lifecycle of a pod, ensuring that data remains intact even if the pod is deleted, rescheduled, or replaced.

This persistent storage capability is crucial for applications that manage stateful data, such as relational or NoSQL databases, message brokers, or file servers. Without persistent volumes, these applications would lose data during routine Kubernetes operations like scaling or rolling updates.

Let’s now consider why the other options are incorrect:

  • A. Automatically archive data to disk:
    This option misrepresents what persistent volumes do. Archival storage refers to long-term data storage for compliance or backup purposes, often with reduced performance characteristics. Persistent volumes, in contrast, are intended for active use by applications that need reliable, durable access to their data. Archiving typically involves additional backup tools and policies outside of what Kubernetes provides through PVs.

  • B. Enable support for in-memory databases:
    In-memory databases such as Redis or Memcached operate entirely in RAM to maximize speed. They typically do not rely on disk-based storage during normal operations. Persistent volumes are not required for these databases unless they are configured to use optional disk persistence features, which is not the primary use case of PVs.

  • C. Facilitate temporary workload storage:
    This refers more closely to ephemeral storage, which is tied directly to the lifetime of a pod. Once the pod is deleted, ephemeral storage is removed along with it. Persistent volumes offer the opposite capability—they retain data across pod deletions or failures, making them unsuitable for purely temporary storage needs.

In summary, the defining feature of persistent volumes is their ability to retain application data and state independently of the container or pod lifecycle. This makes D the correct answer for workloads requiring long-term, stable data storage across container restarts or failures.
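The decoupling described above can be sketched with a claim and a pod that mounts it; all names, the image tag, and the inline password are assumed placeholders for illustration:

```shell
# Hypothetical sketch: a pod mounting a PersistentVolumeClaim so its data
# survives pod deletion and rescheduling (names and sizes are assumed).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: postgres
      image: postgres:14
      env:
        - name: POSTGRES_PASSWORD
          value: example            # demo only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data          # the claim outlives the pod
EOF
```

If the `db` pod is deleted and recreated, it reattaches `db-data` and finds its database files intact.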

Question 10:

What is the appropriate method for removing a Persistent Volume Claim (PVC) in Kubernetes?

A. Use the kubectl delete persistentvolumeclaim command
B. Use the kubectl remove pvc command
C. Delete the PVC using vSphere's SPBM policy engine
D. Unmount the volume manually and delete it via vSphere datastore

Correct Answer: A

Explanation:

Persistent Volume Claims (PVCs) are Kubernetes objects that allow users to request storage from available Persistent Volumes (PVs). To manage Kubernetes resources, including PVCs, the kubectl command-line tool is the standard and authoritative interface.

Running kubectl delete persistentvolumeclaim <name> (or the shorthand kubectl delete pvc <name>) properly de-registers the PVC from the Kubernetes cluster, after which the bound Persistent Volume is handled according to its reclaim policy (Retain or Delete; the older Recycle policy is deprecated).

Here’s why the other options are incorrect:

  • B. kubectl remove pvc:
    This is not a valid Kubernetes command. Kubernetes CLI uses kubectl delete for deleting resources, and no such subcommand as kubectl remove exists. Attempting to use this will result in a syntax error.

  • C. Delete the PVC using vSphere’s SPBM policy engine:
    The Storage Policy-Based Management (SPBM) system is part of VMware's storage policy framework. While SPBM helps define storage behavior for vSphere-based volumes, it does not interact directly with Kubernetes PVCs. PVCs are Kubernetes-native resources and should be handled through Kubernetes-native tools.

  • D. Unmount and delete from vSphere datastore manually:
    While it's technically possible to delete a volume manually from the datastore, this method bypasses Kubernetes entirely. Such manual intervention can lead to orphaned resources, data loss, or inconsistencies in the Kubernetes control plane. Proper cleanup always begins within Kubernetes, and only then should low-level storage management occur, if necessary.

In conclusion, the best and most reliable method to remove a Persistent Volume Claim in Kubernetes is to use the kubectl delete persistentvolumeclaim command. This ensures clean removal and allows Kubernetes to manage any additional clean-up actions based on the associated volume's lifecycle policies. Therefore, the correct answer is A.
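A short sketch of the cleanup flow, using an assumed claim name `app-data`:

```shell
# Hypothetical sketch: delete a PVC and observe the fate of the bound PV
# (resource names are assumed placeholders).
kubectl get pvc app-data                         # confirm the claim exists
kubectl delete persistentvolumeclaim app-data    # or: kubectl delete pvc app-data
kubectl get pv                                   # bound PV is released or removed
                                                 # per its reclaim policy
```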

