Mirantis DCA Exam Dumps & Practice Test Questions

Question 1:

When adding files while building a container image, two Dockerfile instructions are commonly used: COPY and ADD.

What are two distinct capabilities that differentiate these instructions? (Choose two.)

A. One supports pattern matching (e.g., wildcards), while the other does not.
B. One can automatically extract compressed archive files (e.g., .tar.gz), whereas the other simply copies them unchanged.
C. One has the ability to fetch files directly from a URL or internet location, while the other relies solely on local files.
D. One allows file matching using expressions like wildcards, while the other cannot.
E. One supports compressed files, and the other completely blocks their use.

Correct Answers: B and C

Explanation:

In Docker, container images are built using instructions specified in a Dockerfile, where files can be included in the image via either the COPY or ADD instruction. Though these commands may appear similar at first glance—both move files into the image—they differ in functionality and behavior in a few key ways.

Let’s first examine COPY. This instruction is simple and straightforward: it takes files or directories from your local build context (i.e., your machine) and copies them into the image at the destination path you specify. That’s it. COPY does not decompress archives, parse file contents, or download from the internet. It is reliable and predictable, which makes it the recommended choice when you just need to move files without any transformation.

ADD, on the other hand, is more powerful—and thus potentially more complex. While it can do everything COPY does (i.e., transfer files from your local machine into the image), it also provides two additional features:

  1. Automatic extraction of compressed archive files: If the source is a .tar, .tar.gz, or similar archive file, ADD will automatically unpack it inside the image. COPY would simply place the archive file as-is, without extracting its contents.

  2. Ability to retrieve files from a remote URL: ADD can also download files directly from the web using an HTTP or HTTPS URL. COPY, by contrast, cannot access remote locations—it only works with files already present in your build directory.
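
The contrast can be sketched in a short Dockerfile (the file names and URL below are illustrative). Note one subtlety: when ADD fetches from a URL, the downloaded file is not auto-extracted:

```dockerfile
FROM alpine:3.19
# COPY places the archive into the image unchanged
COPY app.tar.gz /opt/archive/
# ADD auto-extracts a local tar archive into the destination directory
ADD app.tar.gz /opt/app/
# ADD can also download from a URL (no extraction happens in this case)
ADD https://example.com/config.json /etc/app/config.json
```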

These distinctions form the basis of why Options B and C are correct. B is correct because only ADD can extract compressed files, and C is correct because only ADD can pull content from external sources.

Options A and D, while they sound plausible, do not describe a distinguishing capability: both COPY and ADD accept the same Go-style wildcard patterns in their source paths (for example, COPY *.txt /app/), so pattern matching cannot set the two instructions apart. Because the question asks for capabilities that differentiate them, A and D are incorrect.
Option E is also incorrect. COPY can include compressed files, but it doesn’t do anything special with them—it won’t extract or unpack them.

Therefore, ADD stands out for its extended features, while COPY remains a preferred, safer choice for most use cases that don’t require downloading or decompressing files.

Question 2:

While managing applications using Docker Universal Control Plane (UCP), you want to identify user or system actions that occurred just before a failure. 

What must be configured in advance to capture and view this activity after the incident occurs?

A. UCP audit logging must be enabled with settings to capture metadata or full request details.
B. All Docker nodes must have their logging levels set to metadata or request mode.
C. The general UCP logging level must be set to informational or debugging.
D. The Kubernetes API server logs must be adjusted to a warning or higher level.

Correct Answer: A

Explanation:

In large-scale container environments, ensuring traceability and visibility is essential for both troubleshooting and security. Docker Universal Control Plane (UCP) provides centralized management for container clusters, including user access, policy enforcement, and orchestration. When an issue occurs—such as a container crash or misconfiguration—it’s critical to understand who did what and when within the UCP environment.

To gather this kind of insight, administrators must rely on audit logs. These logs are not enabled by default, which is why proactive configuration is crucial.

Option A is correct because UCP audit logging records user actions and API calls across the control plane. When audit logging is enabled, you can select between two levels:

  • Metadata level: Captures high-level details such as who made the request, the time, and what type of request was made.

  • Request level: Provides more granular insight, including the exact content of the API request, which can be extremely valuable during incident analysis.

Without this logging in place before the failure, there’s no retroactive way to determine what actions were taken, by whom, or which API endpoints were involved. Therefore, setting up audit logging in advance is vital for accountability and root cause analysis.
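
In Docker Enterprise, the audit level was typically set through the UCP configuration file, a TOML document. The sketch below shows the relevant section; the section and key names may vary by UCP version, so treat them as an assumption to verify against your release's documentation:

```toml
[audit_log_configuration]
  # valid levels: "" (disabled), "metadata", "request"
  level = "metadata"
```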

Option B is misleading because adjusting Docker node log levels has no impact on UCP API visibility. Node logs record container-level runtime data but won’t reflect actions taken through the UCP interface.

Option C, which refers to general UCP log levels like “info” or “debug,” may be helpful for monitoring system performance or detecting errors, but they don’t include detailed audit trails of user activity.

Option D references Kubernetes API server logging. While Kubernetes logs are useful for Kubernetes-specific diagnostics, they are distinct from UCP audit logs and don’t record broader UCP management actions or API access patterns.

In summary, if your goal is to investigate unexpected behavior or potential misuse in UCP, audit logging must be proactively enabled, and configured to capture either basic metadata or detailed request contents. This allows you to track administrative and user-level changes over time—an essential capability in incident response, compliance auditing, and secure cluster operations.

Question 3:

A containerized application attempts to modify the system clock from within its environment, but the operation fails. 

Could this failure be attributed to SELinux (Security-Enhanced Linux) being enforced on the host system?

A. Yes
B. No

Correct Answer: A

Explanation:

When running applications inside containers, it's important to remember that these environments are intentionally isolated from the host system. This design ensures security, stability, and resource control. One operation that is highly restricted is changing the system clock, because doing so affects critical system-level components such as file timestamps, logs, scheduled jobs, cryptographic functions, and more.

Containers—even when running with root privileges—do not automatically gain full access to the host system’s kernel or hardware interfaces. This behavior is not accidental but deliberate, ensuring containers cannot interfere with one another or compromise the host.

Now, let’s consider SELinux, or Security-Enhanced Linux. SELinux is a powerful, policy-driven security module integrated into many Linux distributions. It operates on the principle of mandatory access control (MAC), which enforces finely tuned permissions beyond traditional UNIX file permissions or discretionary access control.

When SELinux is running in enforcing mode, it applies strict rules that control how processes interact with system resources. Even if a process inside a container has root access within its own namespace, SELinux policies may still deny it access to sensitive host-level features—such as altering the system clock.

Why does this matter for changing time from inside a container?

Changing the clock is considered a privileged action, normally requiring elevated host-level permissions (specifically, the CAP_SYS_TIME Linux capability). By default, containers do not have this capability. Furthermore, SELinux adds an additional layer of restriction. Even if the container has been modified to include certain capabilities or run in privileged mode, SELinux policies can still block time modification attempts unless explicitly configured to allow them.

Therefore, SELinux can absolutely be the cause of a failure to change the system time from within a container. It will block unauthorized access to such critical resources as part of its security posture.

Can this behavior be changed? Yes, but it’s not recommended for most production environments:

  • The container could be granted privileged mode, allowing broader access to the host.

  • SELinux can be configured to run in permissive mode or be disabled (although this reduces security).

  • Specific Linux capabilities like CAP_SYS_TIME can be granted to the container at runtime.
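
As a sketch, the capability route looks like this (the image and target date are illustrative, and SELinux in enforcing mode may still deny the call even with the capability granted):

```shell
# Grant only the time-setting capability rather than full --privileged mode
docker run --rm --cap-add SYS_TIME alpine date -s "2030-01-01"
```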

However, giving containers these privileges breaks isolation guarantees and increases the risk of system compromise. Hence, such configurations are only advisable in tightly controlled and well-understood environments.

In conclusion, yes—SELinux can prevent a container from changing the system clock, even if the user inside the container appears to have elevated privileges. The correct answer is A (Yes).

Question 4:

In a Kubernetes cluster, a container within a pod consistently fails its liveness probe and is marked unhealthy. 

Will Kubernetes automatically restart this container in response to the failed health checks?

A. Yes, the container will be restarted automatically by the orchestrator.
B. No, the container will not be restarted. The issue must be fixed manually.

Correct Answer: A

Explanation:

In Kubernetes, maintaining the health and availability of containerized workloads is a primary responsibility of the orchestrator. One of the tools Kubernetes uses for this purpose is the liveness probe, which helps determine if a container is still running as expected.

A liveness probe is a health check configured by the application developer or administrator. It can perform actions such as:

  • Sending HTTP requests to specific endpoints inside the container.

  • Attempting TCP connections to a port.

  • Running command-line scripts or shell commands within the container.

If the container fails this liveness check repeatedly, Kubernetes identifies it as "unhealthy."

What happens next?

Kubernetes takes automated corrective action. Specifically, the orchestrator kills and restarts the failing container within the same pod. This is not a manual operation—it is built into the Kubernetes control loop, which continually works to ensure the declared state of the system matches the actual state.

Here’s how it works in detail:

  • The frequency of the liveness check, and how many failures are tolerated before a restart, can be configured in the pod specification (initialDelaySeconds, periodSeconds, failureThreshold).

  • When the number of allowed failures is exceeded, the orchestrator restarts the container.

  • Importantly, only the affected container is restarted—not the entire pod. This maintains stability and minimizes disruption.
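
The tunables above can be sketched in a pod spec fragment like this (the exec command is an illustrative health check, not a prescribed one):

```yaml
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]   # illustrative check command
  initialDelaySeconds: 15   # wait before the first probe runs
  periodSeconds: 10         # interval between probes
  failureThreshold: 3       # consecutive failures before a restart
```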

This process is critical to self-healing, one of Kubernetes’ core features. It allows the platform to react swiftly to application crashes, freezes, or other issues without human intervention.

However, if the container keeps failing even after repeated restarts, Kubernetes backs off between restart attempts, and the pod enters the familiar CrashLoopBackOff state rather than being restarted indefinitely at full speed. At that point, manual debugging is needed, such as inspecting logs, adjusting health check configurations, or analyzing resource usage.

This automated restart behavior is invaluable in real-world production scenarios. Applications may occasionally run into transient issues like network interruptions, memory leaks, or temporary unresponsiveness. A restart can often resolve these issues without service interruption.

To summarize, when a container in Kubernetes repeatedly fails its liveness probe, the orchestrator will automatically restart it to maintain service availability. This makes Option A the correct answer.

Question 5:

Which of the following commands is used to create a Docker image from a Dockerfile?

A. docker image ls
B. docker create
C. docker build
D. docker run

Correct Answer: C

Explanation:

In Docker, creating an image from a set of instructions written in a Dockerfile is a fundamental task, particularly in environments using containerized CI/CD pipelines or infrastructure as code. The command used for this is docker build.

The docker build command reads the Dockerfile from a specified context directory, processes its instructions (like FROM, COPY, RUN, EXPOSE), and constructs an image based on them. The image is stored locally and can later be tagged, run, or pushed to a registry. 

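A typical invocation looks like this (using the myapp:latest tag the explanation refers to):

```shell
docker build -t myapp:latest .
```
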
This command tells Docker to use the Dockerfile in the current directory (.), build an image from it, and tag it as myapp:latest.

Let’s look at the other options:

  • A (docker image ls): This lists all local Docker images; it does not build them.

  • B (docker create): This creates a stopped container from an image. It’s not used for image creation.

  • D (docker run): This runs a container from an image; it does not create new images directly.

Knowing how to build images is essential for the exam because many DCA objectives focus on image creation, optimization, and container orchestration using these images. For example, in a microservices architecture, each service has its own Dockerfile, and teams need to ensure those Dockerfiles are well-constructed and produce consistent, efficient images. Understanding the build process helps in troubleshooting errors, optimizing layers, and managing dependencies.

Therefore, mastering docker build and how it integrates with the overall container lifecycle (from image to container deployment) is a critical part of passing the Mirantis DCA exam.

Question 6:

Which storage driver is most commonly used in Docker on Linux systems using overlay file systems and supports image layering efficiently?

A. aufs
B. devicemapper
C. overlay2
D. btrfs

Correct Answer: C

Explanation:

Docker uses storage drivers to manage image and container layers on disk. The choice of storage driver can significantly impact performance, stability, and compatibility.

Among the available drivers, overlay2 is the preferred storage driver on most modern Linux distributions. It is the successor to the original overlay driver and provides improved performance, stability, and better compliance with the POSIX file system standard.

overlay2 works by layering file system changes. When you build a Docker image or run a container, each change (like installing software or creating files) forms a new layer. overlay2 efficiently stacks these layers without duplicating content, enabling fast builds and minimal disk usage.
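
You can confirm which storage driver a daemon is using with docker info; on most modern installations this prints overlay2:

```shell
docker info --format '{{.Driver}}'
```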

Here’s why overlay2 is widely recommended:

  • Performance: It provides better read and write performance compared to older drivers like aufs.

  • Stability: It’s actively maintained and tested by Docker and the Linux community.

  • Compatibility: It works with Linux kernel 4.0 and newer (and with RHEL/CentOS 7 kernels from 3.10.0-514 onward) on distributions like Ubuntu, CentOS, and Debian.

Other options:

  • A (aufs): Once popular, aufs has fallen out of favor due to limited kernel support and slower performance.

  • B (devicemapper): Still available, but less performant and complex to manage. It uses block-level storage, which isn't as efficient as file-level solutions like overlay2.

  • D (btrfs): Offers advanced features but is considered experimental in many production Docker setups.

Understanding the underlying storage driver is essential for troubleshooting performance issues, managing storage growth, and optimizing Docker for production environments. In the DCA exam, knowledge of how overlay2 handles copy-on-write and file modifications is often tested through scenario-based questions.

Question 7:

When deploying Docker containers in a multi-host environment using Docker Swarm, which component ensures that the containers are distributed and orchestrated properly?

A. Docker Engine
B. Docker Compose
C. Swarm Manager
D. Kubernetes Controller

Correct Answer: C

Explanation:

When running Docker containers across multiple hosts, Docker Swarm provides built-in clustering and orchestration. The Swarm Manager is the key component responsible for managing this distributed environment.

The Swarm Manager maintains the desired state of the cluster. It schedules tasks (containers) on the appropriate worker nodes based on resource availability, placement constraints, and service definitions. Managers also handle:

  • Leader election (one active manager and optional standby replicas)

  • Task reconciliation (ensuring desired service replicas are running)

  • Service discovery and load balancing within the cluster

For example, a command like:


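A sketch of such a command (the service name web is an illustrative assumption):

```shell
docker service create --name web --replicas 3 nginx
```
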
instructs the Swarm Manager to deploy three NGINX containers, distributing them across available nodes.

Other components explained:

  • A (Docker Engine): This is the core runtime for containers but doesn’t handle orchestration across multiple nodes on its own.

  • B (Docker Compose): Useful for defining multi-container apps, but it's more suited to single-host deployments and development environments.

  • D (Kubernetes Controller): This is part of Kubernetes, not Docker Swarm, and serves a similar but distinct role in Kubernetes orchestration.

Understanding the Swarm architecture, including manager vs. worker roles, is vital for the DCA exam. Expect scenario questions where you must decide where to run commands, manage scale, or diagnose orchestration issues.

Question 8:

What is the main function of the Docker CLI command docker exec?

A. It launches a new container based on an image
B. It removes a container and its associated volumes
C. It runs a command inside an existing container
D. It pauses all processes in a container

Answer: C

Explanation:

The docker exec command is used to run a command inside an already running container. This is especially helpful for interacting with containers while they're live—either for debugging, configuration, or routine management.

For example, if you're running a web server inside a container and need to inspect its logs or check environment variables, you can open an interactive bash session inside my_container, allowing real-time command execution.
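
A typical session looks like this (assuming the container is named my_container and has bash installed):

```shell
docker exec -it my_container bash
```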

Here’s what the other options do:

  • A. docker run is the correct command to start a new container from an image.

  • B. docker rm -v or docker container rm --volumes would remove a container and its associated volumes.

  • D. docker pause suspends all processes within a container.

Understanding docker exec is crucial for day-to-day container operations and appears frequently on the DCA exam. You must know how to use it for troubleshooting and interacting with running services inside containers.

Question 9:

In Kubernetes, what is the primary purpose of a Deployment object?

A. To provide persistent storage for Pods
B. To expose a Pod using a stable IP and DNS name
C. To manage the desired state of application replicas
D. To run jobs that complete and exit

Answer: C

Explanation:

A Deployment in Kubernetes is a controller that manages the desired state of an application by ensuring the specified number of replica Pods are running and up-to-date.

The Deployment defines:

  • What image to use

  • The number of replicas

  • Update strategy (rolling updates or recreates)

  • Labels and selectors to identify Pods

When you apply a Deployment YAML file, Kubernetes creates or updates Pods accordingly. If a Pod fails or is deleted, the Deployment controller detects the deviation and brings the system back to the desired state.
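
A minimal manifest illustrating those fields might look like the following (the names, label, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # desired number of Pods
  selector:
    matchLabels:
      app: web              # must match the Pod template labels
  strategy:
    type: RollingUpdate     # update strategy
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # illustrative image
```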

Incorrect options explained:

  • A. Persistent storage is managed through PersistentVolumeClaims and PersistentVolumes, not Deployments.

  • B. Services—not Deployments—are used to expose Pods via stable IPs and DNS.

  • D. Jobs or CronJobs are designed for tasks that run to completion.

Understanding how Deployments work—including rollback, scaling, and strategy parameters—is vital for success on the DCA exam. Candidates are tested on both creating and managing Deployments using kubectl and YAML manifests.

Question 10:

Which of the following Kubernetes resources would you use to ensure a container restarts if it fails unexpectedly?

A. ConfigMap
B. ReplicaSet
C. InitContainer
D. Liveness Probe

Answer: D

Explanation:

The Liveness Probe is a Kubernetes feature that checks the health of a running container. If the probe fails, Kubernetes will restart the container, ensuring the application stays operational.

There are three main types of probes:

  1. Liveness Probe – Checks if the container is alive; restarts it if it fails.

  2. Readiness Probe – Checks if the container is ready to serve traffic.

  3. Startup Probe – Used to delay other probes while the container starts up.

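A minimal probe configuration matching the parameters described below could look like this (the container port is an illustrative assumption):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080              # illustrative port
  initialDelaySeconds: 10   # wait 10s after the container starts
  periodSeconds: 5          # probe every 5s
```
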
In this example, Kubernetes sends HTTP requests to /healthz every 5 seconds, starting 10 seconds after the container launches. If enough consecutive failures are detected (a response code outside the 200–399 success range, or no response at all), the container is restarted.

Why the other options are incorrect:

  • A. ConfigMaps store configuration data but don’t affect restarts.

  • B. ReplicaSets manage Pod replication, not restarts of individual containers.

  • C. InitContainers are run before the main container starts and don't manage failure recovery.

For the DCA exam, you should understand not just how to configure Liveness Probes, but also how they differ from other probes and how they affect Pod lifecycle management. Liveness Probes are essential for creating self-healing applications within Kubernetes clusters.

