Cisco 350-401 Exam Dumps & Practice Test Questions
Question 1:
How do the Routing Information Base (RIB) and Forwarding Information Base (FIB) differ in their functions on a network device?
A. The FIB is derived from and populated using the RIB’s information.
B. The RIB is an exact duplicate of the FIB.
C. The RIB is utilized for forwarding packets based on the source IP address.
D. The FIB stores every route from all routing protocols.
Answer: A
Explanation:
In modern routing architecture, both the Routing Information Base (RIB) and the Forwarding Information Base (FIB) play distinct yet interconnected roles in how network traffic is managed and routed. Understanding their differences is fundamental for anyone working with routers or network design.
The RIB is a comprehensive routing table maintained by the control plane of a router. It aggregates routing information learned from different sources, including dynamic routing protocols such as OSPF, EIGRP, and BGP, as well as static routes and directly connected networks. This table contains all candidate routes to each destination, along with their administrative distances, metrics, and next-hop information.
On the other hand, the FIB resides in the data plane and is optimized for high-speed packet forwarding. It is built from the RIB but includes only the best route for each destination prefix. The router evaluates the candidate routes in the RIB, selects the best path based on administrative distance and metric, and installs those entries into the FIB. This allows the FIB to act as a streamlined lookup table that forwards packets quickly without processing the entire routing table.
Option B is incorrect because the FIB is not a copy of the RIB—it only includes the best routes.
Option C is incorrect because forwarding decisions are based on the destination IP address, not the source, and the RIB is a control-plane table that is not itself used to forward packets.
Option D is false because the FIB does not store all routing data—only the best routes from the RIB make it into the FIB.
In conclusion, the correct understanding is that the FIB is populated using the best route entries from the RIB, and this makes A the accurate answer.
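The RIB-to-FIB relationship can be illustrated with a minimal sketch. This is a simplified model, not Cisco's internal implementation: each RIB entry carries a prefix, administrative distance, metric, and next hop, and for each prefix the route with the lowest (administrative distance, metric) pair is installed in the FIB.

```python
# Illustrative model of deriving a FIB from a RIB (not Cisco internals).
# RIB entry: (prefix, administrative_distance, metric, next_hop).

rib = [
    ("10.0.0.0/24", 110, 20, "192.168.1.1"),   # learned via OSPF (AD 110)
    ("10.0.0.0/24", 90,  30, "192.168.2.1"),   # learned via EIGRP (AD 90, preferred)
    ("172.16.0.0/16", 1,  0,  "192.168.3.1"),  # static route (AD 1)
]

def build_fib(rib):
    best = {}
    for prefix, ad, metric, next_hop in rib:
        current = best.get(prefix)
        # lower administrative distance wins; metric breaks ties within a protocol
        if current is None or (ad, metric) < (current[0], current[1]):
            best[prefix] = (ad, metric, next_hop)
    # the FIB keeps only prefix -> next hop for fast data-plane lookup
    return {prefix: nh for prefix, (_, _, nh) in best.items()}

print(build_fib(rib))
# {'10.0.0.0/24': '192.168.2.1', '172.16.0.0/16': '192.168.3.1'}
```

Note how the OSPF route to 10.0.0.0/24 stays in the RIB but never reaches the FIB: the EIGRP route's lower administrative distance wins the selection.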
Question 2:
Which component of Quality of Service (QoS) is responsible for modifying a packet to influence its handling across a network?
A. Policing
B. Classification
C. Marking
D. Shaping
Answer: C
Explanation:
Quality of Service (QoS) mechanisms are designed to manage network traffic effectively, particularly when bandwidth is limited or when supporting latency-sensitive applications like voice or video. One critical function within QoS is determining how different types of traffic should be treated by networking devices. This is where the concept of marking comes into play.
Marking refers to the act of embedding specific values into packet headers to indicate how that traffic should be prioritized or treated throughout the network. This can include setting fields like the Differentiated Services Code Point (DSCP) in the IP header or the Class of Service (CoS) bits in an Ethernet frame. Once marked, network devices along the path use these indicators to apply QoS policies, such as prioritizing voice packets over bulk file transfers.
Option A, Policing, is about enforcing traffic limits and can drop or re-mark packets that exceed a defined threshold, but it doesn’t inherently define the original markings.
B, Classification, is the step prior to marking, where packets are examined and sorted based on attributes like IP address or protocol. It identifies traffic types but doesn’t alter packets.
D, Shaping, involves delaying packets to ensure that traffic adheres to a defined rate—again, it modifies timing, not packet content.
Therefore, marking is the only process among the choices that directly changes packet headers to influence QoS treatment. These markings guide other devices on how to queue or prioritize traffic, making C the correct answer.
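How a DSCP marking lands in the packet can be shown concretely. The 6-bit DSCP value occupies the upper six bits of the IPv4 ToS byte (the lower two bits carry ECN), so marking a packet EF (DSCP 46) writes 0xB8 into that byte. The sketch below uses standard per-hop-behavior code points:

```python
# Mapping a DSCP marking into the IPv4 ToS / IPv6 Traffic Class byte.
# DSCP occupies bits 7..2; the two low-order bits are ECN (RFC 2474/3168).

DSCP = {"EF": 46, "AF41": 34, "CS0": 0}  # Expedited Forwarding, Assured Forwarding 41, best effort

def tos_byte(dscp, ecn=0):
    assert 0 <= dscp < 64 and 0 <= ecn < 4
    return (dscp << 2) | ecn

print(hex(tos_byte(DSCP["EF"])))    # 0xb8 -- typical marking for voice traffic
print(hex(tos_byte(DSCP["AF41"])))  # 0x88 -- typical marking for interactive video
```

Downstream devices read this byte, match it against their QoS policies, and queue or prioritize the packet accordingly.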
Question 3:
Which of the following correctly describes how Cisco Express Forwarding (CEF) operates within a router?
A. The CPU is actively engaged in making forwarding decisions for each packet.
B. It utilizes a high-speed cache located in the data plane.
C. It builds and uses two key data plane tables: the FIB and the adjacency table.
D. Forwarding decisions are handled by the IOS scheduler process.
Answer: C
Explanation:
Cisco Express Forwarding (CEF) is a critical feature in Cisco routers designed to maximize performance by speeding up packet forwarding while minimizing CPU load. It achieves this by pre-computing and storing route information in optimized tables that reside in the data plane.
The two essential tables CEF uses are the Forwarding Information Base (FIB) and the adjacency table. The FIB is derived from the Routing Information Base (RIB) and contains the best paths for forwarding decisions. The adjacency table complements the FIB by storing Layer 2 next-hop information, such as MAC addresses and outgoing interface details. When a packet arrives, the router quickly references these tables to determine the best path and next-hop information, ensuring high-speed, deterministic forwarding.
Option A is incorrect because the primary benefit of CEF is to offload packet forwarding from the CPU, allowing it to handle more complex control tasks instead.
Option B is misleading; CEF doesn’t use a generic "fast cache" but instead relies on structured tables (FIB and adjacency table) for consistent performance.
Option D incorrectly suggests that the IOS scheduler is involved in forwarding decisions. While the IOS scheduler manages system-level tasks, it does not handle packet forwarding when CEF is enabled.
By maintaining pre-structured tables in the data plane, CEF enables high-throughput, low-latency routing decisions without involving the CPU in every forwarding operation. This makes C the accurate and complete choice.
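The two-table lookup can be sketched as follows. This is a simplified illustration (table contents are invented): a longest-prefix match against the FIB yields the next hop, and the adjacency table then supplies the egress interface and Layer 2 rewrite information.

```python
import ipaddress

# Simplified CEF-style lookup: longest-prefix match in the FIB, then an
# adjacency-table lookup for Layer 2 rewrite data. Entries are illustrative.

fib = {
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.2.1",
}
adjacency = {
    "192.168.1.1": ("Gi0/0", "aa:bb:cc:00:00:01"),
    "192.168.2.1": ("Gi0/1", "aa:bb:cc:00:00:02"),
}

def forward(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in fib if dst in net]
    if not matches:
        return None                              # no route: punt or drop
    best = max(matches, key=lambda net: net.prefixlen)  # most specific prefix wins
    next_hop = fib[best]
    iface, mac = adjacency[next_hop]             # Layer 2 rewrite info
    return next_hop, iface, mac

print(forward("10.1.2.3"))    # matches 10.1.0.0/16, forwarded via 192.168.2.1 on Gi0/1
print(forward("10.200.0.1"))  # falls back to 10.0.0.0/8 via 192.168.1.1 on Gi0/0
```

Because both tables are precomputed from the control plane's results, the per-packet work reduces to these two lookups.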
Question 4:
What is one key advantage of using an on-premises setup instead of cloud-based infrastructure?
A. Easily scale compute capacity without needing hardware installations
B. Requires less power and cooling resources to operate
C. Enables faster deployment since no additional hardware is required
D. Provides reduced latency between systems due to physical proximity
Answer: D
Explanation:
On-premises infrastructure refers to computing systems that are physically hosted within an organization's own facilities. One of the most significant benefits of this approach is the reduced latency between systems that are situated in close physical proximity to each other. When systems are co-located—such as in the same data center—data can be transferred quickly and with minimal delay, which is critical for applications requiring real-time processing or low-latency performance.
Let’s evaluate the answer choices:
A is incorrect because rapid scaling of compute resources is a hallmark of cloud environments, not on-premises setups. In cloud platforms, adding resources can be done dynamically and automatically, while on-premises solutions require manual hardware procurement and setup, which is time-consuming and costly.
B is also incorrect. On-premises systems often demand more electricity and cooling because the organization must maintain and manage the entire physical environment. Cloud providers, by contrast, optimize their data centers for energy efficiency and manage environmental controls at scale.
C does not reflect the realities of traditional on-prem infrastructure. Deploying new systems or applications on-premises typically involves lengthy procurement, configuration, and installation processes. In contrast, cloud platforms allow for much faster provisioning through automated deployment tools.
D is correct. The low latency advantage comes from the physical closeness of servers and devices within the same data center or local network. This proximity significantly reduces the time it takes for data to travel between components, making on-premises solutions ideal for latency-sensitive applications such as financial systems, industrial controls, or media processing.
Thus, while cloud infrastructure offers flexibility and scalability, on-premises setups provide superior control over physical systems and can deliver faster, more predictable data transfer when systems are co-located.
Question 5:
In what way does traffic shaping under Quality of Service (QoS) help to reduce network congestion?
A. It discards packets once a certain bitrate is exceeded
B. It buffers extra traffic and schedules it for later delivery
C. It breaks large packets into fragments for smoother transmission
D. It drops random packets from low-priority traffic queues
Answer: B
Explanation:
Traffic shaping is a method used in Quality of Service (QoS) to manage bandwidth usage and reduce congestion by controlling the flow of network traffic. This technique smooths out bursts of data transmission and helps ensure that the network remains stable and responsive, even under heavy loads.
The correct answer is B, which states that traffic shaping buffers and queues packets exceeding a committed rate and sends them out gradually. This mechanism regulates traffic flow by delaying the transmission of excess data rather than discarding it, ensuring that critical services and applications can function without interruption.
Let’s examine the other options:
A is incorrect because discarding packets when the bitrate exceeds a threshold is more characteristic of traffic policing, not shaping. Policing enforces traffic rates strictly by dropping non-compliant packets, which can lead to data loss.
C refers to packet fragmentation, which is unrelated to traffic shaping. Fragmentation is a function of network protocols when a packet is too large for the underlying network, and it’s not directly associated with managing bandwidth or congestion.
D is associated with a technique called Random Early Detection (RED), which drops packets from queues before they become full. While RED is used to prevent buffer overflows, it does not buffer and schedule traffic the way shaping does.
Traffic shaping acts like a valve that moderates how fast packets are sent. When data rates exceed the agreed-upon threshold, packets are held in a buffer and released in a controlled manner, minimizing congestion and optimizing the quality of service across the network. This is particularly useful for delay-sensitive services such as VoIP, video streaming, and real-time gaming.
In summary, traffic shaping helps mitigate network congestion by delaying excess traffic, rather than discarding it, which makes B the most accurate choice.
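The buffer-and-release behavior can be modeled with a token bucket in front of a queue. This is a minimal sketch with illustrative rates and sizes: packets that exceed the committed rate are queued, not dropped, and are transmitted as tokens refill.

```python
from collections import deque

# Minimal traffic-shaper model: a token bucket plus a queue.
# Excess packets wait in the queue and drain as tokens accumulate.

class Shaper:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # committed rate in bytes/second
        self.tokens = burst_bytes         # bucket starts full
        self.burst = burst_bytes
        self.queue = deque()

    def enqueue(self, size_bytes):
        self.queue.append(size_bytes)     # buffer, never drop

    def tick(self, seconds):
        """Refill tokens for elapsed time, then send whatever now conforms."""
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)
        sent = []
        while self.queue and self.queue[0] <= self.tokens:
            size = self.queue.popleft()
            self.tokens -= size
            sent.append(size)
        return sent

shaper = Shaper(rate_bps=8000, burst_bytes=1500)   # 1000 bytes/s, 1500-byte burst
for size in (1500, 1500):                          # a two-packet burst arrives
    shaper.enqueue(size)
print(shaper.tick(0))    # [1500] -- first packet fits the initial burst
print(shaper.tick(1.5))  # [1500] -- second packet released once tokens refill
```

The second packet is delayed roughly 1.5 seconds rather than discarded, which is exactly the shaping behavior described above.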
Question 6:
A network engineer is explaining QoS and mentions traffic policing. Which two statements are accurate regarding how policing functions? (Choose two.)
A. Policing should occur near the traffic's source
B. Policing queues extra traffic when there’s congestion
C. Policing is best applied near the destination
D. Policing drops data that surpasses the predefined rate
E. Policing usually delays traffic instead of dropping it
Answers: A and D
Explanation:
Traffic policing is a network mechanism used in Quality of Service (QoS) to enforce bandwidth limits by monitoring the data rate of traffic flows and taking action when those rates are exceeded. It is designed to ensure that no user or application exceeds its allocated network resources, thereby protecting overall network performance.
A is correct because traffic policing is most effective when implemented near the source of the traffic. Applying it early in the data flow prevents excessive traffic from propagating through the network, which helps conserve bandwidth and avoid congestion further downstream.
D is also correct. One of the fundamental operations of traffic policing is dropping packets that go over the defined rate limit. In some implementations, instead of dropping, excess packets may be marked for lower priority handling, but typically they are discarded to maintain compliance with traffic profiles.
Now, let’s evaluate the incorrect choices:
B is wrong because queuing excess traffic is a function of traffic shaping, not policing. Traffic shaping smooths out bursts by holding packets in a queue, whereas policing takes a more immediate action by discarding non-compliant traffic.
C is incorrect as applying policing at the destination would be too late to effectively control the flow. By the time the traffic reaches the endpoint, it has already consumed network resources. Early policing helps avoid unnecessary congestion.
E is false. Traffic policing does not delay traffic; it simply checks compliance and drops or marks packets that exceed thresholds. If traffic delay or buffering is needed, that’s where traffic shaping comes in.
To conclude, traffic policing is a strict enforcement tool for network bandwidth management. It ensures that traffic adheres to specified rates by dropping non-compliant packets and is most effective when implemented near the source. Thus, A and D are the correct answers.
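The contrast with shaping becomes clear in a token-bucket sketch of a policer (rates and sizes illustrative): there is no queue, so a packet either conforms and is transmitted or exceeds the rate and is dropped on the spot.

```python
# Minimal traffic-policer model: a token bucket with no queue.
# Non-conforming packets are dropped (a real policer may re-mark instead).

class Policer:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # allowed rate in bytes/second
        self.tokens = burst_bytes         # bucket starts full
        self.burst = burst_bytes
        self.last = 0.0

    def check(self, size_bytes, now):
        # refill tokens for the time elapsed since the last packet
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return "conform"              # transmit immediately
        return "exceed"                   # drop, no buffering or delay

p = Policer(rate_bps=8000, burst_bytes=1500)   # 1000 bytes/s, 1500-byte burst
print(p.check(1500, now=0.0))   # conform -- fits the initial burst
print(p.check(1500, now=0.1))   # exceed -- only ~100 tokens refilled, packet dropped
print(p.check(1500, now=2.0))   # conform -- bucket refilled during the quiet period
```

Comparing this with the shaper model above: same token bucket, but the missing queue is what makes policing drop rather than delay.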
Question 7:
Which Cisco SD-WAN component is responsible for managing orchestration within the network architecture?
A. vBond
B. vSmart
C. vManage
D. WAN Edge
Answer: C
Explanation:
In Cisco’s SD-WAN architecture, multiple components work together to deliver a scalable, secure, and centrally managed network. Among these, the orchestration plane plays a crucial role in overseeing the network's centralized configuration, policy deployment, and operational management. The component in charge of this orchestration functionality is vManage.
vManage is a centralized network management system used by administrators to monitor, configure, and maintain all SD-WAN devices. It provides a graphical interface and APIs for interacting with the network. This orchestration function includes the deployment of device configurations, security policies, software updates, and performance monitoring across the entire SD-WAN fabric.
Let’s analyze the other components:
vBond (Option A) serves as the orchestration facilitator, handling initial device authentication and enabling secure connectivity between control and data plane elements. However, it doesn't manage ongoing configuration or monitoring.
vSmart (Option B) is the control plane component: it centralizes policy and routing decisions and enforces traffic steering, segmentation, and security policies, but it does not handle orchestration or monitoring tasks directly.
WAN Edge (Option D) refers to physical or virtual routers that reside at branch locations and form the data plane. They carry out the routing and forwarding of user traffic but have no orchestration responsibilities.
The orchestration plane’s job is not just about connectivity but managing the lifecycle and performance of the entire SD-WAN network. Through vManage, IT teams can push policies network-wide, troubleshoot issues, and ensure compliance from a single platform, making it integral to daily operations.
Therefore, the correct answer is C, as vManage is the designated orchestrator in the Cisco SD-WAN solution.
Question 8:
In a Cisco SD-Access network, which two roles are assigned to devices within the fabric? (Choose two)
A. Edge node
B. vBond controller
C. Access switch
D. Core switch
E. Border node
Answers: A and E
Explanation:
Cisco SD-Access is an enterprise networking solution that brings the benefits of Software-Defined Networking (SDN) into campus and branch networks. It introduces a fabric-based design, where devices are assigned specific roles that help streamline operations, improve security, and enable policy-based automation. Among the primary roles in SD-Access are edge nodes and border nodes.
An edge node (Option A) acts as the entry point for user devices into the SD-Access fabric. These are typically access-layer switches that connect endpoints such as laptops, printers, and IP phones to the network. In addition to forwarding user traffic, edge nodes apply segmentation and policy rules, enabling identity-based access control and micro-segmentation.
A border node (Option E) serves as the gateway between the SD-Access fabric and external networks. This could include the internet, data centers, or other SD-Access domains. The border node handles ingress and egress traffic, policy translation, and communication with resources located outside the fabric. It’s crucial for ensuring connectivity beyond the local SD-Access environment.
Now, let’s examine the incorrect options:
vBond controller (Option B) is a component of Cisco SD-WAN, not SD-Access. It manages initial device authentication and control plane formation but does not exist in the SD-Access architecture.
Access switch (Option C) is a traditional network role, but in SD-Access, this function is replaced or enhanced by the edge node role. So while the concept is similar, it’s not a formal role in SD-Access fabric.
Core switch (Option D) is another legacy term in traditional networks, generally providing backbone connectivity. While core switches might exist physically, they are not designated roles within the SD-Access architecture.
In summary, the edge node and border node are fundamental roles in Cisco’s SD-Access fabric, enabling device onboarding and external connectivity, respectively. These roles support the intelligent, secure, and automated nature of SD-Access networks.
Question 9:
Which action does a Layer 2 switch perform when it receives a frame with a destination MAC address that is not found in its MAC address table?
A. Drops the frame immediately.
B. Sends the frame only to the default gateway.
C. Floods the frame out all interfaces except the one it was received on.
D. Adds the unknown MAC address to its table and sends the frame back to the source.
Answer: C
Explanation:
When a Layer 2 switch receives an Ethernet frame, one of its first responsibilities is to determine how to forward it. It does this by checking its MAC address table—a dynamic table that maps known MAC addresses to switch ports. If the destination MAC address is found, the switch forwards the frame out of the corresponding interface. But if the destination address is not in the MAC table, the switch follows a default behavior known as flooding.
Flooding means the switch sends the frame out of all ports except the port on which it was received. This process ensures the frame reaches the intended destination, assuming the destination is connected to the same Layer 2 broadcast domain. If the recipient responds, the switch will then learn its MAC address and port location, updating its MAC table accordingly. This enables efficient forwarding in the future.
Let’s evaluate the other options:
A is incorrect because switches do not drop unknown-unicast frames unless security policies are explicitly configured to do so.
B is incorrect because the switch does not send unicast frames to a default gateway; gateways are used for Layer 3 routing, not Layer 2 decisions.
D is wrong because a switch doesn’t forward frames back to the source interface and doesn’t learn unknown destination MACs proactively.
This behavior is foundational to switch operations and essential for basic connectivity in Ethernet networks. Therefore, the correct answer is C.
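The learn-and-flood logic can be sketched in a few lines. This is a toy model of a Layer 2 switch: on ingress it learns the source MAC against the incoming port, then forwards out the known port or floods out every other port for unknown unicast.

```python
# Toy Layer 2 switch: source-MAC learning plus unknown-unicast flooding.

class Switch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                           # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port             # learn the sender's location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]          # known destination: one port
        return sorted(self.ports - {in_port})         # unknown unicast: flood

sw = Switch(ports=[1, 2, 3, 4])
print(sw.receive("aa:aa", "bb:bb", in_port=1))  # [2, 3, 4] -- flood, bb:bb unknown
print(sw.receive("bb:bb", "aa:aa", in_port=3))  # [1] -- aa:aa was learned on port 1
```

After the reply, both addresses are in the table, so subsequent frames between the two hosts are forwarded out a single port instead of flooded.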
Question 10:
What is the purpose of the control plane in a network device running a modern Cisco IOS XE architecture?
A. It is solely responsible for the actual packet forwarding function.
B. It handles all Layer 1 and Layer 2 processes in hardware.
C. It manages routing protocols and builds the RIB.
D. It offloads encryption and decryption functions for IPSec.
Answer: C
Explanation:
In Cisco networking architecture, especially within modern platforms like IOS XE, devices operate with a separation of responsibilities between the control plane and the data plane. This separation helps achieve high performance and scalability in enterprise networks.
The control plane is the brain of the device. It is responsible for tasks related to network intelligence, such as running routing protocols (like OSPF, BGP, or EIGRP), building and maintaining the Routing Information Base (RIB), managing neighbor relationships, and handling device configurations. These processes determine how the network should behave and what the optimal paths are, but they do not directly forward packets.
The actual task of packet forwarding is handled by the data plane, using tables (like the FIB and adjacency table) populated by the control plane. Once routes are selected, the control plane programs this information into the data plane for fast processing.
Let’s examine the incorrect options:
A is incorrect because forwarding is handled by the data plane, not the control plane.
B is also wrong: Layer 1 and Layer 2 processing is handled in hardware (ASICs) in the data plane, not by the control plane.
D is misleading—while some platforms can offload cryptographic functions, this is not a primary function of the control plane and is usually handled by a separate service plane or hardware acceleration.
The control plane’s key role is decision-making and protocol management, making C the correct answer. It ensures that the router or switch has accurate, up-to-date information to make intelligent routing choices, which are then enforced by the data plane.