LPI 201-450 Exam Dumps & Practice Test Questions
Which of the following commands will effectively wipe out all data from the /dev/sdb3 partition by overwriting it?
A. rm /dev/sdb3
B. dd if=/dev/zero of=/dev/sdb3
C. dd of=/dev/zero if=/dev/sdb3
D. umount /dev/sdb3
Correct Answer: B
Explanation:
The most appropriate command for erasing the contents of a disk partition like /dev/sdb3 in a Unix-like operating system is dd if=/dev/zero of=/dev/sdb3. The dd utility is a powerful tool for copying and converting data at a low level, allowing you to read from one source and write to another with precision.
In this command:
if=/dev/zero defines the input file as /dev/zero, a special device that provides a continuous stream of zero bytes.
of=/dev/sdb3 defines the output file as the partition /dev/sdb3, which will receive the stream of zeroes.
As a result, the command will overwrite every byte in the /dev/sdb3 partition with zeroes, effectively destroying all the data it previously held. This method is commonly used for securely wiping a disk or preparing it for a clean installation or new file system.
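The same dd technique can be rehearsed safely against a throwaway image file instead of a real partition (every path below is a hypothetical scratch file, chosen for this sketch):

```shell
# Hedged demo: wipe a scratch image the same way dd would wipe a partition.
# /tmp/fake_part.img is a stand-in for /dev/sdb3 -- safe to run anywhere.
dd if=/dev/urandom of=/tmp/fake_part.img bs=1M count=4 2>/dev/null              # simulate old data
dd if=/dev/zero of=/tmp/fake_part.img bs=1M count=4 conv=notrunc 2>/dev/null    # overwrite with zeroes
# Count the non-zero bytes left in the image; 0 means it was fully wiped.
tr -d '\0' < /tmp/fake_part.img | wc -c
```

On a real partition you would substitute of=/dev/sdb3, double-checking the device name first, since dd offers no confirmation prompt.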
Now, let’s clarify why the other options are incorrect:
A. rm /dev/sdb3 would, at most, unlink the device node from the /dev directory; it never touches the data stored on the partition itself. Run without root privileges it simply fails with a permission error, and even run as root it only removes the device file. Either way, no data on the partition is erased.
C. dd of=/dev/zero if=/dev/sdb3 flips the input and output, copying the partition's contents to /dev/zero. Contrary to a common misconception, /dev/zero does accept writes, but it silently discards them. The command will therefore run to completion while leaving /dev/sdb3 completely untouched.
D. umount /dev/sdb3 only detaches the partition from the file system hierarchy. It does not modify or delete any data on the partition itself. While unmounting is a necessary step before erasing or formatting a partition, it does not perform the erasure.
In summary, only dd if=/dev/zero of=/dev/sdb3 effectively wipes the partition by overwriting its data.
When compiling software from source code using GNU make, which two files does it automatically look for as build instructions if no specific file is indicated?
A. configure
B. config.h.in
C. makefile
D. Makefile
E. Makefile.in
Correct Answers: C, D
Explanation:
GNU make is a utility used to build and compile source code based on instructions defined in a special file, commonly called a Makefile. When the make command is run without specifying a file using the -f option, it looks for default filenames in the current directory to use as its build script. The two default filenames that make will automatically recognize and use are makefile (all lowercase) and Makefile (with a capital "M"). (Strictly speaking, GNU make checks for GNUmakefile first, then makefile, then Makefile, but GNUmakefile is not among the answer choices here.)
Option C: makefile – This is one of the standard filenames GNU make checks. While it's less common than the capitalized variant, it is still perfectly valid and supported.
Option D: Makefile – This is the most widely used filename for defining build instructions. It is considered the default and is usually preferred for better visibility and convention.
The file contains a series of rules, dependencies, and targets that tell make how to compile and link the program, ensuring the correct sequence and avoiding unnecessary recompilation.
Now, here’s why the other options are incorrect:
A. configure – This is a shell script commonly used in software packages that use the GNU Autotools system. It prepares the system for building by detecting dependencies and generating a suitable Makefile, but it is not read or executed by make directly.
B. config.h.in – This is a template file used by autoheader to create a config.h file, which defines macros for conditional compilation. It’s part of the configuration process but irrelevant to GNU make’s search for build instructions.
E. Makefile.in – This is a template used by configure to produce the final Makefile. make doesn’t read this file; it’s only a step in the configuration process.
Therefore, the correct answers are C and D, as these are the two filenames GNU make uses by default to guide the build process.
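A minimal sketch of this lookup, using a hypothetical scratch directory: with no -f flag, make silently picks up the Makefile it finds in the current directory.

```shell
# Hedged demo: make finds "Makefile" in the current directory without -f.
mkdir -p /tmp/make_demo && cd /tmp/make_demo
printf 'all:\n\t@echo build-ok\n' > Makefile   # one rule, one tab-indented command
make    # runs the default target read from ./Makefile
```

Renaming the file to makefile (lowercase) works equally well; any other name would require make -f <name>.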
Which command should be used to extract only files that include "lpi" in their names from the compressed archive lpifiles.tar.gz?
A. tar xvzf lpifiles.tar.gz --wildcards "*lpi*"
B. tar xvzwf lpifiles.tar.gz "lpi"
C. tar -xvfz lpifiles.tar.gz --deep "lpi"
D. tar -xvzf lpifiles.tar.gz --subdirs "lpi"
E. tar xvzf lpifiles.tar.gz --globbing "lpi"
Correct Answer: A
Explanation:
The correct command to selectively extract files containing "lpi" from a tar.gz archive is option A, which uses the correct syntax and options recognized by the tar utility.
Let’s break down the command:
tar is the command-line utility used to create, manipulate, and extract archive files.
The flags used are:
x: Extract files.
v: Verbose output, listing each file being extracted.
z: Use gzip to decompress .gz files.
f: Indicates the archive file to work with (in this case, lpifiles.tar.gz).
--wildcards "*lpi*": This instructs tar to match filenames against a wildcard pattern. The pattern *lpi* matches any filename that contains "lpi" anywhere in the name.
Option A is valid and effective because it uses a supported method (--wildcards) to extract specific files by name.
Let’s examine why the other options are incorrect:
B includes w, which is tar's interactive flag (--interactive): it prompts for confirmation before each action rather than filtering by name. The bare "lpi" argument would also be treated as an exact member name, so this command does not extract files that merely contain "lpi".
C uses --deep, which is not a recognized tar option and doesn't affect extraction behavior.
D uses --subdirs, which is not a valid option for the tar utility and is not needed to manage directory structures.
E mentions --globbing, which is not a recognized tar option either.
In conclusion, A is the only command that correctly uses valid options to extract files matching a specific name pattern using wildcards. This is the preferred method when working with large archives and selectively restoring files.
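The behavior can be sketched end to end with a few scratch files under /tmp (all names here are hypothetical stand-ins):

```shell
# Hedged demo: extract only archive members whose names contain "lpi".
mkdir -p /tmp/tar_demo/src /tmp/tar_demo/out
cd /tmp/tar_demo/src
touch lpi-notes.txt lpi201.cfg readme.txt
tar czf ../lpifiles.tar.gz lpi-notes.txt lpi201.cfg readme.txt
cd /tmp/tar_demo/out
tar xvzf ../lpifiles.tar.gz --wildcards "*lpi*"   # readme.txt is skipped
```

Only the two names matching the *lpi* pattern land in the output directory; readme.txt stays in the archive.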
A user attempted to install software using the standard sequence ./configure && make && make install, but the installation step failed.
What actions could the user take to successfully complete the installation? (Choose two.)
A. Install the binaries manually with suinstall
B. Run make install with root privileges
C. Skip ./configure to retain default settings
D. Run ./configure with the --prefix flag set to a user-writable directory
E. Use make install_local to install into /usr/local/
Correct Answers: B, D
Explanation:
When a user runs the sequence ./configure && make && make install, the first two steps—configure and make—usually complete successfully under normal user permissions. However, the final step, make install, often fails if it attempts to write files to system-level directories such as /usr/bin or /usr/local/bin, which require root access.
Option B is correct because using root privileges (e.g., via sudo make install) gives the user the necessary permissions to install files to protected system directories. This is a standard and widely used solution when installing software system-wide.
Option D is also valid. By rerunning ./configure with the --prefix flag, the user can redirect the installation to a directory they own and can write to, for example: ./configure --prefix=$HOME/local
This avoids permission issues altogether and is especially helpful in environments where the user does not have sudo access.
Now, let’s evaluate the incorrect options:
A refers to suinstall, which is not a known command or standard method for installation. Manual copying of binaries can lead to misplacement and dependency errors.
C suggests skipping ./configure, which is essential for setting up build parameters. Skipping this step usually results in build or install failures.
E suggests make install_local, which is not a standard make target unless explicitly defined by the software’s Makefile. Additionally, /usr/local/ still requires root access for writing.
Therefore, the best two solutions are to either gain the required permissions (B) or adjust the configuration to install into a directory the user controls (D). Both options effectively resolve permission-related installation failures.
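The idea behind option D can be sketched without a real source tree (every path below is a hypothetical stand-in chosen for this demo): pick a prefix the user owns, and the install step needs no elevated rights.

```shell
# Hedged sketch of option D: install under a user-writable prefix.
# In a real build: ./configure --prefix="$HOME/.local" && make && make install
PREFIX=/tmp/demo_prefix                               # stand-in for a user-owned prefix
mkdir -p "$PREFIX/bin"
printf '#!/bin/sh\necho tool-ok\n' > /tmp/mytool.sh   # stand-in for a built binary
install -m 755 /tmp/mytool.sh "$PREFIX/bin/mytool"    # mimics "make install", no sudo
"$PREFIX/bin/mytool"                                  # runs from the private prefix
```

Adding $PREFIX/bin to PATH then makes such privately installed tools available in the user's shell.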
After running cd /opt followed by tar xvf /dev/nst0, what is the result if you execute the same sequence again without making changes?
A. An error occurs due to the tape being ejected after the initial use
B. The data originally extracted to /opt is restored again
C. /opt is completely overwritten with the next file on the tape
D. Additional content is extracted from the next archive on the tape into /opt
Correct answer: D
Explanation:
This command sequence interacts with a tape device (/dev/nst0) in a Linux environment using the tar utility. Here's a detailed breakdown of its behavior:
cd /opt changes the working directory to /opt, the target directory for the extraction.
tar xvf /dev/nst0 uses the tar utility to extract files (x), in verbose mode (v), from the device /dev/nst0 (a tape device in non-rewinding mode).
The critical detail here lies in the use of nst0 instead of st0. The n in nst0 stands for non-rewinding, meaning after a tar file is read, the tape head does not rewind to the beginning. Instead, it moves forward to the next tar archive on the tape.
Therefore, running the same tar command again will not extract the same content—it will continue from the next archive stored on the tape. This means that additional files, from the next archive in sequence, will be added to the /opt directory without replacing existing content.
Let’s review the options:
A is incorrect because the tape isn’t ejected automatically after use, especially when using a non-rewinding device.
B is wrong because the tape doesn’t rewind automatically—so the previously extracted content won’t be extracted again.
C is misleading. The contents in /opt aren’t replaced but rather expanded with whatever new files are in the next archive.
D is correct. The next file or archive on the tape is extracted into /opt, resulting in additional content being added.
In summary, rerunning the same command will pull more data from the next archive on the non-rewound tape, appending it to the existing contents in /opt.
You want to simulate the failure of a disk in a RAID 5 array for testing purposes. Which single command should you use to mark a disk as failed?
A. mdadm --remove /dev/md0 /dev/sdd1
B. mdadm --zero-superblock /dev/sdf3
C. mdadm --force-fault /dev/md2 /dev/sde2
D. mdadm --fail /dev/md0 /dev/sdc1
E. mdadm /dev/md0 --offline /dev/sdc1
Correct answer: D
Explanation:
To test fault tolerance or degraded state recovery in a RAID 5 setup, system administrators sometimes simulate a disk failure. This can be done using the mdadm utility, which is used for managing software RAID arrays in Linux.
Option D — mdadm --fail /dev/md0 /dev/sdc1 — is the correct command for this task. The --fail flag tells mdadm to mark the specified device as failed, simulating the scenario where a physical disk actually goes bad. This causes the RAID array /dev/md0 to enter a degraded state, just as it would during a real disk failure. This degraded state allows administrators to observe how the RAID array performs and recovers.
Now, let’s clarify why the other options are incorrect:
A (--remove) simply removes the disk from the RAID array. While the array becomes degraded, this doesn't simulate a disk failure in the same way. It’s more of a clean removal.
B (--zero-superblock) wipes the RAID metadata from the device, which is useful when repurposing a disk. This doesn't simulate a failure—it just erases RAID configuration data.
C uses a non-existent --force-fault flag. This option is not recognized by mdadm, so it’s invalid for simulating disk failure.
E (--offline) is not a standard mdadm option. While some systems may offer ways to take a device offline, it's not the correct or standard method for simulating failure in software RAID via mdadm.
In conclusion, to simulate a failure within a RAID 5 array using standard Linux tools, mdadm --fail is the correct and reliable way to induce a failed state for testing or demonstration purposes.
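After marking a member failed, the degraded state becomes visible in /proc/mdstat. The snippet below inspects a hard-coded sample of that output (a live array would require root and real block devices); the (F) flag and the [3/2] [_UU] counters are the tell-tale signs.

```shell
# Hedged sketch: sample /proc/mdstat output after "mdadm --fail /dev/md0 /dev/sdc1".
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid5 sdc1[0](F) sdd1[1] sde1[2]
      1953260544 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
EOF
# "(F)" marks the failed member; "[3/2]" means 3 devices configured, only 2 active.
grep -oF 'sdc1[0](F)' /tmp/mdstat.sample
```

Recovery testing would then continue with mdadm --remove followed by mdadm --add to re-introduce the disk and watch the rebuild.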
Question 7:
What is the fewest number of disks necessary to build a RAID 5 array that offers full redundancy?
A. 1
B. 2
C. 3
D. 4
E. 5
Answer: C
Explanation:
RAID 5 is a popular level of Redundant Array of Independent Disks (RAID) used in enterprise and personal computing environments. It is designed to balance performance, capacity, and data protection. In a RAID 5 configuration, data is split and written across multiple disks, a method known as data striping. In addition, parity information—used for fault tolerance—is distributed among the disks. This parity data enables the system to rebuild the contents of a failed disk, allowing for continued operation even in the event of a single disk failure.
For this scheme to work, at least three disks are required. The equivalent of one disk's capacity is consumed by parity, but the parity blocks themselves are distributed across all member disks rather than held on a single dedicated drive (a dedicated parity disk is the hallmark of RAID 4, not RAID 5). This distributed parity provides redundancy, meaning that the RAID 5 array can sustain the failure of one drive without data loss. With fewer than three disks, it is impossible to combine striping with distributed parity, which are fundamental to RAID 5's function.
Now let’s evaluate the other options:
A (1 disk): A single disk setup offers no striping or parity, and therefore no redundancy. This is not a RAID configuration.
B (2 disks): While two disks can be used in RAID 1 (mirroring), they do not meet the minimum requirement for RAID 5. There would not be enough drives to both store data and distribute parity.
D (4 disks) and E (5 disks): These are both valid configurations for RAID 5 and would offer greater storage capacity and potentially improved performance. However, the question asks for the minimum, and three disks are sufficient to implement RAID 5.
In summary, three disks are the absolute minimum necessary to configure a fully functional and redundant RAID 5 array. This allows for both data and parity distribution, offering fault tolerance and efficient storage utilization. Therefore, the correct answer is C.
Question 8:
A Linux system includes one hard disk and one CD writer, both using SATA connections. Which device file typically represents the CD writer?
A. /dev/hdb
B. /dev/sdd
C. /dev/scd1
D. /dev/sr0
E. /dev/sr1
Answer: D
Explanation:
In Linux operating systems, hardware devices are represented by special files located in the /dev directory. Each device file provides a standardized interface through which software can interact with physical components like hard drives, optical drives, and USB devices. When it comes to CD/DVD writers, Linux assigns specific device files to identify and manage these optical drives.
The correct device file for the first CD/DVD writer on a Linux system is usually /dev/sr0. The prefix “sr” stands for SCSI CD-ROM, which remains the convention even though modern optical drives commonly use SATA rather than traditional SCSI interfaces. The Linux kernel abstracts the hardware interface, so SATA optical drives are still treated as if they are SCSI devices for consistency.
Let’s assess the incorrect choices:
A. /dev/hdb: This refers to the second IDE hard disk in legacy systems using Parallel ATA (PATA). Modern systems with SATA drives no longer use this naming scheme, and it does not apply to CD writers.
B. /dev/sdd: This file typically represents the fourth SCSI or SATA disk. Since there is only one hard drive in the system, and the other device is a CD writer, this is not a suitable choice.
C. /dev/scd1: Under the older scd naming scheme used for SCSI CD-ROM drives, this name would denote the second such drive, not the first. The scheme survives on some legacy systems but has largely been replaced by the /dev/srX convention in modern distributions.
E. /dev/sr1: This would be the second optical drive in a system. However, since the question states there is only one CD writer, sr1 is not applicable.
In conclusion, the standard and modern Linux device file for the first optical drive is /dev/sr0. This file allows applications and the OS to interact with the CD writer for reading and writing operations. Thus, the correct answer is D.
Which of the following commands is used to load a kernel module and ensure that all dependencies are also loaded automatically?
A. insmod
B. modprobe
C. lsmod
D. depmod
Correct Answer: B
Explanation:
The modprobe command is used in Linux to load a kernel module into memory along with any modules it depends on. It is an essential tool for managing kernel modules in a system running on a modular kernel, where functionality (like device drivers or filesystems) is loaded on-demand rather than compiled directly into the kernel.
When you use modprobe <module_name>, it looks up the module and its dependencies (listed in /lib/modules/$(uname -r)/modules.dep) and loads them in the correct order. This makes it more intelligent and safer than using insmod.
Option A, insmod, only inserts a single module without checking or resolving dependencies. This can result in errors if the module relies on others that haven't yet been loaded.
Option C, lsmod, lists currently loaded modules but does not load any. It is a diagnostic tool, not a management tool.
Option D, depmod, generates the modules.dep and related files used by modprobe to determine dependencies. It does not load modules directly.
Thus, for operational usage where dependency handling is critical, modprobe is the preferred and correct choice. This command is particularly useful during startup sequences (handled via init systems) or when dealing with hot-pluggable devices like USBs or network adapters. It ensures system stability by managing the intricate relationships between kernel components automatically.
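The dependency data modprobe consults has a simple line format: a module path, a colon, then the modules it needs. The sketch below parses a hand-written sample (the real file lives at /lib/modules/$(uname -r)/modules.dep and its exact contents vary per kernel, so the entries here are illustrative):

```shell
# Hedged sketch: the modules.dep format that modprobe reads for dependency order.
cat > /tmp/modules.dep.sample <<'EOF'
kernel/drivers/usb/storage/usb-storage.ko: kernel/drivers/usb/core/usbcore.ko
kernel/drivers/usb/core/usbcore.ko:
EOF
# Everything after the colon must be loaded before the module itself:
awk -F': ' '/usb-storage/ { print "load first:", $2 }' /tmp/modules.dep.sample
```

This is exactly the ordering work that insmod leaves to the user and modprobe performs automatically.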
Which of the following files in a system using SysVinit controls the default runlevel at system startup?
A. /etc/init.d/default
B. /etc/inittab
C. /etc/default/runlevel
D. /boot/grub/grub.cfg
Correct Answer: B
Explanation:
In systems that use SysVinit as the init system, the /etc/inittab file is responsible for defining the system’s default runlevel. The default runlevel is the mode in which the system boots—such as single-user mode, multi-user mode with or without networking, or full graphical login.
Inside /etc/inittab, the line that controls the default runlevel looks like this:

id:3:initdefault:

Here, 3 signifies that the system should boot into runlevel 3, which typically means multi-user mode with networking but without a graphical interface (in many distributions).
Option A, /etc/init.d/default, is not a valid configuration file. The /etc/init.d/ directory contains startup scripts for different services, but it does not control the default runlevel.
Option C, /etc/default/runlevel, is a misleading name and does not exist in standard SysVinit systems.
Option D, /boot/grub/grub.cfg, is used by the GRUB bootloader to determine which kernel to boot, but it does not influence the runlevel selected once the kernel is running and the init system takes over.
Understanding the role of /etc/inittab is essential for legacy system administration tasks. Although many modern Linux distributions have migrated to systemd, which uses systemctl and /etc/systemd/system/default.target, LPIC-2 candidates are still expected to understand SysVinit due to its presence in older or minimal systems.
By editing /etc/inittab, administrators can ensure systems boot into the desired state, such as a safe mode for maintenance or full graphical mode for user environments. Mastery of this file is crucial for troubleshooting and controlling boot behavior in traditional Linux environments.
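The default-runlevel entry has the fixed form id:<runlevel>:initdefault:, so the runlevel field can be pulled out with a simple colon split. The sample file below mimics the relevant /etc/inittab entry rather than touching the real file:

```shell
# Hedged sketch: read the default runlevel from an inittab-style entry.
printf 'id:3:initdefault:\n' > /tmp/inittab.sample
awk -F: '/initdefault/ { print "default runlevel:", $2 }' /tmp/inittab.sample
# -> default runlevel: 3
```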