
Linux System

Administration I
(Red Hat)

Bangladesh Korea Information Access Center (IAC)


Department of Computer Science and Engineering (CSE)
Bangladesh University of Engineering and Technology (BUET)
Linux System Administration I (Red Hat)

Published by
Bangladesh Korea Information Access Center (IAC)
Department of Computer Science and Engineering (CSE)
Bangladesh University of Engineering and Technology (BUET)

Sources of Materials
This document has been compiled from various sources, which include but are not limited to:

• Red Hat Enterprise Linux 7, System Administrator’s Guide, Red Hat, Inc.
(https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/
7/html/System_Administrators_Guide/).
• Red Hat Enterprise Linux 7 on-line reference manuals.
• The Linux Command Line by William Shotts (http://linuxcommand.org/tlcl.php).
• Ryans Tutorials — Linux Tutorial (http://ryanstutorials.net/linuxtutorial/).
• Tecmint: Linux Howtos, Tutorials & Guides (https://www.tecmint.com/).
• Wikipedia, the Free Encyclopedia (https://www.wikipedia.org/).

Time Allocation of Materials

1. The materials contained in this manual are expected to be covered in a total of 16 classes.
2. The time allocated to each chapter is shown at its start. It must be mentioned that this
allocation is only tentative and will be adjusted as the need arises. Each class is 3.0 hours
and will comprise a balanced distribution of lecture and hands-on sessions.
3. A review evaluation will be held in every class.
4. Classes in addition to the above will be allocated for the final evaluation.

Version: 1.2
Last modified: Tuesday the Twenty-Sixth of September, Two Thousand and Seventeen
Contents at a Glance

1 Introduction 1
(1/4 classes)

2 The Basics 5
(2 classes)

3 Text File Operations 33
(3/4 classes)

4 Linux Installation 47
(1 class)

5 Users and Groups 61
(2 classes)

6 Linux Processes 89
(3/4 classes)

7 Services and Daemons 99
(3/4 classes)

8 System Logs 109
(1/2 class)

9 Linux Networking 117
(1 class)

10 Securing Linux Systems 131
(2 classes)

11 Storage Administration 181
(1 1/2 classes)

12 Linux File Systems 205
(1 class)

13 System Issues 213
(1 1/2 classes)

14 Virtualized Systems 231
(1 class)

Detailed Contents

1 Introduction 1
(1/4 classes)
1.1 Red Hat, Red Hat Enterprise Linux and Others . . . . . . . . . . . . . . . . . . 1
1.1.1 Red Hat Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Red Hat Enterprise Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 CentOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 System Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Expectations from You — the Participants . . . . . . . . . . . . . . . . . . . . . 3

2 The Basics 5
(2 classes)
2.1 The Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 The Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Opening a Shell Prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.3 Shell Prompt Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.4 Structure of Shell Prompt Commands . . . . . . . . . . . . . . . . . . . 7
2.1.5 Useful Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.5.1 Tab Completion . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.5.2 Command History . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.5.3 [Ctrl]-[Z] and Running Processes in the Background . . . . . . 9
2.1.5.4 Wild Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 The Directory Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Understanding the File System Tree . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Absolute and Relative Paths . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.3 File System Hierarchy Standard (FHS) . . . . . . . . . . . . . . . . . . . 12
2.2.4 FHS Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.4.1 Home Directory . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.4.2 The /boot/ Directory . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.4.3 The /dev/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.4.4 The /etc/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.4.5 The /lib/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.4.6 The /media/ Directory . . . . . . . . . . . . . . . . . . . . . . 13
2.2.4.7 The /mnt/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2.4.8 The /opt/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.4.9 The /proc/ Directory . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.4.10 The /sbin/ Directory . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.4.11 The /srv/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.4.12 The /sys/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.4.13 The /usr/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.4.14 The /usr/local/ Directory . . . . . . . . . . . . . . . . . . . 15
2.2.4.15 The /var/ Directory . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Basic Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.1 Determining Your Current Directory . . . . . . . . . . . . . . . . . . . . 15
2.3.2 More on Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.3 Changing Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 More About Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.1 Everything is a File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.2 Linux is an Extensionless System . . . . . . . . . . . . . . . . . . . . . . 19
2.4.3 Linux is Case Sensitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.4 Spaces in Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.4.1 Quotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.4.2 Escape Characters . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.5 Hidden Files and Directories . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 Manipulating Files and Directories . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.1 View Directory Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.2 Creating a Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5.3 Removing a File or Directory . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.3.1 Deleting Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.3.2 Deleting Directories . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.4 Copying a File or Directory . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.5 Moving a File or Directory . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.6 Renaming a File or a Directory . . . . . . . . . . . . . . . . . . . . . . . 25
2.6 Create Hard and Soft Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6.1 Hard Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6.2 Soft Links or Symbolic Links . . . . . . . . . . . . . . . . . . . . . . . . 27
2.7 Getting Help in Red Hat Enterprise Linux . . . . . . . . . . . . . . . . . . . . . 29
2.7.1 Manual Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7.1.1 Using man . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7.1.2 Printing a Man Page . . . . . . . . . . . . . . . . . . . . . . . . 30
2.7.1.3 The man Man Page . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.7.2 info Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.7.3 Files in /usr/share/doc . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3 Text File Operations 33
(3/4 classes)
3.1 Creating, Viewing, and Editing Text Files . . . . . . . . . . . . . . . . . . . . . 33
3.1.1 Text Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1.1.1 Shell Prompt Text Editor vi . . . . . . . . . . . . . . . . . . . . 33
3.1.1.2 Text Editor Nano . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.2 Viewing Text Files from the Shell Prompt . . . . . . . . . . . . . . . . . 34
3.1.2.1 The head Command . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.2.2 The tail Command . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.2.3 The more Command . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.2.4 Viewing Files with less . . . . . . . . . . . . . . . . . . . . . . 35
3.1.2.5 Viewing and Creating Files with cat . . . . . . . . . . . . . . 35
3.1.2.6 The grep Command . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2 Pipes and Pagers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.1 Pipes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.2 Using Redirection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.3 Appending Standard Output . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2.4 Redirecting Standard Input . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2.5 Redirecting the Standard Error Stream . . . . . . . . . . . . . . . . . . . 38
3.3 Using Multiple Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4 File Compression and Archiving . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.1 Gzip and Gunzip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.4.2 Bzip2 and Bunzip2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4.3 Zip and Unzip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4.4 Tar and Star . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4.4.1 tar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4.4.2 star . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4 Linux Installation 47
(1 class)
4.1 Download RedHat Enterprise Linux . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3 Step by Step Installation Instructions . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3.1 Complete Installation and Register the System . . . . . . . . . . . . . . 51
4.4 Software Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4.1 Yum Package Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4.2 Red Hat Network, Remote and Local Repositories . . . . . . . . . . . . 57
4.5 RPM Package Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.6 Update Kernel Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

5 Users and Groups 61
(2 classes)
5.1 Manage Local Linux Users and Groups . . . . . . . . . . . . . . . . . . . . . . 61
5.1.1 Users and Groups Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

5.1.1.1 User Private Groups . . . . . . . . . . . . . . . . . . . . . . . . 62
5.1.1.2 Shadow Passwords . . . . . . . . . . . . . . . . . . . . . . . . 62
5.1.2 Managing Users in a Graphical Environment . . . . . . . . . . . . . . . 63
5.1.2.1 Using the Users Settings Tool . . . . . . . . . . . . . . . . . . 63
5.1.3 Using Command-Line Tools . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.1.3.1 Adding a New User . . . . . . . . . . . . . . . . . . . . . . . . 64
5.1.3.2 Explaining the Process . . . . . . . . . . . . . . . . . . . . . . 65
5.1.3.3 Modifying an User Account . . . . . . . . . . . . . . . . . . . 67
5.1.3.4 Adding a New Group . . . . . . . . . . . . . . . . . . . . . . . 68
5.1.3.5 Adding an Existing User to an Existing Group . . . . . . . . . 68
5.2 Control Access to Files with Linux File System Permissions . . . . . . . . . . . 69
5.2.1 Viewing File Permission . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.2.2 Changing File Permission . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2.2.1 Setting Default Permissions for New Files Using umask . . . . 70
5.3 Special File Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.3.1 Set User ID (setuid) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.4 Set Group ID (setgid) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.5 Special Permissions For Directories . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.5.1 Sticky Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.5.2 setgid Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.6 Create and Manage Access Control Lists (ACLs) . . . . . . . . . . . . . . . . . 76
5.6.1 Mounting File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.6.1.1 NFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.6.2 Setting Access ACLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.6.3 Setting Default ACLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.6.4 Retrieving ACLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.6.5 Archiving File Systems With ACLs . . . . . . . . . . . . . . . . . . . . . 79
5.6.6 Compatibility with Older Systems . . . . . . . . . . . . . . . . . . . . . 79
5.7 Authentication in Red Hat Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.7.1 Available Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.7.2 Configuring System Authentication . . . . . . . . . . . . . . . . . . . . 80
5.7.3 Using authconfig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.7.3.1 Installing the authconfig UI . . . . . . . . . . . . . . . . . . . 81
5.7.3.2 Launching the authconfig UI . . . . . . . . . . . . . . . . . . 82
5.7.3.3 Testing Authentication Settings . . . . . . . . . . . . . . . . . 82
5.7.3.4 Saving and Restoring Configuration Using authconfig . . . 83
5.7.4 Selecting the Identity Store for Authentication with authconfig . . . . 84
5.7.5 Configuring LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.7.5.1 Configuring LDAP Authentication from the UI . . . . . . . . 85
5.7.5.2 Configuring LDAP User Stores from the Command Line . . . 86
5.7.6 Configuring Kerberos (with LDAP or NIS) Using authconfig . . . . . 86
5.7.6.1 Configuring Kerberos Authentication from the UI . . . . . . 87

5.7.6.2 Configuring Kerberos Authentication from the Command Line 88

6 Linux Processes 89
(3/4 classes)
6.1 Viewing System Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.1.1 Using the ps Command . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.1.2 Using the top Command . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.2 Manage Linux Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.2.1 Sending Signals to Processes with kill and killall . . . . . . . . . . 92
6.2.1.1 kill Command — Kill the Process by Specifying Its PID . . 92
6.2.1.2 killall Command — Kill Processes by Name . . . . . . . . 93
6.2.2 Manage Priority of Linux Processes . . . . . . . . . . . . . . . . . . . . 93
6.2.2.1 The nice Command . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2.2.2 The renice Command . . . . . . . . . . . . . . . . . . . . . . 94
6.3 Schedule Future Linux Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.3.1 Prerequisites for Cron Jobs . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.3.2 Scheduling a Cron Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.3.2.1 Scheduling a Job as root User . . . . . . . . . . . . . . . . . . 95
6.3.3 Scheduling a Job as Non-root User . . . . . . . . . . . . . . . . . . . . . 96
6.3.4 Scheduling Hourly, Daily, Weekly, and Monthly Jobs . . . . . . . . . . 96
6.4 Configuring at Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

7 Services and Daemons 99
(3/4 classes)
7.1 Introduction to Systemd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7.2 Managing System Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7.2.1 Listing Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.2.2 Displaying Service Status . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.2.3 Starting a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.2.4 Stopping a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.2.5 Restarting a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.2.6 Enabling a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.2.7 Disabling a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
7.3 Configure a System to Use Time Services . . . . . . . . . . . . . . . . . . . . . 106
7.3.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.3.2 The NTP Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

8 System Logs 109
(1/2 class)
8.1 Locating Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
8.2 Interaction of Rsyslog and Journal . . . . . . . . . . . . . . . . . . . . . . . . . 110
8.3 Using the Journal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
8.3.1 Viewing Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

8.3.2 Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
8.3.3 Using the Live View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.3.4 Filtering Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.3.4.1 Filtering by Priority . . . . . . . . . . . . . . . . . . . . . . . . 113
8.3.4.2 Filtering by Time . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.3.5 Enabling Persistent Storage . . . . . . . . . . . . . . . . . . . . . . . . . 114

9 Linux Networking 117
(1 class)
9.1 Computer Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
9.2 Network Terminologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
9.2.1 IP Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
9.2.2 Subnet Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
9.2.3 MAC Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
9.3 NetworkManager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
9.4 Installing NetworkManager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
9.4.1 The NetworkManager Daemon . . . . . . . . . . . . . . . . . . . . . . . 120
9.4.2 Interacting with NetworkManager . . . . . . . . . . . . . . . . . . . . . 120
9.5 Configure IP Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
9.5.1 Using the Text User Interface, nmtui . . . . . . . . . . . . . . . . . . . . 121
9.5.2 Using the NetworkManager Command Line Tool, nmcli . . . . . . . . 122
9.5.2.1 Adding a Dynamic Ethernet Connection . . . . . . . . . . . . 122
9.5.2.2 Adding a Static Ethernet Connection . . . . . . . . . . . . . . 124
9.6 Editing Network Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . 125
9.6.1 Configuring a Network Interface Using ifcfg Files . . . . . . . . . . . 125
9.6.1.1 Static Network Settings . . . . . . . . . . . . . . . . . . . . . . 126
9.6.1.2 Dynamic Network Settings . . . . . . . . . . . . . . . . . . . . 126
9.7 Validating Network Configuration . . . . . . . . . . . . . . . . . . . . . . . . . 126
9.7.1 Check an IP Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
9.7.2 Checking the Link Status . . . . . . . . . . . . . . . . . . . . . . . . . . 127
9.7.3 Check Route Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
9.8 Setting Up Hostname and Name Resolution . . . . . . . . . . . . . . . . . . . . 128
9.8.1 Configuring Host Names Using Text User Interface, nmtui . . . . . . . 129
9.8.2 Configuring Host Names Using hostnamectl . . . . . . . . . . . . . . 129
9.8.2.1 View All the Host Names . . . . . . . . . . . . . . . . . . . . . 129
9.8.2.2 Set All the Host Names . . . . . . . . . . . . . . . . . . . . . . 129
9.8.2.3 Set a Particular Host Name . . . . . . . . . . . . . . . . . . . . 130
9.8.2.4 Clear a Particular Host Name . . . . . . . . . . . . . . . . . . 130
9.8.3 Configuring Host Names Using nmcli . . . . . . . . . . . . . . . . . . . 130

10 Securing Linux Systems 131
(2 classes)
10.1 Introduction to Computer Security . . . . . . . . . . . . . . . . . . . . . . . . . 131
10.1.1 What is Computer Security? . . . . . . . . . . . . . . . . . . . . . . . . . 131
10.1.2 Common Exploits and Attacks . . . . . . . . . . . . . . . . . . . . . . . 132
10.2 Mitigating Network Attacks Using Firewalls . . . . . . . . . . . . . . . . . . . 133
10.2.1 Linux Firewall Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 134
10.2.1.1 Netfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
10.2.1.2 iptables . . . . . . . . . . . . . . . . . . . . . . . . . 134
10.2.2 Introduction to firewalld . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
10.2.3 Installing firewalld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
10.2.3.1 Stopping firewalld . . . . . . . . . . . . . . . . . . . . . . . . . 136
10.2.3.2 Starting firewalld . . . . . . . . . . . . . . . . . . . . . . . . . 136
10.2.3.3 Checking If firewalld Is Running . . . . . . . . . . . . . . . . 136
10.2.4 Understanding firewalld Concepts . . . . . . . . . . . . . . . . . . . . 137
10.2.4.1 Understanding Network Zones . . . . . . . . . . . . . . . . . 137
10.2.4.2 Understanding Predefined Services . . . . . . . . . . . . . . . 138
10.2.4.3 Understanding the Direct Interface . . . . . . . . . . . . . . . 138
10.2.5 Configuring firewalld Using The Graphical User Interface . . . . . . . 138
10.2.5.1 Configuring IP Sets Using firewall-config . . . . . . . . . . . 142
10.2.6 Configuring the Firewall Using the firewall-cmd Command-Line Tool 142
10.2.6.1 Viewing the Firewall Settings Using the Command-Line
Interface (CLI) . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
10.2.6.2 Changing the Firewall Settings Using the Command-Line
Interface (CLI) . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
10.2.6.3 Using the Direct Interface . . . . . . . . . . . . . . . . . . . . . 148
10.2.6.4 Firewall Lockdown . . . . . . . . . . . . . . . . . . . . . . . . 149
10.2.6.5 Configuring Logging for Denied Packets . . . . . . . . . . . . 150
10.2.7 Using the iptables Service . . . . . . . . . . . . . . . . . . . . . . . . . . 150
10.2.7.1 iptables and IP Sets . . . . . . . . . . . . . . . . . . . . . . . . 151
10.3 Security Enhanced Linux: SELINUX . . . . . . . . . . . . . . . . . . . . . . . . 152
10.3.1 Benefits of running SELinux . . . . . . . . . . . . . . . . . . . . . . . . . 153
10.3.2 SELinux Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
10.3.3 SELinux States and Modes . . . . . . . . . . . . . . . . . . . . . . . . . . 154
10.3.4 SELinux Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
10.3.5 Domain Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
10.3.6 SELinux Contexts for Processes . . . . . . . . . . . . . . . . . . . . . . . 157
10.3.7 SELinux Contexts for Users . . . . . . . . . . . . . . . . . . . . . . . . . 158
10.3.8 Targeted Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
10.3.9 Confined Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
10.3.10 Unconfined Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
10.3.11 Confined and Unconfined Users . . . . . . . . . . . . . . . . . . . . . . 163
10.3.11.1 Changing the Default Mapping . . . . . . . . . . . . . . . . . 165
10.3.12 SELinux Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

10.3.13 Main Configuration File . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
10.3.14 Permanent Changes in SELinux States and Modes . . . . . . . . . . . . 167
10.3.15 Booleans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
10.3.15.1 Listing Booleans . . . . . . . . . . . . . . . . . . . . . . . . . . 167
10.3.15.2 Configuring Booleans . . . . . . . . . . . . . . . . . . . . . . . 168
10.3.16 Information Gathering Tools . . . . . . . . . . . . . . . . . . . . . . . . 168
10.3.16.1 avcstat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
10.3.16.2 seinfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
10.3.16.3 sesearch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
10.3.17 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
10.3.18 What Happens When Access is Denied . . . . . . . . . . . . . . . . . . 170
10.3.19 Top Three Causes of Problems . . . . . . . . . . . . . . . . . . . . . . . 170
10.3.19.1 Labeling Problems . . . . . . . . . . . . . . . . . . . . . . . . . 171
10.3.19.2 How are Confined Services Running? . . . . . . . . . . . . . . 172
10.3.19.3 Port Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
10.3.19.4 Evolving Rules and Broken Applications . . . . . . . . . . . . 174
10.3.20 Fixing Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
10.3.20.1 Linux Permissions . . . . . . . . . . . . . . . . . . . . . . . . . 174
10.3.20.2 Possible Causes of Silent Denials . . . . . . . . . . . . . . . . 174
10.3.20.3 Manual Pages for Services . . . . . . . . . . . . . . . . . . . . 175
10.3.20.4 Permissive Domains . . . . . . . . . . . . . . . . . . . . . . . . 176
10.3.20.5 Searching for and Viewing Denials . . . . . . . . . . . . . . . 178
10.3.20.6 ausearch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.3.20.7 aureport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.3.20.8 sealert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

11 Storage Administration 181
(1 1/2 classes)
11.1 Partitions and File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
11.1.1 Hard Disk Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . 181
11.1.2 Partitions: Turning One Drive Into Many . . . . . . . . . . . . . . . . . 182
11.1.3 Partitions Within Partitions — an Overview of Extended Partitions . . 183
11.2 Partitioning Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
11.2.1 Master Boot Record (MBR) Partitioning Scheme . . . . . . . . . . . . . 184
11.2.2 GUID Partition Table (GPT) Partitioning Scheme . . . . . . . . . . . . . 184
11.2.3 Device and Partition Naming Scheme . . . . . . . . . . . . . . . . . . . 185
11.3 Disk Partitions and Mount Points . . . . . . . . . . . . . . . . . . . . . . . . . . 186
11.3.1 Mount Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
11.3.2 Mount Points in a Machine . . . . . . . . . . . . . . . . . . . . . . . . . 187
11.3.2.1 Viewing the /etc/fstab File . . . . . . . . . . . . . . . . . . . 187
11.3.2.2 Viewing /etc/mtab File . . . . . . . . . . . . . . . . . . . . . . 188
11.3.2.3 Viewing /proc/mounts File . . . . . . . . . . . . . . . . . . . . 189

11.3.2.4 Issuing the df Command . . . . . . . . . . . . . . . . . . . . . 190
11.4 UUID and Other Persistent Identifiers . . . . . . . . . . . . . . . . . . . . . . . 191
11.4.1 Using the blkid Command . . . . . . . . . . . . . . . . . . . . . . . . . 191
11.5 Managing Partitions and File Systems . . . . . . . . . . . . . . . . . . . . . . . 192
11.5.1 fdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
11.5.1.1 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
11.5.2 parted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
11.5.2.1 Viewing the Partition Table . . . . . . . . . . . . . . . . . . . . 195
11.5.2.2 Creating a Partition . . . . . . . . . . . . . . . . . . . . . . . . 196
11.6 Swap Partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
11.6.1 Adding Swap Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
11.6.1.1 Add a Swap Partition . . . . . . . . . . . . . . . . . . . . . . . 197
11.6.1.2 Add a Swap File . . . . . . . . . . . . . . . . . . . . . . . . . . 198
11.7 LVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
11.7.1 LVM Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
11.7.2 LVM Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.7.2.1 Physical Volumes . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.7.2.2 Multiple Partitions on a Disk . . . . . . . . . . . . . . . . . . . 200
11.7.2.3 Volume Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.7.2.4 LVM Logical Volumes . . . . . . . . . . . . . . . . . . . . . . . 200
11.7.2.5 Extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
11.7.3 LVM Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
11.7.3.1 Creating LVM Logical Volumes . . . . . . . . . . . . . . . . . 201
11.7.3.2 Resizing LVM Logical Volumes . . . . . . . . . . . . . . . . . 203

12 Linux File Systems 205
(1 class)
12.1 Create, Mount, Unmount, and Use vfat, ext4, and xfs File Systems . . . . . . . 205
12.1.1 VFAT File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
12.1.2 The ext4 File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
12.1.3 The XFS File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
12.2 Mount and Unmount CIFS File Systems . . . . . . . . . . . . . . . . . . . . . . 207
12.2.1 Common Internet File System (CIFS) . . . . . . . . . . . . . . . . . . . . 207
12.2.2 Mount CIFS Share . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
12.3 Mount and Unmount NFS Network File Systems . . . . . . . . . . . . . . . . . 208
12.4 Managing Logical Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
12.4.1 Creating Linear Logical Volumes . . . . . . . . . . . . . . . . . . . . . . 209
12.4.2 Growing Logical Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . 210
12.4.3 Shrinking Logical Volumes . . . . . . . . . . . . . . . . . . . . . . . . . 211

13 System Issues 213
(1 1/2 classes)
13.1 System Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
13.1.1 Access Remote Systems Using ssh . . . . . . . . . . . . . . . . . . . . . 213
13.1.2 Using Key-Based Authentication . . . . . . . . . . . . . . . . . . . . . . 214
13.2 Securely Transfer Files Between Systems . . . . . . . . . . . . . . . . . . . . . . 216
13.3 Log in and Switch Users in Multiuser Targets . . . . . . . . . . . . . . . . . . . 216
13.4 System Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
13.4.1 Boot, Reboot, and Shut Down a System Normally . . . . . . . . . . . . 217
13.4.1.1 shutdown Command . . . . . . . . . . . . . . . . . . . . . . . 218
13.4.1.2 reboot Command . . . . . . . . . . . . . . . . . . . . . . . . . 219
13.4.2 Boot Systems into Different Targets Manually . . . . . . . . . . . . . . . 219
13.4.2.1 Working with Systemd Targets . . . . . . . . . . . . . . . . . . 219
13.4.2.2 Viewing the Default Target . . . . . . . . . . . . . . . . . . . . 221
13.4.2.3 Viewing the Current Target . . . . . . . . . . . . . . . . . . . . 221
13.4.2.4 Changing the Default Target . . . . . . . . . . . . . . . . . . . 222
13.4.2.5 Changing the Current Target . . . . . . . . . . . . . . . . . . . 222
13.4.2.6 Changing to Rescue Mode . . . . . . . . . . . . . . . . . . . . 223
13.4.2.7 Changing to Emergency Mode . . . . . . . . . . . . . . . . . . 223
13.4.2.8 Configure Systems to Boot into a Specific Target Automatically . . 224
13.5 Modify the System Bootloader . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
13.5.1 Introduction to GRUB 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
13.5.2 View Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
13.5.3 Customizing the GRUB 2 Configuration File . . . . . . . . . . . . . . . 226
13.5.3.1 Changing the Default Boot Entry . . . . . . . . . . . . . . . . 227
13.5.3.2 Editing a Menu Entry . . . . . . . . . . . . . . . . . . . . . . . 228
13.6 Interrupt the Boot Process in Order to Gain Access to a System . . . . . . . . . 228

14 Virtualized Systems 231
(1 class)
14.1 Introduction to Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
14.1.1 What is Virtualization? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
14.1.2 Why Use Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
14.1.3 Red Hat Virtualization Solutions . . . . . . . . . . . . . . . . . . . . . . 232
14.1.4 KVM and Virtualization in Red Hat Enterprise Linux . . . . . . . . . . 232
14.1.5 libvirt and libvirt Tools . . . . . . . . . . . . . . . . . . . . . . . . . 233
14.1.6 Virtualized Hardware Devices . . . . . . . . . . . . . . . . . . . . . . . 233
14.1.7 Virtual Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
14.2 Creating a Virtual Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
14.2.1 Basic Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
14.2.2 Required Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
14.2.3 Creating a Virtual Machine with Virtual Machine Manager . . . . . . . 235
14.2.3.1 The Virtual Machine Manager GUI . . . . . . . . . . . . . . . 236
14.2.3.2 Creating a Virtual Machine with Virtual Machine Manager . 236

14.2.3.3 Exploring the Guest Virtual Machine . . . . . . . . . . . . . . 239
14.3 Interacting with Virtualization from Command-Line . . . . . . . . . . . . . . . 241
14.3.1 virsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
14.3.1.1 Running virsh . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
14.3.1.2 Connecting to the Hypervisor . . . . . . . . . . . . . . . . . . 242
14.3.1.3 Displaying Information about a Guest Virtual Machine and
the Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
14.3.1.4 Starting a Guest Virtual Machine . . . . . . . . . . . . . . . . 243
14.3.1.5 Configuring a Virtual Machine to be Started Automatically at
Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
14.3.1.6 Rebooting a Guest Virtual Machine . . . . . . . . . . . . . . . 243
14.3.1.7 Shutting Down a Guest Virtual Machine . . . . . . . . . . . . 243
14.3.1.8 Suspending a Guest Virtual Machine . . . . . . . . . . . . . . 243
14.3.1.9 Other virsh Commands . . . . . . . . . . . . . . . . . . . . . 243
14.3.2 Other Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

Chapter 1

Introduction
1/4 classes

Chapter Goals
1. Introduce Red Hat Enterprise Linux.
2. Explain the similarities and differences between Red Hat
Enterprise Linux and CentOS.
3. Explain exactly what the responsibilities of a system
administrator are.
4. Put forth the expectations from the participants.

1.1 Red Hat, Red Hat Enterprise Linux and Others


1.1.1 Red Hat Inc.
Red Hat, Inc. is an American multinational software company providing open-source
software products to the enterprise community. Founded in 1993, Red Hat has its corporate
headquarters in Raleigh, North Carolina, with satellite offices worldwide.
Red Hat has become associated to a large extent with its enterprise operating system Red Hat
Enterprise Linux and with the acquisition of open-source enterprise middleware vendor JBoss.
Red Hat also offers Red Hat Virtualization (RHV), an enterprise virtualization product. Red
Hat provides storage, operating system platforms, middleware, applications, management
products, and support, training, and consulting services.
Red Hat creates, maintains, and contributes to many free software projects. It has acquired
several proprietary software product codebases through corporate mergers and acquisitions
and has released such software under open source licenses. As of March 2016, Red Hat is the
second largest corporate contributor to the Linux kernel version 4.5 after Intel.


1.1.2 Red Hat Enterprise Linux


Red Hat Enterprise Linux is a Linux distribution developed by Red Hat and targeted toward
the commercial market. Red Hat Enterprise Linux is released in server versions for x86, x86-64,
Itanium, PowerPC and IBM System z, and desktop versions for x86 and x86-64. All of Red
Hat’s official support and training, together with the Red Hat Certification Program, focus
on the Red Hat Enterprise Linux platform. Red Hat Enterprise Linux is often abbreviated to
RHEL, although this is not an official designation.

The first version of Red Hat Enterprise Linux to bear the name originally came onto the
market as “Red Hat Linux Advanced Server”. In 2003 Red Hat rebranded Red Hat Linux
Advanced Server to “Red Hat Enterprise Linux AS”, and added two more variants, Red Hat
Enterprise Linux ES and Red Hat Enterprise Linux WS.

1.1.3 CentOS
CentOS (Community Enterprise Operating System) is a Linux distribution that attempts
to provide a free, enterprise-class, community-supported computing platform functionally
compatible with its upstream source, Red Hat Enterprise Linux (RHEL). In January 2014,
CentOS announced that it was officially joining Red Hat while remaining independent from
RHEL, under a new CentOS governing board.

The first CentOS release in May 2004, numbered as CentOS version 2, was forked from RHEL
version 2.1AS. Since the release of version 7.0, CentOS officially supports only the x86-64
architecture, while versions older than 7.0-1406 also support IA-32 with Physical Address
Extension (PAE). As of December 2015, AltArch releases of CentOS 7 are available for the
IA-32 architecture, Power architecture, and for the ARMv7hl and AArch64 variants of the
ARM architecture.

CentOS is a community project that is developed, maintained, and supported by and for its
users and contributors. Red Hat Enterprise Linux is a subscription product that is developed,
maintained, and supported by Red Hat for its subscribers.

While CentOS is derived from the Red Hat Enterprise Linux codebase, CentOS and Red Hat
Enterprise Linux are distinguished by divergent build environments, QA processes, and,
in some editions, different kernels and other open source components. For this reason, the
CentOS binaries are not the same as the Red Hat Enterprise Linux binaries.

The two also have very different focuses. While CentOS delivers a distribution with strong
community support, Red Hat Enterprise Linux provides a stable enterprise platform with a
focus on security, reliability, and performance as well as hardware, software, and government
certifications for production deployments. Red Hat also delivers training, and an entire
support organization ready to fix problems and deliver future flexibility by getting features
worked into new versions.


1.2 System Administration


System administration is typically done by information technology experts for or within
an organization. Their job is to ensure that all related computer systems and services keep
working (e.g. a website or bank/telecommunication data center).

A system administrator, or sysadmin, is a person responsible for maintaining and operating a
computer system or network for a company or other organization. System administrators are
often members of an information technology department.

The duties of a system administrator are wide-ranging, and vary from one organization to
another. Sysadmins are usually charged with installing, supporting, and maintaining servers
or other computer systems, and planning for and responding to service outages and other
problems. Other duties may include scripting or light programming, project management
for systems-related projects, supervising or training computer operators, and being the
equivalent of a handyman for computer problems beyond the knowledge of technical support
staff.

Systems administrators and systems analysts charged with developing and maintaining
computer processes commonly distinguish between operational and developmental systems.
This separation provides maximum reliability and availability for the mission-critical systems
that ordinary users depend on for routine work, while still giving development and research
teams the resources they need to improve existing processes or build new ones for the
organization.

1.3 Expectations from You — the Participants


This is a rather intensive course, and you need to fulfill the following expectations to be
successful:

1. Be present on time.
2. Practice thoroughly both in the class and at home.
3. Devote enough time for home study.

Chapter 2

The Basics
2 classes

Chapter Goals
1. Access a shell prompt and issue commands with correct syntax.
2. Understand directory structure.
3. Create and edit text files.
4. Create, delete, copy, and move files and directories.
5. Create hard and soft links.
6. Locate, read, and use system documentation including man,
info, and files in /usr/share/doc.

2.1 The Command Line


2.1.1 The Shell
When we speak of the command line, we are really referring to the shell. The shell is a
program that takes keyboard commands and passes them to the operating system to carry
out. Almost all Linux distributions supply a shell program from the GNU Project called bash.
The name “bash” is an acronym for “Bourne Again SHell”, a reference to the fact that bash is
an enhanced replacement for sh, the original Unix shell program written by Steve Bourne.

The command line is an interesting entity, and if you’ve not used one before, it can be a bit
daunting. Don’t worry, with a bit of practice you’ll soon come to see it as your friend. Don’t
think of it as leaving the GUI behind so much as adding to it. While you can leave the GUI
altogether, most people open up a command line interface just as another window on their
desktop (in fact, you can have as many open as you like). This is also to our advantage as we
can have several command lines open and doing different tasks in each at the same time. We
can also easily jump back to the GUI when it suits us. Experiment until you find the setup
that suits you best.

A command line, or terminal, is a text-based interface to the system. You enter commands by
typing them on the keyboard, and the system gives you feedback in the form of text as well.

2.1.2 Opening a Shell Prompt


The desktop offers access to a shell prompt, an application that allows you to type commands
instead of using a graphical interface for all computing activities.

You can open a shell prompt by selecting Applications (the main menu on the panel) → System
Tools → Terminal.

You can also start a shell prompt by right-clicking on the desktop and choosing Open Terminal
from the menu.

To exit a shell prompt, click the X button on the upper right corner of the shell prompt window,
type exit at the prompt, or press [Ctrl]-[D] at the prompt.

2.1.3 Shell Prompt Basics


Graphical environments for Linux have come a long way in the past few years. You can
be perfectly productive in the X Window System and only have to open a shell prompt to
complete a few tasks. See Figure 2.1 for an example.

However, many Red Hat Enterprise Linux functions can be completed faster from the shell
prompt than from a graphical user interface (GUI). In less time than it takes to open a file
manager, locate a directory, and then create, delete, or modify files from a GUI, a task can
be finished with just a few commands at a shell prompt. As a system administrator, it is
absolutely essential that you are lightning fast in using the shell commands.

Figure 2.1: A Red Hat Enterprise Linux shell prompt.

A shell prompt looks similar to other command line interfaces with which you might be
familiar. Users type commands at a shell prompt, the shell interprets these commands, and
then the shell tells the OS what to do. Experienced users can write shell scripts to expand
their capabilities even further.

The shell prompt within a terminal window looks something like this:
[username@localhost.localdomain username]$

There are any number of symbols that can be used to indicate the end of the shell prompt, and
you can customize what your prompt looks like. However, there are two symbols that you
will see more often than any others, “$” and “#”. The first symbol, “$”, is the last character in
the prompt when you are logged in as a normal user. The shell prompt for a normal user
looks something like this:
[username@localhost.localdomain username]$

The second symbol, “#”, is the last character in the prompt when you are logged in as root.
This is true whether you logged in as root from the initial screen or if you executed the su -
command to become root. The shell prompt for root looks something like this:
[root@localhost.localdomain root]#

This slight difference can help remind you what privileges you currently have.
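
For example, a normal user can become root with the su - command mentioned above, and
the prompt changes accordingly (the host and directory names here are only illustrative):
[username@localhost.localdomain username]$ su -
Password:
[root@localhost.localdomain root]#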

2.1.4 Structure of Shell Prompt Commands


In general, a command run from the shell prompt will have the following format:
command -options <filename>

Both -options and <filename> are optional: the command may not require either one, or
it may require multiple options and files. When specifying multiple options, list them as a
group. For example, to see a long listing of information (-l) about all files (-a) in a directory,
you would run the command:
$ ls -la

There are many variations in required information types for individual commands. If you
aren’t sure about a command and how it is used, you can do one of three things:

• Enter the command alone at a shell prompt and press [Enter]. For example, entering cp
alone returns a brief description of the command and its usage. Other commands, such
as cat, run without arguments and simply wait for input. To quit such a command, press
[Ctrl]-[D]. If that does not work, press [Ctrl]-[C].
• Enter man command (Section 2.7.1) at a shell prompt. This opens the manual page for
that command. A man page is a manual written by the command’s developer explaining
how it is used and what each option means. You can also enter man man at a shell
prompt for more information on man pages. Navigate through the man page using the
directional keys on your keyboard. The [Space] bar moves you down a page, [B] moves
you up a page. To quit, press [Q]. If the man page for a command is either unavailable
or does not provide enough information, the info system may provide additional
information.
• Enter info command (Section 2.7.2) at a shell prompt. Some info pages have the same
information as man pages, but navigation is slightly different. For more information,
enter info info at a shell prompt.
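
For example, for the cp command mentioned in the first item above, the three approaches
look like this (the exact messages vary between versions):
$ cp          # entered alone, cp prints a brief usage message
$ man cp      # opens the manual page for cp; press [Q] to quit
$ info cp     # opens the info page for cp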

2.1.5 Useful Tips


Below are a few useful features of the bash shell that reduce the amount of typing required at
a shell prompt.

2.1.5.1 Tab Completion

Tab completion is one of the most useful shortcuts available at the command line. Red Hat
Enterprise Linux has the ability to “guess” what command, directory, or filename you are
entering at the shell prompt. Press the [Tab] key, and the shell offers possible completions for
partial words. The more letters typed before pressing [Tab], the closer the shell comes to the
intended command.

If there are multiple solutions to the partial text on the command line, the shell presents them
as a list. If the list is very long, the shell will first ask if you would like to display all matches.
Navigate long lists of potential matches by pressing the [Space] bar to go down one page, the
[B] key to go back one page, the directional (or “arrow”) keys to move one line at a time, and
[Q] to quit.

The shell assumes that the first word entered at the prompt is a command. The possible
completions it offers are the names of commands or applications. This can be helpful if you
are not sure of the exact spelling of a command or if you are searching for a certain command.
It can also serve to help a new user become familiar with the available commands.

For example,

1. Type the letter g at a prompt and press [Tab] twice.


2. The shell asks if you want to see all 255 possibilities (this number may vary). This
means that there are 255 commands that start with the letter “g”. Searching through
this list would take too much time.
3. Press [N] for no.
4. Entering more of the command name will produce a shorter list of possible matches.
For this example, type gnome and press [Tab] twice. A list of every command that starts
with gnome appears. This is a much shorter list, and can be scrolled through using the
same keys as man pages. Scroll to the end of the list to return to the shell prompt. The
letters gnome are still entered.
5. To finish entering a command with tab completion, enter just a few more characters,
-ca, and press [Tab] twice. The shell returns a match of gnome-calculator, and if you
then press [Enter], the GNOME Calculator application starts.

Tab completion also works on filenames and directories. The shell prompt assumes that
the second word on the command line is a filename or directory. Typing a partial word
and pressing [Tab] twice will generate possible completions according to the files and sub-
directories in your current working directory. The command line knows your directory
structure. You can use tab completion to enter a long string of sub-directories by typing the
first few letters of each directory and pressing [Tab] instead of navigating to one subdirectory
at a time.

For example, reaching the sub-directory:


example/sub-dir1/sub-dir2/sub-dir3/

would take a great deal of repetitive typing. However, with tab completion, a user would
only have to enter a few keystrokes:
$ cd ex[Tab]su[Tab]su[Tab]su[Tab]

2.1.5.2 Command History

It is unnecessary to type the same command over and over. The bash shell remembers
your past commands. These commands are stored in the .bash_history file in each user’s
home directory. To use the history, press the up arrow to scroll backward through the
commands you have already entered. The [Ctrl]-[R] shortcut searches through your previous
commands. Press [Ctrl]-[R] and type the beginning of the command you previously issued.
The command history stops at the most recent version of that command.

Commands that you only typed partially and did not follow with [Enter] are not saved into
your command history file. To clear your command history, type history -c.

By default, Red Hat Enterprise Linux stores 1000 commands. Each terminal window or shell
prompt stores a separate set of commands. If you gain root privileges by using the command
su -, the history file (and thus the commands) you access are root’s, not the user’s.
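
As a quick illustration (what you see depends, of course, on what you have typed so far in
this terminal):
$ history        # list the commands the shell remembers, most recent last
$ history -c     # clear the command history for this shell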

2.1.5.3 [Ctrl]-[Z] and Running Processes in the Background

Applications and processes can be started from the command line. When an application
starts from the command line, that particular shell is taken up with standard output for that
application until the application closes. The screen fills with gibberish or messages that can
be ignored. To continue to use the current shell while running an application from the same
shell, add the ampersand, &, to the end of the command line. For example, oowriter & starts
OpenOffice.org Writer and allows you to continue entering commands on the command line.
This is known as running a process in the background.

If you have started an application or process and forgotten to add the &, first press [Ctrl]-[Z]
— this suspends the application. To allow it to continue running without displaying standard
output, type bg and press [Enter]. This is referred to as running the application in the
background.
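
Putting the two approaches together, using oowriter as in the example above (any graphical
application will do):
$ oowriter &     # start the application in the background straight away
$ oowriter       # started in the foreground by mistake; press [Ctrl]-[Z] to suspend it
$ bg             # let the suspended application continue running in the background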

2.1.5.4 Wild Cards

Wild cards are place holders used to allow users to search for or use multiple files with similar
names. The subject of wild cards is part of the larger subject of regular expressions. Two of
the most common wild cards are * and ?.

The asterisk, *, represents any character or string of characters. The entry a*.txt could refer to
ab.txt as well as aardvark.txt.

The question mark represents a single character. The entry a?.txt could refer to ab.txt and
a1.txt, but not aardvark.txt.

What if you forget the name of the file you are looking for? Using wild cards or regular
expressions, you can perform actions on a file or files without knowing the complete file
name. Type out what you know, substituting a wild card for the remainder.

For example, to find a file called sneaksomething.txt, enter:


$ ls sneak*.txt

The shell lists every file that matches that pattern:


sneakers.txt

Regular expressions are more complex than the straightforward asterisk or question mark.

When an asterisk, for example, just happens to be part of a file name, as might be the case if
the file sneakers.txt was called sneak*.txt, that is when regular expressions can be useful.

Using the backslash (\), you can specify that you do not want to search out everything by
using the asterisk, but you are instead looking for a file with an asterisk in the name.

If the file is called sneak*.txt, type:


sneak\*.txt

Here is a brief list of wild cards and regular expressions:

* : Matches all characters.
? : Matches one character.
\* : Matches the * character.
\? : Matches the ? character.
\) : Matches the ) character.
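
To tie these together, assume the current directory contains only the files a1.txt, ab.txt,
aardvark.txt, and sneak*.txt used in the examples above. Then:
$ ls a?.txt
a1.txt  ab.txt
$ ls a*.txt
a1.txt  aardvark.txt  ab.txt
$ ls sneak\*.txt
sneak*.txt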

Figure 2.2: File system layout.

2.2 The Directory Structure


2.2.1 Understanding the File System Tree
Red Hat Enterprise Linux organizes its files in what is called a hierarchical directory structure.
This means that they are organized in a tree-like pattern of directories (sometimes called
folders in other systems), which may contain files and other directories. The first directory in
the file system is called the root directory. The root directory contains files and subdirectories,
which contain more files and subdirectories, and so on.
Note that unlike Windows, which has a separate file system tree for each storage device,
Unix-like systems such as Linux always have a single file system tree, regardless of how
many drives or storage devices are attached to the computer. Storage devices are attached (or
more correctly, mounted) at various points on the tree according to the whims of the system
administrator, the person (or persons) responsible for the maintenance of the system.
See Figure 2.2 to get an idea of the file system tree.
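
A quick way to see the top of this tree is to list the contents of the root directory itself; the
exact set of directories varies slightly from one installation to another:
$ ls /
bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  sbin  srv  sys  tmp  usr  var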

2.2.2 Absolute and Relative Paths


There are two types of paths we can use: absolute and relative. Whenever we refer to a file or
directory, we are using one of these paths, and in fact we can use either type (either way, the
system will still be directed to the same location).
To begin with, we have to understand that the file system under Linux is a hierarchical
structure. At the very top of the structure is what’s called the root directory. It is denoted by
a single slash (/). It has subdirectories, which have subdirectories of their own, and so on.
Files may reside in any of these directories.

An absolute path is the location of a directory or file specified in relation to the root directory,
and it always starts with the root directory (i.e., with a forward slash). You can identify
absolute paths easily because they always begin with a forward slash (/).

Relative paths specify a location (file or directory) in relation to where we currently are in the
system. They do not begin with a slash.
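
For example, assuming a user mmasroorali whose home directory contains a Documents
subdirectory (the directory name is only illustrative), the same location can be reached either
way:
$ cd /home/mmasroorali/Documents     # absolute path: begins with /
$ cd Documents                       # relative path: from within /home/mmasroorali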

2.2.3 File System Hierarchy Standard (FHS)


Red Hat Enterprise Linux uses the Filesystem Hierarchy Standard (FHS) file system structure
(the complete standard is available online at http://www.pathname.com/fhs/), which defines
the names, locations, and permissions for many file types and directories.

The FHS document is the authoritative reference to any FHS-compliant file system, but the
standard leaves many areas undefined or extensible. This section is an overview of the
standard and a description of the parts of the file system not covered by the standard.

Compliance with the standard means many things, but the two most important are
compatibility with other compliant systems and the ability to mount a /usr/ partition
as read-only. This second point is important because the directory contains common
executables and should not be changed by users. Also, since the /usr/ directory is mounted
as read-only, it can be mounted from the CD-ROM or from another machine via a read-only
NFS mount.

2.2.4 FHS Organization


The directories and files noted here are a small subset of those specified by the FHS document.

2.2.4.1 Home Directory

A home directory, also called a login directory, is the directory on Unix-like operating systems
that serves as the repository for a user’s personal files, directories and programs. It is also the
directory that a user is first in after logging into the system.

A home directory is created automatically for every ordinary user in the directory called
/home. A standard subdirectory of the root directory, /home has the sole purpose of containing
users’ home directories. The root directory, which is designated by a forward slash (/), is the
directory that contains all other directories and their subdirectories as well as all files on the
system.

The name of a user’s home directory is by default identical to that of the user. Thus, for
example, a user with a user name of mmasroorali would typically have a home directory
named mmasroorali. It would have an absolute path name of /home/mmasroorali.
2 The complete standard is available online at http://www.pathname.com/fhs/.


The only user that will by default have its home directory in a different location is the root (i.e.,
administrative) user, whose home directory is /root. /root is another standard subdirectory
of the root directory, and it should not be confused with the root directory (although it
sometimes is by new users). For security purposes, even system administrators should have
ordinary accounts with home directories in /home into which they routinely log in, and they
should use the root account only when absolutely necessary.
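As a brief illustration, assuming a user named mmasroorali is logged in, the shell variable HOME holds the path of the home directory, and cd with no argument goes there:

$ echo $HOME
/home/mmasroorali
$ cd
$ pwd
/home/mmasroorali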

2.2.4.2 The /boot/ Directory

The /boot/ directory contains static files required to boot the system, such as the Linux kernel.
These files are essential for the system to boot properly. Do not remove the /boot/ directory.
Doing so renders the system unbootable.

2.2.4.3 The /dev/ Directory

The /dev/ directory contains device nodes that either represent devices that are attached to
the system or virtual devices that are provided by the kernel. These device nodes are essential
for the system to function properly. The udev daemon takes care of creating and removing all
these device nodes in /dev/.

Devices in the /dev directory and its subdirectories are either character devices (providing only a serial stream of input/output) or block devices (accessible randomly). Character devices include the mouse, keyboard, and modem, while block devices include hard disks, floppy drives, etc.
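The first character of an ls -l listing shows the device type: b for a block device and c for a character device. The device names, numbers, and dates below are only illustrative and will differ from system to system:

$ ls -l /dev/sda /dev/tty
brw-rw----. 1 root disk 8, 0 Aug 20 07:00 /dev/sda
crw-rw-rw-. 1 root tty  5, 0 Aug 20 07:00 /dev/tty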

2.2.4.4 The /etc/ Directory

The /etc/ directory is reserved for configuration files that are local to the machine. No
binaries are to be placed in /etc/. Any binaries that were once located in /etc/ should be
placed into /sbin/ or /bin/.

2.2.4.5 The /lib/ Directory

The /lib/ directory should contain only those libraries needed to execute the binaries in
/bin/ and /sbin/. These shared library images are particularly important for booting the
system and executing commands within the root file system.

2.2.4.6 The /media/ Directory

The /media/ directory contains subdirectories used as mount points for removable media
such as usb storage media, DVDs, CD-ROMs, and Zip disks.

2.2.4.7 The /mnt/ Directory

The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system
mounts. For all removable media, please use the /media/ directory. Automatically detected
removable media will be mounted in the /media directory.


2.2.4.8 The /opt/ Directory

The /opt/ directory provides storage for most application software packages.

A package placing files in the /opt/ directory creates a directory bearing the same name as
the package. This directory, in turn, holds files that otherwise would be scattered throughout
the file system, giving the system administrator an easy way to determine the role of each
file within a particular package. For example, if sample is the name of a particular software
package located within the /opt/ directory, then all of its files are placed in directories inside
the /opt/sample/ directory, such as /opt/sample/bin/ for binaries and /opt/sample/man/
for manual pages.

2.2.4.9 The /proc/ Directory

The /proc/ directory contains special files that either extract information from or send
information to the kernel. Examples include system memory, CPU information, hardware
configuration etc.

Due to the great variety of data available within /proc/ and the many ways this directory can
be used to communicate with the kernel, an entire chapter has been devoted to the subject.
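For a first look, the files under /proc/ can be read with ordinary text tools. For example:

$ cat /proc/version
$ less /proc/cpuinfo
$ head -5 /proc/meminfo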

2.2.4.10 The /sbin/ Directory

The /sbin/ directory stores executables used by the root user. The executables in /sbin/ are
used at boot time, for system administration and to perform system recovery operations.

Of this directory, the FHS says: /sbin contains binaries essential for booting, restoring,
recovering, and/or repairing the system in addition to the binaries in /bin. Programs
executed after /usr/ is known to be mounted (when there are no problems) are generally
placed into /usr/sbin. Locally-installed system administration programs should be placed
into /usr/local/sbin.

2.2.4.11 The /srv/ Directory

The /srv/ directory contains site-specific data served by your system running Red Hat
Enterprise Linux. This directory gives users the location of data files for a particular service,
such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/
directory.

2.2.4.12 The /sys/ Directory

The /sys/ directory utilizes the new sysfs virtual file system specific to the kernel.

2.2.4.13 The /usr/ Directory

The /usr/ directory is for files that can be shared across multiple machines. The /usr/
directory is often on its own partition and is mounted read-only.


2.2.4.14 The /usr/local/ Directory

The /usr/local hierarchy is for use by the system administrator when installing software
locally. It needs to be safe from being overwritten when the system software is updated. It
may be used for programs and data that are shareable among a group of hosts, but not found
in /usr. The /usr/local/ directory is similar in structure to the /usr/ directory.

In Red Hat Enterprise Linux, the intended use for the /usr/local/ directory is slightly
different from that specified by the FHS. The FHS says that /usr/local/ should be where
software that is to remain safe from system software upgrades is stored. Since software
upgrades can be performed safely with RPM Package Manager (RPM), it is not necessary to
protect files by putting them in /usr/local/. Instead, the /usr/local/ directory is used for
software that is local to the machine.

For instance, if the /usr/ directory is mounted as a read-only NFS share from a remote host,
it is still possible to install a package or program under the /usr/local/ directory.

2.2.4.15 The /var/ Directory

Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files
or need spool/ or lock/ directories should write them to the /var/ directory.

/var/ is for variable data files. This includes spool directories and files, administrative and
logging data, and transient and temporary files.

System log files, such as messages and lastlog, go in the /var/log/ directory. The
/var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/
directory, usually in directories for the program using the file. The /var/spool/ directory
has subdirectories for programs in which data files are stored.

2.3 Basic Navigation


2.3.1 Determining Your Current Directory
As you browse directories, it is easy to get lost or forget the name of your current directory.
By default, the Bash prompt in Red Hat Enterprise Linux shows only your current directory,
not the entire path.

To display the location of your current working directory, enter the command:
$ pwd

2.3.2 More on Paths


Imagine navigating the file system as climbing around in the branches of the tree. The
branches you would climb and traverse in order to get from one part of the tree to another
would be the path from one location to another. As already mentioned above (Section 2.2.2),
there are two kinds of paths, depending on how you describe them. A relative path describes


the route starting from your current location in the tree. An absolute path describes the route
to the new location starting from the tree trunk (the root directory).
Navigating via the shell prompt utilizes either relative or absolute paths. In some instances,
relative paths are shorter to type than absolute paths. In others, the unambiguous absolute
path is easier to remember.
~ (tilde) is a shortcut for your home directory, eg, if your home directory is
/home/mmasroorali then you could refer to the directory Documents with the path
/home/mmasroorali/Documents or ~/Documents.
There are two special characters used with relative paths. These characters are “.” and “..”.
A single period, “.”, is shorthand for “here”. It references your current working directory.
Two periods, “..”, indicates the directory one level up from your current working directory.
If your current working directory is your home directory, /home/user/, “..” indicates the
next directory up, /home/.
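For example, assuming your current working directory is /home/mmasroorali:

$ pwd
/home/mmasroorali
$ cd ..
$ pwd
/home
$ cd ./mmasroorali
$ pwd
/home/mmasroorali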
Consider moving from the /usr/share/doc/ directory to the /tmp/ directory. The relative
path between the two requires a great deal of typing, and requires knowledge of the absolute
path to your current working directory. The relative path would look like this: ../../../tmp/.
The absolute path is much shorter: /tmp/. The relative path requires you to move up three
directories to the / directory before moving to the /tmp/ directory. The absolute path, which
always starts at the / directory, is much simpler.
However, the relative path between two closely-related directories may be simpler
than the absolute path. Consider moving from /var/www/html/pics/vacation/ to
/var/www/html/pics/birthday/. The relative path is: ../birthday/. The absolute path is:
/var/www/html/pics/birthday/. Clearly, the relative path is shorter in this case.
There is no right or wrong choice: both relative and absolute paths point to the same branch
of the tree. Choosing between the two is a matter of preference and convenience.

2.3.3 Changing Directories


Changing directories is easy as long as you know where you are (your current directory) and
how that relates to where you want to go.
To change directories, use the cd command. Typing this command by itself returns you to
your home directory; moving to any other directory requires a path name.
You can use absolute or relative path names. Absolute paths start at the top of the file system
with / (referred to as root) and then look down for the requested directory; relative paths
look down from your current directory, wherever that may be. The following directory tree
illustrates how cd operates:
/
/directory1
/directory1/directory2
/directory1/directory2/directory3


If you are currently in directory3 and you want to switch to directory1, you need to move
up in the directory tree.

Executing the command:


$ cd directory1

while you are in directory3, presents you with an error message explaining that there is no
such directory. This is because there is no directory1 below directory3.

To move up to directory1, type:


$ cd /directory1

This is an example of an absolute path. It tells Linux to start at the top of the directory tree (/)
and change to directory1. A path is absolute if the first character is a /. Otherwise, it is a
relative path.

Using absolute paths allows you to change to a directory from the / directory, which requires
you to know and type the complete path. Using relative paths allows you to change to a
directory relative to the directory you are currently in, which can be convenient if you are
changing to a subdirectory within your current directory.

The command cd .. tells your system to go up to the directory immediately above the one in
which you are currently working. To go up two directories, use the cd ../.. command.

Use the following exercise to test what you have learned regarding absolute and relative
paths. From your home directory, type the relative path:
$ cd ../../etc/X11

After using the full command in the example, you should be in the directory X11, which is
where configuration files and directories related to the X Window System are available.

Take a look at your last cd command. You told your system to:

1. Go up one level to your login directory’s parent directory (/home).


2. Then go up to that directory’s parent (which is the root, or /, directory).
3. Then go down to the /etc/ directory.
4. Finally, go to the X11/ directory.

Conversely, using an absolute path moves you to the /etc/X11/ directory more quickly. For
example:
$ cd /etc/X11

Absolute paths start from the root directory (/) and move down to the directory you specify.

Always make sure you know which working directory you are in before you state the relative path to the directory or file you want to get to. You do not have to worry about your position in the file system, though, when you state the absolute path to another directory or file.


Command              Function

cd                   Returns you to your login directory
cd ~                 Also returns you to your login directory
cd /                 Takes you to the entire system's root directory
cd /root             Takes you to the home directory of the root, or superuser, account
                     created at installation; you must be the root user to access this
                     directory
cd /home             Takes you to the home directory, where user login directories are
                     usually stored
cd ..                Moves you up one directory
cd ~otheruser        Takes you to otheruser's login directory, if otheruser has granted
                     you permission
cd /dir1/subdirfoo   Regardless of which directory you are in, this absolute path takes
                     you directly to subdirfoo, a subdirectory of dir1
cd ../../dir3/dir2   This relative path takes you up two directories, then to dir3, then
                     to the dir2 directory

Table 2.1: cd options.

If you are not sure, type pwd and your current working directory is displayed, which can be your guide for moving up and down directories using relative path names.

See Table 2.1 to get a comprehensive idea.

Now that you are starting to understand how to change directories, see what happens when
you change to root’s login directory (the superuser account). Type:
$ cd /root

If you are not logged in as root, you are denied permission to access that directory.

Denying access to the root and other users’ accounts (or login directories) is one way your
Linux system prevents accidental or malicious tampering.

2.4 More About Files


2.4.1 Everything is a File
The first thing we need to appreciate with Linux is that under the hood, everything is actually
a file. A text file is a file, a directory is a file, your keyboard is a file (one that the system reads


from only), your monitor is a file (one that the system writes to only) etc. To begin with, this
won’t affect what we do too much but keep it in mind as it helps with understanding the
behavior of Linux as we manage files and directories.

2.4.2 Linux is an Extensionless System


This one can sometimes be hard to get your head around but as you work through the sections
it will start to make more sense. A file extension is normally a set of 2-4 characters after a full stop at the end of a file name, which denotes what type of file it is. The following are common extensions:

• file.exe - an executable file, or program.


• file.txt - a plain text file.
• file.png, file.gif, file.jpg - an image.

In other systems such as Windows the extension is important, and the system uses it to determine what type of file it is. Linux, however, ignores the extension and instead looks at the contents of a file to work out what type it is.

There is a command called file which we can use to find this out:
$ file [path]
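For example (the exact descriptions printed depend on the files present on your system):

$ file /etc/hostname
/etc/hostname: ASCII text
$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, x86-64, ...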

2.4.3 Linux is Case Sensitive


This is very important and a common source of problems for people new to Linux. Other
systems such as Windows are case insensitive when it comes to referring to files. Linux is not
like this. As such it is possible to have two or more files and directories with the same name
but letters of different case.
$ ls Documents
FILE1.txt File1.txt file1.TXT ...
$ file Documents/file1.txt
Documents/file1.txt: ERROR: cannot open 'file1.txt' (No such file or directory)

Linux sees these all as distinct and separate files.

2.4.4 Spaces in Names


Spaces in file and directory names are perfectly valid but we need to be a little careful with
them. As you would remember, a space on the command line is how we separate items. They
are how we know what is the program name and can identify each command line argument.
If we wanted to move into a directory called Holiday Photos for example the following would
not work:


$ ls Documents
FILE1.txt File1.txt file1.TXT Holiday Photos ...
$ cd Holiday Photos
bash: cd: Holiday: No such file or directory

What happens is that Holiday Photos is seen as two command line arguments. To get around
this we need to identify to the terminal that we wish Holiday Photos to be seen as a single
command line argument. There are two ways to go about this, either way is just as valid.

2.4.4.1 Quotes

The first approach involves using quotes around the entire item. You may use either single or
double quotes (later on we will see that there is a subtle difference between the two but for
now that difference is not a problem). Anything inside quotes is considered a single item:
$ cd 'Holiday Photos'
$ pwd
/home/mmasroorali/Documents/Holiday Photos

2.4.4.2 Escape Characters

Another method is to use what is called an escape character, which is a backslash (\). What
the backslash does is escape (or nullify) the special meaning of the next character:
$ cd Holiday\ Photos
$ pwd
/home/mmasroorali/Documents/Holiday Photos

In the above example the space between Holiday and Photos would normally have a special
meaning which is to separate them as distinct command line arguments. Because we placed
a backslash in front of it, that special meaning was removed.

In Section 2.1.5.1 we learned about something called Tab Completion. If you use that before
encountering the space in the directory name then the terminal will automatically escape any
spaces in the name for you.

2.4.5 Hidden Files and Directories


Linux actually has a very simple and elegant mechanism for specifying that a file or directory
is hidden. If the file or directory’s name begins with a . (full stop) then it is considered to be
hidden. You don’t even need a special command or action to make a file hidden. Files and
directories may be hidden for a variety of reasons. Configuration files for a particular user


(which are normally stored in their home directory) are hidden for instance so that they don’t
get in the way of the user doing their everyday tasks.

To make a file or directory hidden, all you need to do is create the file or directory with its name beginning with a . or rename it to be as such. Likewise, you may rename a hidden file
to remove the . and it will become unhidden. The command ls which we have seen in the
previous section will not list hidden files and directories by default. We may modify it by
including the command line option -a so that it does show hidden files and directories:
$ ls Documents
FILE1.txt File1.txt file1.TXT ...

$ ls -a Documents
.  ..  FILE1.txt  File1.txt  file1.TXT  .hidden.file.txt ...

In the above example you will see that when we listed all items in our current directory the
first two items were . and ..

If you’re unsure what these are then you may wish to have a read over Section 2.3.2.

2.5 Manipulating Files and Directories


2.5.1 View Directory Contents
Using the ls command, you can display the contents of your current directory.

Many options are available with the ls command. The ls command, by itself, does not show
all the files in the directory. Some files are hidden files (also called dot files) and can only be
seen with an additional option specified to the ls command.

Type the command ls -a. Now you can view files that begin with dots.

Hidden files are most often configuration files which set preferences in programs, window
managers, shells, and more. The reason they are hidden is to help prevent any accidental
tampering by the user. When you are searching for something in a directory, you are not
usually looking for these configuration files. Keeping them hidden helps to avoid some screen
clutter when viewing directories at the shell prompt.

Viewing all the files using the ls -a command can give you plenty of detail, but you can
view still more information by using multiple options.

If you want to see the size of a file or directory, when it was last modified, and so on, add the long option (-l) to the ls -a command. This command shows the file's last modification date, its size, ownership, permissions, and more.


You do not have to be in the directory whose contents you want to view to use the ls command.
For example, to see what is in the /etc/ directory from your home directory, type:
$ ls -al /etc

The following is a brief list of options commonly used with ls. Remember, you can view the
full list by reading the ls man page (man ls).

-a (all): Lists all files in the directory, including hidden files (.filename). The .. and . at
the top of your list refer to the parent directory and the current directory, respectively.
-l (long): Lists details about contents, including permissions (modes), owner, group, size, last modification date, whether the file is a link to somewhere else on the system and where its link points.
-F (file type): Adds a symbol to the end of each listing. These symbols include /, to indicate
a directory; @, to indicate a symbolic link to another file; and *, to indicate an executable
file.
-r (reverse): Lists the contents of the directory in reverse sort order.
-R (recursive): Lists the contents of all directories below the current directory recursively.
-S (size): Sorts files by their sizes.
-h (human readable): With -l print human readable sizes (e.g., 1K 234M 2G).
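These options can be combined. For example (Documents is just an example directory name):

$ ls -alhF /etc
$ ls -lhS Documents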

2.5.2 Creating a Directory


To create a new directory using a shell prompt, use the command mkdir. Enter:
mkdir <directory-name>, replacing <directory-name> with the intended title of the new
directory.

The -p option tells mkdir to make parent directories as needed. The -v option tells us what it is doing.

See the following commands:


$ mkdir -p linuxtutorialwork/foo/bar
$ cd linuxtutorialwork/foo/bar
$ pwd

And now the same command but with the -v option:


$ mkdir -pv linuxtutorialwork/foo/bar
mkdir: created directory 'linuxtutorialwork/foo'
mkdir: created directory 'linuxtutorialwork/foo/bar'
$ cd linuxtutorialwork/foo/bar
$ pwd


Warning: Deleting a file or directory with rm or rmdir is permanent; you cannot un-delete it.

2.5.3 Removing a File or Directory


2.5.3.1 Deleting Files

To delete a file using rm enter the following at a shell prompt:


$ rm filename

The second word can also be a path, but must end in a file:
$ rm ../../filename

There are many options to rm. To view them all, enter man rm at the shell prompt.

-i (interactive): Prompts you to confirm the deletion. This option can stop you from deleting
a file by mistake.
-f (force): Overrides interactive mode and removes the file(s) without prompting. This might
not be a good idea, unless you know exactly what you are doing.
-v (verbose): Shows the progress of the files as they are being removed.
-r (recursive): Deletes a directory and all files and subdirectories it contains.

The interactive, or -i, option for rm causes it to ask if you are sure before permanently deleting
a file or directory.
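For example (oldnotes.txt is just an example file name; answer y to delete the file or n to keep it):

$ rm -i oldnotes.txt
rm: remove regular file 'oldnotes.txt'? y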

2.5.3.2 Deleting Directories

There are two commands that can be used to delete directories. The first is rmdir and the
second is rm.

The rmdir command will only delete directories that are empty. If you are concerned about
accidentally deleting a directory that is not empty, use this command:
$ rmdir directory/

The above command permanently deletes directory/ if it is empty.

If you want to delete a directory and all of its contents, use the command rm -rf. Note that
if you enter rm -rf, the shell will not ask if you are sure before permanently deleting the
directory:
$ rm -rf /dir1/

The above command deletes /dir1/ and every file and sub-directory that exists inside.


2.5.4 Copying a File or Directory


To create a copy of an existing file, use the cp command. To view cp options, read the man
page by entering man cp at a shell prompt.

To copy a file within the current directory specify the new name as the third word on the
command line:
$ cp original_file new_file

This command creates a new file, named new_file, with the same content as the original file.

To copy a file to a different directory, specify a path as the third word on the command line:
$ cp original_file /dir1/dir2/

This command creates a copy of original_file in dir2/. If the last part of the path is a
filename instead of a directory, the copy has that new name:
$ cp original_file /dir1/dir2/ new_file

This creates a new file named new_file with the contents of original_file in dir2/.

Alternatively, if you know where the file is and would like to place a copy of it in your current
directory, enter the path as word two and “.” as the third word:
$ cp /dir1/dir2/filename .

The above command places a copy of filename in your current working directory.

The cp command can be used for copying directories as well, with the -r (recursive) option. The syntax is as follows:
$ cp -r dir1 dir2

The following example copies the /home/user/letters directory and all its files to the /usb/backup directory:
$ cp -avr /home/user/letters /usb/backup

Where,

-a : Preserve the specified attributes such as directory and file mode, ownership, and timestamps.
-v : Explain what is being done.
-r : Copy directories recursively.

Copy a directory called /tmp/conf to /tmp/backup:


$ cp -avr /tmp/conf/ /tmp/backup

2.5.5 Moving a File or Directory


To move a file or directory from one location to another, use the command mv.

Common useful options for mv include:

-i (interactive): Prompts you before the file you have selected overwrites an existing file in the destination directory.


Warning: When you move or rename a file to a destination that already exists, Linux will not issue any warning by default and the destination will be overwritten.

-f (force): Overrides the interactive mode and moves without prompting. Be very careful about using this option.
-v (verbose): Shows the progress of the files as they are being moved.

To move a file from the current directory to another location, enter a path as the third word
on the command line:
$ mv filename /dir1/

This command would remove filename from the current working directory and place it in
/dir1/.

Alternatively, a path to the location of the file may be entered as the second word and “.” as
the third word. This moves the file from the location specified in word two into your current
working directory:
$ mv /tmp/filename .

The above command moves the file filename from the /tmp/ directory into your current
working directory.

Finally, both words two and three may be paths:


$ mv ../../filename /tmp/new_name

The command above moves the file filename from a directory two levels up to the /tmp/
directory while renaming the file new_name.

2.5.6 Renaming a File or a Directory


To rename a file or directory, use the mv command.

To rename a file with mv, the third word on the command line must end in the new filename:
$ mv original_name new_name

The above command renames the file original_name to new_name:


$ mv ../original_name new_name

The above command moves the file original_name from one directory up to the current
directory and renames it new_name:
$ mv original_name /dir1/dir2/dir3/new_name

The above command moves the file original_name from the current working directory to
directory dir3/ and renames it new_name.


Figure 2.3: Hard and soft links.

2.6 Create Hard and Soft Links


A link is a special type of file that contains a reference to another file or directory. In simple words, a link makes a single file or directory available at two or more locations.

Every file has an associated data structure that contains important information about that
file. This data structure is known as an i-node. A filename is used to refer to an i-node. The
filename is simply used as a reference to that data structure. An i-node has information
about the length of the file, the time the file was most recently modified or accessed, the time
the i-node itself was most recently modified, the owner of the file, the group identifications
and access permissions, the number of links to the file (we will discuss these shortly), and
pointers to the location on the disk where the data contained in the file is stored. A directory
is a file that contains a list of names, and for each name there is a pointer to the i-node for the
file or directory.

The ln command is used to create a link.

There are two types of links that can be created: hard links and soft (symbolic) links.

2.6.1 Hard Links


A hard link is basically a second filename for the same file. If you hard-link a file, the data exists only once on the file system and therefore takes up space only once.

Hard links to a file share the same i-node number. The ls -l command shows the number of links to each file in the link count column. Hard links refer to the actual file contents. You cannot create a hard link to a directory. Even if the original name is removed, the remaining links still give access to the contents of the file. See Figure 2.3.

Hard links cannot span file systems; a hard link must reside on the same file system as the file it links to.

Suppose that we have a file one.txt that contains some string:


$ cat one.txt

CSE, BUET


The link count increases by one every time you create a new hard link to the file:
$ ls -li one.txt
47317304 -rw------- 1 masroor masroor 10 Aug 20 06:59 one.txt

Now we use the ln command to create a link to one.txt called two.txt:


$ ln one.txt two.txt

The two names one.txt and two.txt now refer to the same data. Use cat command to view
them.

Also, check the link counts and other relevant information:


$ ls -li one.txt two.txt
47317304 -rw------- 2 masroor masroor 10 Aug 20 06:59 one.txt
47317304 -rw------- 2 masroor masroor 10 Aug 20 06:59 two.txt

$ find . -inum 47317304


./ one.txt
./ two.txt

If we modify the contents of file two.txt, then we also modify the contents of file one.txt.
Add some string to two.txt, and view the contents of the both the files.

Again, if we modify the contents of file one.txt, then we also modify the contents of file
two.txt. Add some string to one.txt, and view the contents of the both the files.

Removing any link, just reduces the link count but doesn’t affect the other links:
$ rm -v one.txt
removed 'one.txt'
$ ls -l two.txt
-rw------- 1 masroor masroor 10 Aug 20 06:59 two.txt

2.6.2 Soft Links or Symbolic Links


A soft link, also called symbolic link, is a file that contains the name of another file. We can
then access the contents of the other file through that name. That is, a symbolic link is like a
pointer to the pointer to the file’s contents. See Figure 2.3.

Soft links are able to span file systems. A Soft Link can link to a directory.

Suppose that in the previous example (Section 2.6.1), we had used the -s option of ln to
create a soft link:
$ ln -s one.txt twoS.txt
$ ls -F
one.txt twoS.txt@ two.txt

A symbolic link, that ls -F displays with a @ symbol, has been added to the directory.

Let us view a bit more detailed information:


Note: The total in the very first line of ls -l output is the total number of file system blocks, including indirect blocks, used by the listed files. If you run ls -s on the same files and sum the reported numbers, you will get that same number.

$ ls -l
total 8
-rw------- 2 masroor masroor 10 Aug 20 07:31 one.txt
lrwxrwxrwx 1 masroor masroor  7 Aug 20 07:33 twoS.txt -> one.txt
-rw------- 2 masroor masroor 10 Aug 20 07:31 two.txt

In the ls -l output, the soft link has a link count of 1 in the second column, and its entry points to the original file.

The leading l in the ls -l output above indicates that the file is a soft link. The size of the soft link is the number of characters in the pathname it stores (one.txt), which is 7. The stored pathname can be absolute or relative.

Soft links have i-node numbers different from the files they point to.


$ ls -li
total 8
47317114 -rw------- 2 masroor masroor 10 Aug 20 07:31 one.txt
47317304 lrwxrwxrwx 1 masroor masroor  7 Aug 20 07:33 twoS.txt -> one.txt
47317114 -rw------- 2 masroor masroor 10 Aug 20 07:31 two.txt

A soft link contains the path of the original file, not its contents.

Removing a soft link does not affect the original file, but when the original file is removed, the link becomes a dangling link that points to a nonexistent file. If we remove the file one.txt, we can no longer access the data through the symbolic link twoS.txt:
$ ls -F
one.txt twoS.txt@ two.txt
$ rm -v one.txt
removed 'one.txt'
$ ls -l
total 4
lrwxrwxrwx 1 masroor masroor 7 Aug 20 07:33 twoS.txt -> one.txt
-rw------- 1 masroor masroor 10 Aug 20 07:31 two.txt
$ ls -F
twoS.txt@ two.txt
$ cat twoS.txt
cat: twoS.txt: No such file or directory
$ cat two.txt


CSE, BUET

The link twoS.txt contains the name one.txt, and there is no longer a file with that name. On the other hand, two.txt has its own pointer to the contents of the file we called one.txt, and hence we can still use it to access the data.

2.7 Getting Help in Red Hat Enterprise Linux


There are several resources available to get the information you need to use and configure your
Red Hat Enterprise Linux system. Along with the Red Hat Enterprise Linux documentation
there are manual pages, documents that detail usage of important applications and files;
Info pages which break information about an application down by context-sensitive menus;
and help files that are included in the main menu bar of graphical applications. You can
choose any method of accessing documentation that best suits your needs, as all of these
resources are either already installed on your Red Hat Enterprise Linux system or can be
easily installed.

2.7.1 Manual Pages


Applications, utilities, and shell prompt commands usually have corresponding manual pages (also called man pages) that show the reader the available options and values of a file or executable. Man pages are structured in such a way that users can quickly scan the page for pertinent information, which is important when dealing with commands that they have not previously encountered.

2.7.1.1 Using man

Man pages can be accessed via shell prompt by typing the command man and the name of
the executable. For example, to access the man page for the ls command, type the following:
$ man ls

The NAME field shows the executable’s name and a brief explanation of what function the
executable performs. The SYNOPSIS field shows the common usage of the executable, such
as what options are declared and what types of input (such as files or values) the executable
supports. The DESCRIPTION field shows available options and values associated with a file
or executable. The SEE ALSO field shows related terms, files, and programs.

To navigate the man page you can use the “arrow” keys or use the [Spacebar] to move down
one page and [B] to move up. To exit the man page, type [Q].

To search a man page for keywords type / and then a keyword or phrase and press [Enter].
All instances of the keyword will be highlighted throughout the man page, allowing you to
quickly read the keyword in context.


2.7.1.2 Printing a Man Page

Printing man pages is a useful way to archive commonly used commands, perhaps in bound
form for quick reference. If you have a printer available and configured for use with Red Hat
Enterprise Linux (refer to the Red Hat Enterprise Linux System Administration Guide for
more information), you can print a man page by typing the following command at a shell
prompt:
$ man command | col -b | lpr

The example above combines separate commands into a single pipeline. man command outputs the contents of the command man page to col, which formats the contents to fit within a printed page. The lpr command sends the formatted content to the printer.

2.7.1.3 The man Man Page

Just like other commands, man has its own man page. Type man man at the shell prompt for
more information.

2.7.2 info Command


info is an alternative documentation system, originating in the GNU project, that provides manual pages for commands; it can also be browsed from within GNU Emacs. It is provided mainly for GNU commands and utilities and has not been widely adopted elsewhere. Info documents are hypertext with links (a concept that predates the Web). An info manual is like a digital book, with a table of contents and a (searchable) index that help you locate information.

The command:
$ info emacs

starts at the emacs node of the top-level directory.

The command:
$ info --show-options emacs

starts at the node describing the emacs command-line options.

2.7.3 Files in /usr/share/doc


The /usr/share/doc directory stores documentation (release notes, installation guide etc.)
for all packages under respective directories by the name of package.

For example, executing


$ ls /usr/share/doc/ntp-4.2.6p5/

gives us:
ChangeLog COPYRIGHT NEWS


These files, with self-explanatory names, are related to the ntp (NTP — Network Time Protocol, Section 7.3) package. Any of these files can be simply viewed using the less (Section 3.1.2.4) command.

Some of the documentation files are compressed with a .gz extension. Since less can cope with gzipped files seamlessly, these gzipped files (Section 3.4.1) can be viewed in the same manner. For example:
$ less /usr/share/doc/attr-2.4.46/CHANGES.gz

will show the changes brought in attr — extended attributes on XFS file system (Chapter 12)
objects.

Chapter 3

Text File Operations


3/4 classes

Chapter Goals
1. Create and edit text files.
2. Use input-output redirection (>, >>, |, 2>, etc.).
3. Use multiple commands.
4. Use grep and regular expressions to analyze text.
5. Archive, compress, unpack, and uncompress files using tar,
star, gzip, and bzip2.

3.1 Creating, Viewing, and Editing Text Files


3.1.1 Text Editors
3.1.1.1 Shell Prompt Text Editor vi

Red Hat Enterprise Linux includes the vi (pronounced vee-eye) text editor. vi is a simple
application that opens within the shell prompt and allows you to view, search, and modify
text files. To start vi, type vi at a shell prompt. To open a file with vi type:
$ vi <filename>

at a shell prompt.

By default, vi opens a file in Normal mode, meaning that you can view and run built-in
commands on the file but you cannot add text to it. To add text, press i (for Insert mode),
which allows you to make any modifications you need to. To exit insert mode, press [Esc],
and vi reverts to Normal mode.


Figure 3.1: Nano editor.

To exit vi, press : (which enters vi's command mode) and press q then [Enter]. If you have made changes to the text file that you want to save, press : and type w then q to write your changes to the file and exit the application. If you accidentally made changes to a file and you want to exit vi without saving the changes, type : and then type q followed by !, which exits without saving changes.
More information about using vi can be found by typing man vi at a shell prompt.

3.1.1.2 Text Editor Nano

To launch nano, you can just type nano at the command prompt, optionally followed by a filename (in this case, if the file exists, it will be opened in editing mode). If the file does not exist, or if we omit the filename, nano will also be opened in editing mode but will present a blank screen for us to start typing, as in Figure 3.1.
As you can see, nano displays at the bottom of the screen several functions that are available
via the indicated shortcuts (^, aka caret, indicates the Ctrl key).

3.1.2 Viewing Text Files from the Shell Prompt


Red Hat Enterprise Linux has several applications that allow you to view and manipulate
text files at the shell prompt. These applications are fast and best suited to manipulating the
plain text files of configuration files.

3.1.2.1 The head Command

The head command displays the beginning of a file. The format of the head command is:
$ head <filename>

By default, head displays only the first ten lines of a file. You can change the number of lines displayed by specifying a number option:
$ head -20 <filename>

The above command would display the first 20 lines of a file named <filename>.

3.1.2.2 The tail Command

The reverse of head is tail. Using tail, you can view the last ten lines of a file. This can be
useful for viewing the last ten lines of a log file for important system messages.


You can also use tail to watch log files as they are updated. Using the -f option, tail
automatically prints new messages from an open file to the screen in real-time. For example,
to actively watch /var/log/messages, enter the following at a shell prompt (as the root user):
$ tail -f /var/log/messages

Press [Ctrl]-[C] when you are finished.

3.1.2.3 The more Command

The more command is a “pager” utility used to view text in the terminal window one page or
screen at a time. The [Space] bar moves forward one page and Q quits.
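For example, to page through the /etc/passwd file one screen at a time:

$ more /etc/passwd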

3.1.2.4 Viewing Files with less

The format of the less command is:


$ less <filename>

The main difference between more and less is that less allows backward and single-line movement using the same navigation as man pages: press the [Space] bar to go down one page, [B] to go back one page, the directional (or "arrow") keys to move one line at a time, and [Q] to quit.

To search the output of a text file using less, press / and enter the keyword to search for
within the file.

/stuff

The above command would search through the file for all instances of stuff and highlight
them in the text.

3.1.2.5 Viewing and Creating Files with cat

The cat command is a versatile utility. It can be used to view text, to create text files, and to
join files. Its name is short for concatenate, which means to combine files.

Using cat alone echoes on the screen any text you enter. It will continue to do so until you exit with the [Ctrl]-[D] keystroke.

Entering the cat command followed by a file name displays the entire contents of the file on
the screen. If the file is long, the contents scroll off the screen. You can control this by using
the redirection techniques that are discussed in Section 3.2.

3.1.2.6 The grep Command

The grep command is useful for finding specific character strings in a file. For example, to
find every reference made to “pattern” in the file <filename>, enter:
$ grep pattern <filename>

Each line in the file that includes the pattern “pattern” is located and displayed on the screen.
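For example, to list the lines of /etc/passwd that mention bash (the output varies from system to system):

$ grep bash /etc/passwd
root:x:0:0:root:/root:/bin/bash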


3.2 Pipes and Pagers


3.2.1 Pipes
When we view a large file using cat, the contents scroll off the screen. Using a pipe, we can control that behavior. A pipe is the | symbol. It is used to connect the standard output
of one command to the standard input of another command. Essentially, it allows a user
to string commands together. Pagers are commands (such as less) that display text in the
terminal window.

Using cat, the pipe (|), and less together displays the file one page at a time. You can then
use the up and down arrow keys to move backward and forward through the pages:
$ cat <filename> | less

The above command opens the file named <filename> using the cat command, but does not
allow it to scroll off the screen.

Using the pipe with a pager is also useful when examining large directories with ls. For
example, view the /etc/ directory with the ls command:
$ ls -al /etc/

Notice that the contents scroll past too quickly to view. To get a closer look at the output of
the ls command, pipe it through less:
$ ls -al /etc/ | less

Now you can view the contents of /etc/ one screen at a time. Remember that you can
navigate forward and backward through the screens and even search for specific text using
the / key.

You can combine pipes with wild cards:


$ ls -al /etc/a* | less

The above displays all files and directories in /etc/ that start with the letter “a” one screen at
a time.


Warning: Be careful when you redirect the output to a file, because you can easily overwrite an existing file! Make sure the name of the file you are creating does not match the name of a pre-existing file, unless you want to replace it.

3.2.2 Using Redirection


Redirection means changing where standard input comes from or where the standard output
goes.

Input and output in the Linux environment is distributed across three streams. These streams
are:

• standard input — stdin, (0),


• standard output — stdout, (1),
• standard error — stderr, (2).

To redirect standard output, use the > symbol. Placing > after the cat command (or after
any utility that writes to standard output) redirects its output to the file name following the
symbol.

Remember that the cat command echoes the text you enter on the screen. Those echoes are
the standard output of the command. To redirect this output to a file, type the following at a
shell prompt and press Enter:
$ cat > foo.txt

Enter a few lines of text, and use the [Ctrl]-[D] key combination to quit cat.

The following is an example, redirecting three lines of text into the file foo.txt:
$ cat > foo.txt
BUET
IAC , CSE
Dhaka

The following is another example, redirecting three more lines of text to create the file
bar.txt:
$ cat > bar.txt
Bangladesh
Palashi
DHAKA

The following demonstrates cat’s concatenate function, adding the contents of bar.txt to the
end of foo.txt. Without redirection to the file example1.txt, cat displays the concatenated
contents of foo.txt and bar.txt on the screen:


$ cat foo.txt bar.txt > example1.txt
$ cat example1.txt

Using the last command, we displayed the contents of example1.txt, so that the user can see
how the files were added together.

3.2.3 Appending Standard Output


The symbol >> appends standard output. This means it adds the standard output to the end
of a file, rather than over-writing the file.

The following shows the command to append bar.txt to foo.txt. The contents of bar.txt are now permanently added to the end of foo.txt. The cat command is called a second time to display the contents of foo.txt:
$ cat bar.txt >> foo.txt
$ cat foo.txt

To compare example1.txt and the modified foo.txt, use the diff command. diff compares
text files line-by-line and reports the differences to standard output. The following compares example1.txt and foo.txt. Because there are no differences, diff returns no information.
$ diff example1 .txt foo.txt

3.2.4 Redirecting Standard Input


You can also perform the same type of redirection with standard input.

When you use the redirect standard input symbol <, you are telling the shell that you want a
file to be read as input for a command.

The following shows foo.txt being redirected as input for cat:


$ cat < foo.txt

3.2.5 Redirecting the Standard Error Stream


Standard error writes the errors generated by a program that has failed at some point in
its execution. Like standard output, the default destination for this stream is the terminal
display.

This pattern redirects the standard error stream of a command to a file, overwriting existing
contents.
$ mkdir ''
mkdir: cannot create directory '': No such file or directory
$ mkdir '' > mkdirlog.txt
mkdir: cannot create directory '': No such file or directory
$ mkdir '' 2> mkdirlog.txt


$ cat mkdirlog.txt
mkdir: cannot create directory '': No such file or directory

This redirects the error raised by the invalid directory name ’’, and writes it to mkdirlog.txt.
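It is also possible to send both standard output and standard error to the same file by redirecting standard error (2) to wherever standard output (1) currently points. A brief sketch (alloutput.txt is just an example file name):

$ mkdir '' > alloutput.txt 2>&1
$ cat alloutput.txt
mkdir: cannot create directory '': No such file or directory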

3.3 Using Multiple Commands


Linux allows you to enter multiple commands at one time. The only requirement is that you
separate the commands with a semicolon.
Suppose you have a file called foo.txt and you want to put it in a new subdirectory within
your home directory called dest/, but the subdirectory has not been created. After that, you
want to display the contents of the destination directory. You can combine all these by typing
the following at a shell prompt:
$ mkdir dest/; mv foo.txt dest/; ls dest

Running the combination of commands creates the directory and moves the file in one line,
and then displays the contents.

3.4 File Compression and Archiving


It is useful to store a group of files in one file for easy backup, for transfer to another directory,
or for transfer to another computer. It is also useful to compress large files; compressed files
take up less disk space and download faster via the Internet.
It is important to understand the distinction between an archive file and a compressed file.
An archive file is a collection of files and directories stored in one file. The archive file is not
compressed — it uses the same amount of disk space as all the individual files and directories
combined. A compressed file is a collection of files and directories that are stored in one
file and stored in a way that uses less disk space than all the individual files and directories
combined. If disk space is a concern, compress rarely-used files, or place all such files in a
single archive file and compress it.
An archive file is not compressed, but a compressed file can be an archive file.
Red Hat Enterprise Linux provides the bzip2, gzip, and zip tools for compression from
a shell prompt. Table 3.1 shows a summary. By convention, files compressed with bzip2
are given the extension .bz2, files compressed with gzip are given the extension .gz, and
files compressed with zip are given the extension .zip. Files compressed with bzip2 are
uncompressed with bunzip2, files compressed with gzip are uncompressed with gunzip,
and files compressed with zip are uncompressed with unzip.
The bzip2 compression tool is recommended because it provides the most compression and
is found on most UNIX-like operating systems. The gzip compression tool can also be found
on most UNIX-like operating systems. To transfer files between Linux and other operating systems such as MS Windows, use zip because it is more compatible with the compression utilities available for Windows.


Compression Tool    File Extension    Decompression Tool

bzip2               .bz2              bunzip2
gzip                .gz               gunzip
zip                 .zip              unzip

Table 3.1: Compression tools.

3.4.1 Gzip and Gunzip


1. Compress a single file:
This will compress file.txt and create file.txt.gz. Note that this will remove the
original file.txt file.
$ gzip file.txt

2. Compress multiple files at once:


This will compress all the files specified in the command. Note again that this removes the original files, turning file1.txt, file2.txt and file3.txt into file1.txt.gz, file2.txt.gz and file3.txt.gz.
$ gzip file1 .txt file2 .txt file3 .txt

3. Compress a single file and keep the original:


You can instead keep the original file and create a compressed copy.
$ gzip -c file.txt > file.txt.gz

The -c flag outputs the compressed copy of file.txt to stdout; this is then redirected to file.txt.gz, keeping the original file.txt in place. Newer versions of gzip also
have -k or --keep available, which could be used instead with gzip -k file.txt.
4. Compress all files recursively:
All files within the directory and all sub directories can be compressed recursively with
the -r flag.
$ gzip -r *

5. Decompress a gzip compressed file:


To reverse the compression process and get the original file back that you have
compressed, you can use the gzip command itself or gunzip which is also part
of the gzip package.
$ gzip -d file.txt.gz


OR
$ gunzip file.txt.gz

Both of these commands will produce the same result, decompressing file.txt.gz to
file.txt, removing the compressed file.txt.gz file.
It is possible to decompress a file and keep the original .gz file as below.
$ gunzip -c file.txt.gz > file.txt

-d can be combined with -r to decompress all files recursively.


6. List compression information:
With the -l or --list flag we can see useful information regarding a compressed .gz
file such as the compressed and uncompressed size of the file as well as the compression
ratio, which shows us how much space our compression is saving.
$ gzip -l file.txt.gz

7. Adjust compression level:


The level of compression applied to a file using gzip can be specified as a value between
1 (less compression) and 9 (best compression). Using option 1 will complete faster,
but space saved from the compression will not be optimal. Using option 9 will take
longer to complete; however, you will save the largest amount of space. The example below compares -1 and -9: while -1 finishes much faster, it compresses somewhat less.
$ time gzip -1 longfile .txt
$ gzip -l longfile .txt.gz

$ time gzip -9 longfile .txt


$ gzip -l longfile .txt.gz

-1 can also be specified with the flag --fast, while option -9 can also be specified with
the flag --best. By default gzip uses a compression level of -6, which is slightly biased
towards higher compression at the expense of speed. When selecting a value between 1
and 9 it is important to consider what is more important to you, the amount of space
saved or the amount of time spent compressing, the default -6 option provides a fair
trade off.
8. Integrity test:
The -t or --test flag can be used to check the integrity of a compressed file. On a
normal file, the result will be listed as OK.
$ gzip -tv file1 .txt.gz


If we manually modify this file with a text editor and add a random value, essentially introducing corruption, the file will no longer be valid and the integrity test will report an error.
9. Concatenate multiple files:
Multiple files can be concatenated into a single .gz file.
$ gzip -c file1 .txt > files .gz
$ gzip -c file2 .txt >> files .gz

The files.gz file now contains the contents of both file1.txt and file2.txt. If you decompress files.gz you will get a file named files which contains the contents of both .txt files. The output is similar to running cat file1.txt file2.txt.

For more information, enter man gzip and man gunzip at a shell prompt to read the man
pages for gzip and gunzip.

3.4.2 Bzip2 and Bunzip2


All the examples presented in Section 3.4.1 (Gzip and Gunzip) are valid for bzip2 and bunzip2 as well.
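For instance, assuming a file called file.txt:

$ bzip2 file.txt
$ bunzip2 file.txt.bz2
$ bzip2 -k file.txt

The first command creates file.txt.bz2 and removes file.txt, the second restores it, and the -k option in the last command keeps the original file while creating the compressed copy.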

3.4.3 Zip and Unzip


To compress a file with zip, enter the following command:
$ zip -r filename.zip filesdir

In this example, filename.zip represents the file you are creating and filesdir represents
the directory you want to put in the new zip file. The -r option specifies that you want to
include all files contained in the filesdir directory recursively.

To extract the contents of a zip file, enter the following command:


$ unzip filename.zip

You can use zip to compress multiple files and directories at the same time by listing them
with a space between each one:
$ zip -r filename.zip file1 file2 file3 /usr/work/school

The above command compresses file1, file2, file3, and the contents of the
/usr/work/school/ directory (assuming this directory exists) and places them in a file
named filename.zip.

For more information, enter man zip and man unzip at a shell prompt to read the man pages
for zip and unzip.


3.4.4 Tar and Star


3.4.4.1 tar

A tar file is a collection of several files and/or directories in one file. This is a good way to
create backups and archives.

Some of tar’s options include:

-c : create a new archive.


-f : when used with the -c option, use the filename specified for the creation of the tar file;
when used with the -x option, unarchive the specified file.
-t : show the list of files in the tar file.
-v : show the progress of the files being archived.
-x : extract files from an archive.
-z : compress the tar file with gzip.
-j : compress the tar file with bzip2.
--selinux : retain the SELinux context.

To create a tar file, enter:


$ tar -cvf filename.tar directory/file

In this example, filename.tar represents the file you are creating and directory/file
represents the directory and file you want to put in the archived file.

You can tar multiple files and directories at the same time by listing them with a space
between each one:
$ tar -cvf filename.tar /home/mine/work /home/mine/school

The above command places all the files in the work and the school subdirectories of
/home/mine in a new file called filename.tar in the current directory.

You can use the --selinux option to retain the SELinux context:
$ tar --selinux -cvf filename.tar /home/mine/work /home/mine/school

To list the contents of a tar file, enter:


$ tar -tvf filename.tar

To extract the contents of a tar file, enter:


$ tar -xvf filename.tar

This command does not remove the tar file, but it places copies of its unarchived contents in
the current working directory, preserving any directory structure that the archive file used.
For example, if the tarfile contains a file called bar.txt within a directory called foo/, then
extracting the archive file results in the creation of the directory foo/ in your current working
directory with the file bar.txt inside of it.
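
A single member can also be extracted by naming it on the command line (reusing the
hypothetical foo/bar.txt from the example above):
$ tar -xvf filename.tar foo/bar.txt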


Remember, the tar command does not compress the files by default. To create a tarred and
bzipped compressed file, use the -j option:
$ tar -cjvf filename.tbz file

tar files compressed with bzip2 are conventionally given the extension .tbz; however,
sometimes users archive their files using the .tar.bz2 extension.
The above command creates an archive file and then compresses it as the file filename.tbz.
If you uncompress the filename.tbz file with the bunzip2 command, the filename.tbz file
is removed and replaced with filename.tar.
You can also expand and unarchive a bzip tar file in one command:
$ tar -xjvf filename.tbz

Again, you can use the --selinux option to retain the SELinux context:
$ tar --selinux -xjvf filename.tbz

To create a tarred and gzipped compressed file, use the -z option:


$ tar -czvf filename.tgz file

tar files compressed with gzip are conventionally given the extension .tgz.
This command creates the archive file filename.tar and compresses it as the file
filename.tgz. (The file filename.tar is not saved.) If you uncompress the filename.tgz
file with the gunzip command, the filename.tgz file is removed and replaced with
filename.tar.
You can expand a gzip tar file in one command:
$ tar -xzvf filename.tgz

3.4.4.2 star

The star command is more appropriate for archiving files on an SELinux (Section 10.3) system.
Since SELinux contexts are stored in extended attributes, contexts can be lost when archiving
files with tools that do not preserve extended attributes.
As the star command is not normally installed, you’ll need to install it:
# yum install star

The star command does not quite work in the same fashion as tar (Section 3.4.4.1).
The following command would create an archive, with all SELinux contexts, from the
./tobesaved/ directory:
$ star -v -xattr -H=exustar -c -f=saved.star ./tobesaved/
a tobesaved/ directory
a tobesaved/three.txt 5 bytes, 1 tape blocks
a tobesaved/one.txt 5 bytes, 1 tape blocks
a tobesaved/two.txt 5 bytes, 1 tape blocks
star: 1 blocks + 0 bytes (total of 10240 bytes = 10.00k).


The -xattr switch saves the extended attributes, including those associated with ACLs. The -c
option creates a new archive file. The -f option specifies the name of the archive file. Adding
one or more -v options increases the verbosity.

Once the archive is created, it can be unpacked with the following command. The star -x
command can detect and restore files from archives created with various compression
schemes.
$ star -v -x -f=saved.star
Release star 1.5.2 (x86_64-redhat-linux-gnu)
Archtype exustar
Dumpdate 1503196868.374307 (Sun Aug 20 08:41:08 2017)
Volno 1
Blocksize 20 records
x tobesaved/ directory
x tobesaved/three.txt 5 bytes, 1 tape blocks
x tobesaved/one.txt 5 bytes, 1 tape blocks
x tobesaved/two.txt 5 bytes, 1 tape blocks
star: 1 blocks + 0 bytes (total of 10240 bytes = 10.00k).

Chapter 4

Linux Installation
1 class

Chapter Goals

1. Install Red Hat Enterprise Linux.


2. Install and update software packages from Red Hat Network, a
remote repository, or from the local file system.
3. Update the kernel package appropriately to ensure a bootable
system.

4.1 Download Red Hat Enterprise Linux


Unlike other popular distributions such as Ubuntu, Mint, CentOS, etc., to install and use Red
Hat Enterprise Linux one must acquire some sort of subscription. Though Red Hat Enterprise
Linux is not free, there is an option of a free subscription for developers. To avail of this
option, first register with developers.redhat.com. After registration, the download link for
Red Hat Enterprise Linux Server will be available. Download the Red Hat Enterprise Linux
Server DVD .iso file. Note that the username and password created during registration will
later be required for registering the product for additional software installation and future
updates. Create a bootable CD/DVD from the downloaded .iso file.

4.2 System Requirements


The requirements for your physical system are:

• a 64-bit x86 machine.


• 4 GB of RAM
• 20 GB of available disk space.

4.3 Step by Step Installation Instructions


This section provides an overview of the key steps for installing Red Hat Enterprise Linux.
Note the following before you proceed.

• The Red Hat Enterprise Linux .iso downloaded in the previous step will be used to
install a system with a full graphical desktop. By default, Red Hat Enterprise Linux
Server does not install a graphical desktop.
• You need to select an installation destination, which is the disk or partition(s) where
the software will be installed. The selected disk or partition(s) will be overwritten.
Make sure you understand your selection before starting the installation to avoid
accidental data loss. To be on the safe side, create the required space as unused space
in an existing Windows installation using a partition manager and then point the Red
Hat Enterprise Linux installer to that space.
• You should configure networking under ‘NETWORK AND HOST NAME’ before
starting the installation. You will need access to the Internet to complete registration
and download additional software. The network can be configured after the system is
installed. However, the steps are more straightforward during installation.
• Create your primary user account during installation. After the installation begins, you
will be instructed to set a password for the root account and be given the opportunity to
create a regular user account. You should create a user before the installation process
completes. The regular user will be your primary login for everyday use. The root account should
only be used for system administration tasks. If you don’t create a user before the
installation completes, you will need to reboot and then log in as root to create user
accounts.

Now follow the steps described below.

1. Start the system from the bootable disk and select Install Red Hat Enterprise Linux. The
system will proceed with media checking. You can skip the media checking step by
hitting the [Esc] key. See Figure 4.1.
2. Select your preferred language and keyboard layout to use during installation. Under
LOCALIZATION review the settings and make any necessary changes for date and time,
language, and keyboard layout. Note that the Done button to return to the INSTALLATION
SUMMARY screen is located in the upper left corner of the screen. See Figure 4.2.
3. Perform the following steps to make your software selection. See Figure 4.3.
(a) Click SOFTWARE SELECTION.
(b) On the next screen, under SOFTWARE SELECTION, in the Base Environment list
on the left, select Server with GUI.


Figure 4.1: Starting screen after boot from DVD.

Figure 4.2: Selection of language and keyboard.

(c) In the list Add-ons for selected environment on the right, select Development tools
if you want to develop software with Red Hat Enterprise Linux.
(d) Click the Done button. Note that after returning to the INSTALLATION SUMMARY
screen it will take several seconds to validate your choices.

Figure 4.3: Selection of software.

4. Click INSTALLATION DESTINATION to specify which disk or partition(s) to use for Red
Hat Enterprise Linux. It is important that you understand the choices that you are
making in this section to avoid accidental data loss; read the Installation Destination
section of the Red Hat Enterprise Linux Installation Guide. The installation destination
should be at least 20 GB to accommodate the OS, graphical desktop, and development
tools. See Figure 4.4.

Figure 4.4: Selection of installation destination.

5. Click NETWORK & HOST NAME to configure the network. If the system has more than
one network adapter, select it from the list on the left. Then click the On/Off button on
the right to enable the network adapter. See Figure 4.5.


(a) Click Configure to review and/or change the default settings for the network
adapter. The default settings should be fine for most networks that use DHCP.
(b) Optionally, set a Host name for the system.
(c) Click Save to dismiss the network adapter configuration dialog.
(d) Before leaving the NETWORK & HOST NAME, make sure there is at least one
network adapter enabled with the switch in the On position. A network connection
will be required to register the system and download system updates.
(e) Click Done.

Figure 4.5: Network configuration screen.

6. Click KDump to disable KDump and free up memory. Click the box next to Enable
kdump so that it is no longer checked. Then click Done.
7. Click the Begin Installation button when you are ready to start the actual installation.
8. On the next screen, while the installation is running, click USER CREATION to create
the user ID you will use to log in for normal work. See Figure 4.6.
9. Click ROOT PASSWORD to set the password for the root user. Note that if you choose a
password that the system considers to be weak, you will need to click Done twice.
10. After the installation process completes, click the Reboot button located in the bottom
right of the screen. See Figure 4.7.

4.3.1 Complete Installation and Register the System


After installation, during the first boot of the system, you will be asked to accept the license
agreement and register the system with Red Hat Subscription Management. Completing
these steps is required for your system to download software from Red Hat. See Figure 4.8.


Figure 4.6: User creation.

Figure 4.7: End of installation.

1. Click LICENSE INFORMATION to go to the license acceptance screen.


(a) Click the check box to accept the license.
(b) Click Done in the upper left corner to return to the INITIAL SETUP screen.
2. If you did not configure a network during installation, click NETWORK & HOST NAME
to configure your network connection.
Figure 4.8: Screen after reboot.

3. In the next step you will register your system with Red Hat and attach it to your
subscription. Note that for this step to succeed, you must have successfully configured
your network connection.
(a) Click Subscription Manager.
(b) Leave I will register with set to the default.
(c) If you need to configure an HTTP proxy server, click Configure Proxy.
(d) Click Next to move to the next screen. See Figure 4.9.

Figure 4.9: Entering of license information.


(e) Enter your Red Hat username and password. This is the login that you created
during registration and use to access Red Hat sites such as the Red Hat Customer
Portal, access.redhat.com.
(f) Optionally, enter a System Name that will be used to identify this system on the
Red Hat Customer Portal.
(g) Click Register.
(h) On the next screen you will be shown the list of subscriptions that are available
to your user ID. If you have more than one subscription available, select which
subscription to attach this system to.
(i) Click Attach.
(j) Click Done.
4. Finally, click Finish configuration.
5. Log in to the system with the username and password you created during installation.
6. If you did not create a regular user, you will need to log in as root and create a user.
7. Select your preferred language for the GNOME desktop. Then click Next.
8. Select your keyboard layout. Then click Next.
9. Optionally follow the dialogs to connect your online accounts or click Skip.
10. Click Start using Red Hat Enterprise Linux.

You are now logged into Red Hat Enterprise Linux. The Getting Started page of the GNOME
Help viewer is opened automatically as a full screen application after your first login. You
may minimize, resize, or exit out of that application by using the window controls on the
upper right corner. See Figure 4.10.

4.4 Software Installation


All of you are familiar with installing software in Windows, where we need an ‘installer’ for
any software. The installer is usually in .exe or .msi format. In Linux, we use ‘packages’
instead of installers, and packages are installed by software package managers such as yum or
rpm. Packages are in turn stored in ‘repositories’. You can think of repositories as directories
of packages. Hence, to install software, we first need to know the name of the associated
package and the repository where it is stored.

4.4.1 Yum Package Manager


Yum (Yellowdog Updater, Modified) is the Red Hat package manager that is able to query for
information about available packages, fetch packages from repositories, install and uninstall
them, and update an entire system to the latest available version. Yum performs automatic
dependency resolution when updating, installing, or removing packages, and thus is able to
automatically determine, fetch, and install all available dependent packages. Yum can be
configured with new, additional repositories, or package sources, and also provides many


Figure 4.10: Inside Red Hat Enterprise Linux.

plug-ins which enhance and extend its capabilities. Listed below are some of the functionalities
of Yum.

Checking for and Updating Packages To see which installed packages on your system have
updates available, use the following command:
# yum check-update

To update a single package, run the following command as root:


# yum update package_name

To update all packages and their dependencies, use the yum update command without any
arguments:
# yum update

Working with Packages You can search all RPM package names, descriptions and
summaries by using the following command:
# yum search term

Replace term with a package name you want to search.

To list information on all installed and available packages type the following at a shell prompt:
# yum list all

To display information about one or more packages, use the following command:
# yum info package_name

To get the list of the dependencies associated with a package (here httpd), type:


# yum deplist httpd

To find out which package provides a specified file (here libvirt), type:

# yum whatprovides */libvirt

Working with Repositories To list the repository ID, name, and number of packages for
each repository both enabled and disabled on your system, use the following command:
# yum repolist all

To get the list of only the enabled repositories, type:


# yum repolist

Installing Packages To install a single package and all of its non-installed dependencies,
enter a command in the following form as root:
# yum install package_name

The following example provides an overview of installation with use of yum. To download
and install the latest version of the httpd package, execute as root:
# yum install httpd
Loaded plugins: langpacks, product-id, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-12.el7 will be updated
---> Package httpd.x86_64 0:2.4.6-13.el7 will be an update
--> Processing Dependency: 2.4.6-13.el7 for package: httpd-2.4.6-13.el7.x86_64
--> Running transaction check
---> Package httpd-tools.x86_64 0:2.4.6-12.el7 will be updated
---> Package httpd-tools.x86_64 0:2.4.6-13.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved

To clean up the yum cache, type:


# yum clean all

To install a previously downloaded package from the local directory on your system, use the
following command:
# yum localinstall path

Similarly to package installation, yum enables you to uninstall them. To uninstall a particular
package, as well as any packages that depend on it, run the following command as root:
# yum remove package_name

Yum is not able to remove a package without also removing packages which depend on
it. This type of operation, which can only be performed by RPM, is not advised, and can
potentially leave your system in a non-functioning state or cause applications to not work
correctly or crash.

4.4.2 Red Hat Network, Remote and Local Repositories


Repositories in Red Hat Network After registering your system, a list of pre-specified
repositories is already added to the system based on your subscription type. To find the list
of enabled repositories, use the following command.
# subscription-manager repos --list-enabled

If you do not see any enabled repositories, your system might not be registered with Red Hat
or might not have a valid subscription.

Additionally, you may include the Optional RPMs and RHSCL (Red Hat Software Collections)
software repositories. The Optional RPMs repository includes a number of development
packages. The RHSCL repository includes both Red Hat Software Collections as well as the
Red Hat Developer Toolset (DTS).
# subscription-manager repos --enable rhel-server-rhscl-7-rpms
# subscription-manager repos --enable rhel-7-server-optional-rpms

Adding Remote Repository It is also possible to manually add repositories using the
following command.
# yum-config-manager --add-repo repository_url

For example, to add a repository located at http://www.example.com/example.repo, type


the following at a shell prompt:
# yum-config-manager --add-repo http://www.example.com/example.repo
Loaded plugins: product-id, refresh-packagekit, subscription-manager
adding repo from: http://www.example.com/example.repo
grabbing file http://www.example.com/example.repo to /etc/yum.repos.d/example.repo
example.repo                                         | 413 B     00:00
repo saved to /etc/yum.repos.d/example.repo

It is also possible to add a remote repository by manually adding a new .repo file in the
/etc/yum.repos.d/ directory.
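
A minimal sketch of such a file is shown below (the repository ID, name, and URL are
placeholders, not real values). Setting gpgcheck=0 disables GPG signature checking; for a
trusted repository you would normally set gpgcheck=1 and provide a gpgkey entry, as in the
local repository example below.
[example-repo]
name=Example Repository
baseurl=http://www.example.com/repo/
enabled=1
gpgcheck=0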


Adding Local Repositories If you do not have an Internet connection, you may use local
repositories located on the local file system. However, the repositories or the corresponding
.iso file have to be downloaded first from a PC with an Internet connection and then copied
to some portable media such as a CD, DVD or USB drive. Now follow the steps below.

1. Mount the RHEL 7 installation ISO to a directory like /mnt, e.g.:


# mount -o loop RHEL7.1.iso /mnt

If you use DVD media, you can mount it as shown below.

# mount -o loop /dev/sr0 /mnt

2. Copy the media.repo file from the root of the mounted directory to /etc/yum.repos.d/
and set the permissions to something sane, e.g.:
# cp /mnt/media.repo /etc/yum.repos.d/rhel7dvd.repo
# chmod 644 /etc/yum.repos.d/rhel7dvd.repo

3. Edit the new repo file, changing the gpgcheck=0 setting to 1 and adding the following
three lines:
enabled=1
baseurl=file:///mnt/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
In the end, the new repo file could look like the following (though the mediaid will be
different depending on the version of RHEL):
[InstallMedia]
name=DVD for Red Hat Enterprise Linux 7.1 Server
mediaid=1359576196.686790
metadata_expire=-1
gpgcheck=1
cost=500
enabled=1
baseurl=file:///mnt/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

4. Clear the related caches with yum clean all and subscription-manager clean:
# yum clean all
# subscription-manager clean

5. Check whether you can get the package list from the DVD repository:
# yum --noplugins list

6. If there is no problem, you can update the system:


# yum --noplugins update

Enabling Repositories To enable a particular repository or repositories, type the following
at a shell prompt as root:
# yum-config-manager --enable repository_name

Now download and install any available updates by running yum update.
# yum -y update

If yum updates the kernel package or installs a large number of updates, you should reboot
your system.

4.5 RPM Package Manager


It is possible to install software distributed in .rpm format using the RPM Package Manager.
RPM packages typically have file names in the following format:
package_name-version-release-operating_system-CPU_architecture.rpm.

For example, the tree-1.6.0-10.el7.x86_64.rpm file name includes the package name
(tree), version (1.6.0), release (10), operating system major version (el7) and CPU architecture
(x86_64). To install this particular file, use the following command.
# rpm -Uvh tree-1.6.0-10.el7.x86_64.rpm

The -v and -h options (which are combined with -U) cause rpm to print a more verbose output
and display a progress meter using hash signs. If the upgrade or installation is successful,
the following output is displayed:
Preparing...                      ################################# [100%]
Updating / installing...
   1:tree-1.6.0-10.el7             ################################# [100%]
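
Once installed, the RPM database can be queried to confirm the result. A brief illustration
using the tree package from the example above (output abbreviated):
# rpm -q tree
tree-1.6.0-10.el7.x86_64
# rpm -qi tree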

4.6 Update Kernel Package


Red Hat Enterprise Linux kernels are packaged in the RPM format so that they are easy
to upgrade and verify using the Yum package manager. The yum update command installs a
new kernel alongside the current one instead of replacing it, so a known-good kernel remains
available for booting. It is also possible to update the kernel manually by downloading the
kernel .rpm file and using the following command.
# rpm -ivh kernel-kernel_version.arch.rpm

Do not use the -U option, since it overwrites the currently installed kernel, which creates boot
loader problems.
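
For instance, the installed kernel packages can be listed, and only the kernel updated, with
the following commands (a sketch; the versions returned will depend on your system):
# rpm -q kernel
# yum update kernel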

Chapter 5

Users and Groups


2 classes

Chapter Goals
1. Create, delete, and modify local user accounts.
2. Change passwords and adjust password aging for local user
accounts.
3. Create, delete, and modify local groups and group
memberships.
4. List, set, and change standard ugo/rwx permissions.
5. Create and configure set-GID directories for collaboration.
6. Create and manage Access Control Lists (ACLs).
7. Diagnose and correct file permission problems.
8. Configure a system to use an existing authentication service for
user and group information.

5.1 Manage Local Linux Users and Groups


The control of users and groups is a core element of Red Hat Enterprise Linux system
administration. This chapter explains how to add, manage, and delete users and groups in
the graphical user interface and on the command line, and covers advanced topics, such as
creating group directories.


5.1.1 Users and Groups Basics


While users can be either people (meaning accounts tied to physical users) or accounts that
exist for specific applications to use, groups are logical expressions of organization, tying
users together for a common purpose. Users within a group share the same permissions to
read, write, or execute files owned by that group.
Each user is associated with a unique numerical identification number called a user ID (UID).
Likewise, each group is associated with a group ID (GID).

Reserved User and Group IDs Red Hat Enterprise Linux reserves user and group IDs
below 1000 for system users and groups. By default, the User Manager does not display the
system users. Reserved user and group IDs are documented in the setup package. To view
the documentation, use this command:
cat /usr/share/doc/setup*/uidgid

The recommended practice is to assign IDs starting at 5,000 that were not already reserved,
as the reserved range can increase in the future. To make the IDs assigned to new users by
default start at 5,000, change the UID_MIN and GID_MIN directives in the /etc/login.defs
file:
[file contents truncated ]
UID_MIN 5000
[file contents truncated ]
GID_MIN 5000
[file contents truncated ]

5.1.1.1 User Private Groups

Red Hat Enterprise Linux uses a user private group (UPG) scheme, which makes UNIX
groups easier to manage. A user private group is created whenever a new user is added to
the system. It has the same name as the user for which it was created and that user is the only
member of the user private group.
A list of all groups is stored in the /etc/group configuration file.

5.1.1.2 Shadow Passwords

In environments with multiple users, it is very important to use shadow passwords provided
by the shadow-utils package to enhance the security of system authentication files. For this
reason, the installation program enables shadow passwords by default.
The following is a list of the advantages shadow passwords have over the traditional way of
storing passwords on UNIX-based systems:

• Shadow passwords improve system security by moving encrypted password hashes


from the world-readable /etc/passwd file to /etc/shadow, which is readable only by
the root user.


• Shadow passwords store information about password aging.


• Shadow passwords make it possible to enforce some of the security policies set in the
/etc/login.defs file.

Most utilities provided by the shadow-utils package work properly whether or not shadow
passwords are enabled. However, since password aging information is stored exclusively
in the /etc/shadow file, some utilities and commands do not work without first enabling
shadow passwords:

• The chage utility for setting password aging parameters.


• The gpasswd utility for administrating the /etc/group file.
• The usermod command with the -e, --expiredate or -f, --inactive option.
• The useradd command with the -e, --expiredate or -f, --inactive option.

5.1.2 Managing Users in a Graphical Environment


The Users utility allows you to view, modify, add, and delete local users in the graphical user
interface.

5.1.2.1 Using the Users Settings Tool

Press the Super key to enter the Activities Overview, type Users and then press Enter. The
Users settings tool appears. The Super key appears in a variety of guises, depending on
the keyboard and other hardware, but often as either the Windows or Command key, and
typically to the left of the Space bar. Alternatively, you can open the Users utility from the
Settings menu after clicking your user name in the top right corner of the screen.

To make changes to the user accounts, first select the Unlock button and authenticate yourself
as indicated by the dialog box that appears. See Figure 5.1. Note that unless you have
superuser privileges, the application will prompt you to authenticate as root. To add and
remove users, select the + and - button respectively. To add a user to the administrative group
wheel, change the Account Type from Standard to Administrator. To edit a user’s language
setting, select the language and a drop-down menu appears.

When a new user is created, the account is disabled until a password is set. The Password drop-
down menu, shown in Figure 5.2 contains the options to set a password by the administrator
immediately, choose a password by the user at the first login, or create a guest account with
no password required to log in. You can also disable or enable an account from this menu.

5.1.3 Using Command-Line Tools


Apart from the Users settings tool described in the previous section which is designed for
basic managing of users, you can use command line tools for managing users and groups
that are listed in Table 5.1.


Figure 5.1: The Users Settings Tool.

Figure 5.2: The Password Menu.

5.1.3.1 Adding a New User

To add a new user to the system, type the following at a shell prompt as root:
# useradd [options] username

. . . where options are command-line options as described in Table 5.2.
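
For instance, several of the options from Table 5.2 can be combined in a single command (the
user name jdoe, the comment, and the supplementary group below are only illustrative):
# useradd -c 'Jane Doe' -s /bin/bash -G wheel jdoe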

By default, the useradd command creates a locked user account. To unlock the account, run
the following command as root to assign a password:
# passwd username

Optionally, you can set a password aging policy.
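
Password aging is typically adjusted with the chage utility mentioned in Section 5.1.1.2. As a
minimal sketch (the values are only illustrative), the first command below forces a password
change every 90 days with a 7-day warning, and the second lists the current aging settings:
# chage -M 90 -W 7 username
# chage -l username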


5.1.3.2 Explaining the Process

The following steps illustrate what happens if the command useradd khaled is issued on a
system that has shadow passwords enabled. Note that here no option has been explicitly
specified. Hence the default options will be used as stored in the /etc/login.defs file.

1. A new line for khaled is created in /etc/passwd:


khaled:x:1001:1001::/home/khaled:/bin/bash

The line has the following characteristics:

• It begins with the user name khaled.


• There is an x for the password field indicating that the system is using shadow
passwords.
• A UID greater than 999 is created. Under Red Hat Enterprise Linux 7, UIDs below
1000 are reserved for system use and should not be assigned to users.
• A GID greater than 999 is created. Under Red Hat Enterprise Linux 7, GIDs below
1000 are reserved for system use and should not be assigned to users.
• The optional GECOS information is left blank. The GECOS field can be used to
provide additional information about the user, such as their full name or phone
number.
• The home directory for khaled is set to /home/khaled/.
• The default shell is set to /bin/bash.

2. A new line for khaled is created in /etc/shadow:


khaled:!!:14798:0:99999:7:::

The line has the following characteristics:

• It begins with the user name khaled.

Utilities Description

id Displays user and group IDs.

useradd, usermod, userdel Standard utilities for adding, modifying, and deleting
user accounts

groupadd, groupmod, groupdel Standard utilities for adding, modifying, and deleting
groups

Table 5.1: Command line tools.


-c ’comment’ comment can be replaced with any string. This option is


generally used to specify the full name of a user.
-d home_directory Home directory to be used instead of default
/home/username/.
-e date Date for the account to be disabled in the format
YYYY-MM-DD.
-f days Number of days after the password expires until the account
is disabled. If 0 is specified, the account is disabled
immediately after the password expires. If -1 is specified,
the account is not disabled after the password expires.
-g group_name Group name or group number for the user’s default
(primary) group. The group must exist prior to being
specified here.
-G group_list List of additional (supplementary, other than default) group
names or group numbers, separated by commas, of which
the user is a member. The groups must exist prior to being
specified here.
-m Create the home directory if it does not exist.
-M Do not create the home directory.
-N Do not create a user private group for the user.
-p password The password encrypted with crypt.
-r Create a system account with a UID less than 1000 and
without a home directory.
-s User’s login shell, which defaults to /bin/bash.
-u uid User ID for the user, which must be unique and greater than
999.

Table 5.2: Common useradd command-line options.

• Two exclamation marks (!!) appear in the password field of the /etc/shadow file,
which locks the account. Note that if an encrypted password is passed using the
-p flag, it is placed in the /etc/shadow file on the new line for the user.
• The password is set to never expire.

3. A new line for a group named khaled is created in /etc/group:


khaled:x:1001:

A group with the same name as a user is called a user private group. For more
information on user private groups, see Section 5.1.1.1.
The line created in /etc/group has the following characteristics:


• It begins with the group name khaled.


• An x appears in the password field indicating that the system is using shadow
group passwords.
• The GID matches the one listed for khaled’s primary group in /etc/passwd.

4. A new line for a group named khaled is created in /etc/gshadow:


khaled:!::

The line has the following characteristics:

• It begins with the group name khaled.


• An exclamation mark (!) appears in the password field of the /etc/gshadow file,
which locks the group.
• All other fields are blank.

5. A directory for user khaled is created in the /home directory.

6. The files within the /etc/skel/ directory (which contain default user settings) are
copied into the new /home/khaled/ directory:
# ls -la /home/khaled
total 28
drwx------. 4 khaled khaled 4096 Mar  3 18:23 .
drwxr-xr-x. 5 root   root   4096 Mar  3 18:23 ..
-rw-r--r--. 1 khaled khaled   18 Jun 22  2010 .bash_logout
-rw-r--r--. 1 khaled khaled  176 Jun 22  2010 .bash_profile
-rw-r--r--. 1 khaled khaled  124 Jun 22  2010 .bashrc
drwxr-xr-x. 4 khaled khaled 4096 Nov 23 15:09 .mozilla

At this point, a locked account called khaled exists on the system. To activate it, the
administrator must next assign a password to the account using the passwd command as
shown below and, optionally, set password aging guidelines.
# passwd khaled

5.1.3.3 Modifying a User Account

You can modify the parameters related to a user account, such as group membership, addition
of new groups, location of the home directory, password, and default shell program, using
the usermod command as shown below. See the man page (man usermod) for all possible
modification options.
# usermod [options] username
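
As a small sketch (reusing the khaled account created earlier), the following commands
change the login shell, temporarily lock the account, and unlock it again:
# usermod -s /bin/sh khaled
# usermod -L khaled
# usermod -U khaled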


Option Description

-f, --force When used with -g gid and gid already exists, groupadd
will choose another unique gid for the group.

-g gid Group ID for the group, which must be unique and greater
than 999.

-K, --key key=value Override /etc/login.defs defaults.

-o, --non-unique Allows creating groups with duplicate GID.

-p, --password password Use this encrypted password for the new group.

-r Create a system group with a GID less than 1000.

Table 5.3: Common groupadd command-line options.

5.1.3.4 Adding a New Group

To add a new group to the system, type the following at a shell prompt as root:
# groupadd [options] group_name

. . . where options are command-line options as described in Table 5.3.
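
For example, to create a group with an explicitly chosen GID in the recommended range above
5,000 (the group name and GID here are only illustrative):
# groupadd -g 5000 developers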

5.1.3.5 Adding an Existing User to an Existing Group

Use the usermod utility to add an already existing user to an already existing group.

Various options of usermod have a different impact on the user's primary group and on his or
her supplementary groups.

To override the user's primary group, run the following command as root:

# usermod -g group_name user_name

To override the user's supplementary groups, run the following command as root:

# usermod -G group_name1,group_name2,... user_name

Note that in this case all previous supplementary groups of the user are replaced by the new
group or several new groups.

To add one or more groups to user’s supplementary groups, run one of the following
commands as root:
# usermod -aG group_name1,group_name2,... user_name

# usermod --append -G group_name1,group_name2,... user_name

Note that in this case the new group is added to the user's current supplementary groups.
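
You can confirm the resulting memberships with the id utility listed in Table 5.1 (user_name
is the same placeholder as above):
# id user_name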


5.2 Control Access to Files with Linux File System Permissions


Since everything in Linux is treated as a file, file permissions serve as the primary security
mechanism in Linux. Linux permissions dictate three things you may do with a file: read,
write and execute. They are referred to in Linux by a single letter each.

• r read - you may view the contents of the file.


• w write - you may change the contents of the file.
• x execute - you may execute or run the file if it is a program or script.

For every file we define three sets of people for whom we may specify permissions.

• owner - a single person who owns the file (typically the person who created the file,
but ownership may be granted to someone else by certain users).
• group - every file belongs to a single group.
• others - everyone else who is not in the group or the owner.

Permissions for Directories The same series of permissions may be used for directories
but they have a slightly different behaviour.

• r - you have the ability to read the contents of the directory (ie do an ls)
• w - you have the ability to write into the directory (ie create files and directories)
• x - you have the ability to enter that directory (ie cd)

5.2.1 Viewing File Permission


To view permissions for a file we use the long listing option for the command ls -l [path].
Consider the following example.
$ ls -l /home/test.txt
-rwxr----x 1 khaled wheel 5K Sep 13 07:32 /home/test.txt

In the above example the first 10 characters of the output are what we look at to identify
permissions.

• The first character identifies the file type. If it is a dash ( - ) then it is a normal file. If it is
a d then it is a directory.
• The following 3 characters represent the permissions for the owner. A letter represents
the presence of a permission and a dash ( - ) represents the absence of a permission. In
this example the owner has all permissions (read, write and execute).
• The following 3 characters represent the permissions for the group. In this example the
group has the ability to read but not write or execute. Note that the order of permissions
is always read, then write then execute.
• Finally the last 3 characters represent the permissions for others (or everyone else). In
this example they have the execute permission and nothing else.


5.2.2 Changing File Permission


The file owner can be changed only by root, and access permissions can be changed by both
the root user and file owner.

To change permissions on a file or directory we use a command called


chmod [permissions] [path].

chmod has permission arguments that are made up of 3 components

• Who are we changing the permission for? [ugoa] - user (or owner), group, others, all
• Are we granting or revoking the permission - indicated with either a plus ( + ) or minus
(-)
• Which permission are we setting? - read ( r ), write ( w ) or execute ( x )

Consider the following example where we grant the execute permission to the group and
then remove the write permission for the owner.
$ ls -l test.txt
-rwxr----x 1 khaled wheel 2.7K Sep 13 07:32 test.txt
$ chmod g+x test.txt
$ ls -l test.txt
-rwxr-x--x 1 khaled wheel 2.7K Sep 13 07:32 test.txt
$ chmod u-w test.txt
$ ls -l test.txt
-r-xr-x--x 1 khaled wheel 2.7K Sep 13 07:32 test.txt
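
chmod also accepts a numeric (octal) mode, which later sections of this chapter use as well:
read counts as 4, write as 2 and execute as 1, summed separately for the owner, group and
others. A brief sketch on the same test.txt file:
$ chmod 751 test.txt
$ ls -l test.txt
-rwxr-x--x 1 khaled wheel 2.7K Sep 13 07:32 test.txt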

5.2.2.1 Setting Default Permissions for New Files Using umask

When a process creates a file, the file has certain default permissions, for example, -rw-rw-r--.
These initial permissions are partially defined by the file mode creation mask, also called
file permission mask or umask. This is configured in the /etc/bashrc file. Traditionally
on UNIX-based systems, the umask is set to 022, which allows only the user who created the
file or directory to make modifications. Every process has its own umask, for example, bash
has umask 0022 by default.

What umask consists of A umask consists of bits corresponding to standard file permissions.
For example, for umask 0137, the digits mean that:

• 0 = no meaning, it is always 0 (umask does not affect special bits)


• 1 = for owner permissions, the execute bit is set
• 3 = for group permissions, the execute and write bits are set
• 7 = for others permissions, the execute, write, and read bits are set

Umasks can be represented in binary, octal, or symbolic notation. For example, the octal
representation 0137 equals symbolic representation u=rw-,g=r--,o=---. Symbolic notation


specification is the reverse of the octal notation specification: it shows the allowed permissions,
not the prohibited permissions.

How umask works Umask prohibits permissions from being set for a file:

• When a bit is set in umask, it is unset in the file.


• When a bit is not set in umask, it can be set in the file, depending on other factors.

Figure 5.3 shows how umask 0137 affects creating a new file.

Figure 5.3: Applying umask when creating a file.

Important For security reasons, a regular file cannot have execute permissions by default.
Therefore, even if umask is 0000, which does not prohibit any permissions, a new regular file
still does not have execute permissions. However, directories can be created with execute
permissions:
[john@server tmp]$ umask 0000
[john@server tmp]$ touch file
[john@server tmp]$ mkdir directory
[john@server tmp]$ ls -lh .
total 0
drwxrwxrwx. 2 john john 40 Nov 2 13:17 directory
-rw-rw-rw-. 1 john john  0 Nov 2 13:17 file

Managing umask in Shells For popular shells, such as bash, ksh, zsh and tcsh, umask is
managed using the umask shell builtin. Processes started from shell inherit its umask.

Displaying the current mask To show the current umask in octal notation:
$ umask
0022


To show the current umask in symbolic notation:


$ umask -S
u=rwx,g=rx,o=rx

Setting Mask in Shell Using umask To set umask for the current shell session using octal
notation run:
$ umask octal_mask

Substitute octal_mask with four or fewer digits from 0 to 7. When three or fewer digits are
provided, permissions are set as if the command contained leading zeros. For example, umask
7 translates to 0007.

Example of Setting umask Using Octal Notation To prohibit new files from having write
and execute permissions for owner and group, and from having any permissions for others:
$ umask 0337

Or simply:
$ umask 337

To set umask for the current shell session using symbolic notation:
$ umask -S symbolic_mask

Example: setting umask Using Symbolic Notation

To set umask 0337 using symbolic notation:


$ umask -S u=r,g=r,o=

Working with the Default Shell umask Shells usually have a configuration file where their
default umask is set. For bash, it is /etc/bashrc. To show the default bash umask:
$ grep -i -B 1 umask /etc/bashrc

The output shows if umask is set, either using the umask command or the UMASK variable. In
the following example, umask is set to 022 using the umask command:
$ grep -i -B 1 umask /etc/bashrc
# By default, we want umask to get set. This sets
# it for non-login shell.
--
if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
    umask 002
else
    umask 022

To change the default umask for bash, change the umask command call or the UMASK variable
assignment in /etc/bashrc. This example changes the default umask to 0227:


if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
    umask 002
else
    umask 227

Working with the default shell umask of a specific user The bash umask of a new user
defaults to the one defined in /etc/bashrc.

To change the bash umask for a particular user, add a call to the umask command in the
$HOME/.bashrc file of that user. For example, to change the bash umask of user john to 0227:
john@server$ echo 'umask 227' >> /home/john/.bashrc

Setting default permissions for newly created home directories To change permis-
sions with which user home directories are created, change the UMASK variable in the
/etc/login.defs file:
# The permission mask is initialized to this value. If not specified,
# the permission mask will be initialized to 022.
UMASK 077

5.3 Special File Permissions


There are two special permissions that can be set on executable files: Set User ID (setuid) and
Set Group ID (setgid). These permissions allow the file to be executed with the privileges of
the owner or the group. For example, if a file was owned by the root user and
has the setuid bit set, no matter who executed the file it would always run with root user
privileges.

5.3.1 Set User ID (setuid)


You must be the owner of the file or the root user to set the setuid bit. Run the following
command to set the setuid bit:
chmod u+s file1

View the permissions using the ls -l command:


$ ls -l file1
-rwSrw-r-- 1 user1 user1 0 2007-10-29 21:41 file1

Note the capital S. This means there are no execute permissions. Run the following command
to add execute permissions to the file1 file, noting the lower case s:
$ chmod u+x file1
$ ls -l file1
-rwsrw-r-- 1 user1 user1 0 2007-10-29 21:41 file1


Note the lower case s. This means there are execute permissions.

Alternatively, you can set the setuid bit using the numeric method by prepending a 4 to the
mode. For example, to set the setuid bit, read, write, and execute permissions for owner of
the file1 file, run the following command:
$ chmod 4700 file1

Use Case of setuid In Red Hat Enterprise Linux, the setuid bit is set by default on commands
like /usr/bin/passwd, /usr/bin/wall, /usr/bin/ssh-agent, etc. To understand why,
consider the passwd command that any user can use to change his or her password. This
command must effectively run as root, since changing a password requires modifying the
/etc/shadow file, which is owned by root. The binary file /usr/bin/passwd corresponding to
the passwd command has its setuid bit set, as can be seen using the ls command.
# ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 27936 Sep 25 2017 /usr/bin/passwd

Hence, no matter which user executes the command, it always runs as root, enabling it to
change the user's password.

5.4 Set Group ID (setgid)


When the Set Group ID bit is set, the executable is run with the authority of the group. For
example, if a file was owned by the users group, no matter who executed that file it would
always run with the authority of the users group.

Run the following command to set the setgid bit on the file1 file:
$ chmod g+s file1

Note that both the setuid and setgid bits are set using the s symbol. Alternatively, prepend a
2 to the mode. For example, run the following command as root to set the setgid bit, and
read, write, and execute permissions for the owner of the file1 file:
$ chmod 2700 file1

The setgid is represented the same as the setuid bit, except in the group section of the
permissions:
$ ls -l file1
-rwx--S--- 1 user1 user1 0 2007-10-30 21:40 file1

5.5 Special Permissions For Directories


There are two special permissions for directories: the sticky bit and the setgid bit. When the
sticky bit is set on a directory, only the root user, the owner of the directory, and the owner of
a file can remove files within said directory.


5.5.1 Sticky Bit


An example of the sticky bit is the /tmp directory. Use the ls -ld /tmp command to view
the permissions:
$ ls -ld /tmp
drwxrwxrwt 24 root root 4096 2007-10-30 22:00 tmp

The t at the end symbolizes that the sticky bit is set. A file created in the /tmp directory can
only be removed by its owner or the root user. If, for some directory, you find T in place of t,
the directory does not have execute permission for others.

To set the sticky bit on the folder1 folder:


$ chmod a+t folder1

Alternatively, prepend a 1 to the mode of a directory to set the sticky bit:


$ chmod 1777 folder1

The permissions should be read, write, and execute for the owner, group, and everyone else,
on directories that have the sticky bit set. This allows anyone to cd into the directory and
create files.

5.5.2 setgid Bit


System administrators usually like to create a group for each major project and assign people
to the group when they need to access that project’s files. With this traditional scheme, file
management is difficult; when someone creates a file, it is associated with the primary group
to which they belong. When a single person works on multiple projects, it becomes difficult
to associate the right files with the right group. However, with the UPG (User Private Group)
scheme, groups are automatically assigned to files created within a directory with the setgid
bit set. The setgid bit makes managing group projects that share a common directory very
simple because any files a user creates within the directory are owned by the group that owns
the directory.

For example, a group of people need to work on files in the /opt/myproject/ directory. Some
people are trusted to modify the contents of this directory, but not everyone.

1. As root, create the /opt/myproject/ directory by typing the following at a shell


prompt:
# mkdir /opt/myproject

2. Add the myproject group to the system:


# groupadd myproject

3. Associate the contents of the /opt/myproject/ directory with the myproject group:


# chown root:myproject /opt/myproject

4. Allow users in the group to create files within the directory and set the setgid bit:
# chmod 2775 /opt/myproject

At this point, all members of the myproject group can create and edit files in the
/opt/myproject/ directory without the administrator having to change file permissions
every time users write new files. To verify that the permissions have been set correctly,
run the following command:
# ls -ld /opt/myproject
drwxrwsr-x. 3 root myproject 4096 Mar 3 18:31 /opt/myproject

5. Add users to the myproject group:


# usermod -aG myproject username

5.6 Create and Manage Access Control Lists (ACLs)


Files and directories have permission sets for the owner of the file, the group associated with
the file, and all other users for the system. However, these permission sets have limitations.
For example, different permissions cannot be configured for different users. Thus, Access
Control Lists (ACLs) were implemented.

The Red Hat Enterprise Linux kernel provides ACL support for the ext3 file system and
NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba.

Along with support in the kernel, the acl package is required to implement ACLs. It contains
the utilities used to add, modify, remove, and retrieve ACL information.

The cp and mv commands copy or move any ACLs associated with files and directories.

5.6.1 Mounting File Systems


Before using ACLs for a file or directory, the partition for the file or directory must be mounted
with ACL support. If it is a local ext3 file system, it can be mounted with the following command:

mount -t ext3 -o acl device-name partition

For example:

mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work

Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can
include the acl option:
LABEL=/work    /work    ext3    acl    1 2


If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs
are recognized because Samba has been compiled with the --with-acl-support option. No
special flags are required when accessing or mounting a Samba share.

5.6.1.1 NFS

By default, if the file system being exported by an NFS server supports ACLs and the NFS
client can read ACLs, ACLs are utilized by the client system.

To disable ACLs on NFS shares when configuring the server, include the no_acl option in the
/etc/exports file. To disable ACLs on an NFS share when mounting it on a client, mount it
with the no_acl option via the command line or the /etc/fstab file.

5.6.2 Setting Access ACLs


There are two types of ACLs: access ACLs and default ACLs. An access ACL is the access
control list for a specific file or directory. A default ACL can only be associated with a
directory; if a file within the directory does not have an access ACL, it uses the rules of the
default ACL for the directory. Default ACLs are optional.

ACLs can be configured:

1. Per user
2. Per group
3. Via the effective rights mask
4. For users not in the user group for the file

The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify
the ACL of a file or directory:
# setfacl -m rules files

Rules (rules) must be specified in the following formats. Multiple rules can be specified in
the same command if they are separated by commas.

u:uid:perms Sets the access ACL for a user. The user name or UID may be specified. The
user may be any valid user on the system.
g:gid:perms Sets the access ACL for a group. The group name or GID may be specified.
The group may be any valid group on the system.
m:perms Sets the effective rights mask. The mask is the union of all permissions of the
owning group and all of the user and group entries.
o:perms Sets the access ACL for users other than the ones in the group for the file.

Permissions (perms) must be a combination of the characters r, w, and x for read, write, and
execute.


If a file or directory already has an ACL, and the setfacl command is used, the additional
rules are added to the existing ACL or the existing rule is modified.

Example: give read and write permissions

For example, to give read and write permissions to user andrius:


# setfacl -m u:andrius:rw /project/somefile

To remove all the permissions for a user, group, or others, use the -x option and do not specify
any permissions:
# setfacl -x rules files

Example. Remove all permissions

For example, to remove all permissions from the user with UID 500:
# setfacl -x u:500 /project/somefile

5.6.3 Setting Default ACLs


To set a default ACL, add d: before the rule and specify a directory instead of a file name.

Example: setting default ACLs

For example, to set the default ACL for the /share/ directory to read and execute for users
not in the user group (an access ACL for an individual file can override it):
# setfacl -m d:o:rx /share

5.6.4 Retrieving ACLs


To determine the existing ACLs for a file or directory, use the getfacl command. In the
example below, the getfacl is used to determine the existing ACLs for a file.

Example: retrieving ACLs


# getfacl home/john/picture.png

The above command returns the following output:


# file: home/john/picture.png
# owner: john
# group: john
user::rw-
group::r--
other::r--

If a directory with a default ACL is specified, the default ACL is also displayed as illustrated
below. For example, getfacl home/sales/ will display similar output:
# file: home/sales/
# owner: john
# group: john
user::rw-
user:barryg:r--
group::r--
mask::r--
other::r--
default:user::rwx
default:user:john:rwx
default:group::r-x
default:mask::rwx
default:other::r-x

5.6.5 Archiving File Systems With ACLs


By default, the dump command now preserves ACLs during a backup operation. When
archiving a file or file system with tar, use the --acls option or, if using star, the --acl option
to preserve ACLs. When using cp to copy files with ACLs, include the --preserve=mode
option to ensure that ACLs are copied across too. In addition, the -a option (equivalent to -dR
--preserve=all) of cp also preserves ACLs during a backup along with other information
such as timestamps, SELinux contexts, and the like. For more information about dump, tar,
star or cp, refer to their respective man pages.
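
As a short sketch of the tar and cp usage just described (the archive name and paths are
hypothetical):
$ tar --acls -cvf project-backup.tar /project
$ cp --preserve=mode /project/somefile /backup/somefile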

5.6.6 Compatibility with Older Systems


If an ACL has been set on any file on a given file system, that file system has the ext_attr
attribute. This attribute can be seen using the following command:
# tune2fs -l filesystem - device

A file system that has acquired the ext_attr attribute can be mounted with older kernels,
but those kernels do not enforce any ACLs which have been set.

Versions of the e2fsck utility included in version 1.22 and higher of the e2fsprogs package
(including the versions in Red Hat Enterprise Linux 2.1 and 4) can check a file system with
the ext_attr attribute. Older versions refuse to check it.

5.7 Authentication in Red Hat Linux


Authentication is the process of confirming an identity. For network interactions,
authentication involves the identification of one party by another party. Authentication
requires that a user presents some kind of credential to verify his identity. The kind of
credential that is required is defined by the authentication mechanism being used. There are
several kinds of authentication for local users on a system:


• Password-based authentication. Almost all software permits the user to authenticate
by providing a recognized name and password. This is also called simple
authentication.
• Certificate-based authentication. Client authentication based on certificates is
part of the SSL protocol. The client digitally signs a randomly generated piece of data
and sends both the certificate and the signed data across the network. The server
validates the signature and confirms the validity of the certificate.
• Kerberos authentication. Kerberos establishes a system of short-lived credentials,
called ticket-granting tickets (TGTs). The user presents credentials, that is, user
name and password, that identify the user and indicate to the system that the user can
be issued a ticket. The TGT can then be used repeatedly to request access tickets to other
services, such as websites and email. In this way, authentication with a TGT requires the
user to undergo only a single authentication process.
• Smart card-based authentication. This is a variant of certificate-based authentica-
tion. The smart card (or token) stores user certificates; when a user inserts the token
into a system, the system can read the certificates and grant access.

5.7.1 Available Services


All Red Hat Enterprise Linux systems have some services already available to configure
authentication for local users on local systems. These include:

Authentication Setup The Authentication Configuration tool (authconfig) sets up different
identity back ends and means of authentication (such as passwords, fingerprints, or
smart cards) for the system.
Identity Back End Setup The System Security Services Daemon (SSSD) sets up multiple
identity providers (primarily LDAP-based directories such as Microsoft Active Directory
or Red Hat Enterprise Linux IdM) which can then be used by both the local system
and applications for users. Passwords and tickets are cached, allowing both offline
authentication and single sign-on by reusing credentials.
Authentication Mechanisms Pluggable Authentication Modules (PAM) provide a system
to set up authentication policies. An application using PAM for authentication loads
different modules that control different aspects of authentication; which PAM module
an application uses is based on how the application is configured. The available PAM
modules include Kerberos, Winbind, or local UNIX file-based authentication.

Other services and applications are also available, but these are common ones.

5.7.2 Configuring System Authentication


The system must have a configured list of valid account databases for it to check for user
authentication. The information to verify the user can be located on the local system or the
local system can reference a user database on a remote system, such as LDAP or Kerberos.


A local system can use a variety of different data stores for user information, including
Lightweight Directory Access Protocol (LDAP), Network Information Service (NIS), and
Winbind. Both LDAP and NIS data stores can use Kerberos to authenticate users.

For convenience and potentially part of single sign-on, Red Hat Enterprise Linux can use the
System Security Services Daemon (SSSD) as a central daemon to authenticate the user to
different identity back ends or even to ask for a ticket-granting ticket (TGT) for the user. SSSD
can interact with LDAP, Kerberos, and external applications to verify user credentials.

5.7.3 Using authconfig


The authconfig tool can help configure what kind of data store to use for user credentials,
such as LDAP. On Red Hat Enterprise Linux, authconfig has both GUI and command-line
options to configure any user data stores. The authconfig tool can configure the system to
use specific services — SSSD, LDAP, NIS, or Winbind — for its user database, along with
using different forms of authentication mechanisms.

The following three authconfig utilities are available for configuring authentication settings:

• authconfig-gtk provides a full graphical interface.
• authconfig provides a command-line interface for manual configuration.
• authconfig-tui provides a text-based UI. Note that this utility has been deprecated.

All of these configuration utilities must be run as root.

5.7.3.1 Installing the authconfig UI

The authconfig UI is not installed by default, but it can be useful for administrators to make
quick changes to the authentication configuration.

To install the UI, install the authconfig-gtk package. This has dependencies on some
common system packages, such as the authconfig command-line tool, Bash, and Python.
Most of those are installed by default.
# yum install authconfig-gtk
Loaded plugins: langpacks, product-id, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package authconfig-gtk.x86_64 0:6.2.8-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch       Version          Repository      Size
================================================================================
Installing:
 authconfig-gtk     x86_64     6.2.8-8.el7      RHEL-Server     105 k

Transaction Summary
================================================================================
Install  1 Package

... 8< ...

5.7.3.2 Launching the authconfig UI

1. Open the terminal and log in as root.


2. Run the system-config-authentication command.
3. There are three configuration tabs in the Authentication dialog box. See Figure 5.4.

• Identity & Authentication, which configures the resource used as the identity
store (the data repository where the user IDs and corresponding credentials are
stored).
• Advanced Options, which configures authentication methods other than passwords
or certificates, such as smart cards and fingerprints.
• Password Options, which configures password authentication methods.

5.7.3.3 Testing Authentication Settings

It is critical that authentication is fully and properly configured. Otherwise, all users (even
root) could be locked out of the system, or some users could be blocked.

The --test option prints all of the authentication configuration for the system, for every
possible identity and authentication mechanism. This shows both the settings for what is
enabled and what areas are disabled.

The test option can be run by itself to show the full, current configuration or it can be used
with an authconfig command to show how the configuration will be changed (without
actually changing it). This can be very useful in verifying that the proposed authentication
settings are complete and correct.
# authconfig --test
caching is disabled
nss_files is always enabled
nss_compat is disabled
nss_db is disabled
nss_hesiod is disabled
DNS preference over NSS or WINS is disabled
pam_unix is always enabled
[ output omitted ]


Figure 5.4: Authconfig window.

5.7.3.4 Saving and Restoring Configuration Using authconfig

Changing authentication settings can be problematic. Improperly changing the configuration
can wrongly exclude users who should have access, can cause connections to the identity
store to fail, or can even lock all access to a system.

Before editing the authentication configuration, it is strongly recommended that administrators
take a backup of all configuration files. This is done with the --savebackup option.

# authconfig --savebackup=/backups/authconfigbackup20170701


The authentication configuration can be restored to any previous saved version using the
--restorebackup option, with the name of the backup to use.

# authconfig --restorebackup=/backups/authconfigbackup20170701

The authconfig command saves an automatic backup every time the configuration is altered.
It is possible to restore the last backup using the --restorelastbackup option. Note that
any changes take effect immediately when the authconfig UI is closed.

# authconfig --restorelastbackup

5.7.4 Selecting the Identity Store for Authentication with authconfig


The Identity & Authentication tab in the authconfig UI sets how users should be
authenticated. The default is to use local system authentication, meaning the users and their
passwords are checked against local system accounts. A Red Hat Enterprise Linux machine
can also use external resources which contain the users and credentials, including LDAP, NIS,
and Winbind.

5.7.5 Configuring LDAP


LDAP (Lightweight Directory Access Protocol) is a set of open protocols used to access centrally
stored information over a network. It is based on the X.500 standard for directory sharing,
but is less complex and resource-intensive. For this reason, LDAP is sometimes referred to as
“X.500 Lite”.

Like X.500, LDAP organizes information in a hierarchical manner using directories. These
directories can store a variety of information such as names, addresses, or phone numbers, and
can even be used in a manner similar to the Network Information Service (NIS), enabling
anyone to access their account from any machine on the LDAP enabled network.

LDAP is commonly used for centrally managed users and groups, user authentication, or
system configuration. It can also serve as a virtual phone directory, allowing users to easily
access contact information for other users. Additionally, it can refer a user to other LDAP
servers throughout the world, and thus provide an ad-hoc global repository of information.
However, it is most frequently used within individual organizations such as universities,
government departments, and private companies.

Standard LDAP directories such as OpenLDAP and Red Hat Directory Server can be used
as LDAP identity providers. Additionally, older IdM versions and FreeIPA can be configured
as identity providers by setting them up as LDAP providers with a related Kerberos server.

Either the openldap-clients package or the sssd package is used to configure an LDAP server
for the user database. Both packages are installed by default.


5.7.5.1 Configuring LDAP Authentication from the UI

1. Open the authconfig UI.
2. Select LDAP in the User Account Database drop-down menu. See Figure 5.5.

Figure 5.5: LDAP configuration with authconfig.

3. Set the information that is required to connect to the LDAP server.


LDAP Search Base DN gives the root suffix or distinguished name (DN) for the user
directory. All of the user entries used for identity or authentication exist below this
parent entry. For example, ou=people,dc=example,dc=com. This field is optional.
If it is not specified, the System Security Services Daemon (SSSD) attempts to detect
the search base using the namingContexts and defaultNamingContext attributes
in the LDAP server's configuration entry.
LDAP Server gives the URL of the LDAP server. This usually requires both the host
name and port number of the LDAP server, such as
ldap://ldap.example.com:389. Entering the secure protocol by using a URL
starting with ldaps:// enables the Download CA Certificate button, which retrieves
the issuing CA certificate for the LDAP server from whatever certificate authority
issued it. The CA certificate must be in the privacy enhanced mail (PEM) format.
If you use an insecure standard port connection (URL starting with ldap://), you
can use the Use TLS to encrypt connections check box to encrypt communication
with the LDAP server using STARTTLS. Selecting this check box also enables the
Download CA Certificate button.
Note that you do not need to select the Use TLS to encrypt connections check
box if the server URL uses the LDAPS (LDAP over SSL) secure protocol as the
communication is already encrypted.
4. Select the authentication method. LDAP allows simple password authentication or
Kerberos authentication. The LDAP password option uses PAM applications to use
LDAP authentication. This option requires a secure connection to be set either by using
LDAPS or TLS to connect to the LDAP server.

5.7.5.2 Configuring LDAP User Stores from the Command Line

To use an LDAP identity store, use the --enableldap option. To use LDAP as the authentication
source, use --enableldapauth and then the requisite connection information, like the LDAP
server name, base DN for the user suffix, and (optionally) whether to use TLS. The authconfig
command also has options to enable or disable RFC 2307bis schema for user entries, which is
not possible through the authconfig UI.

Be sure to use the full LDAP URL, including the protocol (ldap or ldaps) and the port number.
Do not use a secure LDAP URL (ldaps) with the --enableldaptls option.

# authconfig --enableldap --enableldapauth \
    --ldapserver=ldap://ldap.example.com:389,ldap://ldap2.example.com:389 \
    --ldapbasedn="ou=people,dc=example,dc=com" --enableldaptls \
    --ldaploadcacert=https://ca.server.example.com/caCert.crt --update
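
To confirm what such a command would apply, the test output described in Section 5.7.3.3 can be filtered; the exact lines shown depend on the configuration, so the grep pattern below is only a convenience:
# authconfig --test | grep -i ldap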

5.7.6 Configuring Kerberos (with LDAP or NIS) Using authconfig


Both LDAP and NIS authentication stores support Kerberos authentication methods. Using
Kerberos has a couple of benefits:

• It uses a security layer for communication while still allowing connections over standard
ports.
• It automatically uses credentials caching with SSSD, which allows offline logins.

86
5.7. Authentication in Red Hat Linux

Note

Using Kerberos authentication requires the krb5-libs and krb5-workstation packages.

5.7.6.1 Configuring Kerberos Authentication from the UI

The Kerberos password option from the Authentication Method drop-down menu
automatically opens the fields required to connect to the Kerberos realm. See Figure 5.6.

Figure 5.6: Kerberos Fields.

• Realm gives the name for the realm for the Kerberos server. The realm is the network
that uses Kerberos, composed of one or more key distribution centers (KDC) and a
potentially large number of clients.


• KDCs gives a comma-separated list of servers that issue Kerberos tickets.
• Admin Servers gives a list of administration servers running the kadmind process in
the realm.
• Optionally, use DNS to resolve server host names and to find additional KDCs within
the realm.

5.7.6.2 Configuring Kerberos Authentication from the Command Line

Both LDAP and NIS allow Kerberos authentication to be used in place of their native
authentication mechanisms. At a minimum, using Kerberos authentication requires specifying
the realm, the KDC, and the administrative server. There are also options to use DNS to
resolve client names and to find additional admin servers.

# authconfig NIS or LDAP options \
    --enablekrb5 --krb5realm EXAMPLE \
    --krb5kdc kdc.example.com:88,server.example.com:88 \
    --krb5adminserver server.example.com:749 \
    --enablekrb5kdcdns --enablekrb5realmdns --update

Chapter 6

Linux Processes
3/4 classes

Chapter Goals
1. View system processes.
2. Manage processes.
3. Schedule tasks using at and cron.
4. Identify CPU/memory intensive processes, adjust process
priority with renice, and kill processes.

6.1 Viewing System Processes


6.1.1 Using the ps Command
The ps command allows you to display information about running processes. It produces a
static list, that is, a snapshot of what is running when you execute the command.

To list all processes that are currently running on the system including processes owned by
other users, type the following at a shell prompt:
$ ps ax

For each listed process, the ps ax command displays the process ID (PID), the terminal that
is associated with it (TTY), the current status (STAT), the accumulated CPU time (TIME), and
the name of the executable file (COMMAND). For example:
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
    2 ?        S      0:00 [kthreadd]
    3 ?        S      0:00 [ksoftirqd/0]
    4 ?        S      0:00 [kworker/0:0]
    5 ?        S<     0:00 [kworker/0:0H]
    6 ?        S      0:00 [kworker/u2:0]
    7 ?        S      0:00 [migration/0]
    8 ?        S      0:00 [rcu_bh]
    9 ?        R      0:00 [rcu_sched]
   10 ?        S      0:00 [watchdog/0]
   12 ?        S      0:00 [kdevtmpfs]
   13 ?        S<     0:00 [netns]

(output truncated)

Here are the different values that the s, stat and state output specifiers (header STAT or S)
will display to describe the state of a process:

D : uninterruptible sleep (usually IO).
R : running or runnable (on run queue).
S : interruptible sleep (waiting for an event to complete).
T : stopped by job control signal.
X : dead (should never be seen).
Z : defunct (“zombie”) process, terminated but not reaped by its parent.

For BSD formats and when the stat keyword is used, additional characters may be displayed:

< : high-priority (not nice to other users).
N : low-priority (nice to other users).
L : has pages locked into memory (for real-time and custom IO).
s : is a session leader.
l : is multi-threaded (using CLONE_THREAD, like NPTL pthreads do).
+ : is in the foreground process group.

A brief explanation of the important ones is in order here:

Running — here it’s either running (it is the current process in the system) or it’s ready to
run (it’s waiting to be assigned to one of the CPUs).
Waiting — in this state, a process is waiting for an event to occur or for a system resource.
Additionally, the kernel also differentiates between two types of waiting processes;
interruptible waiting processes — can be interrupted by signals and uninterruptible
waiting processes — are waiting directly on hardware conditions and cannot be
interrupted by any event/signal.
Stopped — in this state, a process has been stopped, usually by receiving a signal. For
instance, a process that is being debugged.


Zombie — here, a process is dead; it has been halted but it still has an entry in the process
table.

To display the owner alongside each process, use the following command:
$ ps aux

Apart from the information provided by the ps ax command, ps aux displays the effective
user name of the process owner (USER), the percentage of the CPU (%CPU) and memory
(%MEM) usage, the virtual memory size in kilobytes (VSZ), the non-swapped physical
memory size in kilobytes (RSS), and the time or date the process was started.

For instance:
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.1 128164  6852 ?        Ss   17:50   0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root         2  0.0  0.0      0     0 ?        S    17:50   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    17:50   0:00 [ksoftirqd/0]
root         5  0.0  0.0      0     0 ?        S<   17:50   0:00 [kworker/0:0H]
root         6  0.0  0.0      0     0 ?        S    17:50   0:00 [kworker/u2:0]
root         7  0.0  0.0      0     0 ?        S    17:50   0:00 [migration/0]
root         8  0.0  0.0      0     0 ?        S    17:50   0:00 [rcu_bh]
root         9  0.0  0.0      0     0 ?        R    17:50   0:00 [rcu_sched]
root        10  0.0  0.0      0     0 ?        S    17:50   0:00 [watchdog/0]
root        12  0.0  0.0      0     0 ?        S    17:50   0:00 [kdevtmpfs]
root        13  0.0  0.0      0     0 ?        S<   17:50   0:00 [netns]
root        14  0.0  0.0      0     0 ?        S    17:50   0:00 [khungtaskd]

( output truncated )

You can also use the ps command in combination with grep to see if a particular process is
running. For example, to determine if Emacs is running, type:
$ ps aux | grep emacs

You will see an output something like:

mmasroo+  1454  0.7  0.8 621948 33928 pts/0  Sl   18:37   0:00 emacs
mmasroo+  1479  0.0  0.0 112660   972 pts/0  R+   18:38   0:00 grep --color=auto emacs

6.1.2 Using the top Command


The top command displays a real-time list of processes that are running on the system. It also
displays additional information about the system uptime, current CPU and memory usage,
or total number of running processes, and allows you to perform actions such as sorting the
list or killing a process. To run the top command, type the following at a shell prompt:
$ top

For each listed process, the top command displays the process ID (PID), the effective user
name of the process owner (USER), the priority (PR), the nice value (NI), the amount of

91
6. Linux Processes

Figure 6.1: top command output.

virtual memory the process uses (VIRT), the amount of non-swapped physical memory the
process uses (RES), the amount of shared memory the process uses (SHR), the process status
(S), the percentage of the CPU (%CPU) and memory (%MEM) usage, the accumulated
CPU time (TIME+), and the name of the executable file (COMMAND).
See Figure 6.1 for an example.
Type q in top to terminate the utility and return to the shell prompt.
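
The top command also accepts a few useful command-line options; for example (the user name below is only an example):
$ top -d 5          # refresh the display every 5 seconds instead of the default interval
$ top -u apache     # show only processes owned by the user apache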

6.2 Manage Linux Processes


6.2.1 Sending Signals to Processes with kill and killall
6.2.1.1 kill Command — Kill the Process by Specifying Its PID

All of the following kill commands send a signal to the specified process. Either the signal
name or the signal number can be used. You need to look up the PID of the process and give
it as an argument to kill:
$ kill -TERM pid
$ kill -SIGTERM pid
$ kill -15 pid

• The signal SIGTERM (15) is used to ask a process to stop.


• The signal SIGKILL (9) is used to force a process to stop.
• The SIGHUP (1) signal is used to hang up a process. The effect is that the process
will reread its configuration files, which makes this a useful signal to use after making
modifications to a process configuration file.

92
6.2. Manage Linux Processes

$ ps aux | grep emacs

mmasroo+  1454  0.7  0.8 621948 33928 pts/0  Sl   18:37   0:00 emacs
mmasroo+  1479  0.0  0.0 112660   972 pts/0  R+   18:38   0:00 grep --color=auto emacs

$ kill -9 1454

6.2.1.2 killall Command — Kill Processes by Name

Instead of specifying a process by its PID, you can specify the name of the process. If more
than one process runs with that name, all of them will be killed.

Example: kill all the emacs processes
$ killall -9 emacs
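
A closely related utility is pkill, which matches processes by name and, optionally, by owner (the user name below is only an example):
$ pkill emacs              # send the default SIGTERM to every process named emacs
# pkill -9 -u bob emacs    # as root, forcefully kill emacs processes owned by user bob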

6.2.2 Manage Priority of Linux Processes


There are commands in Unix and Linux operating systems that allow for the adjustment of
the “niceness” value of processes. Adjusting the niceness value of a process sets an advised
CPU priority that the kernel’s scheduler will use to determine which processes get more or
less CPU time.

Being able to adjust the niceness value usually comes in handy in two scenarios.

The first is when you have a process that causes, or may cause, resource contention; for this
scenario we would increase the process’s niceness value.

The second is when you want to increase the resources of a specific process in order to
decrease its run time or give it higher priority. For this scenario we would decrease the
process’s niceness value.

6.2.2.1 The nice Command

Before we start changing niceness values, let us first identify the current values. The simplest
method to determine the default value is to simply run the nice command with no arguments;
nice will then return the current niceness value.

Example:
$ nice
0

In this example we see the niceness value of 0 is the default.

The niceness values of current processes are also pretty simple to find, as they are visible in
the ps command’s long format. In the following example we are going to find the current
niceness value of the netns (PID 13) process.
$ ps -lp 13

F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
1 S     0    13     2  0  60 -20 -     0 rescue ?        00:00:00 netns


The column NI is the niceness value of the netns process. In this case it is currently set to -20.

Niceness values range from -20 (the highest priority, lowest niceness) to 19 (the lowest
priority, highest niceness). In order to prevent a process from stealing CPU time from
high-priority processes, we can increase that process’s niceness value.

Changing the niceness value of a new process is fairly simple. The nice command itself will
run the supplied command with the desired niceness value.

Example:
$ nice -n 19 gedit

The niceness value is 19.

This method is helpful for CPU intensive processes that are not as time sensitive as other
processes running on the system. By increasing the niceness value, we allow other processes
on the system to be scheduled more frequently.

6.2.2.2 The renice Command

To change the niceness value of a running process we will utilize the renice command. The
usage is similar to nice; however, rather than supplying a command to run, we supply a
process ID.

In this example we will be adjusting the priority of the netns (PID 13) process:
# renice -n -10 -p 13
13 (process ID) old priority -20, new priority -10

It is important to note that only the root user can modify the niceness value of other users’
processes. A regular unprivileged user can, however, adjust the niceness value of his or her
own processes, and only towards a nicer (higher) value.
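
Putting the pieces together (the PID below is hypothetical), a CPU-intensive process can be spotted with ps and then demoted with renice:
$ ps -eo pid,ni,pcpu,comm --sort=-pcpu | head -5    # top CPU consumers with their nice values
# renice -n 10 -p 4321                              # as root, lower the priority of the offending process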

6.3 Schedule Future Linux Tasks


Cron is a service that enables you to schedule running a task, often called a job, at regular
times. A cron job is only executed if the system is running at the scheduled time. Users
specify cron jobs in cron table files, also called crontab files. These files are then read by the
crond service, which executes the jobs.

6.3.1 Prerequisites for Cron Jobs


Before scheduling a cron job, perform as root:

1. Install the cronie package:


# yum install cronie


2. The crond service is enabled (made to start automatically at boot time) upon installation.
If you disabled the service, enable it:
# systemctl enable crond.service

3. Start the crond service for the current session:
# systemctl start crond.service

6.3.2 Scheduling a Cron Job


6.3.2.1 Scheduling a Job as root User

The root user uses the cron table in /etc/crontab, or, preferably, creates a cron table file in
/etc/cron.d/. Use this procedure to schedule a job as root:

1. Choose:
• in which minutes of an hour to execute the job. For example, use 0,10,20,30,40,50
or 0/10 to specify every 10 minutes of an hour.
• in which hours of a day to execute the job. For example, use 17-20 to specify time
from 17:00 to 20:59.
• in which days of a month to execute the job. For example, use 15 to specify 15th
day of a month.
• in which months of a year to execute the job. For example, use Jun,Jul,Aug or
6,7,8 to specify the summer months of the year.
• in which days of the week to execute the job. For example, use * for the job to
execute independently of the day of week.
• Combine the chosen values into the time specification. The above example values
result into this specification:
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug *

2. Specify the user. The job will execute as if run by this user. For example, use root.
3. Specify the command to execute. For example, use
/usr/local/bin/my-script.sh
4. Put the above specifications into a single line:
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * root /usr/local/bin/my-script.sh

5. Add the resulting line to /etc/crontab, or, preferably, create a cron table file in
/etc/cron.d/ and add the line there.

The job will now run as scheduled.


For full reference on how to specify a job, see the crontab(5) manual page.


6.3.3 Scheduling a Job as Non-root User


Non-root users can use the crontab utility to configure cron jobs. The jobs will run as if
executed by that user.

1. To create a cron job as a specific user: From the user’s shell, run:
$ crontab -e

2. This will start editing of the user’s own crontab file using the editor specified by the
VISUAL or EDITOR environment variable.
3. Specify the job in the same way as in Section 6.3.2.1 as root user, but leave out the field
with user name. For example, instead of adding:
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * bob /home/bob/bin/script.sh

add:
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * /home/bob/bin/script.sh

4. Save the file and exit the editor.


5. To verify the new job, list the contents of the current user’s crontab file by running:
$ crontab -l
@daily /home/bob/bin/script.sh

6.3.4 Scheduling Hourly, Daily, Weekly, and Monthly Jobs


To schedule an hourly, daily, weekly, or monthly job:

1. Put the actions you want your job to execute into a shell script.
2. Put the shell script into one of the following directories:
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/

From now, your script will be executed — the crond service automatically executes
any scripts present in /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, and
/etc/cron.monthly directories at their corresponding times.
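
For instance (the script name is hypothetical), a daily job can be installed simply by copying an executable script into the corresponding directory:
# cp my-backup.sh /etc/cron.daily/
# chmod +x /etc/cron.daily/my-backup.sh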

6.4 Configuring at Jobs


To schedule a one-time job at a specific time, type the command at time, where time is the
time to execute the command.
The argument time can be one of the following:

96
6.4. Configuring at Jobs

HH:MM format — For example, 04:00 specifies 4:00AM. If the time is already past, it is
executed at the specified time the next day.
midnight — Specifies 12:00AM.
noon — Specifies 12:00PM.
teatime — Specifies 4:00PM.
month-name day year format — For example, January 15 2002 specifies the 15th day of
January in the year 2002. The year is optional.
MMDDYY, MM/DD/YY, or MM.DD.YY formats — For example, 011502 for the 15th day
of January in the year 2002.
now + time — time is in minutes, hours, days, or weeks. For example, now + 5 days
specifies that the command should be executed at the same time five days from now.

The time must be specified first, followed by the optional date.

After typing the at command with the time argument, the at> prompt is displayed. Type
the command to execute, press [Enter], and type [Ctrl]-[D]. More than one command can be
specified by typing each command followed by the [Enter] key. After typing all the commands,
press [Enter] to go to a blank line and type [Ctrl]-[D]:
$ at 21:30
at> ps
at> ls -l /tmp
at> echo "My jobs" > /tmp/myjobs.txt

Alternatively, a shell script can be entered at the prompt, pressing [Enter] after each line in
the script, and typing [Ctrl]-[D] on a blank line to exit. If a script is entered, the shell used is
the shell set in the user’s SHELL environment variable, the user’s login shell, or /bin/sh
(whichever is found first).

After you enter one or more jobs, you can view the current list of scheduled jobs with the atq
command:
$ atq

If the set of commands or the script tries to display information to standard output, the
output is emailed to the user.

Usage of the at command can be restricted through the /etc/at.allow and /etc/at.deny files.
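
A queued job can also be removed before it runs with the atrm command, using the job number reported by atq (the job number and timestamp below are illustrative):
$ atq
1       Mon Sep 18 21:30:00 2017 a mmasroor
$ atrm 1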

Chapter 7

Services and Daemons


3/4 classes

Chapter Goals

1. Start and stop services and configure services to start automatically at boot.
2. Managing system services — listing, displaying status of,
starting, stopping, restarting, enabling, disabling services.
3. Configure a system to use time services.

7.1 Introduction to Systemd


Systemd is a system and service manager for Linux operating systems. It is designed to be
backwards compatible with SysV init scripts, and provides a number of features such as
parallel startup of system services at boot time, on-demand activation of daemons, support
for system state snapshots, or dependency-based service control logic. In Red Hat Enterprise
Linux 7, systemd replaces Upstart as the default init system.

7.2 Managing System Services


Previous versions of Red Hat Enterprise Linux, which were distributed with SysV init or
Upstart, used init scripts located in the /etc/rc.d/init.d/ directory. These init scripts were
typically written in Bash, and allowed the system administrator to control the state of services
and daemons in their system. In Red Hat Enterprise Linux 7, these init scripts have been
replaced with service units. Service units end with the .service file extension and serve a

99
7. Services and Daemons

Command                                      Description

systemctl start name.service                 Starts a service.
systemctl stop name.service                  Stops a service.
systemctl restart name.service               Restarts a service.
systemctl try-restart name.service           Restarts a service only if it is running.
systemctl reload name.service                Reloads configuration.
systemctl status name.service                Checks if a service is running.
systemctl is-active name.service
systemctl list-units --type service --all    Displays the status of all services.

Table 7.1: systemctl command summary.

similar purpose as init scripts. To view, start, stop, restart, enable, or disable system services,
use the systemctl command as described in Table 7.1.

The service and chkconfig commands are still available in the system and work as expected,
but are only included for compatibility reasons and should be avoided.

7.2.1 Listing Services


To list all currently loaded service units, type the following at a shell prompt:
# systemctl list-units --type service

The output should be something like:


UNIT                     LOAD   ACTIVE SUB     DESCRIPTION
abrt-ccpp.service        loaded active exited  Install ABRT coredump hook
abrt-oops.service        loaded active running ABRT kernel log watcher
abrt-xorg.service        loaded active running ABRT Xorg log watcher
abrtd.service            loaded active running ABRT Automated Bug Reporting Tool
accounts-daemon.service  loaded active running Accounts Service
alsa-state.service       loaded active running Manage Sound Card State (restore and store)
atd.service              loaded active running Job spooling tools
auditd.service           loaded active running Security Auditing Service
avahi-daemon.service     loaded active running Avahi mDNS/DNS-SD Stack

... ... ...

upower.service           loaded active running Daemon for power management
wpa_supplicant.service   loaded active running WPA Supplicant daemon

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

65 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.


i Specifying Service Units

For clarity, all command examples use full unit names with the .service file extension,
for example:
# systemctl stop nfs-server.service

However, the file extension can be omitted, in which case the systemctl utility assumes the
argument is a service unit. The following command is equivalent to the one above:
# systemctl stop nfs-server

Additionally, some units have alias names. Aliases can be shorter than the actual unit
names and can be used instead of them. To find all aliases that can be used for a particular
unit, use:
# systemctl show nfs-server.service -p Names

For each service unit file, this command displays its full name (UNIT) followed by a note
whether the unit file has been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit
file activation state, and a short description (DESCRIPTION).

By default, the systemctl list-units command displays only active units. If you want to
list all loaded units regardless of their state, run this command with the --all or -a command
line option:
# systemctl list-units --type service --all

You can also list all available service units to see if they are enabled. To do so, type:
# systemctl list-unit-files --type service

For each service unit, this command displays its full name (UNIT FILE) followed by information
whether the service unit is enabled or not (STATE). For information on how to determine the
status of individual service units, see Section 7.2.2.



To list all installed service unit files to determine if they are enabled, type:
# systemctl list-unit-files --type service
The output should look like:
UNIT FILE                     STATE
abrt-ccpp.service             enabled
abrt-oops.service             enabled
abrt-pstoreoops.service       disabled
abrt-vmcore.service           enabled
abrt-xorg.service             enabled
abrtd.service                 enabled
accounts-daemon.service       enabled
alsa-restore.service          static
alsa-state.service            static

... ... ...

unbound-anchor.service        static
upower.service                disabled
usb_modeswitch@.service       static
usbmuxd.service               static
vgauthd.service               disabled
virtlockd.service             indirect
virtlogd.service              indirect
vmtoolsd.service              enabled
wacom-inputattach@.service    static
wpa_supplicant.service        disabled
zram.service                  static

281 unit files listed.


7.2.2 Displaying Service Status


To display detailed information about a service unit that corresponds to a system service,
type the following at a shell prompt:
# systemctl status name.service

Replace name with the name of the service unit you want to inspect (for example, gdm). This
command displays the name of the selected service unit followed by its short description,
and if it is executed by the root user, also the most recent log entries.

To only verify that a particular service unit is running, run the following command:
# systemctl is-active name.service

Similarly, to determine whether a particular service unit is enabled, type:


# systemctl is-enabled name.service

Note that both systemctl is-active and systemctl is-enabled return an exit status of 0
if the specified service unit is running or enabled.
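
Because of this exit status behaviour, both commands are convenient in shell scripts and one-liners; a small illustration (the unit name is only an example):
# systemctl is-active httpd.service && echo "httpd is running"
# systemctl is-enabled httpd.service || systemctl enable httpd.service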

The service unit for the GNOME Display Manager is named gdm.service. To determine the
current status of this service unit, type the following at a shell prompt as root:
# systemctl status gdm.service

You should see an output like:


● gdm.service - GNOME Display Manager
   Loaded: loaded (/usr/lib/systemd/system/gdm.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-08-08 16:19:43 +06; 5h 42min ago
  Process: 1117 ExecStartPost=/bin/bash -c TERM=linux /usr/bin/clear > /dev/tty1 (code=exited, status=0/SUCCESS)
 Main PID: 1095 (gdm)
   CGroup: /system.slice/gdm.service
           ├─1095 /usr/sbin/gdm
           └─1134 /usr/bin/X :0 -background none -noreset -audit 4 -verbose -auth /run/gdm/auth-for-gdm-EK5B5S/database -seat seat0 -nolisten tcp vt1

Aug 08 16:19:41 localhost.localdomain systemd[1]: Starting GNOME Display Manager...
Aug 08 16:19:43 localhost.localdomain systemd[1]: Started GNOME Display Manager.

To determine what services are ordered to start after the specified service, type the following
at a shell prompt as root:
# systemctl list-dependencies --before gdm.service

The output should look like:


gdm.service
● ├─dracut-shutdown.service
● ├─graphical.target
● │ ├─systemd-readahead-done.service
● │ ├─systemd-readahead-done.timer
● │ ├─systemd-update-utmp-runlevel.service
● ├─shutdown.target
● ├─systemd-reboot.service
● ├─final.target
● ├─systemd-reboot.service

7.2.3 Starting a Service


To start a service unit that corresponds to a system service, type the following at a shell
prompt as root:
# systemctl start name.service

Replace name with the name of the service unit you want to start (for example, gdm). This
command starts the selected service unit in the current session. For information on how to
enable a service unit to be started at boot time, see Section 7.2.6. For information on how to
determine the status of a certain service unit, see Section 7.2.2.

The service unit for the Apache HTTP Server is named httpd.service. To activate this
service unit and start the httpd daemon in the current session, run the following command
as root:
# systemctl start httpd.service

7.2.4 Stopping a Service


To stop a service unit that corresponds to a system service, type the following at a shell
prompt as root:
# systemctl stop name.service

Replace name with the name of the service unit you want to stop (for example, bluetooth).
This command stops the selected service unit in the current session. For information on how
to disable a service unit and prevent it from being started at boot time, see Section 7.2.7. For
information on how to determine the status of a certain service unit, see Section 7.2.2.

The service unit for the bluetoothd daemon is named bluetooth.service. To deactivate
this service unit and stop the bluetoothd daemon in the current session, run the following
command as root:
# systemctl stop bluetooth.service

7.2.5 Restarting a Service


To restart a service unit that corresponds to a system service, type the following at a shell
prompt as root:
# systemctl restart name.service

Replace name with the name of the service unit you want to restart (for example, httpd).
This command stops the selected service unit in the current session and immediately starts it
again. Importantly, if the selected service unit is not running, this command starts it too. To


tell systemd to restart a service unit only if the corresponding service is already running, run
the following command as root:
# systemctl try-restart name.service

Certain system services also allow you to reload their configuration without interrupting
their execution. To do so, type as root:
# systemctl reload name.service

Note that system services that do not support this feature ignore this command altogether.
For convenience, the systemctl command also supports the reload-or-restart and
reload-or-try-restart commands that restart such services instead. For information
on how to determine the status of a certain service unit, see Section 7.2.2.
In order to prevent users from encountering unnecessary error messages or partially rendered
web pages, the Apache HTTP Server allows you to edit and reload its configuration without
the need to restart it and interrupt actively processed requests. To do so, type the following at
a shell prompt as root:
# systemctl reload httpd.service

7.2.6 Enabling a Service


To configure a service unit that corresponds to a system service to be automatically started at
boot time, type the following at a shell prompt as root:
# systemctl enable name.service

Replace name with the name of the service unit you want to enable (for example, httpd).
This command reads the [Install] section of the selected service unit and creates
appropriate symbolic links to the /usr/lib/systemd/system/name.service file in the
/etc/systemd/system/ directory and its subdirectories. This command does not, however,
rewrite links that already exist. If you want to ensure that the symbolic links are re-created,
use the following command as root:
# systemctl reenable name.service

This command disables the selected service unit and immediately enables it again. For
information on how to determine whether a certain service unit is enabled to start at boot
time, see Section 7.2.2. For information on how to start a service in the current session,
see Section 7.2.3.
To configure the Apache HTTP Server to start automatically at boot time, run the following
command as root:
# systemctl enable httpd.service

The output should be like:


Created symlink from
/etc/systemd/system/multi-user.target.wants/httpd.service to
/usr/lib/systemd/system/httpd.service.


7.2.7 Disabling a Service


To prevent a service unit that corresponds to a system service from being automatically
started at boot time, type the following at a shell prompt as root:
# systemctl disable name.service

Replace name with the name of the service unit you want to disable (for example,
bluetooth). This command reads the [Install] section of the selected service unit and
removes appropriate symbolic links to the /usr/lib/systemd/system/name.service file
from the /etc/systemd/system/ directory and its subdirectories. In addition, you can mask
any service unit to prevent it from being started manually or by another service. To do so,
run the following command as root:
# systemctl mask name.service

This command replaces the /etc/systemd/system/name.service file with a symbolic link
to /dev/null, rendering the actual unit file inaccessible to systemd. To revert this action and
unmask a service unit, type as root:
# systemctl unmask name.service

For information on how to determine whether a certain service unit is enabled to start at
boot time, see Section 7.2.6. For information on how to stop a service in the current session,
see Section 7.2.4.
To prevent bluetooth.service service unit from starting at boot time, type the following at
a shell prompt as root:
# systemctl disable bluetooth.service
Removed symlink /etc/systemd/system/bluetooth.target.wants/bluetooth.service.
Removed symlink /etc/systemd/system/dbus-org.bluez.service.

7.3 Configure a System to Use Time Services


NTP (Network Time Protocol) is a protocol to keep server time synchronized: one or several
master servers provide time to client servers that can themselves provide time to other client
servers (the notion of stratum).
Two main packages are used in Red Hat Enterprise Linux 7 to set up the client side:

ntp : the classic package, already available in RHEL 6, RHEL 5, etc. It can be used both as
an NTP client and a server.
chrony : a newer solution better suited to portable PCs or machines with intermittent network
connections (time synchronization is quicker). It is primarily used as an NTP client.

Chrony should be considered for all systems which are frequently suspended or otherwise
intermittently disconnected and reconnected to a network, for example mobile and virtual
systems.


The NTP daemon (ntpd) should be considered for systems which are normally kept
permanently on. Systems which are required to use broadcast or multicast IP, or to perform
authentication of packets with the Autokey protocol, should consider using ntpd.
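
If chrony is the better fit, the client side can be set up much like ntp; a minimal sketch using the package and service names shipped with Red Hat Enterprise Linux 7:
# yum install chrony
# systemctl enable chronyd
# systemctl start chronyd
# chronyc tracking
The chronyc tracking command reports the current synchronization status.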

7.3.1 Prerequisites
Before anything else, you need to assign the correct time zone. To get the current configuration,
type as root:
# timedatectl

We will get a plethora of information:


      Local time: Fri 2017-08-11 22:11:23 +06
  Universal time: Fri 2017-08-11 16:11:23 UTC
        RTC time: Fri 2017-08-11 16:11:23
       Time zone: Asia/Dhaka (+06, +0600)
     NTP enabled: no
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

Perhaps the time zone has been set correctly during installation. To change it, we need to get
the list of all the available time zones. Type as root:
# timedatectl list-timezones

We will get a long list:


Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
Africa/Algiers
Africa/Asmara
... ... ...
Asia/Damascus
Asia/Dhaka
Asia/Dili
Asia/Dubai
... ... ...
Pacific/Tarawa
Pacific/Tongatapu
Pacific/Wake
Pacific/Wallis
UTC

Finally, to set a specific time zone (here Asia/Dhaka), type:


# timedatectl set-timezone Asia/Dhaka

Then, to check your new configuration, type:


# timedatectl

7.3.2 The NTP Package


Install the ntp package:
# yum install ntp

Activate the ntp service at boot:


# systemctl enable ntpd

Start the ntp service:


# systemctl start ntpd

To get some information about the time synchronization process, type:


# ntpq -p

Typical output looks like:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*krc01.hrctech.n 204.152.184.72   2 u   31   64    1  167.076    0.474   0.403
+ns1.hrctech.net 114.130.13.6     3 u   30   64    1  167.629    2.179   0.239

Alternatively, to get a basic report, type:


# ntpstat

To quickly synchronize a server, type:


# systemctl stop ntpd
# ntpdate pool.ntp.org
# systemctl start ntpd

Chapter 8

System Logs
1/2 class

Chapter Goals

1. Locate and interpret system log files and journals.


2. Understand the interaction of rsyslog and the journal.
3. Using the journal.

8.1 Locating Log Files


Log files are files that contain messages about the system, including the kernel, services, and
applications running on it. There are different log files for different information. For example,
there is a default system log file, a log file just for security messages, and a log file for cron
tasks.

Log files can be very useful when trying to troubleshoot a problem with the system such as
trying to load a kernel driver or when looking for unauthorized login attempts to the system.
This chapter discusses where to find log files, how to view log files, and what to look for in
log files.

Some log files are controlled by a daemon called rsyslogd. The rsyslogd daemon is an
enhanced replacement for the previous sysklogd, and provides extended filtering,
encryption-protected relaying of messages, various configuration options, input and output
modules, and support for transport via the TCP or UDP protocols. Note that rsyslog is
compatible with sysklogd.


i Common Techniques for Viewing Logs

Since the log files are simple text files, the common techniques for viewing text files are
applicable here. You can use these commands to see the log files: i. less, ii. more, iii. cat,
iv. grep, v. tail, vi. zcat, vii. zgrep, and viii. zmore.
Open the terminal or log in as the root user using the ssh command. Go to the /var/log
directory using the following cd command:
cd /var/log

To view a common log file called /var/log/messages, use any one of the following
commands:
# less /var/log/messages
# more -f /var/log/messages
# cat /var/log/messages
# tail -f /var/log/messages
# grep -i error /var/log/messages

A list of log files maintained by rsyslogd can be found in the /etc/rsyslog.conf
configuration file. Most log files are located in the /var/log/ directory. Some applications
such as httpd and samba have a directory within /var/log/ for their log files.

You may notice multiple files in the /var/log/ directory with numbers after them (for example,
cron-20100906). These numbers represent a time stamp that has been added to a rotated log
file. Log files are rotated so their file sizes do not become too large. The logrotate package
contains a cron task that automatically rotates log files according to the /etc/logrotate.conf
configuration file and the configuration files in the /etc/logrotate.d/ directory.

8.2 Interaction of Rsyslog and Journal


Rsyslog and Journal, the two logging applications present on your system, have several
distinctive features that make them suitable for specific use cases. In many situations it is
useful to combine their capabilities, for example to create structured messages and store them
in a file database. A communication interface needed for this cooperation is provided by
input and output modules on the side of Rsyslog and by the Journal’s communication socket.
By default, rsyslogd uses the imjournal module as a default input mode for journal files.
With this module, you import not only the messages but also the structured data provided by
journald. Also, older data can be imported from journald.


8.3 Using the Journal


The Journal is a component of systemd that is responsible for viewing and management of log
files. It can be used in parallel, or in place of a traditional syslog daemon, such as rsyslogd.
The Journal was developed to address problems connected with traditional logging. It is
closely integrated with the rest of the system, supports various logging technologies and
access management for the log files.

Logging data is collected, stored, and processed by the Journal’s journald service. It creates
and maintains binary files called journals based on logging information that is received from
the kernel, from user processes, from standard output, and standard error output of system
services or via its native API. These journals are structured and indexed, which provides
relatively fast seek times. Journal entries can carry a unique identifier. The journald service
collects numerous meta data fields for each log message. The actual journal files are secured,
and therefore cannot be manually edited.

8.3.1 Viewing Log Files


To access the journal logs, use the journalctl tool. For a basic view of the logs type as root:
# journalctl

The output of this command is a long list of all log entries generated on the system, including
messages generated by system components and by users.

The structure of this output is similar to the one used in /var/log/messages but with certain
improvements:

• The priority of entries is marked visually. Lines of error priority and higher are
highlighted with red color and a bold font is used for lines with notice and warning
priority.
• The time stamps are converted for the local time zone of your system.
• All logged data is shown, including rotated logs.
• The beginning of a boot is tagged with a special line.

The following is an example output provided by the journalctl tool. When called without
parameters, the listed entries begin with a time stamp, then the host name and application
that performed the operation is mentioned followed by the actual message.
-- Logs begin at Sat 2017-08-12 00:58:34 +06, end at Sat 2017-08-12 01:47:51 +06. --
Aug 12 00:58:34 localhost.localdomain systemd-journal[87]: Runtime journal is using 8.0M (max allowed 189.5M, trying to leave 284.3M free of 1.8G available → current limit 189.5M).
Aug 12 00:58:34 localhost.localdomain kernel: Initializing cgroup subsys cpuset
Aug 12 00:58:34 localhost.localdomain kernel: Initializing cgroup subsys cpu
Aug 12 00:58:34 localhost.localdomain kernel: Initializing cgroup subsys cpuacct
Aug 12 00:58:34 localhost.localdomain kernel: Linux version 3.10.0-693.el7.x86_64 (mockbuild@x86-038.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Thu Jul 6 19:56:57 EDT 2017


Aug 12 00:58:34 localhost.localdomain kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet LANG=en_US.UTF-8
Aug 12 00:58:34 localhost.localdomain kernel: e820: BIOS-provided physical RAM map:
Aug 12 00:58:34 localhost.localdomain kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 12 00:58:34 localhost.localdomain kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved

... ... ...

In many cases, only the latest entries in the journal log are relevant. The simplest way to
reduce journalctl output is to use the -n option that lists only the specified number of most
recent log entries:
# journalctl -n Number

Replace Number with the number of lines to be shown. When no number is specified,
journalctl displays the ten most recent entries.
The journalctl command allows controlling the form of the output with the following
syntax:
# journalctl -o form

Replace form with a keyword specifying a desired form of output. There are several options:

verbose : returns full-structured entry items with all fields.
export : creates a binary stream suitable for backups and network transfer.
json : formats entries as JSON data structures.

For the full list of keywords, see the journalctl(1) manual page.

To view full meta data about all entries, type:


# journalctl -o verbose

A very detailed output (around 25,000 lines) is shown:


-- Logs begin at Sat 2017-08-12 19:59:41 +06, end at Sat 2017-08-12 20:06:21 +06. --
Sat 2017-08-12 19:59:41.544066 +06 [s=95d0965ff5d84fb9b2597600261cad37;i=1;b=adf9fd937eb34b958daf5627d0076b
    PRIORITY=6
    _TRANSPORT=driver
    MESSAGE=Runtime journal is using 8.0M (max allowed 189.5M, trying to leave 284.3M free of 1.8G availabl
    MESSAGE_ID=ec387f577b844b8fa948f33cad9a75e6
    _PID=86
    _UID=0
    _GID=0
    _COMM=systemd-journal

... ... ...

8.3.2 Access Control


By default, Journal users without root privileges can only see log files generated by them.
The system administrator can add selected users to the adm group, which grants them access
to complete log files. To do so, type as root:


# usermod -a -G adm username

Here, replace username with a name of the user to be added to the adm group. This user then
receives the same output of the journalctl command as the root user. Note that access
control only works when persistent storage is enabled for Journal.

8.3.3 Using the Live View


When called without parameters, journalctl shows the full list of entries, starting with the
oldest entry collected. With the live view, you can supervise the log messages in real time as
new entries are continuously printed as they appear. To start journalctl in live view mode,
type:
# journalctl -f

This command returns a list of the ten most current log lines. The journalctl utility then
stays running and waits for new changes to show them immediately.

8.3.4 Filtering Messages


The output of the journalctl command executed without parameters is often extensive;
therefore, you can use various filtering methods to extract information to meet your needs.

8.3.4.1 Filtering by Priority

Log messages are often used to track erroneous behavior on the system. To view only entries
with a selected or higher priority, use the following syntax:
# journalctl -p priority

Here, replace priority with one of the following keywords (or with a number): debug (7),
info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0).

To view only entries with error or higher priority, use:


# journalctl -p err

8.3.4.2 Filtering by Time

To view log entries only from the current boot, type:


# journalctl -b

If you reboot your system just occasionally, the -b will not significantly reduce the output of
journalctl. In such cases, time-based filtering is more helpful:
# journalctl --since=value --until=value

With --since and --until, you can view only log messages created within a specified time
range. You can pass values to these options in form of date or time or both as shown in the
following example.


Filtering options can be combined to reduce the set of results according to specific requests.
For example, to view the warning or higher priority messages from a certain point in time,
type:
# journalctl -p warning --since="2017-9-16 23:59:59"

If components of the above format are left off, some defaults will be applied. For instance,
if the date is omitted, the current date will be assumed. If the time component is missing,
00:00:00 (midnight) will be substituted. The seconds field can be left off as well to default
to "00":
# journalctl --since "2015-01-10" --until "2015-01-11 03:00"

The journal also understands some relative values and named shortcuts. For instance, you
can use the words yesterday, today, tomorrow, or now. Relative times can be given by prepending
- or + to a numeric value, or by using words such as ago in a sentence-like construction.

To get the data from yesterday, you could type:


# journalctl --since yesterday

If you received reports of a service interruption starting at 9:00 AM and continuing until an
hour ago, you could type:
# journalctl --since 09:00 --until "1 hour ago"

As you can see, it’s relatively easy to define flexible windows of time to filter the entries you
wish to see.

8.3.5 Enabling Persistent Storage


By default, Journal stores log files only in memory or a small ring-buffer in the
/run/log/journal/ directory. This is sufficient to show recent log history with journalctl.
This directory is volatile; log data is not saved permanently. With the default configuration,
syslog reads the journal logs and stores them in the /var/log/ directory. With persistent
logging enabled, journal files are stored in /var/log/journal which means they persist after
reboot.

Enabled persistent storage has the following advantages:

• Richer data is recorded for troubleshooting over a longer period of time.
• For immediate troubleshooting, richer data is available after a reboot.
• The server console currently reads data from the Journal, not from the log files.

Persistent storage has also certain disadvantages:

• Even with persistent storage, the amount of data stored depends on free memory; there
is no guarantee that a specific time span will be covered.
• More disk space is needed for logs.


To enable persistent storage for Journal, create the journal directory manually as shown in the
following example. As root type:
# mkdir -p /var/log/journal/

Then, restart journald to apply the change:


# systemctl restart systemd-journald
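Once persistent storage is in place and the system has been rebooted at least once, messages
from previous boots become available. Assuming the installed version of journalctl supports
these options (it does on Red Hat Enterprise Linux 7), you can list the recorded boots and then
inspect, for example, the error messages of the previous boot:

# journalctl --list-boots
# journalctl -b -1 -p err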

Chapter 9

Linux Networking
1 class

Chapter Goals

1. Configure networking.
2. Start, stop, and check the status of network services.
3. Configure network services to start automatically at boot.
4. Configure hostname resolution statically or dynamically.

9.1 Computer Network


A computer network, also referred to as just a network, consists of two or more computers,
and typically other devices as well (such as printers, external hard drives, modems and
routers), that are linked together so that they can communicate with each other and thereby
exchange commands and share data, hardware and other resources.

The devices on a network are referred to as nodes. They are analogous to the knots in nets
that have traditionally been used by fishermen and others. Nodes can be connected using any
of various types of media, including twisted pair copper wire cable, optical fiber cable, coaxial
cable and radio waves. And they can be arranged according to several basic topologies (i.e.,
layouts), including bus (in which all nodes are connected along a single cable), star (all nodes
are connected to a central node), tree (nodes successively branch off from other nodes) and
ring.

The smallest and simplest networks are local area networks (LANs), which extend over only a
small area, typically within a single building or a part thereof. A home network is a type of
LAN that is contained within a user’s residence. Wide area networks (WANs) can extend
over a large geographic area and are connected via the telephone network or radio waves. A
metropolitan area network (MAN) is designed to serve a town or city, and a campus area
network is designed to serve a university or other educational institution.

9.2 Network Terminologies


9.2.1 IP Address
An Internet Protocol address (IP address) is a numerical label assigned to each device
connected to a computer network that uses the Internet Protocol for communication. An IP
address serves two principal functions: host or network interface identification and location
addressing.

Version 4 of the Internet Protocol (IPv4) defines an IP address as a 32-bit number. However,
because of the growth of the Internet and the depletion of available IPv4 addresses, a new
version of IP (IPv6), using 128 bits for the IP address, was developed in 1995, and standardized
as RFC 2460 in 1998. IPv6 deployment has been ongoing since the mid-2000s.

IP addresses are usually written and displayed in human-readable notations, such as


172.16.254.1 in IPv4, and 2001:db8:0:1234:0:567:8:1 in IPv6.

9.2.2 Subnet Mask


An IP address has two components, the network address and the host address. A subnet
mask separates the IP address into the network and host addresses (<network><host>).
Subnetting further divides the host part of an IP address into a subnet and host address
(<network><subnet><host>) if additional subnetworks are needed. It is called a subnet mask
because it is used to identify the network address of an IP address by performing a bitwise
AND operation between the IP address and the netmask.

A Subnet mask is a 32-bit number that masks an IP address, and divides the IP address into
network address and host address. Subnet Mask is made by setting network bits to all 1s and
setting host bits to all 0s.
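As a simple worked example, applying the netmask 255.255.255.0 to the address 192.168.1.130
(both values chosen only for illustration) yields the network address 192.168.1.0:

IP address : 192.168.1.130   11000000.10101000.00000001.10000010
Netmask    : 255.255.255.0   11111111.11111111.11111111.00000000
                             ----------------------------------- (bitwise AND)
Network    : 192.168.1.0     11000000.10101000.00000001.00000000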

9.2.3 MAC Addresses


A media access control address (MAC address) of a computer is a unique identifier assigned
to network interfaces for communications at the data link layer of a network segment. MAC
addresses are used as a network address for most IEEE 802 network technologies, including
Ethernet and Wi-Fi. Logically, MAC addresses are used in the media access control protocol
sublayer of the OSI reference model.

MAC addresses are most often assigned by the manufacturer of a network interface controller
(NIC) and are stored in its hardware, such as the card’s read-only memory or some other
firmware mechanism. If assigned by the manufacturer, a MAC address usually encodes the
manufacturer’s registered identification number and may be referred to as the burned-in
address (BIA). It may also be known as an Ethernet hardware address (EHA), hardware
address or physical address (not to be confused with a memory physical address). This can
be contrasted to a programmed address, where the host device issues commands to the NIC
to use an arbitrary address.

Application or Tool     Description
NetworkManager          The default networking daemon
nmtui                   A simple curses-based text user interface (TUI) for NetworkManager
nmcli                   A command-line tool provided to allow users and scripts to interact with NetworkManager
control-center          A graphical user interface tool provided by the GNOME Shell
nm-connection-editor    A GTK+ 3 application available for certain tasks not yet handled by control-center

Table 9.1: A summary of networking tools and applications.

9.3 NetworkManager
In Red Hat Enterprise Linux 7, the default networking service is provided by NetworkManager,
which is a dynamic network control and configuration daemon that attempts to keep network
devices and connections up and active when they are available. The traditional ifcfg type
configuration files are still supported.

NetworkManager can configure network aliases, IP addresses, static routes, DNS information,
and VPN connections, as well as many connection-specific parameters. NetworkManager
provides an API via D-Bus which allows applications to query and control network
configuration and state.

Finally, NetworkManager now maintains the state of devices after the reboot process and takes
over interfaces which are set into managed mode during restart. In addition, NetworkManager
can handle devices which are not explicitly set as unmanaged but controlled manually by the
user or another network service.

Table 9.1 shows a summary of networking tools and applications.


9.4 Installing NetworkManager


NetworkManager is installed by default on Red Hat Enterprise Linux. If necessary, to ensure
that it is, enter the following command as the root user:
# yum install NetworkManager

9.4.1 The NetworkManager Daemon


The NetworkManager daemon runs with root privileges and is, by default, configured to start
up at boot time. You can determine whether the NetworkManager daemon is running by
entering this command:
# systemctl status NetworkManager

● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service;
           enabled; vendor preset: enabled)
   Active: active (running) since Sun 2017-08-13 15:47:02 +06; 3min 20s
           ago
     Docs: man:NetworkManager(8)
 Main PID: 751 (NetworkManager)

... ... ...

The systemctl status command will report NetworkManager as Active: inactive
(dead) if the NetworkManager service is not running. To start it for the current session enter
the following command as the root user:
# systemctl start NetworkManager

Run the systemctl enable command to ensure that NetworkManager starts up every time
the system boots:
# systemctl enable NetworkManager

For more information on starting, stopping and managing services, see Chapter 7.

9.4.2 Interacting with NetworkManager


Users do not interact with the NetworkManager system service directly. Instead, users
perform network configuration tasks using graphical and command-line user interface tools.
The following tools are available in Red Hat Enterprise Linux 7:

1. A simple curses-based text user interface (TUI) for NetworkManager, nmtui, is available.
2. A command-line tool, nmcli, is provided to allow users and scripts to interact with
NetworkManager. Note that nmcli can be used on systems without a GUI such as
servers to control all aspects of NetworkManager. It is on an equal footing with the GUI
tools.


Figure 9.1: Using nmtui to configure network: (a) activate a connection; (b) deactivate the modified connection; (c) reactivate the modified connection.

3. The GNOME Shell also provides a network icon in its Notification Area representing
network connection states as reported by NetworkManager. The icon has multiple states
that serve as visual indicators for the type of connection you are currently using.
4. A graphical user interface tool called control-center, provided by the GNOME Shell, is
available for desktop users. It incorporates a Network settings tool. To start it, press
the Super key to enter the Activities Overview, type control network and then press
Enter. The Super key appears in a variety of guises, depending on the keyboard and
other hardware, but often as either the Windows or Command key, and typically to the
left of the Space key.
5. A graphical user interface tool, nm-connection-editor, is available for certain tasks
not yet handled by control-center. To start it, press the Super key to enter the
Activities Overview, type network connections or nm-connection-editor and then
press Enter.

9.5 Configure IP Networking


9.5.1 Using the Text User Interface, nmtui
The NetworkManager text user interface (TUI) tool, nmtui, provides a text interface to configure
networking by controlling NetworkManager. The tool is contained in the NetworkManager-
tui package. Usually, it is not installed along with NetworkManager by default. To install
NetworkManager-tui, issue the following command as root:
# yum install NetworkManager -tui

To start nmtui, issue a command as follows:


# nmtui

The text user interface appears. To navigate, use the arrow keys or press Tab to step forwards
and press Shift +Tab to step back through the options. Press Enter to select an option. The
Space bar toggles the status of a check box.


Applying changes to a modified connection which is already active requires a reactivation
of the connection. In this case, follow the procedure below:

1. Select the Activate a connection menu entry (Figure 9.1(a)).


2. Select the modified connection. On the right, click the Deactivate button (Figure 9.1(b)).
3. Choose the connection again and click the Activate button (Figure 9.1(c)).

9.5.2 Using the NetworkManager Command Line Tool, nmcli


The nmcli (NetworkManager Command Line Interface) command-line utility is used for
controlling NetworkManager and reporting network status. It can be utilized as a replacement
for nm-applet or other graphical clients. nmcli is used to create, display, edit, delete, activate,
and deactivate network connections, as well as control and display network device status.

The nmcli utility can be used by both users and scripts for controlling NetworkManager:

For servers, headless machines, and terminals, nmcli can be used to control NetworkManager
directly, without GUI, including creating, editing, starting and stopping network connections
and viewing network status.

For scripts, nmcli supports a terse output format which is better suited for script processing.
It is a way to integrate network configuration instead of managing network connections
manually.

The basic format of a nmcli command is as follows:


# nmcli OPTIONS OBJECT { COMMAND | help }

where OBJECT can be one of the following options: general, networking, radio, connection,
device, agent, and monitor. You can use any prefix of these options in your commands. For
example: nmcli con help.

9.5.2.1 Adding a Dynamic Ethernet Connection

To add an Ethernet configuration profile with dynamic IP configuration, allowing DHCP to
assign the network configuration, a command in the following format can be used:
# nmcli connection add type ethernet con-name \
    connection-name ifname interface-name

For example, to create a dynamic connection profile named my-office, issue a command as
follows:
# nmcli con add type ethernet con-name my-office ifname ens3

Connection ’my-office’ (fb157a65-ad32-47ed-858c-102a48e064a2)
successfully added.

NetworkManager will set its internal parameter connection.autoconnect to yes.
NetworkManager will also write out settings to
/etc/sysconfig/network-scripts/ifcfg-my-office, where the ONBOOT directive will be
set to yes.

To open the Ethernet connection, issue a command as follows:


# nmcli con up my - office

Connection successfully activated (D-Bus active path:
/org/freedesktop/NetworkManager/ActiveConnection/5)

Review the status of the devices and connections:


# nmcli device status

DEVICE   TYPE      STATE         CONNECTION
ens3     ethernet  connected     my-office
ens9     ethernet  disconnected  --
lo       loopback  unmanaged     --

To change the host name sent by a host to a DHCP server, modify the dhcp-hostname property
as follows:
# nmcli con modify my-office ipv4.dhcp-hostname host-name \
    ipv6.dhcp-hostname host-name

To change the IPv4 client ID sent by a host to a DHCP server, modify the dhcp-client-id
property as follows:
# nmcli con modify my-office ipv4.dhcp-client-id \
    client-ID-string

To configure a dynamic Ethernet connection using the interactive editor, issue commands as
follows:
$ nmcli con edit type ethernet con-name ens3

===| nmcli interactive connection editor |===

Adding a new '802-3-ethernet' connection

Type 'help' or '?' for available commands.
Type 'describe [<setting>.<prop>]' for detailed property description.

You may edit the following settings: connection, 802-3-ethernet
(ethernet), 802-1x, ipv4, ipv6, dcb

nmcli> describe ipv4.method

=== [method] ===

[NM property description]

IPv4 configuration method. If 'auto' is specified then the
appropriate automatic method (DHCP, PPP, etc) is used for the
interface and most other properties can be left unset. If
'link-local' is specified, then a link-local address in the 169.254/16
range will be assigned to the interface. If 'manual' is specified,
static IP addressing is used and at least one IP address must be given
in the 'addresses' property. If 'shared' is specified (indicating
that this connection will provide network access to other computers)
then the interface is assigned an address in the 10.42.x.1/24 range
and a DHCP and forwarding DNS server are started, and the interface is
NAT-ed to the current default network connection. 'disabled' means
IPv4 will not be used on this connection. This property must be set.

nmcli> set ipv4.method auto
nmcli> save

Saving the connection with 'autoconnect=yes'. That might result in an
immediate activation of the connection.

Do you still want to save? [yes] yes

Connection 'ens3' (090b61f7-540f-4dd6-bf1f-a905831fc287) successfully
saved.

nmcli> quit
$

The default action is to save the connection profile as persistent. If required, the profile can
be held in memory only, until the next restart, by means of the save temporary command.

9.5.2.2 Adding a Static Ethernet Connection

To add an Ethernet connection with static IPv4 configuration, a command in the following
format can be used:
# nmcli connection add type ethernet con-name \
    connection-name ifname \
    interface-name ip4 address gw4 address

IPv6 address and gateway information can be added using the ip6 and gw6 options. For
example, a command to create a static Ethernet connection with only IPv4 address and
gateway is as follows:
# nmcli con add type ethernet con-name test-lab \
    ifname ens9 ip4 10.10.10.10/24 \
    gw4 10.10.10.254

NetworkManager will set its internal parameter ipv4.method to manual and
connection.autoconnect to yes. NetworkManager will also write out settings to
/etc/sysconfig/network-scripts/ifcfg-test-lab, where the corresponding BOOTPROTO
will be set to none and ONBOOT will be set to yes.
To set two IPv4 DNS server addresses:
# nmcli con mod test-lab ipv4.dns "8.8.8.8 8.8.4.4"

Note that this will replace any previously set DNS servers. Alternatively, to add additional
DNS servers to any previously set, use the + prefix as follows:
# nmcli con mod test-lab +ipv4.dns "8.8.8.8 8.8.4.4"


To open the new Ethernet connection, issue a command as follows:


# nmcli con up test-lab ifname ens9

Connection successfully activated (D-Bus active path:
/org/freedesktop/NetworkManager/ActiveConnection/6)

Review the status of the devices and connections:


# nmcli device status

DEVICE   TYPE      STATE      CONNECTION
ens3     ethernet  connected  my-office
ens9     ethernet  connected  test-lab
lo       loopback  unmanaged  --

To view detailed information about the newly configured connection, issue a command as
follows:
# nmcli -p con show test-lab
===============================================================================
                    Connection profile details (test-lab)
===============================================================================
connection.id:                          test-lab
connection.uuid:                        05abfd5e-324e-4461-844e-8501ba704773
connection.interface-name:              ens9
connection.type:                        802-3-ethernet
connection.autoconnect:                 yes
connection.timestamp:                   1410428968
connection.read-only:                   no
connection.permissions:
connection.zone:                        --
connection.master:                      --
connection.slave-type:                  --
connection.secondaries:
connection.gateway-ping-timeout:        0
[output truncated]

The use of the -p, --pretty option adds a title banner and section breaks to the output.

9.6 Editing Network Configuration Files


9.6.1 Configuring a Network Interface Using ifcfg Files
Interface configuration files control the software interfaces for individual network devices.
As the system boots, it uses these files to determine what interfaces to bring up and how to
configure them. These files are usually named ifcfg-name, where the suffix name refers to the
name of the device that the configuration file controls. By convention, the ifcfg file’s suffix is
the same as the string given by the DEVICE directive in the configuration file itself.


9.6.1.1 Static Network Settings

For example, to configure an interface with static network settings using ifcfg files,
for an interface with the name eth0, create a file with the name ifcfg-eth0 in the
/etc/sysconfig/network-scripts/ directory, that contains:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
PREFIX=24
IPADDR=10.0.1.27

You do not need to specify the network or broadcast address as this is calculated automatically
by ipcalc.
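If the interface should also have its default gateway and name servers configured statically,
directives such as GATEWAY and DNS1 can be added to the same ifcfg file. A minimal sketch
(the addresses below are only illustrative):

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
PREFIX=24
IPADDR=10.0.1.27
GATEWAY=10.0.1.1
DNS1=10.0.1.2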

9.6.1.2 Dynamic Network Settings

For example, to configure an interface with dynamic network settings using ifcfg
files, for an interface with the name em1, create a file with the name ifcfg-em1 in the
/etc/sysconfig/network-scripts/ directory, that contains:
DEVICE=em1
BOOTPROTO=dhcp
ONBOOT=yes

To configure an interface to send a different host name to the DHCP server, add the following
line to the ifcfg file:
DHCP_HOSTNAME=hostname

To configure an interface to use particular DNS servers, add the following lines to the ifcfg
file:
PEERDNS=no
DNS1=ip-address
DNS2=ip-address

where ip-address is the address of a DNS server. This will cause the network service to
update /etc/resolv.conf with the specified DNS servers. Only one DNS server
address is necessary; the other is optional.

In order to apply the configuration, you need to enter the nmcli con reload command; note that an already active connection must then be reactivated before the new settings from its ifcfg file take effect.
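For example, after editing ifcfg-eth0 as root, the changes could be picked up and applied as
follows (eth0 here is only an illustrative connection name; the name reported by nmcli con
show may differ on your system):

# nmcli con reload
# nmcli con up eth0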

9.7 Validating Network Configuration


9.7.1 Check an IP Address
Use the ip addr show command (addr is an abbreviation of address) to display IP addresses
and property information. Run as root:


# ip addr show
OR
# ip a s

The output will be something like:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
   qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
   state UP qlen 1000
    link/ether 08:00:27:ab:67:21 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 80799sec preferred_lft 80799sec
    inet6 fe80::c4e9:aecc:e10d:6edb/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
   state DOWN qlen 1000
    link/ether 52:54:00:ee:41:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
   virbr0 state DOWN qlen 1000
    link/ether 52:54:00:ee:41:75 brd ff:ff:ff:ff:ff:ff

In the above output, we see four interfaces. The ones numbered 1, 3 and 4 are the loopback
and virtual interfaces. To us, the interesting one is numbered 2. This network interface name
is enp0s3 and the configured IP address for this interface is 10.0.2.15 with a 24-bit mask,
and the broadcast IP address is 10.0.2.255.
The following terms are quite indicative:

BROADCAST : device can send traffic to all hosts on the link.


MULTICAST : device can send and receive multicast packets.
UP : device is functioning, or in other words, network interface is enabled.
LOWER_UP : physical layer link flag (the layer below the network layer, where IP is generally
located). LOWER_UP indicates that an Ethernet cable was plugged in and that the device
is connected to the network. LOWER_UP differs from UP, which additionally requires the
network interface to be enabled.

Another important part of the output to note is state UP.

9.7.2 Checking the Link Status


Use the ip link show command to display the state of all network interfaces.


Run as root:
# ip link show
A sample output is:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
   mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
   state UP mode DEFAULT qlen 1000
    link/ether 08:00:27:ab:67:21 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
   state DOWN mode DEFAULT qlen 1000
    link/ether 52:54:00:ee:41:75 brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
   virbr0 state DOWN mode DEFAULT qlen 1000
    link/ether 52:54:00:ee:41:75 brd ff:ff:ff:ff:ff:ff

The output is very similar to that of ip addr show as detailed in Section 9.7.1.

9.7.3 Check Route Table


Use the ip route show command to list all of the route entries in the kernel.
Run as root:
# ip route show
OR
# ip route list
A sample output is:
default via 10.0.2.2 dev enp0s3 proto static metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

Each line is an entry in the Linux kernel routing table. Our
default route is set via the enp0s3 interface, i.e. all network packets that cannot be matched
against the other entries of the routing table are sent through the gateway defined in this entry;
in this case, 10.0.2.2 is our default gateway.
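A quick way to confirm basic connectivity to the default gateway found above is to send it a
few ICMP echo requests (substitute the gateway address shown on your own system):

# ping -c 3 10.0.2.2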

9.8 Setting Up Hostname and Name Resolution


A hostname is a label or nickname that is assigned to a computer connected to a network and
that is used to identify the machine in various forms of electronic communication within an
internal network. Hostnames are also important because they form part of a computer’s Fully
Qualified Domain Name (FQDN). Assigning a FQDN to a computer makes it reachable via
the public Domain Name System (DNS), i.e. the Internet.
There are three classes of hostname: static, pretty, and transient. The “static” host name is the
traditional hostname, which can be chosen by the user, and is stored in the /etc/hostname file.


Figure 9.2: The NetworkManager Text User Interface starting menu.

The “transient” hostname is a dynamic host name maintained by the kernel. It is initialized
to the static host name by default, whose value defaults to “localhost”. It can be changed
by DHCP or mDNS at runtime. The “pretty” hostname is a free-form UTF8 host name for
presentation to the user.

9.8.1 Configuring Host Names Using Text User Interface, nmtui


The text user interface tool nmtui can be used to configure a host name in a terminal window.
Issue the following command to start the tool:
# nmtui

The text user interface appears (Figure 9.2). Any invalid command prints a usage message.

To navigate, use the arrow keys or press Tab to step forwards and press Shift +Tab to step back
through the options. Press Enter to select an option. The Space bar toggles the status of a
check box.

9.8.2 Configuring Host Names Using hostnamectl


The hostnamectl tool is provided for administering the three separate classes of host names
in use on a given system.

9.8.2.1 View All the Host Names

To view all the current host names, enter the following command:
# hostnamectl status

The status option is implied by default if no option is given.

9.8.2.2 Set All the Host Names

To set all the host names on a system, enter the following command as root:
# hostnamectl set-hostname name


This will alter the pretty, static, and transient host names alike. The static and transient host
names will be simplified forms of the pretty host name. Spaces will be replaced with “-” and
special characters will be removed.

9.8.2.3 Set a Particular Host Name

To set a particular host name, enter the following command as root with the relevant option:
# hostnamectl set-hostname name [option...]

Where option is one or more of: --pretty, --static, and --transient.

If the --static or --transient options are used together with the --pretty option, the
static and transient host names will be simplified forms of the pretty host name. Spaces will
be replaced with “-” and special characters will be removed. If the --pretty option is not
given, no simplification takes place.

When setting a pretty host name, remember to use the appropriate quotation marks if the
host name contains spaces or a single quotation mark. For example:
# hostnamectl set-hostname "Stephen's notebook" --pretty

9.8.2.4 Clear a Particular Host Name

To clear a particular host name and allow it to revert to the default, enter the following
command as root with the relevant option:
# hostnamectl set-hostname "" [option...]

Where "" is a quoted empty string and where option is one or more of: --pretty, --static,
and --transient.

9.8.3 Configuring Host Names Using nmcli


The NetworkManager tool nmcli can be used to query and set the static host name in the
/etc/hostname file.

To query the static host name, issue the following command:


# nmcli general hostname

To set the static host name to my-server, issue the following command as root:
# nmcli general hostname my-server

Chapter 10

Securing Linux Systems


2 classes

Chapter Goals
1. Configure firewall settings using firewall-config,
firewall-cmd, or iptables.
2. Set enforcing and permissive modes for SELinux.
3. List and identify SELinux file and process context.
4. Restore default file contexts.
5. Use Boolean settings to modify system SELinux settings.
6. Diagnose and address routine SELinux policy violations.

10.1 Introduction to Computer Security


Due to the increased reliance on powerful, networked computers to help run businesses
and keep track of our personal information, entire industries have been formed around
the practice of network and computer security. Because most organizations are increasingly
dynamic in nature, with workers accessing critical company IT resources locally and
remotely, the need for secure computing environments has become more pronounced.

10.1.1 What is Computer Security?


Computer security, also known as cyber security or IT security, is the protection of computer
systems from the theft or damage to their hardware, software or information, as well as from
disruption or misdirection of the services they provide. In standard security terms, security
comprises the following three components, known as CIA: Confidentiality, Integrity, and
Availability.


• Confidentiality — Sensitive information must be available only to a set of pre-defined


individuals. Unauthorized transmission and usage of information should be restricted.
• Integrity — Information should not be altered in ways that render it incomplete or
incorrect. Unauthorized users should be restricted from the ability to modify or destroy
sensitive information.
• Availability — Information should be accessible to authorized users any time that it is
needed.

10.1.2 Common Exploits and Attacks


Listed below are details of some of the most common exploits and entry points used by intruders
to access organizational network resources, along with explanations
of how they are performed and how administrators can properly safeguard their network
against such attacks.

Null or Default Passwords Leaving administrative passwords blank or using a default


password set by the product vendor. This is most common in hardware such as routers and
firewalls, but some services that run on Linux can contain default administrator passwords
as well (though Red Hat Enterprise Linux 7 does not ship with them).

Default Shared Keys Secure services sometimes package default security keys for
development or evaluation testing purposes. If these keys are left unchanged and are
placed in a production environment on the Internet, all users with the same default keys
have access to that shared-key resource, and any sensitive information that it contains. Most
common in wireless access points and preconfigured secure server appliances.

IP Spoofing A remote machine acts as a node on your local network, finds vulnerabilities
with your servers, and installs a backdoor program or Trojan horse to gain control over
your network resources. Spoofing is quite difficult as it involves the attacker predicting
TCP/IP sequence numbers to coordinate a connection to target systems, but several tools are
available to assist crackers in exploiting such a vulnerability. The attack depends on the target system
running services (such as rsh, telnet, FTP and others) that use source-based authentication
techniques, which are not recommended when compared to PKI or other forms of encrypted
authentication used in ssh or SSL/TLS.

Eavesdropping Collecting data that passes between two active nodes on a network by
eavesdropping on the connection between the two nodes. This type of attack works mostly
with plain text transmission protocols such as Telnet, FTP, and HTTP transfers.

A remote attacker must have access to a compromised system on a LAN in order to perform
such an attack; usually the cracker has used an active attack (such as IP spoofing or man-in-
the-middle) to compromise a system on the LAN.


Preventative measures include services with cryptographic key exchange, one-time passwords,
or encrypted authentication to prevent password snooping; strong encryption during
transmission is also advised.

Service Vulnerabilities An attacker finds a flaw or loophole in a service run over the
Internet; through this vulnerability, the attacker compromises the entire system and any data
that it may hold, and could possibly compromise other systems on the network. HTTP-based
services such as CGI are vulnerable to remote command execution and even interactive shell
access. Even if the HTTP service runs as a non-privileged user such as “nobody”, information
such as configuration files and network maps can be read, or the attacker can start a denial of
service attack which drains system resources or renders it unavailable to other users.

Services sometimes can have vulnerabilities that go unnoticed during development and
testing; these vulnerabilities (such as buffer overflows, where attackers crash a service using
arbitrary values that fill the memory buffer of an application, giving the attacker an interactive
command prompt from which they may execute arbitrary commands) can give complete
administrative control to an attacker.

Application Vulnerabilities Attackers find faults in desktop and workstation applications


(such as email clients) and execute arbitrary code, implant Trojan horses for future compromise,
or crash systems. Further exploitation can occur if the compromised workstation has
administrative privileges on the rest of the network.

Denial of Service (DoS) Attacks An attacker or group of attackers coordinates an attack against an


organization’s network or server resources by sending unauthorized packets to the target
host (either server, router, or workstation). This forces the resource to become unavailable
to legitimate users. The most reported DoS case in the US occurred in 2000. Several highly-
trafficked commercial and government sites were rendered unavailable by a coordinated ping
flood attack using several compromised systems with high bandwidth connections acting as
zombies, or redirected broadcast nodes.

Source packets are usually forged (as well as rebroadcast), making investigation as to the true
source of the attack difficult.

Advances in ingress filtering (IETF rfc2267) using iptables and Network Intrusion Detection
Systems such as snort assist administrators in tracking down and preventing distributed
DoS attacks.

10.2 Mitigating Network Attacks Using Firewalls


In computing, a firewall is a network security system that monitors and controls the incoming
and outgoing network traffic based on predetermined security rules. A firewall typically
establishes a barrier between a trusted, secure internal network and another outside network,
such as the Internet, that is assumed not to be secure or trusted. Firewalls are often categorized
as either network firewalls or host-based firewalls. Network firewalls filter traffic between two
or more networks; they are either software appliances running on general purpose hardware,
or hardware-based firewall computer appliances. Host-based firewalls provide a layer of
software on one host that controls network traffic in and out of that single machine.

10.2.1 Linux Firewall Architecture


To understand the Linux firewall architecture, see Figure 10.1. The key components of the
architecture are described below.

Figure 10.1: The firewall stack.

10.2.1.1 Netfilter

Netfilter is a framework provided by the Linux kernel that allows various networking-related
operations to be implemented in the form of customized handlers. Netfilter offers various
functions and operations for packet filtering, network address translation, and port translation,
which provide the functionality required for directing packets through a network, as well as
for providing ability to prohibit packets from reaching sensitive locations within a computer
network.

Netfilter represents a set of hooks inside the Linux kernel, allowing specific kernel modules
to register callback functions with the kernel’s networking stack. Those functions, usually
applied to the traffic in the form of filtering and modification rules, are called for every packet
that traverses the respective hook within the networking stack.

10.2.1.2 iptables

The kernel modules named ip_tables and ip6_tables are some of the significant parts of the
Netfilter hook system. They provide a table-based system for defining firewall rules that
can filter or transform packets. The tables can be administered through the user-space tools
iptables and ip6tables. Notice that although both the kernel modules and userspace utilities
have similar names, each of them is a different entity with different functionality.

Each table is actually its own hook, and each table was introduced to serve a specific purpose.
As far as Netfilter is concerned, it runs a particular table in a specific order with respect to
other tables. Any table can call itself and it also can execute its own rules, which enables
possibilities for additional processing and iteration.

Rules are organized into chains, or in other words, "chains of rules". These chains are
named with predefined titles, including INPUT, OUTPUT and FORWARD. These chain
titles describe the origin in the Netfilter stack. Packet reception, for example, falls into
PREROUTING, while the INPUT chain represents locally delivered data, and forwarded traffic falls
into the FORWARD chain. Locally generated output passes through the OUTPUT chain, and
packets to be sent out are in the POSTROUTING chain. Netfilter modules not organized into
tables (see below) are capable of checking for the origin to select their mode of operation.

10.2.2 Introduction to firewalld


The firewalld daemon provides a dynamically managed firewall with support for network
“zones” to assign a level of trust to a network and its associated connections and interfaces. It
has support for IPv4 and IPv6 firewall settings. It supports Ethernet bridges and IP set and
has a separation of runtime and permanent configuration options. It also has an interface for
services or applications to add firewall rules directly.

The firewall service provided by firewalld is dynamic rather than static because changes to
the configuration can be made anytime and are immediately set live. There is no need to save
or apply the changes. No unintended disruption of existing network connections occurs as
no part of the firewall has to be reloaded.

To configure and communicate with the firewalld daemon, two tools are mainly provided:
a GUI-based client, firewall-config, and a command-line client, firewall-cmd. See
Figure 10.2 for the firewalld architecture.

Figure 10.2: The firewalld architecture.

10.2.3 Installing firewalld


In Red Hat Enterprise Linux 7, firewalld is installed by default. If required, to ensure that it
is, enter the following command as root:
# yum install firewalld


The graphical user interface configuration tool firewall-config is installed by default in some
versions of Red Hat Enterprise Linux 7. If required, enter the following command as root to
ensure firewall-config is installed:
# yum install firewall-config

10.2.3.1 Stopping firewalld

To stop firewalld, enter the following command as root:


# systemctl stop firewalld

To prevent firewalld from starting automatically at system start, enter the following
command as root:
# systemctl disable firewalld

10.2.3.2 Starting firewalld

To start firewalld, enter the following command as root:


# systemctl unmask firewalld
# systemctl start firewalld

To ensure firewalld starts automatically at system start, enter the following command as
root:
# systemctl enable firewalld

10.2.3.3 Checking If firewalld Is Running

To check if firewalld is running, enter the following command:


# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service;
           enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-10-11 09:15:58 CEST; 2
           days ago
     Docs: man:firewalld(1)
 Main PID: 721 (firewalld)
   CGroup: /system.slice/firewalld.service
           721 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Oct 11 09:15:57 localhost.localdomain systemd[1]: Starting
firewalld - dynami...
Oct 11 09:15:58 localhost.localdomain systemd[1]: Started
firewalld - dynamic...
Hint: Some lines were ellipsized, use -l to show in full.


In addition, check if firewall-cmd can connect to the daemon by entering the following
command:
# firewall-cmd --state
running

10.2.4 Understanding firewalld Concepts


Before we start configuring the firewall using firewalld, we need to understand the concepts of
‘Network Zones’, ‘Predefined Services’ and the ‘Direct Interface’. These are explained below.

10.2.4.1 Understanding Network Zones

firewalld can be used to separate networks into different zones based on the level of trust the
user has decided to place on the interfaces and traffic within that network. NetworkManager
(Linux network interface configurator) informs firewalld to which zone an interface belongs.

The zone settings in /etc/firewalld/ are a range of preset settings, which can be quickly
applied to a network interface. They are listed below with a brief explanation.

drop Any incoming network packets are dropped; there is no reply. Only outgoing network
connections are possible.
block Any incoming network connections are rejected with an icmp-host-prohibited message
for IPv4 and icmp6-adm-prohibited for IPv6. Only network connections initiated from
within the system are possible.
public For use in public areas. You do not trust the other computers on the network to not
harm your computer. Only selected incoming connections are accepted.
external For use on external networks with masquerading enabled, especially for routers.
You do not trust the other computers on the network to not harm your computer. Only
selected incoming connections are accepted.
dmz For computers in your demilitarized zone that are publicly-accessible with limited access
to your internal network. Only selected incoming connections are accepted.
work For use in work areas. You mostly trust the other computers on networks to not harm
your computer. Only selected incoming connections are accepted.
home For use in home areas. You mostly trust the other computers on networks to not harm
your computer. Only selected incoming connections are accepted.
internal For use on internal networks. You mostly trust the other computers on the networks
to not harm your computer. Only selected incoming connections are accepted.
trusted All network connections are accepted.

It is possible to designate one of these zones to be the default zone. When interface connections
are added to NetworkManager, they are assigned to the default zone. On installation, the
default zone in firewalld is set to be the public zone.
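The zone configuration can also be queried, and the default zone changed, with the
firewall-cmd client described in Section 10.2.6. For example (the enp0s3 interface name and
the home zone below are only illustrative):

# firewall-cmd --get-default-zone
public
# firewall-cmd --get-active-zones
public
  interfaces: enp0s3
# firewall-cmd --set-default-zone=home
success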


10.2.4.2 Understanding Predefined Services

A service can be a list of local ports, protocols, source ports, and destinations as well as a list
of firewall helper modules automatically loaded if a service is enabled. The use of predefined
services makes it easier for the user to enable and disable access to a service. Using the
predefined services or custom-defined services, as opposed to opening ports or ranges of
ports, may make administration easier. The services are specified by means of individual XML
configuration files, which are named in the following format: service-name.xml. Protocol
names are preferred over service or application names in firewalld.
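To see which predefined services are available on a system, the service names can be listed
with firewall-cmd; the shipped XML definitions live under /usr/lib/firewalld/services/,
while custom definitions belong in /etc/firewalld/services/:

# firewall-cmd --get-services
# ls /usr/lib/firewalld/services/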

10.2.4.3 Understanding the Direct Interface

firewalld has a direct interface, which enables directly passing rules to iptables, ip6tables
and ebtables. It is primarily intended for use by applications. It is not recommended, and
it is dangerous, to use the direct interface if you are not very familiar with iptables, as you
could inadvertently cause a breach in the firewall. The direct interface mode is intended for
services or applications to add specific firewall rules during runtime. If the rules are not
made permanent, then they need to be applied again every time firewalld is started, restarted, or
reloaded. With the direct interface, it is possible to add chains, rules, and tracked and untracked
passthrough rules. You can also use direct rules in zone-specific chains.
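For example, any direct rules currently passed to iptables through this interface can be listed
with:

# firewall-cmd --direct --get-all-rules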

10.2.5 Configuring firewalld Using The Graphical User Interface


Starting the Graphical Firewall Configuration Tool To start the graphical firewall-config
tool, press the Super key to enter the Activities Overview, type firewall, and press Enter.
The firewall-config tool appears. You will be prompted for an administrator password.
To start the graphical firewall configuration tool using the command-line, enter the following
command:
# firewall - config

The Firewall Configuration window opens. Note that this command can be run as a normal
user, but you will be prompted occasionally for an administrator password.
Look for the “Connection to firewalld established” message in the lower-left corner. This
indicates that the firewall-config tool is connected to firewalld. Note that the ICMP Types,
IPSets, Direct Configuration, and Lockdown Whitelist tabs are only visible after being
selected from the View drop-down menu. The Active Bindings sidebar on the left is visible
by default.

Changing the Firewall Settings To immediately change the current firewall settings, ensure
the current view is set to Runtime. Alternatively, to edit the settings to be applied at the next
system start or firewall reload, select Permanent from the drop-down list.
Note that, when making changes to the firewall settings in Runtime mode, your selection
takes immediate effect when you set or clear the check box associated with the service. You
should keep this in mind when working on a system that may be in use by other users.


Figure 10.3: The firewall configuration tool.

When making changes to the firewall settings in Permanent mode, your selection will only
take effect when you reload the firewall or the system restarts. Click the Options menu and
select Reload Firewall.

You can select zones in the left-hand side column. You will notice the zones have some
services enabled; you may need to resize the window or scroll to see the full list. You can
customize the settings by selecting and deselecting a service.

Adding an Interface to a Zone To add a connection (the interfaces used by a connection)


to a zone, start firewall-config. Click on the zone in the zone list on the left and select the
Interfaces tab on the right. Click the Add button to open a new dialog for adding the interface.

To change the zone setting for an interface, double-click the proper connection or interface in
the Active Bindings sidebar. Select the new firewall zone from the drop-down menu in the
following dialog and confirm by clicking OK.

Alternatively, to add or reassign an interface of a connection to a zone, start firewall-config,


select Options from the menu bar, and select Change Zones of Connections from the drop-
down menu. The Connections, Interface, and Source list displays. Select the connection to
be reassigned. The Select Zone for Connection window appears. Select the new firewall
zone from the drop-down menu and click OK.

For connections handled by NetworkManager, the request to change the zone is forwarded
to NetworkManager. The zone interface setting will not be saved in firewalld.

The connections without specific zone settings are automatically bound to the default zone. A
change of the default zone consequently applies to the zone bindings of all such connections.

Setting the Default Zone To set the default zone that new interfaces will be assigned to,
start firewall-config, select Options from the menu bar, and select Change Default Zone
from the drop-down menu. The Default Zone window appears. Select the zone from the list
that you want to be used as the default zone and click OK.

Configuring Services To enable or disable a predefined or custom service, start the firewall-
config tool and select the network zone whose services are to be configured. Select the
Services tab and select the check box for each type of service you want to trust. Clear the
check box to block a service.

To edit a service, start the firewall-config tool and select Permanent mode from the drop-
down selection menu labeled Configuration. Additional icons and menu buttons appear at
the bottom of the Services window. Select the service you want to configure.

The Ports, Protocols, and Source Port tabs enable adding, changing, and removing of ports,
protocols, and source ports for the selected service. The Modules tab is for configuring Netfilter
helper modules. The Destination tab enables limiting traffic to a particular destination address
and Internet Protocol (IPv4 or IPv6).

Note that it is not possible to alter service settings in Runtime mode.

Opening Ports in the Firewall To permit traffic through the firewall to a certain port, start
the firewall-config tool and select the network zone whose settings you want to change.
Select the Ports tab and click the Add button on the right-hand side. The Port and Protocol
window opens. Enter the port number or range of ports to permit. Select tcp or udp from the
drop-down list.

Opening Protocols in the Firewall To permit traffic through the firewall using a certain
protocol, start the firewall-config tool and select the network zone whose settings you want
to change. Select the Protocols tab and click the Add button on the right-hand side. The
Protocol window opens. Either select a protocol from the drop-down list or select the Other
Protocol check box and enter the protocol in the field.

Opening Source Ports in the Firewall To permit traffic through the firewall from a certain
port, start the firewall-config tool and select the network zone whose settings you want to
change. Select the Source Port tab and click the Add button on the right-hand side. The
Source Port window opens. Enter the port number or range of ports to permit. Select tcp or
udp from the drop-down list.

Enabling IPv4 Address Masquerading To translate IPv4 addresses to a single external


address, start the firewall-config tool and select the network zone whose addresses are to be
translated. Select the Masquerading tab and select the check box to enable the translation of
IPv4 addresses to a single address.

Note that to enable masquerading for IPv6, use a rich rule. Note that, discussion of rich rule
is out of the scope of this course.

Configuring Port Forwarding To forward inbound network traffic, or “packets”, for a


specific port to an internal address or alternative port, first enable IP address masquerading,
then select the Port Forwarding tab.

Select the protocol of the incoming traffic and the port or range of ports on the upper section
of the window. The lower section is for setting details about the destination.

To forward traffic to a local port (a port on the same system), select the Local forwarding
check box. Enter the local port or range of ports for the traffic to be sent to.

To forward traffic to another IPv4 address, select the Forward to another port check box.
Enter the destination IP address and port or port range. The default is to send to the same
port if the port field is left empty. Click OK to apply the changes.

Configuring the ICMP Filter To enable or disable an ICMP filter, start the firewall-config
tool and select the network zone whose messages are to be filtered. Select the ICMP Filter
tab and select the check box for each type of ICMP message you want to filter. Clear the check
box to disable a filter. This setting is per direction and the default allows everything.

To edit an ICMP type, start the firewall-config tool and select Permanent mode from the
drop-down selection menu labeled Configuration. Additional icons appear at the bottom of
the Services window. Select Yes in the following dialog to enable masquerading and to make
forwarding to another machine work.

To enable inverting the ICMP Filter, click the Invert Filter check box on the right. Only
marked ICMP types are now accepted; all others are rejected. In a zone using the DROP target,
they are dropped.

Configuring Sources To add a source to a zone, start firewall-config. Click on a zone in the
zone list on the left and select the Sources tab on the right. Clicking the Add button opens a
new dialog for adding the source. A source can either be an IP address or range, a
MAC address or an ipset. Select the type in the drop-down menu on the left and click the
button on the right to select or enter the setting.


10.2.5.1 Configuring IP Sets Using firewall-config

To configure IP sets, start the firewall-config tool and select the IPSets tab. Select an IP set
from the list on the left to change the runtime settings of an IP set that has been created with
firewalld already.
To add new IP sets or to change base IP set settings, switch to Permanent mode. Additional
icons and menu buttons appear at the bottom of the IPSets window. Select the IP set you
want to configure. The entries tab on the right shows the entries that are part of the IP set.
There are no entries listed for IP sets that use a timeout, as the entries are kept and handled in
kernel space.
With the Add button, you can add single entries or entries from a file. With Remove you can remove the selected entry, all entries, or entries from a file. The file should contain one entry per line; lines starting with a hash or semicolon, as well as empty lines, are ignored.
After clicking on the + button to add a new IP set, a new window appears to configure the base IP set settings. There are three settings that need to be configured for an IP set: Name, Type, and Family. Name can contain all alphanumeric characters and additionally ‘-’, ‘_’, ‘:’, and ‘.’. The maximum name length is 32 characters. Type can be: hash:ip, hash:net, and hash:mac. Bitmap types are not supported by firewalld as they can only be used with IPv4. Combined types are not supported either.
To have a simple and fast IP address or network set, use the hash:net type. The hash:ip type
expands all ranges and network segments internally and reaches the hash limit soon.
For these types, it is also necessary to define Family. This can be either inet for IPv4 or inet6
for IPv6.
To store MAC addresses in an IP set use hash:mac - Family is not selectable in this case. To
define a lifetime of the added entries for use with external services like fail2ban, use the
Timeout setting. Note that firewalld is not able to show the temporarily stored entries with
a timeout. Use the ipset command for such entries.
To define the initial hash size for an IP set, use the Hashsize setting. Limit the maximum
number of elements that can be stored in an IP set by using the Maxelem field.
You can use the created IP set as a source in a zone, in a rich rule, and also in a direct rule.
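As a rough command-line sketch of the same idea (the set name blocklist and the network range are arbitrary examples), an IP set can also be created, populated, and attached to a zone with firewall-cmd:

# firewall-cmd --permanent --new-ipset=blocklist --type=hash:net
# firewall-cmd --permanent --ipset=blocklist --add-entry=192.0.2.0/24
# firewall-cmd --permanent --zone=drop --add-source=ipset:blocklist
# firewall-cmd --reload

The first three commands only change the permanent configuration, so the final reload is needed for them to take effect.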

10.2.6 Configuring the Firewall Using the firewall-cmd Command-Line Tool


The firewall-cmd command-line tool is part of the firewalld application that is installed
by default. You can verify that it is installed by checking the version or displaying the help
output. Enter the following command to check the version:
# firewall-cmd --version

Enter the following command to view the help output:


# firewall-cmd --help


We list a selection of commands below; for a full list see the firewall-cmd(1) man page.

To make a command permanent or persistent, add the --permanent option to all commands apart from the --direct commands (which are by their nature temporary). Note that this not only means the change will be permanent, but also that the change will only take effect after a firewalld reload, service restart, or system reboot. Settings made with firewall-cmd without the --permanent option take effect immediately but are only valid until the next firewall reload, system boot, or firewalld service restart. Reloading firewalld does not in itself break connections, but be aware that you are discarding temporary changes by doing so.

To make a command both persistent and take effect immediately, enter the command twice: once with the --permanent option and once without. This is usually preferable to a reload, because a firewalld reload takes more time than simply repeating a command; it has to re-read all configuration files and recreate the whole firewall configuration. While reloading, the policy for built-in chains is set to DROP for security reasons and is then reset to ACCEPT at the end, so service disruption is possible during the reload.
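For example, to open TCP port 443 in the public zone both immediately and across reloads, the same command can be entered with and without --permanent (a simple sketch; the port and zone are arbitrary):

# firewall-cmd --zone=public --add-port=443/tcp
# firewall-cmd --permanent --zone=public --add-port=443/tcp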

10.2.6.1 Viewing the Firewall Settings Using the Command-Line Interface (CLI)

Viewing the Firewall State To get a text display of the state of firewalld, enter the
following command:
# firewall-cmd --state

Viewing Zones To view the list of active zones with a list of the interfaces currently assigned
to them, enter the following command:
# firewall-cmd --get-active-zones
public
  interfaces: em1

To find out the zone that an interface, for example, em1, is currently assigned to, enter the
following command:
# firewall-cmd --get-zone-of-interface=em1
public

To find out all the interfaces assigned to a zone, for example, the public zone, enter the
following command as root:
# firewall-cmd --zone=public --list-interfaces
em1 wlan0

This information is obtained from NetworkManager and only shows interfaces, not
connections.

To find out all the settings of a zone, for example, the public zone, enter the following
command as root:


# firewall-cmd --zone=public --list-all
public
  interfaces:
  services: mdns dhcpv6-client ssh
  ports:
  forward-ports:
  icmp-blocks: source-quench

To view the zone information, use the --info-zone option. To get the verbose output with
the description and short description, use the additional -v option.
# firewall-cmd --info-zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: em1
  sources:
  services: dhcpv6-client mdns ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Viewing Services To view the list of services currently loaded, enter the following command
as root:
# firewall-cmd --get-services
cluster-suite pop3s bacula-client smtp ipp radius bacula ftp mdns samba
dhcpv6-client dns openvpn imaps samba-client http https ntp vnc-server
telnet libvirt ssh ipsec ipp-client amanda-client tftp-client nfs tftp
libvirt-tls

This lists the names of the predefined services loaded from /usr/lib/firewalld/services/
as well as any custom services that are currently loaded. Note that the configuration files
themselves are named service-name.xml.

To list the custom services that have been created but not loaded, use the following command
as root:
# firewall-cmd --permanent --get-services


This lists all services, including custom services configured in /etc/firewalld/services/, even if they are not yet loaded.
To show the settings of the ftp service, use the following command as root:
# firewall-cmd --info-service=ftp
ftp
  ports: 21/tcp
  protocols:
  source-ports:
  modules: nf_conntrack_ftp
  destination:

10.2.6.2 Changing the Firewall Settings Using the Command-Line Interface (CLI)

Dropping All Packets (Panic Mode) To start dropping all incoming and outgoing packets,
enter the following command as root:
# firewall-cmd --panic-on

All incoming and outgoing packets will be dropped. Active connections will be terminated
after a period of inactivity; the time taken depends on the individual session timeout values.
To start passing incoming and outgoing packets again, enter the following command as root:
# firewall-cmd --panic-off

After disabling panic mode, established connections might work again if panic mode was
enabled for a short period of time.
To find out if panic mode is enabled or disabled, enter the following command:
# firewall-cmd --query-panic

The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise.
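Because the exit status is set, the result can be used directly in a script; a minimal sketch:

# firewall-cmd --query-panic && echo "Panic mode is enabled"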

Reloading the Firewall To reload the firewall without interrupting user connections
(without losing state information), enter the following command:
# firewall-cmd --reload

A firewall reload involves reloading all configuration files and recreating the whole firewall
configuration. While reloading, the policy for built-in chains is set to DROP for security
reasons and is then reset to ACCEPT at the end. Service disruption is therefore possible
during the reload.
To reload the firewall and interrupt user connections, discarding state information, enter the
following command as root:
# firewall-cmd --complete-reload

This command should normally only be used in case of severe firewall problems. For example,
use this command if there are state information problems and no connection can be established
but the firewall rules are correct.


Add an Interface to a Zone To add an interface to a zone (for example, to add em1 to the
public zone), enter the following command as root:
# firewall-cmd --zone=public --add-interface=em1

Note that all options that change the zone binding for interfaces under the control of NetworkManager are forwarded to NetworkManager; if the request to NetworkManager succeeds, these changes are not applied to the firewalld configuration itself. This is also the case with the --permanent option.
For interfaces that are not under control of NetworkManager, the change applies to the
firewalld configuration. If there is an ifcfg file that uses this interface, then the ZONE=
setting in this ifcfg file is adapted to make sure that the configuration in firewalld and the
ifcfg file is consistent. If there is more than one ifcfg file using this interface then the first
one is used.
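For illustration only, the relevant lines in a hypothetical ifcfg file such as /etc/sysconfig/network-scripts/ifcfg-em1 might look like this:

DEVICE=em1
ZONE=public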

Setting the Default Zone To set the default zone (for example, to public), enter the
following command as root:
# firewall-cmd --set-default-zone=public

This change will take effect immediately; in this case, it is not necessary to reload the firewall.

Opening Ports in the Firewall To list all open ports for a zone (for example, dmz), enter the
following command as root:
# firewall-cmd --zone=dmz --list-ports

Note that this will not show ports opened as a result of the --add-service option.
To add a port to a zone (for example, to allow TCP traffic to port 8080 in the dmz zone), enter
the following command as root:
# firewall-cmd --zone=dmz --add-port=8080/tcp

To make this setting persistent, repeat the command adding the --permanent option.
To add a range of ports to a zone (for example, to allow the ports from 5060 to 5061 in the public zone), enter the following command as root:
# firewall-cmd --zone=public --add-port=5060-5061/udp

Opening Protocols To list all open protocols for a zone (dmz, for example), enter the following command as root:
# firewall-cmd --zone=dmz --list-protocols

Note that this command does not show protocols opened as a result of the firewall-cmd --add-service option.
To add a protocol to a zone (for example, to allow ESP traffic to the dmz zone), enter the
following command as root:
# firewall-cmd --zone=dmz --add-protocol=esp


Opening Source Ports To list all open source ports for a zone (for example, the dmz zone),
enter the following command as root:
# firewall-cmd --zone=dmz --list-source-ports

Note that this command does not show source ports opened as a result of the firewall-cmd --add-service option.

To add a source port to a zone (for example, to allow TCP traffic from port 8080 to the dmz
zone), use the following command as root:
# firewall-cmd --zone=dmz --add-source-port=8080/tcp

To add a range of source ports to a zone (for example, to allow the ports from 5060 to 5061 to
the public zone), enter the following command as root:
# firewall-cmd --zone=public --add-source-port=5060-5061/udp

Adding a Service to a Zone To add a service to a zone (for example, to allow SMTP to the
work zone), enter the following command as root:
# firewall-cmd --zone=work --add-service=smtp

Removing a Service from a Zone To remove a service from a zone (for example, to remove
SMTP from the work zone), enter the following command as root:
# firewall-cmd --zone=work --remove-service=smtp

To make this change persistent, repeat the command adding the --permanent option. This change will not break established connections. If your intention is to break them, you can use the --complete-reload option, but this will break all established connections, not only those for the service you have removed.

Configuring IP Address Masquerading To check if IP masquerading is enabled (for example, for the external zone), enter the following command as root:
# firewall-cmd --zone=external --query-masquerade

The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise.
If zone is omitted, the default zone will be used.

To enable IP masquerading, enter the following command as root:


# firewall-cmd --zone=external --add-masquerade

To disable IP masquerading, enter the following command as root:


# firewall-cmd --zone=external --remove-masquerade

Configuring Port Forwarding To forward inbound network packets from one port to an
alternative port or address, first enable IP address masquerading for a zone (for example,
external), by entering the following command as root:


# firewall-cmd --zone=external --add-masquerade

To forward packets to a local port (a port on the same system), enter the following command
as root:
# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=3753

In this example, the packets intended for port 22 are now forwarded to port 3753. The original destination port is specified with the port option; this can be a port or port range, together with a protocol. The protocol, if specified, must be either tcp or udp. The new local port, that is, the port or range of ports to which the traffic is being forwarded, is specified with the toport option. To make this setting persistent, repeat the commands adding the --permanent option.
To forward packets to another IPv4 address, usually an internal address, without changing
the destination port, enter the following command as root:
# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toaddr=192.0.2.55

In this example, the packets intended for port 22 are now forwarded to the same port at the address given with the toaddr option. The original destination port is specified with the port option; this can be a port or port range, together with a protocol. The protocol, if specified, must be either tcp or udp. Because no toport option is given, the destination port is left unchanged.
To forward packets to another port at another IPv4 address, usually an internal address, enter
the following command as root:
# firewall-cmd --zone=external \
    --add-forward-port=port=22:proto=tcp:toport=2055:toaddr=192.0.2.55

In this example, the packets intended for port 22 are now forwarded to port 2055 at the address given with the toaddr option. The original destination port is specified with the port option; this can be a port or port range, together with a protocol. The protocol, if specified, must be either tcp or udp. The new destination port, that is, the port or range of ports to which the traffic is being forwarded, is specified with the toport option.
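To review the forwarding rules currently configured for a zone, the --list-forward-ports option can be used; for example (a sketch):

# firewall-cmd --zone=external --list-forward-ports
port=22:proto=tcp:toport=2055:toaddr=192.0.2.55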

10.2.6.3 Using the Direct Interface

It is possible to add and remove chains during runtime by using the --direct option with
the firewall-cmd tool. It is dangerous to use the direct interface if you are not very familiar
with iptables as you could inadvertently cause a breach in the firewall. The direct interface
mode is intended for services or applications to add specific firewall rules during runtime.

Adding a Rule Using the Direct Interface To add a rule to the “IN_public_allow” chain,
enter the following command as root:


# firewall-cmd --direct --add-rule ipv4 filter IN_public_allow 0 \
    -m tcp -p tcp --dport 666 -j ACCEPT

Removing a Rule Using the Direct Interface To remove a rule from the “IN_public_allow”
chain, enter the following command as root:
# firewall-cmd --direct --remove-rule ipv4 filter IN_public_allow 0 \
    -m tcp -p tcp --dport 666 -j ACCEPT

Listing Rules Using the Direct Interface To list the rules in the “IN_public_allow” chain,
enter the following command as root:
# firewall-cmd --direct --get-rules ipv4 filter IN_public_allow

Note that this command (the --get-rules option) only lists rules previously added using
the --add-rule option. It does not list existing iptables rules added by other means.
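If you need to see the chain as the kernel actually applies it, including rules that firewalld generated itself, you can inspect it with iptables directly (a sketch, assuming the iptables backend used by Red Hat Enterprise Linux 7):

# iptables -S IN_public_allow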

10.2.6.4 Firewall Lockdown

Local applications or services are able to change the firewall configuration if they are running
as root. With this feature, the administrator can lock the firewall configuration so that either
no applications or only applications that are added to the lockdown whitelist are able to
request firewall changes. The lockdown settings default to disabled. If enabled, the user
can be sure that there are no unwanted configuration changes made to the firewall by local
applications or services.

Configuring Lockdown with the Command-Line Client To query whether lockdown is enabled, use the following command as root:
# firewall-cmd --query-lockdown

The command prints yes with exit status 0 if lockdown is enabled. It prints no with exit
status 1 otherwise.

To enable lockdown, enter the following command as root:


# firewall-cmd --lockdown-on

To disable lockdown, use the following command as root:


# firewall-cmd --lockdown-off

Testing Firewall Lockdown To test whether lockdown is working, first turn lockdown on.
Then try to enable the imaps service in the default zone using the following command as an
administrative user (a user in the wheel group; usually the first user on the system). You will
be prompted for the user password:


# firewall-cmd --add-service=imaps
Error: ACCESS_DENIED: lockdown is enabled

To enable the use of firewall-cmd, enter the following command as root:


# firewall-cmd --add-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/firewall-cmd*'

Reload the firewall as root:


# firewall-cmd --reload

Try to enable the imaps service again in the default zone by entering the following command
as an administrative user. You will be prompted for the user password:
# firewall-cmd --add-service=imaps

This time the command succeeds.
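To review which commands are currently on the lockdown whitelist, the following can be used as root (a sketch):

# firewall-cmd --list-lockdown-whitelist-commands
/usr/bin/python -Es /usr/bin/firewall-cmd*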

10.2.6.5 Configuring Logging for Denied Packets

With the LogDenied option in firewalld, it is possible to add a simple logging mechanism for denied packets, that is, packets that are rejected or dropped. To change the logging setting, edit the /etc/firewalld/firewalld.conf file or use the command-line or GUI configuration tool.
If LogDenied is enabled, logging rules are added right before the reject and drop rules in the
INPUT, FORWARD and OUTPUT chains for the default rules and also the final reject and drop
rules in zones. The possible values for this setting are: all, unicast, broadcast, multicast,
and off. The default setting is off. With the unicast, broadcast, and multicast setting, the
pkttype match is used to match the link-layer packet type. With all, all packets are logged.
To list the actual LogDenied setting with firewall-cmd, use the following command as root:
# firewall-cmd --get-log-denied
off

To change the LogDenied setting, use the following command as root:


# firewall-cmd --set-log-denied=all
success

To change the LogDenied setting with the firewalld GUI configuration tool, start firewall-config, click the Options menu and select the Change Log Denied menu item. The LogDenied window appears. Select the new LogDenied setting from the drop-down menu and click OK.
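The resulting log entries are written by the kernel; one quick way to inspect them is to search the kernel ring buffer or the journal (a sketch; the exact log prefix depends on which reject or drop rule matched):

# dmesg | grep -i reject
# journalctl -k | grep -i drop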

10.2.7 Using the iptables Service


To use the iptables and ip6tables services instead of firewalld, first disable firewalld
by running the following command as root:
# systemctl disable firewalld
# systemctl stop firewalld


Then install the iptables-services package by entering the following command as root:
# yum install iptables-services

The iptables-services package contains the iptables service and the ip6tables service.

Then, to start the iptables and ip6tables services, enter the following commands as root:
# systemctl start iptables
# systemctl start ip6tables

To enable the services to start on every system start, enter the following commands:
# systemctl enable iptables
# systemctl enable ip6tables
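With the iptables-services package, the rule set applied at service start is read from /etc/sysconfig/iptables and /etc/sysconfig/ip6tables. A common way to preserve rules added manually with the iptables command is to save the running rules back to those files, for example (a sketch):

# iptables-save > /etc/sysconfig/iptables
# ip6tables-save > /etc/sysconfig/ip6tables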

10.2.7.1 iptables and IP Sets

The ipset utility is used to administer IP sets in the Linux kernel. An IP set is a framework for
storing IP addresses, port numbers, IP and MAC address pairs, or IP address and port number
pairs. The sets are indexed in such a way that very fast matching can be made against a set
even when the sets are very large. IP sets enable simpler and more manageable configurations
as well as providing performance advantages when using iptables. The iptables matches
and targets referring to sets create references which protect the given sets in the kernel. A set
cannot be destroyed while there is a single reference pointing to it.

The use of ipset enables iptables commands, such as those below, to be replaced by a set:
# iptables -A INPUT -s 10.0.0.0/8 -j DROP
# iptables -A INPUT -s 172.16.0.0/12 -j DROP
# iptables -A INPUT -s 192.168.0.0/16 -j DROP

The set is created as follows:


# ipset create my-block-set hash:net
# ipset add my-block-set 10.0.0.0/8
# ipset add my-block-set 172.16.0.0/12
# ipset add my-block-set 192.168.0.0/16

The set is then referenced in an iptables command as follows:


# iptables -A INPUT -m set --set my-block-set src -j DROP

If the set is used more than once a saving in configuration time is made. If the set contains
many entries a saving in processing time is made.

Installing ipset To install the ipset utility, enter the following command as root:
# yum install ipset


ipset Commands The format of the ipset command is as follows:

ipset [options] command [command-options]

Where command is one of:


create | add | del | test | destroy | list | save | restore |
flush | rename | swap | help | version | -

Example for Creating an IP Set

To create an IP set consisting of a source IP address, a port, and destination IP address, run a
command as follows:
# ipset create my-set hash:ip,port,ip

Once the set is created, entries can be added as follows:


# ipset add my-set 192.168.1.2,80,192.168.2.2
# ipset add my-set 192.168.1.2,443,192.168.2.2

List an IP Set

To list the contents of a specific IP Set, my-set, run a command as follows:


# ipset list my-set
Name: my-set
Type: hash:ip,port,ip
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 8360
References: 0
Members:
192.168.1.2,tcp:80,192.168.2.2
192.168.1.2,tcp:443,192.168.2.2

Omit the set name to list all sets.
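Membership of an entry can be checked with the test subcommand, and a set that is no longer referenced by any rule can be removed with destroy; for example, using the set created above:

# ipset test my-set 192.168.1.2,80,192.168.2.2
# ipset destroy my-set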

10.3 Security Enhanced Linux: SELINUX


Since everything in Linux is treated as a file, associating permissions with files has served as the primary security mechanism in Linux. This is known as discretionary access control (DAC), since the permissions are left entirely to the discretion of users. Relying on DAC mechanisms
alone is fundamentally inadequate for strong system security. DAC access decisions are only
based on user identity and ownership, ignoring other security-relevant information such as
the role of the user, the function and trustworthiness of the program, and the sensitivity and
integrity of the data. Each user typically has complete discretion over their files, making it
difficult to enforce a system-wide security policy. For example, on Linux operating systems,
users could make their home directories world-readable, giving users and processes access
to potentially sensitive information, with no further protection over this unwanted action.
Furthermore, every program run by a user inherits all of the permissions granted to the user
and is free to change access to the user’s files, so minimal protection is provided against
malicious software. Many system services and privileged programs run with coarse-grained
privileges that far exceed their requirements, so that a flaw in any one of these programs
could be exploited to obtain further system access.

Security-Enhanced Linux (SELinux) is a solution to this problem. SELinux is an implementation of a mandatory access control mechanism in the Linux kernel, checking for allowed operations after standard discretionary access controls are checked. SELinux can enforce rules on files and processes in a Linux system, and on their actions, based on defined policies.

Security-Enhanced Linux (SELinux) adds Mandatory Access Control (MAC) to the Linux
kernel, and is enabled by default in Red Hat Enterprise Linux. A general purpose MAC
architecture needs the ability to enforce an administratively-set security policy over all
processes and files in the system, basing decisions on labels containing a variety of security-
relevant information. When properly implemented, it enables a system to adequately defend
itself and offers critical support for application security by protecting against the tampering
with, and bypassing of, secured applications. MAC provides strong separation of applications
that permits the safe execution of untrustworthy applications. Its ability to limit the privileges
associated with executing processes limits the scope of potential damage that can result
from the exploitation of vulnerabilities in applications and system services. MAC enables
information to be protected from legitimate users with limited authorization as well as from
authorized users who have unwittingly executed malicious applications. Note that, when using SELinux, files, including directories and devices, are referred to as objects. Processes, such as a user running a command or the Mozilla Firefox application, are referred to as subjects.

10.3.1 Benefits of running SELinux

• All processes and files are labeled with a type. A type defines a domain for processes,
and a type for files. Processes are separated from each other by running in their own
domains, and SELinux policy rules define how processes interact with files, as well as
how processes interact with each other. Access is only allowed if an SELinux policy
rule exists that specifically allows it.
• Fine-grained access control. Stepping beyond traditional UNIX permissions that are
controlled at user discretion and based on Linux user and group IDs, SELinux access
decisions are based on all available information, such as an SELinux user, role, type,
and, optionally, a level.
• SELinux policy is administratively-defined, enforced system-wide, and is not set at user
discretion.
• Reduced vulnerability to privilege escalation attacks. Processes run in domains, and are
therefore separated from each other. SELinux policy rules define how processes access
files and other processes. If a process is compromised, the attacker only has access to
the normal functions of that process, and to files the process has been configured to
have access to. For example, if the Apache HTTP Server is compromised, an attacker
cannot use that process to read files in user home directories, unless a specific SELinux
policy rule was added or configured to allow such access.
• SELinux can be used to enforce data confidentiality and integrity, as well as protecting
processes from untrusted inputs.

However, SELinux is not:

• antivirus software,
• a replacement for passwords, firewalls, or other security systems,
• an all-in-one security solution.

SELinux is designed to enhance existing security solutions, not replace them. Even when
running SELinux, it is important to continue to follow good security practices, such as keeping
software up-to-date, using hard-to-guess passwords, firewalls, and so on.

10.3.2 SELinux Architecture


SELinux is a Linux security module that is built into the Linux kernel. SELinux is driven by
loadable policy rules. When security-relevant access is taking place, such as when a process
attempts to open a file, the operation is intercepted in the kernel by SELinux. If an SELinux
policy rule allows the operation, it continues, otherwise, the operation is blocked and the
process receives an error. Remember that SELinux policy rules have no effect if DAC rules
deny access first.

10.3.3 SELinux States and Modes


SELinux can be either in the enabled or disabled state. When disabled, only DAC rules are
used. When enabled, SELinux can run in one of the following modes:

• Enforcing: SELinux policy is enforced. SELinux denies access based on SELinux policy
rules.
• Permissive: SELinux policy is not enforced. SELinux does not deny access, but denials
are logged for actions that would have been denied if running in enforcing mode.

Use the setenforce utility to change between enforcing and permissive mode. Changes
made with setenforce do not persist across reboots. To change to enforcing mode, as the
Linux root user, run the setenforce 1 command. To change to permissive mode, run the
setenforce 0 command. Use the getenforce utility to view the current SELinux mode:
# getenforce
Enforcing

# setenforce 0
# getenforce
Permissive


# setenforce 1
# getenforce
Enforcing

10.3.4 SELinux Contexts


Processes and files are labeled with an SELinux context that contains additional information,
such as an SELinux user, role, type, and, optionally, a level. When running SELinux, all of this
information is used to make access control decisions. In Red Hat Enterprise Linux, SELinux
provides a combination of Role-Based Access Control (RBAC), Type Enforcement (TE), and,
optionally, Multi-Level Security (MLS).

The following is an example showing SELinux context. SELinux contexts are used on
processes, Linux users, and files, on Linux operating systems that run SELinux. Use the
following command to view the SELinux context of files and directories:
$ ls -Z file1
-rwxrw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1

SELinux contexts follow the SELinux user:role:type:level syntax. The fields are as follows:

SELinux user The SELinux user identity is an identity known to the policy that is authorized
for a specific set of roles, and for a specific MLS/MCS range. Each Linux user is
mapped to an SELinux user using SELinux policy. This allows Linux users to inherit
the restrictions placed on SELinux users. The mapped SELinux user identity is used
in the SELinux context for processes in that session, in order to define what roles and
levels they can enter.
role Part of SELinux is the Role-Based Access Control (RBAC) security model. The role is
an attribute of RBAC. SELinux users are authorized for roles, and roles are authorized
for domains. The role serves as an intermediary between domains and SELinux users.
The roles that can be entered determine which domains can be entered; ultimately, this
controls which object types can be accessed. This helps reduce vulnerability to privilege
escalation attacks.
type The type is an attribute of Type Enforcement. The type defines a domain for processes,
and a type for files. SELinux policy rules define how types can access each other,
whether it be a domain accessing a type, or a domain accessing another domain. Access
is only allowed if a specific SELinux policy rule exists that allows it.
level The level is an attribute of MLS and MCS. An MLS range is a pair of levels, written
as lowlevel-highlevel if the levels differ, or lowlevel if the levels are identical (s0-s0
is the same as s0). Each level is a sensitivity-category pair, with categories being
optional. If there are categories, the level is written as sensitivity:category-set. If
there are no categories, it is written as sensitivity. If the category set is a contiguous
series, it can be abbreviated. For example, c0.c3 is the same as c0,c1,c2,c3. The
/etc/selinux/targeted/setrans.conf file maps levels (s0:c0) to a human-readable form (that is, CompanyConfidential). In Red Hat Enterprise Linux, targeted policy
enforces MCS, and in MCS, there is just one sensitivity, s0. MCS in Red Hat Enterprise
Linux supports 1024 different categories: c0 through to c1023. s0-s0:c0.c1023 is
sensitivity s0 and authorized for all categories.

10.3.5 Domain Transitions


A process in one domain transitions to another domain by executing an application that has
the entrypoint type for the new domain. The entrypoint permission is used in SELinux
policy and controls which applications can be used to enter a domain. The following example
demonstrates a domain transition:
An Example of a Domain Transition

1. A user wants to change his password. To do this, they run the passwd utility. The
/usr/bin/passwd executable is labeled with the passwd_exec_t type:
$ ls -Z /usr/bin/passwd
-rwsr-xr-x root root system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd
The passwd utility accesses /etc/shadow, which is labeled with the shadow_t type:
$ ls -Z /etc/shadow
-r--------. root root system_u:object_r:shadow_t:s0 /etc/shadow

2. An SELinux policy rule states that processes running in the passwd_t domain are
allowed to read and write to files labeled with the shadow_t type. The shadow_t
type is only applied to files that are required for a password change. This includes
/etc/gshadow, /etc/shadow, and their backup files.
3. An SELinux policy rule states that the passwd_t domain has entrypoint permission to
the passwd_exec_t type.
4. When a user runs the passwd utility, the user’s shell process transitions to the passwd_t
domain. With SELinux, since the default action is to deny, and a rule exists that allows
(among other things) applications running in the passwd_t domain to access files labeled
with the shadow_t type, the passwd application is allowed to access /etc/shadow, and
update the user’s password.

This example is not exhaustive, and is used as a basic example to explain domain transition.
Although there is an actual rule that allows subjects running in the passwd_t domain to
access objects labeled with the shadow_t file type, other SELinux policy rules must be met
before the subject can transition to a new domain. In this example, Type Enforcement ensures:

• The passwd_t domain can only be entered by executing an application labeled with
the passwd_exec_t type; can only execute from authorized shared libraries, such as the
lib_t type; and cannot execute any other applications.


• Only authorized domains, such as passwd_t, can write to files labeled with the shadow_t
type. Even if other processes are running with superuser privileges, those processes
cannot write to files labeled with the shadow_t type, as they are not running in the
passwd_t domain.
• Only authorized domains can transition to the passwd_t domain. For example, the
sendmail process running in the sendmail_t domain does not have a legitimate reason
to execute passwd; therefore, it can never transition to the passwd_t domain.
• Processes running in the passwd_t domain can only read and write to authorized
types, such as files labeled with the etc_t or shadow_t types. This prevents the passwd
application from being tricked into reading or writing arbitrary files.

10.3.6 SELinux Contexts for Processes


Use the ps -eZ command to view the SELinux context for processes. Consider the following
example.

Example: view the SELinux Context for the passwd Utility

1. Open a terminal, such as Applications → System Tools → Terminal.


2. Run the passwd utility. Do not enter a new password:
$ passwd
Changing password for user user_name.
Changing password for user_name.
(current) UNIX password:

3. Open a new tab, or another terminal, and enter the following command. The output is
similar to the following:
$ ps -eZ | grep passwd
unconfined_u:unconfined_r:passwd_t:s0-s0:c0.c1023 13212 pts/1 00:00:00 passwd

4. In the first tab/terminal, press Ctrl+C to cancel the passwd utility.

In this example, when the passwd utility (labeled with the passwd_exec_t type) is executed,
the user’s shell process transitions to the passwd_t domain. Remember that the type defines
a domain for processes, and a type for files.

To view the SELinux contexts for all running processes, run the ps utility again. Note that
below is a truncated example of the output, and may differ on your system:
$ ps -eZ
system_u:system_r:dhcpc_t:s0                1869 ?  00:00:00 dhclient
system_u:system_r:sshd_t:s0-s0:c0.c1023     1882 ?  00:00:00 sshd
system_u:system_r:gpm_t:s0                  1964 ?  00:00:00 gpm
system_u:system_r:crond_t:s0-s0:c0.c1023    1973 ?  00:00:00 crond
system_u:system_r:kerneloops_t:s0           1983 ?  00:00:05 kerneloops
system_u:system_r:crond_t:s0-s0:c0.c1023    1991 ?  00:00:00 atd

The system_r role is used for system processes, such as daemons. Type Enforcement then
separates each domain.

10.3.7 SELinux Contexts for Users


Use the following command to view the SELinux context associated with your Linux user:
$ id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

In Red Hat Enterprise Linux, Linux users run unconfined by default. This SELinux context
shows that the Linux user is mapped to the SELinux unconfined_u user, running as the
unconfined_r role, and is running in the unconfined_t domain. s0-s0 is an MLS range,
which in this case, is the same as just s0. The categories the user has access to is defined by
c0.c1023, which is all categories (c0 through to c1023).

10.3.8 Targeted Policy


Targeted policy is the default SELinux policy used in Red Hat Enterprise Linux. When
using targeted policy, processes that are targeted run in a confined domain, and processes
that are not targeted run in an unconfined domain. For example, by default, logged-in
users run in the unconfined_t domain, and system processes started by init run in the
unconfined_service_t domain; both of these domains are unconfined.

Executable and writable memory checks may apply to both confined and unconfined domains.
However, by default, subjects running in an unconfined domain can allocate writable memory
and execute it. These memory checks can be enabled by setting Booleans, which allow the
SELinux policy to be modified at runtime. Boolean configuration is discussed later.

10.3.9 Confined Processes


Almost every service that listens on a network, such as sshd or httpd, is confined in Red
Hat Enterprise Linux. Also, most processes that run as the root user and perform tasks for
users, such as the passwd utility, are confined. When a process is confined, it runs in its own
domain, such as the httpd process running in the httpd_t domain. If a confined process
is compromised by an attacker, depending on SELinux policy configuration, an attacker’s
access to resources and the possible damage they can do is limited.

The following example demonstrates how SELinux prevents the Apache HTTP Server (httpd)
from reading files that are not correctly labeled, such as files intended for use by Samba. This
is an example, and should not be used in production. It assumes that the httpd and wget
packages are installed, the SELinux targeted policy is used, and that SELinux is running in
enforcing mode.


An Example of Confined Process

1. Confirm that SELinux is enabled, is running in enforcing mode, and that targeted policy
is being used. The correct output should look similar to the output below:
$ sestatus
SELinux status:          enabled
SELinuxfs mount:         /sys/fs/selinux
Current mode:            enforcing
Mode from config file:   enforcing
Policy version:          24
Policy from config file: targeted

2. As root, create a file in the /var/www/html/ directory:


# touch /var/www/html/testfile

3. Enter the following command to view the SELinux context of the newly created file:
$ ls -Z /var/www/html/testfile
-rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/testfile
By default, Linux users run unconfined in Red Hat Enterprise Linux, which is why the
testfile file is labeled with the SELinux unconfined_u user. RBAC is used for processes,
not files. Roles do not have a meaning for files; the object_r role is a generic role used
for files (on persistent storage and network file systems). Under the /proc directory,
files related to processes may use the system_r role. The httpd_sys_content_t type
allows the httpd process to access this file.
4. As root, start the httpd daemon:
# systemctl start httpd.service
Confirm that the service is running. The output should include the information below
(only the time stamp will differ):
# systemctl status httpd.service
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
   Active: active (running) since Mon 2013-08-05 14:00:55 CEST; 8s ago

5. Change into a directory where your Linux user has write access to, and enter the
following command. Unless there are changes to the default configuration, this
command succeeds:
$ wget http://localhost/testfile
--2009-11-06 17:43:01--  http://localhost/testfile
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/plain]
Saving to: ‘testfile’

[ <=> ] 0 --.-K/s in 0s

2009-11-06 17:43:01 (0.00 B/s) - ‘testfile’ saved [0/0]

6. The chcon command relabels files; however, such label changes do not survive when the
file system is relabeled. For permanent changes that survive a file system relabel, use
the semanage utility, which is discussed later. As root, enter the following command to
change the type to a type used by Samba:
# chcon -t samba_share_t /var/www/html/testfile
Enter the following command to view the changes:
$ ls -Z /var/www/html/testfile
-rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile

7. Note that the current DAC permissions allow the httpd process access to testfile.
Change into a directory where your user has write access to, and enter the following
command. Unless there are changes to the default configuration, this command fails:
$ wget http://localhost/testfile
--2009-11-06 14:11:23--  http://localhost/testfile
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2009-11-06 14:11:23 ERROR 403: Forbidden.

This example demonstrates the additional security added by SELinux. Although DAC rules
allowed the httpd process access to testfile in step 2, because the file was labeled with a
type that the httpd process does not have access to, SELinux denied access.

If the auditd daemon is running, an error similar to the following is logged to /var/log/audit/audit.log:
type=AVC msg=audit(1220706212.937:70): avc: denied { getattr } for pid=1904 comm="httpd"
path="/var/www/html/testfile" dev=sda5 ino=247576
scontext=unconfined_u:system_r:httpd_t:s0
tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file

type=SYSCALL msg=audit(1220706212.937:70): arch=40000003 syscall=196
success=no exit=-13 a0=b9e21da0 a1=bf9581dc a2=555ff4 a3=2008171 items=0
ppid=1902 pid=1904 auid=500 uid=48 gid=48 euid=48 suid=48 fsuid=48
egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm="httpd"
exe="/usr/sbin/httpd" subj=unconfined_u:system_r:httpd_t:s0 key=(null)

Also, an error similar to the following is logged to /var/log/httpd/error_log:
[Wed May 06 23:00:54 2009] [error] [client 127.0.0.1] (13) Permission denied: access to /testfile denied

10.3.10 Unconfined Processes


Unconfined processes run in unconfined domains, for example, unconfined services executed
by init end up running in the unconfined_service_t domain, unconfined services executed
by kernel end up running in the kernel_t domain, and unconfined services executed by
unconfined Linux users end up running in the unconfined_t domain. For unconfined
processes, SELinux policy rules are applied, but policy rules exist that allow processes
running in unconfined domains almost all access. Processes running in unconfined domains
fall back to using DAC rules exclusively. If an unconfined process is compromised, SELinux
does not prevent an attacker from gaining access to system resources and data, but of course,
DAC rules are still used. SELinux is a security enhancement on top of DAC rules – it does not
replace them.

The following example demonstrates how the Apache HTTP Server (httpd) can access data
intended for use by Samba, when running unconfined. Note that in Red Hat Enterprise
Linux, the httpd process runs in the confined httpd_t domain by default. This is an example,
and should not be used in production. It assumes that the httpd, wget, dbus and audit
packages are installed, that the SELinux targeted policy is used, and that SELinux is running
in enforcing mode.

An Example of Unconfined Process

1. As the root user, enter the following command to change the type to a type used by
Samba:
# chcon -t samba_share_t /var/www/html/testfile
View the changes:
$ ls -Z /var/www/html/testfile
-rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile

2. Enter the following command to confirm that the httpd process is not running:
# systemctl status httpd.service
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
   Active: inactive (dead)
If the output differs, enter the following command as root to stop the httpd process:
# systemctl stop httpd.service

3. To make the httpd process run unconfined, enter the following command as root to
change the type of the /usr/sbin/httpd file, to a type that does not transition to a
confined domain:
# chcon -t bin_t /usr/sbin/httpd

4. Confirm that /usr/sbin/httpd is labeled with the bin_t type:


$ ls -Z /usr/sbin/httpd
-rwxr-xr-x. root root system_u:object_r:bin_t:s0 /usr/sbin/httpd

5. As root, start the httpd process and confirm, that it started successfully:
# systemctl start httpd.service

# systemctl status httpd.service
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
   Active: active (running) since Thu 2013-08-15 11:17:01 CEST; 5s ago

6. Enter the following command to view httpd running in the unconfined_service_t domain:
$ ps -eZ | grep httpd
system_u:system_r:unconfined_service_t:s0 11884 ? 00:00:00 httpd
system_u:system_r:unconfined_service_t:s0 11888 ? 00:00:00 httpd
system_u:system_r:unconfined_service_t:s0 11889 ? 00:00:00 httpd

7. Change into a directory where your Linux user has write access to, and enter the
following command. Unless there are changes to the default configuration, this
command succeeds:
$ wget http://localhost/testfile
--2009-05-07 01:41:10--  http://localhost/testfile
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/plain]
Saving to: ‘testfile’

[ <=> ] --.-K/s in 0s

2009-05-07 01:41:10 (0.00 B/s) - ‘testfile’ saved [0/0]


Although the httpd process does not have access to files labeled with the samba_share_t
type, httpd is running in the unconfined unconfined_service_t domain, and falls
back to using DAC rules, and as such, the wget command succeeds. Had httpd been
running in the confined httpd_t domain, the wget command would have failed.
8. The restorecon utility restores the default SELinux context for files. As root, enter the
following command to restore the default SELinux context for /usr/sbin/httpd:
# restorecon -v /usr/sbin/httpd
restorecon reset /usr/sbin/httpd context system_u:object_r:unconfined_exec_t:s0 -> system_u:object_r:httpd_exec_t:s0
Confirm that /usr/sbin/httpd is labeled with the httpd_exec_t type:
$ ls -Z /usr/sbin/httpd
-rwxr-xr-x root root system_u:object_r:httpd_exec_t:s0 /usr/sbin/httpd

9. As root, enter the following command to restart httpd. After restarting, confirm that
httpd is running in the confined httpd_t domain:
# systemctl restart httpd.service

$ ps -eZ | grep httpd
system_u:system_r:httpd_t:s0 8883 ? 00:00:00 httpd
system_u:system_r:httpd_t:s0 8884 ? 00:00:00 httpd
system_u:system_r:httpd_t:s0 8885 ? 00:00:00 httpd

The examples in these sections demonstrate how data can be protected from a compromised confined process (protected by SELinux), as well as how data is more accessible to an attacker from a compromised unconfined process (not protected by SELinux).

10.3.11 Confined and Unconfined Users


Each Linux user is mapped to an SELinux user using SELinux policy. This allows Linux users
to inherit the restrictions on SELinux users. This Linux user mapping is seen by running the
semanage login -l command as root:


# semanage login -l

Login Name        SELinux User      MLS/MCS Range       Service

__default__       unconfined_u      s0-s0:c0.c1023      *
root              unconfined_u      s0-s0:c0.c1023      *
system_u          system_u          s0-s0:c0.c1023      *

In Red Hat Enterprise Linux, Linux users are mapped to the SELinux __default__ login by
default, which is mapped to the SELinux unconfined_u user. The following line defines the
default mapping:
__default__       unconfined_u      s0-s0:c0.c1023

The following procedure demonstrates how to add a new Linux user to the system and how
to map that user to the SELinux unconfined_u user. It assumes that the root user is running
unconfined, as it does by default in Red Hat Enterprise Linux:

Mapping a New Linux User to the SELinux unconfined_u User

1. As root, enter the following command to create a new Linux user named newuser:
# useradd newuser

2. To assign a password to the Linux newuser user, enter the following command as root:
# passwd newuser
Changing password for user newuser.
New UNIX password: Enter a password
Retype new UNIX password: Enter the same password again
passwd: all authentication tokens updated successfully.

3. Log out of your current session, and log in as the Linux newuser user. When you log
in, the pam_selinux PAM module automatically maps the Linux user to an SELinux
user (in this case, unconfined_u), and sets up the resulting SELinux context. The Linux
user’s shell is then launched with this context. Enter the following command to view
the context of a Linux user:
[newuser@localhost]$ id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Confined and unconfined Linux users are subject to executable and writable memory checks,
and are also restricted by MCS or MLS.

If an unconfined Linux user executes an application that SELinux policy defines as one that
can transition from the unconfined_t domain to its own confined domain, the unconfined
Linux user is still subject to the restrictions of that confined domain. The security benefit
of this is that, even though a Linux user is running unconfined, the application remains
confined. Therefore, the exploitation of a flaw in the application can be limited by the policy.


Similarly, we can apply these checks to confined users. Each confined Linux user is restricted
by a confined user domain. The SELinux policy can also define a transition from a confined
user domain to its own target confined domain. In such a case, confined Linux users are
subject to the restrictions of that target confined domain. The main point is that special
privileges are associated with the confined users according to their role.

Alongside the already mentioned SELinux users, there are special roles that can be mapped to those users. These roles determine what SELinux allows the user to do:

• webadm_r can only administrate SELinux types related to the Apache HTTP Server.
• dbadm_r can only administrate SELinux types related to the MariaDB database and the
PostgreSQL database management system.
• logadm_r can only administrate SELinux types related to the syslog and auditlog
processes.
• secadm_r can only administrate SELinux.
• auditadm_r can only administrate processes related to the audit subsystem.

To list all available roles, enter the following command:


# seinfo -r

Note that the seinfo command is provided by the setools-console package, which is not
installed by default.
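If seinfo is not available on your system, the package can be installed first, for example:

# yum install setools-console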

10.3.11.1 Changing the Default Mapping

In Red Hat Enterprise Linux, Linux users are mapped to the SELinux __default__ login by
default (which is in turn mapped to the SELinux unconfined_u user). If you would like new
Linux users, and Linux users not specifically mapped to an SELinux user to be confined by
default, change the default mapping with the semanage login command.

For example, enter the following command as root to change the default mapping from
unconfined_u to user_u:
# semanage login -m -S targeted -s "user_u" -r s0 __default__

Verify the __default__ login is mapped to user_u:


# semanage login -l

Login Name        SELinux User      MLS/MCS Range       Service

__default__       user_u            s0-s0:c0.c1023      *
root              unconfined_u      s0-s0:c0.c1023      *
system_u          system_u          s0-s0:c0.c1023      *

If a new Linux user is created and an SELinux user is not specified, or if an existing Linux
user logs in and does not match a specific entry from the semanage login -l output, they
are mapped to user_u, as per the __default__ login.


To change back to the default behavior, enter the following command as root to map the
__default__ login to the SELinux unconfined_u user:
# semanage login -m -S targeted -s "unconfined_u" -r s0-s0:c0.c1023 __default__

10.3.12 SELinux Packages


In a Red Hat Enterprise Linux full installation, the SELinux packages are installed by default unless they are manually excluded during installation. If performing a minimal installation in text mode, the policycoreutils-python and policycoreutils-gui packages are not installed by default (see the note after the list below). Also, by default, SELinux runs in enforcing mode and the SELinux targeted policy is used. The following SELinux packages are installed on your system by default:

• policycoreutils provides utilities such as restorecon, secon, setfiles, semodule, load_policy, and setsebool, for operating and managing SELinux.
• selinux-policy provides a basic directory structure, the selinux-policy.conf file, and
RPM macros.
• selinux-policy-targeted provides the SELinux targeted policy.
• libselinux provides an API for SELinux applications.
• libselinux-utils provides the avcstat, getenforce, getsebool, matchpathcon,
selinuxconlist, selinuxdefcon, selinuxenabled, and setenforce utilities.
• libselinux-python provides Python bindings for developing SELinux applications.
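As noted above, a minimal installation may lack some optional packages. They can be added afterwards with yum; for example, to obtain the semanage utility used elsewhere in this chapter (a sketch, assuming the Red Hat Enterprise Linux 7 package layout):

# yum install policycoreutils-python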

10.3.13 Main Configuration File


The /etc/selinux/config file is the main SELinux configuration file. It controls whether
SELinux is enabled or disabled and which SELinux mode and SELinux policy is used:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

SELINUX= The SELINUX option sets whether SELinux is disabled or enabled and in which
mode - enforcing or permissive - it is running:


• When using SELINUX=enforcing, SELinux policy is enforced, and SELinux denies access based on SELinux policy rules. Denial messages are logged.
• When using SELINUX=permissive, SELinux policy is not enforced. SELinux does not deny access, but denials are logged for actions that would have been denied if running SELinux in enforcing mode.
• When using SELINUX=disabled, SELinux is disabled, the SELinux module is not registered with the Linux kernel, and only DAC rules are used.
SELINUXTYPE= The SELINUXTYPE option sets the SELinux policy to use. Targeted policy is
the default policy. Only change this option if you want to use the MLS policy.

10.3.14 Permanent Changes in SELinux States and Modes


When systems run SELinux in permissive mode, users are able to label files incorrectly. Files
created while SELinux is disabled are not labeled at all. This behavior causes problems when
changing to enforcing mode because files are labeled incorrectly or are not labeled at all.
To prevent incorrectly labeled and unlabeled files from causing problems, file systems are
automatically relabeled when changing from the disabled state to permissive or enforcing
mode.
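
If a system has already been running with SELinux disabled for some time, a full relabel can
also be requested manually before switching back to enforcing mode. A common approach,
sketched below, is to create the /.autorelabel flag file as root and reboot; the file systems are
then relabeled during the next boot:
# touch /.autorelabel
# reboot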

10.3.15 Booleans
Booleans allow parts of SELinux policy to be changed at runtime, without any knowledge
of SELinux policy writing. This allows changes, such as allowing services access to NFS
volumes, without reloading or recompiling SELinux policy.

10.3.15.1 Listing Booleans

For a list of Booleans, an explanation of what each one is, and whether they are on or off, run
the semanage boolean -l command as the Linux root user. The following example does not
list all Booleans and the output is shortened for brevity:
# semanage boolean -l
SELinux boolean                State  Default Description
smartmon_3ware                 (off  ,  off)  Determine whether smartmon can ...
mpd_enable_homedirs            (off  ,  off)  Determine whether mpd can traverse ...

The SELinux boolean column lists the Boolean names, the State and Default columns show
whether each Boolean is currently on or off (and its default value), and the Description
column explains what it does.

The getsebool -a command lists Booleans, whether they are on or off, but does not give a
description of each one:


# getsebool -a
cvs_read_shadow --> off
daemons_dump_core --> on

Run the getsebool boolean-name command to only list the status of the boolean-name
Boolean:
# getsebool cvs_read_shadow
cvs_read_shadow --> off

Use a space-separated list to list multiple Booleans:


# getsebool cvs_read_shadow daemons_dump_core
cvs_read_shadow --> off
daemons_dump_core --> on

10.3.15.2 Configuring Booleans

Run the setsebool utility in the setsebool boolean_name on/off form to enable or disable
Booleans.

The following example demonstrates configuring the httpd_can_network_connect_db
Boolean:

1. By default, the httpd_can_network_connect_db Boolean is off, preventing Apache
   HTTP Server scripts and modules from connecting to database servers:
   # getsebool httpd_can_network_connect_db
   httpd_can_network_connect_db --> off

2. To temporarily enable Apache HTTP Server scripts and modules to connect to database
servers, enter the following command as root:
# setsebool httpd_can_network_connect_db on

3. Use the getsebool utility to verify the Boolean has been enabled:
# getsebool httpd_can_network_connect_db
httpd_can_network_connect_db --> on
This allows Apache HTTP Server scripts and modules to connect to database servers.
4. This change is not persistent across reboots. To make it persistent, run the
   setsebool -P boolean-name on command as root:
   # setsebool -P httpd_can_network_connect_db on
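
To verify that the persistent change was recorded, the Boolean can be queried again with
getsebool, and its current and default values can be checked with semanage; a minimal
sketch:
# getsebool httpd_can_network_connect_db
# semanage boolean -l | grep httpd_can_network_connect_db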

10.3.16 Information Gathering Tools


The utilities listed below are command-line tools that provide well-formatted information,
such as access vector cache statistics or the number of classes, types, or Booleans.


10.3.16.1 avcstat

This command provides a short output of the access vector cache statistics since boot.
You can watch the statistics in real time by specifying a time interval in seconds.
This provides updated statistics since the initial output. The statistics file used is
/sys/fs/selinux/avc/cache_stats, and you can specify a different cache file with the
-f /path/to/file option.
# avcstat
   lookups       hits     misses     allocs   reclaims      frees
  47517410   47504630      12780      12780      12176      12275
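
As mentioned above, avcstat can also be given an interval in seconds, after which it keeps
printing updated statistics until interrupted; for example, to refresh every five seconds:
# avcstat 5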

10.3.16.2 seinfo

This utility is useful for describing the breakdown of a policy, such as the number of classes,
types, Booleans, allow rules, and others. seinfo is a command-line utility that uses a
policy.conf file, a binary policy file, a modular list of policy packages, or a policy list file as
input. You must have the setools-console package installed to use the seinfo utility.

10.3.16.3 sesearch

You can use the sesearch utility to search for a particular rule in the policy. It is possible to
search either policy source files or the binary file. For example:
# sesearch --role_allow -t httpd_sys_content_t
Found 20 role allow rules:
   allow system_r sysadm_r;
   allow sysadm_r system_r;
   allow sysadm_r staff_r;
   allow sysadm_r user_r;
<output omitted>

The sesearch utility can provide the number of allow rules:


# sesearch --allow | wc -l
262798

And the number of dontaudit rules:


# sesearch --dontaudit | wc -l
156712

10.3.17 Troubleshooting
The following section describes what happens when SELinux denies access; the top three
causes of problems; where to find information about correct labeling; analyzing SELinux
denials; and creating custom policy modules with audit2allow.


10.3.18 What Happens When Access is Denied


SELinux decisions, such as allowing or disallowing access, are cached. This cache is known
as the Access Vector Cache (AVC). Denial messages are logged when SELinux denies access.
These denials are also known as “AVC denials”, and are logged to a different location,
depending on which daemons are running.

If you are running the X Window System, have the setroubleshoot and setroubleshoot-server
packages installed, and the setroubleshootd and auditd daemons are running, a warning
is displayed when access is denied by SELinux (Figure 10.4).

Figure 10.4: SELinux security alert.

Clicking on Show presents a detailed analysis of why SELinux denied access, and a possible
solution for allowing access. If you are not running the X Window System, it is less obvious
when access is denied by SELinux. For example, users browsing your website may receive an
error similar to the following:
Forbidden

You don't have permission to access file name on this server

For these situations, if DAC rules (standard Linux permissions) allow access, check
/var/log/messages and /var/log/audit/audit.log for "SELinux is preventing" and
"denied" errors respectively. This can be done by running the following commands as the
root user:
# grep " SELinux is preventing " /var/log/ messages

# grep " denied " /var/log/ audit / audit .log

10.3.19 Top Three Causes of Problems


The following sections describe the top three causes of problems: labeling problems,
configuring Booleans and ports for services, and evolving SELinux rules.


10.3.19.1 Labeling Problems

On systems running SELinux, all processes and files are labeled with a label that contains
security-relevant information. This information is called the SELinux context. If these labels
are wrong, access may be denied. An incorrectly labeled application may cause an incorrect
label to be assigned to its process. This may cause SELinux to deny access, and the process
may create mislabeled files.

A common cause of labeling problems is when a non-standard directory is used for a service.
For example, instead of using /var/www/html/ for a website, an administrator wants to use
/srv/myweb/. On Red Hat Enterprise Linux, the /srv directory is labeled with the var_t
type. Files and directories created under /srv/ inherit this type. Also, newly-created top-level
directories (such as myserver/) may be labeled with the default_t type. SELinux prevents
the Apache HTTP Server (httpd) from accessing both of these types. To allow access, SELinux
must know that the files in /srv/myweb/ are to be accessible to httpd:
# semanage fcontext -a -t httpd_sys_content_t "/srv/myweb(/.*)?"

This semanage command adds the context for the /srv/myweb/ directory (and all files and
directories under it) to the SELinux file-context configuration. The semanage utility does not
change the context. As root, run the restorecon utility to apply the changes:
# restorecon -R -v /srv/myweb
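
To confirm that the restored context is what httpd expects, the -Z option of ls displays the
SELinux context of files and directories; a minimal sketch using the hypothetical /srv/myweb
directory from the example above:
# ls -dZ /srv/myweb
# ls -Z /srv/myweb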

What is the Correct Context? The matchpathcon utility checks the context of a file path
and compares it to the default label for that path. The following example demonstrates using
matchpathcon on a directory that contains incorrectly labeled files:
# matchpathcon -V /var/www/html/*
/var/www/html/index.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0
/var/www/html/page1.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0

In this example, the index.html and page1.html files are labeled with the user_home_t type.
This type is used for files in user home directories. Using the mv command to move files from
your home directory may result in files being labeled with the user_home_t type. This type
should not exist outside of home directories. Use the restorecon utility to restore such files
to their correct type:
# restorecon -v /var/www/html/index.html
restorecon reset /var/www/html/index.html context unconfined_u:object_r:user_home_t:s0 -> system_u:object_r:httpd_sys_content_t:s0


To restore the context for all files under a directory, use the -R option:
# restorecon -R -v /var/www/html/
restorecon reset /var/www/html/page1.html context unconfined_u:object_r:samba_share_t:s0 -> system_u:object_r:httpd_sys_content_t:s0
restorecon reset /var/www/html/index.html context unconfined_u:object_r:samba_share_t:s0 -> system_u:object_r:httpd_sys_content_t:s0

10.3.19.2 How are Confined Services Running?

Services can be run in a variety of ways. To cater for this, you need to specify how you run
your services. This can be achieved through Booleans that allow parts of SELinux policy to be
changed at runtime, without any knowledge of SELinux policy writing. This allows changes,
such as allowing services access to NFS volumes, without reloading or recompiling SELinux
policy. Also, running services on non-default port numbers requires policy configuration to
be updated using the semanage command.

For example, to allow the Apache HTTP Server to communicate with MariaDB, enable the
httpd_can_network_connect_db Boolean:
# setsebool -P httpd_can_network_connect_db on

If access is denied for a particular service, use the getsebool and grep utilities to see if
any Booleans are available to allow access. For example, use the getsebool -a | grep ftp
command to search for FTP related Booleans:
# getsebool -a | grep ftp
ftpd_anon_write --> off
ftpd_full_access --> off
ftpd_use_cifs --> off
ftpd_use_nfs --> off
ftpd_connect_db --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off

For a list of Booleans and whether they are on or off, run the getsebool -a command. For a
list of Booleans, an explanation of what each one is, and whether they are on or off, run the
semanage boolean -l command as root.

10.3.19.3 Port Numbers

Depending on policy configuration, services may only be allowed to run on certain port
numbers. Attempting to change the port a service runs on without changing policy may


result in the service failing to start. For example, run the semanage port -l | grep http
command as root to list http related ports:
# semanage port -l | grep http
http_cache_port_t              tcp      3128, 8080, 8118
http_cache_port_t              udp      3130
http_port_t                    tcp      80, 443, 488, 8008, 8009, 8443
pegasus_http_port_t            tcp      5988
pegasus_https_port_t           tcp      5989

The http_port_t port type defines the ports Apache HTTP Server can listen on, which in
this case, are TCP ports 80, 443, 488, 8008, 8009, and 8443. If an administrator configures
httpd.conf so that httpd listens on port 9876 (Listen 9876), but policy is not updated to
reflect this, the following command fails:
# systemctl start httpd.service
Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details.

# systemctl status httpd.service
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
   Active: failed (Result: exit-code) since Thu 2013-08-15 09:57:05 CEST; 59s ago
  Process: 16874 ExecStop=/usr/sbin/httpd $OPTIONS -k graceful-stop (code=exited, status=0/SUCCESS)
  Process: 16870 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)

An SELinux denial message similar to the following is logged to /var/log/audit/audit.log:


type=AVC msg=audit(1225948455.061:294):
avc: denied { name_bind } for pid=4997 comm="httpd" src=9876
scontext=unconfined_u:system_r:httpd_t:s0
tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket

To allow httpd to listen on a port that is not listed for the http_port_t port type, run the
semanage port command to add a port to policy configuration.
# semanage port -a -t http_port_t -p tcp 9876

The -a option adds a new record; the -t option defines a type; and the -p option defines a
protocol. The last argument is the port number to add.
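
After updating the policy, it is worth confirming that the new port is now listed for the
http_port_t type before starting the service again; a minimal sketch:
# semanage port -l | grep http_port_t
# systemctl start httpd.service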


10.3.19.4 Evolving Rules and Broken Applications

Applications may be broken, causing SELinux to deny access. Also, SELinux rules are
evolving – SELinux may not have seen an application running in a certain way, possibly
causing it to deny access, even though the application is working as expected. For example, if
a new version of PostgreSQL is released, it may perform actions the current policy has not
seen before, causing access to be denied, even though access should be allowed.

For these situations, after access is denied, use the audit2allow utility to create a custom
policy module to allow access.

10.3.20 Fixing Problems


The following sections help troubleshoot issues. They go over: checking Linux permissions,
which are checked before SELinux rules; possible causes of SELinux denying access, but no
denials being logged; manual pages for services, which contain information about labeling
and Booleans; permissive domains, for allowing one process to run permissive, rather than
the whole system; how to search for and view denial messages; analyzing denials; and
creating custom policy modules with audit2allow.

10.3.20.1 Linux Permissions

When access is denied, check standard Linux permissions. As mentioned in Section 10.3,
Linux uses a Discretionary Access Control (DAC) system to control access, allowing users to
control the permissions of files that they own. SELinux policy rules are checked after DAC
rules. SELinux policy rules are not used if DAC rules deny access first.

If access is denied and no SELinux denials are logged, use the following command to view
the standard Linux permissions:
$ ls -l /var/www/html/index.html
-rw-r----- 1 root root 0 2009-05-07 11:06 index.html

In this example, index.html is owned by the root user and group. The root user has read
and write permissions (rw-), members of the root group have read permission (r--), and
everyone else has no access (---). By default, such permissions do not allow httpd to read
this file. To resolve this issue, use the chown command to change the owner and group. This
command must be run as root:
# chown apache:apache /var/www/html/index.html

This assumes the default configuration, in which httpd runs as the Linux Apache user. If
you run httpd with a different user, replace apache:apache with that user.
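
Where the file is meant to be world-readable anyway (as most static web content is), an
alternative to changing ownership is to grant read permission to others with chmod; a sketch
that keeps root as the owner (the SELinux context must of course still be correct in either
case):
# chmod 644 /var/www/html/index.html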

10.3.20.2 Possible Causes of Silent Denials

In certain situations, AVC denial messages may not be logged when SELinux denies access.
Applications and system library functions often probe for more access than required to


perform their tasks. To maintain least privilege without filling audit logs with AVC denials
for harmless application probing, the policy can silence AVC denials without allowing a
permission by using dontaudit rules. These rules are common in standard policy. The
downside of dontaudit is that, although SELinux denies access, denial messages are not
logged, making troubleshooting more difficult.
To temporarily disable dontaudit rules, allowing all denials to be logged, enter the following
command as root:
# semodule -DB

The -D option disables dontaudit rules; the -B option rebuilds policy. After running semodule
-DB, try exercising the application that was encountering permission problems, and see if
SELinux denials — relevant to the application — are now being logged. Take care in deciding
which denials should be allowed, as some should be ignored and handled by dontaudit
rules.
To rebuild policy and enable dontaudit rules, enter the following command as root:
# semodule -B

This restores the policy to its original state. For a full list of dontaudit rules, run the sesearch
--dontaudit command. Narrow down searches using the -s domain option and the grep
command. For example:
$ sesearch --dontaudit -s smbd_t | grep squid
dontaudit smbd_t squid_port_t : tcp_socket name_bind ;
dontaudit smbd_t squid_port_t : udp_socket name_bind ;

10.3.20.3 Manual Pages for Services

Manual pages for services contain valuable information, such as what file type to use for a
given situation, and Booleans to change the access a service has (such as httpd accessing
NFS volumes). This information may be in the standard manual page or in the manual page
that can be automatically generated from the SELinux policy for every service domain using
the sepolicy manpage utility. Such manual pages are named in the service-name_selinux
format and are shipped with the selinux-policy-doc package.
For example, the httpd_selinux(8) manual page has information about what file type to use
for a given situation, as well as Booleans to allow scripts, sharing files, accessing directories
inside user home directories, and so on. Other manual pages with SELinux information for
services include:

Samba: the samba_selinux(8) manual page, for example, describes that enabling the
samba_enable_home_dirs Boolean allows Samba to share users' home directories.
NFS: the nfsd_selinux(8) manual page describes the SELinux nfsd policy, which allows users to
set up their nfsd processes as securely as possible.

The information in manual pages helps you configure the correct file types and Booleans,
helping to prevent SELinux from denying access.


10.3.20.4 Permissive Domains

When SELinux is running in permissive mode, SELinux does not deny access, but denials are
logged for actions that would have been denied if running in enforcing mode. Previously, it
was not possible to make a single domain permissive (remember: processes run in domains).
In certain situations, this led to making the whole system permissive to troubleshoot issues.

Permissive domains allow an administrator to configure a single process (domain) to run
permissive, rather than making the whole system permissive. SELinux checks are still
performed for permissive domains; however, the kernel allows access and reports an AVC
denial for situations where SELinux would have denied access.

Permissive domains have the following uses:

• They can be used for making a single process (domain) run permissive to troubleshoot
an issue without putting the entire system at risk by making it permissive.
• They allow an administrator to create policies for new applications. Previously, it was
recommended that a minimal policy be created, and then the entire machine put into
permissive mode, so that the application could run, but SELinux denials still logged.
The audit2allow utility could then be used to help write the policy. This put the whole system
at risk. With permissive domains, only the domain in the new policy can be marked
permissive, without putting the whole system at risk.

Making a Domain Permissive To make a domain permissive, run the semanage
permissive -a domain command, where domain is the domain you want to make permissive.
For example, enter the following command as root to make the httpd_t domain (the domain
the Apache HTTP Server runs in) permissive:
# semanage permissive -a httpd_t

To view a list of domains you have made permissive, run the semodule -l | grep
permissive command as root. For example:
# semodule -l | grep permissive
permissive_httpd_t 1.0
permissivedomains 1.0.0

If you no longer want a domain to be permissive, run the semanage permissive -d domain
command as root. For example:
# semanage permissive -d httpd_t

Disabling Permissive Domains The permissivedomains.pp module contains all of the
permissive domain declarations that are present on the system. To disable all permissive
domains, enter the following command as root:
# semodule -d permissivedomains


Note that, once a policy module is disabled through the semodule -d command, it is no longer
shown in the output of the semodule -l command. To see all policy modules, including
disabled ones, enter the following command as root:
# semodule --list-modules=full

Denials for Permissive Domains The SYSCALL message is different for permissive domains.
The following is an example AVC denial (and the associated system call) from the Apache
HTTP Server:
type=AVC msg=audit(1226882736.442:86):
avc: denied { getattr } for pid=2427 comm="httpd"
path="/var/www/html/file1" dev=dm-0 ino=284133
scontext=unconfined_u:system_r:httpd_t:s0
tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file

type=SYSCALL msg=audit(1226882736.442:86): arch=40000003 syscall=196
success=no exit=-13 a0=b9a1e198 a1=bfc2921c a2=54dff4 a3=2008171 items=0
ppid=2425 pid=2427 auid=502 uid=48 gid=48 euid=48 suid=48 fsuid=48
egid=48 sgid=48 fsgid=48 tty=(none) ses=4 comm="httpd"
exe="/usr/sbin/httpd" subj=unconfined_u:system_r:httpd_t:s0 key=(null)

By default, the httpd_t domain is not permissive, and as such, the action is denied, and the
SYSCALL message contains success=no. The following is an example AVC denial for the same
situation, except the semanage permissive -a httpd_t command has been run to make the
httpd_t domain permissive:
type=AVC msg=audit(1226882925.714:136):
avc: denied { read } for pid=2512 comm="httpd" name="file1" dev=dm-0
ino=284133 scontext=unconfined_u:system_r:httpd_t:s0
tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file

type=SYSCALL msg=audit(1226882925.714:136): arch=40000003 syscall=5
success=yes exit=11 a0=b962a1e8 a1=8000 a2=0 a3=8000 items=0 ppid=2511
pid=2512 auid=502 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48
fsgid=48 tty=(none) ses=4 comm="httpd" exe="/usr/sbin/httpd"
subj=unconfined_u:system_r:httpd_t:s0 key=(null)


In this case, although an AVC denial was logged, access was not denied, as shown by
success=yes in the SYSCALL message.

10.3.20.5 Searching for and Viewing Denials

This section assumes the setroubleshoot, setroubleshoot-server, dbus and audit packages
are installed, and that the auditd, rsyslogd, and setroubleshootd daemons are running. A
number of utilities are available for searching for and viewing SELinux AVC messages, such
as ausearch, aureport, and sealert.

10.3.20.6 ausearch

The audit package provides the ausearch utility that can query the audit daemon
logs for events based on different search criteria. The ausearch utility accesses
/var/log/audit/audit.log, and as such, must be run as the root user.

Example: using ausearch To find all the denials, use:
# ausearch -m avc

To search for SELinux AVC messages for a particular service, use the -c comm-name option,
where comm-name is the executable’s name, for example, httpd for the Apache HTTP Server:
# ausearch -m avc -c httpd

With each ausearch command, it is advised to use either the --interpret (-i) option for
easier readability, or the --raw (-r) option for script processing. See the ausearch(8) manual
page for further ausearch options.
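
On a busy system the audit log can be large, so it is often useful to restrict the search to recent
events with the -ts (start time) option, which also accepts keywords such as recent, today, or
yesterday; a sketch combining both filters:
# ausearch -m avc -c httpd -ts recent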

10.3.20.7 aureport

The audit package provides the aureport utility, which produces summary reports of the
audit system logs. The aureport utility accesses /var/log/audit/audit.log, and as such,
must be run as the root user. To view a list of SELinux denial messages and how often each
one occurred, run the aureport -a command. The following is example output that includes
two denials:
# aureport -a

AVC Report
========================================================
# date time comm subj syscall class permission obj event
========================================================
1. 05/01/2009 21:41:39 httpd unconfined_u:system_r:httpd_t:s0 195 file getattr system_u:object_r:samba_share_t:s0 denied 2
2. 05/03/2009 22:00:25 vsftpd unconfined_u:system_r:ftpd_t:s0 5 file read unconfined_u:object_r:cifs_t:s0 denied 4


10.3.20.8 sealert

The setroubleshoot-server package provides the sealert utility, which reads denial messages
translated by setroubleshoot-server. Denials are assigned IDs, as seen in /var/log/messages.
The following is an example denial from messages:
setroubleshoot: SELinux is preventing /usr/sbin/httpd from name_bind access on the tcp_socket.
For complete SELinux messages, run sealert -l 8c123656-5dda-4e5d-8791-9e3bd03786b7

In this example, the denial ID is 8c123656-5dda-4e5d-8791-9e3bd03786b7. The -l option
takes an ID as an argument. Running the sealert -l 8c123656-5dda-4e5d-8791-9e3bd03786b7
command presents a detailed analysis of why SELinux denied access, and a possible solution
for allowing access.
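
The sealert utility can also scan an entire log file and report on every denial it finds, rather
than a single ID; a minimal sketch, run as root:
# sealert -a /var/log/audit/audit.log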

Chapter 11

Storage Administration
11/2 classes

Chapter Goals
1. List, create, delete partitions on MBR and GPT disks.
2. Create and remove physical volumes, assign physical volumes
to volume groups, and create and delete logical volumes.
3. Configure systems to mount file systems at boot by Universally
Unique ID (UUID) or label.
4. Add new swap to a system non-destructively.
5. Add new partitions and logical volumes to a system
non-destructively.

11.1 Partitions and File Systems


11.1.1 Hard Disk Basic Concepts
Hard disks perform a very simple function — they store data and reliably retrieve it on
command. To store data on a disk drive, it is necessary to format the disk drive first.
Formatting (usually known as “making a file system”) writes information to the drive,
creating order out of the empty space in an unformatted drive.
As shown in Figure 11.1, the order imposed by a file system involves some trade-offs:

• A small percentage of the drive's available space is used to store file system-related
data and can be considered as overhead.
• A file system splits the remaining space into small, consistently-sized segments. For
Linux, these segments are known as blocks.


(a) An unused disk drive. (b) Disk drive with a file system. (c) Disk drive with a different file system. (d) Disk drive with data written to it.

Figure 11.1: Hard disk and file system.

Note that there is no single, universal file system. A disk drive can have one of many different
file systems written on it. Different file systems tend to be incompatible; that is, an operating
system that supports one file system (or a handful of related file system types) might not
support another. However, for example, Red Hat Enterprise Linux supports a wide variety
of file systems (including many commonly used by other operating systems), making data
interchange between different file systems easy.

11.1.2 Partitions: Turning One Drive Into Many


Disk drives can be divided into partitions. Each partition can be accessed as if it was a
separate disk. This is done through the addition of a partition table. There are several reasons
for allocating disk space into separate disk partitions, for example:

• Logical separation of the operating system data from the user data.
• Ability to use different file systems.
• Ability to run multiple operating systems on one machine.

There are currently two partitioning layout standards for physical hard disks: Master Boot
Record (MBR) (Section 11.2.1) and GUID Partition Table (GPT) (Section 11.2.2). MBR is
an older method of disk partitioning used with BIOS-based computers. GPT is a newer
partitioning layout that is a part of the Unified Extensible Firmware Interface (UEFI).

The partition table is divided into four sections or four primary partitions. A primary partition
is a partition on a hard drive that can contain only one logical drive (or section). Each section
can hold the information necessary to define a single partition, meaning that the partition
table can define no more than four partitions.

Table 11.1 shows a list of some of the commonly used partition types and hexadecimal
numbers used to represent them.


Partition Type          Value    Partition Type        Value
Empty                   0        Novell Netware 386    65
DOS 12-bit FAT          1        PIC/IX                75
XENIX root              2        Old MINIX             80
XENIX usr               3        Linux/MINIX           81
DOS 16-bit <=32M        4        Linux swap            82
Extended                5        Linux native          83
DOS 16-bit >=32M        6        Linux extended        85
OS/2 HPFS               7        Amoeba                93
AIX                     8        Amoeba BBT            94
AIX bootable            9        BSD/386               a5
OS/2 Boot Manager       0a       OpenBSD               a6
Win95 FAT32             0b       NEXTSTEP              a7
Win95 FAT32 (LBA)       0c       BSDI fs               b7
Win95 FAT16 (LBA)       0e       BSDI swap             b8
Win95 Extended (LBA)    0f       Syrinx                c7
Venix 80286             40       CP/M                  db
Novell                  51       DOS access            e1
PReP Boot               41       DOS R/O               e3
GNU HURD                63       DOS secondary         f2
Novell Netware 286      64       BBT                   ff

Table 11.1: Commonly used partition types and hexadecimal numbers used to represent
them.

11.1.3 Partitions Within Partitions — an Overview of Extended Partitions


In case four partitions are insufficient for your needs, you can use extended partitions to
create additional partitions. You do this by setting the type of a partition to “Extended”.

An extended partition is like a disk drive in its own right — it has its own partition table
which points to one or more partitions (now called logical partitions, as opposed to the
four primary partitions) contained entirely within the extended partition itself. Figure 11.2


Figure 11.2: Disk drive with extended partition.

shows a disk drive with one primary partition and one extended partition containing two
logical partitions (along with some unpartitioned free space). As this figure implies, there is a
difference between primary and logical partitions — there can only be four primary partitions,
but there is no fixed limit to the number of logical partitions that can exist. However, due to
the way in which partitions are accessed in Linux, no more than 12 logical partitions should
be defined on a single disk drive.

11.2 Partitioning Schemes


11.2.1 Master Boot Record (MBR) Partitioning Scheme
Section 11.1.3 above mainly describes the Master Boot Record (MBR) disk partitioning
scheme.

A master boot record (MBR) is a special type of boot sector at the very beginning of partitioned
computer mass storage devices like fixed disks or removable drives intended for use with
IBM PC-compatible systems and beyond. The concept of MBRs was publicly introduced in
1983 with the PC DOS 2.0 operating system.

The MBR holds the information on how the logical partitions, containing file systems, are
organized on that medium. The MBR also contains executable code to function as a loader
for the installed operating system — usually by passing control over to the loader’s second
stage, or in conjunction with each partition’s volume boot record (VBR). This MBR code is
usually referred to as a boot loader.

The organization of the partition table in the MBR limits the maximum addressable storage
space of a disk to 2 TiB (2^32 × 512 bytes).

11.2.2 GUID Partition Table (GPT) Partitioning Scheme


GUID Partition Table (GPT) is a newer partitioning scheme based on using Globally Unique
Identifiers (GUID). GPT was developed to cope with limitations of the MBR partition table,
especially with the limited maximum addressable storage space of a disk. Unlike MBR, which
is unable to address storage space larger than 2 TiB (equivalent to approximately 2.2 TB), GPT
can be used with hard disks larger than this; the maximum addressable disk size is 2.2 ZiB.
In addition, GPT by default supports creating up to 128 primary partitions. This number
can be extended by allocating more space to the partition table.
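
To initialise a blank disk with a GPT label, the parted utility (Section 11.5.2) can be used
non-interactively; a sketch assuming a spare, empty disk at /dev/sdb (writing a new label
destroys any existing partition table, so never run this on a disk that holds data, and not on
the classroom machines):
# parted /dev/sdb mklabel gpt
# parted /dev/sdb print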


GPT disks use logical block addressing (LBA) and the partition layout is as follows:

• To preserve backward compatibility with MBR disks, the first sector (LBA 0) of GPT is
reserved for MBR data and it is called “protective MBR”.
• The primary GPT header begins on the second logical block (LBA 1) of the device. The
header contains the disk GUID, the location of the primary partition table, the location
of the secondary GPT header, and CRC32 checksums of itself and the primary partition
table. It also specifies the number of partition entries of the table.
• The primary GPT table includes, by default, 128 partition entries, each with an entry
size 128 bytes, its partition type GUID and unique partition GUID.
• The secondary GPT table is identical to the primary GPT table. It is used mainly as a
backup table for recovery in case the primary partition table is corrupted.
• The secondary GPT header is located on the last logical sector of the disk and it can be
used to recover GPT information in case the primary header is corrupted. It contains
the disk GUID, the location of the secondary partition table and the primary GPT
header, CRC32 checksums of itself and the secondary partition table, and the number
of possible partition entries.

A GPT can coexist with an MBR in order to provide some limited form of backward
compatibility for older systems.

11.2.3 Device and Partition Naming Scheme


Red Hat Enterprise Linux uses a naming scheme that is file-based, with file names in the
form of /dev/xxyN. Device and partition names consist of the following:

/dev/: This is the name of the directory in which all device files reside. Because partitions
reside on hard disks, and hard disks are devices, the files representing all possible
partitions reside in /dev/.
xx: The first two letters of the partition name indicate the type of device on which the partition
resides, usually sd.
y: This letter indicates which device the partition is on. For example, /dev/sda for the first
hard disk, /dev/sdb for the second, and so on.
N: The final number denotes the partition. The first four (primary or extended) partitions
are numbered 1 through 4. Logical partitions start at 5. So, for example, /dev/sda3 is
the third primary or extended partition on the first hard disk, and /dev/sdb6 is the
second logical partition on the second hard disk.

Here are some examples:

• The first floppy drive is named /dev/fd0.
• The second floppy drive is named /dev/fd1.
• The first SCSI disk (SCSI ID address-wise) is named /dev/sda.
• The second SCSI disk (address-wise) is named /dev/sdb, and so on.
• The first SCSI CD-ROM is named /dev/scd0, also known as /dev/sr0.
• The master disk on IDE primary controller is named /dev/hda.
• The slave disk on IDE primary controller is named /dev/hdb.
• The master and slave disks of the secondary controller can be called /dev/hdc and
  /dev/hdd, respectively. Newer IDE controllers can actually have two channels, effectively
  acting like two controllers.

i Accessing Partitions from Other Operating Systems

Even if Red Hat Enterprise Linux can identify and refer to all types of disk partitions, it
might not be able to read the file system and therefore access stored data on every partition
type. However, in many cases, it is possible to successfully access data on a partition
dedicated to another operating system.
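
On a running system, the lsblk utility gives a quick overview of how these device and
partition names map to the attached disks; a minimal sketch (the devices shown depend
entirely on your hardware):
$ lsblk
$ lsblk /dev/sda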

11.3 Disk Partitions and Mount Points


11.3.1 Mount Points
In Red Hat Enterprise Linux each partition is used to form part of the storage necessary to
support a single set of files and directories. This is done by associating a partition with a
directory through a process known as mounting. Mounting a partition makes its storage
available starting at the specified directory (known as a mount point).

Linux and Unix present files in a single hierarchy, usually called “the filesystem”. Individual
filesystems must be grafted onto that hierarchy in order to access them.

You make a filesystem accessible by mounting it. Mounting associates the root directory of
the filesystem you're mounting with an existing directory in the file hierarchy. A directory that
has such an association is known as a mount point.

For example, if partition /dev/sda5 is mounted on /usr/, that would mean that
all files and directories under /usr/ physically reside on /dev/sda5. So the file
/usr/share/doc/FAQ/txt/Linux-FAQ would be stored on /dev/sda5, while the file
/etc/gdm/custom.conf would not.

Continuing the example, it is also possible that one or more directories below /usr/ would be
mount points for other partitions. For instance, a partition (say, /dev/sda7) could be mounted
on /usr/local/, meaning that /usr/local/man/whatis would then reside on /dev/sda7
rather than /dev/sda5.
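
At run time, this association is created with the mount command and removed with umount;
a sketch using the hypothetical devices from the example above (both commands must be
run as root):
# mount /dev/sda7 /usr/local
# umount /usr/local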


11.3.2 Mount Points in a Machine


11.3.2.1 Viewing the /etc/fstab File

The file /etc/fstab contains descriptive information about the various file systems. fstab
is only read by programs, and not written. It is the duty of the system administrator to
properly create and maintain this file. Each filesystem is described on a separate line. Fields
on each line are separated by tabs or spaces. Lines starting with # are comments, blank lines
are ignored. The order of records in fstab is important because fsck, mount, and umount
sequentially iterate through fstab doing their thing.
Here is a sample /etc/fstab file:
#
# /etc/fstab
# Created by anaconda on Tue Jul 18 21:16:15 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root                      /      xfs   defaults  0 0
UUID=779f906b-82cf-4565-8c60-2d1fbb11d5ee  /boot  xfs   defaults  0 0
/dev/mapper/rhel-home                      /home  xfs   defaults  0 0
/dev/mapper/rhel-swap                      swap   swap  defaults  0 0

And here is another sample /etc/fstab file:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=281c6a0c-2b59-4611-83f3-06129df44ee3  /      ext4  errors=remount-ro  0 1
# /home was on /dev/sdb1 during installation
UUID=d4d10198-6876-4a4b-b4a7-97b74f9045a0  /home  ext4  defaults           0 2
# swap was on /dev/sda1 during installation
UUID=17067bdb-229d-4662-9d20-4b21f218d611  none   swap  sw                 0 0

Here is what the fields mean:

The first field (fs_spec): This field describes the block special device or remote filesystem
to be mounted.
For ordinary mounts it will hold the device node for the device to be mounted, like
/dev/cdrom or /dev/sdb7. For NFS mounts one will have <host>:<dir>, e.g.,
knuth.aeb.nl:/.
Instead of giving the device explicitly, one may indicate the filesystem that is to be
mounted by its UUID (Section 11.4) or LABEL (Section 11.5.2.2).


The second field (fs_file): This field describes the mount point for the filesystem. For
swap partitions, this field should be specified as none.
The third field (fs_vfstype): This field describes the type of the filesystem (Table 11.1).
The fourth field (fs_mntops): This field describes the mount options associated with the
filesystem.
It is formatted as a comma separated list of options. Basic file system independent
options are:

defaults : use default options: rw, suid, dev, exec, auto, nouser, and async.
auto/noauto : Specify whether the partition should be automatically mounted on boot.
You can block specific partitions from mounting at boot-up by using noauto.
exec/noexec : Specifies whether the partition can execute binaries. If you have a
scratch partition that you compile on, then this would be useful, or maybe if you
have /home on a separate file system. If you’re concerned about security, change
this to noexec.
ro/rw : ro is read-only, and rw is read-write. If you want to be able to write to a
file-system as the user and not as root, you’ll need to have rw specified.
sync/async : sync forces writing to occur immediately on execution of the command,
which is ideal for floppies and USB drives, but isn’t entirely necessary for internal
hard disks.
nouser/user : This allows the user to have mounting and unmounting privileges. An
important note is that user automatically implies noexec so if you need to execute
binaries and still mount as a user, be sure to explicitly use exec as an option.

The fifth field (fs_freq): This field is used for these filesystems by the dump command to
determine which filesystems need to be dumped. If the fifth field is not present, a
value of zero is returned and dump will assume that the filesystem does not need to be
dumped.
The sixth field (fs_passno): This field is used by the fsck program to determine the order
in which filesystem checks are done at reboot time. The root filesystem should be
specified with a fs_passno of 1, and other filesystems should have a fs_passno of 2. If
the sixth field is not present or zero, a value of zero is returned and fsck will assume
that the filesystem does not need to be checked.
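
Putting the six fields together, a single hypothetical /etc/fstab entry for a data partition
identified by UUID might look like the line below (the UUID and mount point are
placeholders, not values taken from a real system):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  xfs  defaults  0 2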

11.3.2.2 Viewing /etc/mtab File

The file /etc/mtab is a normal file that is updated by the mount program whenever file
systems are mounted or unmounted. Here is a sample /etc/mtab:
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0

... ... ...

/dev/sda1 /boot xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/mapper/rhel-home /home xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/user/42 tmpfs rw,seclabel,nosuid,nodev,relatime,size=388188k,mode=700,uid=42,gid=42 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=388188k,mode=700,uid=1000,gid=1000 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0

... ... ...

The /etc/mtab file is meant to be used to display the status of currently-mounted file systems
only. It should not be manually modified. Each line represents a file system that is currently
mounted and contains the following fields (from left to right):

• The device specification.


• The mount point.
• The file system type.
• Whether the file system is mounted read-only (ro) or read-write (rw), along with any
other mount options.
• Two unused fields with zeros in them (for compatibility with /etc/fstab).

11.3.2.3 Viewing /proc/mounts File

The /proc/mounts file is part of the proc virtual file system. As with the other files under
/proc/, the mounts “file” does not exist on any disk drive in your Red Hat Enterprise Linux
system. In fact, it is not even a file. Instead it is a representation of system status made
available (by the Linux kernel) in file form.

Using the command cat /proc/mounts, we can view the status of all mounted file systems:
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0

... ... ...

/dev/sda1 /boot xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/mapper/rhel-home /home xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/user/42 tmpfs rw,seclabel,nosuid,nodev,relatime,size=388188k,mode=700,uid=42,gid=42 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=388188k,mode=700,uid=1000,gid=1000 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0

... ... ...


As we can see from the above example, the format of /proc/mounts is very similar to that of
/etc/mtab (Section 11.3.2.2). There are a number of file systems mounted that have nothing
to do with disk drives. Among these are the /proc/ file system itself (along with two other
file systems mounted under /proc/), pseudo-ttys, and shared memory.

While the format is admittedly not very user-friendly, looking at /proc/mounts is the best
way to be 100% sure of seeing what is mounted on your Red Hat Enterprise Linux system, as
the kernel is providing this information. Other methods can, under rare circumstances, be
inaccurate.

11.3.2.4 Issuing the df Command

While using /etc/mtab (Section 11.3.2.2) or /proc/mounts (Section 11.3.2.3) lets you know
what file systems are currently mounted, it does little beyond that. Most of the time you are
more interested in one particular aspect of the file systems that are currently mounted — the
amount of free space on them.

The df command allows you to display a detailed report on the system's disk space usage.
The name df stands for "disk filesystem"; the command gives a full summary of the used and
available disk space of the file systems on a Linux system.

To use df, type the following at a shell prompt:


$ df

For each listed file system, the df command displays its name (Filesystem), size (1K-blocks
or Size), how much space is used (Used), how much space is still available (Available), the
percentage of space used (Use%), and where the file system is mounted (Mounted on). For
example:
Filesystem                  1K-blocks    Used  Available Use% Mounted on
/dev/mapper/vg_kvm-lv_root   18618236 4357360   13315112  25% /
tmpfs                          380376     288     380088   1% /dev/shm
/dev/vda1                      495844   77029     393215  17% /boot

By default, the df command shows the partition size in 1 kilobyte blocks and the amount
of used and available disk space in kilobytes. To view the information in megabytes and
gigabytes, supply the -h command-line option, which causes df to display the values in a
human-readable format. For instance:
$ df -h

Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_kvm-lv_root   18G  4.2G   13G  25% /
tmpfs                       372M  288K  372M   1% /dev/shm
/dev/vda1                   485M   76M  384M  17% /boot
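
The -T option additionally prints the type of each file system, which is convenient when
checking a single mount point; a sketch:
$ df -hT /boot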


11.4 UUID and Other Persistent Identifiers


If a storage device contains a file system, then that file system may provide one or both of the
following:

• Universally Unique Identifier (UUID),


• File system label.

These identifiers are persistent, and based on metadata written on the device by certain
applications. They may also be used to access the device using the symlinks maintained by
the operating system.

11.4.1 Using the blkid Command


The blkid command allows you to display information about available block devices. To do
so, type the following at a shell prompt as root:
# blkid

For each listed block device, the blkid command displays available attributes such as its
universally unique identifier (UUID), file system type (TYPE), or volume label (LABEL). For
example:
/dev/sda1: UUID="779f906b-82cf-4565-8c60-2d1fbb11d5ee" TYPE="xfs"
/dev/sda2: UUID="y8R3HU-EZvm-yDxZ-eEt2-InV9-EtgT-dcNLcI" TYPE="LVM2_member"
/dev/mapper/rhel-root: UUID="407b936e-26b2-4c87-a92f-f8d82cefc8a1" TYPE="xfs"
/dev/mapper/rhel-swap: UUID="52c091cc-abb7-46a7-bf43-29ee85d14192" TYPE="swap"
/dev/mapper/rhel-home: UUID="a1733c8d-8a3c-4e4e-b0d9-2f1b94526fef" TYPE="xfs"

By default, the blkid command lists all available block devices. To display information about
a particular device only, specify the device name on the command line:
# blkid device_name

For instance, to display information about /dev/sda1, type:

# blkid /dev/sda1
/dev/sda1: UUID="779f906b-82cf-4565-8c60-2d1fbb11d5ee" TYPE="xfs"

You can also use the above command with the -p and -o udev command-line options to
obtain more detailed information.
# blkid -po udev /dev/sda1

ID_FS_UUID=779f906b-82cf-4565-8c60-2d1fbb11d5ee
ID_FS_UUID_ENC=779f906b-82cf-4565-8c60-2d1fbb11d5ee
ID_FS_TYPE=xfs
ID_FS_USAGE=filesystem
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_TYPE=0x83
ID_PART_ENTRY_FLAGS=0x80
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SIZE=2097152
ID_PART_ENTRY_DISK=8:0
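
The lsblk utility provides a similar overview in tree form; with the -f option it lists the file
system type, label, and UUID of every block device, which makes it easy to cross-check the
identifiers referenced in /etc/fstab:
$ lsblk -f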

11.5 Managing Partitions and File Systems


11.5.1 fdisk
fdisk is a dialog-driven program for the creation and manipulation of partition tables and
understands GPT, MBR, Sun, SGI, and BSD partition tables. While fdisk is easier to use,
parted (Section 11.5.2) is more robust for ext4 file systems. However, resizing with parted is
not available in Red Hat Enterprise Linux 7.

On disks with a GUID Partition Table (GPT), using the parted (Section 11.5.2) utility is
recommended, as fdisk GPT support is in an experimental phase.

11.5.1.1 Procedure

1. Unmount the partition:
   # umount /dev/vdb1

2. Run fdisk disk_name. For example:
   # fdisk /dev/vdb

   Welcome to fdisk (util-linux 2.23.2).

   Changes will remain in memory only, until you decide to write
   them. Be careful before using the write command.
   Command (m for help):

3. Check the partition number you wish to delete with the p command. The partitions are
   listed under the heading “Device”. For example:
Command (m for help): p
Disk /dev/vda: 407.6 GiB, 437629485056 bytes, 854745088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x5c873cba
Partition 2 does not start on physical sector boundary.

Device     Boot    Start      End  Blocks Id System
/dev/vda1  *        2048  1026047  512000 83 Linux
/dev/vda2        1026048  1640447  307200 8e Linux LVM

4. Use the option d to delete a partition. If there is more than one, fdisk prompts for
which one to delete. For example:


L
While working in the class, DO NOT actually alter the partitions in the hard disks. Make
the changes and then abort. Do not write the changes to the partition table. These are
all live machines and changing the partition tables will cause severe instability with the
selected file system.

Command (m for help): d
Partition number (1,2, default 2): 2

Partition 2 has been deleted.

5. Use the option n to create a new partition. Follow the prompts and ensure you allow
enough space for any future resizing that is needed. It is possible to specify a set,
human-readable size instead of using sectors if this is preferred.
It is recommended to follow fdisk’s defaults as the default values (for example, the
first partition sectors) and partition sizes specified are always aligned according to the
device properties.
If you are recreating a partition in order to allow for more room on a mounted file
system, ensure you create it with the same starting disk sector as before. Otherwise the
resize operation will not work and the entire file system may be lost.
For example:
Command (m for help): n

Partition type:
   p primary (1 primary, 0 extended, 3 free)
   e extended
Select (default p): *Enter*

Using default response p.
Partition number (2-4, default 2): *Enter*
First sector (1026048-854745087, default 1026048): *Enter*
Last sector, +sectors or +size{K,M,G,T,P} (1026048-854745087, default 854745087): +500M

Created a new partition 2 of type 'Linux' and of size 500 MiB.

6. Check the partition table to ensure that the partitions are created as required using the
p option. For example:
Command (m for help): p
Disk /dev/vda: 407.6 GiB, 437629485056 bytes, 854745088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xf6e2b6cb

Device     Boot    Start      End  Blocks Id System
/dev/vda1  *        2048  1026047  512000 83 Linux
/dev/vda2        1026048  2050047  512000 8e Linux LVM

7. Write the changes with the w option when you are sure they are correct.
8. Run fsck on the partition.
# e2fsck /dev/vdb1

e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
ext4-1: 11/131072 files (0.0% non-contiguous), 27050/524128 blocks

9. Now you need to mount (Section 11.3) the partition:
   # mount /dev/vdb1
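
If the kernel has not yet picked up the modified partition table (for example, because another
partition on the same disk was still mounted when the changes were written), it can be asked
to re-read it with partprobe, and the result checked in /proc/partitions; a sketch:
# partprobe /dev/vdb
# cat /proc/partitions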

11.5.2 parted
The utility parted allows users to:

• View the existing partition table.
• Change the size of existing partitions.
• Add partitions from free space or additional hard drives.

By default, the parted package is included when installing Red Hat Enterprise Linux. To start
parted, log in as root and type the command parted /dev/sda at a shell prompt (where
/dev/sda is the device name for the drive you want to configure).
If you want to remove or resize a partition, the device on which that partition resides must
not be in use. Creating a new partition on a device which is in use — while possible — is not
recommended.
For a device to not be in use, none of the partitions on the device can be mounted, and any
swap space on the device must not be enabled.
As well, the partition table should not be modified while it is in use because the kernel may
not properly recognize the changes. If the partition table does not match the actual state of
the mounted partitions, information could be written to the wrong partition, resulting in lost
and overwritten data.
The easiest way to achieve this is to boot your system in rescue mode. When prompted to
mount the file system, select Skip.


i Tips

To select a different device without having to restart parted, use the select command
followed by the device name (for example, /dev/sda). Doing so allows you to view or
configure the partition table of a device.

Alternately, if the drive does not contain any partitions in use (system processes that use
or lock the file system from being unmounted), you can unmount them with the umount
command and turn off all the swap space on the hard drive with the swapoff command.

11.5.2.1 Viewing the Partition Table

After starting parted, use the command print to view the partition table. A table similar to
the following appears:
Model: ATA ST3160812AS (scsi)
Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      32.3kB  107MB   107MB   primary   ext3         boot
 2      107MB   105GB   105GB   primary   ext3
 3      105GB   107GB   2147MB  primary   linux-swap
 4      107GB   160GB   52.9GB  extended               root
 5      107GB   133GB   26.2GB  logical   ext3
 6      133GB   133GB   107MB   logical   ext3
 7      133GB   160GB   26.6GB  logical                lvm

The first line contains the disk type, manufacturer, model number and interface, and the
second line displays the disk label type. The remaining output below the fourth line shows
the partition table.

In the partition table, the Number column gives the partition number. For example, the
partition with number 1 corresponds to /dev/sda1. The Start and End values give the start
and end positions of the partition on the disk. Valid values for Type are metadata, free,
primary, extended, or logical. The File system column shows the file system type (Table 11.1).

If the File system column of a device shows no value, this means that its file system type is
unknown. The Flags column lists the flags set for the partition. Available flags are boot, root,
swap, hidden, raid, lvm, or lba.


11.5.2.2 Creating a Partition

Before creating a partition, boot into rescue mode (or unmount any partitions on the device
and turn off any swap space on the device).

1. Start parted, where /dev/sda is the device on which to create the partition:
# parted /dev/sda

2. View the current partition table to determine if there is enough free space:
# print

3. From the partition table, determine the start and end points of the new partition and
what partition type it should be. For example, to create a primary partition with an
ext3 file system from 1024 megabytes until 2048 megabytes on a hard drive type the
following command:
# mkpart primary ext3 1024 2048

4. Review the command and (DO NOT DO THIS IN THE CLASS) press Enter.
5. Use the print command to confirm that it is in the partition table with the correct
partition type, file system type, and size. Also note the number of the new
partition so that you can label any file systems on it. You should also view the output of
cat /proc/partitions after parted is closed to make sure the kernel recognizes the
new partition.
6. Format and label the partition. The partition still does not have a file system. To create
one use the following command:
# /usr/sbin/mkfs -t ext3 /dev/sda6

Formatting the partition permanently destroys any data that currently exists on the
partition.
7. Give the file system on the partition a label. For example, if the file system on the new
partition is /dev/sda6 and you want to label it /work, use:
# e2label /dev/sda6 /work

By default, the installation program uses the mount point of the partition as the label to
make sure the label is unique. You can use any label you want.
8. Afterwards, as root, create a mount point (e.g. /work) and edit the /etc/fstab
file to include the new partition using the partition’s UUID. To mount the partition
without rebooting, type the command:
# mount /work
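The UUID of the new partition can be read with the blkid command, and the corresponding
/etc/fstab entry would then look roughly like the following sketch (the UUID value shown
here is purely illustrative):

# blkid /dev/sda6
/dev/sda6: LABEL="/work" UUID="4a5b6c7d-1234-5678-9abc-def012345678" TYPE="ext3"

UUID=4a5b6c7d-1234-5678-9abc-def012345678  /work  ext3  defaults  1 2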


11.6 Swap Partitions


11.6.1 Adding Swap Space
Sometimes it is necessary to add more swap space after installation. For example, you may
upgrade the amount of RAM in your system from 1 GB to 2 GB, but there is only 2 GB of swap
space. It might be advantageous to increase the amount of swap space to 4 GB if you perform
memory-intense operations or run applications that require a large amount of memory.

You have two options: add a swap partition or add a swap file. It is recommended that you
add a swap partition, but that can be difficult if you do not have any free space available.

11.6.1.1 Add a Swap Partition

To add a swap partition (assuming /dev/hdb2 is the swap partition you want to add):

1. The hard drive cannot be in use (partitions cannot be mounted, and swap space cannot
be enabled). The partition table should not be modified while in use because the
kernel may not properly recognize the changes. Data could be overwritten by writing
to the wrong partition because the partition table and partitions mounted do not match.
The easiest way to achieve this is to boot your system in rescue mode.
Alternately, if the drive does not contain any partitions in use, you can unmount them
and turn off all the swap space on the hard drive with the swapoff command.
2. Create the swap partition using parted (Section 11.5.2):
a) At a shell prompt as root, type the command parted /dev/hdb, where /dev/hdb
is the device name for the hard drive with free space.
b) At the (parted) prompt, type print to view the existing partitions and the amount
of free space. The start and end values are in megabytes. Determine how much
free space is on the hard drive and how much you want to allocate for a new swap
partition.
c) At the (parted) prompt, type mkpartfs part-type linux-swap start end,
where part-type is one of primary, extended, or logical, start is the starting
point of the partition, and end is the end point of the partition.
Changes take place immediately; be careful when you type.
d) Exit parted by typing quit.
3. Now that you have created the swap partition, use the command mkswap to set up the
swap partition. At a shell prompt as root, type the following:
# mkswap /dev/hdb2

4. To enable the swap partition immediately, type the following command:


# swapon /dev/hdb2


5. To enable it at boot time, edit /etc/fstab to include:


/dev/hdb2 swap swap defaults 0 0

6. The next time the system boots, it enables the new swap partition.
7. After adding the new swap partition and enabling it, verify it is enabled by viewing the
output of the command cat /proc/swaps or free.
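For illustration only, the output of cat /proc/swaps on a machine with the new swap
partition enabled might look roughly like this (the sizes and the first device name are
hypothetical and depend entirely on your system):

# cat /proc/swaps
Filename      Type        Size     Used  Priority
/dev/dm-1     partition   2097148  0     -1
/dev/hdb2     partition   1048572  0     -2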

11.6.1.2 Add a Swap File

To add a swap file:

1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine
the number of 1024-byte blocks. For example, a 64 MB swap file needs 65536 blocks.
2. At a shell prompt as root, type the following command with count being equal to the
desired number of blocks:
dd if=/dev/zero of=/swapfile bs=1024 count=65536

3. Set up the swap file with the command:
mkswap /swapfile

4. To enable the swap file immediately but not automatically at boot time:
swapon /swapfile

5. To enable it at boot time, edit /etc/fstab to include:
/swapfile swap swap defaults 0 0

6. The next time the system boots, it enables the new swap file.
7. After adding the new swap file and enabling it, verify it is enabled by viewing the
output of the command cat /proc/swaps or free.

11.7 LVM
11.7.1 LVM Basics
Volume management creates a layer of abstraction over physical storage, allowing you to
create logical storage volumes. This provides much greater flexibility in a number of ways
than using physical storage directly. With a logical volume, you are not restricted to physical
disk sizes. In addition, the hardware storage configuration is hidden from the software so it
can be resized and moved without stopping applications or unmounting file systems. This
can reduce operational costs.
Logical volumes provide the following advantages over using physical storage directly:


Figure 11.3: LVM logical volume components.

Flexible capacity: When using logical volumes, file systems can extend across multiple disks,
since you can aggregate disks and partitions into a single logical volume.
Resizeable storage pools: You can extend logical volumes or reduce logical volumes in
size with simple software commands, without reformatting and repartitioning the
underlying disk devices.
Online data relocation: To deploy newer, faster, or more resilient storage subsystems, you
can move data while your system is active. Data can be rearranged on disks while the
disks are in use. For example, you can empty a hot-swappable disk before removing it.
Convenient device naming: Logical storage volumes can be managed in user-defined and
custom named groups.
Disk striping: You can create a logical volume that stripes data across two or more disks.
This can dramatically increase throughput.
Mirroring volumes: Logical volumes provide a convenient way to configure a mirror for
your data.
Volume Snapshots: Using logical volumes, you can take device snapshots for consistent
backups or to test the effect of changes without affecting the real data.

The underlying physical storage unit of an LVM logical volume is a block device such as a
partition or whole disk. This device is initialized as an LVM physical volume (PV).

To create an LVM logical volume, the physical volumes are combined into a volume group
(VG). This creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated.
This process is analogous to the way in which disks are divided into partitions. A logical
volume is used by file systems and applications (such as databases).

Figure 11.3 shows the components of a simple LVM logical volume.


11.7.2 LVM Components


11.7.2.1 Physical Volumes

The underlying physical storage unit of an LVM logical volume is a block device such as a
partition or whole disk. To use the device for an LVM logical volume, the device must be
initialized as a physical volume (PV). Initializing a block device as a physical volume places a
label near the start of the device.

An LVM label provides correct identification and device ordering for a physical device, since
devices can come up in any order when the system is booted. An LVM label remains persistent
across reboots and throughout a cluster.

The LVM label identifies the device as an LVM physical volume. It contains a random unique
identifier (the UUID) for the physical volume. It also stores the size of the block device in
bytes, and it records where the LVM metadata will be stored on the device.
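As a quick illustration (assuming a partition such as /dev/sdb1 has already been initialized
with pvcreate, as shown later in Section 11.7.3.1), the label and metadata recorded on a
physical volume can be inspected with either of these commands:

# pvdisplay /dev/sdb1
# pvs /dev/sdb1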

11.7.2.2 Multiple Partitions on a Disk

LVM allows you to create physical volumes out of disk partitions. Red Hat recommends that
you create a single partition that covers the whole disk to label as an LVM physical volume.

Although it is not recommended, there may be specific circumstances when you will need to
divide a disk into separate LVM physical volumes. For example, on a system with few disks it
may be necessary to move data around partitions when you are migrating an existing system
to LVM volumes. Additionally, if you have a very large disk and want to have more than one
volume group for administrative purposes then it is necessary to partition the disk.

11.7.2.3 Volume Groups

Physical volumes are combined into volume groups (VGs). This creates a pool of disk space
out of which logical volumes can be allocated.

Within a volume group, the disk space available for allocation is divided into units of a
fixed size called extents. An extent is the smallest unit of space that can be allocated. Within
a physical volume, extents are referred to as physical extents.

A logical volume is allocated into logical extents of the same size as the physical extents. The
extent size is thus the same for all logical volumes in the volume group. The volume group
maps the logical extents to physical extents.

11.7.2.4 LVM Logical Volumes

In LVM, a volume group is divided up into logical volumes. The following describes the
major types of logical volumes.

Linear Volumes: A linear volume aggregates space from one or more physical volumes into
one logical volume. For example, if you have two 60GB disks, you can create a 120GB
logical volume. The physical storage is concatenated. The physical volumes that make
up a logical volume do not have to be the same size.
Striped Logical Volumes: When you write data to an LVM logical volume, the file system
lays the data out across the underlying physical volumes. You can control the way the
data is written to the physical volumes by creating a striped logical volume. For large
sequential reads and writes, this can improve the efficiency of the data I/O.
Striping enhances performance by writing data to a predetermined number of physical
volumes in round-robin fashion. With striping, I/O can be done in parallel. In some
situations, this can result in near-linear performance gain for each additional physical
volume in the stripe.

11.7.2.5 Extents

LVM breaks up each physical volume into extents. A logical volume consists of a set of extents.
Each extent is either wholly unused, or wholly in use by a particular logical volume: extents
cannot be subdivided. Extents are the elementary blocks of LVM allocation.

The physical extent is the allocation unit used on physical volumes. The default
physical extent size is 4MB but can range from 8kB up to 16GB (using powers of 2). Logical
volumes are made up from logical extents having the same size as the physical extents.
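As a minimal check (assuming the volume group new_vol_group that is created later in this
chapter), the extent size of a volume group can be displayed with vgdisplay:

# vgdisplay new_vol_group | grep "PE Size"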

11.7.3 LVM Administration


11.7.3.1 Creating LVM Logical Volumes

This example procedure creates an LVM logical volume called new_logical_volume that
consists of the disks at /dev/sda1, /dev/sdb1, and /dev/sdc1.

1. To use disks in a volume group, label them as LVM physical volumes with the pvcreate
command. Note that the pvcreate command destroys any data on the corresponding
hard disk partitions.
# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
Physical volume "/dev/sda1" successfully created
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created

2. Create a volume group that consists of the LVM physical volumes you have created.
The following command creates the volume group new_vol_group.
# vgcreate new_vol_group /dev/sda1 /dev/sdb1 /dev/sdc1
Volume group "new_vol_group" successfully created

3. You can use the vgs command to display the attributes of the new volume group.
# vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  new_vol_group   3   0   0 wz--n- 51.45G 51.45G

4. Create the logical volume from the volume group you have created. The following
command creates the logical volume new_logical_volume from the volume group
new_vol_group. This example creates a logical volume that uses 2 gigabytes of the
volume group.
# lvcreate -L 2G -n new_logical_volume new_vol_group
Logical volume "new_logical_volume" created

5. Create a file system on the logical volume. The following command creates a GFS2 file
system on the logical volume.
# mkfs.gfs2 -p lock_nolock -j 1 /dev/new_vol_group/new_logical_volume
This will destroy any data on /dev/new_vol_group/new_logical_volume.

Are you sure you want to proceed? [y/n] y

Device:            /dev/new_vol_group/new_logical_volume
Blocksize:         4096
Filesystem Size:   491460
Journals:          1
Resource Groups:   8
Locking Protocol:  lock_nolock
Lock Table:

Syncing...
All Done

6. The following commands mount the logical volume and report the file system disk
space usage.
# mount /dev/new_vol_group/new_logical_volume /mnt
# df
Filesystem                              1K-blocks  Used  Available  Use%  Mounted on
/dev/new_vol_group/new_logical_volume     1965840    20    1965820    1%  /mnt


11.7.3.2 Resizing LVM Logical Volumes

This example procedure shows how you can remove a disk from an existing logical volume,
either to replace the disk or to use the disk as part of a different volume. In order to remove a
disk, you must first move the extents on the LVM physical volume to a different disk or set of
disks.

1. In this example, the logical volume is distributed across four physical volumes in the
volume group myvg.
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdd1  myvg lvm2 a-   17.15G  2.15G 15.00G

2. This example moves the extents off of /dev/sdb1 so that it can be removed from the
volume group. If there are enough free extents on the other physical volumes in the
volume group, you can execute the pvmove command on the device you want to remove
with no other options and the extents will be distributed to the other devices.
# pvmove /dev/sdb1
  /dev/sdb1: Moved: 2.0%
  ...
  /dev/sdb1: Moved: 79.2%
  ...
  /dev/sdb1: Moved: 100.0%

3. After the pvmove command has finished executing, the distribution of extents is as
follows:
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G 10.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 17.15G      0
  /dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdd1  myvg lvm2 a-   17.15G  2.15G 15.00G

4. Use the vgreduce command to remove the physical volume /dev/sdb1 from the volume
group.
# vgreduce myvg /dev/sdb1
  Removed "/dev/sdb1" from volume group "myvg"
# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G
  /dev/sdb1       lvm2 --   17.15G 17.15G
  /dev/sdc1  myvg lvm2 a-   17.15G 12.15G
  /dev/sdd1  myvg lvm2 a-   17.15G  2.15G

The disk can now be physically removed or allocated to other users.

Chapter 12

Linux File Systems


1 class

Chapter Goals

1. Create, mount, unmount, and use vfat, ext4, and xfs file systems.
2. Mount and unmount CIFS and NFS network file systems.
3. Extend existing logical volumes.

12.1 Create, Mount, Unmount, and Use vfat, ext4, and xfs File
Systems
12.1.1 VFAT File System
VFAT is an extension of the FAT file system and was introduced with Windows 95. VFAT
maintains backward compatibility with FAT but relaxes the rules. For example, VFAT
filenames can contain up to 255 characters, spaces, and multiple periods. Although vfat
preserves the case of filenames, it’s not considered case sensitive.

To create a VFAT file system, type:


# mkfs.vfat /dev/sdb1
mkfs.fat 3.0.20 (12 Jun 2013)
unable to get drive geometry, using default 255/63

To mount this file system, type:


# mount /dev/sdb1 /mnt

To mount it permanently, edit the /etc/fstab file and add the following line:
/dev/sdb1 /mnt vfat defaults 1 2

12.1.2 The ext4 File System


The ext4 file system is a scalable extension of the ext3 file system. With Red Hat Enterprise
Linux 7, it can support a maximum individual file size of 16 terabytes, and file systems to
a maximum of 50 terabytes, unlike Red Hat Enterprise Linux 6 which only supported file
systems up to 16 terabytes. It also supports an unlimited number of sub-directories (the ext3
file system only supports up to 32,000), though once the link count exceeds 65,000 it resets to
1 and is no longer increased.

To create an ext4 file system, use the mkfs.ext4 command. In general, the default options are
optimal for most usage scenarios:
# mkfs.ext4 /dev/device

Below is a sample output of this command, which displays the resulting file system geometry
and features:
# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
245280 inodes, 979456 blocks
48972 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1006632960
30 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

An ext4 file system can be mounted with no extra options. For example:

# mount /dev/sdb1 /mnt

To mount it permanently, edit the /etc/fstab file and add the following line:
/dev/sdb1 /mnt ext4 defaults 1 2

12.1.3 The XFS File System


XFS is a highly scalable, high-performance file system which was originally designed at
Silicon Graphics, Inc. XFS is the default file system for Red Hat Enterprise Linux 7.

Below is a sample output of the mkfs.xfs command:


# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1           ...                        ... blks
         =                    sectsz=512   attr=2
data     =                    bsize=4096   blocks=13109032, imaxpct=25
         =                    sunit=0      swidth=0 blks
naming   =version 2           bsize=4096   ascii-ci=0
log      =internal log        bsize=4096   blocks=6400, version=2
         =                    sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                extsz=4096   blocks=0, rtextents=0

An XFS file system can be mounted with no extra options. For example:
# mount /dev/sdb1 /mnt

To mount it permanently, edit the /etc/fstab file and add the following line:
/dev/sdb1 /mnt xfs defaults 1 2

12.2 Mount and Unmount CIFS File Systems


12.2.1 Common Internet File System (CIFS)
The Common Internet File System (CIFS) is the standard way that computer users share files
across corporate intranets and the Internet. An enhanced version of the Microsoft open,
cross-platform Server Message Block (SMB) protocol, CIFS is a native file-sharing protocol in
Windows 2000.

i Are CIFS and SMB the Same?

The Server Message Block (SMB) Protocol is a network file sharing protocol, and as
implemented in Microsoft Windows is known as Microsoft SMB Protocol. The set of
message packets that defines a particular version of the protocol is called a dialect. The
Common Internet File System (CIFS) Protocol is a dialect of SMB.


12.2.2 Mount CIFS Share


Mount CIFS with the default local filesystem permissions:
# mkdir /mnt/cifs
# mount -t cifs //server-name/share-name /mnt/cifs -o username=shareuser,password=sharepassword,domain=cseiac
# mount -t cifs //192.168.101.100/sales /mnt/cifs -o username=shareuser,password=sharepassword,domain=cseiac

Where,

username=shareuser : specifies the CIFS user name.


password=sharepassword : specifies the CIFS password. If this option is not given then the
environment variable PASSWD is used. If the password is not specified directly or
indirectly via an argument to mount, mount will prompt for a password, unless the
guest option is specified.
domain=cseiac : sets the domain (workgroup) of the user.
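When finished with the share, it is detached like any other file system, using the mount
point /mnt/cifs created above:

# umount /mnt/cifs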

12.3 Mount and Unmount NFS Network File Systems


The Network File System (NFS) is a way of mounting Linux disks/directories over a network.
An NFS server can export one or more directories that can then be mounted on a remote
Linux machine.

The mount command mounts NFS shares on the client side. Its format is as follows:
# mount -t nfs -o options server:/remote/export /local/directory

This command uses the following variables:

options : A comma-delimited list of mount options.


server : The hostname, IP address, or fully qualified domain name of the server exporting
the file system you wish to mount.
/remote/export : The file system or directory being exported from the server, that is, the
directory you wish to mount.
/local/directory : The client location where /remote/export is mounted.
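Putting these together, a hypothetical example (the server name, export path, mount point,
and options below are illustrative assumptions, not taken from this manual) could be:

# mkdir -p /mnt/nfsdata
# mount -t nfs -o ro,nosuid nfsserver.example.com:/export/data /mnt/nfsdata
# umount /mnt/nfsdata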

An alternate way to mount an NFS share from another machine is to add a line to the
/etc/fstab file. The line must state the hostname of the NFS server, the directory on the
server being exported, and the directory on the local machine where the NFS share is to be
mounted. You must be root to modify the /etc/fstab file.

The general syntax for the line in /etc/fstab is as follows:


server:/usr/local/pub    /pub    nfs    defaults    0 0


The mount point /pub must exist on the client machine before this command can be executed.
After adding this line to /etc/fstab on the client system, use the command mount /pub, and
the mount point /pub is mounted from the server.
A valid /etc/fstab entry to mount an NFS export should contain the following information:
server:/remote/export    /local/directory    nfs    options    0 0

The variables server, /remote/export, /local/directory, and options are the same ones
used when manually mounting an NFS share.
The mount point /local/directory must exist on the client before /etc/fstab is read. For
more information about /etc/fstab, refer to Section 11.3.2.1.

12.4 Managing Logical Volumes


12.4.1 Creating Linear Logical Volumes
To create a logical volume, use the lvcreate command. If you do not specify a name for the
logical volume, the default name lvol# is used where # is the internal number of the logical
volume. When you create a logical volume, the logical volume is carved from a volume group
using the free extents on the physical volumes that make up the volume group. Normally
logical volumes use up any space available on the underlying physical volumes on a next-free
basis. Modifying the logical volume frees and reallocates space in the physical volumes.
The following command creates a logical volume 10 gigabytes in size in the volume group
vg1.
# lvcreate -L 10G vg1

The default unit for logical volume size is megabytes. The following command creates a
1500 MB linear logical volume named testlv in the volume group testvg, creating the block
device /dev/testvg/testlv.
# lvcreate -L 1500 -n testlv testvg

The following command creates a 50 gigabyte logical volume named gfslv from the free
extents in volume group vg0.
# lvcreate -L 50G -n gfslv vg0

You can use the -l argument of the lvcreate command to specify the size of the logical
volume in extents. You can also use this argument to specify the percentage of the volume
group to use for the logical volume. The following command creates a logical volume called
mylv that uses 60% of the total space in volume group testvg.
# lvcreate -l 60%VG -n mylv testvg

You can also use the -l argument of the lvcreate command to specify the percentage of
the remaining free space in a volume group as the size of the logical volume. The following
command creates a logical volume called yourlv that uses all of the unallocated space in the
volume group testvg.


# lvcreate -l 100%FREE -n yourlv testvg

You can use the -l argument of the lvcreate command to create a logical volume that uses the
entire volume group. Another way to create a logical volume that uses the entire volume
group is to use the vgdisplay command to find the "Total PE" size and to use those results
as input to the lvcreate command.

The following commands create a logical volume called mylv that fills the volume group
named testvg.
# vgdisplay testvg | grep "Total PE"
Total PE 10230
# lvcreate -l 10230 testvg -n mylv

The underlying physical volumes used to create a logical volume can be important if the
physical volume needs to be removed, so you may need to consider this possibility when you
create the logical volume. For information on removing a physical volume from a volume
group, see Section 11.7.3.2.

To create a logical volume to be allocated from a specific physical volume in the volume
group, specify the physical volume or volumes at the end of the lvcreate command line.
The following command creates a logical volume named testlv in volume group testvg
allocated from the physical volume /dev/sdg1.
# lvcreate -L 1500 -n testlv testvg /dev/sdg1

You can specify which extents of a physical volume are to be used for a logical volume. The
following example creates a linear logical volume out of extents 0 through 24 of physical
volume /dev/sda1 and extents 50 through 124 of physical volume /dev/sdb1 in volume
group testvg.
# lvcreate -l 100 -n testlv testvg /dev/sda1:0-24 /dev/sdb1:50-124

The following example creates a linear logical volume out of extents 0 through 25 of physical
volume /dev/sda1 and then continues laying out the logical volume at extent 100.
# lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-

12.4.2 Growing Logical Volumes


To increase the size of a logical volume, use the lvextend command.

When you extend the logical volume, you can indicate how much you want to extend the
volume, or how large you want it to be after you extend it.

The following command extends the logical volume /dev/myvg/homevol to 12 gigabytes.


# lvextend -L12G /dev/myvg/homevol
lvextend -- extending logical volume "/dev/myvg/homevol" to 12 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended


The following command adds another gigabyte to the logical volume /dev/myvg/homevol.
# lvextend -L+1G /dev/myvg/homevol
lvextend -- extending logical volume "/dev/myvg/homevol" to 13 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended

As with the lvcreate command, you can use the -l argument of the lvextend command
to specify the number of extents by which to increase the size of the logical volume. You
can also use this argument to specify a percentage of the volume group, or a percentage of
the remaining free space in the volume group. The following command extends the logical
volume called testlv to fill all of the unallocated space in the volume group myvg.
# lvextend -l +100%FREE /dev/myvg/testlv
  Extending logical volume testlv to 68.59 GB
  Logical volume testlv successfully resized

After you have extended the logical volume it is necessary to increase the file system size to
match.

By default, most file system resizing tools will increase the size of the file system to be the
size of the underlying logical volume so you do not need to worry about specifying the same
size for each of the two commands.
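As an illustrative sketch (which tool applies depends on the file system actually on the
volume; the mount point /home below is an assumption, not taken from this manual):
resize2fs grows an ext3/ext4 file system to fill the underlying volume, while xfs_growfs
takes the mount point and requires the XFS file system to be mounted.

# resize2fs /dev/myvg/homevol
# xfs_growfs /home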

12.4.3 Shrinking Logical Volumes


To reduce the size of a logical volume, first unmount the file system. You can then use the
lvreduce command to shrink the volume. After shrinking the volume, remount the file
system. It is important to reduce the size of the file system (or whatever is residing in the
volume) before shrinking the volume itself, otherwise you risk losing data.

Shrinking a logical volume frees some of the volume group to be allocated to other logical
volumes in the volume group.

The following example reduces the size of logical volume lvol1 in volume group vg00 by 3
logical extents.
# lvreduce -l -3 vg00/lvol1
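For a complete shrink of an ext4 file system and its logical volume, a typical sequence (a
sketch only; the mount point /data and the 8 gigabyte target size are assumptions, not
taken from this manual) looks like this:

# umount /data
# e2fsck -f /dev/vg00/lvol1
# resize2fs /dev/vg00/lvol1 8G
# lvreduce -L 8G vg00/lvol1
# mount /dev/vg00/lvol1 /data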

Chapter 13

System Issues
1 1/2 classes

Chapter Goals
1. Access remote systems using ssh.
2. Configure key-based authentication for ssh.
3. Securely transfer files between systems.
4. Log in and switch users in multiuser targets.
5. Boot, reboot, and shut down a system normally.
6. Boot systems into different targets manually.
7. Configure systems to boot into a specific target automatically.
8. Modify the system bootloader.
9. Interrupt the boot process in order to gain access to a system.

13.1 System Access


13.1.1 Access Remote Systems Using ssh
The ssh command is a secure replacement for the rlogin, rsh, and telnet commands. It
allows you to log in to a remote machine as well as execute commands on a remote machine.

Logging in to a remote machine with ssh is similar to using telnet. To log in to a remote
machine named penguin.example.net, type the following command at a shell prompt:
$ ssh penguin.example.net

The first time you ssh to a remote machine, you will see a message similar to the following:


i ssh Server for Class

The name penguin.example.net has been used here as an example only. In class practice,
you will use a different server and corresponding username supplied by your instructor.

The authenticity of host 'penguin.example.net' can't be established.
DSA key fingerprint is 94:68:3a:3a:bc:f3:9a:9b:01:5d:b3:07:38:e2:11:0c.
Are you sure you want to continue connecting (yes/no)?

Type yes to continue. This will add the server to your list of known hosts
(~/.ssh/known_hosts) as seen in the following message:
Warning: Permanently added 'penguin.example.net' (RSA) to the list of known hosts.

Next, you will see a prompt asking for your password for the remote machine. After entering
your password, you will be at a shell prompt for the remote machine. If you do not specify a
username, the username that you are logged in as on the local client machine is passed to the
remote machine. If you want to specify a different username, use the following command:
$ ssh username@penguin.example.net

You can also use the syntax:


$ ssh -l username penguin.example.net

The ssh command can be used to execute a command on the remote machine without logging
in to a shell prompt. The syntax is ssh <hostname> <command>. For example, if you want
to execute the command ls /usr/share/doc on the remote machine penguin.example.net,
type the following command at a shell prompt:
$ ssh penguin.example.net ls /usr/share/doc

After you enter the correct password, the contents of the remote directory /usr/share/doc
will be displayed, and you will return to your local shell prompt.

13.1.2 Using Key-Based Authentication


To improve the system security even further, you can enforce key-based authentication by
disabling the standard password authentication.
To set up key-based authentication, you need two virtual/physical servers that we will call
server1 and server2. We also assume that there are users user01 in server1, and user02
in server2.

1. On server1, log in as (or switch to) the user user01:


$ su - user01


2. Generate a private/public pair for key-based authentication (here rsa key with 2048 bits
and no passphrase):
[user01@server1 ~]$ ssh-keygen -b 2048 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user01/.ssh/id_rsa): (press Enter)
Created directory '/home/user01/.ssh'.
Enter passphrase (empty for no passphrase): (press Enter)
Enter same passphrase again: (press Enter)
Your identification has been saved in /home/user01/.ssh/id_rsa.
Your public key has been saved in /home/user01/.ssh/id_rsa.pub.
The key fingerprint is:
6d:ac:45:32:34:ac:da:4a:3b:4e:f2:83:85:84:5f:d8 user01@server1.example.com
The key's randomart image is:
+--[ RSA 2048]----+
| .o |
| ... |
| . o .o . |
|. o E . * |
| o o o S = |
| o + . + |
| .+.o . |
| .+= |
| .oo |
+-----------------+

3. Still on server1, copy the public key to server2.


[user01@server1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub user02@server2.example.com
The authenticity of host 'server2.example.com (192.168.1.49)' can't be established.
ECDSA key fingerprint is 67:79:67:88:7f:da:31:49:7b:dd:ed:40:af:ae:b6:ae.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
user02@server2.example.com's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'user02@server2.example.com'"
and check to make sure that only the key(s) you wanted were added.

4. On server2, edit the /etc/ssh/sshd_config file, set the following options, and then
reload the sshd service (see the note after this procedure):
PasswordAuthentication no
PubkeyAuthentication yes


L
Never share your private key with anybody. It is for your personal use only.

5. On server1 as user01, connect to server2; you should be logged in as user02 without
being asked for a password:

[user01@server1 ~]$ ssh user02@server2.example.com
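A note on step 4: after editing /etc/ssh/sshd_config, the SSH daemon must re-read its
configuration before the new settings take effect. On a systemd-based system such as
RHEL 7, this is typically done as root on server2 with:

# systemctl reload sshd

(systemctl restart sshd also works; reload avoids dropping existing connections.)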

13.2 Securely Transfer Files Between Systems


scp can be used to transfer files between machines over a secure, encrypted connection. In its
design, it is very similar to rcp.

To transfer a local file to a remote system, use a command in the following form:
scp localfile username@hostname:remotefile

For example, if you want to transfer taglist.vim to a remote machine named
penguin.example.com, type the following at a shell prompt:
$ scp taglist.vim john@penguin.example.com:.vim/plugin/taglist.vim
john@penguin.example.com's password:
taglist.vim                 100%  144KB 144.5KB/s   00:00

Multiple files can be specified at once. To transfer the contents of .vim/plugin/ to the same
directory on the remote machine penguin.example.com, type the following command:
$ scp .vim/plugin/* john@penguin.example.com:.vim/plugin/
john@penguin.example.com's password:
closetag.vim                100%   13KB  12.6KB/s   00:00
snippetsEmu.vim             100%   33KB  33.1KB/s   00:00
taglist.vim                 100%  144KB 144.5KB/s   00:00

To transfer a remote file to the local system, use the following syntax:
scp username@hostname:remotefile localfile

For instance, to download the .vimrc configuration file from the remote machine, type:
$ scp john@penguin.example.com:.vimrc .vimrc
john@penguin.example.com's password:
.vimrc                      100% 2233     2.2KB/s   00:00

13.3 Log in and Switch Users in Multiuser Targets


Linux has three types of accounts: system, user and root. A user logs in to his Linux user
account by typing his username and password. System processes, such as mail, also log in to
Linux when they start. The root account is a special user account with unrestricted privileges
to perform any operation. Provided that you know the password to another account and
that the account permits user logins, you can switch users in Linux with the su command,
commonly referred to as the “substitute user”, “super user” or “switch user” command.
$ whoami
mmasroorali
$ su - anotheruser
Password:
$ whoami
anotheruser
$ exit
logout
$ whoami
mmasroorali

To change to a different user and create a session as if the other user had logged in from a
command prompt, type su - followed by a space and the target user’s username. Type the
target user’s password when prompted. If you omit the hyphen, you log in to the other user’s
account with your environment variables, which might cause different results from what the
user would experience when logging in to the system. Type exit and press Enter to log out
of the account and return to the previous user session.

When you type su - without a username and press Enter, the system assumes you want to
log in as the root user and prompts you for the root user password.
$ whoami
mmasroorali
$ su -
Password:
Last login: Thu Aug 17 20:24:01 +06 2017 on pts/0
# whoami
root
# exit
logout
$ whoami
mmasroorali

Only a few experienced and trusted users can typically log in as the root user on most Linux
systems, because the root user can read, modify and delete any file or setting on the server.

13.4 System Boot


13.4.1 Boot, Reboot, and Shut Down a System Normally
To reboot the system, choose one command among these:


# reboot
# systemctl reboot
# shutdown -r now
# init 6
# telinit 6

To shut down the system, choose one command among these:


# halt
# systemctl halt
# shutdown -h now
# init 0
# telinit 0

To switch off the system, choose one command among these:


# poweroff
# systemctl poweroff

13.4.1.1 shutdown Command

shutdown arranges for the system to be brought down in a safe way. All logged-in users are
notified that the system is going down and, within the last five minutes of TIME, new logins
are prevented.
The syntax is:
shutdown [ OPTION ] [TIME] [ MESSAGE ]

To shut down a machine, call the shutdown command like this:


# shutdown -h now

The -h option is for halt, which means to stop. The second parameter is the time parameter;
now means shut down the system right away.
The time parameter can be specified in minutes or hours also. For example:
# shutdown -h +5 "Server is going down for upgrade. Please save your work."
Shutdown scheduled for Mon 2017-08-21 21:59:57 +06, use 'shutdown -c' to cancel.

The above command broadcasts the message to all other logged-in users and gives them 5
minutes before the system shuts down.
Broadcast message from root@localhost.localdomain (Mon 2017-08-21 21:54:57 +06):

Server is going down for upgrade. Please save your work.

The system is going down for power-off at Mon 2017-08-21 21:59:57 +06!


The shutdown command can be used to restart a system with the -r option instead of the -h
option. Usage is the same as before; just replace the -h option with the -r option.
# shutdown -r +5 "Server will restart in 5 minutes. Please save your work."
Shutdown scheduled for Mon 2017-08-21 21:34:36 +06, use 'shutdown -c' to cancel.

All other logged-in users will see a broadcast message in their terminal like this:
Broadcast message from root@localhost.localdomain (Mon 2017-08-21 21:29:36 +06):

Server will restart in 5 minutes. Please save your work.

The system is going down for reboot at Mon 2017-08-21 21:34:36 +06!

At this point a shutdown can be canceled by calling shutdown with the -c option.

# shutdown -c
Broadcast message from root@localhost.localdomain (Mon 2017-08-21 21:29:47 +06):

The system shutdown has been cancelled at Mon 2017-08-21 21:30:47 +06!

13.4.1.2 reboot Command

The reboot command can be used to shut down or reboot Linux.

The following command will shut down Linux:
# reboot -p

The -p option stands for poweroff.


To reboot just call the reboot command directly without any options:
# reboot

This will perform a graceful shutdown and restart of the machine.


The following command will forcefully reboot the machine. This is similar to pressing the
power button of the CPU. No shutdown takes place. The system will reset instantly.
# reboot -f

13.4.2 Boot Systems into Different Targets Manually


13.4.2.1 Working with Systemd Targets

Previous versions of Red Hat Enterprise Linux, which were distributed with SysV init
or Upstart, implemented a predefined set of runlevels that represented specific modes of
operation. These runlevels were numbered from 0 to 6 and were defined by a selection of
system services to be run when a particular runlevel was enabled by the system administrator.
In Red Hat Enterprise Linux 7, the concept of runlevels has been replaced with systemd targets.

i Targets, not Runlevels

In the systemd toolset, halt, poweroff, reboot, telinit, and shutdown are all symbolic
links to /bin/systemctl. They are backwards-compatibility shims that are simply
shorthands for invoking systemd's primary command-line interface: systemctl. They all
map to that same single program.
Most of those commands are shorthands for telling systemd, using systemctl, to isolate a
particular target. Isolation can be thought of as starting a target and stopping any others.
There are three final targets that are relevant here:

halt.target : Once the system has reached the state of fully isolating this target, it
will have called the reboot(RB_HALT_SYSTEM) system call. The kernel will have
attempted to enter a ROM monitor program, or simply halted the CPU (using
whatever mechanism is appropriate for doing so).

reboot.target : Once the system has reached the state of fully isolating this target, it will
have called the reboot(RB_AUTOBOOT) system call. The kernel will have attempted
to trigger a reboot.

poweroff.target : Once the system has reached the state of fully isolating this target,
it will have called the reboot(RB_POWER_OFF) system call. The kernel will have
attempted to remove power from the system, if possible.

These are the things that you should be thinking about as the final system states, not run
levels.

Systemd targets are represented by target units. Target units end with the .target file
extension and their only purpose is to group together other systemd units through a chain of
dependencies. For example, the graphical.target unit, which is used to start a graphical
session, starts system services such as the GNOME Display Manager (gdm.service) or
Accounts Service (accounts-daemon.service) and also activates the multi-user.target
unit. Similarly, the multi-user.target unit starts other essential system services such
as NetworkManager (NetworkManager.service) or D-Bus (dbus.service) and activates
another target unit named basic.target.

Red Hat Enterprise Linux 7 is distributed with a number of predefined targets that are more
or less similar to the standard set of runlevels from the previous releases of this system. For
compatibility reasons, it also provides aliases for these targets that directly map them to SysV
runlevels. Table 13.1 provides a complete list of SysV runlevels and their corresponding
systemd targets.


Runlevel  Target Units                          Description
0         runlevel0.target, poweroff.target     Shut down and power off the system
1         runlevel1.target, rescue.target       Set up a rescue shell
2         runlevel2.target, multi-user.target   Set up a non-graphical multi-user system
3         runlevel3.target, multi-user.target   Set up a non-graphical multi-user system
4         runlevel4.target, multi-user.target   Set up a non-graphical multi-user system
5         runlevel5.target, graphical.target    Set up a graphical multi-user system
6         runlevel6.target, reboot.target       Shut down and reboot the system

Table 13.1: Comparison of SysV runlevels with systemd targets.

13.4.2.2 Viewing the Default Target

To determine which target unit is used by default, run the following command:
$ systemctl get-default
graphical.target

13.4.2.3 Viewing the Current Target

To list all currently loaded target units, type the following command at a shell prompt:
$ systemctl list-units --type target

You get a long list as output:


UNIT                      LOAD   ACTIVE SUB    DESCRIPTION
basic.target              loaded active active Basic System
cryptsetup.target         loaded active active Encrypted Volumes
getty.target              loaded active active Login Prompts
graphical.target          loaded active active Graphical Interface
local-fs-pre.target       loaded active active Local File Systems (Pre)
local-fs.target           loaded active active Local File Systems
multi-user.target         loaded active active Multi-User System
network-online.target     loaded active active Network is Online
network-pre.target        loaded active active Network (Pre)
network.target            loaded active active Network
nfs-client.target         loaded active active NFS client services
nss-user-lookup.target    loaded active active User and Group Name Lookups
paths.target              loaded active active Paths
remote-fs-pre.target      loaded active active Remote File Systems (Pre)
remote-fs.target          loaded active active Remote File Systems
slices.target             loaded active active Slices
sockets.target            loaded active active Sockets
sound.target              loaded active active Sound Card
swap.target               loaded active active Swap
sysinit.target            loaded active active System Initialization
timers.target             loaded active active Timers

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

21 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

For each target unit, this command displays its full name (UNIT) followed by a note whether
the unit has been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit activation state,
and a short description (DESCRIPTION). By default, the systemctl list-units command
displays only active units. If you want to list all loaded units regardless of their state, run the
previous command with the --all or -a command line option.

13.4.2.4 Changing the Default Target

To configure the system to use a different target unit by default, type the following at a shell
prompt as root:
systemctl set-default name.target

Replace name with the name of the target unit you want to use by default (for example,
multi-user). This command replaces the /etc/systemd/system/default.target file with
a symbolic link to /usr/lib/systemd/system/name.target, where name is the name of the
target unit you want to use.
To configure the system to use the multi-user.target unit by default, run the following
command as root:
# systemctl set-default multi-user.target
rm '/etc/systemd/system/default.target'
ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'

13.4.2.5 Changing the Current Target

To change to a different target unit in the current session, type the following at a shell prompt
as root:


# systemctl isolate name.target

Replace name with the name of the target unit you want to use (for example, multi-user).
This command starts the target unit named name and all dependent units, and immediately
stops all others.

To turn off the graphical user interface and change to the multi-user.target unit in the current
session, run the following command as root:
# systemctl isolate multi-user.target

13.4.2.6 Changing to Rescue Mode

Rescue mode provides a convenient single-user environment and allows you to repair your
system in situations when it is unable to complete a regular booting process. In rescue mode,
the system attempts to mount all local file systems and start some important system services,
but it does not activate network interfaces or allow more users to be logged into the system at
the same time. In Red Hat Enterprise Linux 7, rescue mode is equivalent to single user mode
and requires the root password.

To change the current target and enter rescue mode in the current session, type the following
at a shell prompt as root:
# systemctl rescue

This command is similar to systemctl isolate rescue.target, but it also sends an
informative message to all users that are currently logged into the system. To prevent
systemd from sending this message, run this command with the --no-wall command line
option:
# systemctl --no-wall rescue

For example, when the command is run as root:

# systemctl rescue
Broadcast message from root@localhost on pts/0 (Fri 2017-08-22 18:23:15 BDT):
The system is going down to rescue mode NOW!

13.4.2.7 Changing to Emergency Mode

Emergency mode provides the most minimal environment possible and allows you to repair
your system even in situations when the system is unable to enter rescue mode. In emergency
mode, the system mounts the root file system only for reading, does not attempt to mount any
other local file systems, does not activate network interfaces, and only starts a few essential
services. In Red Hat Enterprise Linux 7, emergency mode requires the root password.

To change the current target and enter emergency mode, type the following at a shell prompt
as root:


# systemctl emergency

This command is similar to systemctl isolate emergency.target, but it also sends an
informative message to all users that are currently logged into the system. To prevent systemd
from sending this message, run this command with the --no-wall command line option:
# systemctl --no-wall emergency

13.4.2.8 Configure Systems to Boot into a Specific Target Automatically

This is very similar to the concept of “Changing the Default Target” previously described in
Section 13.4.2.4.

Let us say that we want to boot to console instead of a graphical user interface.

To configure the system to boot into the non-graphical multi-user target, type:
# systemctl set-default multi-user.target

(To return to booting into the graphical interface, set graphical.target instead.)

To check the current configuration, type:

# systemctl get-default
multi-user.target

13.5 Modify the System Bootloader


13.5.1 Introduction to GRUB 2
Red Hat Enterprise Linux 7 is distributed with version 2 of the GNU GRand Unified Bootloader
(GRUB 2), which allows the user to select an operating system or kernel to be loaded at system
boot time. GRUB 2 also allows the user to pass arguments to the kernel.

The GRUB 2 configuration is spread over several files:

/boot/grub2/grub.cfg : This file contains the final GRUB 2 configuration. Do not edit it
directly.
/etc/grub2.cfg : This is a symbolic link to the /boot/grub2/grub.cfg file.
/etc/default/grub : This file contains the list of the GRUB 2 variables. The values of the
environment variables can be edited.
/etc/sysconfig/grub : This is a symbolic link to the /etc/default/grub file.
/etc/grub.d : This directory contains all the individual files internally used by GRUB 2.

13.5.2 View Information


To get the details about the current active kernel, execute:
# grub2-editenv list
saved_entry=Red Hat Enterprise Linux Server (3.10.0-693.1.1.el7.x86_64) 7.4 (Maipo)

224
13.5. Modify the System Bootloader

The above information is stored in the /boot/grub2/grubenv file.

To get the list of the kernels displayed at boot time, execute:


# grep ^menuentry /boot/grub2/grub.cfg
menuentry 'Red Hat Enterprise Linux Server (3.10.0-693.1.1.el7.x86_64) 7.4 (Maipo)' ...
menuentry 'Red Hat Enterprise Linux Server (3.10.0-693.el7.x86_64) 7.4 (Maipo)' ...
menuentry 'Red Hat Enterprise Linux Server (3.10.0-514.26.2.el7.x86_64) 7.3 (Maipo)' ...
menuentry 'Red Hat Enterprise Linux Server (0-rescue-134307dd453f491da445924507d02a87) 7.3 ...

The GRUB 2 variables are in the /etc/default/grub file. The /etc/default/grub file is very
simple. This grub defaults file has a number of valid key/value pairs listed already. You can
simply change the values of existing keys or add other keys that are not already in the file.
The /etc/default/grub file in your machine will look somewhat like this:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

The major keys are described below. Some of these don’t appear in the above grub default file.

GRUB_TIMEOUT : The value of this key determines the length of time that the GRUB selection
menu is displayed. GRUB offers the capability to keep multiple kernels installed
simultaneously and choose between them at boot time using the GRUB menu. The
default value for this key is 5 seconds.
GRUB_DISTRIBUTOR : This key defines a sed expression that extracts the distribution release
number from the /etc/system-release file. This information is used to generate the
text names for each kernel release that appear in the GRUB menu, such as “Red Hat
Enterprise Linux Server release 7.4 (Maipo)”. Due to variations in the structure of the
data in the system-release file between distributions, this sed expression may vary from
one machine to the other.
GRUB_DEFAULT : Determines which kernel is booted by default. The default value saved boots
the saved entry, which is normally the most recently installed kernel. Another option
here is a number that represents an index into the list of kernels in grub.cfg. Using an
index such as 3, however, always loads the fourth kernel in the list, even after a new
kernel is installed, so an index may end up loading a different kernel once a new kernel
is installed. The only way to ensure that a specific kernel release is booted is to set the
value of GRUB_DEFAULT to the name of the desired kernel.
GRUB_SAVEDEFAULT : Normally, this option is not specified in the grub defaults file. In normal
operation, when a different kernel is selected for boot, that kernel is booted only that
one time and the default kernel is not changed. When set to true and used with
GRUB_DEFAULT=saved, this option saves the selected kernel as the new default whenever
a different kernel is selected for boot.
GRUB_DISABLE_SUBMENU : Some people may wish to create a hierarchical menu structure of
kernels for the GRUB menu screen. This key, along with some additional configuration
of the kernel stanzas in grub.cfg, allows creating such a hierarchy. For example,
one might have the main menu with production and test sub-menus, where each
sub-menu would contain the appropriate kernels. Setting this to false would enable
the use of sub-menus.
GRUB_TERMINAL_OUTPUT : In some environments it may be desirable or necessary to redirect
output to a different display console or terminal. The default is to send output to the
default terminal, usually the console which equates to the standard display. Another
useful option is to specify serial in a data center or lab environment in which serial
terminals or Integrated Lights Out (ILO) terminal connections are in use.
GRUB_TERMINAL_INPUT : As with GRUB_TERMINAL_OUTPUT, it may be desirable or necessary
to redirect input from a serial terminal or ILO device rather than the standard keyboard
input.
GRUB_CMDLINE_LINUX : This key contains the command line arguments that will be passed
to the kernel at boot time. Note that these arguments will be added to the kernel line of
grub.cfg for all installed kernels.
GRUB_DISABLE_RECOVERY : When the value of this key is set to false, a recovery entry is
created in the GRUB menu for every installed kernel. When set to true no recovery
entries are created. Regardless of this setting, the last kernel entry is always a rescue
option.
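As a small worked example (a sketch only; the timeout value of 10 seconds is an arbitrary
choice, not a recommendation from this manual), changing GRUB_TIMEOUT and rebuilding the
configuration on a BIOS-based machine could look like this:

# grep ^GRUB_TIMEOUT /etc/default/grub
GRUB_TIMEOUT=5
# sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg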

13.5.3 Customizing the GRUB 2 Configuration File


GRUB 2 scripts search the user’s computer and build a boot menu based on what operating
systems the scripts find. To reflect the latest system boot options, the boot menu is rebuilt
automatically when the kernel is updated or a new kernel is added.

However, users may want to build a menu containing specific entries or to have the entries in
a specific order. GRUB 2 allows basic customization of the boot menu to give users control of
what actually appears on the screen.

GRUB 2 uses a series of scripts to build the menu. These are located in the /etc/grub.d/
directory. The following files are included:


• 00_header, which loads GRUB 2 settings from the /etc/default/grub file.


• 01_users, which reads the superuser password from the user.cfg file.
• 10_linux, which locates kernels in the default partition of Red Hat Enterprise Linux.
• 30_os-prober, which builds entries for operating systems found on other partitions.
• 40_custom, a template, which can be used to create additional menu entries.

Scripts from the /etc/grub.d/ directory are read in alphabetical order and can be therefore
renamed to change the boot order of specific menu entries.

13.5.3.1 Changing the Default Boot Entry

By default, the key for the GRUB_DEFAULT directive in the /etc/default/grub file is the word
saved. This instructs GRUB 2 to load the kernel specified by the saved_entry directive in
the GRUB 2 environment file, located at /boot/grub2/grubenv. You can set another GRUB
record to be the default, using the grub2-set-default command, which will update the
GRUB 2 environment file.

GRUB 2 supports using a numeric value as the key for the saved_entry directive to change
the default order in which the operating systems are loaded. To specify which operating
system should be loaded first, pass its number to the grub2-set-default command. For
example:
# grub2-set-default 2

Note that the position of a menu entry in the list is denoted by a number starting with zero.
Therefore, in the example above, the third entry will be loaded. This value will be overwritten
by the name of the next kernel to be installed.

To force a system to always use a particular menu entry, use the menu entry name as the
key to the GRUB_DEFAULT directive in the /etc/default/grub file. To list the available menu
entries, follow the procedure in Section 13.5.2.

The file /etc/grub2.cfg is a symbolic link to the grub.cfg file, whose location is
architecture dependent. For reliability reasons, the symbolic link is not used in other examples
in this chapter. It is better to use absolute paths when writing to a file, especially when
repairing a system.

Changes to /etc/default/grub require rebuilding the grub.cfg file as follows:

• On BIOS-based machines, issue the following command as root:


# grub2-mkconfig -o /boot/grub2/grub.cfg

• On UEFI-based machines, issue the following command as root:


# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg


13.5.3.2 Editing a Menu Entry

If you need to prepare a new GRUB 2 configuration with different parameters, edit the value of the
GRUB_CMDLINE_LINUX key in the /etc/default/grub file. Note that you can specify multiple
parameters for the GRUB_CMDLINE_LINUX key, similarly to adding the parameters in the GRUB 2
boot menu. For example:
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,9600n8"

Where console=tty0 is the first virtual terminal and console=ttyS0 is the serial terminal to
be used.
Changes to /etc/default/grub require rebuilding the grub.cfg file as follows:

• On BIOS-based machines, issue the following command as root:


# grub2-mkconfig -o /boot/grub2/grub.cfg

• On UEFI-based machines, issue the following command as root:


# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
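After rebuilding grub.cfg with either of the commands above, it is worth verifying the result.
One possible check uses the grubby tool (installed by default on Red Hat Enterprise Linux 7) to
show the default kernel and its arguments; after the next reboot, /proc/cmdline shows the
arguments the running kernel was actually booted with:

# grubby --default-kernel
# grubby --info=$(grubby --default-kernel)
# cat /proc/cmdline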

13.6 Interrupt the Boot Process in Order to Gain Access to a System
A common scenario for a Linux administrator is a lost or forgotten root password. If that happens,
you need to reset it. One way to do that is to interrupt the boot process and drop into a minimal
emergency shell, which allows you to work on the system without entering a password.

1. Start the system and, on the GRUB 2 boot screen, press the e key for edit.
2. Remove the rhgb and quiet parameters from the end, or near the end, of the linux16
line, or linuxefi on UEFI systems.
Press Ctrl+a and Ctrl+e to jump to the start and end of the line, respectively. On some
systems, Home and End might also work.
The rhgb and quiet parameters must be removed in order to enable system messages.
3. Add the following parameters at the end of the linux line on 64-Bit IBM Power Series,
the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems:
rd.break enforcing=0

Adding the enforcing=0 option makes it possible to omit the time-consuming SELinux
relabeling process.
The initramfs will stop before passing control to the Linux kernel, enabling you to
work with the root file system.
Note that the initramfs prompt will appear on the last console specified on the Linux
line.


4. Press Ctrl+x to boot the system with the changed parameters.


With an encrypted file system, a password is required at this point. However, the
password prompt might not appear, as it is obscured by logging messages. You can
press and hold the Backspace key to see the prompt. Release the key and enter the password for
the encrypted file system, while ignoring the logging messages.
The initramfs switch_root prompt appears.
5. The file system is mounted read-only on /sysroot/. You will not be allowed to change
the password if the file system is not writable.
Remount the file system as writable:
switch_root:/# mount -o remount,rw /sysroot

6. The file system is remounted with write enabled.


Change the file system’s root as follows:
switch_root:/# chroot /sysroot

The prompt changes to sh-4.2#.


7. Enter the passwd command and follow the instructions displayed on the command line
to change the root password.
Note that if the system is not writable, the passwd tool fails with the following error:
Authentication token manipulation error

8. Remount the file system as read only:


sh-4.2# mount -o remount,ro /

9. Enter the exit command to exit the chroot environment.


10. Enter the exit command again to resume the initialization and finish the system boot.
With an encrypted file system, a password or passphrase is required at this point. However,
the password prompt might not appear, as it is obscured by logging messages. You
can press and hold the Backspace key to see the prompt. Release the key and enter the
password for the encrypted file system, while ignoring the logging messages.
11. Log in as the root user with the new password.
12. Enter the following command in a terminal to restore the /etc/shadow file’s SELinux
security context:
# restorecon /etc/shadow

13. Enter the following commands to turn SELinux policy enforcement back on and verify
that it is on:


# setenforce 1
# getenforce
Enforcing

Thirteen
Chapter 14

Virtualized Systems
1 class

Chapter Goals
1. Virtualization basics.
2. Install Red Hat Enterprise Linux systems as virtual guests.
3. Access a virtual machine’s console.
4. Start and stop virtual machines.
5. Configure systems to launch virtual machines at boot.

14.1 Introduction to Virtualization


14.1.1 What is Virtualization?
Virtualization is a broad computing term used for running software, usually multiple operating
systems, concurrently and in isolation from other programs on a single system. Virtualization
is accomplished by using a hypervisor. This is a software layer or subsystem that controls
hardware and enables running multiple operating systems, called virtual machines (VMs) or
guests, on a single (usually physical) machine. This machine with its operating system is
called a host. There are several virtualization methods:

Full virtualization Full virtualization uses an unmodified version of the guest operating
system. The guest addresses the host’s CPU via a channel created by the hypervisor.
Because the guest communicates directly with the CPU, this is the fastest virtualization
method.


Paravirtualization Paravirtualization uses a modified guest operating system. The guest


communicates with the hypervisor. The hypervisor passes the unmodified calls from
the guest to the CPU and other interfaces, both real and virtual. Because the calls are
routed through the hypervisor, this method is slower than full virtualization.
Software virtualization (or emulation) Software virtualization uses binary translation and
other emulation techniques to run unmodified operating systems. The hypervisor
translates the guest calls to a format that can be used by the host system. Because all
calls are translated, this method is slower than the other virtualization methods. Note that Red Hat does
not support software virtualization on Red Hat Enterprise Linux.

14.1.2 Why Use Virtualization


Virtualization can be useful both for server deployments and individual desktop workstations.
Desktop virtualization offers cost-efficient centralized management and better disaster
recovery. In addition, by using connection tools such as ssh, it is possible to connect to a
desktop remotely.

When used for servers, virtualization can benefit not only larger networks, but also
deployments with more than a single server. Virtualization provides live migration, high
availability, fault tolerance, and streamlined backups.

14.1.3 Red Hat Virtualization Solutions


Red Hat offers the following major virtualization solutions, each with a different user focus
and features:

Red Hat Enterprise Linux: The ability to create, run, and manage virtual machines, as well
as a number of virtualization tools and features are included in Red Hat Enterprise
Linux 7. This solution supports a limited number of running guests per host, as well as
a limited range of guest types.
Red Hat Virtualization: Red Hat Virtualization (RHV) is based on the Kernel-based Virtual
Machine (KVM) technology, like virtualization on Red Hat Enterprise Linux, but
offers an enhanced set of features. Designed for enterprise-class scalability and
performance, it enables management of your entire virtual infrastructure, including
hosts, virtual machines, networks, storage, and users from a centralized graphical
interface.
Red Hat OpenStack Platform: Red Hat OpenStack Platform offers an integrated foundation
to create, deploy, and scale a secure and reliable public or private cloud.

14.1.4 KVM and Virtualization in Red Hat Enterprise Linux


KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on a variety of
architectures. It is built into the standard Red Hat Enterprise Linux 7 kernel and integrated
with the Quick Emulator (QEMU), and it can run multiple guest operating systems. The


KVM hypervisor in Red Hat Enterprise Linux is managed with the libvirt API, and tools
built for libvirt (such as virt-manager and virsh). Virtual machines are executed and run
as multi-threaded Linux processes, controlled by these tools. See Figure 14.1.

Figure 14.1: KVM architecture.

14.1.5 libvirt and libvirt Tools


The libvirt package provides a hypervisor-independent virtualization API that can interact
with the virtualization capabilities of a range of operating systems. It includes:

• A virtualization layer to securely manage virtual machines on a host.


• An interface for managing local and networked hosts.
• The APIs required to provision, create, modify, monitor, control, migrate, and stop
virtual machines. Although multiple hosts may be accessed with libvirt simultaneously,
the APIs are limited to single node operations.

libvirt focuses on managing single hosts and provides APIs to enumerate, monitor and
use the resources available on the managed node, including CPUs, memory, storage and
networking. The management tools do not need to be on the same physical machine as the
machines on which the hosts are running. In such a scenario, the machine on which the
management tools run communicates with the machines on which the hosts are running
using secure protocols.
Red Hat Enterprise Linux 7 supports libvirt and includes libvirt-based tools as its default
method for virtualization management (as in Red Hat Virtualization Management).

14.1.6 Virtualized Hardware Devices


Virtualization on Red Hat Enterprise Linux 7 allows virtual machines to use the host’s physical
hardware as three distinct types of devices:

Virtualized and emulated devices: KVM implements many core devices for virtual ma-
chines as software. Examples of such devices are virtual CPU (vCPU), PCI bridge,


graphics card, IDE controller etc. These emulated hardware devices are crucial for
virtualizing operating systems. Emulated devices are virtual devices which exist entirely
in software. These hardware devices all appear as being physically attached to the
virtual machine but the device drivers work in different ways.
Paravirtualized devices: Paravirtualization provides a fast and efficient means of communi-
cation for guests to use devices on the host machine. KVM provides paravirtualized
devices to virtual machines using the virtio API as a layer between the hypervisor
and guest. All virtio devices have two parts: the host device and the guest driver.
Paravirtualized device drivers allow the guest operating system access to physical
devices on the host system.
Physically shared devices: Virtual Function I/O (VFIO) is a new kernel driver in Red Hat
Enterprise Linux 7 that provides virtual machines with high performance access to
physical hardware. VFIO attaches PCI devices on the host system directly to virtual
machines, providing guests with exclusive access to PCI devices for a range of tasks.
This enables PCI devices to appear and behave as if they were physically attached to the
guest virtual machine. This process in virtualization is known as device assignment, or
passthrough.

14.1.7 Virtual Networking


A virtual guest’s connection to any network uses the software network components of the
physical host. These software components can be rearranged and reconfigured by using
libvirt’s virtual network configuration. The host therefore acts as a virtual network switch,
which can be configured in a number of different ways to fit the guest’s networking needs.
From the point of view of the guest operating system, a virtual network connection is the
same as a normal physical network connection.
By default, all guests on a single host are connected to the same libvirt virtual network, named
default. Guests on this network can make the following connections:

With each other and with the virtualization host Both inbound and outbound traffic is
possible, but is affected by the firewalls in the guest operating system’s network
stack and by libvirt network filtering rules attached to the guest interface.
With other hosts on the network beyond the virtualization host Only outbound traffic is
possible, and is affected by Network Address Translation (NAT) rules, as well as the
host system’s firewall.

However, if needed, guest interfaces can instead be set to one of the following modes:

Isolated mode: The guests are connected to a network that does not allow any traffic beyond
the virtualization host.
Routed mode: The guests are connected to a network that routes traffic between the guest
and external hosts without performing any NAT. This enables incoming connections
but requires extra routing-table entries for systems on the external network.


Bridged mode: The guests are connected to a bridge device that is also connected directly
to a physical Ethernet device connected to the local Ethernet. This makes the guest
directly visible on the physical network, and thus enables incoming connections, but
does not require any extra routing-table entries.

For basic outbound-only network access from virtual machines, no additional network
setup is usually needed, as the default network is installed along with the libvirt
package, and automatically started when the libvirtd service is started. If more advanced
functionality is needed, additional networks can be created and configured using either virsh
or virt-manager, and the guest XML configuration file can be edited to use one of these new
networks.
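A quick way to inspect the default network from the host is with the virsh network commands.
The output below is only indicative:

# virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

Running virsh net-dumpxml default prints the network's full XML definition, including its NAT
configuration and IP address range.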

14.2 Creating a Virtual Machine


14.2.1 Basic Requirements
To set up a KVM virtual machine on Red Hat Enterprise Linux 7, your system must meet the
following criteria:

Architecture Virtualization with the KVM hypervisor is currently only supported on Intel 64
and AMD64 systems.
Disk space and RAM Minimum:
• 6 GB free disk space
• 2 GB RAM
Customer Portal registration To install virtualization packages, your host machine must
be registered and subscribed to the Red Hat Customer Portal. To register, run the
subscription-manager register command and follow the prompts (a possible command
sequence is shown after this list). Alternatively, run the Red Hat Subscription Manager
application from Applications → System Tools on the desktop to register.
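A possible registration sequence from the command line looks like the following; you will be
prompted for your Customer Portal credentials:

# subscription-manager register
# subscription-manager attach --auto
# subscription-manager status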

14.2.2 Required Packages


To use virtualization on Red Hat Enterprise Linux, the qemu-kvm, qemu-img, and libvirt
packages must be installed. These provide the user-level KVM emulator, disk image manager,
and virtualization management tools on the host system. Install the qemu-kvm, qemu-img,
libvirt, and virt-manager packages using the following command.
# yum install qemu-kvm qemu-img libvirt virt-manager
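After the packages are installed, you may want to confirm that the libvirtd service is running
and that the KVM kernel modules are loaded. A minimal check, assuming an Intel CPU (on AMD
systems the module is kvm_amd rather than kvm_intel), is:

# systemctl start libvirtd
# systemctl enable libvirtd
# lsmod | grep kvm
# virsh version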

14.2.3 Creating a Virtual Machine with Virtual Machine Manager


Virtual Machine Manager, also known as virt-manager, is a graphical tool for quick
deployment of virtual machines in Red Hat Enterprise Linux. In this section, you will
learn how to use Virtual Machine Manager to create a virtual machine.


14.2.3.1 The Virtual Machine Manager GUI

To open the Virtual Machine Manager, click Applications → System Tools → Virtual Machine
Manager; or press the Super key, type virt-manager, and press Enter.

Figure 14.2 shows the Virtual Machine Manager interface. This interface enables you to
control all of your virtual machines from one central location.

Figure 14.2: The Virtual Machine Manager interface.

Commonly used elements of the interface include:

1. Create new virtual machine: Click to create a new virtual machine.


2. Virtual Machines: A list of configured connections and all guest virtual machines
associated with them. When a virtual machine is created, it is listed here. When a guest
is running, an animated graph shows the guest’s CPU usage in the CPU usage column.
After selecting a virtual machine from this list, use the following buttons to control the
selected virtual machine’s state:
• Open: Opens the guest virtual machine console and details in a new window.
• Run: Turns on the virtual machine.
• Pause: Pauses the virtual machine.
• Shutdown: Shuts down the virtual machine. Clicking the arrow displays a drop-
down menu with several options for turning off the virtual machine, including
Reboot, Shut Down, Force Reset, Force Off, and Save.

Right-clicking a virtual machine shows a menu with more functions, including:

• Clone: Clones the virtual machine.


• Migrate: Migrates the virtual machine to another host.
• Delete: Deletes the virtual machine.

14.2.3.2 Creating a Virtual Machine with Virtual Machine Manager

In this section, we describe the steps to create a Red Hat Enterprise Linux 7 virtual machine on
Virtual Machine Manager. Download a binary DVD ISO image from the Red Hat Customer


Portal. This example uses Red Hat Enterprise Linux 7 Workstation. The image file will be
used to install the guest virtual machine’s operating system.

1. Open Virtual Machine Manager


Click Applications → System Tools → Virtual Machine Manager
or
Press the Super key, type virt-manager, and press Enter
2. Create a new virtual machine
Click the Create new virtual machine button to open the New VM wizard.
3. Specify the installation method
Start the creation process by choosing the method of installing the new virtual machine.
See Figure 14.3.

Figure 14.3: Select the installation method.

For this example, select Local install media (ISO image). This installation method uses an
image of an installation disk (in this case an .iso file). Click Forward to continue to the
next step.


4. Locate installation media


a) Select the Use ISO Image option.
b) Click the Browse → Browse Local buttons.
c) Locate the ISO image you downloaded on your machine.
d) Select the ISO file and click Open.
e) Ensure that Virtual Machine Manager correctly detected the OS type. If not,
uncheck Automatically detect operating system based on install media and select Linux
from the OS type drop-down and Red Hat Enterprise Linux 7 from the Version
drop-down. See Figure 14.4.

Figure 14.4: Local ISO image installation.

5. Configure memory and CPU


You can use step 3 of the wizard to configure the amount of memory and the number of
CPUs to allocate to the virtual machine. The wizard shows the number of CPUs and
amount of memory available to allocate. See Figure 14.5. In this example, we leave the
default settings and click Forward.
6. Configure storage
Using step 4 of the wizard, you can assign storage to the guest virtual machine. The
wizard shows options for storage, including where to store the virtual machine on the


Figure 14.5: Configuring CPU and memory.

host machine. See Figure 14.6. In this example, we leave the default settings and click
Forward.

7. Name and review


Using step 5 of the wizard, you can create a name for the virtual machine and configure
network settings. Enter a name for the virtual machine, verify the settings, and click
Finish. Virtual Machine Manager will create a virtual machine with the specified
hardware settings. See Figure 14.7.

After Virtual Machine Manager creates your Red Hat Enterprise Linux 7 virtual machine,
the virtual machine’s window will open, and the installation of the selected operating system
will begin in it. Follow the instructions in the Red Hat Enterprise Linux 7 installer to complete
the installation of the virtual machine’s operating system.
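The same kind of guest can also be created without the GUI by using the virt-install utility
described in Section 14.3.2. The command below is only a sketch: the guest name, memory, disk
size, and ISO path are illustrative and must be adjusted to your environment:

# virt-install --name rhel7-guest \
      --ram 2048 --vcpus 2 \
      --disk size=9 \
      --cdrom /var/lib/libvirt/images/rhel-server-7.4-x86_64-dvd.iso \
      --os-variant rhel7.0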

14.2.3.3 Exploring the Guest Virtual Machine

You can view a virtual machine’s console by selecting a virtual machine in the Virtual
Machine Manager window and clicking Open. You can operate your Red Hat Enterprise


Figure 14.6: Configuring storage.

Linux 7 virtual machine from the console in the same way as a physical system. See Figure 14.8.
In this window, the following options are available.

1. Show the graphical console: shows the virtual machine’s display. The virtual machine
can be operated from the console the same as a physical machine.
2. Show virtual hardware details: shows details about the virtual hardware that the guest
is using. These include an overview of basic system details, performance, processor,
memory, and boot settings, and details of the system’s virtual devices.
3. Run: turns on the virtual machine.
4. Pause: pauses the virtual machine.
5. Shut down: shuts down the virtual machine. Clicking the arrow displays a drop-down
menu with several options for turning off the virtual machine, including Reboot,
Shut Down, Force Reset, Force Off, and Save.
6. Manage snapshots: enables creating, running, and managing snapshots of the virtual
machine.
7. Send Key: sends key combinations such as Ctrl+Alt+Backspace, Ctrl+Alt+Delete,
Ctrl+Alt+F1, PrintScreen, and more to the virtual machine.


Figure 14.7: Naming and verification.

8. Full screen: switches the virtual machine to full-screen view.

14.3 Interacting with Virtualization from Command-Line


14.3.1 virsh
virsh is a command-line interface (CLI) tool for managing the hypervisor and guest virtual
machines, and works as the primary means of controlling virtualization on Red Hat Enterprise
Linux 7. Its capabilities include creating, configuring, pausing, listing, and shutting down
virtual machines, as well as managing virtual networks and loading virtual machine definitions
from XML files. As such,
it is ideal for creating virtualization administration scripts. Users without root privileges
can use the virsh command as well, but in read-only mode. virsh is installed as part of the
libvirt-client package.

14.3.1.1 Running virsh

There are two ways to run virsh: you can issue commands one at a time from the shell prompt, or
work inside an interactive terminal. To enter the interactive terminal, type virsh and press
[Enter]. Note that a session started without root privileges is read-only; start the session as
root to get full management access.

Figure 14.8: The guest virtual machine console.


$ virsh
Welcome to virsh, the virtualization interactive terminal.
Type:  'help' for help with commands
       'quit' to quit
virsh #

14.3.1.2 Connecting to the Hypervisor

The virsh connect [hostname-or-URI] [--readonly] command begins a local hypervisor
session using virsh. After the first time you run this command it will run automatically
each time the virsh shell runs. The hypervisor connection URI specifies how to connect to
the hypervisor. For example, to establish a session to connect to your set of guest virtual
machines, with you as the local user:
$ virsh connect qemu:///session
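The qemu:///session URI connects to a per-user libvirt instance. To manage the host-wide guests
used elsewhere in this chapter, connect to the privileged system instance instead, for example:

# virsh connect qemu:///system
# virsh -c qemu:///system list --all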

14.3.1.3 Displaying Information about a Guest Virtual Machine and the Hypervisor

To list all the virtual machines your hypervisor is connected to, use:
# virsh list --all
Id Name State
------------------------------------------------

Note that this command lists both persistent and transient virtual machines.


To display the hypervisor’s host name, use:


# virsh hostname
dhcp-2-157.eus.myhost.com

14.3.1.4 Starting a Guest Virtual Machine

To start a virtual machine that you have already created and that is currently in the inactive
state, use:
# virsh start guest1 --console
Domain guest1 started
Connected to domain guest1
Escape character is ^]

Here guest1 is the name of the virtual machine. In addition, the command attaches the guest’s
console to the terminal running virsh.

14.3.1.5 Configuring a Virtual Machine to be Started Automatically at Boot

To set a virtual machine named guest1 which you already created to autostart when the host
boots:
# virsh autostart guest1
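To undo this later, the same command accepts a --disable flag. Autostart takes effect when the
libvirtd service itself starts, so make sure that service is enabled at boot as well:

# virsh autostart --disable guest1
# systemctl enable libvirtd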

14.3.1.6 Rebooting a Guest Virtual Machine

To reboot a guest virtual machine named guest1, use:


# virsh reboot guest1 --mode initctl

In this example, the reboot uses the initctl method, but you can choose any mode that suits
your needs.

14.3.1.7 Shutting Down a Guest Virtual Machine

To shut down the guest1 virtual machine using the acpi mode:
# virsh shutdown guest1 --mode acpi

Domain guest1 is being shutdown.

14.3.1.8 Suspending a Guest Virtual Machine

To suspend the guest1 virtual machine:


# virsh suspend guest1
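A suspended guest keeps its memory state but receives no CPU time. To bring it back, use:

# virsh resume guest1
Domain guest1 resumed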

14.3.1.9 Other virsh Commands

• To display general information about a virtual machine named guest1, use
virsh dominfo guest1.


Warning: Using guestfish on running virtual machines may cause corruption of the disk image.
Use the guestfish command with the --ro (read-only) option if the disk image is being
used by a running virtual machine.

• To display the current state of the guest1 virtual machine, use virsh domstate guest1.
• To create a guest virtual machine from an XML file, use: virsh create guest1.xml.
• To send a keystroke combination to a guest virtual machine, for example, to send Left Ctrl,
Left Alt, and Delete in the Linux encoding to the guest1 virtual machine and hold
them for 1 second, use:
# virsh send-key guest1 \
    --codeset linux --holdtime 1000 \
    KEY_LEFTCTRL KEY_LEFTALT KEY_DELETE

These keys are all sent simultaneously, and may be received by the guest in a random
order.

14.3.2 Other Tools


There are several other tools to interact with guest machines from the host. A few are listed
below, followed by some example invocations at the end of this section. Note that most of the
tools are installed as part of the libguestfs-tools package.

virt-install This is a command-line utility for provisioning new virtual machines. It


supports both text-based and graphical installations, using serial console, SPICE, or
VNC client-server pair graphics. Installation media can be local, or exist remotely on an
NFS, HTTP, or FTP server. The tool can also be configured to run unattended and use
the kickstart method to prepare the guest, allowing for easy automation of installation.
This tool is included in the virt-install package.
guestfish This is a shell and command-line utility for examining and modifying virtual
machine disk images, as these actions cannot be performed using virsh or virt-manager.
The guestfish tool uses the libguestfs library and exposes all functionality provided
by the guestfs API.
guestmount A command-line utility used to mount virtual machine file systems and disk
images on the host machine. To unmount content mounted this way, use guestunmount.
virt-builder A command-line utility for quickly building and customizing new virtual
machines.
virt-cat A command-line utility that can be used to quickly view the contents of one or
more files in a specified virtual machine’s disk or disk image.


Warning: Using guestmount in --rw (read/write) mode to access a disk that is currently being used
by a guest may cause the disk to become corrupted. Do not use guestmount in --rw
(read/write) mode on live virtual machines. Use the guestmount command with the --ro
(read-only) option if the disk image is being used.

Warning: Use virt-format only on turned-off virtual machines that are not being used by another
disk-editing tool. Using virt-format on an active guest image or an image that is being
edited may cause disk corruption in the virtual machine.

virt-copy-in and virt-copy-out Command-line utilities used for copying files or
directories from the host into a guest, and from a guest to the host, respectively.
virt-customize A command-line utility for customizing virtual machine disk images. virt-
customize can be used to install packages, edit configuration files, run scripts, and set
passwords.
virt-df A command-line utility used to show the actual physical disk usage of virtual
machines, similar to the command-line utility df. Note that this tool does not work
across remote connections.
virt-diff A command-line utility for showing differences between the file systems of two
virtual machines. This can be useful for example to discover which files have changed
between snapshots.
virt-edit A command-line utility used to edit files that exist on a specified virtual machine.
virt-filesystems A command-line utility used to discover file systems, partitions, logical
volumes and their sizes in a disk image or virtual machine. One common use is in shell
scripts, to iterate over all file systems in a disk image.
virt-format A command-line utility capable of formatting guest image files, but also logical
volumes on the host machine.
virt-inspector A command-line utility that can examine a virtual machine or disk image to
determine the version of its operating system and other information. It can also produce
XML output, which can be piped into other programs. Note that virt-inspector can
only inspect one virtual machine at a time.
virt-log A command-line utility for listing log files from virtual machines.
virt-ls A command-line utility that lists files and directories inside a virtual machine.
virt-rescue A command-line utility that provides a rescue shell and some simple recovery
tools for unbootable virtual machines and disk images. It can be run on any virtual
machine known to libvirt, or directly on disk images.


Warning: Using virt-resize on running virtual machines can give inconsistent results. It is
recommended to shut down virtual machines before attempting to resize them.

virt-resize A command-line utility to resize virtual machine disks, and resize or delete
any partitions on a virtual machine disk. It works by copying the guest image and
leaving the original disk image untouched.
virt-sparsify A command-line utility to make a virtual machine disk (or any disk image)
thin-provisioned. The tool can convert free space in the disk image to free space on the
host.
virt-tar-out and virt-tar-in Command-line archive tools for packing a virtual machine
disk image directory into a tarball, and unpacking an uncompressed tarball into a
virtual machine disk image or specified libvirt guest, respectively.
virt-top A command-line utility similar to top, which shows statistics related to guest
virtual machines. This tool is contained in the virt-top package. See man virt-top for
details.
virt-v2v A command-line utility for converting virtual machines from a foreign hypervisor
to run on KVM managed by libvirt. Currently, virt-v2v can convert Red Hat Enterprise
Linux and Windows guests running on Xen and VMware ESX. The virt-v2v tool is
installed in Red Hat Enterprise Linux 7.1 and later as part of the virt-v2v package.
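As a short illustration of some of these tools, the commands below report disk usage, list a
directory, and print a single file from a guest named guest1 (the guest name is assumed here);
these particular tools open the disk image read-only:

# virt-df -h -d guest1
# virt-ls -d guest1 /etc
# virt-cat -d guest1 /etc/redhat-release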

Fourteen
This “Linux System Administration I (Red Hat)” training guide was typeset using the

LaTeX typesetting system (with the memoir class), created by Leslie Lamport. The body text is set in

11.0pt Palatino, a font designed by Hermann Zapf, which includes italics and small caps.

Other fonts include Sans, Slanted and Typewriter from Donald Knuth’s Computer Modern

family.

Micro-typographic adjustments of the text were made using the microtype package.

Those picturesque characters at the end of each chapter are actually chapter numbers in

words (One, Two, Three, Four, . . . ) written in Egyptian hieroglyphs, the formal writing

system used in Ancient Egypt. It combined logographic, syllabic and alphabetic elements,

with a total of some 1,000 distinct characters. Since they did not have the decimal (Arabic)

number system we enjoy today, it would have been difficult to write the chapter number in

numerals. As a matter of fact, the Egyptians had a base-10 system of hieroglyphs for

numerals. By this we mean that they had separate symbols for one unit, one ten, one

hundred, one thousand, one ten thousand, one hundred thousand, and one million.

And had they tried to write the name of the course you have mastered here, it would look like this:

Linux System Administration I Red Hat.

Both sweet and scary, right?
