🚀Day 04- Advanced Linux Commands

In the previous article, we learned some of the basic commands of the Linux operating system.

In this article, we are going to look at some of the advanced Linux commands that DevOps engineers use in their day-to-day work.

1. Grep:-

The grep filter searches a file for a particular pattern of characters and displays all lines that contain that pattern.

Syntax: grep [options] pattern [files]

Options:
-c : Print only a count of the lines that match the pattern.
-h : Display the matched lines, but do not display the filenames.
-i : Ignore case when matching.
-l : Display a list of matching filenames only.
-n : Display the matched lines and their line numbers.
-v : Print all the lines that do not match the pattern.
-e exp : Specify an expression; can be used multiple times.
-f file : Take patterns from a file, one per line.
-E : Treat the pattern as an extended regular expression (ERE).
-w : Match whole words only.
-o : Print only the matched parts of a matching line,
     with each such part on a separate output line.
-A n : Print the matched line and n lines after it.
-B n : Print the matched line and n lines before it.
-C n : Print the matched line and n lines before and after it.
Example 1: Search for a string in a file 
$ grep "employee" employee.txt

Example 2: Search for a string in multiple files
$ grep "employee" employee.txt data.txt

Example 3: Case-Insensitive Search
$ grep -i "employee" employee.txt

Example 4: Search for a string in all files recursively 
$ grep -r "employee" /home/ubuntu/

Example 5: Inverted search
$ grep -v "account" account.txt
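Several of the options above can be tried together on a throwaway file (a quick sketch; the file name and contents are made up for illustration):

```shell
# Create a small sample file (illustrative data)
printf 'alice manager\nbob developer\ncarol Manager\n' > staff.txt

# -i ignores case and -c counts matching lines
grep -ic "manager" staff.txt        # prints 2

# -n prefixes each match with its line number
grep -n "developer" staff.txt       # prints 2:bob developer

# -o prints only the matched parts, one per line
grep -io "manager" staff.txt
```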

2. Awk:-

AWK is suitable for pattern searching and processing.

WHAT CAN WE DO WITH AWK?

1. AWK Operations:
(a) Scans a file line by line
(b) Splits each input line into fields
(c) Compares input line/fields to pattern
(d) Performs action(s) on matched lines

2. Useful For:
(a) Transform data files
(b) Produce formatted reports

3. Programming Constructs:
(a) Format output lines
(b) Arithmetic and string operations
(c) Conditionals and loops

Built-In Variables In Awk

Awk’s built-in variables include the field variables—$1, $2, $3, and so on ($0 is the entire line) — that break a line of text into individual words or pieces called fields.

  • NR: Keeps a running count of the number of input records. Remember that records are usually lines; awk performs the pattern/action statements once for each record in a file.

  • NF: Keeps a count of the number of fields within the current input record.

  • FS: Contains the field separator character used to divide fields on the input line. The default is “white space”, meaning space and tab characters. FS can be reassigned to another character (typically in BEGIN) to change the field separator.

  • RS: Stores the current record separator character. Since, by default, an input line is the input record, the default record separator is a newline.

  • OFS: Stores the output field separator, which separates the fields when awk prints them. The default is a blank space. Whenever print has several parameters separated by commas, it prints the value of OFS between each parameter.

  • ORS: Stores the output record separator, which separates the output lines when awk prints them. The default is a newline character. print automatically outputs the contents of ORS at the end of whatever it is given to print.
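The built-in variables above can be seen in action on a small colon-delimited file (a sketch; the file name and contents are made up):

```shell
# Sample data: name, age, role, separated by ':'
printf 'alice:30:dev\nbob:25:ops\n' > people.txt

# FS splits input on ':'; OFS joins the printed fields with ' | ';
# NR is the record number and $NF is the last field
awk 'BEGIN {FS=":"; OFS=" | "} {print NR, $1, $NF}' people.txt
# 1 | alice | dev
# 2 | bob | ops
```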

Syntax: 
awk options 'selection_criteria {action}' input-file > output-file

Options:  
-f program-file : Read the AWK program source from the file
                  program-file, instead of from the first command-line argument.
-F fs           : Use fs as the input field separator.
Example 1: Print the lines which match the given pattern. 
$ awk '/manager/ {print}' employee.txt 

Example 2: Printing specific columns
$ awk '{print $1,$4}' employee.txt 

Example 3: Display Line Number
$ awk '{print NR,$0}' employee.txt 

Example 4: Display the First and Last Field
$ awk '{print $1,$NF}' employee.txt 

Example 5: Display Lines 10 to 30
$ awk 'NR>=10 && NR<=30 {print NR,$1, $2}' employee.txt

Example 6: Print lines containing TRACE between line numbers 20 and 30
$ awk 'NR>=20 && NR<=30 && /TRACE/ {print NR,$1,$2,$5}' app.log

Example 7: Printing log entries for a specific time period
$ awk '$2>"08:52:00" && $2<"08:54:00" {print $1,$2}' app.log
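The arithmetic support mentioned under Programming Constructs makes quick aggregations easy; for example, summing a numeric column (a sketch, assuming the third field holds a number):

```shell
# Sample data with a numeric third column (illustrative)
printf 'a 1 10\nb 2 20\nc 3 30\n' > nums.txt

# Accumulate field 3 on every line; print the total in the END block
awk '{sum += $3} END {print sum}' nums.txt   # prints 60
```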

3. Find:-

It can be used to find files and directories and perform subsequent operations on them. It supports searching by name, type, creation date, modification date, owner, and permissions. By using '-exec', other UNIX commands can be executed on the files or folders found.

Syntax: 
$ find [where to start searching from]
 [expression determines what to find] [-options] [what to find]

Options:
-name pattern: Checks whether the file name matches the given shell-glob pattern.
-type type: Checks whether the file is of the given type.
-print: Always true. Prints the current file name and a newline to stdout.
-print0: Always true. Prints the current file name and a null character to stdout. Not required by POSIX.
-exec program [argument ...] ; : Always true. Executes the program with the given fixed arguments and the current file path.
-exec program [argument ...] {} + : Always true. Executes the program with the given fixed arguments and as many paths as possible. In almost every implementation, extra occurrences of {} mean extra copies of the given name (not required by POSIX).
-ok program [argument ...] ; : Same as -exec, but asks the user first and returns true only if the program exits with status 0.
( expr ): Forces precedence.
! expr: Returns true if expr returns false.
expr1 expr2 (or expr1 -a expr2): AND. expr2 isn't evaluated if expr1 is false.
expr1 -o expr2: OR. expr2 isn't evaluated if expr1 is true.
Example 1: Find Files Using Name in Current Directory
$ find . -name employee.txt 

Example 2: Search for files matching a pattern (quote the glob so the shell doesn't expand it)
$ find . -name "*.txt"

Example 3: Find File Based on User
$ find / -user root 

Example 4: Find File Based on group
$ find ~/ -group ubuntu 

Example 5:  Find SGID files 
$ find / -perm 2644

Example 6:  Find sticky bit files 
$ find / -perm 1551

Example 7: Find all Hidden Files
$ find /tmp -type f -name ".*"
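The '-exec' option mentioned in the introduction deserves an example, since it is what makes find useful for bulk operations (a sketch using a throwaway directory; all names are illustrative):

```shell
# Set up a throwaway directory with some files
mkdir -p demo && touch demo/a.log demo/b.log demo/c.txt

# -exec runs the command once per match; {} stands for the
# found path and \; terminates the command
find demo -name "*.log" -exec rm {} \;

ls demo   # only c.txt remains
```

With `-exec ... {} +` the matches are batched into fewer command invocations, which is faster on large trees.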

4. Sed:-

SED stands for stream editor, and it can perform many functions on files, such as searching, find and replace, insertion, and deletion. The most common use of sed is substitution, i.e., find and replace. With sed you can edit files even without opening them, which is a much quicker way to find and replace something in a file than first opening the file in the vi editor and then changing it.

Syntax:
sed OPTIONS... [SCRIPT] [INPUTFILE...]

Example 1:  Replace or substitute string
$ sed 's/unix/linux/' file.txt
Here the “s” specifies the substitution operation, the “/” characters are delimiters, “unix” is the search pattern, and “linux” is the replacement string.
By default, the sed command replaces only the first occurrence of the pattern in each line; it won't replace the second, third, ... occurrences in the line.

Example 2:  Replacing the nth occurrence of a pattern in a line 
$ sed 's/unix/linux/2' file.txt

Example 3: Replacing all the occurrence of the pattern in a line
$ sed 's/unix/linux/g' file.txt

Example 4: Replacing string on a specific line number 
$ sed '3 s/unix/linux/' file.txt
The above sed command replaces the string only on the third line.

Example 5: Delete line contains certain string
$ sed '/apple/d' fruits.txt 

Example 6: Delete the range of lines
$ sed '3,5d' fruits.txt
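The point above about editing files without opening them usually means in-place editing with the -i option (a sketch; -i.bak is GNU sed syntax and keeps the original as a .bak backup):

```shell
printf 'unix rocks\nunix forever\n' > os.txt

# Edit os.txt in place, keeping the original as os.txt.bak
sed -i.bak 's/unix/linux/g' os.txt

cat os.txt       # both lines now say linux
cat os.txt.bak   # original content preserved
```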

5. SSH:-

ssh stands for “Secure Shell”. It is a protocol used to securely connect to a remote server/system. ssh is secure in the sense that it transfers the data in encrypted form between the host and the client. It forwards input from the client to the host and relays back the output. By default, ssh runs on TCP port 22.

SSH is significantly more secure than other protocols such as telnet because it encrypts the data. There are three major encryption techniques used by SSH:

  • Symmetrical encryption: This encryption works on the principle of the generation of a single key for encrypting as well as decrypting the data. The secret key generated is distributed among the clients and the hosts for a secure connection. Symmetrical encryption is the most basic encryption and performs best when data is encrypted and decrypted on a single machine.

  • Asymmetrical encryption: This encryption is more secure because it generates two different keys: Public and Private key. A public key is distributed to different host machines while the private key is kept securely on the client machine. A secure connection is established using this public-private key pair.

  • Hashing: One-way hashing is an authentication technique which ensures that the received data is unaltered and comes from a genuine sender. A hash function is used to generate a hash code from the data. It is impossible to regenerate the data from the hash value. The hash value is calculated at the sender as well as the receiver’s end. If the hash values match, the data is authentic.
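The hashing idea can be tried locally with standard tools: the same input always produces the same digest, and a tiny change produces a completely different one (a sketch using sha256sum; SSH itself negotiates HMAC algorithms per session rather than calling sha256sum):

```shell
# Hash the same message twice: matching digests mean unaltered data
h1=$(printf 'hello' | sha256sum | cut -d' ' -f1)
h2=$(printf 'hello' | sha256sum | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "digests match"

# A one-character change yields an unrelated digest
printf 'Hello' | sha256sum | cut -d' ' -f1
```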

Syntax:
ssh user_name@host(IP/Domain_name)

Example 1: Log in to IP address 10.143.90.2 using the username jayesh
$ ssh jayesh@10.143.90.2

6. SCP:-

The scp command is used to securely copy files between two machines over SSH.

Syntax:
scp source_file user@destination_host:destination_path

Example 1: Copy a file from the local machine to a remote server
$ scp file.txt user@192.168.43.120:/home/user/file.txt

Example 2: Copy a file from a remote server to the local machine
$ scp user@192.168.43.120:/home/user/file.txt /home/user

7. SUDO:-

The sudo (Super User DO) command in Linux is generally used as a prefix for commands that only superusers are allowed to run. If you prefix a command with sudo, it runs with elevated privileges; in other words, it allows a user with the proper permissions to execute a command as another user, such as the superuser. This is the equivalent of the “run as administrator” option in Windows. sudo lets us have multiple administrators.

These users who can use the sudo command need to have an entry in the sudoers file located at “/etc/sudoers”.

Syntax:
sudo <option> <command>

Example 1: We can change the password of a user
$ sudo passwd ubuntu

Example 2: We can use sudo to restart the system immediately
$ sudo reboot

Example 3: We can use the -k option to invalidate the current cached
sudo authentication
$ sudo -k

8. SU:-

The su command allows us to switch to a different user and execute one or more commands in the shell without logging out of our current session.

Syntax:
su <username>

Example 1: Switch to another user using the su command
$ su ubuntu

9. SORT:-

The sort command is used to sort a file, arranging the records in a particular order. By default, sort assumes the file's contents are ASCII. Using options, sort can also sort numerically.

  • SORT command sorts the contents of a text file, line by line.

  • sort is a standard command-line program that prints the lines of its input, or the concatenation of all files listed as arguments, in sorted order.

  • The sort command is a command-line utility for sorting lines of text files. It supports sorting alphabetically, in reverse order, by number, by month, and can also remove duplicates.

  • The sort command can also sort by items not at the beginning of the line, ignore case sensitivity, and return whether a file is sorted or not. Sorting is done based on one or more sort keys extracted from each line of input.

  • By default, the entire input is taken as the sort key. Blank space is the default field separator.

The sort command follows these features as stated below:

  1. Lines starting with a number will appear before lines starting with a letter.

  2. Lines starting with a letter that appears earlier in the alphabet will appear before lines starting with a letter that appears later in the alphabet.

  3. Lines starting with an uppercase letter will appear before lines starting with the same letter in lowercase.

Syntax:
sort [options] [file(s)]
Options
-r: sort the input in reverse order.
-n: sort the input numerically.
-k: sort the input based on a specific field or column.
-b: ignore leading blanks.
-c: check whether the given file is already sorted.
-t: specify the field separator.
-u: remove duplicate lines from the output.
-o: specify the output file.
-M: sort by month name.
Example 1: Sorting File Content in Ascending Order
$ sort file.txt

Example 2: Sorting in Reverse Order
$ sort -r file.txt

Example 3: Numerical Sorting
$ sort -n file.txt

Example 4: Sorting by Field - sorts the lines of text in the file.txt 
file based on the second field (column) and displays the result on the 
screen
$ sort -k 2 file.txt

Example 5: Removing Duplicate Lines
$ sort -u file.txt

Example 6: Specifying the Output File
$ sort file.txt -o sorted_data.txt
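The options above combine naturally; for instance, sorting a colon-delimited file numerically on its second field (a sketch with made-up data):

```shell
# Sample data: item:price (illustrative)
printf 'pear:30\napple:5\nmango:12\n' > prices.txt

# -t sets the separator, -k 2 picks the field, -n sorts numerically
sort -t: -k2 -n prices.txt
# apple:5
# mango:12
# pear:30
```

Without -n the keys would be compared as text, and 12 would sort before 5.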

10. LSOF:-

The lsof command stands for “list open files”. It provides a list of files that are open, showing which files are opened by which process. In one go it lists all open files in the output console. It can list not only regular files but also directories, block special files, shared libraries, character special files, regular pipes, named pipes, internet sockets, UNIX domain sockets, and many others. It can be combined with the grep command for advanced searching and filtering.

Syntax:
lsof [option][user name]

Options
-b: Suppresses kernel blocks.
-u [username]: Prints all files opened by a user.
-u ^[username]: Prints all files opened by everyone except a 
                specific user.
-c [process]: Lists all files accessed by a particular process.
-p [process ID]: Shows all open files associated with a specific 
                 process ID.
-p ^[process ID]: Shows files opened by all other PIDs.
-R: Lists parent process IDs.
+D [directory path]: Prints all open files in a directory and its subdirectories.
-i: Displays all files accessed by network connections.
-i [IP version number]: Filters files based on their IP version (4 or 6).
-i [udp or tcp]: Filters open files based on the connection type (TCP or UDP).
-i :[port number]: Finds processes running on a specific port.
-i :[port range]: Finds processes running on a range of ports.
-t [file name]: Lists only the IDs of processes that have opened a particular file.
-d [mem]: Shows all memory-mapped files.
Example 1: List all open files
$ lsof | less

Example 2: List all files opened by a user
$ lsof -u ubuntu

Example 3: List all files which are opened by everyone 
except a specific user
$ lsof -u ^root

Example 4: List all open files by a particular Process
$ lsof -c nginx

Example 5: List all open files that are opened by a particular process,
using its process ID
$ lsof -p <process_id>

Example 6: List parent process IDs
$ lsof -R

Example 7: Files opened within a directory
$ lsof +D <directory path>

Example 8: Files opened by network connections
$ lsof -i 

Example 9: Find Processes Running on Specific Port
$ lsof -i TCP:22

Example 10: List Open Files of TCP Port Ranges 1-1024
$ lsof -i TCP:1-1024

Example 11: Kill all Activity of Particular User
$ kill -9 `lsof -t -u tech`

11. Tee:-

The tee command reads standard input and writes it to both standard output and one or more files.

Syntax:
tee [OPTION]... [FILE]...

Example 1: Append the output to the given file
$ cat file1.txt | tee -a file2.txt

Example 2: Write Output to Multiple Files in Linux
$ cat f1.txt | tee file-1.txt file-2.txt file-3.txt
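Because tee also passes everything through to standard output, it is handy for watching a command's output while logging it at the same time (a sketch; the file name is illustrative):

```shell
# See the message on screen and write it to build.log simultaneously
echo "build started" | tee build.log

# -a appends instead of overwriting
echo "build finished" | tee -a build.log

cat build.log   # contains both lines
```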

12. Cut:-

The cut command in Linux cuts out sections from each line of a file and writes the result to standard output. It can cut parts of a line by byte position, character, or field. Basically, the cut command slices a line and extracts the text. You must specify an option with the command; otherwise, it gives an error. If more than one file name is provided, the data from each file is not preceded by its file name.

Syntax:
cut OPTION... [FILE]...

Options
-f (--fields=LIST): Select using a specified field, a field set, 
                    or a field range.
-b (--bytes=LIST): Select using a specified byte, a byte set, 
                    or a byte range.
-c (--characters=LIST): Select using a specified character, 
                        a character set, or a character range.
-d (--delimiter): Used to specify a delimiter to use instead of the 
                  default TAB delimiter.
--complement: When specified, this option instructs cut to display 
              all the bytes, characters, or fields, except the selected.
-s (--only-delimited): The default setting is to print the lines that 
                       don't contain delimiter characters. Specifying 
                       the -s option instructs cut not to print the 
                       lines that don't contain delimiters.
--output-delimiter: By default, cut uses the input delimiter as 
the output delimiter. Specifying the --output-delimiter option allows 
you to specify a different output delimiter.
Let us consider a file state.txt with the below contents:
$ cat state.txt
Andhra Pradesh
Arunachal Pradesh
Assam
Bihar
Chhattisgarh

Example 1: Cut by Bytes
$ cut -b 1 state.txt
The command prints only the first byte from each input line of the file.

Example 2:  Cut by Characters
$ cut -c 1-7 state.txt
The above command prints the first seven characters of each line of the file.

Example 3: Cut Based on a Delimiter
$ echo "phoenixNAP is a IT services provider" | cut -d ' ' -f 2
The -d ' ' option sets a space as the field separator (instead of the
default TAB), and -f 2 prints the second field.

Example 4: Cut by Fields
$ cut -f 2 state.txt
The -f option extracts the second field; by default the field delimiter is a TAB character. A different delimiter can be given with -d:
$ cut -d: -f1,6 /etc/passwd

Example 5: Cut by Complement Pattern
$ cut --complement -c 1 state.txt
The above command prints everything except the first character of each line.
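The --output-delimiter option described above pairs well with -d and -f; here is a sketch on a passwd-style record (the data is made up, and --output-delimiter is a GNU coreutils extension):

```shell
# One /etc/passwd-style record (illustrative)
printf 'root:x:0:0:root:/root:/bin/bash\n' > pw.txt

# Split on ':', keep fields 1 and 7, join them with ' -> ' on output
cut -d: -f1,7 --output-delimiter=' -> ' pw.txt   # root -> /bin/bash
```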

13. Useradd:-

useradd is a command in Linux that is used to add user accounts to your system. useradd is a native binary compiled with the system, whereas the related adduser command (on Debian-based distributions) is a Perl script that uses the useradd binary in the background.

Syntax:
useradd [options] [User_name]

Example 1: Adding a simple user
$ sudo useradd testuser

Example 2: Specifying a home directory path for the new user
$ sudo useradd -d /home/testuser testuser

Example 3: Creating a user with a specific user ID (UID)
$ sudo useradd -u 1234 testuser

Example 4: Creating a user without a home directory
$ sudo useradd -M testuser

Example 5: Creating a user with changed login shell
$ sudo useradd -s /bin/sh testuser

14. Groupadd:-

Groups in Linux refer to user groups. There can be many users of a single system (a normal user takes a UID from 1000 to 60000, there is one root user with UID 0, and system users take UIDs 1 to 999). In a scenario with many users, some users have privileges that others don't, and it becomes difficult to manage all the permissions at the individual user level. So, using groups, we can group together a number of users and set privileges and permissions for the entire group.

Syntax:
groupadd [option] group_name 

Example 1: Adding a simple group
$ sudo groupadd developers 

Example 2: Creating a group with a specific group ID (GID)
$ sudo groupadd -g GID staff

Example 3: Creating a system group
$ sudo groupadd -r employee

Example 4: Create an encrypted password for the new group
$ sudo groupadd company -p pa55word

Thank you for reading. I hope you found this article helpful. If you liked it, please share it with others.

Mohd Ishtikhar Khan : )
