
Contents

File descriptors
Redirection
Implementing pipe
Linux tar command examples (tar == archiving, gzip == compression)
1) Create a tar archive of a subdirectory
2) List the contents of a tar archive
3) Extracting a tar archive
4) Linux tar command with gzip - Creating a compressed archive
5) Creating a compressed archive of the current directory
6) Creating an archive in a different directory
Linux gzip - How to work with compressed files
The Unix/Linux gzip command
The Linux gunzip command
The Linux file compress utilities (zcat, zmore, zgrep)
WC command
Here Document
Shell Script Commands
UNIX Commands Review
Command-Line Arguments
shift Command
Special *@#0$?_!- Parameters
Uppercase or Lowercase Text for Easy Testing
Check the Return Code
Using getopts to Parse Command-Line Arguments
Find Command
Finding files that contain text (find + grep)
Power file searching with find and grep
Acting on files you find (find + exec)
Running the ls command on files you find
Find and delete
Case-insensitive file searching
Find files by modification time
grep command
Searching for a text string in one file
Searching for a string in multiple files
Case-insensitive file searching with the Unix grep command
Reversing the meaning of a grep search
Using grep in a Unix/Linux command pipeline
How you can find all the Java processes running on your system
Find all the sub-directories in the current directory
Search for multiple patterns at one time
Searching for regular expressions (regex patterns) with grep
Display only filenames with a grep search
Related Unix/Linux grep commands and tutorials - string command
Scripting
Variables
Using the test shell builtin
File Tests
Integer Tests
String Tests
String tests with pattern matching
The difference between direct assignment and using let
SED
Deleting characters from a string or a file name
dirname and basename
The difference between pwd and the PWD environment variable
Database connection and retrieving values
Regular expressions
Reading variables from files
Example of logging
Scripts to use for an Oracle database
How to test sftp in a Linux script
Giving permissions recursively
File descriptors
In simple words, when you open a file, the operating system creates an entry to represent that file and stores information about the opened file. So if there are 100 files open on your OS, there will be 100 entries in the OS (somewhere in the kernel). These entries are represented by integers (..., 100, 101, 102, ...). This entry number is the file descriptor: an integer that uniquely identifies an open file in the operating system. If your process opens 10 files, its process table will have 10 entries for file descriptors.
To the kernel, all open files are referred to by file descriptors. A file descriptor is a non-negative integer. When we open an existing file or create a new file, the kernel returns a file descriptor to the process. The kernel maintains a table of all open file descriptors that are in use. When we want to read or write a file, we identify the file with the file descriptor that was returned by the open() or creat() call, and use it as an argument to either read() or write().
By convention, UNIX system shells associate file descriptor 0 with the standard input of a process, file descriptor 1 with standard output, and file descriptor 2 with standard error.
File descriptors range from 0 to OPEN_MAX.
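
The 0/1/2 convention can be seen directly from the shell, where each standard stream is redirected by its descriptor number (a minimal sketch; the file names are arbitrary):

```shell
# fd 1 (stdout) and fd 2 (stderr) are separate streams and can be
# redirected independently; fd 0 (stdin) can be fed from a file.
{ echo "normal output"; echo "error output" >&2; } > out.txt 2> err.txt
cat out.txt        # the line written to fd 1
cat err.txt        # the line written to fd 2
wc -l < out.txt    # wc reads its fd 0 from out.txt
rm -f out.txt err.txt
```

Note that out.txt receives only what was written to descriptor 1, and err.txt only what was written to descriptor 2.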
Detailed explanation:

File Descriptors: One of the first things a UNIX programmer learns is that every running program starts with three files already opened:

Descriptive Name    File Number    Description
Standard In         0              Input from the keyboard
Standard Out        1              Output to the console
Standard Error      2              Error output to the console

Figure 1.2. Default Unix Files

The standard files opened with any UNIX program.

This raises the question of what an open file represents. The value returned by an open call is termed a file descriptor and is essentially an index into an array of open files kept by the kernel.
Figure 1.3. Abstraction

File descriptors associate the abstraction provided by device-drivers with a file interface provided to a user.

File descriptors are an index into a file-descriptor table stored by the kernel. The kernel creates a file descriptor in response to an open call and associates the file descriptor with some abstraction of an underlying file-like object, be that an actual hardware device, a file-system, or something else entirely. Consequently, a process's read or write calls that reference that file descriptor are routed to the correct place by the kernel to ultimately do something useful.
In short, the file descriptor is the gateway into the kernel's abstractions of the underlying hardware. An overall view of the abstraction for physical devices is shown in Figure 1.3, “Abstraction”.

Starting at the lowest level, the operating system requires a programmer to create a device-driver to be able to communicate with a hardware device. This
device-driver is written to an API provided by the kernel just like in Example 1.2, “Abstraction in include/linux/virtio.h”; the device-driver will provide a
range of functions which are called by the kernel in response to various requirements. In the simplified example above, we can see the drivers provide a
read and write function that will be called in response to the analogous operations on the file-descriptor. The device-driver knows how to convert these
generic requests into specific requests or commands for a particular device.

To provide the abstraction to user-space, the kernel provides a file-interface via what is generically termed a device layer. Physical devices on the host are
represented by a file in a special file-system such as /dev. In UNIX-like systems, so called device-nodes have what are termed a major and a minor number
which allows the kernel to associate particular nodes with their underlying driver. These can be identified via ls as illustrated in Example 1.3, “Example of
major and minor numbers”.

Example 1.3. Example of major and minor numbers


$ ls -l /dev/null /dev/zero /dev/tty
crw-rw-rw- 1 root root 1, 3 Aug 26 13:12 /dev/null
crw-rw-rw- 1 root root 5, 0 Sep 2 15:06 /dev/tty
crw-rw-rw- 1 root root 1, 5 Aug 26 13:12 /dev/zero

This brings us to the file descriptor, which is the handle user-space uses to talk to the underlying device. In a broad sense, what happens when a file is opened is that the kernel uses the path information to map the file descriptor to something that provides an appropriate read, write, etc. API. When the open is for a device (such as the device-nodes listed above), the major and minor numbers of the opened device-node provide the information the kernel needs to find the correct device-driver and complete the mapping. The kernel will then know how to route further calls such as read to the underlying functions provided by the device-driver.

A non-device file operates similarly, although there are more layers in-between. The abstraction here is the mount-point; mounting a file-system has the
dual purpose of setting up a mapping so the file-system knows the underlying device that provides the storage and the kernel knows that files opened
under that mount-point should be directed to the file-system driver. Like device-drivers, file-systems are written to a particular generic file-system API
provided by the kernel.

There are indeed many other layers that complicate the picture in real-life. For example, the kernel will go to great efforts to cache as much data from disks
as possible in otherwise free-memory; this provides many speed advantages. It will also try to organise device access in the most efficient ways possible;
for example trying to order disk-access to ensure data stored physically close to each other is retrieved together, even if the requests did not arrive in such
an order. Further, many devices are of a more generic class, such as USB or SCSI devices, which provide their own abstraction layers to write to. Thus, rather than writing directly to devices, file-systems will go through these many layers. Understanding the kernel is to understand how these many APIs interrelate and coexist.

The Shell

The shell is the gateway to interacting with the operating system. Be it bash, zsh, csh, or any of the many other shells, they all fundamentally have only one major task: to allow you to execute programs (you will begin to understand how the shell actually does this when we talk about some of the internals of the operating system later).
But shells do much more than simply allow you to execute a program. They have powerful abilities to redirect files, allow you to execute multiple programs simultaneously, and script complete programs. These all come back to the "everything is a file" idiom.

Redirection
Often we do not want the standard file descriptors mentioned in the section called “File Descriptors” to point to their default places. For example, you may wish to capture all the output of a program into a file on disk, or, alternatively, have it read its commands from a file you prepared earlier. Another useful task might be to pass the output of one program to the input of another. With the help of the operating system, the shell facilitates all this and more.

Table 1.2. Standard Shell Redirection Facilities
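
The table itself did not survive extraction, but the most common redirection operators can be sketched as follows (file names are illustrative):

```shell
echo "first"  > redir.txt                # '>'   creates/overwrites the file with stdout
echo "second" >> redir.txt               # '>>'  appends stdout to the file
wc -l < redir.txt                        # '<'   feeds the file to the command's stdin
ls /no/such/dir 2> err.txt || true       # '2>'  captures stderr (fd 2) in a file
ls /no/such/dir > all.txt 2>&1 || true   # '2>&1' sends stderr wherever stdout goes
rm -f redir.txt err.txt all.txt
```

(The `|| true` is only there because `ls` exits with a failure status when the directory is missing; we want just its error output.)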

Implementing pipe
The implementation of ls | more is just another example of the power of abstraction. What fundamentally happens here is that instead of associating the file descriptor for the standard output with some sort of underlying device (such as the console, for output to the terminal), the descriptor is pointed to an in-memory buffer provided by the kernel, commonly termed a pipe. The trick here is that another process can associate its standard input with the other side of this same buffer and effectively consume the output of the first process. This is illustrated in Figure 1.4, “A pipe in action”.

Figure 1.4. A pipe in action


The pipe is an in-memory buffer provided by the kernel which allows the output of one process to be consumed as the input to another.

The pipe is an in-memory buffer that connects two processes together. File descriptors point to the pipe object, which buffers data sent to it (via a write) to be drained (via a read).

Writes to the pipe are stored by the kernel until a corresponding read from the other side drains the buffer. This is a very powerful concept and is one of the fundamental forms of inter-process communication, or IPC, in UNIX-like operating systems. The pipe allows more than just a data transfer; it can act as a signaling channel. If a process reads an empty pipe, it will by default block, or be put into hibernation, until there is some data available. Thus two processes may use a pipe to communicate that some action has been taken just by writing a byte of data; rather than the actual data being important, the mere presence of any data in the pipe can signal a message. Say, for example, one process requests that another print a file, something that will take some time. The two processes may set up a pipe between themselves, where the requesting process does a read on the empty pipe; being empty, that call blocks and the process does not continue. Once the print is done, the other process can write a message into the pipe, which effectively wakes up the requesting process and signals that the work is done.

Allowing processes to pass data between each other like this gives rise to another common UNIX idiom of small tools, each doing one particular thing. Chaining these small tools gives a flexibility that a single monolithic tool often cannot.
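
The blocking behavior described above can be sketched with a named pipe (FIFO), which behaves like the kernel buffer behind |; the file name and the sleep are arbitrary stand-ins for the slow "printer" process:

```shell
mkfifo signal.pipe
# "Printer" process: simulate slow work, then write one message into the pipe.
( sleep 1; echo done > signal.pipe ) &
# Requesting process: read blocks on the empty pipe until data arrives.
read msg < signal.pipe
echo "woken up: $msg"
rm -f signal.pipe
```

The read does not return until the background process writes, which is exactly the wake-up signaling the text describes.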
Linux tar command examples (tar == archiving, gzip == compression)
These days the Linux tar command is more often used to create compressed archives that can easily be moved around, from disk to disk, or computer to
computer. One user may archive a large collection of files, and another user may extract those files, with both of them using the tar command.

1) Create a tar archive of a subdirectory


A common use of the Linux tar command is to create an archive of a subdirectory. For instance, assuming there is a subdirectory named MyProject in the
current directory, you can use tar to create an uncompressed archive of that directory with this command:
tar cvf MyProject.20090816.tar MyProject

where MyProject.20090816.tar is the name of the archive (file) you are creating, and MyProject is the name of your subdirectory. It's common to
name an uncompressed archive with the .tar file extension.

In that command, I used three options to create the tar archive:

The letter c means "create archive".
The letter v means "verbose", which tells tar to print all the filenames as they are added to the archive.
The letter f tells tar that the name of the archive appears next (right after these options).

The v flag is completely optional, but I usually use it so I can see the progress of the command.

The general syntax of the tar command when creating an archive looks like this:
tar [flags] archive-file-name files-to-archive

2) List the contents of a tar archive


To list the contents of an uncompressed tar archive, just replace the c flag with the t flag, like this:
tar tvf my-archive.tar

This lists all the files in the archive, but does not extract them.

To list all the files in a compressed archive, add the z flag:
tar tzvf my-archive.tgz    # was tar'd and gzip'd in one step

That same command also works on a file that was tar'd and gzip'd in two separate steps (as indicated by the .tar.gz file extension):
tar tzvf my-archive.tar.gz    # was tar'd and gzip'd in two steps

I almost always list the contents of an unknown archive before I extract the contents. I think this is always good practice, especially when you're logged in
as the root user.

3) Extracting a tar archive


To extract the contents of a Linux tar archive, just replace the t flag with the x ("extract") flag. For uncompressed archives the extract command looks like this:
tar xvf my-archive.tar

For compressed archives the tar extract command looks like this:
tar xzvf my-archive.tar.gz

or this:
tar xzvf my-archive.tgz

4) Linux tar command with gzip - Creating a compressed archive


You can compress a tar archive with the gzip command after you create it, like this:
gzip MyProject.20090816.tar

This creates the file MyProject.20090816.tar.gz.

But these days it's more common to create a gzip'd tar archive with one tar command, like this:
tar czvf MyProject.20090816.tgz MyProject

As you can see, I added the 'z' flag there (which means "compress this archive with gzip"), and I changed the extension of the archive to .tgz, which is the
common file extension for files that have been tar'd and gzip'd in one step.

5) Creating a compressed archive of the current directory


Many times when using the Linux tar command you will want to create an archive of all files in the current directory, including all subdirectories. You can
easily create this archive like this:
tar czvf mydirectory.tgz .

In this tar example, the '.' at the end of the command is how you refer to the current directory.

6) Creating an archive in a different directory


You may also want to create a new tar archive like that previous example in a different directory, like this:
tar czvf /tmp/mydirectory.tgz .

As you can see, you just add a path before the name of your tar archive to specify what directory the archive should be created in.
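
Putting the previous examples together, a full create/list/extract round trip can be sketched like this (the directory and file names are made up; -C is the tar option that selects the directory to extract into):

```shell
mkdir -p MyProject
echo "hello" > MyProject/readme.txt
tar czf MyProject.tgz MyProject     # create a compressed archive
tar tzf MyProject.tgz               # list contents without extracting
mkdir -p restore
tar xzf MyProject.tgz -C restore    # extract into another directory
cat restore/MyProject/readme.txt
rm -rf MyProject restore MyProject.tgz
```

Listing before extracting, as recommended above, lets you confirm the archive unpacks into its own subdirectory rather than scattering files into the current one.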

Linux gzip - How to work with compressed files


If you work much with Unix and Linux systems you'll eventually run into the terrific file compression utilities, gzip and gunzip. As their names imply, the first
command creates compressed files (by gzip'ing them), and the second command unzip's those files.

In this tutorial I take a quick look at the gzip and gunzip file compression utilities, along with their companion tools you may not have known about: zcat,
zgrep, and zmore.
The Unix/Linux gzip command
You can compress a file with the Unix/Linux gzip command. For instance, if I run an ls -l command on an uncompressed Apache access log file named
access.log, I get this output:
-rw-r--r-- 1 al al 22733255 Aug 12 2008 access.log

Note that the size of this file is 22,733,255 bytes. Now, if we compress the file using gzip, like this:
gzip access.log

we end up creating a new, compressed file named access.log.gz. Here's what that file looks like:
-rw-r--r-- 1 al al 2009249 Aug 12 2008 access.log.gz

Notice that the file has been compressed from 22,733,255 bytes down to just 2,009,249 bytes. That's a huge savings in file size, roughly 10 to 1(!).

There's one important thing to note about gzip: The old file, access.log, has been replaced by this new compressed file, access.log.gz. This might freak you
out a little the first time you use this command, but very quickly you get used to it. (If for some reason you don't trust gzip when you first try it, feel free to
make a backup copy of your original file.)

The Linux gunzip command


The gunzip ("g unzip") command works just the opposite of gzip, converting a gzip'd file back to its original format. In the following example I'll convert the
gzip'd file we just created back to its original format:
gunzip access.log.gz

Running that command restores our original file, as you can see in this output:
-rw-r--r-- 1 al al 22733255 Aug 12 2008 access.log

The Linux file compress utilities (zcat, zmore, zgrep)


I used to think I had to uncompress a gzip'd file to work on it with commands like cat, grep, and more, but at some point I learned there were equivalent
gzip versions of these same commands, appropriately named zcat, zgrep, and zmore. So, anything you would normally do on a text file with the first three
commands you can do on a gzip'd file with the last three commands.

For instance, instead of using cat to display the entire contents of the file, you use zcat to work on the gzip'd file instead, like this:
zcat access.log.gz

(Of course that output will go on for a long time; it's roughly 22 MB of uncompressed text.)

You can also scroll through the file one page at a time with zmore:
zmore access.log.gz

And finally, you can grep through the compressed file with zgrep:
zgrep '/java/index.html' access.log.gz
There are also two other commands, zcmp and zdiff, that let you compare compressed files, but I personally haven't had the need for them. However, as you can imagine, they work like this:
zcmp file1.gz file2.gz

or
zdiff file1.gz file2.gz

Linux gzip / compress summary

As a quick summary, just remember that you don't have to uncompress files to work on them, you can use the following z-utilities to work on the
compressed files instead:
zcat
zmore
zgrep
zcmp
zdiff
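
A quick round trip through these utilities might look like this (the log file and its contents are made up):

```shell
printf 'alpha\nbeta\ngamma\n' > demo.log
gzip demo.log              # replaces demo.log with demo.log.gz
zcat demo.log.gz           # print the compressed file's contents
zgrep beta demo.log.gz     # grep inside it without uncompressing
gunzip demo.log.gz         # restore the original demo.log
rm -f demo.log
```

At no point is an uncompressed copy left on disk except when gunzip deliberately restores one.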
WC command
If we have a file du1L that contains 5 lines like this

unu
doi
trei
patru
cinci
then we can count the number of lines in several ways:
[mihail@oc8168772081 TEST_INFOACADEMY]$ cat du1L
unu
doi
trei
patru
cinci
[mihail@oc8168772081 TEST_INFOACADEMY]$ wc -l du1L
5 du1L
[mihail@oc8168772081 TEST_INFOACADEMY]$ wc -l <du1L
5
[mihail@oc8168772081 TEST_INFOACADEMY]$ cat du1L | wc -l    # piping the output of cat into the input of wc
5

Here Document
A here document is used to redirect input into an interactive shell script or program. We can run an interactive program within a shell script without user
action by supplying the required input for the interactive program, or interactive shell script. This is why it is called a here document: the required input is
here, as opposed to somewhere else.
This is the syntax for a here document:
program_name <<LABEL
Program_Input_1
Program_Input_2
Program_Input_3
Program_Input_#
LABEL

Example:
/usr/local/bin/My_program << EOF
Randy
Robin
Rusty
Jim
EOF
Notice that in a here document there are no leading spaces on the program-input lines between the two EOF labels. If a space is added to the input, the here
document may fail. The input that is supplied must be exactly the data the program is expecting, and many programs will fail if spaces are added to the
input.
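Here is the same pattern in a form you can run directly, using the standard sort command in place of the hypothetical My_program:

```shell
# sort takes its input "here", between the two EOF labels,
# instead of from a file or the keyboard
sort <<EOF
Randy
Robin
Rusty
Jim
EOF
```

The four names come back in alphabetical order, Jim first.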

Shell Script Commands


The basis for the shell script is the automation of a series of commands. We can execute most any command in a shell script that we can execute from the
command line. (One exception is trying to set an execution suid or sgid, sticky bit, within a shell script; it is not supported for security reasons.) For
commands that are executed often, we reduce errors by putting the commands in a shell script. We will eliminate typos and missed device definitions, and
we can do conditional tests that can ensure there are not any failures due to unexpected input or output. Commands and command structure will be
covered extensively throughout this book.

Most of the commands shown in Table 1-3 are used at some point in this book, depending on the task we are working on in each chapter.

UNIX Commands Review


COMMAND DESCRIPTION
passwd Changes user password
pwd Prints current directory
cd Changes directory
ls Lists files in a directory
wildcards * matches any number of characters; ? matches a single character
file Prints the type of file
cat Displays the contents of a file
pr Displays the contents of a file
pg or page Displays the contents of a file one page at a time
more Displays the contents of a file one page at a time
clear Clears the screen
cp or copy Copies a file
chown Changes the owner of a file
chgrp Changes the group of a file
chmod Changes file modes, permissions
rm Removes a file from the system
mv Renames a file
mkdir Creates a directory
rmdir Removes a directory
grep Pattern matching
egrep grep command for extended regular expressions
find Locates files and directories
>> Appends to the end of a file
> Redirects, creates, or overwrites a file
| Strings commands together, known as a pipe
|| Logical OR – command1 || command2 – execute command2 if command1 fails
& Executes in background
&& Logical AND – command1 && command2 – execute command2 if command1 succeeds
date Displays the system date and time
echo Writes strings to standard output
sleep Halts execution for the specified number of seconds
wc Counts the number of words, lines, and characters in a file
head Views the top of a file
tail Views the end of a file
diff Compares two files
sdiff Compares two files side by side (requires 132-character display)
spell Spell checker
lp, lpr, enq, qprt Prints a file
lpstat Status of system print queues
enable Enables, or starts, a print queue
disable Disables, or stops, a print queue
cal Displays a calendar
who Displays information about users on the system
w Extended who command
whoami Displays $LOGNAME or $USER environment parameters
who am I Displays login name, terminal, login date/time, and where logged in
f, finger Displays information about logged-in users, including the user's .plan and .project files
talk Enables two users to have a split-screen conversation
write Displays a message on a user's screen
wall Displays a message on all logged-in users’ screens
rwall Displays a message to all users on a remote host
rsh or remsh Executes a command, or login, on a remote host
df Displays filesystem statistics
ps Displays information on currently running processes
netstat Shows network status
vmstat Shows virtual memory status
iostat Shows input/output status
uname Shows name of the current operating system, as well as machine information
sar Reports system activity
basename Displays base filename of a string parameter
man Displays the online reference manual
su Switches to another user, also known as super-user
cut Writes out selected characters
awk Programming language to parse characters
sed Programming language for character substitution
vi Starts the vi editor
emacs Starts the emacs editor

COMMAND DESCRIPTION
() Runs the enclosed command in a sub-shell
(( )) Evaluates and assigns value to a variable and does math in a shell
$(( )) Evaluates the enclosed expression
[] Same as the test command
<> Used for string comparison
$( ) Command substitution
`command` Command substitution (older backquote form)

Command-Line Arguments

The command-line arguments $1, $2, $3,…$9 are positional parameters, with $0 pointing to the actual command, program, shell script, or function and $1,
$2, $3, …$9 as the arguments to the command.

The positional parameters $1, $2, and so on in a function are for the function's own use and may not be in the environment of the shell script that is calling the
function. Where a variable is known in a function or shell script is called the scope of the variable.
shift Command

The shift command is used to move positional parameters to the left; for example, shift causes $2 to become $1. We can also add a number to the shift
command to move the positions more than one position; for example, shift 3 causes $4 to move to the $1 position.

Sometimes we encounter situations where we have an unknown or varying number of arguments passed to a shell script or function, $1, $2, $3…. (also
known as positional parameters). Using the shift command is a good way of processing each positional parameter in the order they are listed.

To further explain the shift command, we will show how to process an unknown number of arguments passed to the shell script shown in Listing 1-2. Try to
follow through this example shell script structure. This script is using the shift command to process an unknown number of command-line arguments, or
positional parameters. In this script we will refer to these as tokens.

#!/usr/bin/sh
#
# SCRIPT: shifting.sh
# PLATFORM: Not platform dependent
#
# PURPOSE: This script is used to process all of the tokens which
# are pointed to by the command-line arguments, $1, $2, $3, etc…
#
# REV. LIST:
#

# Initialize all variables

TOTAL=0 # Initialize the TOTAL counter to zero

# Start a while loop

while [ $# -gt 0 ]      # Loop while command-line arguments remain
do
TOTAL=`expr $TOTAL + 1` # A little math in the
                        # shell script, a running
                        # total of tokens processed.
TOKEN=$1                # We always point to the $1 argument; process each $TOKEN here
shift                   # Grab the next token, i.e. $2 becomes $1

done

echo "Total number of tokens processed: $TOTAL"


Listing 1-2

To shift three variables at once, shift 3 is equivalent to shift; shift; shift.
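That equivalence is easy to verify with two small throwaway functions (the demo_* names are invented for this sketch):

```shell
# Each function discards the first three arguments, then prints the rest
demo_shift_once () {
    shift 3
    echo "$@"
}

demo_shift_thrice () {
    shift; shift; shift
    echo "$@"
}

demo_shift_once   a b c d e    # prints: d e
demo_shift_thrice a b c d e    # prints: d e
```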


Special *@#0$?_!- Parameters
There are three types of parameters: positional parameters, special parameters, and variables. Positional parameters are arguments present on the
command line, and they are referenced by a number. Special parameters are set by the shell to store information about aspects of its current state, such as
the number of arguments and the exit code of the last command. Their names are nonalphanumeric characters (for example, *, #, and _). Variables are
identified by a name. The value of a parameter is accessed by preceding its name, number, or character with a dollar sign, as in $3, $#, or $HOME. The
name may be surrounded by braces, as in ${10}, ${PWD}, or ${USER}.

Positional Parameters

The arguments on the command line are available to a shell program as numbered parameters. The first argument is $1, the second is $2, and so on.

There are special parameters that allow accessing all the command-line arguments at once. $* and $@ both will act the same unless they are enclosed in
double quotes, “ ”.
Special Parameter Definitions

The $* special parameter specifies all command-line arguments.


The $@ special parameter also specifies all command-line arguments.
The “$*” special parameter takes the entire list as one argument with spaces between.
The “$@” special parameter takes the entire list and separates it into separate arguments.
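A small sketch makes the difference concrete: the helper function below (count_args, an invented name) simply reports how many arguments it received:

```shell
count_args () { echo $# ; }

demo () {
    count_args "$*"    # the whole list, joined into ONE argument
    count_args "$@"    # one argument per original parameter
}

demo one two three
# "$*" gives: 1
# "$@" gives: 3
```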

We can rewrite the shell script shown in Listing 1-2 to process an unknown number of command-line arguments with either the $* or $@ special
parameters, as shown in Listing 1-3.

#!/usr/bin/sh
#
# SCRIPT: shifting.sh
# AUTHOR: Randy Michael
# DATE: 12-31-2007
# REV: 1.1.A
# PLATFORM: Not platform dependent
# PURPOSE: This script is used to process all of the tokens which
# are pointed to by the command-line arguments, $1, $2, $3, etc.
#
# REV LIST:
# Start a for loop
for TOKEN in $*
do
    :   # process each $TOKEN here
done
Listing 1-3 Example using the special parameter $*

We could have also used the $@ special parameter just as easily. As we see in the preceding code segment, the use of the $@ or $* is an alternative
solution to the same problem, and it was less code to write. Either technique accomplishes the same task.
So, the first two special parameters, $* and $@, expand to the value of all the positional parameters combined. $# expands to the number of positional
parameters. $0 contains the path to the currently running script, or to the shell itself if no script is being executed.

$$ contains the process identification number (PID) of the current process, $? is set to the exit code of the last-executed command, and $_ is set to the last
argument to that command. $! contains the PID of the last command executed in the background, and $- is set to the option flags currently in effect.
The Bourne shell could only address up to nine positional parameters. If a script used $10, it would be interpreted as $1 followed by a zero. To be able to
run old scripts, bash maintains that behavior. To access positional parameters greater than 9, the number must be enclosed in braces: ${15}.
The script's arguments are passed as positional parameters that can be accessed via their positions: $0, $1, $2, and so on. The command shift N moves the
positional parameters left by N positions; if you run shift (the default value of N is 1), then $1 is discarded, $2 becomes $1, $3 becomes $2, and so on ($0
itself is never shifted). There are some very clever and simple uses of shift to iterate through a list of parameters of unknown length.
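The positional and special parameters can be inspected with a one-line child shell; here sh -c runs a tiny script whose name and arguments we control (demo, a, b, c are arbitrary values for this sketch):

```shell
# $0 is the script name, $# the argument count, $* all arguments, $$ the PID
sh -c 'echo "name=$0 count=$# args=$* pid=$$"' demo a b c
# prints something like: name=demo count=3 args=a b c pid=12345
```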

Uppercase or Lowercase Text for Easy Testing

We often need to test text strings like filenames, variables, file text, and so on, for comparison. It can sometimes vary so widely that it is easier to
uppercase or lowercase the text for ease of comparison. The tr and typeset commands can be used to uppercase and lowercase text. This makes testing for
things like variable input a breeze. Following are some examples of using the tr command:

Variable values:
Expected input: TRUE
Real input: TRUE
Possible input: true, TRUE, True, and so on

Upcasing:

UPCASEVAR=$(echo $VARIABLE | tr '[a-z]' '[A-Z]')

Downcasing:

DOWNCASEVAR=$(echo $VARIABLE | tr '[A-Z]' '[a-z]')


In the preceding example of the tr command, we echo the string and use a pipe (|) to send the output of the echo statement to the tr command. As the
preceding examples show, uppercasing uses '[a-z]' '[A-Z]'.

NOTE The single quotes are required around the square brackets.

'[a-z]' '[A-Z]' Used for lower to uppercase


'[A-Z]' '[a-z]' Used for upper to lowercase

No matter what the user input is, we will always have the stable input of TRUE, if uppercased, and true, if lowercased. This reduces our code testing and
also helps the readability of the script.
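Putting the tr trick to work in the kind of input test described above (ANSWER is just a stand-in for whatever the user typed):

```shell
ANSWER="TrUe"                                 # simulated user input
ANSWER=$(echo $ANSWER | tr '[a-z]' '[A-Z]')   # normalize to uppercase

if [ "$ANSWER" = "TRUE" ]      # one stable test covers every case mix
then
    echo "Input accepted"
else
    echo "Input rejected"
fi
```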

We can also use typeset to control the attributes of a variable in the shell. In the previous example we are using the variable VARIABLE. We can set the
attribute to always translate all of the characters to uppercase or lowercase. To set the case attribute of the variable VARIABLE to always translate
characters assigned to it to uppercase, we use

typeset -u VARIABLE

The -u switch to the typeset command is used for uppercase. After we set the attribute of the variable VARIABLE, using the typeset command, anytime we
assign text characters to VARIABLE they are automatically translated to uppercase characters.

Example:

typeset -u VARIABLE
VARIABLE="True"
echo $VARIABLE

TRUE

To set the case attribute of the variable VARIABLE to always translate characters to lowercase, we use

typeset -l VARIABLE

Example:
typeset -l VARIABLE
VARIABLE="True"
echo $VARIABLE
true

Check the Return Code


Whenever we run a command there is a response back from the system about the last command that was executed, known as the return code. If the
command was successful the return code will be 0, zero. If it was not successful the return will be something other than 0, zero. To check the return code
we look at the value of the $? shell variable.

As an example, we want to check if the /usr/local/bin directory exists. Each of these blocks of code accomplishes the exact same thing:

test -d /usr/local/bin
if [ "$?" -eq 0 ] # Check the return code
then # The return code is zero

echo '/usr/local/bin does exist'

else # The return code is NOT zero

echo '/usr/local/bin does NOT exist'

fi

or
if test -d /usr/local/bin
then # The return code is zero

echo '/usr/local/bin does exist'

else # The return code is NOT zero

echo '/usr/local/bin does NOT exist'

fi

or
if [ -d /usr/local/bin ]
then # The return code is zero

echo '/usr/local/bin does exist'

else # The return code is NOT zero

echo '/usr/local/bin does NOT exist'

fi

Notice that we checked the return code using $? once. The other examples use the control structure's built-in test. The built-in tests do the same thing of
processing the return code, but the built-in tests hide this step in the process. All three of the previous examples give the exact same result. This is just a
matter of personal choice and readability.
Using getopts to Parse Command-Line Arguments

The getopts command is built into the shell. It retrieves valid command-line options specified by a single character preceded by a - (minus sign) or + (plus
sign). To specify that a command switch requires an argument, the switch character is followed by a : (colon). If the switch does not require an argument, the :
should be omitted. All of the options put together are called the OptionString, and this is followed by some variable name. The argument for each
switch is stored in a variable called OPTARG. If the entire OptionString is preceded by a : (colon), any unmatched switch option causes a ? to be loaded into
the VARIABLE. The form of the command follows:

getopts OptionString VARIABLE [ Argument … ]

The easiest way to explain this is with an example. For a script we need seconds, minutes, hours, days, and a process to monitor. For each one of these we
want to supply an argument — that is, -s5 -m10 -pmy_backup. In this we are specifying 5 seconds, 10 minutes, and the process is my_backup. Notice that
there does not have to be a space between the switch and the argument, and they can be entered in any order. This is what makes getopts so great! The
command line to set up our example looks like this:

SECS=0 # Initialize all to zero


MINUTES=0
HOURS=0
DAYS=0
PROCESS= # Initialize to null

while getopts :s:m:h:d:p: TIMED 2>/dev/null


do
case $TIMED in
s) SECS=$OPTARG
;;
m) (( MINUTES = $OPTARG * 60 ))
;;
h) (( HOURS = $OPTARG * 3600 ))
;;
d) (( DAYS = $OPTARG * 86400 ))
;;
p) PROCESS=$OPTARG
;;
\?) usage
exit 1
;;
esac
done

(( TOTAL_SECONDS = SECS + MINUTES + HOURS + DAYS ))

There are a few things to note in the getopts command. The getopts command needs to be part of a while loop with a case statement within the loop for
this example. On each option we specified, s, m, h, d, and p, we added a : (colon) after each switch. This tells getopts that an argument is required. The :
(colon) before the OptionString list tells getopts that if an unspecified option is given, to set the TIMED variable to the ? character. This allows us to call the
usage function and exit with a return code of 1. The first thing to be careful of is that getopts does not care what arguments it receives, so we have to take
action if we want to exit. The last thing to note is that the first line of the while loop has output redirection of standard error (file descriptor 2) to the bit
bucket. Anytime an unexpected argument is encountered, getopts sends a message to standard error (file descriptor 2). Because we expect this to happen,
we can just ignore the messages and discard them to /dev/null. We will study getopts a lot in this book.
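For reference, the pieces above can be assembled into one complete, runnable sketch. The usage function here is a minimal stand-in (the original only calls it), and portable $(( )) arithmetic replaces the ksh-style (( )) form:

```shell
#!/bin/sh
# Minimal sketch of the getopts pattern described above.

usage () {
    echo "Usage: $0 [-s seconds] [-m minutes] [-h hours] [-d days] [-p process]"
}

SECS=0; MINUTES=0; HOURS=0; DAYS=0
PROCESS=            # Initialize to null

while getopts :s:m:h:d:p: TIMED 2>/dev/null
do
    case $TIMED in
    s)  SECS=$OPTARG ;;
    m)  MINUTES=$(( OPTARG * 60 )) ;;
    h)  HOURS=$(( OPTARG * 3600 )) ;;
    d)  DAYS=$(( OPTARG * 86400 )) ;;
    p)  PROCESS=$OPTARG ;;
    \?) usage
        exit 1 ;;
    esac
done

TOTAL_SECONDS=$(( SECS + MINUTES + HOURS + DAYS ))
echo "Monitoring $PROCESS every $TOTAL_SECONDS seconds"
```

Run as, say, ./monitor.sh -s5 -m10 -pmy_backup, it reports 605 total seconds (5 + 10 * 60).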

Find Command
The Linux find command is very powerful. It can search the entire filesystem to find files and directories according to the search criteria you specify.
Besides using the find command to locate files, you can also use it to execute other Linux commands (grep, mv, rm, etc.) on the files and directories you
find, which makes find extremely powerful.

On a related note, don’t forget the locate command. It keeps a database on your Unix/Linux system to help find files very fast:
locate command

--------------

locate tomcat.sh # search the entire filesystem for 'tomcat.sh' (uses the locate database)

locate -i spring.jar # case-insensitive search

The remaining sections on this page describe more fully the commands just shown.

Basic find command examples

This first Linux find example searches through the root filesystem ("/") for the file named Chapter1. If it finds the file, it prints the location to the screen.
find / -name Chapter1 -type f -print

On Linux systems and modern Unix system you no longer need the -print option at the end of the find command, so you can issue it like this:
find / -name Chapter1 -type f

The -type f option here tells the find command to return only files. If you don't use it, the find command will return files, directories, and other things
like named pipes and device files that match the name pattern you specify. If you don't care about that, just leave the -type f option off your command.

This next find command searches through only the /usr and /home directories for any file named Chapter1.txt:
find /usr /home -name Chapter1.txt -type f

To search in the current directory — and all subdirectories — just use the . character to reference the current directory in your find commands, like this:
find . -name Chapter1 -type f

This next example searches through the /usr directory for all files that begin with the letters Chapter, followed by anything else. The filename can end with
any other combination of characters. It will match filenames such as Chapter, Chapter1, Chapter1.bad, Chapter-in-life, etc.:
find /usr -name "Chapter*" -type f
This next command searches through the /usr/local directory for files that end with the extension .html. These file locations are then printed to the
screen:
find /usr/local -name "*.html" -type f

Find directories with the Unix find command

Every option you just saw for finding files can also be used on directories. Just replace the -type f option with -type d. For instance, to find all directories
named build under the current directory, use this command:
find . -type d -name build

Find files that don't match a pattern

To find all files that don't match a filename pattern, use the -not argument of the find command, like this:
find . -type f -not -name "*.html"

That generates a list of all files beneath the current directory whose filename DOES NOT end in .html, so it matches files like *.txt,*.jpg, and so on.

Finding files that contain text (find + grep)


You can combine the Linux find and grep commands to powerfully search for text strings in many files.
This next command shows how to find all files beneath the current directory that end with the extension .java, and contain the characters StringBuffer. The
-l argument to the grep command tells it to just print the name of the file where a match is found, instead of printing all the matches themselves:
find . -type f -name "*.java" -exec grep -l StringBuffer {} \;

(Those last few characters are required any time you want to exec a command on the files that are found. I find it helpful to think of them as a placeholder
for each file that is found.)

This next example is similar, but here I use the -i argument to the grep command, telling it to ignore the case of the characters string, so it will find files
that contain string, String, STRING, etc.:
find . -type f -name "*.java" -exec grep -il string {} \;

Power file searching with find and grep


A lot of times I know that the string "foo" exists in a file somewhere in my directory tree, but I can't remember where. In those cases I roll out a power
command, a Linux find command that uses grep to search what it finds:
find . -type f -exec grep -il 'foo' {} \;

This is a special way of mixing the Linux find and grep commands together to search every file in every subdirectory of my current location. It searches for
the string "foo" in every file below the current directory, in a case-insensitive manner. This find/grep command can be broken down like this:
"." means "look in the current directory"
-type f means "look in files only"
-exec grep -il foo means "search for the string 'foo' in a case-insensitive manner, and return the matching line and
filename when a match is found
{} \; is a little bizarre syntax that you need to add to the end of your find command whenever you add the -exec option.
I try to think of it as a placeholder for the filenames the find command finds.
Acting on files you find (find + exec)
This command searches through the /usr/local directory for files that end with the extension .html. When these files are found, their permission is
changed to mode 644 (rw-r--r--).
find /usr/local -name "*.html" -type f -exec chmod 644 {} \;

This find command searches through the htdocs and cgi-bin directories for files that end with the extension .cgi. When these files are found, their
permission is changed to mode 755 (rwxr-xr-x). This example shows that the find command can easily search through multiple sub-directories (htdocs, cgi-
bin) at one time:
find htdocs cgi-bin -name "*.cgi" -type f -exec chmod 755 {} \;

Running the ls command on files you find


From time to time I run the find command with the ls command so I can get detailed information about files the find command locates. To get started, this
find command will find all the *.pl files (Perl files) beneath the current directory:
find . -name "*.pl"

In my current directory, the output of this command looks like this:


./news/newsbot/old/3filter.pl
./news/newsbot/tokenParser.pl
./news/robonews/makeListOfNewsURLs.pl

That's nice, but what if I want to see the last modification time of these files, or their filesize? No problem, I just add the ls -ld command to my find
command, like this:
find . -name "*.pl" -exec ls -ld {} \;

This results in this very different output:


-rwxrwxr-x 1 root root 2907 Jun 15 2002 ./news/newsbot/old/3filter.pl
-rwxrwxr-x 1 root root 336 Jun 17 2002 ./news/newsbot/tokenParser.pl
-rwxr-xr-x 1 root root 2371 Jun 17 2002 ./news/robonews/makeListOfNewsURLs.pl

The "-l" flag of the ls command tells ls to give me a "long listing" of each file, while the -d flag is extremely useful in this case; it tells ls to give me the same
output for a directory. Normally if you use the ls command on a directory, ls will list the contents of the directory, but if you use the -d option, you'll get one
line of information, as shown above.

Find and delete


Be very careful with these next two commands. If you type them in wrong, or make the wrong assumptions about what you're searching for, you can delete
a lot of files very fast. Make sure you have backups and all that, you have been warned.

Here's how to find all files beneath the current directory that begin with the letters 'Foo' and delete them.
find . -type f -name "Foo*" -exec rm {} \;

This one is even more dangerous. It finds all directories named CVS, and deletes them and their contents. Just like the previous command, be very careful
with this command, it is dangerous(!), and not recommended for newbies, or if you don't have a backup.
find . -type d -name CVS -exec rm -r {} \;

Find files with different file extensions

The syntax to find multiple filename extensions with one command looks like this:
find . -type f \( -name "*.c" -o -name "*.sh" \)

Just keep adding more "-o" (or) options for each filename extension.

Case-insensitive file searching


To perform a case-insensitive search with the Unix/Linux find command, use the -iname option instead of -name. For example, if you want to search for all
files and directories named foo, FOO, or any other combination of uppercase and lowercase characters beneath the current directory, use this command:
find . -iname foo

If you’re just interested in directories, search like this:


find . -iname foo -type d

And if you’re just looking for files, search like this:


find . -iname foo -type f

Find files by modification time


To find all files and directories that have been modified in the last seven days, use this find command:
find . -mtime -7

To limit the output to just files, add the -type f option as shown earlier:
find . -mtime -7 -type f

and to show just directories:


find . -mtime -7 -type d

More find command resources

If you’re just looking for a file by name, and you want to be able to find that file even faster than you can with the find command, take a look at the Linux
locate command. The locate command keeps filenames in a database, and can find them very fast.

For more details on the find command, check out our online version of the find man page.
grep command
The name grep comes from "global regular expression print" (the ed editor command g/re/p), but you can think of the grep command as a "search" command for Unix and Linux systems: it's
used to search for text strings and regular expressions within one or more files.

Searching for a text string in one file


This first grep command example searches for all occurrences of the text string 'fred' within the /etc/passwd file. It will find and display all of the lines
in this file that contain the text string fred, including lines that contain usernames like "fred", and also other strings like "alfred":
grep 'fred' /etc/passwd

In a simple example like this, the quotes around the string fred aren't necessary, but they are needed if you're searching for a string that contains spaces,
and will also be needed when you get into using regular expressions (search patterns).

Searching for a string in multiple files


Our next grep command example searches for all occurrences of the text string joe within all files of the current directory:
grep 'joe' *

The '*' wildcard matches all files in the current directory, and the grep output from this command will show both (a) the matching filename and (b) all lines
in all files that contain the string 'joe'.

As a quick note, instead of searching all files with the "*" wildcard, you can also use grep to search all files in the current directory that end in the file
extension .txt, like this:
grep 'joe' *.txt

Case-insensitive file searching with the Unix grep command


To perform a case-insensitive search with the grep command, just add the -i option, like this:
grep -i score gettysburg-address.txt

This grep search example matches the string "score", whether it is uppercase (SCORE), lowercase (score), or any mix of the two (Score, SCore, etc.).

Reversing the meaning of a grep search


You can reverse the meaning of a Linux grep search with the -v option. For instance, to show all the lines of my /etc/passwd file that don't contain the string
fred, I'd issue this command:
grep -v fred /etc/passwd
Using grep in a Unix/Linux command pipeline
The grep command is often used in a Unix/Linux pipeline. For instance, to show all the Apache httpd processes running on my Linux system, I use the grep
command in a pipeline with the ps command:
ps auxwww | grep httpd

This returns the following output:


root 17937 0.0 0.0 14760 6880 ? Ss Apr01 0:39 /usr/local/apache/bin/httpd -k start
nobody 21538 0.0 0.0 24372 17108 ? S Apr03 0:01 /usr/local/apache/bin/httpd -k start
nobody 24481 0.0 0.0 14760 6396 ? S Apr03 0:00 /usr/local/apache/bin/httpd -k start
nobody 26089 0.0 0.0 24144 16876 ? S Apr03 0:01 /usr/local/apache/bin/httpd -k start
nobody 27842 0.0 0.0 24896 17636 ? S Apr03 0:00 /usr/local/apache/bin/httpd -k start
nobody 27843 0.0 0.0 24192 16936 ? S Apr03 0:00 /usr/local/apache/bin/httpd -k start
nobody 27911 0.0 0.0 23888 16648 ? S Apr03 0:01 /usr/local/apache/bin/httpd -k start
nobody 28280 0.0 0.0 24664 17256 ? S Apr03 0:00 /usr/local/apache/bin/httpd -k start
nobody 30404 0.0 0.0 24360 17112 ? S Apr03 0:00 /usr/local/apache/bin/httpd -k start
nobody 31895 0.0 0.0 14760 6296 ? S Apr03 0:00 /usr/local/apache/bin/httpd -k start
root 31939 0.0 0.0 1848 548 pts/0 R+ Apr03 0:00 grep http

(I deleted about half of the "httpd -k start" lines from that output manually to save a little space.)

Here's how you can find all the Java processes running on your system using the ps and grep commands in a Unix pipeline:
ps auxwww | grep -i java

In this example I've piped the output of the ps auxwww command into my grep command. The grep command only prints the lines that have the string
"java" in them; all other lines from the ps command are not printed.

One way to find all the sub-directories in the current directory is to mix the Linux ls and grep commands together in a pipe, like this:
ls -al | grep '^d'

Here I'm using grep to list only those lines where the first character in the line is the letter d.

Using the Linux grep command to search for multiple patterns at one time (egrep)

You can use a different version of the grep command to search for multiple patterns at one time. To do this, just use the egrep command
instead of grep, like this:
egrep 'score|nation|liberty|equal' gettysburg-address.txt

This Unix egrep command searches the file named gettysburg-address.txt for the four strings shown (score, nation, liberty, and equal). It returns any lines
from the file that contain any of those words.

I should also note that "egrep" stands for "extended grep", and as you can see, it lets you do things like searching for multiple patterns at one time.
Searching for regular expressions (regex patterns) with grep
Of course the Linux grep command is much more powerful than this, and can handle very powerful regular expressions (regex patterns). In a simple
example, suppose you want to search for the strings "Foo" or "Goo" in all files in the current directory. That grep command would be:
grep '[FG]oo' *

If you want to search for a sequence of three integers with grep you might use a command like this:
grep '[0-9][0-9][0-9]' *

This next grep command searches for all occurrences of the text string fred within the /etc/passwd file, but also requires that the "f" in the name "fred" be
in the first column of each record (that's what the caret character tells grep). Using this more-advanced search, a user named "alfred" would not be
matched, because the letter "a" will be in the first column:
grep '^fred' /etc/passwd

Regular expressions can get much, much more complicated (and powerful) than this, so I'll just leave it here for now.

Display only filenames with a grep search

If you're looking through a lot of files for a pattern, and you just want to find the names of the files that contain your pattern (or "patterns", as shown with
egrep) -- but don't want to see each individual grep pattern match -- just add the -l (lowercase letter L) to your grep command, like this:
grep -l StartInterval *.plist

This command doesn't show every line in every file that contains the string "StartInterval"; it just shows the names of all the files that contain this string,
like this:
com.apple.atrun.plist
com.apple.backupd-auto.plist
com.apple.dashboard.advisory.fetch.plist
com.apple.locationd.plist
org.amavis.amavisd_cleanup.plist

Of course you can also combine grep command arguments, so if you didn't happen to know how to capitalize "StartInterval" in that previous example, you
could just add the -i argument to ignore case, like this:
grep -il startinterval *.plist

and that would have worked just fine as well, returning the same results as the previous grep command example.

Showing matching line numbers with Linux grep

To show the line numbers of the files that match your grep command, just add the -n option, like this:
grep -n we gettysburg-address.txt
Searching my sample gettysburg-address.txt file, I get the following output from this command:
9:Now we are engaged in a great civil war,
22:that we should do this.
24:But in a larger sense we can not dedicate -
25:we can not consecrate -
26:we can not hallow this ground.
29:have consecrated it far above our poor power
33:what we say here,
43:we take increased devotion to that cause
46:that we here highly resolve that these dead

grep before/after - Showing lines before or after your grep pattern match

After a recent comment, I just learned that you can display lines before or after your grep pattern match, which is also very cool. To display five lines before
the phrase "the living" in my sample document, use the -B argument, like this:
grep -B 5 "the living" gettysburg-address.txt

This grep command example returns this output:


The world will little note,
nor long remember,
what we say here,
but can never forget what they did here.
It is for us, the living,

Similarly, to show the five lines after that same search phrase, use the -A argument with your Unix grep command, like this:
grep -A 5 "the living" gettysburg-address.txt

This grep "after" command returns the following output:


It is for us, the living,
rather to be dedicated here
to the unfinished work which they have,
thus far, so nobly carried on.
It is rather for us to be here
dedicated to the great task remaining before us -

Of course you can use any number after the -A and -B options; I'm just using the number five here as an example.

Related Unix/Linux grep commands and tutorials


We hope you enjoyed this Linux grep command tutorial and our grep examples.

There are at least two other commands related to grep that you should at least be aware of. The fgrep command stands for "fast grep", or "fixed strings",
depending on who you talk to. The egrep command stands for "extended grep", and lets you use even more powerful regular expressions.

The strings command is good at finding printable strings in a binary file.


The locate command is more related to the find command, but I thought I would note that it is good at finding files in the entire filesystem when you know
the filename, or part of the filename.

And as I mentioned in the previous section, Mac OS X systems have the mdfind command. As a practical matter, I use plain old grep 99% of the time.

Scripting
Variables
A variable is a parameter denoted by a name; a name is a word containing only letters, numbers, or underscores and beginning with a letter or an
underscore.

Values can be assigned to variables in the following form:


name=VALUE

Many variables are set by the shell itself, including three you have already seen: HOME, PWD, and PATH. With only two minor exceptions, auto_resume and
histchars, all the variables set by the shell are named entirely in uppercase letters.
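A quick sketch of assignment and expansion (greeting is just an illustrative name):

```shell
# No spaces are allowed around the = in an assignment
greeting="Hello"
# A $ prefix expands the variable
echo "$greeting, world"
# Shell-set variables are read the same way
echo "Current directory: $PWD"
```

Note that greeting="Hello" and greeting = "Hello" are different: in the second form the shell parses greeting as a command name, not an assignment.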

Arguments and Options

The words entered after the command are its arguments. These are words separated by whitespace (one or more spaces or tabs). If the whitespace is
escaped or quoted, it no longer separates words but becomes part of the word.

The following command lines all have four arguments:


echo 1 '2 3' 4 5
echo -n Now\ is the time
printf "%s %s\n" one two three

In the first line, the spaces between 2 and 3 are quoted because they are surrounded by single quotation marks. In the second, the space after Now is
escaped by a backslash, which is the shell’s escape character.
In the final line, a space is quoted with double quotes.
In the second command, the first argument is an option. Traditionally, options to Unix commands are a single letter preceded by a hyphen, sometimes
followed by an argument. The GNU commands found in Linux distributions often accept long options as well. These are words preceded by a double
hyphen. For example, most GNU utilities have an option called --version that prints the version:
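For instance, a sketch with GNU grep, whose short -V and long --version spellings are equivalent (the exact output depends on the installed version):

```shell
# Long option form (GNU-style)
grep --version
# Equivalent traditional short option
grep -V
```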

An internal command in all modern shells, echo prints its arguments with a single space between them to the standard output stream, followed by a
newline:
$ echo The quick brown fox

The quick brown fox

The default newline can be suppressed in one of two ways, depending on the shell:
$ echo -n No newline
No newline$ echo "No newline\c"

No newline$

The BSD variety of echo accepted the option -n, which suppressed the newline. AT&T’s version used an escape sequence, \c, to do the same thing.

bash has a -e option to activate escape sequences such as \c, but by default it behaves like the BSD version and uses -n to prevent a newline from being
printed. (The escape sequences recognized by echo -e are the same as those described in the next section, with the addition of \c.)

Add -e to the echo command if you want the escape sequences to be recognized.

If you limit the use of echo to situations where there cannot be a conflict, that is, where you are sure the arguments do not begin with -n and do not
contain escape sequences, you will be fairly safe. For everything else (or if you’re not sure), use printf.
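A sketch of the conflict, and of how printf avoids it (the strings here are arbitrary):

```shell
# echo swallows an argument that looks like its own option
echo -n              # prints nothing rather than the text "-n"
# printf treats everything after the format string as data
printf '%s\n' -n     # prints -n
# printf also gives explicit control over escapes, no -e needed
printf 'one\ttwo\n'  # a real tab between the words
```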

As you know, it is almost obligatory to begin with a hello world script, and we will not disappoint as far as this is concerned. We will begin by creating a new
script, $HOME/bin/hello1.sh. The contents of the file should read as follows:

#!/bin/bash
echo "Hello World"
exit 0

I am hoping that you haven't struggled with this too much; it is just three lines after all. I encourage you to run through the examples as you read to really
help instill the information through good hands-on practice.

#!/bin/bash: Normally, this is always the first line of the script and is known as the shebang. The shebang starts with a comment (a comment in a shell
script begins with the # symbol), but the system still uses this line: it tells the system which interpreter to use to execute the script. We use bash for
shell scripts, and we may use PHP or Perl for other scripts, as required. If we do not add this line, the commands will be run within the current shell; this
may cause issues if we run another shell.

echo "Hello World": The echo command is a shell builtin and is used to write to standard output, STDOUT, which defaults to the
screen. The information to print is enclosed in double quotes; there will be more on quotes later.

exit 0: The exit command is a shell builtin and is used to leave or exit the script. The exit code is supplied as an integer argument. A value of anything
other than 0 indicates some type of error in the script's execution.

Executing the script

With the script saved in a directory on our PATH, it still will not execute as a standalone script. We will have to assign execute permissions to the file, as
needed. For a simple test, we can run the file directly with bash. The following command shows you how to do this:
$ bash $HOME/bin/hello1.sh

We should be rewarded with the Hello World text displayed on our screens. This is not a long-term solution; we want the script in the
$HOME/bin directory specifically so that it can be run easily from any location without typing the full path. We need to add the execute
permissions as shown in the following code:
$ chmod +x $HOME/bin/hello1.sh
We should now be able to run the script directly:

Checking the exit status

This script is simple, but we still have to know how to make use of the exit codes from scripts and other applications. The command-line list that we
generated earlier while creating the $HOME/bin directory is a good example of how we can use the exit code:
$ command1 || command2

In the preceding example, command2 is executed only if command1 fails in some way. To be specific, command2 will run if command1 exits with a status code
other than 0.

Similarly, in the following extract:


$ command1 && command2

We will only execute command2 if command1 succeeds and issues an exit code of 0.
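A minimal sketch of both operators using true and false, which always exit with status 0 and 1 respectively:

```shell
# false fails, so the || branch runs
false || echo "command1 failed"
# true succeeds, so the && branch runs
true && echo "command1 succeeded"
# Combined: exactly one of the two messages prints
false && echo "not reached" || echo "fell through to ||"
```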

To read the exit code from our script explicitly, we can view the $? variable, as shown in the following example:
$ hello1.sh

$ echo $?

The expected output is 0, as this is what we have added to the last line of the file, and there is precious little else that can go wrong to stop us from
reaching that line.

Referring back to the command hierarchy within this chapter, we can use type to determine the location and type of the file hello1.sh:
$ type hello1.sh #To determine the type and path
$ type -a hello1.sh #To print all commands found if the name is NOT unique
$ type -t hello1.sh #To print the simple type of the command

And the output :


Variable string substitution
${var%Pattern} Remove from $var the shortest part of $Pattern that matches the back end of $var.
${var%%Pattern} Remove from $var the longest part of $Pattern that matches the back end of $var.
Example :
[mihail@oc8168772081 ~]$ day=daydaynightnight
[mihail@oc8168772081 ~]$ echo $day
daydaynightnight
[mihail@oc8168772081 ~]$ echo ${day%%n*t} #pattern is n*t and it matches nightnight, the longest match
dayday
[mihail@oc8168772081 ~]$ echo ${day%n*t} #pattern is n*t and it matches night <- shortest match
daydaynight

${var#Pattern} Remove from $var the shortest part of $Pattern that matches the front end of $var.
${var##Pattern} Remove from $var the longest part of $Pattern that matches the front end of $var.
Example :
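Mirroring the % examples above with the same string, now trimming from the front:

```shell
day=daydaynightnight
# Shortest front match of d*y is "day"
echo ${day#d*y}     # daynightnight
# Longest front match of d*y is "dayday"
echo ${day##d*y}    # nightnight
```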

Using the test shell builtin

It is probably time for us to pull over to the side of the scripting highway and look a little more at this command, test. This is both a shell builtin and an
executable file in its own right. Of course, the shell will find the builtin first, unless we specify the full path to the executable.

When the test command is run without any expression to evaluate, it returns false. So, if we run the test as shown in the following
command:

$ test

The exit status will be 1, even though no error output is shown. The test command always returns either true or false, as exit status 0 or 1, respectively.
The basic syntax of test is:

test EXPRESSION

Or, we can inverse the test command with:

test ! EXPRESSION
If we need to include multiple expressions, these can be ANDed or ORed together using the -a and -o options, respectively:

test EXPRESSION -a EXPRESSION


test EXPRESSION -o EXPRESSION

We can also write a shorthand version, replacing the word test with square brackets surrounding the expression, as shown in the following example:

[ EXPRESSION ]
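A short sketch of these forms using simple string comparisons:

```shell
# -a (AND): both expressions must be true (exit status 0)
test "a" = "a" -a "b" = "b" && echo "both true"
# -o (OR): at least one expression must be true
[ "a" = "b" -o "b" = "b" ] && echo "at least one true"
# ! inverts the result of the test
test ! "a" = "b" && echo "a is not b"
```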

Testing strings

We can test for the equality or inequality of two strings. For example, one of the ways to test the root user is using the following command:

test $USER = root

We could also write this using the square bracket notation:

[ $USER = root ]

Equally, we could test for a non-root account with the following two methods:

test ! $USER = root


[ ! $USER = root ]

We can also test for zero values and non-zero values of strings. We saw this in an earlier example in this chapter.

To test if a string has a value, we can use the -n option. We can check whether the current connection is made via SSH by checking for the existence of a
variable in the user's environment. We show this using test and square brackets in the following two examples (quote the variable: if it is unset and
unquoted, test -n is left with no argument at all and evaluates as true):

test -n "$SSH_TTY"
[ -n "$SSH_TTY" ]

If this is true, then the connection is made with SSH; if it is false, then the connection is not via SSH.
As we saw earlier, testing for a zero string value is useful when deciding if a variable is set:

test -z "$1"

Or, more simply, we could use:

[ -z "$1" ]
A true result for this query means that no input parameters have been supplied to the script.

Testing integers
As well as testing string values, bash scripts can test for integer values and whole numbers. Another way of testing the input of a script is to count the
number of positional parameters and test that the number is above 0:

test $# -gt 0

Or using the brackets, as shown:

[ $# -gt 0 ]

In relation to positional parameters, the variable $# represents the number of parameters passed to the script. To test equality of integer
values, the -eq option is used, not the = symbol.
Testing file types

While testing for values we can test for the existence of a file or file type. For example, we may only want to delete a file if it is a symbolic link. I use this
while compiling a kernel. The /usr/src/linux directory should be a symbolic link to the latest kernel source code. If I download a newer version before
compiling the new kernel, I need to delete the existing link and create a new link. Just in case someone has created the /usr/src/linux directory, we
can test it as a link before removing it:

# [ -h /usr/src/linux ] && rm /usr/src/linux

The -h option tests whether the file is a symbolic link. Other options include:

-d: This shows that it's a directory


-e: This shows that the file exists in any form
-x: This shows that the file is executable
-f: This shows that the file is a regular file
-r: This shows that the file is readable
-p: This shows that the file is a named pipe
-b: This shows that the file is a block device
-c: This shows that the file is a character device

More options do exist, so delve into the man pages as you need. We will use different options throughout the book, giving you practical and useful
examples.
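A sketch exercising a few of these options on files created on the spot (the temporary names are illustrative):

```shell
# Create a throwaway directory, a regular file, and a link to it
dir=$(mktemp -d)
touch "$dir/plain.txt"
ln -s "$dir/plain.txt" "$dir/link"

[ -d "$dir" ]           && echo "directory"
[ -f "$dir/plain.txt" ] && echo "regular file"
[ -h "$dir/link" ]      && echo "symbolic link"
[ -e "$dir/missing" ]   || echo "does not exist"

rm -r "$dir"            # clean up
```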
File Tests
Several operators test the state of a file. A file’s existence can be tested with -e (or the nonstandard -a). The type of file can be checked with -f for a regular
file, -d for a directory, and -h or -L for a symbolic link. Other operators test for special types of files and for which permission bits are set.

Here are some examples:


test -f /etc/fstab ## true if a regular file
test -h /etc/rc.local ## true if a symbolic link
[ -x "$HOME/bin/hw" ] ## true if you can execute the file
[[ -s $HOME/bin/hw ]] ## true if the file exists and is not empty

Integer Tests

Comparisons between integers use the -eq, -ne, -gt, -lt, -ge, and -le operators.
The equality of integers is tested with -eq:

$ test 1 -eq 1
$ echo $?
0
$ [ 2 -eq 1 ]
$ echo $?
1

Inequality is tested with -ne:

$ [ 2 -ne 1 ]
$ echo $?
0

The remaining operators test greater than, less than, greater than or equal to, and less than or equal to.

String Tests

Strings are concatenations of zero or more characters and can include any character except NUL (ASCII 0). They can be tested for equality or inequality, for
nonempty string or null string , , and in bash for alphabetical ordering. The = operator tests for equality, in other words, whether they are identical; != tests
for inequality. bash also accepts == for equality, but there is no reason to use this nonstandard operator.

Here are some examples:

test "$a" = "$b"


[ "$q" != "$b" ]

The -z and -n operators return successfully if their arguments are empty or nonempty:

$ [ -z "" ]
$ echo $?
0
$ test -n ""
$ echo $?
1

The greater-than and less-than symbols are used in bash to compare the lexical positions of strings and must be escaped to prevent them from being
interpreted as redirection operators:

$ str1=abc
$ str2=def
$ test "$str1" \< "$str2"
$ echo $?
0
$ test "$str1" \> "$str2"
$ echo $?
1

The previous tests can be combined in a single call to test with the -a (logical AND) and -o (logical OR) operators:

test -f /path/to/file -a $test -eq 1


test -x bin/file -o $test -gt 1

test is usually used in combination with if or the conditional operators && and ||.

[[ … ]]: Evaluate an Expression

Like test, [[ ... ]] evaluates an expression. Unlike test, it is not a builtin command. It is part of the shell grammar and not subject to the same parsing as a
builtin command. Parameters are expanded, but word splitting and file name expansion are not performed on words between [[ and ]].
It supports all the same operators as test, with some enhancements and additions. It is, however, nonstandard, so it is better not to use it when test could
perform the same function.

Example
String test with pattern matching
Test if an argument starts with a letter

# we have a script called testPattern


#!/bin/bash
[[ $1 == [a-z]* ]] || echo "$1 is not starting with a letter" # the echo command runs only if the first part, i.e. the test, fails
Or:
[[ $1 = [a-z]* ]] || echo "$1 is not starting with a letter" # without ==, using just a single =

Or with both results:


#!/bin/bash
[[ $1 = [a-z]* ]] && echo "$1 is starting with a letter" || echo "$1 is not starting with a letter"

[mihail@oc8168772081 TEST_INFOACADEMY]$ ANI2=1


[mihail@oc8168772081 TEST_INFOACADEMY]$ ./testPatern $ANI2
1 is not starting with a letter
[mihail@oc8168772081 TEST_INFOACADEMY]$ ANIMAL=cow
[mihail@oc8168772081 TEST_INFOACADEMY]$ ./testPatern $ANIMAL
cow is starting with a letter

\([a-z]\)\([a-z]\)[a-z]\2\1 can find "radar": "r" in the first group, "a" in the second, "d" matched by [a-z], then the remembered "a" and "r".

The difference between direct assignment and using let


#!/bin/bash
echo "My name is 'basename $0' - I was called as $0"
echo "I was called with $# parameters."
count=1
while [ "$#" -ge "1" ]; do
echo "Parameter number $count is: $1"
let count=$count+1
shift
done

We get the output:
[mihail@oc8168772081 TEST_INFOACADEMY]$ ./myparams unu doi trei
My name is 'basename ./myparams' - I was called as ./myparams
I was called with 3 parameters.
Parameter number 1 is: unu
Parameter number 2 is: doi
Parameter number 3 is: trei

If we do not use let count=$count+1:


#!/bin/bash
echo "My name is 'basename $0' - I was called as $0"
echo "I was called with $# parameters."
count=1
while [ "$#" -ge "1" ]; do
echo "Parameter number $count is: $1"
count=$count+1
shift
done

we get the unwanted output:

[mihail@oc8168772081 TEST_INFOACADEMY]$ ./myparams unu doi trei


My name is 'basename ./myparams' - I was called as ./myparams
I was called with 3 parameters.
Parameter number 1 is: unu
Parameter number 1+1 is: doi
Parameter number 1+1+1 is: trei
[mihail@oc8168772081 TEST_INFOACADEMY]$
Beware of the difference between [ and [[
Example:
< is less than, in ASCII alphabetical order

if [[ "$a" < "$b" ]]

if [ "$a" \< "$b" ]   # you need to escape the operator for single [

The == comparison operator behaves differently within a double-brackets test than within single brackets.

[[ $a == z* ]] # True if $a starts with an "z" (pattern matching).


[[ $a == "z*" ]] # True if $a is equal to z* (literal matching).

[ $a == z* ] # File globbing and word splitting take place.


[ "$a" == "z*" ] # True if $a is equal to z* (literal matching).

Examples of testing directories


#!/bin/bash
DEST="/backup"
SRC="/home"

# Make sure backup dir exists


[ ! -d "$DEST" ] && mkdir -p "$DEST"

# If source directory does not exist, die...


[ ! -d "$SRC" ] && { echo "$SRC directory not found. Cannot make backup to $DEST"; exit 1; }

# Okay, dump backup using tar


echo "Backup directory $DEST..."
echo "Source directory $SRC..."
/bin/tar zcf "$DEST/backup.tar.gz" "$SRC" 2>/dev/null

# Find out if our backup job failed or not and notify on screen
[ $? -eq 0 ] && echo "Backup done!" || echo "Backup failed"

Finding whether certain words or characters are inside a file or a variable


# The very useful "if-grep" construct:
# -----------------------------------
if grep -q Bash file
then echo "File contains at least one occurrence of Bash."
fi
word=Linux
letter_sequence=inu
if echo "$word" | grep -q "$letter_sequence"
# The "-q" option to grep suppresses output.
then
echo "$letter_sequence found in $word"
else
echo "$letter_sequence not found in $word"
fi

SED
[mihail@oc8168772081 ~]$ echo Sunday | sed 's/day/night/'
Sunnight

If you want to switch two words around, you can remember two patterns and change the order around:
[mihail@oc8168772081 ~]$ echo day one |sed 's/\([a-z]*\) \([a-z]*\)/\2 \1/'
one day

Sometimes you want to search for a pattern and add some characters, like parentheses, around or near the pattern you found.
[mihail@oc8168772081 ~]$ echo day | sed 's/[a-z]*/(&)/'
(day)

[mihail@oc8168772081 ~]$ echo day | sed 's/[a-z]*/& &/'


day day

[mihail@oc8168772081 ~]$ echo day | sed 's/[a-z]*/&&/'


dayday

If you want to take the numbers off a line: the "\1" is the first remembered pattern, and the "\2" is the second remembered pattern. Sed has up to nine
remembered patterns. A pattern is remembered by enclosing it between escaped parentheses "\(" and "\)".
[mihail@oc8168772081 ~]$ echo day1234 | sed 's/\([a-z]*\).*/\1/'
day


Eliminate the numbers


[mihail@oc8168772081 ~]$ echo day1 one2 |sed 's/\([a-z]*\).* \([a-z]*\).*/\2 \1/'
one day

If you want to reverse the first three characters on a line, you can use
[mihail@oc8168772081 ~]$ echo alex |sed 's/^\(.\)\(.\)\(.\)/\3\2\1/'
elax

"[^ ]*" matches any run of characters other than a space; for example, it matches "won't," as a single token.

Put the first word in parentheses


[mihail@oc8168772081 ~]$ echo alex oty | sed 's/[^ ]*/(&)/'
(alex) oty

Put each word in parentheses; use g for global


[mihail@oc8168772081 ~]$ echo alex oty | sed 's/[^ ]*/(&)/g'
(alex) (oty)

Sed is not recursive (it doesn't look in the replacement to do another replacement)


[mihail@oc8168772081 ~]$ echo loop | sed 's/loop/loop the loop/g'
loop the loop

[mihail@oc8168772081 ~]$ echo oty alex | sed 's/oty alex/oa/' # replace two words with two letters
oa
[mihail@oc8168772081 ~]$ echo oty alex nijdyag | sed 's/oty alex/oa/' # same, but the rest remains
oa nijdyag
[mihail@oc8168772081 ~]$ echo oty alex nijdyag | sed 's/oty alex .*/oa/' # remove the remaining words
oa

What characters do I need to escape when using sed in a sh script?

There are two levels of interpretation here: the shell, and sed.
In the shell, everything between single quotes is interpreted literally, except for single quotes themselves. You can effectively have a single quote between
single quotes by writing '\'' (close single quote, one literal single quote, open single quote).
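A quick sketch of the trick, shown with echo so the resulting argument is visible:

```shell
# Three quoted pieces: 'it'  \'  's here'  are concatenated
# by the shell into the single argument: it's here
echo 'it'\''s here'
```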

Sed uses basic regular expressions. In a BRE, in order to have them treated literally, the characters $.*[\^ need to be quoted by preceding them by a
backslash, except inside character sets ([…]). Letters, digits and (){}+?| must not be quoted (you can get away with quoting some of these in some
implementations). The sequences \(, \), \n, and in some implementations \{, \}, \+, \?, \| and other backslash+alphanumerics have special
meanings. You can get away with not quoting $^ in some positions in some implementations.

Furthermore, you need a backslash before / if it is to appear in the regex outside of bracket expressions. You can choose an alternative character as the
delimiter by writing, e.g., s~/dir~/replacement~ or \~/dir~p; you'll need a backslash before the delimiter if you want to include it in the BRE. If
you choose a character that has a special meaning in a BRE and you want to include it literally, you'll need three backslashes; I do not recommend this, as it
may behave differently in some implementations.

In a nutshell, for sed 's/…/…/':

• Write the regex between single quotes.


• Use '\'' to end up with a single quote in the regex.
• Put a backslash before $.*/[\]^ and only those characters (but not inside bracket expressions). (Technically you shouldn't put a backslash before
] but I don't know of an implementation that treats ] and \] differently outside of bracket expressions.)
• Inside a bracket expression, for - to be treated literally, make sure it is first or last ([abc-] or [-abc], not [a-bc]).
• Inside a bracket expression, for ^ to be treated literally, make sure it is not first (use [abc^], not [^abc]).
• To include ] in the list of characters matched by a bracket expression, make it the first character (or first after ^ for a negated set): []abc] or
[^]abc] (not [abc]] nor [abc\]]).

In the replacement text:

• & and \ need to be quoted by preceding them by a backslash, as do the delimiter (usually /) and newlines.
• \ followed by a digit has a special meaning. \ followed by a letter has a special meaning (special characters) in some implementations, and \
followed by some other character means \c or c depending on the implementation.
• With single quotes around the argument (sed 's/…/…/'), use '\'' to put a single quote in the replacement text.

If the regex or replacement text comes from a shell variable, remember that

• The regex is a BRE, not a literal string.


• In the regex, a newline needs to be expressed as \n (which will never match unless you have other sed code adding newline characters to the
pattern space). But note that it won't work inside bracket expressions with some sed implementations.
• In the replacement text, &, \ and newlines need to be quoted.
• The delimiter needs to be quoted (but not inside bracket expressions).
• Use double quotes for interpolation: sed -e "s/$BRE/$REPL/".
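A sketch putting the last point together (pattern and repl are illustrative variable names; remember the variable is treated as a BRE, not a literal string):

```shell
pattern='world'
repl='sed'
# Double quotes let the shell substitute the variables before sed sees them
echo "hello world" | sed "s/$pattern/$repl/"
# -> hello sed
```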
========another answer to the topic ============

Take the following script:


#!/bin/sh
sed 's/(127\.0\.1\.1)\s/\1/' [some file]

If I try to run this in sh (dash here), it'll fail because of the parentheses, which need to be escaped.

The problem you're experiencing isn't due to shell interpolating and escapes - it's because you're attempting to use extended
regular expression syntax without passing sed the -r or --regexp-extended option.
Change your sed line from
sed 's/(127\.0\.1\.1)\s/\1/' [some file]

to
sed -r 's/(127\.0\.1\.1)\s/\1/' [some file]

and it will work as I believe you intend.

By default sed uses basic regular expressions (think grep style), which would require the following syntax:
sed 's/\(127\.0\.1\.1\)[ \t]/\1/' [some file]
===============

Deleting characters from a string or a filename


[mihail@oc8168772081 TEST_INFOACADEMY]$ echo vbnm | sed 's/v.//g' # v. (v and the following character) is replaced with nothing
nm
[mihail@oc8168772081 TEST_INFOACADEMY]$ echo cvbnm | sed 's/v.//g' # same
cnm

dirname and basename


[mihail@oc8168772081 TEST_INFOACADEMY]$ pwd
/home/mihail/TEST_INFOACADEMY
[mihail@oc8168772081 TEST_INFOACADEMY]$ my_dir=$(dirname $(pwd)) # or: my_dir=$(dirname `pwd`)
[mihail@oc8168772081 TEST_INFOACADEMY]$ echo $my_dir
/home/mihail

[mihail@oc8168772081 TEST_INFOACADEMY]$ my_base=$(basename $(pwd))


[mihail@oc8168772081 TEST_INFOACADEMY]$ echo $my_base
TEST_INFOACADEMY

The difference between pwd and the PWD environment variable


[mihail@oc8168772081 TEST_INFOACADEMY]$ pwd
/home/mihail/TEST_INFOACADEMY

[mihail@oc8168772081 TEST_INFOACADEMY]$ echo $PWD


/home/mihail/TEST_INFOACADEMY

[mihail@oc8168772081 TEST_INFOACADEMY]$ echo ${PWD##*/}


TEST_INFOACADEMY
[mihail@oc8168772081 TEST_INFOACADEMY]$ result=`pwd`
[mihail@oc8168772081 TEST_INFOACADEMY]$ echo ${result##*/}
TEST_INFOACADEMY

If you try
[mihail@oc8168772081 TEST_INFOACADEMY]$ echo ${pwd##*/}
# empty result (it doesn't work: pwd is a command, not a variable, and variable names are case sensitive)
[mihail@oc8168772081 TEST_INFOACADEMY]$
Database connection and retrieving values
When connecting with a wallet, credentials are kept in the wallet on the client side. Once stored, you can connect to the database using sqlplus /@connect_string
Create an Oracle Wallet

Syntax: mkstore -wrl wallet_location -create

$mkstore -wrl /u02/app/oracle/product/11.2.0/dbhome_1/network/admin/wallet -create

Enter wallet password:

Two files are created.


$ls -ltr
total 8
-rw------- 1 oracle oinstall 3880 Sep 8 22:48 ewallet.p12
-rw------- 1 oracle oinstall 3957 Sep 8 22:48 cwallet.sso

If you schedule cron jobs through the oracle user, keep the privileges as they are. Please note that any user with read permission on these files can log in to
the database, so treat them like a house key you keep safely with you.

The next step is to add the database credentials to the wallet. Before this, create a tnsnames entry that you will use to access the database:
AMIT_TEST11R2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = db11g)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = test11r2)
)
)

Add the user credentials to the Oracle Wallet. The syntax is:

mkstore -wrl wallet_location -createCredential db_connect_string username password

$mkstore -wrl /u02/app/oracle/product/11.2.0/dbhome_1/network/admin/wallet -createCredential amit_test11r2 amit amit
Enter wallet password:

To confirm that the credential has been added, use the listCredential option:

$mkstore -wrl /u02/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle -listCredential
Oracle Secret Store Tool : Version 11.2.0.1.0 - Production
Copyright (c) 2004, 2009, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:


List credential (index: connect_string username)
1: amit_test11r2 amit

Now add the following entries to the client sqlnet.ora file:


WALLET_LOCATION =
(SOURCE = (METHOD = FILE)
(METHOD_DATA = (DIRECTORY = /u02/app/oracle/product/11.2.0/dbhome_1/network/admin/wallet) ) )
SQLNET.WALLET_OVERRIDE = TRUE

Ensure that auto-login is enabled for wallet.

Now let's try connecting to database


[oracle@db11g admin]$ sqlplus /@amit_test11r2

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 23:34:37 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

SQL> show user


USER is "AMIT"

TNS_ADMIN tells sqlplus where to find the tnsnames.ora file.

#!/bin/sh
VALUE=`sqlplus $DB_USERNAME/$PASSWORD@//$HOST_NAME:$DB_PORT/$DB_SID <<END
set pagesize 0 feedback off verify off heading off echo off
SELECT ID FROM TEST_USERS WHERE USER_NAME='$SAMPLE_USER';
exit;
END`
if [ -z "$VALUE" ]; then
echo "No rows returned from database"
exit 0
else
echo $VALUE
fi

or with a wallet:
#!/bin/sh
export TNS_ADMIN=/home/css/BEX_GW/wallet

VALUE=`sqlplus /@NLS_USER <<END
set pagesize 0 feedback off verify off heading off echo off
SELECT ID FROM TEST_USERS WHERE USER_NAME='$SAMPLE_USER';
exit;
END`
if [ -z "$VALUE" ]; then
echo "No rows returned from database" ------ must be manually entered
exit 0
else
echo $VALUE
fi

data=$(sqlplus -S ${USER}/${PASSWORD} << EOF


set head off
set feedback off
set pagesize 5000
set linesize 30000
select ACCESS_ID, PROFILE_ID, START_DATE, END_DATE, PLATFORM, ACCESS_TYPE, PERM_FLAG, ACTIVE_FLAG from uam.access_list where
USER_ID='${USER_ID}';
exit
EOF)

If TNS_ADMIN is not defined (exported):

SQLPLUS hr@(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)(PORT=1521) )
(CONNECT_DATA=
(SERVICE_NAME=sales.us.acme.com) ) )
=======================
The answer shown above works fine if you are trying to retrieve a single value from sqlplus. If you have a sqlplus script that returns multiple columns, you
could read them into shell variables like this:
sqlplus / @myscript.sql | read var1 var2 var3

This will read 3 columns into var1, var2, and var3 (note: this works in ksh; in bash, the read runs in a subshell and the variables are lost, so use the
while read loop shown below instead). Make sure that if you do this, you don't have blank or null values coming back, otherwise the "read"
command will skip over the blanks/nulls and get the variable assignments out of sync.
Another variation: if you are retrieving multiple rows as well as columns in your sqlplus script, and want to loop over the rows:
sqlplus / @myscript.sql | while read var1 var2 var3
do
<more shell stuff here>
done
A specific example:
in emps.sql:

set head off


set verify off
set feedback off
set head off
set pages 0

select empno, empname


from scott.emp;

exit;

In test.sh:

sqlplus / @emps.sql | while read eno ename


do
echo "Employee number is $eno Name is $ename"
done

Hope this helps

Regular expressions
We have anchors, character sets, and modifiers.

There are also two types of regular expressions: the "basic" regular expression and the "extended" regular expression. A few utilities, like awk and egrep,
use the extended expression.

Anchors ^ and $

The regular expression "^A" will match all lines that start with a capital A. The expression "A$" will match all lines that end with the capital A.
If the anchor characters are not used at the proper end of the pattern, they no longer act as anchors. That is, the "^" is only an anchor if it is the first
character in a regular expression. The "$" is only an anchor if it is the last character. The expression "$1" does not have an anchor, and neither does "1^".
If you need to match a "^" at the beginning of the line, or a "$" at the end of a line, you must escape the special characters with a backslash. Here is a
summary:

Pattern Matches

^A "A" at the beginning of a line


A$ "A" at the end of a line

A^ "A^" anywhere on a line

$A "$A" anywhere on a line

^^ "^" at the beginning of a line

$$ "$" at the end of a line
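A quick way to see the anchor rules from this table in action is to run grep over a few throwaway lines (the input strings here are made up for the demonstration):

```shell
printf 'Apple\nbanana\nPAPAYA\n' | grep '^A'    # prints only "Apple"
printf 'banana\nDELTA\n' | grep 'A$'            # prints only "DELTA"
printf 'cost is $5 today\n' | grep '\$5'        # escaped "$" matches a literal "$5"
```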

Character sets. The simplest character set is a single character. The regular expression "the" contains three character sets: "t", "h" and "e". It will match any line
with the string "the" inside it. This would also match the word "other". To prevent this, put spaces before and after the pattern: " the ".

Use this pattern with grep to print every address in your incoming mail box:
grep '^From: ' /usr/spool/mail/$USER
Some characters have a special meaning in regular expressions. If you want to search for such a character, escape it with a backslash.
The character "." is one of those special meta-characters. By itself it will match any character, except the end-of-line character. The pattern that will match a
line containing exactly one character is

^.$
Regular Expression Matches
[] The characters "[]"
[0] The character "0"
[0-9] Any number
[^0-9] Any character other than a number
[-0-9] Any number or a "-"
[0-9-] Any number or a "-"
[^-0-9] Any character except a number or a "-"
[]0-9] Any number or a "]"
[0-9]] Any number followed by a "]"
[0-9-z] Any number, or any character between "9" and "z".
[0-9\-a\]] Any number, or a "-", a "a", or a "]"

The special character "*" matches zero or more copies. That is, the regular expression "0*" matches zero or more zeros, while the expression "[0-9]*"
matches zero or more numbers.
. (dot ) matches any character.

There is a special pattern you can use to specify the minimum and maximum number of repeats. This is done by putting those two numbers between "\{" and
"\}". The backslashes deserve a special discussion. Normally a backslash turns off the special meaning for a character. A period is matched by a "\." and an
asterisk is matched by a "\*".
If a backslash is placed before a "<," ">," "{," "}," "(," ")," or before a digit, the backslash turns on a special meaning.
This was done because these special functions were added late in the life of regular expressions. Changing the meaning of "{" would have broken old
expressions.
The regular expression to match 4, 5, 6, 7 or 8 lower case letters is
[a-z]\{4,8\}
Any number between 0 and 255 can be used for the repeat counts.

Regular Expression Matches

* Any line with an asterisk


\* Any line with an asterisk
\\ Any line with a backslash
^* Any line starting with an asterisk
^A* Any line (zero or more "A"s at the start, so this matches every line)
^A\* Any line starting with an "A*"
^AA* Any line starting with one or more "A"s
^AA*B Any line with one or more "A"'s followed by a "B"
^A\{4,8\}B Any line starting with 4, 5, 6, 7 or 8 "A"'s followed by a "B"
^A\{4,\}B Any line starting with 4 or more "A"'s followed by a "B"
^A\{4\}B Any line starting with "AAAAB"
\{4,8\} Any line with "{4,8}"
A{4,8} Any line with "A{4,8}"
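The difference between "*" (zero or more) and the counted repeat shows up directly with grep; the A...B test strings below are invented for the demo:

```shell
tests='AB
AAAAB
AAAAAAAAAB'
echo "$tests" | grep -c '^AA*B'       # 3: one or more A's, so every line matches
echo "$tests" | grep '^A\{4,8\}B'     # only AAAAB: nine A's cannot leave a B within 8
```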

Searching for a word isn't quite as simple as it at first appears. The string "the" will match the word "other". You can put spaces before and after the letters
and use this regular expression: " the ". However, this does not match words at the beginning or end of the line. And it does not match the case where there is
a punctuation mark after the word.
The characters "\<" and "\>" are similar to the "^" and "$" anchors, as they don't occupy the position of a character. They "anchor" the expression
to only match if it is on a word boundary. The pattern to search for the word "the" would be "\<[tT]he\>". The character before the "t" must be either a new
line character, or anything except a letter, number, or underscore. The character after the "e" must also be a character other than a number, letter, or
underscore, or it could be the end-of-line character.
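With GNU grep (which supports \< and \>) this can be checked directly; the three input lines are invented:

```shell
printf 'the cat\nother\nThe end\n' | grep '\<[tT]he\>'
# prints "the cat" and "The end"; "other" is skipped because the match
# does not start on a word boundary
```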

Backreferences - Remembering patterns with \(, \) and \1

Another pattern that requires a special mechanism is searching for repeated words. The expression "[a-z][a-z]" will match any two lower case letters. If you
wanted to search for lines that had two adjoining identical letters, the above pattern wouldn't help. You need a way of remembering what you found, and
seeing if the same pattern occurred again. You can mark part of a pattern using "\(" and "\)". You can recall the remembered pattern with "\" followed by a
single digit. Therefore, to search for two identical letters, use "\([a-z]\)\1". You can have 9 different remembered patterns. Each occurrence of "\(" starts a
new pattern. The regular expression that would match a 5 letter palindrome, (e.g. "radar"), would be

\([a-z]\)\([a-z]\)[a-z]\2\1
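Both remembered-pattern tricks can be tried with grep. Note that the palindrome run below adds ^ and $ anchors to the pattern above, so it only accepts whole 5-letter lines; the word lists are invented for the demo:

```shell
printf 'book\nbike\n' | grep '\([a-z]\)\1'    # prints "book": \1 repeats the "o"
printf 'radar\nlevel\nhouse\n' | grep '^\([a-z]\)\([a-z]\)[a-z]\2\1$'
# prints "radar" and "level"
```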
=====================================

Reading variables from files


We have a file (propFile) with the following content:
#propFile
pathTest1=/home/mihail/TEST_INFOACADEMY/TEST1
locmunca=DB
previous=IBM

and the script

#!/bin/bash
. propFile #source the file in order to have the variable pathTest1 available

myPath=$pathTest1 # this is read from propFile


echo "This is the path---> $myPath"
if [ ! -e "$myPath" ];then
echo " File doesn't exist"
else
echo " All OK "
fi
job=$locmunca;
echo "my job is $job"

echo " my previous job was at $previous"

running it, we get the following displayed on the screen:


[mihail@oc8168772081 TEST_INFOACADEMY]$ ./testPropFile
This is the path---> /home/mihail/TEST_INFOACADEMY/TEST1
All OK
my job is DB
my previous job was at IBM
[mihail@oc8168772081 TEST_INFOACADEMY]$

Note: variable values in files must be quoted if they contain spaces.


E.g. if we had
previous=ABG Varsovia
it would not be read; it must be written previous='ABG Varsovia'
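The reason is that sourcing simply executes the file's lines in the current shell, so an unquoted space makes bash treat the second word as a command name. A small self-contained check, using a temporary file instead of propFile:

```shell
prop=$(mktemp)                          # temporary stand-in for propFile
printf "previous='ABG Varsovia'\n" > "$prop"
. "$prop"
echo "previous is $previous"            # prints: previous is ABG Varsovia
rm -f "$prop"
```

With the unquoted form previous=ABG Varsovia, sourcing the file would instead try to run a command named Varsovia with previous=ABG in its environment.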

Ex2

The script test_var.sh contains the single line "echo Color is $COLOR". Without "source", the variable COLOR defined in the current shell is not seen by the script (which
runs in a child shell and does not inherit unexported variables)
[mihail@oc8168772081 scripturi]$ vim test_var.sh

[mihail@oc8168772081 scripturi]$ COLOR=BLUE
[mihail@oc8168772081 scripturi]$ chmod +x test_var.sh 

[mihail@oc8168772081 scripturi]$ ./test_var.sh 

Color is

[mihail@oc8168772081 scripturi]$ source ./test_var.sh 

Color is BLUE

[mihail@oc8168772081 scripturi]$ 
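The other way to make COLOR visible to the script is to export it, which copies it into the environment of child processes. A sketch, using a temporary script as a stand-in for test_var.sh:

```shell
script=$(mktemp)                        # temporary stand-in for test_var.sh
echo 'echo Color is $COLOR' > "$script"
COLOR=BLUE
bash "$script"                          # prints "Color is" - not inherited
export COLOR
bash "$script"                          # prints "Color is BLUE"
rm -f "$script"
```

Unlike source, the exported variable is only a copy: changes the child script makes to it are not seen back in the parent shell.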

Example of logging
We have a log() function in a script logging.sh
#!/bin/bash
LOG_LEVEL_INFO=3
LOG_LEVEL_WARNING=2
LOG_LEVEL=$LOG_LEVEL_INFO
LOG_LABEL_INFO=INFO
LOG_LABEL_WARNING=WARNING
LOG_TIMESTAMP="%Y.%m.%d %H:%M.%S"
function log() {
LEVEL=$1
LABEL="$2"
shift 2
[[ $LOG_LEVEL -ge $LEVEL ]] && echo " Am Logat textul [`date +\"$LOG_TIMESTAMP\"` ] $LABEL : $*"|tee -a $Log
VALUE=returned2
return 0

}
function log_info() {
log $LOG_LEVEL_INFO "$LOG_LABEL_INFO" $*
}

function log_warning() {
log $LOG_LEVEL_WARNING "$LOG_LABEL_WARNING" $*
}

=================================================
The script that tests the function (the script tests several things)
#!/bin/bash
. logging.sh
var="int sit dev"
for f in $var
do
echo $f
done
P=/home/mihail/TEST_INFOACADEMY/mihai
Log=/home/mihail/TEST_INFOACADEMY/log.test
TST1=/home/mihail/TEST_INFOACADEMY/TEST1
if [ ! -d $TST1 ];then
echo "dir nu exista"
else
echo "dir exista" && cp /home/mihail/TEST_INFOACADEMY/testawk1 $TST1
fi
log_info "asta trebuie logat" ;
echo $LOG_LEVEL_INFO
echo VALUE este $VALUE
param=""
[[ -z $param ]] && echo "param is empty"
log_warning "WARNING is WORKING"
echo finished

as a result, we get output on the display like:

[mihail@oc8168772081 TEST_INFOACADEMY]$ ./forLoop


int
sit
dev
dir exista
Am Logat textul [2017.01.10 12:25.44 ] INFO : asta trebuie logat
3
VALUE este returned2
param is empty
Am Logat textul [2017.01.10 12:25.44 ] WARNING : WARNING is WORKING
finished

and we also get logging in the file log.test, defined in the test script as Log=/home/mihail/TEST_INFOACADEMY/log.test:

Am Logat textul [2017.01.10 12:25.44 ] INFO : asta trebuie logat


Am Logat textul [2017.01.10 12:25.44 ] WARNING : WARNING is WORKING

Scripts to use with an Oracle database


1) I want to check whether a record already exists in the database before I do the insert. In MS SQL I would do something like this:

IF NOT EXISTS (select * from table1 where field1 = 5100200)


INSERT INTO table1 (table1) VALUES (5100200);
Go
(that is, if (select * from table1 where field1 = 5100200) returns nothing, the record does not exist, so execute the insert)

But I can not find how to do a similar command in Oracle.


The Oracle variant:

INSERT INTO Table1(field1)
SELECT 5100200
FROM DUAL
WHERE NOT EXISTS(SELECT * FROM Table1 WHERE field1=5100200);
2) check if a table exists (if not, then create it and do an insert) - in PL/SQL pseudocode:
select count(*) into v_exists from dba_tables where table_name = 'TABLE_NAME';
if v_exists = 1 THEN
insert into . . . . . . .
else
create table . . . . . . .
insert into . . . . . . .
end if;
Basically the check looks like:
select count(*)
from all_objects
where object_type in ('TABLE','VIEW')
and object_name = 'your_table_name';

ALL_OBJECTS describes all objects accessible to the current user.


USER_OBJECTS describes all objects owned by the current user. This view does not display the OWNER column.
DBA_OBJECTS describes all objects in the database.
OBJECT_TYPE Type of the object (such as TABLE, INDEX)

Example 1: Displaying Schema Objects By Type


SELECT OBJECT_NAME, OBJECT_TYPE FROM USER_OBJECTS;

OBJECT_NAME OBJECT_TYPE
------------------------- -------------------
EMP_DEPT CLUSTER
EMP TABLE
DEPT TABLE
EMP_DEPT_INDEX INDEX
PUBLIC_EMP SYNONYM
EMP_MGR VIEW

Example 2: Displaying Dependencies of Views and Synonyms

When you create a view or a synonym, the view or synonym is based on its underlying base object. The ALL_DEPENDENCIES,
USER_DEPENDENCIES, and DBA_DEPENDENCIES data dictionary views can be used to reveal the dependencies for a view. The
ALL_SYNONYMS, USER_SYNONYMS, and DBA_SYNONYMS data dictionary views can be used to list the base object of a synonym. For
example, the following query lists the base objects for the synonyms created by user jward:

SELECT TABLE_OWNER, TABLE_NAME, SYNONYM_NAME
FROM DBA_SYNONYMS
WHERE OWNER = 'JWARD';

The following is the query output:

TABLE_OWNER TABLE_NAME SYNONYM_NAME


---------------------- ----------- -----------------
SCOTT DEPT DEPT
SCOTT EMP EMP
===========================================================================================================================
#!/bin/bash
progname=$0

if [[ $1 == dev ]];then
echo "you chose dev"
elif [[ $1 == sit ]];then
echo "you chose sit"
echo -n "Continue ? Y/N "
read answer
if [[ $answer == Y ]]; then
(shift; "$progname" $* ) | grep $1
else
exit 0
fi
fi
echo -n "enter the environment and press enter "
read env
if [[ $env == dev ]];then
echo "you chose dev"
elif [[ $env == sit ]];then
echo "you chose sit"
else
echo "you did not choose well"
fi

read -p " Enter the environment and press enter " x

if [[ $x == dev ]];then
echo "you chose dev"
elif [[ $x == sit ]];then
echo "you chose sit"
echo -n "Continue ? Y/N "
read answer
if [[ $answer == Y ]]; then
(shift; "$progname" $* ) | grep $1
else
exit 0
fi
else
echo "you did not choose correctly"
fi

How to insert using select

INSERT INTO Customers(Country, CustomerName)
select Country , 'Mikaell' from Customers where CustomerName='Alfreds Futterkiste' ;
here we take Country from the Customers table, while the static value 'Mikaell' is introduced ad hoc. It does not exist in Customers; it is a
value that we supply ourselves.

How to test sftp in linux script


#!/bin/sh -
mykey=/home/localuser/.ssh/id_rsa
remusr=bupuser
remhost=server.destination.domain.com
tmpfile=/tmp/sftptest.$$
cleanup() {
rm -f ${tmpfile}
}
trap cleanup 0
sftp -i $mykey -oPubkeyAuthentication=yes -oPasswordAuthentication=no -oKbdInteractiveAuthentication=no \
-oStrictHostKeyChecking=yes ${remusr}@${remhost} > ${tmpfile} 2>&1 <<EOF
dir
exit
EOF
ST=$?
if test $ST -ne 0
then
echo SFTP LOGIN FAILURE. RC=${ST} 1>&2
exit $ST
fi
cat ${tmpfile} # or do some clever grepping on it
exit $ST

or

sftp $SFTP_USER@$SERVER <<EOF | grep -q 'No such file'


cd $SFTP_RDIR
ls $SFTP_RFILE
bye
EOF
if [ $? -eq 0 ] ; then # grep -q exits 0 when 'No such file' was found
echo 'no such remote file'
else
sftp $SFTP_USER@$SERVER <<EOF
cd $SFTP_RDIR
get $SFTP_RFILE
bye
EOF

fi
=====================================================================================
Give permissions recursively
chmod -R <permissionsettings> <dirname>
chmod -R 755 will set this as permissions to all files and folders in the tree. But why on earth would you want to? It might make sense for the directories,
but why set the execute bit on all the files?

I suspect what you really want to do is set the directories to 755 and either leave the files alone or set them to 644. For this, you can use the find
command. For example:

To change all the directories to 755 (drwxr-xr-x):

find /opt/lampp/htdocs -type d -exec chmod 755 {} \;

To change all the files to 644 (-rw-r--r--):

find /opt/lampp/htdocs -type f -exec chmod 644 {} \;

chmod 644 {} \; specifies the command that will be executed by find for each file. {} is replaced by the file path, and the semicolon denotes the end of
the command (escaped, otherwise it would be interpreted by the shell instead of find).
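Since these commands touch everything under the given path, it is worth rehearsing the pair in a throwaway directory first; the tree below is created just for the demonstration:

```shell
top=$(mktemp -d)                 # sandbox tree so nothing real is modified
mkdir -p "$top/sub"
touch "$top/sub/file.txt"
find "$top" -type d -exec chmod 755 {} \;
find "$top" -type f -exec chmod 644 {} \;
ls -ld "$top/sub"                # drwxr-xr-x ...
ls -l  "$top/sub/file.txt"       # -rw-r--r-- ...
rm -rf "$top"
```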

========================================================================================================

Commands combined
sudo apt-get update && sudo apt-get install pyrenamer

There seem to be four possible connectors: &, &&, || and ;

A; B Run A and then B, regardless of success of A

A && B Run B if A succeeded

A || B Run B if A failed

A& Run A in background.
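Using true and false as stand-in commands, the behaviour of each connector is easy to verify:

```shell
false ; echo "ran after ;"        # echo runs regardless of false failing
true  && echo "ran after &&"      # echo runs only because true succeeded
false || echo "ran after ||"      # echo runs only because false failed
false && echo "suppressed"        # prints nothing: && short-circuits
sleep 1 & wait                    # & backgrounds sleep; wait collects it
```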

==============================================================================================================================

Using parameters and passing variable values


#!/bin/bash
#test_param.sh
. text.txt
CTXDEF="INT PEP ADM SAE"
myval="three"
echo primul parametru din txt este $param1
. ./testparam2.sh
echo al doilea parametru este $param2
myval="four"
echo myval2 este $myval2
echo myval is now $myval
. ./testparam3.sh
echo myval3 este $myval3
===============================================================
#testparam2.sh
echo from second script param1 este $param1
echo from second script param2 este $param2
echo from second script myval este $myval
myval2=five
echo from second script myval2 este $myval2
for ctx in $CTXDEF
do
echo $ctx
done
====================================================
#testparam3.sh
echo myval2 este $myval2
myval3="$myval2 and gogu"
====================================================
#fisierul text.txt
param1="one"
param2="two"
===================================================
OUTPUT
primul parametru din txt este one #read/print in the first script of a parameter from the txt file
from second script param1 este one #read/print in the second script of a parameter from the txt file
from second script param2 este two #read/print in the second script of a parameter from the txt file
from second script myval este three #read/print in the second script of a variable defined in the first script
from second script myval2 este five #read/print in the second script of a variable defined in the second script
INT #for loop run in the second script with params defined in the first script
PEP
ADM
SAE
al doilea parametru este two #read/print in the first script of the second parameter from the txt file
myval2 este five #read/print in the first script of a variable from script 2
myval is now four #read/print in the first script of a variable from script 1
myval2 este five #read/print in script 3 of a variable from script 2
myval3 este five and gogu #read/print in script 1 of a variable from script 3 that contains the value of a variable from script 2
Source the 2nd script, i.e. . ./testparam2.sh (and the third one), and it will run in the same shell. This lets you share more complex variables like arrays
easily, but also means that the other script can modify variables in the sourcing shell.
This more or less "imports" the contents of the second script directly and executes it in the same shell. Notice that we didn't have to export the variable to access it. This
implicitly shares all the variables you have, as well as allows the other script to add/delete/modify variables in the shell. Of course, in this model both your
scripts should be in the same language (sh or bash).
( . ./testparam3.sh ). The parentheses make Bash run their content in a subshell. In this case,
variables defined in the first script will not be overwritten.
If instead of . ./testparam2.sh we call script 2 with sh ./testparam2.sh, then the variables defined in script 2 will not
be visible in script 1, and script 2 cannot read the variables defined in the txt file or in script 1 either.
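The whole difference can be reproduced with a one-line throwaway script (a stand-in for testparam2.sh):

```shell
child=$(mktemp)                   # stand-in for testparam2.sh
echo 'childvar=gogu' > "$child"
sh "$child"                       # child shell: the assignment dies with it
echo "after sh: [$childvar]"      # prints: after sh: []
. "$child"                        # current shell: the assignment survives
echo "after source: [$childvar]"  # prints: after source: [gogu]
rm -f "$child"
```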

========================================================================================================
NETWORKING

Is there a way to ping out or in, on a specific port, to see if it is open?

Assuming that it's a TCP (rather than UDP) port that you're trying to use:

1. On the server itself, use netstat -an to check to see which ports are listening

2. From outside, just telnet host port to see whether the connection is refused, accepted, or times out

On that latter test, then in general:

- connection refused means that nothing is running on that port
- accepted means that something is running on that port
- timeout means that a firewall is blocking access

On Windows 7 or Vista, by default 'telnet' is not recognized as an internal or external command, operable program or batch file. To enable it:
click Start, Control Panel, Programs, and then Turn Windows Features on or off. In the list, scroll down, select Telnet Client and click OK.
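Where telnet is unavailable, bash itself can attempt the TCP connection through its /dev/tcp pseudo-files (a bash-only feature; the timeout command is from GNU coreutils). A sketch:

```shell
# Returns 0 if something accepted the connection on host $1, port $2;
# non-zero if the connection was refused, filtered, or timed out.
port_open() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open 127.0.0.1 22; then
    echo "port 22 is open"
else
    echo "port 22 is closed or filtered"
fi
```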

The message 'Connection Refused' has two main causes:

1. Nothing is listening on the IP:Port you are trying to connect to.


2. The port is blocked by a firewall.

No process is listening.

This is by far the most common reason for the message. First ensure that you are trying to connect to the correct system. If you are, then to determine whether
this is the problem, run netstat or ss (1) on the remote system, e.g. if you are expecting a process to be listening on port 22222:

sudo netstat -tnlp | grep :22222

or
ss -tnlp | grep :22222
If nothing is listening then the above will produce no output. If you see some output then confirm that it's what you expect then see the firewall section
below.

If you don't have access to the remote system and want to confirm the problem before reporting it to the relevant administrators you can use tcpdump
(wireshark or similar).

When a connection is attempted to an IP:port where nothing is listening, the response from the remote system to the initial SYN packet is a packet with the
flags RST,ACK set. This closes the connection and causes the Connection Refused message e.g.
$ sudo tcpdump -n host 192.0.2.1 and port 22222
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp14s0, link-type EN10MB (Ethernet), capture size 262144 bytes

12:31:27.013976 IP 192.0.2.2.34390 > 192.0.2.1.22222: Flags [S], seq 1207858804, win 29200, options [mss 1460,sackOK,TS val 15306344 ecr 0,nop,wscale 7], length 0

12:31:27.020162 IP 192.0.2.1.22222 > 192.0.2.2.34390: Flags [R.], seq 0, ack 1207858805, win 0, length 0
Note that tcpdump uses a "." after a flag letter to indicate that the ACK flag is also set, so [R.] means RST,ACK.

Port is blocked by a firewall

If the port is blocked by a firewall and the firewall has been configured to respond with icmp-port-unreachable this will also cause a connection
refused message. Again you can see this with tcpdump (or similar)
$ sudo tcpdump -n icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on enp14s0, link-type EN10MB (Ethernet), capture size 262144 bytes

13:03:24.149897 IP 192.0.2.1 > 192.0.2.2: ICMP 192.0.2.1 tcp port 22222 unreachable, length 68

Note that this also tells us where the blocking firewall is.

So now you know what's causing the Connection refused message you should take appropriate action e.g. contact the firewall administrator or investigate
the reason for the process not listening.
(1) Other tools are likely available.
