Network Security and Management

Rotor machine
In cryptography, a rotor machine is an electro-mechanical device used for encrypting and
decrypting secret messages. Rotor machines were the cryptographic state-of-the-art for a brief
but prominent period of history; they were in widespread use in the 1930s–1950s. The most
famous example is the Enigma machine.

The primary component is a set of rotors, also termed wheels or drums, which are rotating
disks with an array of electrical contacts on either side. The wiring between the contacts
implements a fixed substitution of letters, replacing them in some complex fashion. On its own,
this would offer little security; however, after encrypting each letter, the rotors advance
positions, changing the substitution. By this means, a rotor machine produces a complex
polyalphabetic substitution cipher.

The machine consists of a set of independently rotating cylinders through which electrical
pulses can flow. Each cylinder has 26 input pins and 26 output pins, with internal wiring that
connects each input pin to a unique output pin. For simplicity, only three of the internal
connections in each cylinder are shown.


Consider a machine with a single cylinder. After each input key is depressed, the cylinder
rotates one position, so that the internal connections are shifted accordingly. Thus, a different
monoalphabetic substitution cipher is defined. After 26 letters of plaintext, the cylinder would
be back to the initial position. Thus, we have a polyalphabetic substitution algorithm with a
period of 26.

A single-cylinder system is trivial and does not present a formidable cryptanalytic task. The
power of the rotor machine is in the use of multiple cylinders, in which the output pins of one
cylinder are connected to the input pins of the next. The figure shows a three-cylinder
system. The left half of the figure shows a position in which the input from the operator to the
first pin (plaintext letter a) is routed through the three cylinders to appear at the output of the
second pin (ciphertext letter B).

With multiple cylinders, the one closest to the operator input rotates one pin position with each
keystroke. The right half of the figure shows the system's configuration after a single keystroke. For
every complete rotation of the inner cylinder, the middle cylinder rotates one pin position.
Finally, for every complete rotation of the middle cylinder, the outer cylinder rotates one pin
position. The result is that there are 26 x 26 x 26 = 17,576 different substitution alphabets used
before the system repeats.
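
The stepping behaviour described above is easy to simulate. The following is a minimal Python sketch of the idea, not a model of any real machine (the Enigma, for instance, also had a reflector and plugboard): three 26-contact rotors with fixed internal wiring advance odometer-style, giving 26 x 26 x 26 = 17,576 substitution alphabets before the pattern repeats.

    import random
    import string

    random.seed(1)  # fixed seed so the example is reproducible

    ALPHABET = string.ascii_uppercase

    def make_rotor():
        """A rotor is a fixed permutation of the 26 contact positions."""
        wiring = list(range(26))
        random.shuffle(wiring)
        return wiring

    rotors = [make_rotor() for _ in range(3)]
    offsets = [0, 0, 0]   # current rotational position of each rotor

    def step():
        """Advance the fast rotor; carry into the next rotor on a full turn."""
        for i in range(3):
            offsets[i] = (offsets[i] + 1) % 26
            if offsets[i] != 0:   # no full revolution, so stop carrying
                break

    def encrypt_letter(ch):
        """Pass one letter through the three rotors, then step."""
        x = ALPHABET.index(ch)
        for rotor, off in zip(rotors, offsets):
            x = (rotor[(x + off) % 26] - off) % 26
        step()
        return ALPHABET[x]

    print("".join(encrypt_letter(c) for c in "ATTACKATDAWN"))

Decryption (tracing the wiring in reverse from the same starting offsets) is omitted for brevity.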

Steganography
Steganography is the art and science of writing hidden messages in such a way that no one,
apart from the sender and intended recipient, suspects the existence of the message, a form of
security through obscurity. The word steganography is of Greek origin and means "concealed
writing" from the Greek words steganos meaning "covered or protected", and graphein meaning
"to write".

The advantage of steganography, over cryptography alone, is that messages do not attract
attention to themselves. Plainly visible encrypted messages—no matter how unbreakable—will
arouse suspicion, and may in themselves be incriminating in countries where encryption is
illegal. Therefore, whereas cryptography protects the contents of a message, steganography can
be said to protect both messages and communicating parties.

Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file, image file, program or protocol. Media files are ideal for steganographic transmission because of their large size. As a simple example, a sender might start with an innocuous image file and adjust the color of every 100th pixel to correspond to a letter in the alphabet, a change so subtle that someone not specifically looking for it is unlikely to notice it.
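
A minimal sketch of that every-100th-pixel idea, using a plain list of integers as stand-in "pixels" so no imaging library is needed; a real tool would operate on an actual image file:

    def hide(pixels, message, step=100):
        """Hide message bytes in the low byte of every step-th 'pixel'."""
        out = list(pixels)
        for i, ch in enumerate(message.encode("ascii")):
            out[i * step] = (out[i * step] & ~0xFF) | ch  # tweak low byte only
        return out

    def reveal(pixels, length, step=100):
        """Read the hidden bytes back out."""
        return bytes(pixels[i * step] & 0xFF for i in range(length)).decode("ascii")

    cover = [0xAABBCC] * (100 * 32)          # a flat 'image' of 24-bit pixels
    stego = hide(cover, "meet at dawn")
    print(reveal(stego, len("meet at dawn")))   # -> meet at dawn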


Data Encryption Standard


The Data Encryption Standard (DES) is a block cipher (a form of shared secret encryption) that
was selected by the National Bureau of Standards as an official Federal Information Processing
Standard (FIPS) for the United States in 1976 and which has subsequently enjoyed widespread
use internationally. It is based on a symmetric-key algorithm that uses a 56-bit key. The
algorithm was initially controversial with classified design elements, a relatively short key
length, and suspicions about a National Security Agency (NSA) backdoor. DES consequently
came under intense academic scrutiny which motivated the modern understanding of block
ciphers and their cryptanalysis.

The algorithm is designed to encipher and decipher blocks of data consisting of 64 bits under
control of a 64-bit key.

A DES key consists of 64 binary digits ("0"s or "1"s) of which 56 bits are randomly generated
and used directly by the algorithm. The other 8 bits, which are not used by the algorithm, may
be used for error detection. The 8 error detecting bits are set to make the parity of each 8-bit
byte of the key odd, i.e., there is an odd number of "1"s in each 8-bit byte. A TDEA key consists
of three DES keys, which is also referred to as a key bundle. Authorized users of encrypted
computer data must have the key that was used to encipher the data in order to decrypt it.
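
The parity convention is easy to express in code. A small Python sketch that builds an 8-byte key from 56 random bits, setting the last bit of each byte so that every byte has odd parity:

    import secrets

    def des_key_with_parity():
        """Build an 8-byte DES key: 7 random bits per byte + 1 odd-parity bit."""
        key = bytearray()
        for _ in range(8):
            b = secrets.randbits(7) << 1          # 7 key bits, parity bit clear
            if bin(b).count("1") % 2 == 0:        # make the count of 1s odd
                b |= 1
            key.append(b)
        return bytes(key)

    k = des_key_with_parity()
    assert all(bin(byte).count("1") % 2 == 1 for byte in k)
    print(k.hex())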

A block to be enciphered is subjected to an initial permutation IP, then to a complex key-dependent computation, and finally to a permutation which is the inverse of the initial permutation, IP⁻¹. The key-dependent computation can be simply defined in terms of a function f, called the cipher function, and a function KS, called the key schedule.


Enciphering:
The computation which uses the permuted input block as its input to produce the preoutput
block consists, but for a final interchange of blocks, of 16 iterations of a calculation that is
described below in terms of the cipher function f which operates on two blocks, one of 32 bits
and one of 48 bits, and produces a block of 32 bits.
Let the 64 bits of the input block to an iteration consist of a 32-bit block L followed by a 32-bit
block R. Using the notation defined in the introduction, the input block is then LR.
Let K be a block of 48 bits chosen from the 64-bit key. Then the output L'R' of an iteration with
input LR is defined by:
L' = R
R' = L ⊕ f(R,K)

Deciphering:
The permutation IP⁻¹ applied to the preoutput block is the inverse of the initial permutation IP
applied to the input.
R = L'
L = R' ⊕ f(L',K)

Consequently, to decipher it is only necessary to apply the very same algorithm to an enciphered message block, taking care that at each iteration of the computation the same block of key bits K is used during decipherment as was used during the encipherment of the block.
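
The round equations above describe a generic Feistel structure. The sketch below implements it with a toy round function standing in for DES's real f (which uses expansion, S-boxes and a permutation), and omits IP and IP⁻¹; it is a sketch of the structure, not of DES itself. Deciphering is the identical loop run with the subkeys in reverse order:

    MASK32 = 0xFFFFFFFF

    def f(r, k):
        """Toy stand-in for DES's cipher function f(R, K)."""
        return ((r * 2654435761) ^ k) & MASK32

    def feistel16(block64, subkeys):
        """16 Feistel rounds: L' = R, R' = L xor f(R, K)."""
        left, right = block64 >> 32, block64 & MASK32
        for k in subkeys:
            left, right = right, left ^ f(right, k)
        return (left << 32) | right

    def swap_halves(b):
        return ((b & MASK32) << 32) | (b >> 32)

    subkeys = [(0x0F1E2D3C + 17 * i) & MASK32 for i in range(16)]
    pt = 0x0123456789ABCDEF
    ct = feistel16(pt, subkeys)

    # Deciphering: the very same algorithm with subkeys reversed, applied
    # to the swapped halves (real DES folds this swap into the final stage).
    assert swap_halves(feistel16(swap_halves(ct), subkeys[::-1])) == pt
    print(hex(ct))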

The Cipher Function f


A sketch of the calculation of f(R,K) is given in the figure.


Let E denote a function which takes a block of 32 bits as input and yields a block of 48 bits as output, with the 48 output bits written as 8 blocks of 6 bits each.

Each of the unique selection functions S1, S2, ..., S8 takes a 6-bit block as input and yields a 4-bit block as output.

The permutation function P yields a 32-bit output from a 32-bit input by permuting the bits of
the input block.


Figure: General Depiction of DES

The Avalanche Effect


A desirable property of any encryption algorithm is that a small change in either the plaintext or
the key should produce a significant change in the ciphertext. In particular, a change in one bit
of the plaintext or one bit of the key should produce a change in many bits of the ciphertext. If
the change were small, this might provide a way to reduce the size of the plaintext or key space
to be searched.


Table: Avalanche Effect in DES
(number of ciphertext bits that differ after each round)

Round   (a) Change in Plaintext   (b) Change in Key
  0                1                      0
  1                6                      2
  2               21                     14
  3               35                     28
  4               39                     32
  5               34                     30
  6               32                     32
  7               31                     35
  8               29                     34
  9               42                     40
 10               44                     38
 11               32                     31
 12               30                     33
 13               30                     28
 14               26                     26
 15               29                     34
 16               34                     35

The table shows that after just three rounds, 21 bits differ between the two blocks. On
completion, the two ciphertexts differ in 34 bit positions.
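
The effect in the table can be observed empirically. A short sketch, assuming the third-party pycryptodome package (pip install pycryptodome) for its DES implementation: encrypt two plaintexts differing in a single bit and count the ciphertext bits that differ (roughly half of the 64 bits is expected).

    from Crypto.Cipher import DES   # pycryptodome

    key = bytes.fromhex("0123456789ABCDEF")
    p1  = bytes.fromhex("0000000000000000")
    p2  = bytes.fromhex("8000000000000000")   # p1 with one bit flipped

    cipher = DES.new(key, DES.MODE_ECB)
    c1, c2 = cipher.encrypt(p1), cipher.encrypt(p2)

    # Count the differing ciphertext bits.
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    print(f"{diff} of 64 ciphertext bits differ")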

Triple DES
In cryptography, Triple DES (3DES) is the common name for the Triple Data Encryption
Algorithm (TDEA) block cipher, which applies the Data Encryption Standard (DES) cipher
algorithm three times to each data block. Because the key size of the original DES cipher was
becoming problematically short, Triple DES was designed to provide a relatively simple method
of increasing the key size of DES to protect against brute force attacks, without designing a
completely new block cipher algorithm.

Keying options
The standards define three keying options:

Keying option 1: All three keys are independent.

Keying option 2: K1 and K2 are independent, and K3 = K1.

Keying option 3: All three keys are identical, i.e. K1 = K2 = K3.

Keying option 1 is the strongest, with 3 x 56 = 168 independent key bits.

Keying option 2 provides less security, with 2 x 56 = 112 key bits. This option is stronger than
simply DES encrypting twice, e.g. with K1 and K2, because it protects against meet-in-the-middle
attacks.

Keying option 3 is no better than DES, with only 56 key bits. This option provides backward
compatibility with DES, because the first and second DES operations simply cancel out. It is no
longer recommended by the National Institute of Standards and Technology (NIST) and not
supported by ISO/IEC 18033-3.
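
A sketch of the encrypt-decrypt-encrypt (EDE) construction, again assuming pycryptodome for the underlying DES primitive. With keying option 3 (K1 = K2 = K3), the first two operations cancel and the result equals single DES, which is what gives the backward compatibility noted above:

    from Crypto.Cipher import DES   # pycryptodome

    def tdea_encrypt(block, k1, k2, k3):
        """Triple DES EDE on one 8-byte block: C = E_K3(D_K2(E_K1(P)))."""
        step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
        step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
        return DES.new(k3, DES.MODE_ECB).encrypt(step2)

    k = bytes.fromhex("0123456789ABCDEF")
    p = b"8 bytes!"

    # Keying option 3: all three keys identical -> plain single DES.
    assert tdea_encrypt(p, k, k, k) == DES.new(k, DES.MODE_ECB).encrypt(p)
    print(tdea_encrypt(p, k, k, k).hex())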


Block Cipher Design Principles


Although much progress has been made in designing block ciphers that are cryptographically
strong, the basic principles have not changed all that much since the work of Feistel and the DES
design team in the early 1970s. There are three critical aspects of block cipher design: the
number of rounds, design of the function F, and key scheduling.

Number of Rounds
The greater the number of rounds, the more difficult it is to perform cryptanalysis, even for a
relatively weak F. In general, the criterion should be that the number of rounds is chosen so that
known cryptanalytic efforts require greater effort than a simple brute-force key search attack.
This criterion was certainly used in the design of DES. Schneier [SCHN96] observes that for 16-round DES, a differential cryptanalysis attack is slightly less efficient than brute force: the differential cryptanalysis attack requires 2^55.1 operations, whereas brute force requires 2^55. If DES had 15 or fewer rounds, differential cryptanalysis would require less effort than brute-force key search.

This criterion is attractive because it makes it easy to judge the strength of an algorithm and to
compare different algorithms. In the absence of a cryptanalytic breakthrough, the strength of
any algorithm that satisfies the criterion can be judged solely on key length.

Design of Function F
The heart of a Feistel block cipher is the function F. As we have seen, in DES, this function relies
on the use of S-boxes.

Design Criteria for F

The function F provides the element of confusion in a Feistel cipher. Thus, it must be difficult to
"unscramble" the substitution performed by F. One obvious criterion is that F be nonlinear, as
we discussed previously. The more nonlinear F, the more difficult any type of cryptanalysis will
be. In rough terms, the more difficult it is to approximate F by a set of linear equations, the more
nonlinear F is.

S-Box Design

One obvious characteristic of an S-box is its size. An n x m S-box has n input bits and m output bits. DES has 6 x 4 S-boxes; Blowfish has 8 x 32 S-boxes. Larger S-boxes, by and large, are more resistant to differential and linear cryptanalysis.
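
In DES, each 6-bit S-box input selects a row with its outer two bits and a column with its inner four. The sketch below shows that indexing scheme with a made-up 4 x 16 table; the values are illustrative placeholders, not DES's real S1:

    # A toy 6-to-4 S-box: 4 rows x 16 columns of 4-bit values.
    # The values are arbitrary (any fixed table works for the demo).
    TOY_SBOX = [[(7 * c + r) % 16 for c in range(16)] for r in range(4)]

    def sbox_lookup(six_bits):
        """DES-style indexing: outer two bits pick the row, inner four the column."""
        row = ((six_bits >> 4) & 0b10) | (six_bits & 0b01)  # bits 1 and 6
        col = (six_bits >> 1) & 0b1111                      # bits 2..5
        return TOY_SBOX[row][col]

    print(sbox_lookup(0b011011))  # row 0b01 = 1, column 0b1101 = 13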


Key Schedule Algorithm


A final area of block cipher design, and one that has received less attention than S-box design, is
the key schedule algorithm. With any Feistel block cipher, the key is used to generate one
subkey for each round. In general, we would like to select subkeys to maximize the difficulty of
deducing individual subkeys and the difficulty of working back to the main key. No general
principles for this have yet been promulgated.

Public key certificate


In cryptography, a public key certificate (also known as a digital certificate or identity
certificate) is an electronic document which uses a digital signature to bind together a public
key with an identity — information such as the name of a person or an organization, their
address, and so forth. The certificate can be used to verify that a public key belongs to an
individual.

In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate
authority (CA). In a web of trust scheme, the signature is of either the user (a self-signed
certificate) or other users ("endorsements"). In either case, the signatures on a certificate are
attestations by the certificate signer that the identity information and the public key belong
together.

Key Distribution
For symmetric encryption to work, the two parties to an exchange must share the same key, and
that key must be protected from access by others. Furthermore, frequent key changes are
usually desirable to limit the amount of data compromised if an attacker learns the key.
Therefore, the strength of any cryptographic system rests with the key distribution technique, a
term that refers to the means of delivering a key to two parties who wish to exchange data,
without allowing others to see the key. For two parties A and B, key distribution can be achieved
in a number of ways, as follows:

• A can select a key and physically deliver it to B.

• A third party can select the key and physically deliver it to A and B.

• If A and B have previously and recently used a key, one party can transmit the new key to the other, encrypted using the old key.

• If A and B each has an encrypted connection to a third party C, C can deliver a key on the encrypted links to A and B.

Options 1 and 2 call for manual delivery of a key. For link encryption, this is a reasonable
requirement, because each link encryption device is going to be exchanging data only with its
partner on the other end of the link. However, for end-to-end encryption, manual delivery is
awkward. In a distributed system, any given host or terminal may need to engage in exchanges
with many other hosts and terminals over time. Thus, each device needs a number of keys
supplied dynamically. The problem is especially difficult in a wide area distributed system.

Diffie–Hellman key exchange


Diffie–Hellman key exchange (D–H) is a cryptographic protocol that allows two parties that
have no prior knowledge of each other to jointly establish a shared secret key over an insecure
communications channel. This key can then be used to encrypt subsequent communications
using a symmetric key cipher.

Although Diffie–Hellman key agreement itself is an anonymous (non-authenticated) key-agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provide perfect forward secrecy in Transport Layer Security's ephemeral modes.

The simplest, and original, implementation of the protocol uses the multiplicative group of integers modulo p, where p is prime and g is a primitive root mod p.

Alice and Bob agree to use a prime number p = 23 and base g = 5.

Alice chooses a secret integer a = 6, then sends Bob A = g^a mod p:

A = 5^6 mod 23 = 8.

Bob chooses a secret integer b = 15, then sends Alice B = g^b mod p:

B = 5^15 mod 23 = 19.

Alice computes s = B^a mod p:

19^6 mod 23 = 2.

Bob computes s = A^b mod p:

8^15 mod 23 = 2.


Both Alice and Bob have arrived at the same value, because g^ab and g^ba are equal mod p. Note that only a, b and g^ab = g^ba mod p are kept secret. All the other values – p, g, g^a mod p, and g^b mod p – are sent in the clear. Once Alice and Bob compute the shared secret they can use it as an encryption key, known only to them, for sending messages across the same open communications channel. Of course, much larger values of a, b, and p would be needed to make this example secure, since it is easy to try all the possible values of g^ab mod 23 (there will be, at most, 22 such values, even if a and b are large). If p were a prime of at least 300 digits, and a and b were at least 100 digits long, then even the best algorithms known today could not find a given only g, p, g^b mod p and g^a mod p, even using all of mankind's computing power.
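
The arithmetic of the worked example can be checked in a few lines of Python, using the built-in three-argument pow() for modular exponentiation:

    p, g = 23, 5          # public parameters from the example
    a, b = 6, 15          # Alice's and Bob's secret exponents

    A = pow(g, a, p)      # Alice sends 5^6 mod 23 = 8
    B = pow(g, b, p)      # Bob sends 5^15 mod 23 = 19

    s_alice = pow(B, a, p)   # 19^6 mod 23
    s_bob   = pow(A, b, p)   # 8^15 mod 23

    assert (A, B) == (8, 19) and s_alice == s_bob == 2
    print("shared secret:", s_alice)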

Here's a more general description of the protocol:


Alice and Bob agree on a finite cyclic group G and a generating element g in G. (This is usually done long before the rest of the protocol; g is assumed to be known by all attackers.) We will write the group G multiplicatively.

Alice picks a random natural number a and sends g^a to Bob.

Bob picks a random natural number b and sends g^b to Alice.

Alice computes (g^b)^a.

Bob computes (g^a)^b.

Both thus arrive at the same value, since (g^b)^a = (g^a)^b = g^ab.


Figure: General Description of Diffie-Hellman


RSA
In cryptography, RSA (which stands for Rivest, Shamir and Adleman, who first publicly described it) is an algorithm for public-key cryptography. It is the first algorithm known to be suitable for signing as well as encryption, and was one of the first great advances in public key cryptography. RSA is widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys and the use of up-to-date implementations.

The RSA algorithm involves three steps: key generation, encryption and decryption.

Key generation
RSA involves a public key and a private key. The public key can be known to everyone and is
used for encrypting messages. Messages encrypted with the public key can only be decrypted
using the private key. The keys for the RSA algorithm are generated the following way:

1. Choose two distinct prime numbers p and q.

For security purposes, the integers p and q should be chosen uniformly at random and should
be of similar bit-length. Prime integers can be efficiently found using a primality test.

2. Compute n = pq.

• n is used as the modulus for both the public and private keys.

3. Compute φ(pq) = (p − 1)(q − 1). (φ is Euler's totient function).

4. Choose an integer e such that 1 < e < φ(pq), and e and φ(pq) share no divisors other than 1 (i.e., e and φ(pq) are coprime).

• e is released as the public key exponent.

• e having a short bit-length and small Hamming weight results in more efficient encryption. However, small values of e (such as e = 3) have been shown to be less secure in some settings.

5. Determine d (using modular arithmetic) with 1 < d < φ(pq) which satisfies the congruence relation d·e ≡ 1 (mod φ(pq)).

• Stated differently, ed − 1 can be evenly divided by the totient (p − 1)(q − 1).

• This is often computed using the extended Euclidean algorithm.

• d is kept as the private key exponent.

The public key consists of the modulus n and the public (or encryption) exponent e. The private
key consists of the private (or decryption) exponent d which must be kept secret.


Public key → (n, e)

Private key → (d)

Encryption
Alice transmits her public key (n,e) to Bob and keeps the private key secret. Bob then wishes to
send message M to Alice.

He first turns M into an integer 0 < m < n by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c corresponding to:

c = m^e mod n

This can be done quickly using the method of exponentiation by squaring. Bob then transmits c to Alice.
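
Exponentiation by squaring processes the exponent bit by bit, squaring at each step and multiplying in the base when the bit is set, so computing m^e mod n costs on the order of log e multiplications rather than e of them. A minimal sketch (Python's built-in pow(m, e, n) does the same thing natively):

    def modexp(base, exp, mod):
        """Right-to-left square-and-multiply: returns base**exp % mod."""
        result = 1
        base %= mod
        while exp > 0:
            if exp & 1:                    # current exponent bit is set
                result = (result * base) % mod
            base = (base * base) % mod     # square for the next bit
            exp >>= 1
        return result

    assert modexp(5, 117, 19) == pow(5, 117, 19)
    print(modexp(5, 117, 19))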

Decryption
Alice can recover m from c by using her private key exponent d in the following computation:

m = c^d mod n

Given m, she can recover the original message M by reversing the padding scheme.

A worked example
Here is an example of RSA encryption and decryption. The parameters used here are artificially
small, but one can also use OpenSSL to generate and examine a real keypair.

1. Choose two prime numbers:

p = 61 and q = 53

2. Compute n = pq:

n = 61 × 53 = 3233

3. Compute the totient of the product. For primes the totient is maximal and equals x − 1. Therefore:

φ(3233) = (61 − 1)(53 − 1) = 60 × 52 = 3120

4. Choose any number e > 1 that is coprime to 3120. Choosing a prime number for e leaves you with a single check: that e is not a divisor of 3120.

e = 17


5. Compute d such that d·e ≡ 1 (mod φ(pq)), e.g., by computing the modular multiplicative inverse of e modulo φ(pq) = 3120:

d = 2753

since 17 · 2753 = 46801 and 46801 mod 3120 = 1, this is the correct answer.

(Iterating finds that (15 × 3120) + 1 divided by 17 is 2753, an integer, whereas other values in place of 15 do not produce an integer. The extended Euclidean algorithm finds the solution to Bézout's identity 3120 × 2 + 17 × (−367) = 1, and −367 mod 3120 is 2753.)

The public key is (n = 3233, e = 17). For a padded message m the encryption function is:

c = m^17 mod 3233

The private key is (n = 3233, d = 2753). The decryption function is:

m = c^2753 mod 3233

For instance, in order to encrypt m = 123, we calculate:

c = 123^17 mod 3233 = 855

To decrypt c = 855, we compute:

m = 855^2753 mod 3233 = 123
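
Every number in this worked example can be reproduced in a few lines of Python (pow(e, -1, m) computes a modular inverse on Python 3.8 and later):

    p, q = 61, 53
    n = p * q                      # 3233
    phi = (p - 1) * (q - 1)        # 3120
    e = 17
    d = pow(e, -1, phi)            # modular inverse: 2753

    m = 123
    c = pow(m, e, n)               # encryption: 123^17 mod 3233

    assert (n, phi, d) == (3233, 3120, 2753)
    assert c == 855 and pow(c, d, n) == m   # decryption recovers 123
    print(f"ciphertext {c}, decrypted {pow(c, d, n)}")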

Security and practical considerations


Key generation
Finding the large primes p and q is usually done by testing random numbers of the right size
with probabilistic primality tests which quickly eliminate virtually all non-primes.

Numbers p and q should not be 'too close', lest the Fermat factorization of n be successful: if p − q, for instance, is less than 2n^(1/4) (which for even small 1024-bit values of n is 3×10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and such values of p or q should therefore be discarded as well.


Speed
RSA is much slower than DES and other symmetric cryptosystems. In practice, Bob typically
encrypts a secret message with a symmetric algorithm, encrypts the (comparatively short)
symmetric key with RSA, and transmits both the RSA-encrypted symmetric key and the
symmetrically-encrypted message to Alice.

This procedure raises additional security issues. For instance, it is of utmost importance to use a
strong random number generator for the symmetric key, because otherwise Eve (an
eavesdropper wanting to see what was sent) could bypass RSA by guessing the symmetric key.
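
A sketch of that hybrid pattern, assuming the third-party pycryptodome package: a fresh random AES session key (from a strong RNG, as the text warns) encrypts the bulk message, and RSA-OAEP encrypts only the short session key.

    from Crypto.PublicKey import RSA
    from Crypto.Cipher import AES, PKCS1_OAEP
    from Crypto.Random import get_random_bytes

    alice = RSA.generate(2048)                 # Alice's RSA keypair

    # Bob: encrypt the message with a one-time AES key...
    session_key = get_random_bytes(16)         # must come from a strong RNG
    aes = AES.new(session_key, AES.MODE_EAX)
    ciphertext, tag = aes.encrypt_and_digest(b"the secret message")

    # ...and encrypt only the short session key with Alice's public key.
    wrapped_key = PKCS1_OAEP.new(alice.publickey()).encrypt(session_key)

    # Alice: unwrap the session key with her private key, then decrypt.
    key = PKCS1_OAEP.new(alice).decrypt(wrapped_key)
    plain = AES.new(key, AES.MODE_EAX, nonce=aes.nonce).decrypt_and_verify(
        ciphertext, tag)
    assert plain == b"the secret message"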

Key distribution
As with all ciphers, how RSA public keys are distributed is important to security. Key
distribution must be secured against a man-in-the-middle attack. Suppose Eve has some way to
give Bob arbitrary keys and make him believe they belong to Alice. Suppose further that Eve can
intercept transmissions between Alice and Bob. Eve sends Bob her own public key, which Bob
believes to be Alice's. Eve can then intercept any ciphertext sent by Bob, decrypt it with her own
private key, keep a copy of the message, encrypt the message with Alice's public key, and send
the new ciphertext to Alice. In principle, neither Alice nor Bob would be able to detect Eve's
presence. Defenses against such attacks are often based on digital certificates or other
components of a public key infrastructure.

Timing attacks
If the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the
decryption times for several known ciphertexts, she can deduce the decryption key d quickly.
This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley
demonstrated a more practical attack capable of recovering RSA factorizations over a network
connection (e.g., from a Secure Socket Layer (SSL)-enabled webserver). This attack takes
advantage of information leaked by the Chinese remainder theorem optimization used by many
RSA implementations.

One way to thwart these attacks is to ensure that the decryption operation takes a constant
amount of time for every ciphertext. However, this approach can significantly reduce
performance.


Digital signature
A digital signature or digital signature scheme is a mathematical scheme for demonstrating the
authenticity of a digital message or document. A valid digital signature gives a recipient reason
to believe that the message was created by a known sender, and that it was not altered in
transit. Digital signatures are commonly used for software distribution, financial transactions,
and in other cases where it is important to detect forgery and tampering.

Digital signatures employ a type of asymmetric cryptography. For messages sent through an
insecure channel, a properly implemented digital signature gives the receiver reason to believe
the message was sent by the claimed sender. Digital signatures are equivalent to traditional
handwritten signatures in many respects; properly implemented digital signatures are more
difficult to forge than the handwritten type. Digital signature schemes in the sense used here are
cryptographically based, and must be implemented properly to be effective. Digital signatures
can also provide non-repudiation, meaning that the signer cannot successfully claim they did
not sign a message, while also claiming their private key remains secret; further, some non-
repudiation schemes offer a time stamp for the digital signature, so that even if the private key
is exposed, the signature is valid nonetheless. Digitally signed messages may be anything
representable as a bitstring: examples include electronic mail, contracts, or a message sent via
some other cryptographic protocol.

A digital signature scheme typically consists of three algorithms:

• A key generation algorithm that selects a private key uniformly at random from a set of possible private keys. The algorithm outputs the private key and a corresponding public key.

• A signing algorithm which, given a message and a private key, produces a signature.

• A signature verifying algorithm which, given a message, public key and a signature, either accepts or rejects the message's claim to authenticity.

Two main properties are required. First, a signature generated from a fixed message
and fixed private key should verify the authenticity of that message by using the
corresponding public key. Secondly, it should be computationally infeasible to generate
a valid signature for a party who does not possess the private key.
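
The three algorithms map directly onto library calls. A sketch assuming pycryptodome, using RSA PKCS#1 v1.5 signatures over a SHA-256 digest (one common instantiation, not the only one):

    from Crypto.PublicKey import RSA
    from Crypto.Signature import pkcs1_15
    from Crypto.Hash import SHA256

    # 1. Key generation: private key plus corresponding public key.
    private_key = RSA.generate(2048)
    public_key = private_key.publickey()

    # 2. Signing: message + private key -> signature.
    message = b"pay Bob 100"
    signature = pkcs1_15.new(private_key).sign(SHA256.new(message))

    # 3. Verification: message + public key + signature -> accept/reject.
    try:
        pkcs1_15.new(public_key).verify(SHA256.new(message), signature)
        print("signature valid")
    except ValueError:
        print("signature REJECTED")

    # Any tampering with the message makes verification fail.
    try:
        pkcs1_15.new(public_key).verify(SHA256.new(b"pay Eve 100"), signature)
    except ValueError:
        print("tampered message rejected")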


Digital Signature Standard


The DSS makes use of the Secure Hash Algorithm (SHA) described in Chapter 12 and presents a
new digital signature technique, the Digital Signature Algorithm (DSA).

The DSS Approach


The DSS uses an algorithm that is designed to provide only the digital signature function. Unlike
RSA, it cannot be used for encryption or key exchange. Nevertheless, it is a public-key technique.

The figure contrasts the DSS approach for generating digital signatures with that used in RSA. In the
RSA approach, the message to be signed is input to a hash function that produces a secure hash
code of fixed length. This hash code is then encrypted using the sender's private key to form the
signature. Both the message and the signature are then transmitted. The recipient takes the
message and produces a hash code. The recipient also decrypts the signature using the sender's
public key. If the calculated hash code matches the decrypted signature, the signature is
accepted as valid. Because only the sender knows the private key, only the sender could have
produced a valid signature.

The DSS approach also makes use of a hash function. The hash code is provided as input to a
signature function along with a random number k generated for this particular signature. The
signature function also depends on the sender's private key (PRa) and a set of parameters known
to a group of communicating principals. We can consider this set to constitute a global public
key (PUG). The result is a signature consisting of two components, labeled s and r.


MD5
In cryptography, MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash
function with a 128-bit hash value. MD5 has been employed in a wide variety of security
applications, and is also commonly used to check the integrity of files. However, it has been
shown that MD5 is not collision resistant; as such, MD5 is not suitable for applications like SSL
certificates or digital signatures that rely on this property. An MD5 hash is typically expressed
as a 32-digit hexadecimal number.

MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into 512-bit blocks (sixteen 32-bit little-endian integers); the message is padded so that its length is divisible by 512. The padding works as follows: first a single bit, 1, is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining bits are filled up with a 64-bit integer representing the length of the original message, in bits.

The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A,
B, C and D. These are initialized to certain fixed constants. The main algorithm then operates on
each 512-bit message block in turn, each block modifying the state. The processing of a message
block consists of four similar stages, termed rounds; each round is composed of 16 similar
operations.

The 128-bit (16-byte) MD5 hashes (also termed message digests) are typically represented as a
sequence of 32 hexadecimal digits. The following demonstrates a 43-byte ASCII input and the
corresponding MD5 hash:

MD5("The quick brown fox jumps over the lazy dog")

= 9e107d9d372bb6826bd81d3542a419d6

Even a small change in the message will (with overwhelming probability) result in a mostly
different hash, due to the avalanche effect. For example, adding a period to the end of the
sentence:

MD5("The quick brown fox jumps over the lazy dog.")

= e4d909c290d0fb1ca068ffaddf22cbd0
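
Both digests above can be reproduced with Python's standard hashlib module:

    import hashlib

    print(hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest())
    # 9e107d9d372bb6826bd81d3542a419d6
    print(hashlib.md5(b"The quick brown fox jumps over the lazy dog.").hexdigest())
    # e4d909c290d0fb1ca068ffaddf22cbd0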


SHA-1
In cryptography, SHA-1 is a cryptographic hash function designed by the National Security
Agency (NSA) and published by the NIST as a U.S. Federal Information Processing Standard. SHA
stands for Secure Hash Algorithm. The three SHA algorithms are structured differently and are
distinguished as SHA-0, SHA-1, and SHA-2. SHA-1 is very similar to SHA-0, but corrects an error
in the original SHA hash specification that led to significant weaknesses. The SHA-0 algorithm
was not adopted by many applications. SHA-2 on the other hand significantly differs from the
SHA-1 hash function.

SHA-1 is the most widely used of the existing SHA hash functions, and is employed in several
widely-used security applications and protocols. In 2005, security flaws were identified in SHA-
1, namely that a mathematical weakness might exist, indicating that a stronger hash function
would be desirable.

SHA-1 produces a 160-bit digest from a message with a maximum length of (2^64 − 1) bits. SHA-1 is based on principles similar to those used by Ronald L. Rivest of MIT in the design of the MD4 and MD5 message digest algorithms, but has a more conservative design.

RIPEMD
RIPEMD-160 (RACE Integrity Primitives Evaluation Message Digest) is a 160-bit message digest
algorithm (and cryptographic hash function). It is an improved version of RIPEMD, which in
turn was based upon the design principles used in MD4, and is similar in performance to the
more popular SHA-1.

There also exist 128, 256 and 320-bit versions of this algorithm, called RIPEMD-128, RIPEMD-
256, and RIPEMD-320, respectively. The 128-bit version was intended only as a drop-in
replacement for the original RIPEMD, which was also 128-bit, and which had been found to have
questionable security. The 256 and 320-bit versions diminish only the chance of accidental
collision, and don't have higher levels of security as compared to, respectively, RIPEMD-128 and
RIPEMD-160.

RIPEMD-160 was designed in the open academic community, in contrast to the NSA-designed
SHA-1 and SHA-2 algorithms. On the other hand, RIPEMD-160 appears to be used somewhat
less frequently than SHA-1, which may have caused it to be less scrutinized than SHA.


The 160-bit RIPEMD-160 hashes (also termed RIPE message digests) are typically represented
as 40-digit hexadecimal numbers. The following demonstrates a 43-byte ASCII input and the
corresponding RIPEMD-160 hash:

RIPEMD-160("The quick brown fox jumps over the lazy dog") =

37f332f68db77bd9d7edd4969571ad671cf9dd3b

Even a small change in the message will (with overwhelming probability) result in a completely
different hash, e.g. changing d to c:

RIPEMD-160("The quick brown fox jumps over the lazy cog") =

132072df690933835eb8b6ad0b77e7b6f14acad7

Pretty Good Privacy


Pretty Good Privacy (PGP) is a computer program that provides cryptographic privacy and
authentication. PGP is often used for signing, encrypting and decrypting e-mails to increase the
security of e-mail communications. It was created by Philip Zimmermann in 1991.

PGP encryption uses a serial combination of hashing, data compression, symmetric-key cryptography, and, finally, public-key cryptography; each step uses one of several supported algorithms. Each public key is bound to a user name and/or an e-mail address.


PGP Services

Figure: General Format of a PGP Message (from A to B)


Data compression
In computer science and information theory, data compression or source coding is the process
of encoding information using fewer bits (or other information-bearing units) than an
unencoded representation would use, through use of specific encoding schemes.

As with any communication, compressed data communication only works when both the sender
and receiver of the information understand the encoding scheme. For example, this text makes
sense only if the receiver understands that it is intended to be interpreted as characters
representing the English language. Similarly, compressed data can only be understood if the
decoding method is known by the receiver.

Compression is useful because it helps reduce the consumption of expensive resources, such as
hard disk space or transmission bandwidth. On the downside, compressed data must be
decompressed to be used, and this extra processing may be detrimental to some applications.
For instance, a compression scheme for video may require expensive hardware for the video to
be decompressed fast enough to be viewed as it is being decompressed (the option of
decompressing the video in full before watching it may be inconvenient, and requires storage
space for the decompressed video). The design of data compression schemes therefore involves
trade-offs among various factors, including the degree of compression, the amount of distortion
introduced (if using a lossy compression scheme), and the computational resources required to
compress and uncompress the data.

Lossless versus lossy compression


Lossless compression algorithms usually exploit statistical redundancy in such a way as to
represent the sender's data more concisely without error. Lossless compression is possible
because most real-world data has statistical redundancy. For example, in English text, the letter
'e' is much more common than the letter 'z', and the probability that the letter 'q' will be
followed by the letter 'z' is very small. Another kind of compression, called lossy data
compression or perceptual coding, is possible if some loss of fidelity is acceptable. Generally, a
lossy data compression will be guided by research on how people perceive the data in question.
For example, the human eye is more sensitive to subtle variations in luminance than it is to
variations in color. JPEG image compression works in part by "rounding off" some of this less-
important information. Lossy data compression provides a way to obtain the best fidelity for a
given amount of compression. In some cases, transparent (unnoticeable) compression is
desired; in other cases, fidelity is sacrificed to reduce the amount of data as much as possible.

Lossless compression schemes are reversible so that the original data can be reconstructed,
while lossy schemes accept some loss of data in order to achieve higher compression.

However, lossless data compression algorithms will always fail to compress some files; indeed,
any compression algorithm will necessarily fail to compress any data containing no discernible
patterns. Attempts to compress data that has been compressed already will therefore usually
result in an expansion, as will attempts to compress all but the most trivially encrypted data.


In practice, lossy data compression will also come to a point where compressing again does not
work, although an extremely lossy algorithm, like for example always removing the last byte of a
file, will always compress a file up to the point where it is empty.

An example of lossless vs. lossy compression is the following string:

25.888888888

This string can be compressed as:

25.[9]8

Interpreted as, "twenty five point 9 eights", the original string is perfectly recreated, just written
in a smaller form. In a lossy system, using

26

instead, the exact original data is lost, at the benefit of a smaller file.
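
A quick lossless round trip with Python's standard zlib module (an implementation of DEFLATE, discussed below). Note how highly redundant input shrinks dramatically, the original is reconstructed exactly, and recompressing already-compressed output gains nothing, as the text explains:

    import zlib

    data = b"25." + b"8" * 9000             # highly redundant input
    packed = zlib.compress(data)
    assert zlib.decompress(packed) == data  # lossless: exact reconstruction
    print(len(data), "->", len(packed), "bytes")

    # Compressing the already-compressed output yields no further saving.
    print(len(packed), "->", len(zlib.compress(packed)), "bytes")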

Lossy
Lossy image compression is used in digital cameras, to increase storage capacities with minimal
degradation of picture quality. Similarly, DVDs use the lossy MPEG-2 Video codec for video
compression.

In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or
less audible) components of the signal. Compression of human speech is often performed with
even more specialized techniques, so that "speech compression" or "voice coding" is sometimes
distinguished as a separate discipline from "audio compression". Different audio and speech
compression standards are listed under audio codecs. Voice compression is used in Internet
telephony for example, while audio compression is used for CD ripping and is decoded by audio
players.

Lossless
The Lempel-Ziv (LZ) compression methods are among the most popular algorithms for lossless
storage. DEFLATE is a variation on LZ which is optimized for decompression speed and
compression ratio, therefore compression can be slow. DEFLATE is used in PKZIP, gzip and PNG.
LZW (Lempel-Ziv-Welch) is used in GIF images. Also noteworthy are the LZR (LZ-Renau)
methods, which serve as the basis of the Zip method. LZ methods utilize a table-based
compression model where table entries are substituted for repeated strings of data. For most LZ
methods, this table is generated dynamically from earlier data in the input. The table itself is
often Huffman encoded (e.g. SHRI, LZX). A current LZ-based coding scheme that performs well is
LZX, used in Microsoft's CAB format.


MIME
Multipurpose Internet Mail Extensions (MIME) is an Internet standard that extends the format
of e-mail to support:

• Text in character sets other than ASCII

• Non-text attachments

• Message bodies with multiple parts

• Header information in non-ASCII character sets

MIME's use, however, has grown beyond describing the content of e-mail to describing content
type in general, including for the web.

Virtually all human-written Internet e-mail and a fairly large proportion of automated e-mail is
transmitted via SMTP in MIME format. Internet e-mail is so closely associated with the SMTP
and MIME standards that it is sometimes called SMTP/MIME e-mail.

The content types defined by MIME standards are also of importance outside of e-mail, such as
in communication protocols like HTTP for the World Wide Web. HTTP requires that data be
transmitted in the context of e-mail-like messages, although the data most often is not actually
e-mail.

MIME headers
MIME-Version

The presence of this header indicates the message is MIME-formatted. The value is typically
"1.0" so this header appears as

MIME-Version: 1.0

It should be noted that implementers have attempted to change the version number in the past
and the change had unforeseen results. It was decided at an IETF meeting to leave the version
number as is even though there have been many updates and versions of MIME.

Content-ID

The Content-ID header is primarily of use in multi-part messages (as discussed below); a
Content-ID is a unique identifier for a message part, allowing it to be referred to (e.g., in IMG
tags of an HTML message allowing the inline display of attached images). The content ID is
contained within angle brackets in the Content-ID header. Here is an example:

Content-ID: <5.31.32252.1057009685@server01.example.net>


The standards don't really have a lot to say about exactly what is in a Content-ID; they're only
supposed to be globally and permanently unique (meaning that no two are the same, even when
generated by different people in different times and places). To achieve this, some conventions
have been adopted; one of them is to include an at sign (@), with the hostname of the computer
which created the content ID to the right of it.

Content-Type

This header indicates the Internet media type of the message content, consisting of a type and
subtype, for example

Content-Type: text/plain

Content-Disposition

A MIME part can have:

• an inline content-disposition, which means that it should be automatically displayed when the message is displayed, or

• an attachment content-disposition, in which case it is not displayed automatically and requires some form of action from the user to open it.

Content-Transfer-Encoding

The Content-Transfer-Encoding MIME header has two-sided significance:

1. It indicates whether or not a binary-to-text encoding scheme has been used on top of the
original encoding as specified within the Content-Type header, and

2. If such a binary-to-text encoding method has been used it states which one.

• Suitable for use with normal SMTP:

o 7bit – up to 998 octets per line of the code range 1..127 with CR and LF (codes 13 and 10 respectively) only allowed to appear as part of a CRLF line ending. This is the default value.

o quoted-printable – used to encode arbitrary octet sequences into a form that satisfies the rules of 7bit. Designed to be efficient and mostly human readable when used for text data consisting primarily of US-ASCII characters but also containing a small proportion of bytes with values outside that range.

o base64 – used to encode arbitrary octet sequences into a form that satisfies the rules of 7bit. Designed to be efficient for non-text 8-bit data. Sometimes used for text data that frequently uses non-US-ASCII characters.


• Suitable for use with SMTP servers that support the 8BITMIME SMTP extension:

o 8bit – up to 998 octets per line with CR and LF (codes 13 and 10 respectively) only allowed to appear as part of a CRLF line ending.

• Suitable only for use with SMTP servers that support the BINARYMIME SMTP extension (RFC 3030):

o binary – any sequence of octets.
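
The headers described above can be seen in action with Python's standard email package: building a message with a text body and a small binary attachment produces MIME-Version, Content-Type, Content-Disposition and Content-Transfer-Encoding headers automatically (the addresses are placeholders).

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "a@example.net"
    msg["To"] = "b@example.net"
    msg["Subject"] = "demo"
    msg.set_content("Hello, plain text body.")     # text/plain part, 7bit
    msg.add_attachment(bytes(range(16)),           # binary part -> base64
                       maintype="application", subtype="octet-stream",
                       filename="data.bin")

    # Printing shows MIME-Version, the multipart Content-Type with its
    # boundary, and the per-part Content-Type / Content-Disposition /
    # Content-Transfer-Encoding headers.
    print(msg)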

S/MIME
S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public key encryption
and signing of MIME data.

S/MIME is on an IETF standards track and defined in a number of RFCs. S/MIME was originally developed by RSA Data Security Inc. The original specification used the then recently developed IETF MIME specification together with the de facto industry standard PKCS#7 secure message format.

S/MIME provides the following cryptographic security services for electronic messaging
applications: authentication, message integrity and non-repudiation of origin (using digital
signatures) and privacy and data security (using encryption). S/MIME specifies the
application/pkcs7-mime (smime-type "enveloped-data") type for data enveloping (encrypting):
the whole (prepared) MIME entity to be enveloped is encrypted and packed into an object
which subsequently is inserted into an application/pkcs7-mime MIME entity.

S/MIME functionality is built into the majority of modern e-mail software and interoperates
between them.

Functions
S/MIME provides the following functions:

• Enveloped data: This consists of encrypted content of any type and encrypted-content encryption keys for one or more recipients.

• Signed data: A digital signature is formed by taking the message digest of the content to be signed and then encrypting that with the private key of the signer. The content plus signature are then encoded using base64 encoding. A signed data message can only be viewed by a recipient with S/MIME capability.

• Clear-signed data: As with signed data, a digital signature of the content is formed. However, in this case, only the digital signature is encoded using base64. As a result, recipients without S/MIME capability can view the message content, although they cannot verify the signature.


• Signed and enveloped data: Signed-only and encrypted-only entities may be nested, so that encrypted data may be signed and signed data or clear-signed data may be encrypted.

Kerberos
Kerberos is a computer network authentication protocol, which allows nodes communicating
over a non-secure network to prove their identity to one another in a secure manner. It is also a
suite of free software published by Massachusetts Institute of Technology (MIT) that
implements this protocol. Its designers aimed primarily at a client–server model, and it
provides mutual authentication — both the user and the server verify each other's identity.
Kerberos protocol messages are protected against eavesdropping and replay attacks.

Kerberos builds on symmetric key cryptography and requires a trusted third party. Extensions
to Kerberos can provide for the use of public-key cryptography during certain phases of
authentication.

Description
Kerberos uses as its basis the symmetric Needham-Schroeder protocol. It makes use of a trusted
third party, termed a key distribution center (KDC), which consists of two logically separate
parts: an Authentication Server (AS) and a Ticket Granting Server (TGS). Kerberos works on the
basis of "tickets" which serve to prove the identity of users.

The KDC maintains a database of secret keys; each entity on the network — whether a client or
a server — shares a secret key known only to itself and to the KDC. Knowledge of this key serves
to prove an entity's identity. For communication between two entities, the KDC generates a
session key which they can use to secure their interactions. The security of the protocol relies
heavily on participants maintaining loosely synchronized time and on short-lived assertions of
authenticity called Kerberos tickets.

A simplified description of the protocol follows. The following abbreviations are used:

• AS = Authentication Server

• SS = Service Server

• TGS = Ticket-Granting Server

• TGT = Ticket-Granting Ticket

The client authenticates to the AS once using a long-term shared secret (e.g. a password) and
receives a TGT from the AS. Later, when the client wants to contact some SS, it can (re)use this
ticket to get additional tickets from TGS, for SS, without resorting to using the shared secret.
These tickets can be used to prove authentication to SS.
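
The ticket idea can be made concrete with a deliberately simplified toy; this is not the real Kerberos message format, and it assumes the third-party cryptography package for symmetric encryption. The KDC shares a key with every principal, and a ticket is simply data sealed under the key of the server that will consume it, so the client can carry it but neither read nor forge it:

    import json
    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    # Long-term secrets shared with the KDC (toy stand-ins for passwords).
    client_key, tgs_key, ss_key = (Fernet.generate_key() for _ in range(3))

    # AS exchange: the client authenticates once and receives a TGT.
    session_ct = Fernet.generate_key()                 # client <-> TGS session key
    tgt = Fernet(tgs_key).encrypt(                     # sealed for the TGS only
        json.dumps({"user": "alice", "key": session_ct.decode()}).encode())
    as_reply = Fernet(client_key).encrypt(session_ct)  # readable only by the client

    # TGS exchange: the client presents the TGT and receives a service
    # ticket sealed under the service server's key.
    tgt_contents = json.loads(Fernet(tgs_key).decrypt(tgt))
    session_cs = Fernet.generate_key()                 # client <-> SS session key
    service_ticket = Fernet(ss_key).encrypt(json.dumps(
        {"user": tgt_contents["user"], "key": session_cs.decode()}).encode())

    # SS: successfully decrypting the ticket proves the KDC issued it.
    print(json.loads(Fernet(ss_key).decrypt(service_ticket)))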

IPsec
Internet Protocol Security (IPsec) is a protocol suite for securing Internet Protocol (IP)
communications by authenticating and encrypting each IP packet of a data stream. IPsec also
includes protocols for establishing mutual authentication between agents at the beginning of
the session and negotiation of cryptographic keys to be used during the session. IPsec can be
used to protect data flows between a pair of hosts (e.g. computer users or servers), between a
pair of security gateways (e.g. routers or firewalls), or between a security gateway and a host.

IPsec is a dual mode, end-to-end, security scheme operating at the Internet Layer of the Internet
Protocol Suite or OSI model Layer 3. Some other Internet security systems in widespread use,
such as Secure Sockets Layer (SSL), Transport Layer Security (TLS) and Secure Shell (SSH),
operate in the upper layers of these models. Hence, IPsec can be used for protecting any
application traffic across the Internet. Applications don't need to be specifically designed to use
IPsec. The use of TLS/SSL, on the other hand, must typically be incorporated into the design of
applications.

Security Architecture
The IPsec suite is a framework of open standards. IPsec uses the following protocols to perform
various functions:

• A security association (SA) is set up by Internet Key Exchange (IKE and IKEv2) or Kerberized Internet Negotiation of Keys (KINK) by handling negotiation of protocols and algorithms and generating the encryption and authentication keys to be used by IPsec.

• Authentication Header (AH) provides connectionless integrity and data origin authentication for IP datagrams and provides protection against replay attacks.

• Encapsulating Security Payload (ESP) provides confidentiality, data origin authentication, connectionless integrity, an anti-replay service (a form of partial sequence integrity), and limited traffic flow confidentiality.

Modes of operation
Transport mode

In transport mode, only the payload (the data you transfer) of the IP packet is encrypted and/or
authenticated. The routing is intact, since the IP header is neither modified nor encrypted;
however, when the authentication header is used, the IP addresses cannot be translated, as this
will invalidate the hash value. The transport and application layers are always secured by hash,
so they cannot be modified in any way (for example by translating the port numbers). Transport
mode is used for host-to-host communications.

A means to encapsulate IPsec messages for NAT traversal has been defined by RFC documents
describing the NAT-T mechanism.

Tunnel mode

In tunnel mode, the entire packet (which need not be IP) is encrypted and/or authenticated. It is
then encapsulated into a new IP packet with a new IP header. Tunnel mode is used to create
Virtual Private Networks for network-to-network communications (e.g. between routers to link
sites), host-to-network communications (e.g. remote user access), and host-to-host
communications (e.g. private chat).

SSL
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide security for communications over networks such as the Internet. TLS and SSL encrypt the segments of network connections at the Transport Layer end-to-end.

Several versions of the protocols are in widespread use in applications like web browsing,
electronic mail, Internet faxing, instant messaging and voice-over-IP (VoIP).

The TLS protocol allows client/server applications to communicate across a network in a way designed to prevent eavesdropping and tampering. TLS provides endpoint authentication and communications confidentiality over the Internet using cryptography; RSA keys of 1024 and 2048 bits are commonly used.

In typical end-user/browser usage, TLS authentication is unilateral: only the server is authenticated (the client knows the server's identity), but not vice versa (the client remains unauthenticated or anonymous).

Working
A TLS client and server negotiate a stateful connection by using a handshaking procedure.
During this handshake, the client and server agree on various parameters used to establish the
connection's security.

• The handshake begins when a client connects to a TLS-enabled server requesting a secure connection, and presents a list of supported CipherSuites (ciphers and hash functions).

• From this list, the server picks the strongest cipher and hash function that it also supports and notifies the client of the decision.

• The server sends back its identification in the form of a digital certificate. The certificate usually contains the server name, the trusted certificate authority (CA), and the server's public encryption key.


• The client may contact the server that issued the certificate (the trusted CA as above) and confirm that the certificate is authentic before proceeding.

• In order to generate the session keys used for the secure connection, the client encrypts a random number (RN) with the server's public key (PbK), and sends the result to the server. Only the server should be able to decrypt it (with its private key (PvK)): this is the one fact that makes the keys hidden from third parties, since only the server and the client have access to this data. The client knows PbK and RN, and the server knows PvK and (after decryption of the client's message) RN. A third party may only know RN if PvK has been compromised.

• From the random number, both parties generate key material for encryption and decryption.

This concludes the handshake and begins the secured connection, which is encrypted and
decrypted with the key material until the connection closes.
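
The handshake just described is what Python's standard ssl module performs when wrapping a socket. A minimal client sketch (it assumes network access to example.com):

    import socket
    import ssl

    hostname = "example.com"
    context = ssl.create_default_context()   # trusted CAs, sane defaults

    with socket.create_connection((hostname, 443)) as tcp:
        # wrap_socket performs the full handshake: cipher negotiation,
        # certificate verification against the CA store, key establishment.
        with context.wrap_socket(tcp, server_hostname=hostname) as tls:
            print("protocol: ", tls.version())    # e.g. TLSv1.3
            print("cipher:   ", tls.cipher())     # the negotiated CipherSuite
            print("server:   ", tls.getpeercert()["subject"])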

Difference between SSL Session and SSL Connection

The difference between a connection and a session is that a connection is a live communication channel, while a session is a set of negotiated cryptographic parameters. You can close a connection but keep the session, even store it to disk, and subsequently resume it using another connection, perhaps in a completely different process, or even after a system reboot (of course, the stored session must be kept both on the client and on the server). On the other hand, you can renegotiate TLS parameters and create an entirely new session without interrupting the connection. The SSL_SESSION object is used for storing sessions so they can be resumed later; this avoids repeating some resource-consuming cryptographic operations.
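
Python's ssl module exposes this distinction directly: the session negotiated on one connection can be captured and handed to a later one. A sketch (whether resumption actually occurs depends on the server, and under TLS 1.3 session tickets arrive after the handshake, so this behaves most predictably with TLS 1.2):

    import socket
    import ssl

    hostname = "example.com"
    context = ssl.create_default_context()

    # First connection: negotiate a session, then close the connection.
    with socket.create_connection((hostname, 443)) as tcp:
        with context.wrap_socket(tcp, server_hostname=hostname) as tls:
            saved_session = tls.session     # the parameters outlive the channel

    # Second connection: a brand-new TCP channel resumes the old session,
    # skipping the expensive public-key operations.
    with socket.create_connection((hostname, 443)) as tcp:
        with context.wrap_socket(tcp, server_hostname=hostname,
                                 session=saved_session) as tls:
            print("session reused:", tls.session_reused)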


Firewall
A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit or deny network transmissions based upon a set of rules and other criteria.

Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.

Design Principles
1. All traffic from inside to outside, and vice versa, must pass through the firewall. This is
achieved by physically blocking all access to the local network except via the firewall.
Various configurations are possible, as explained later in this section.

2. Only authorized traffic, as defined by the local security policy, will be allowed to pass.
Various types of firewalls are used, which implement various types of security policies,
as explained later in this section.

3. The firewall itself is immune to penetration. This implies the use of a trusted system with a secure operating system.

There are several types of firewall techniques:

• Packet filter: Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. It is susceptible to IP spoofing.

• Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose a performance degradation.

• Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.

• Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.


Packet Filter
Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP
protocol stack, not allowing packets to pass through the firewall unless they match the
established rule set. The firewall administrator may define the rules; or default rules may apply.
The term "packet filter" originated in the context of BSD operating systems.

Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful
firewalls maintain context about active sessions, and use that "state information" to speed
packet processing. Any existing network connection can be described by several properties,
including source and destination IP address, UDP or TCP ports, and the current stage of the
connection's lifetime (including session initiation, handshaking, data transfer, or connection
completion). If a packet does not match an existing connection, it will be evaluated according to
the ruleset for new connections. If a packet matches an existing connection based on
comparison with the firewall's state table, it will be allowed to pass without further processing.

Stateless firewalls require less memory, and can be faster for simple filters that require less
time to filter than to look up a session. They may also be necessary for filtering stateless
network protocols that have no concept of a session. However, they cannot make more complex
decisions based on what stage communications between hosts have reached.

Modern firewalls can filter traffic based on many packet attributes, such as source IP address,
source port, destination IP address or port, and destination service such as WWW or FTP. They
can also filter based on protocols, TTL values, the netblock of the originator, and many other
attributes.
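
The following Python sketch shows the two paths described above under an assumed, deliberately
tiny rule shape: a stateful fast path through a connection table, and ordered rule evaluation
with a default-deny policy for packets that start new connections. Real filters such as iptables
or pf support far richer matching.

    # Minimal sketch of a stateful packet filter; rule shape is invented.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Packet:
        src_ip: str
        src_port: int
        dst_ip: str
        dst_port: int
        protocol: str                     # "tcp" or "udp"

    # Ordered, user-defined rules; first match wins: (action, protocol, dst_port).
    RULES = [
        ("allow", "tcp", 80),             # WWW
        ("allow", "tcp", 21),             # FTP control
        ("deny",  "tcp", 23),             # Telnet
    ]
    DEFAULT_ACTION = "deny"               # default-deny for anything unmatched

    state_table = set()                   # established connections (5-tuples)

    def filter_packet(pkt):
        key = (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port, pkt.protocol)
        if key in state_table:            # stateful fast path: known connection
            return "allow"
        for action, proto, port in RULES: # ruleset for new connections
            if pkt.protocol == proto and pkt.dst_port == port:
                if action == "allow":
                    state_table.add(key)  # remember the accepted connection
                return action
        return DEFAULT_ACTION

    print(filter_packet(Packet("10.0.0.5", 40000, "192.0.2.1", 80, "tcp")))  # allow
    print(filter_packet(Packet("10.0.0.5", 40001, "192.0.2.1", 23, "tcp")))  # deny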

Application Gateway
Application-layer or level firewalls work on the application level of the TCP/IP stack (i.e., all
browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an
application. They block other packets (usually dropping them without acknowledgment to the
sender). In principle, application firewalls can prevent all unwanted outside traffic from
reaching protected machines.

By inspecting all packets for improper content, firewalls can restrict or prevent outright the
spread of networked computer worms and trojans. The additional inspection criteria can add
extra latency to the forwarding of packets to their destination.
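
As a toy example of application-level inspection, the sketch below understands just enough of
the FTP protocol to decide, command by command, what may be forwarded. The allowed-command
list and the sample commands are assumptions made for the illustration.

    # Toy application-gateway logic for FTP control commands (illustrative only).
    ALLOWED_FTP_COMMANDS = {"USER", "PASS", "LIST", "RETR", "QUIT"}

    def inspect_ftp_command(line):
        # Forward only commands the gateway recognizes as safe.
        command = line.strip().split(" ", 1)[0].upper()
        return command in ALLOWED_FTP_COMMANDS

    for line in ("RETR report.txt", "DELE report.txt"):
        verdict = "forward" if inspect_ftp_command(line) else "drop silently"
        print(repr(line), "->", verdict)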

Circuit-Level Gateway
Circuit level gateways work at the session layer of the OSI model, or as a "shim-layer" between
the application layer and the transport layer of the TCP/IP stack. They monitor TCP
handshaking between packets to determine whether a requested session is legitimate.
Information passed to a remote computer through a circuit level gateway appears to have
originated from the gateway. This is useful for hiding information about protected networks.
Circuit level gateways are relatively inexpensive and have the advantage of hiding information
about the private network they protect. On the other hand, they do not filter individual packets.
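
A minimal sketch of that relay behaviour follows: after the gateway accepts the client and opens
its own outbound connection (so the remote peer sees the gateway's address), it simply copies
bytes in both directions with no per-packet checks. Hostnames and ports would come from the
gateway's configuration; error handling is omitted.

    # Sketch: circuit-level relay; no inspection once the circuit is set up.
    import socket
    import threading

    def pipe(src, dst):
        # Copy bytes one way until the sender closes its half of the circuit.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def relay(client, remote_host, remote_port):
        # The gateway, not the client, opens the outbound connection, so the
        # remote host sees traffic originating from the gateway's address.
        remote = socket.create_connection((remote_host, remote_port))
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        pipe(remote, client)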

Proxy Server
A proxy device (running either on dedicated hardware or as software on a general-purpose
machine) may act as a firewall by responding to input packets (connection requests, for
example) in the manner of an application, whilst blocking other packets.

Proxies make tampering with an internal system from the external network more difficult and
misuse of one internal system would not necessarily cause a security breach exploitable from
outside the firewall (as long as the application proxy remains intact and properly configured).
Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own
purposes; the proxy then masquerades as that system to other internal machines. While use of
internal address spaces enhances security, crackers may still employ methods such as IP
spoofing to attempt to pass packets to a target network.

Virus
A computer virus is a computer program that can copy itself and infect a computer. The term
"virus" is also commonly but erroneously used to refer to other types of malware, including but
not limited to adware and spyware programs that do not have the reproductive ability. A true
virus can spread from one computer to another (in some form of executable code) when its host
is taken to the target computer; for instance because a user sent it over a network or the
Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive.
Viruses can increase their chances of spreading to other computers by infecting files on a
network file system or a file system that is accessed by another computer.

Life Cycle of a Virus


Creation
Until a few years ago, creating a virus required knowledge of a computer programming
language. Today anyone with even a little programming knowledge can create a virus. Usually,
though, viruses are created by misguided individuals who wish to cause widespread, random
damage to computers.

Replication
Viruses replicate by nature. A well-designed virus will replicate for a long time before it
activates, which allows it plenty of time to spread.

Activation
Viruses that have damage routines will activate when certain conditions are met, for example,
on a certain date or when a particular action is taken by the user. Viruses without damage
routines don't activate, instead causing damage by stealing storage space.


Discovery
This phase doesn't always come after activation, but it usually does. When a virus is detected
and isolated, it is sent to the International Computer Security Association in Washington, D.C., to
be documented and distributed to antivirus developers. Discovery normally takes place at least
a year before the virus might have become a threat to the computing community.

Assimilation
At this point, antivirus developers modify their software so that it can detect the new virus. This
can take anywhere from one day to six months, depending on the developer and the virus type.

Eradication
If enough users install up-to-date virus protection software, any virus can be wiped out. So far
no viruses have disappeared completely, but some have long ceased to be a major threat.

Types of Viruses
Direct Action Viruses
The main purpose of this virus is to replicate and take action when it is executed. When a
specific condition is met, the virus will go into action and infect files in the directory or folder
that it is in and in directories that are specified in the AUTOEXEC.BAT file PATH. This batch file
is always located in the root directory of the hard disk and carries out certain operations when
the computer is booted.

Overwrite Viruses
Viruses of this kind are characterized by the fact that they delete the information contained in
the files that they infect, rendering them partially or totally useless once they have been
infected.

The only way to clean a file infected by an overwrite virus is to delete the file completely, thus
losing the original content.

Examples of this virus include: Way, Trj.Reboot, Trivial.88.D.

Boot Virus
This type of virus affects the boot sector of a floppy or hard disk. This is a crucial part of a disk,
in which information on the disk itself is stored together with a program that makes it possible
to boot (start) the computer from the disk.

The best way of avoiding boot viruses is to ensure that floppy disks are write-protected and
never start your computer with an unknown floppy disk in the disk drive.

Examples of boot viruses include: Polyboot.B, AntiEXE.


Macro Virus
Macro viruses infect files that are created using certain applications or programs that contain
macros. These mini-programs make it possible to automate series of operations so that they are
performed as a single action, thereby saving the user from having to carry them out one by one.

Examples of macro viruses: Relax, Melissa.A, Bablas, O97M/Y2K.

Directory Virus
Directory viruses change the paths that indicate the location of a file. By executing a program
(file with the extension .EXE or .COM) which has been infected by a virus, you are unknowingly
running the virus program, while the original file and program have been previously moved by
the virus.

Once the system is infected, it becomes impossible to locate the original files.

Polymorphic Virus
Polymorphic viruses encrypt or encode themselves in a different way (using different
algorithms and encryption keys) every time they infect a system.

This makes it impossible for antivirus software to find them using string or signature searches
(because the virus looks different in each encryption) and also enables them to create a large
number of copies of themselves.

Examples include: Elkern, Marburg, Satan Bug, and Tuareg.
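
A benign Python illustration of the signature problem: the same payload XOR-encoded under
different one-byte keys produces entirely different byte patterns, so a scanner searching for
one fixed byte string misses every re-encoded copy. The payload and keys are arbitrary values
chosen for the demonstration.

    # Why fixed signatures fail against re-encoded payloads (benign demo).
    payload = b"EXAMPLE-PAYLOAD"
    signature = payload                    # what a naive scanner searches for

    def xor_encode(data, key):
        return bytes(b ^ key for b in data)

    for key in (0x21, 0x5A, 0xC3):
        encoded = xor_encode(payload, key)
        print(hex(key), signature in encoded)   # False: the signature never matches

Each encoded copy decodes to the identical payload at run time, yet shares no byte string with
the others, which is exactly why scanners need techniques beyond plain signature search to
detect polymorphic code.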

File Infectors
This type of virus infects programs or executable files (files with an .EXE or .COM extension).
When one of these programs is run, directly or indirectly, the virus is activated, producing the
damaging effects it is programmed to carry out. The majority of existing viruses belong to this
category, and can be classified depending on the actions that they carry out.

Companion Viruses
Companion viruses can be considered a type of file infector virus, like resident or direct action types.
They are known as companion viruses because once they get into the system they "accompany"
the other files that already exist. In other words, in order to carry out their infection routines,
companion viruses can wait in memory until a program is run (resident viruses) or act
immediately by making copies of themselves (direct action viruses).

Some examples include: Stator, Asimov.1539, and Terrax.1069.

FAT Virus
The file allocation table (FAT) is the part of a disk used to store the information that links each
file to its location on the disk, and it is vital to the normal functioning of the computer.
This type of virus attack can be especially dangerous, by preventing access to certain sections of
the disk where important files are stored. The damage caused can result in information losses
from individual files or even entire directories.

Worms
A worm is a program very similar to a virus; it has the ability to self-replicate and can lead to
negative effects on your system, and, most importantly, worms can be detected and eliminated
by antivirus software.

Examples of worms include: PSWBugbear.B, Lovgate.F, Trile.C, Sobig.D, Mapson.

Trojans or Trojan Horses


Another unsavory breed of malicious code is the Trojan or Trojan horse, which unlike viruses
does not reproduce by infecting other files, nor does it self-replicate like worms.

Logic Bombs
They are not considered viruses because they do not replicate. They are not even programs in
their own right but rather camouflaged segments of other programs.

Their objective is to destroy data on the computer once certain conditions have been met. Logic
bombs go undetected until launched, and the results can be destructive.
