
Cryptography and Network Security

1. Explain the need for network security.


Computer security is required because many organizations can be damaged by hostile software or intruders. The damage may take several forms, which are obviously interrelated. These include:
Damage or destruction of computer systems.
Damage or destruction of internal data.
Loss of sensitive information to hostile parties.
Use of sensitive information to steal items of monetary value.
Use of sensitive information against the organization's customers, which may result in legal action by customers against the organization and loss of customers.
Damage to the reputation of an organization.
Monetary damage due to loss of sensitive information, destruction of data, hostile use of sensitive data, or damage to the reputation of the organization.
The methods used to accomplish these unscrupulous objectives are many and varied, depending on the circumstances.

2. What is security attack? Explain with examples.


Any action that compromises the security of information owned by an organization is called a security attack. Those who execute such actions, or cause them to be executed, are called attackers or opponents.
A computer-based system has three interrelated and valuable components: hardware, software, and data. Each of these assets offers value to different members of the community affected by the system. To analyze security, we can brainstorm about the ways in which the system or its information can experience some kind of loss or harm.
A vulnerability is a weakness in the security system. For example, a certain system may not verify a user's identity or password before allowing access to data.
A threat to a computing system is a set of circumstances that has the potential to cause loss or harm. Threats to a computing system may be either human-initiated or computer-initiated.
A human who exploits a vulnerability perpetrates an attack on the system. An attack can also be launched by another system, as when one system sends an overwhelming set of messages to another, virtually shutting down the second system's ability to function. We see this type of attack frequently, as denial-of-service attacks flood servers with more messages than they can handle.

3. What is the difference between substitution and transposition techniques?
Substitution is the simplest form of encryption, in which one letter is exchanged for another. A substitution is an acceptable way of encrypting text. Here are a few examples.
The Caesar cipher has an important place in history. Julius Caesar is said to have been the first to use this scheme, in which each letter is translated to a letter a fixed number of places after it in the alphabet. Caesar used a shift of 3, so that plaintext letter pi was enciphered as ciphertext letter ci by the rule ci = E(pi) = (pi + 3) mod 26.
A one-time pad is sometimes considered the perfect cipher. The name comes from an encryption method in which a large, nonrepeating set of keys is written on sheets of paper, glued together into a pad.
The Vernam cipher is a type of one-time pad devised by Gilbert Vernam for AT&T. The Vernam cipher is immune to most cryptanalytic attacks. The basic encryption combines an arbitrarily long nonrepeating sequence of numbers with the plaintext. Vernam's invention used an arbitrarily long punched paper tape that fed into a teletype machine. The tape contained random numbers that were combined with the characters typed into the teletype. The sequence of random numbers had no repeats, and each tape was used only once. As long as the key tape does not repeat and is not reused, this type of cipher is immune to cryptanalytic attack, because the available ciphertext does not display the pattern of the key.
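The combining step of the Vernam cipher can be sketched with bytewise XOR, a common modern stand-in for Vernam's tape addition. This is an illustrative sketch, not the original teletype scheme; the security holds only if the key is truly random, at least as long as the message, and never reused:

```python
import secrets

def vernam_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be truly random, at least as long as the message,
    # and never reused -- the security rests entirely on those conditions.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
vernam_decrypt = vernam_encrypt

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # fresh one-time key
ciphertext = vernam_encrypt(message, key)
assert vernam_decrypt(ciphertext, key) == message
```

Because each key byte is used exactly once, the ciphertext carries no repeating key pattern for a cryptanalyst to exploit.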

Transposition techniques
The goal of substitution is confusion; the encryption method is an attempt to make it difficult for a cryptanalyst or intruder to determine how a message and key were transformed into ciphertext. Transposition is an encryption in which the letters of the message are rearranged. With transposition, the cryptographer aims for diffusion, widely spreading the information from the message or the key across the ciphertext. Transpositions try to break established patterns. Because a transposition is a rearrangement of the symbols of a message, it is also known as a permutation.
The simplest transposition technique is the columnar transposition, a rearrangement of the characters of the plaintext into columns. The following set of characters is a five-column transposition. The plaintext characters are written in rows of five, one row after another, as shown here.

c1  c2  c3  c4  c5
c6  c7  c8  c9  c10
c11 c12 etc.
You form the resulting ciphertext by reading down the columns.
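The write-in-rows, read-down-columns procedure can be sketched as follows (a minimal illustration; `columns` defaults to the five-column layout shown above):

```python
def columnar_encrypt(plaintext: str, columns: int = 5) -> str:
    # Write the characters in rows of `columns` characters each,
    # then form the ciphertext by reading down the columns.
    rows = [plaintext[i:i + columns] for i in range(0, len(plaintext), columns)]
    return "".join(row[c] for c in range(columns) for row in rows if c < len(row))

# Rows "abcde" and "fghij" read off by columns:
print(columnar_encrypt("abcdefghij"))  # -> afbgchdiej
```

Note that encryption changes only the positions of the characters, not the characters themselves, which is why letter-frequency statistics survive a transposition.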

Encipherment/Decipherment Complexity
This technique simply rearranges the letters of the plaintext and reads them off again. Therefore, the algorithm requires a constant amount of work per character, and the time needed to apply the algorithm is proportional to the length of the message.
Digrams, Trigrams, and Other Patterns
Just as there are characteristic letter frequencies, there are also characteristic patterns of pairs of adjacent letters, called digrams. Letter pairs such as -re-, -th-, -en-, and -ed- appear very frequently.

4. Using Caesar Cipher encrypt the plaintext Sikkim Manipal University using key value 3.
Plaintext  ABCDEFGHIJKLMNOPQRSTUVWXYZ
Ciphertext defghijklmnopqrstuvwxyzabc
SIKKIM MANIPAL UNIVERSITY
would be encoded as
vlnnlp pdqlsdo xqlyhuvlwb
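A small sketch that reproduces this encoding: a shift of 3, with uppercase plaintext mapped to lowercase ciphertext as in the table above, and spaces left unchanged:

```python
def caesar_encrypt(plaintext: str, shift: int = 3) -> str:
    # Shift each letter `shift` places forward in the alphabet;
    # uppercase plaintext becomes lowercase ciphertext.
    out = []
    for ch in plaintext.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("a")))
        else:
            out.append(ch)  # leave spaces and punctuation as-is
    return "".join(out)

print(caesar_encrypt("SIKKIM MANIPAL UNIVERSITY"))
# -> vlnnlp pdqlsdo xqlyhuvlwb
```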

5. Explain different characteristics that identify a good encryption technique.
1. The implementation of the process should be as simple as possible.
This principle was formulated with hand implementation in mind: a complicated algorithm is prone to error or likely to be forgotten. With the development and popularity of digital computers, algorithms far too complex for hand implementation became feasible. Still, the issue of complexity is important. People will avoid an encryption algorithm whose implementation process severely hinders message transmission, thereby undermining security. And a complex algorithm is more likely to be programmed incorrectly.

2. The enciphering algorithm and the set of keys used should be free from complexity.
This principle implies that we should restrict neither the choice of keys nor the types of plaintext on which the algorithm can work. For instance, an algorithm that works only on plaintext having an equal number of As and Es is useless. Similarly, keys should not be restricted to, say, those whose letter values sum to a prime number. Restrictions such as these make the encipherment prohibitively complex. If the process is too complex, it will not be used. Furthermore, the key must be transmitted, stored, and remembered, so it must be short.

3. The amount of secrecy needed should determine the amount of labor appropriate for the encryption and decryption.
This principle is a reiteration of the principle of timeliness and of the earlier observation that even a simple cipher may be strong enough to deter the casual interceptor or to hold off any interceptor for a short time.
4. Errors in ciphering should not propagate and corrupt further information in the message.
This principle acknowledges that humans make errors in their use of enciphering algorithms. One error early in the process should not throw off the entire remaining ciphertext.
5. The size of the enciphered text should be no larger than that of the original message.

6. Compare Symmetric and Asymmetric Encryption Systems.


The two basic kinds of encryption systems are key based and block based. Key-based encryption uses either a single key or multiple keys. Block-based encryption operates on either a stream or a block of characters.
Based on keys, we have two types of encryption: symmetric (also called "secret key") and asymmetric (also called "public key"). Symmetric algorithms use one key, which works for both encryption and decryption. Usually, the decryption algorithm is closely related to the encryption one.
In a symmetric system, both encryption and decryption are performed using the same key. Such systems provide a two-way channel to their users: A and B share a secret key, and they can both encrypt information to send to the other as well as decrypt information from the other. As long as the key remains secret, the system also provides authentication: proof that a message received was not fabricated by someone other than the declared sender. Authenticity is ensured because only the legitimate sender can produce a message that will decrypt properly with the shared key.
Public key systems, on the other hand, excel at key management. By the nature of the public key approach, you can send a public key in an e-mail message or post it in a public directory. Only the corresponding private key, which presumably is kept private, can decrypt what has been encrypted with the public key.

For both kinds of encryption, a key must be kept well secured. Once the symmetric or private key is known by an outsider, all messages written previously or in the future can be decrypted (and hence read or modified) by the outsider. So, for all encryption algorithms, key management is a major issue. It involves storing, safeguarding, and activating keys.
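The two-way channel that a shared key provides can be illustrated with a deliberately toy cipher (bytewise addition of a key value; a real system would use a standard algorithm such as DES or AES). The point is only that one secret key drives both directions:

```python
def encrypt(message: bytes, key: int) -> bytes:
    # Toy symmetric cipher: add the key to every message byte mod 256.
    # Illustrative only -- not secure against any real attacker.
    return bytes((b + key) % 256 for b in message)

def decrypt(ciphertext: bytes, key: int) -> bytes:
    # Decryption is the closely related inverse operation.
    return bytes((b - key) % 256 for b in ciphertext)

shared_key = 42                      # known only to A and B
a_to_b = encrypt(b"hello from A", shared_key)
b_to_a = encrypt(b"hello from B", shared_key)
assert decrypt(a_to_b, shared_key) == b"hello from A"
assert decrypt(b_to_a, shared_key) == b"hello from B"
```

Both directions of traffic depend on the single shared secret, which is why disclosure of that one key compromises every message in both directions.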


7. Give the Overview of DES Algorithm.

The most widely used encryption scheme is based on the Data Encryption Standard (DES), a system developed for the U.S. government that was intended for use by the general public. DES specifies an algorithm to be implemented in electronic hardware devices and used for the cryptographic protection of computer data. Encrypting data converts it to an unintelligible form called cipher; decrypting cipher converts the data back to its original form. The algorithm specifies both enciphering and deciphering operations, which are based on a binary number called a key. Data can be recovered from cipher only by using exactly the same key that was used to encipher it.
The Data Encryption algorithm is a combination of substitution and transposition techniques. The strength of DES comes from using both techniques together, one on top of the other, for a total of 16 cycles. The sheer complexity of tracing a single bit through 16 iterations of substitutions and transpositions has so far stopped researchers in the public from identifying more than a handful of general properties of the algorithm.


The algorithm begins by encrypting the plaintext as blocks of 64 bits. The key is 64 bits long, but in fact it can be any 56-bit number. (The extra 8 bits are often used as check digits and do not affect encryption in normal implementations.) The user can change the key at will whenever there is uncertainty about the security of the old key.
The iterative substitutions and permutations are performed as outlined in the figure. DES uses only standard arithmetic and logical operations on numbers up to 64 bits long, so it is suitable for implementation in software on most current computers. Although complex, the algorithm is repetitive, making it suitable for implementation on a single-purpose chip.
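The repeated substitute-and-permute structure can be illustrated with a toy Feistel network. This is not DES (the real algorithm uses fixed S-boxes, permutation tables, and a key schedule); the stand-in round function here only shows how 16 rounds are applied and how decryption reuses the same structure with the subkeys reversed:

```python
def feistel_round(left, right, subkey):
    # One round: new_left = right, new_right = left XOR F(right, subkey).
    # Real DES computes F with S-box substitution and bit permutation;
    # this F is an arbitrary 32-bit mixing function for illustration.
    f = (right * 31 + subkey) & 0xFFFFFFFF
    return right, left ^ f

def feistel_encrypt(block, subkeys):
    # Split the 64-bit block into 32-bit halves and apply 16 rounds.
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:
        left, right = feistel_round(left, right, k)
    return (right << 32) | left     # final swap of the halves

def feistel_decrypt(block, subkeys):
    # The Feistel structure is its own inverse when the round
    # subkeys are applied in reverse order.
    return feistel_encrypt(block, list(reversed(subkeys)))

subkeys = [(0x0F1E2D3C + 7 * i) & 0xFFFFFFFF for i in range(16)]  # illustrative key schedule
block = 0x0123456789ABCDEF
assert feistel_decrypt(feistel_encrypt(block, subkeys), subkeys) == block
```

This symmetry between enciphering and deciphering is what makes the repetitive 16-cycle design attractive for a single-purpose chip: the same circuit serves both operations.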

8. Explain RSA technique with an example.


RSA stands for Rivest, Shamir, and Adleman, who invented the algorithm at MIT in 1978. It uses two keys: a private key and a public key. Typically, private key algorithms such as DES cannot protect against fraud by the sender or the receiver of a message.
RSA is an exponentiation cipher. It involves the following two steps.
1. Choose two large prime numbers p and q, and let n = pq. The totient φ(n) of n is the number of integers less than n with no factors in common with n.

Example: Let n = 10. The numbers that are less than 10 and relatively prime to (have no factors in common with) n are 1, 3, 7, and 9. Hence, φ(10) = 4. Similarly, if n = 21, the numbers that are relatively prime to n are 1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, and 20, so φ(21) = 12.
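The totient values in these examples can be checked directly with a brute-force sketch:

```python
from math import gcd

def totient(n: int) -> int:
    # phi(n): count the integers in [1, n) that share no factor with n.
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

print(totient(10))  # -> 4    (the numbers 1, 3, 7, 9)
print(totient(21))  # -> 12
```

For a product of two distinct primes, φ(pq) = (p - 1)(q - 1), which is how φ(77) = 60 is obtained in the example below.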

2. Choose an integer e < n that is relatively prime to φ(n). Find a second integer d such that ed mod φ(n) = 1. The public key is (e, n), and the private key is d.
Let m be a message. Then:
c = m^e mod n
and
m = c^d mod n.

Example: Let p = 7 and q = 11. Then n = 77 and φ(n) = 60. Alice chooses e = 17, so her private key is d = 53. In this cryptosystem, each plaintext character is represented by a number between 00 (A) and 25 (Z); 26 represents a blank. Bob wants to send Alice the message "HELLO WORLD." Using the representation above, the plaintext is 07 04 11 11 14 26 22 14 17 11 03. Using Alice's public key, the ciphertext is
07^17 mod 77 = 28
04^17 mod 77 = 16
11^17 mod 77 = 44
...
03^17 mod 77 = 75
or 28 16 44 44 42 38 22 42 19 44 75.
RSA can provide data and origin authentication. If Alice enciphers her message using her private key, anyone can read it, but if anyone alters it, the (altered) ciphertext cannot be deciphered correctly.
Example: Suppose Alice wishes to send Bob the message "HELLO WORLD" in such a way that Bob will be sure that Alice sent it. She enciphers the message with her private key and sends it to Bob. As indicated above, the plaintext is represented as 07 04 11 11 14 26 22 14 17 11 03. Using Alice's private key, the ciphertext is
07^53 mod 77 = 35
04^53 mod 77 = 09
11^53 mod 77 = 44
...
03^53 mod 77 = 05
or 35 09 44 44 49 31 22 49 40 44 05. In addition to origin authenticity, Bob can be sure that no letters were altered.
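Both examples can be reproduced with Python's built-in modular exponentiation, `pow(m, key, n)`; the parameter values follow the example above:

```python
def rsa_crypt(digits, key, n):
    # Apply modular exponentiation to each numeric character in turn.
    return [pow(m, key, n) for m in digits]

# Alice's parameters from the example: p = 7, q = 11.
n, e, d = 77, 17, 53
assert (e * d) % 60 == 1              # ed mod phi(n) = 1, phi(77) = 60

# "HELLO WORLD" with A = 00 .. Z = 25 and 26 = blank:
plaintext = [7, 4, 11, 11, 14, 26, 22, 14, 17, 11, 3]

enciphered = rsa_crypt(plaintext, e, n)    # Bob uses Alice's public key
print(enciphered)  # -> [28, 16, 44, 44, 42, 38, 22, 42, 19, 44, 75]

signed = rsa_crypt(plaintext, d, n)        # Alice uses her private key
assert rsa_crypt(signed, e, n) == plaintext  # anyone can verify with (e, n)
```

The final assertion shows why the signed message authenticates Alice: only her private exponent d produces values that decipher correctly under her public key.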

9. Explain different approaches used in judging the quality of security.
When we say that a program is "secure," we mean some degree of trust that the program enforces expected confidentiality, integrity, and availability.

Fixing Faults
One approach to judging quality in security is fixing faults. You might argue that a module in which 100 faults were discovered and fixed is better than another in which only 20 faults were discovered and fixed, suggesting that more rigorous analysis and testing had led to the finding of the larger number of faults. Early work in computer security was based on the paradigm of "penetrate and patch," in which analysts searched for and repaired faults. Often, a top-quality "tiger team" would be convened to test a system's security by attempting to cause it to fail. The test was considered to be a "proof" of security; if the system withstood the attacks, it was considered secure. Unfortunately, far too often the proof became a counterexample, in which not just one but several serious security problems were uncovered. The problem discovery in turn led to a rapid effort to "patch" the system to repair or restore the security. However, the patch efforts were largely useless, often making the system less secure rather than more secure because they frequently introduced new faults.
There are three reasons why.
The fault often had nonobvious side effects in places other than the immediate area of the fault.
Fixing the fault properly was often infeasible because doing so would have affected system functionality or performance.
The pressure to repair a specific problem encouraged a narrow focus on the fault itself and not on its context. In particular, the analysts paid attention to the immediate cause of the failure and not to the underlying design or requirements faults.

Unexpected Behavior
The inadequacies of penetrate-and-patch led researchers to seek a better way to be confident that code meets its security requirements. One way to do that is to compare the requirements with the behavior. That is, to understand program security, we can examine programs to see whether they behave as their designers intended or users expected. We call such unexpected behavior a program security flaw; it is inappropriate program behavior caused by a program vulnerability. There is no direct mapping of the terms "vulnerability" and "flaw" into the characterization of faults and failures. A flaw can be either a fault or a failure, and a vulnerability usually describes a class of flaws, such as a buffer overflow. In spite of the inconsistency, it is important to remember that we must view vulnerabilities and flaws from two perspectives, cause and effect, so that we see what fault caused the problem and what failure (if any) is visible to the user.

10. What are the different ways in which an operating system can assist or offer protection?


Separating one user's objects from another's is the basic way of protection. Separation in an operating system can occur in several ways.
Logical separation:
Users operate under the illusion that no other processes exist, as when an operating system constrains a program's accesses so that the program cannot access objects outside its permitted domain.
Physical separation:
Each process has its own physical objects, such as separate printers for output requiring different levels of security.
Cryptographic separation:
Each process protects its data and computations in such a way that they are unintelligible to outside processes.
Temporal separation:
Processes having different security requirements are executed at different times.
There are several ways an operating system can assist, offering protection at any of several levels.
Do not protect. Operating systems with no protection are appropriate when sensitive procedures are being run at separate times.
Isolate. An operating system providing isolation allows different processes to run concurrently, unaware of the presence of each other. Each process has its own address space, files, and other objects. The operating system must confine each process somehow, so that the objects of the other processes are completely concealed.
Share all or nothing. Each user declares an object to be either public or private, thereby making it available either to all users or only to its owner, respectively.
Share via access limitation. With protection by access limitation, the operating system checks the allowability of each user's potential access to an object. That is, access control is implemented for a specific user and a specific object. Lists of acceptable actions guide the operating system in determining whether a particular user should have access to a particular object. In some sense, the operating system acts as a guard between users and objects, ensuring that only authorized accesses occur.
Share by capabilities. An extension of limited access sharing, this form of protection allows dynamic creation of sharing rights for objects. The degree of sharing can depend on the owner or the subject, on the context of the computation, or on the object itself.
Limit use of an object. This form of protection limits not just the access to an object but the use made of that object after it has been accessed. For example, a user may be allowed to view a sensitive document, but not to print a copy of it. More powerfully, a user may be allowed access to data in a database to derive statistical summaries (such as average salary at a particular grade level), but not to determine specific data values (salaries of individuals).
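The access-limitation level can be sketched with a hypothetical access-control list; the user names, object name, and actions here are illustrative only:

```python
# Hypothetical access-control list: (user, object) -> allowed actions.
acl = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def check_access(user: str, obj: str, action: str) -> bool:
    # The OS acts as a guard between users and objects: every potential
    # access is checked against the list of acceptable actions.
    return action in acl.get((user, obj), set())

assert check_access("bob", "payroll.db", "write")
assert not check_access("alice", "payroll.db", "write")
assert not check_access("eve", "payroll.db", "read")   # unlisted users get nothing
```

Defaulting to an empty action set for unlisted (user, object) pairs mirrors the guard model: an access is denied unless it is explicitly authorized.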
