
Unit-IV

Cryptography Hash Functions:


Introduction,
Description of MD Hash Family,
Whirlpool,
SHA-512.
Digital Signature:
Comparison,
Process,
Services,
Attacks on Digital Signature,
Digital Signature Schemes,
Variations and Applications.
Key Management:
Symmetric-Key Distribution,
Kerberos,
Symmetric-Key Agreement,
Public-Key Distribution,
Hijacking.
Cryptography Hash Functions: Introduction

Message digest or Hash Function


A message digest or hash function is used to turn an input of
arbitrary length into an output of fixed length.
A hash is a one-way function (an irreversible algorithm) used
primarily in cryptography.

Message (arbitrary length) --> HASH FUNCTION --> Message Digest / Hash (fixed length)

- The hash function chops and mixes the data to create the
fingerprint, often called the hash value.
- The hash value (also called the message digest or digital fingerprint) is
commonly represented as a short string of random-looking letters and
numbers.
Advantages of a message digest:
- The output always has the same length.
- A short hash value is easy to process and store.
- It consumes less time while in transit.
- Comparison of long messages becomes very easy.
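As an informal illustration, the fixed-length property can be seen with Python's standard hashlib module (the choice of SHA-256 here is only an example):

    import hashlib

    short_msg = b"hi"
    long_msg = b"x" * 1_000_000          # one million bytes

    # Inputs of very different lengths always yield a digest of the same size
    # (32 bytes / 256 bits for SHA-256).
    print(hashlib.sha256(short_msg).hexdigest())
    print(hashlib.sha256(long_msg).hexdigest())
    print(len(hashlib.sha256(long_msg).digest()))   # 32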
Properties of Hash Functions
- Collision resistant: it is infeasible to find any two different messages with the same hash value.
- One-way: given only a hash value, it is infeasible to construct a message (preimage) that generated that hash.

Applications of hash functions:
1. Digital Signatures
Digital signatures are perhaps the most demanding application of hash functions: compute the hash of the message, then apply the private key to the hash to generate the signature.

Computing a digital signature for a long message is very time consuming. However, computing a digital signature for a message that is only 128 or 160 bits long can be done very quickly. So, instead of digitally signing the message itself, the message's hash is signed.

To verify the signature, the recipient computes the hash value of the received message and compares it against the fingerprint that was signed. If they are the same, then the message received is authentic.
Applications of hash functions Contd…
2. Message Authentication Code (MAC)
A message authentication code involves an algorithm (often a one-way
hash function or a block cipher) that accepts a secret key and a
message as input; it then produces a MAC (sometimes known as a tag).
A MAC is somewhat similar to a digital signature and is also called a
"keyed hash function".
This process provides both an integrity check, because a different MAC
will result if the message has been altered, and an authenticity check,
because only a person knowing the secret key could have produced the MAC.
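A minimal sketch of a keyed hash using Python's standard hmac module; the key and message values are made-up examples:

    import hmac, hashlib

    key = b"shared-secret-key"                  # assumed shared key (illustration only)
    message = b"Transfer 100 to Bob"

    # Sender computes the tag and transmits it along with the message.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # Receiver, who knows the same key, recomputes the tag and compares.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))   # True -> message is authentic and unaltered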
3. Integrity Protection
File transmissions over networks such as the Internet may sometimes
introduce small errors. To verify whether the received file is identical
to the original, the recipient computes the hash of the received file
and compares it to the hash of the original file.
4. Malicious Code Detection
5. Random Number Generation
MAC Vs Digital Signatures

A MAC differs from a digital signature in that MAC values are both generated
and verified using the same secret key.

This implies that the sender and receiver must agree on the key before
initiating communication. For the same reason, MACs do not provide the
non-repudiation property offered by digital signatures.

With a digital signature, since the private key is accessible only to its
holder, the signature proves that the document was signed by none other
than that holder.

Thus a digital signature does offer non-repudiation.

To be considered secure, a MAC function must resist existential forgery
under chosen-plaintext attacks.
Description of MD Hash Family

MD2, MD4, and MD5 are message-digest algorithms developed by Rivest.

They are meant for digital signature applications where a large message
has to be "compressed" in a secure manner before being signed with the
private key.

All three algorithms take a message of arbitrary length and produce a
128-bit message digest.
While the structures of these algorithms are
somewhat similar, the design of MD2 is quite
different from that of MD4 and MD5. MD2 was
optimized for 8-bit machines, whereas MD4
and MD5 were aimed at 32-bit machines.
MD5 working: MD5 was designed by R. Rivest based on the Merkle-Damgård
construction for building collision-resistant cryptographic hash
functions.
In this method the original message is padded and split into N
equal-sized blocks, and a 128-bit hash value of the entire message is
computed through chaining.

In the diagram, the one-way compression function is denoted by f, and transforms two fixed
length inputs to an output of the same size as one of the inputs.

The algorithm starts with an initial value, the initialization vector (IV). The IV is a fixed value
(algorithm or implementation specific).
MD5 working contd…
For each message block, the compression (or compacting) function f takes the result so far,
combines it with the message block, and produces an intermediate result. The last block is
padded with zeros as needed and bits representing the length of the entire message are
appended.

To harden the hash further, the last result is then sometimes fed through a finalisation
function.

The finalisation function can have several purposes, such as:
 - compressing a bigger internal state (the last result) into a smaller output hash size, or
 - guaranteeing a better mixing and avalanche effect on the bits in the hash sum.
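The chaining described above can be sketched with a toy Merkle–Damgård construction in Python; the block size, IV and compression function below are invented for illustration and are not MD5's real parameters:

    # Toy Merkle-Damgard sketch (NOT MD5 itself): shows only the chaining structure.
    BLOCK = 16                                   # toy block size in bytes
    IV = 0x0123456789ABCDEF                      # toy initialization vector (fixed value)

    def compress(state: int, block: bytes) -> int:
        # Toy one-way compression function f: mixes a block into the running state.
        for b in block:
            state = ((state * 31) ^ b) & 0xFFFFFFFFFFFFFFFF
        return state

    def toy_hash(message: bytes) -> int:
        orig_len = len(message)
        message += b"\x00" * (-orig_len % BLOCK)      # pad the last block with zeros
        message += orig_len.to_bytes(BLOCK, "big")    # append the message length
        state = IV
        for i in range(0, len(message), BLOCK):       # feed each block through f, chaining the result
            state = compress(state, message[i:i + BLOCK])
        return state                                  # final state is the digest

    print(hex(toy_hash(b"hello world")))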

Note: Though MD5 is widely known, it is not a very secure hash. Many
attacks against it exist, such as the rainbow-table attack, and the
security of the MD5 hash function is severely compromised.
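For completeness, MD5 is still available in Python's hashlib, though it should not be relied on where collision resistance matters:

    import hashlib

    print(hashlib.md5(b"hello world").hexdigest())
    # Changing a single character produces a completely different digest
    # (the avalanche effect mentioned above).
    print(hashlib.md5(b"hello worle").hexdigest())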
Whirlpool Block cipher based Hash function
The hash function Whirlpool was designed in part by a co-inventor of Rijndael, the cipher adopted
as the Advanced Encryption Standard (AES).

Whirlpool is based on the use of a block cipher for the compression function. There has
traditionally been little interest in the use of block-cipher-based hash functions because of
the demonstrated security vulnerabilities of the structure.

The following are potential drawbacks:


 Block ciphers do not possess the properties of randomizing functions. For example, they
are invertible. This lack of randomness may lead to weaknesses that can be exploited.

 Block ciphers typically exhibit other regularities or weaknesses. For example, it has been
demonstrated how to compromise many hash schemes based on properties of the underlying
block cipher.

 Typically, block-cipher-based hash functions are significantly slower than hash functions
designed specifically for hashing.
 A principal measure of the strength
of a hash function is the length of
the hash code in bits. For block-
cipher-based hash codes, proposed
designs have a hash code length
equal to either the cipher block
length or twice the cipher block
length. Traditionally, cipher block
length has been limited to 64 bits
(e.g., DES, triple DES), resulting in a
hash code of questionable
strength.
SHA (Secure Hash Algorithm)
- Developed by NIST (National Institute of Standards and Technology).
- Published as a FIPS (Federal Information Processing Standard) in 1993.
- A revised version (SHA-1) was released in 1995.
- It is based on the hash function MD4 and its design.
- SHA-1 produces a 160-bit hash and accepts an input of at most 2^64 bits.
- Later releases include SHA-256, SHA-384 and SHA-512.
- All of the above share the same structure and mathematical model.
Overview of SHA-1
- Like MD4 and MD5, SHA-1 operates in stages.
- Each stage mangles the pre-stage message digest through a sequence of operations
based on the current message block.
- At the end of a stage, each word of the mangled message digest is added to its
pre-stage value to produce the post-stage value.
- The 160-bit message digest consists of five 32-bit words, called A, B, C, D and E,
which are initialized to the constants:
  A = 0x67452301
  B = 0xEFCDAB89
  C = 0x98BADCFE
  D = 0x10325476
  E = 0xC3D2E1F0
- After the last stage, the value of A|B|C|D|E is the message digest for the entire
message.
SHA-1 Processing Steps
Step 1: Append padding bits
The message is padded so that its length is congruent to 448 modulo 512
(length ≡ 448 mod 512); this leaves room for the 64-bit length field appended
in Step 2, after which the message is a multiple of 512 bits in length.
Padding is always added, even if the message is already the desired length,
so the number of padding bits is in the range 1 to 512. The padding consists
of a single 1-bit followed by the necessary number of 0-bits.
[ Padding : If an encryption algorithm requires plaintext to be a multiple of
some no. of bytes, the padding field is used to expand the plaintext to the
required length. Additional padding may be added to provide partial traffic
flow confidentiality by concealing the actual length of the payload.]
Step 2: Append length

A block of 64 bits is appended to the message. This block is treated as an
unsigned 64-bit integer and contains the length of the original message. The
outcome of the first two steps is a message that is an integer multiple of
512 bits in length.
The expanded message is represented as a sequence of 512-bit blocks
Y0, Y1, ..., Y(L-1), where L is the number of blocks, so the total length of the
message is L x 512 bits. Equivalently, the result is a multiple of sixteen 32-bit
words (512 = 16 x 32).
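A small sketch of steps 1 and 2 (the padding plus the 64-bit length field) in Python; the function name is ours, chosen only for illustration:

    def sha1_pad(message: bytes) -> bytes:
        # Step 1: append a single 1 bit (0x80 byte) then 0 bits until the
        # length is congruent to 448 mod 512 bits.
        bit_len = len(message) * 8
        padded = message + b"\x80"
        while (len(padded) * 8) % 512 != 448:
            padded += b"\x00"
        # Step 2: append the original length as an unsigned 64-bit integer.
        padded += bit_len.to_bytes(8, "big")
        return padded

    padded = sha1_pad(b"abc")
    print(len(padded) * 8 % 512)    # 0 -> total length is a multiple of 512 bits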

Step 3: Initialize the MD buffer

A 160-bit buffer is used to hold the initial, intermediate and final results of the
hash function. The buffer can be represented as five 32-bit registers (A, B, C, D, E),
initialized to the constants listed above.
Step 4: Process the message in 512-bit (16-word) blocks
The heart of the algorithm, known as the compression function, consists of four
stages of 20 rounds each.
All four stages have a similar structure but use different primitive logical
functions, referred to as f1, f2, f3 and f4.
Each stage takes as input the current 512-bit block being processed (Yq) and
the 160-bit buffer value ABCDE, and updates the contents of the buffer.
Each round also makes use of an additive constant Kt (in hexadecimal).
The output of the fourth stage (the eightieth round) is added to the input of the
first stage, the chaining value CVq, to produce the next chaining value.

Step 5: Output:
After all L 512-bit blocks have been processed, the output from the Lth stage is
the 160-bit message digest.
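The resulting digest lengths can be checked quickly with Python's hashlib (SHA-1 and SHA-512 are both in the standard library):

    import hashlib

    msg = b"abc"
    print(hashlib.sha1(msg).hexdigest())
    print(len(hashlib.sha1(msg).digest()) * 8)      # 160 bits
    print(len(hashlib.sha512(msg).digest()) * 8)    # 512 bits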
Checksums and CRC Function
 A Cyclic Redundancy Check (CRC) is a type of hash function used to provide a
"checksum", a small fixed number of bits computed over a block of data such
as a packet of network traffic or a block of a computer file.

 The checksum is used to detect errors after transmission or storage.

 A CRC is computed and appended before transmission or storage, and verified
afterwards by the recipient to confirm that no changes occurred in transit.

 Checksums are useful for detecting accidental modifications such as corrupted
stored data or errors in communication channels.

 However, they provide no security against a malicious agent, as their simple
mathematical structure makes them trivial to circumvent.

 To provide that level of integrity, a cryptographic hash function is necessary.
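The difference can be illustrated with Python's zlib.crc32 (an error-detecting checksum) next to a cryptographic hash; the data value is an arbitrary example:

    import zlib, hashlib

    data = b"a block of file or packet data"

    # CRC-32: fast, detects accidental corruption, but trivial for an attacker to forge.
    print(hex(zlib.crc32(data)))

    # Cryptographic hash: required when integrity must hold against a malicious agent.
    print(hashlib.sha256(data).hexdigest())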
Digital Signature
A digital signature is a digital code (generated and authenticated by
public-key encryption) which is attached to an electronically transmitted
document to verify its contents and the sender's identity.
Electronic signature vs. physical signature
An electronic signature is more widely accepted. The main reason is that a
physical signature can easily be forged or tampered with, while an electronic
signature has many layers of security and authentication and is trusted by
companies and professionals world-wide.

Sometimes referred to as a cryptographic signature, a digital signature is
considered the most "secure" type of electronic signature. It includes a
certificate of authority, such as a Windows certificate, to ensure the
validity of the signatory (the signature's author and owner).

The parties on either side of a digital signature can also detect whether the
signed document was altered in any way that would invalidate it.

In addition, electronic messages are signed with the sender's private key and
verified by anyone who can access the sender's public key; this further
ensures that both parties are who they say they are and that the content of
the message has not been changed or intercepted.
Digital Signature process:
The digital signature process can be divided into two parts:

1. Signature Generation:
 - The sender generates a pair of keys: a public key and a private key.
 - The message digest is generated from the message using a hash function.
 - The digital signature is generated from the message digest with the private key.
 - The message, the digital signature, and the public key are sent to the receiver.

2. Signature Verification:
 - The receiver generates the message digest from the message using the same hash function.
 - The digital signature is verified against the message digest using the public key.
Digital Signature Services:
A digital signature provides the following services:
 - Message authentication
 - Data integrity
 - Non-repudiation

 Message authentication - When the verifier validates the digital signature
using the public key of the sender, he is assured that the signature was
created only by the sender who possesses the corresponding secret private
key and no one else.

 Data integrity - If an attacker has access to the data and modifies it, the
digital signature verification at the receiver end fails: the hash of the
modified data and the output provided by the verification algorithm will not
match. Hence, the receiver can safely reject the message, assuming that data
integrity has been breached.

 Non-repudiation - Since it is assumed that only the signer has knowledge of
the signature key, only he can create a unique signature on given data. Thus
the receiver can present the data and the digital signature to a third party
as evidence if any dispute arises in the future.
Digital Signature Schemes
There are several approaches to generating digital signatures; the RSA and
DSS approaches are the most common.

RSA Digital Signature Process


In the RSA digital signature process, the
private key is used to encrypt only the
message digest.
The encrypted message digest becomes
the digital signature and is attached to the
original data.
To verify the contents of digitally signed data, the recipient generates a new message digest from the data that
was received, decrypts the original message digest with the originator's public key, and compares the decrypted
digest with the newly generated digest. If the two digests match, the integrity of the message is verified. The
identity of the originator is also confirmed, because the public key can decrypt only data that has been encrypted
with the corresponding private key.
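A minimal hash-then-sign sketch using the third-party Python cryptography package; the key size and PKCS#1 v1.5 padding chosen here are our assumptions, not mandated by the text:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"important document"

    # Sign: the library hashes the message and encrypts the digest with the private key.
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Verify: raises InvalidSignature if the message or signature was altered.
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature verified")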
DSS Approach
The DSS approach makes use of a hash function. The hash code is provided as input to a
signature function along with a random number k generated for this particular
signature.

The signature function also depends on the sender's private key (PRa) and a set of
global public parameters (PUg). The result is a signature consisting of two components,
s and r.

At the receiving end, the hash code of the incoming message is generated and, together
with the signature, is input to a verification function. The verification function also
depends on the global public parameters as well as the sender's public key (PUa). If the
output of the verification function equals the signature component r, then the signature
is valid.
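A sketch of DSS-style signing with the same cryptography package's DSA implementation; the per-signature random k is generated internally by the library:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import dsa

    private_key = dsa.generate_private_key(key_size=2048)
    public_key = private_key.public_key()

    message = b"message to be signed"

    # sign() hashes the message and returns the (r, s) pair, DER-encoded.
    signature = private_key.sign(message, hashes.SHA256())

    # verify() raises InvalidSignature if the computed check value does not match r.
    public_key.verify(signature, message, hashes.SHA256())
    print("DSS signature verified")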
Key Management: Symmetric-Key Distribution
If two users want to communicate, they need to exchange a secret key. If a user
wants to communicate with n other users, exchanging keys directly over the Internet
is definitely not a secure method. It is obvious that we need an efficient way to
maintain and distribute secret keys.
Kerberos is the most commonly used example of cryptographic authentication
technology.
Kerberos is a distributed network security authentication service/protocol developed
at MIT.
Kerberos is a secret-key / symmetric-key cryptography based service for providing
authentication in a network.

When a user A first logs into a workstation by typing an account name and password, the
period from login to logout is termed a login session. During this session user A will
probably need to access remote resources, which in turn require authentication; the
authentication procedure is performed by user A's workstation on user A's behalf. User A
need not be aware of those steps.
Symmetric-Key Distribution Contd…

Kerberos has the ability to distribute "session keys" to allow encrypted data
streams over an IP network.

Kerberos builds on symmetric-key cryptography and requires a Key
Distribution Center (KDC), which is a trusted third party for the purpose of
distributing the keys.

An extension of Kerberos uses public-key cryptography.

Kerberos is used to secure particularly vulnerable communications such as FTP,
Telnet and other Internet protocols.
KDC Architecture
In Kerberos, all authentications take place between clients and servers. So in
Kerberos terminology, a "Kerberos client" is any entity that gets a service
ticket for a Kerberos service.

A client is typically a user, but any principal can be a client.

The term "Kerberos server" generally refers to the Key Distribution Center, or
the KDC for short. The KDC implements:
 the Authentication Service (AS) and
 the Ticket Granting Service (TGS).

The KDC has a copy of every password associated with every principal (user).
For this reason, it is absolutely vital that the KDC be as secure as possible.

Most KDC implementations store the principals (user’s details) in a database,


so you may hear the term "Kerberos database" applied to the KDC.
KDC Architecture Contd…

1. The user logs on to a workstation and requests a service on the host.

2. The AS verifies the user's access rights in its database, then creates a
ticket-granting ticket and a session key. The results are encrypted using a
key derived from the user's password.

3. The workstation prompts the user for the password, uses the password to
decrypt the incoming message, and then sends the ticket and an authenticator
(containing the user's name, network address and time) to the TGS.
KDC Architecture Contd…

4. The TGS decrypts the ticket and authenticator, verifies the request, and
then creates a ticket for the requested server.

5. The workstation sends the ticket and authenticator to the application
server.

6. The server verifies that the ticket and authenticator match, and then
grants access to the service. If mutual authentication is required, the
server returns an authenticator.
Diffie-Hellman key Exchange Algorithm
OR
Symmetric-Key Agreement
The purpose of the algorithm is to enable two users to exchange a secret
key securely over an insecure channel, so that the key can be used for
subsequent encryption of messages.

Developed by Whitfield Diffie and Martin Hellman in 1976.


Symmetric-Key Agreement Contd….
Step 1
 - Choose p, a large prime on the order of 1024 bits.
 - Choose g, a generator of order p-1.
 - Both p and g are global public elements.
Step 2
 - User A chooses a large random number x such that 0 <= x <= p-1 and calculates R1 = g^x mod p.
Step 3
 - User B chooses a large random number y such that 0 <= y <= p-1 and calculates R2 = g^y mod p.
Step 4
 - User A sends R1 to user B, but does not send the value x.
Step 5
 - User B sends R2 to user A, but does not send the value y.
Step 6
 - User A calculates k = (R2)^x mod p.
Step 7
 - User B calculates k = (R1)^y mod p.
Symmetric-Key Agreement Contd….

Example
Assumptions: g = 7 and p = 23
User A chooses x = 3 and calculates R1 = 7^3 mod 23 = 21
User B chooses y = 6 and calculates R2 = 7^6 mod 23 = 4
User A sends the number 21 to user B
User B sends the number 4 to user A
User A calculates the symmetric key k = 4^3 mod 23 = 18
User B calculates the symmetric key k = 21^6 mod 23 = 18
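The example above can be checked directly in Python using modular exponentiation (pow with three arguments):

    p, g = 23, 7                 # public parameters from the example
    x, y = 3, 6                  # private values chosen by A and B

    R1 = pow(g, x, p)            # A computes 7^3 mod 23 = 21 and sends it to B
    R2 = pow(g, y, p)            # B computes 7^6 mod 23 = 4 and sends it to A

    k_A = pow(R2, x, p)          # A: 4^3 mod 23 = 18
    k_B = pow(R1, y, p)          # B: 21^6 mod 23 = 18
    print(R1, R2, k_A, k_B)      # 21 4 18 18 -- both sides share the same key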
Public-Key Distribution
OR
X.509 Authentication Service: (X.509 Certificate)
 ITU-T (International Telecommunication Union – Telecommunication
Standardization Sector) recommendation X.509 is part of the X.500 series of
recommendations that define a directory service.
 The directory is a server or distributed set of servers that maintains a database of
information about users.
 The information includes a mapping from user name to network address, as well as
other attributes and information about the users.
 X.509 defines a framework for the provision of authentication services by the X.500
directory to its users. The directory may serve as a repository of public-key
certificates.
Public-Key Distribution Contd…

 Each certificate contains the public key of a user and is signed with the private key
of a trusted certification authority. X.509 defines alternative authentication
protocols based on the use of public-key certificates.
 X.509 is an important standard because the certificate structure and
authentication protocols defined in X.509 are used in a variety of contexts (SSL,
SET, etc.).
 A third version of X.509 was issued in 1995 and revised in 2000.

Certificate: a digitally signed statement binding the identifying information
of a user, computer or service to a public/private key pair; it is commonly
used in the process of authentication and for securing information on
networks.
Public-Key Distribution Contd…

Digital certificate Format with Authentication


Public-Key Distribution Contd…

Digital certificate Format


1. Version: Differentiates among successive versions of the certificate format; the default version
is 1. Three versions exist.
2. Serial number: A unique integer value that distinguishes this certificate from all others issued
by the same CA.
3. Signature algorithm identifier: The algorithm used to sign the certificate, together with any
associated parameters. Because this information is repeated in the Signature field at the end of
the certificate, this field has little, if any, utility.
4. Issuer name: X.500 name of the CA that created and signed this certificate.
5. Period of validity: Consists of two dates: the first and the last dates on which the certificate is valid.
6. Subject name: The name of the user to whom this certificate refers. That is, this certificate
certifies the public key of the subject, who holds the corresponding private key.
7. Subject's public-key information: The public key of the subject, plus an identifier of the
algorithm for which this key is to be used, together with any associated parameters.
8. Issuer unique identifier: An optional bit string used to identify the issuing CA uniquely in
the event the X.500 name has been reused for different entities.
9. Subject unique identifier: An optional bit string used to identify the subject uniquely in the
event the X.500 name has been reused for different entities.
10. Extensions: A set of one or more extension fields. Extensions were added in version 3.
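The fields above can be read from a certificate with the third-party Python cryptography package; "server.pem" is a hypothetical file name used only for illustration:

    from cryptography import x509

    with open("server.pem", "rb") as f:          # hypothetical PEM-encoded certificate
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.version)                          # 1. version of the certificate format
    print(cert.serial_number)                    # 2. unique serial number
    print(cert.signature_algorithm_oid)          # 3. signature algorithm identifier
    print(cert.issuer)                           # 4. X.500 name of the issuing CA
    print(cert.not_valid_before, cert.not_valid_after)   # 5. period of validity
    print(cert.subject)                          # 6. subject name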
Public-Key Distribution Contd…

In brief, certificate authentication works in the following way:


 The client sends the user certificate (which includes the user's public key)
to the server.
 The server uses the CA certificate to check that the user's certificate is
valid.
 The server uses the user certificate to check from its mapping file(s)
whether login is allowed or not.
 Finally, if connection is allowed, the server makes sure that the user has a
valid private key by using a challenge.
Hijacking is a type of network security attack in which the attacker takes control of a
communication between two entities and masquerades as one of them - just as an
airplane hijacker takes control of a flight.

In one type of hijacking (also known as a man in the middle attack), the perpetrator takes
control of an established connection while it is in progress.

The attacker intercepts messages in a public key exchange and then retransmits them,
substituting their own public key for the requested one, so that the two original parties
still appear to be communicating with each other directly.

The attacker uses a program that appears to be the server to the client and appears to be
the client to the server.

This attack may be used simply to gain access to the messages, or to enable the attacker
to modify them before retransmitting them.
