
International Journal of

Computational Intelligence and

Information Security
ISSN: 1837-7823

January 2011
Vol. 2 No. 1

© IJCIIS Publication

IJCIIS Editor and Publisher


P Kulkarni

Publisher’s Address:
5 Belmar Crescent, Canadian
Victoria, Australia
Phone: +61 3 5330 3647
E-mail Address: ijciiseditor@gmail.com
Publishing Date: January 31, 2011
Members of IJCIIS Editorial Board

Prof. A Govardhan, Jawaharlal Nehru Technological University, India


Dr. A V Senthil Kumar, Hindusthan College of Arts and Science, India
Dr. Awadhesh Kumar Sharma, Madan Mohan Malviya Engineering College, India
Prof. Ayyaswamy Kathirvel, BS Abdur Rehman University, India
Dr. Binod Kumar, Lakshmi Narayan College of Technology, India
Prof. Deepankar Sharma, D. J. College of Engineering and Technology, India
Dr. D. R. Prince Williams, Sohar College of Applied Sciences, Oman
Prof. Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, India
Dr. Imen Grida Ben Yahia, Telecom SudParis, France
Dr. Himanshu Aggarwal, Punjabi University, India
Dr. Jagdish Lal Raheja, Central Electronics Engineering Research Institute, India
Prof. Natarajan Meghanathan, Jackson State University, USA
Dr. Oluwaseyitanfunmi Osunade, University of Ibadan, Nigeria
Dr. Ousmane Thiare, Gaston Berger University, Senegal
Dr. K. D. Verma, S. V. College of Postgraduate Studies and Research, India
Prof. M. Thiyagarajan, Sastra University, India
Dr. Manjaiah D. H., Mangalore University, India
Dr. N. Ch. Sriman Narayana Iyengar, VIT University, India
Prof. Nirmalendu Bikas Sinha, College of Engineering and Management, Kolaghat, India
Dr. Rajesh Kumar, National University of Singapore, Singapore
Dr. Raman Maini, University College of Engineering, Punjabi University, India
Dr. Seema Verma, Banasthali University, India
Dr. Shahram Jamali, University of Mohaghegh Ardabili, Iran
Dr. Shishir Kumar, Jaypee University of Engineering and Technology, India
Dr. Sujisunadaram Sundaram, Anna University, India
Dr. Sukumar Senthilkumar, National Institute of Technology, India
Prof. V. Umakanta Sastry, Sreenidhi Institute of Science and Technology, India
Dr. Venkatesh Prasad, Lingaya's University, India

Journal Website: https://sites.google.com/site/ijciisresearch/


Contents
1. An Optimized Approach for Module Placement in VLSI Circuits using Genetic Algorithm (pages 4-17)
2. Prolonging Network Lifetime by Using a Weight-Based Routing Protocol in Mobile Ad Hoc Networks (pages 18-24)
3. Evolutionary Approach for Unsupervised Classification and Application to Texture Images and Alphabetic Letters (pages 25-37)
4. Reliability Measures For A Stochastic Process Working In Different Weather Conditions (pages 38-45)
5. A Survey on Fight Against Unknown Malware Attack (pages 46-54)
6. Intrusion Detection and Response for Web Interface Using Honeypots (pages 55-60)
7. Survey on Interesting Measures for Association Rules (pages 61-67)
8. Reliability Assessment for a Single Compressor-Multi Evaporator Type Refrigeration Plant by Employing Boolean Function Technique (pages 68-76)
9. A Framework for Agent Based Warfare Modeling and Simulation (pages 77-84)
10. Advances and Issues in Web Personalization Techniques (pages 85-91)
11. Closed Loop Controlled Bridgeless PFC Converter with Multicore Inductor (pages 92-99)
12. Advanced Hill Cipher Involving a Pair of Keys (pages 100-108)
13. Security Threats and Countermeasures in Mobile Ad Hoc Network (pages 109-117)
14. A Failure-Waiting-Repair Model of a Standby Redundant Complex System (pages 118-126)


An Optimized Approach for Module Placement in VLSI Circuits using Genetic Algorithm
Dhiraj Sangwan, Seema Verma and Rajesh Kumar
Assistant Professor, ECE Department, MITS, Laxmangarh, INDIA
Associate Professor, Department of Electronics, Banasthali Vidyapeeth, Banasthali
Associate Professor, Department of Electrical Engineering, MNIT, Jaipur-302017
dhirajsangwan@hotmail.com
seemaverma3@yahoo.com
rkumar@mnit.ac.in

Abstract
In Very Large Scale Integration, owing to the rapid increase in the number of components on a chip, floor planning has become an important step in sustaining the quality of the design achieved. Designing a floor plan can be considered a combinatorial optimization problem that involves arranging a given set of modules in the plane so as to minimize the weighted sum of the area and wire length required. The objective of this paper is to assign each module to a slot while achieving the minimum possible wire length between modules that are connected by each net. An adaptive Genetic Algorithm for module placement is presented for the assignment of modules to locations on the chip. The GA is also able to modify some of its own parameters during the search, based on its performance.

Keywords: VLSI, Module, Floor plan, Interconnect, Genetic Algorithm, Bounding Rectangle, Placement.


1. Introduction
The layout problem is a principal problem in the design of a VLSI chip, subject to constraints such as implementing the design in minimum area along with minimum length of the interconnects used. Floor planning is the preparatory step of creating a basic die map showing the expected placement of logic modules and their interconnections. It decides the chip size, electrical characteristics, timing constraints, etc., for the final implementation.
It is assumed that at this step rough estimates of the module areas and of the interconnections between them are known, while the topology of the chip and the exact dimensions of the modules are yet to be defined.
This paper addresses the placement problem, i.e., the objective is to assign each module to a slot and achieve the minimum wire length possible between modules [1,2]. A placement algorithm is presented for the assignment of modules to locations on chips. It produces its solutions by continuously considering the various possibilities of placement and by manipulating a set of possible solutions. Genetic Algorithms have been successfully employed for optimization problems [3]. The performance of a Genetic Algorithm depends on the following parameters: population size, crossover rate, mutation rate, crossover strategy and selection strategy. The aim is to implement an algorithm that is able to place the modules in a given area with minimum-length interconnects and that can modify its control parameters to enhance its performance [4]. The parameters are modified if the fitness of the new generations does not increase significantly.

1.1 Genetic Algorithm Preliminaries

The Genetic Algorithm has proved to be a very effective and powerful optimization approach which works by emulating the natural process of evolution, i.e., survival of the fittest. By following this approach it tries to move towards optimum solutions while also satisfying some constraints [5].
Genetic Algorithms are search techniques based on the principles of natural evolution. They are able to provide a detailed search of the solution space. Each algorithm starts with a set of randomly generated possible solutions, called chromosomes, which are represented as finite-length strings. During every iteration, the population elements in the mating pool are evaluated according to their fitness score to select parents for generating offspring through crossover, mutation and inversion. The genetic operators create a new generation of configurations by combining the sub-placements of parents selected from the current generation. Due to the stochastic selection process, the fitter parents, which are expected to contain some good sub-placements, are likely to produce more offspring. Thus in the next generation the number of good sub-placements, or high-fitness parents, tends to increase and the number of badly scored ones tends to decrease. This leads towards improvement of the fitness of the entire population [6]. As more configurations are tried out, the relative proportions of the various schemata in the population reflect their fitness more and more accurately. The success of a genetic algorithm depends on its choice of the various parameters and functions which control its execution, such as the selection principle, the crossover probability and technique, and the mutation probability.
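
As a concrete illustration of the loop just described, the following is a minimal, generic GA skeleton in Python. It is a sketch, not the paper's implementation: the problem-specific operators are passed in as functions, and the bit-string toy problem at the end is purely illustrative.

```python
import random

def genetic_algorithm(init, fitness, crossover, mutate,
                      pop_size=50, generations=200, p_mut=0.05):
    """Generic GA main loop: evaluate, select, recombine, mutate."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-based selection: keep the better half as the mating pool.
        pool = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        next_gen = []
        while len(next_gen) < pop_size:
            parent_a, parent_b = random.sample(pool, 2)
            child = crossover(parent_a, parent_b)
            if random.random() < p_mut:
                child = mutate(child)
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

# Toy usage (illustrative only): maximize the number of 1-bits in a string.
if __name__ == "__main__":
    n = 20
    best = genetic_algorithm(
        init=lambda: [random.randint(0, 1) for _ in range(n)],
        fitness=sum,
        crossover=lambda a, b: a[:n // 2] + b[n // 2:],
        mutate=lambda c: [1 - g if random.random() < 0.1 else g for g in c],
    )
    print(sum(best), best)
```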

2. Problem Definition

The paper discusses one sub-problem, Placement, out of the several distinct problems in layout. A floorplan of an integrated circuit is a schematic representation of the tentative placement of its major functional blocks.
The input to the placement problem is a set of P circuit elements, called Cells or Modules, represented as P = [p1, p2, p3, ..., pp], and a set of signals or interconnects, or simply nets, N = [n1, n2, n3, ..., nn]. Along with the set of modules and their nets, a set of locations, also called slots, S = [s1, s2, s3, ..., ss], where |S| >= |P|, is provided on the chip. The task is to assign each module to its respective slot while satisfying the placement constraints in terms of the quality of routability attained. The placement is done keeping in mind that it should always lead to minimum interconnect congestion and require minimum interconnect length to complete all the connections between the modules. The idea of having minimum wire congestion is based on the fact that the longer the interconnect utilized, the greater the propagation delay incurred by the signal passing through it [7,8]. In this way two or more constraints need to be optimized: one target is to minimize the layout area while the wires must simultaneously be shortened. This is case dependent; for example, a restriction on wiring length may provide better protection of the timing restrictions, or restrictions on the placement of cells may be given in order to maintain the circuit character. We have focused on increasing the efficiency of the placement process by targeting improved accuracy of a solution while maintaining diversity [9].


2.1 Importance of Problem

VLSI system design deals with the design of integrated circuit chips, the boards that include these chips, and the systems that use the boards. In addition, as the complexity of integrated circuit chips increases, analog hardware often accompanies the digital hardware on a chip; as a consequence, analog design and mixed-signal design are also potential components of this area. Placement is an essential step in electronic design automation: the portion of the physical design flow that assigns exact locations to the various circuit components within the chip's core area. An inferior placement assignment will not only affect the chip's performance but might also make it non-realizable by producing excessive wire length beyond the available routing resources [10, 11]. The placement problem being among the most complex and important parts of chip design, we need an optimization algorithm to get the best possible results. Good placement reduces the chip size and makes it faster, as the timing constraints are fulfilled. Among the several approaches to the placement problem, the Genetic Algorithm is an important technique. The main objectives of the placement problem are total wire length, timing, congestion and power. In this paper we address total wire-length reduction using the semi-perimeter calculation, and area reduction by placing the cells in a regular pattern, so that the final placement is achieved with minimum area, provided the cells considered are of equal dimensions [12, 13].

3. Problem Implementation

3.1 Overview of the Algorithm

Since the algorithm to be implemented belongs to the class of search procedures, it requires a set of possible solutions, called chromosomes, each consisting of a string. A set of solutions is thus operated upon, and solutions with higher cost are eventually replaced by those with lower cost [14]. The flow chart showing the typical working of a genetic algorithm is given below.

Figure 1: Flow of Genetic Algorithm

3.2 Components of the Algorithm

GAs belong to the class of evolutionary algorithms, based on a mechanism which mimics the way nature improves the characteristics of living beings. The fitness value associated with each individual measures how close the individual is to the optimum solution. The process starts with an initial population of individuals created partly at random and partly rule-based. Random generation results in random placement of the modules across the placement space, while in rule-based generation we attempt to group the modules of the same net together to improve the scores of the initial population. New individuals are generated through evolution. A fitness-based selection method is used to select pairs of individuals for crossover, resulting in new child individuals that contribute to the next population. The crossover operator requires two parents, one acting as a passing parent while the other acts as a target parent; the passing parent is used to redefine the arrangement in the target parent. Next, mutation is performed to add diversity to the population. The mutation operator uses a directed-evolution approach and tries to reduce the bounding rectangle of a randomly selected net with minimum disturbance to the other nets. If any of the individuals in this population satisfies the specifications for the logic function and the hardware-specific criteria, the genetic evolution terminates; otherwise it continues.

3.3 Performance Measurement of a Placement Solution

To evaluate the effectiveness of a GA it is necessary to define when each experiment stops and how to deal with the randomness of the approach. Several stopping conditions can be adopted: we chose to stop an experiment when it is likely to have reached a steady state. According to this condition, each experiment finishes as soon as the area of the best individual remains the same for a given number of generations. Moreover, the approach being partially random, the results can differ from one run to another. As a consequence, we repeated each experiment 10 times and computed the average of the results. We also provide data about the distribution of values produced by the experiments.
One common characteristic of the ten basic instances is that they are all relatively small in size; however, their sizes are comparable with typical placement instances. By using instances of this size, we were able to do extensive experimental testing of various genetic algorithm alternatives, even though we had limited computational resources. We understand the importance of larger instances, and we report on our limited experience with the Genetic Algorithm on such instances at the end of this section.
The solution comes out different each time we run the program because the random function is called at different stages of the program. The result is also determined by how much mutation and crossover is done in the calculation of each solution, and the processor usage depends on that: processor usage is higher when there are more crossovers and mutations. So the performance is different from one run to the next.

3.4 Implementation of the Algorithm

The Genetic Algorithm repetitively runs the crossover, selection and mutation operations in a main loop. The flowchart depicting the execution steps is given below.


Figure 2: Flow chart showing the Algorithm Execution

4. Circuit Evaluation

4.1 Algorithm Implementation

The Genetic Algorithm repetitively runs the crossover, selection and mutation operations in a main loop. The
main operation of the program is as follows:

Step 1: Read the test circuit details line by line from the input circuit file and create the Net and Cell structures. Each net is associated with a weight, which specifies the priority with which it is to be considered. Each cell is specified with the connectivity list of its respective nets. A 2-D array is constructed to store the structure.
Step 2: Population creation follows the structure formation of the cells with their respective nets. Experience showed that a mix of random and rule-based generation leads to better placements, so the population consists of a fixed percentage (25%) of random placements, with the remainder generated rule-based, i.e., on the basis of the connected-net information from the Net structure. Swapping the positions of cells within the placement space then yields new placement solutions and population elements.


Step 3: Once a placement set consisting of various placement options has been generated as population elements, the next step is to calculate their effectiveness in terms of area acquired and interconnect length required. This step is also called cost calculation or fitness evaluation. To calculate the cost, the weight associated with each net is multiplied by its semi-perimeter, which is calculated using the bounding-rectangle approach. This process is repeated for each net, so the contribution of every net is considered in the final cost. Thus, the final cost is the cumulative cost found by summing the product of the semi-perimeter and the net weight over all nets.
Step 4: After fitness evaluation, crossover is performed to create a new placement solution. Two parents are chosen randomly out of the mating pool; one is called the target parent and the other the passing parent. On the basis of the information passed from the passing parent to the target parent, an offspring is created that resembles the target parent and has a small portion that is a copy of the passing parent. To do this, a portion of the passing parent is chosen and, on the basis of the bounding modules in this portion, a movement of the modules in the target parent is done.
Step 5: Once crossover has generated a new population element in the form of an offspring, in addition to the already existing population elements, a selection is done to pass a fixed number of elements on to form the next population.
Step 6: After this, mutation is performed on a fixed number of population elements by randomly selecting a solution from the population; it acts to introduce a new combination into the solution space. To do this a net is selected at random and the cells which are furthest away from each other but attached to the same net are brought closer, reducing the wire length and hence the bounding rectangle.
Step 7: In the next step the newly created population set is evaluated for its fitness, to find whether some improvement in the reduction of placement area has been achieved. If yes, the program terminates with the present solution. If not, a new iteration of the algorithm repeats the above-mentioned steps until the required cost is achieved.

4.2 Case Study

Circuit I:
12
N1 7
N2 5
N3 4
N4 3
N5 2
N6 8
N7 2
N8 5
N9 9
N10 7
C1 N1 N6 N9
C2 N3 N5 N7
C3 N3 N4
C4 N2 N8
C5 N5 N6 N10
C6 N1 N3 N4
C7 N4 N7
C8 N5 N8
C9 N1 N2 N6 N8 N10
C10 N7 N9

Circuit II:
16
N1 2
N2 4
N3 7
N4 3
N5 4
N6 7
N7 2
N8 8
N9 2
N10 5
C1 N5 N8
C2 N2 N4
C3 N3 N7 N10
C4 N9 N10
C5 N4 N6 N9
C6 N1 N2 N5 N10
C7 N3 N6 N8
C8 N8 N9
C9 N1 N5 N6 N7
C10 N4 N7

Figure 3: Circuit description files for the two circuits under consideration, i.e., Circuit I & II
Case Study I - Circuit I
The first case study is of a circuit with 10 nets and 10 cells. Each cell and net is specified with a number so as to differentiate them from each other. Each cell is considered to be a rectangular block. The different cells are shown with their connectivity lists, which depict which cell is connected to which net or nets. The circuit description starts with the maximum row width to be considered; this parameter relates to the number of interconnects possible within a given row size. Each net is also specified with its weight, which indirectly specifies its priority of consideration during placement.
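
The following sketch shows one way to read this file format, assuming plain whitespace-separated text exactly as shown in Figure 3 (first line: maximum row width; 'N...' lines: net weights; 'C...' lines: cell connectivity lists). The function and variable names are illustrative, not part of the paper's code.

```python
def read_circuit(path):
    """Parse the Figure 3 format: row width, then net weights, then cells."""
    net_weight = {}   # e.g. {'N1': 7, ...}
    cell_nets = {}    # e.g. {'C1': ['N1', 'N6', 'N9'], ...}
    with open(path) as f:
        max_row_width = int(f.readline().split()[0])
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0].startswith('N'):        # net line: "N1 7"
                net_weight[tokens[0]] = int(tokens[1])
            elif tokens[0].startswith('C'):      # cell line: "C1 N1 N6 N9"
                cell_nets[tokens[0]] = tokens[1:]
    return max_row_width, net_weight, cell_nets
```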


Case Study II - Circuit II


The second case study is another circuit with 10 nets and 10 cells but with different connection specifications. The minimum number of nets connected to a cell is 2 and the maximum number is 4. The number of nets connected to a cell relates directly to the complexity with which it can be placed without violating the placement constraints. Each cell is considered to be a rectangular block.

4.3 Module Description

4.3.1 initNet
This module describes the weight associated with each net. In the input file, the weight associated with a net is provided after it, separated by a blank space, as shown. We calculate the number of elements present in each line of the input file. A temporary 2-D array variable then stores which weight is linked with which net. After execution of this module the NetAr structure updates its values as shown in Table 1.
4.3.2 initCell
This module describes, for each cell, the ids of the linked nets, the number of linked nets, and the offset, i.e., the position at which each net is located on the cell. The nets associated with each cell are written on that cell's line of the input file. We read the input file line by line, storing each line in a string; this module is called only if the initial character of the line is 'C'. A temporary array variable stores the cell and the nets linked to it: a 1-D array with 6 columns, where the first column stores the cell number and the remaining five the ids of the linked nets (we have taken the maximum number of nets per cell as 5). As we do this, we simultaneously calculate the number of ports. The number of ports measures the total number of nets connected to a cell, and we calculate it by subtracting two from the total number of columns. According to the number of ports connected, a switch-case structure decides the location of each net on the cell; the cases go up to 5 only, because the maximum number of ports that can be linked with a cell in our program is 5. The cellPos related to each NetAr structure is also updated. As this module runs, the OrigCellAr structure updates itself as shown in Table 2.
4.3.3 compNet
After execution up to initNet, the NetAr structure is updated, but the fields numConnected and cellConnected are still not set to their actual values. With the help of the origCellAr structure, this module updates the NetAr structure as well, simultaneously calculating how many cells are connected to each net. The NetAr structure then updates as shown in Table 3.
4.3.4 getXY
This module retrieves the exact x-y coordinates of each net. The origCellAr structure stores the offset of each net, but not the exact location of each net.
4.3.5 getWirelength
In our program, we use the wire length as the parameter for calculating cost, so in this module we calculate the wire length as the semi-perimeter. The wire length is calculated depending on the number of cells each net is associated with.
4.3.6 Moveout
This module is called from the crossover module to move cells in the offspring out of the selected site. The crossover module passes the Cells in D1 and the Cells in C1, one by one, to the moveout module along with the offspring. Cells in D1 contains the cells that are to be moved out, and Cells in C1 contains the cells that occupy the selected site in the passing parent. If we swap the Cells in D1 with the Cells in C1, the Cells in D1 are moved out of the selected site, and the selected site then contains the same cells as in the passing parent, though not necessarily in the same order. The offspring returned to crossover is therefore passed to the copyin module so as to get the cells in the selected site exactly as they are in the passing parent.
4.3.7 Calcost
This module calculates the cost of each placement. Using the getXY module, the locations of the nets are calculated and assigned to x, y coordinates. For each net we then calculate the semi-perimeter using the getWirelength module. To calculate the cost we multiply the weight attached to the particular net by its semi-perimeter. This process is repeated for each net, so the contribution of every net is considered in the final cost. Thus, the final cost is the cumulative cost found by summing the product of the semi-perimeter and the net weight over all nets.
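
A minimal sketch of this weighted semi-perimeter (bounding-rectangle) cost, assuming a placement is represented as a mapping from cell to (x, y) coordinates; the simple dictionaries stand in for the NetAr/origCellAr structures described above.

```python
def placement_cost(placement, net_weight, net_cells):
    """Weighted sum over nets of the bounding-rectangle semi-perimeter."""
    total = 0
    for net, cells in net_cells.items():
        xs = [placement[c][0] for c in cells]
        ys = [placement[c][1] for c in cells]
        semi_perimeter = (max(xs) - min(xs)) + (max(ys) - min(ys))
        total += net_weight[net] * semi_perimeter
    return total

# Example: net N1 (weight 7) joins cells at (0, 0) and (8, 8),
# so its contribution is 7 * (8 + 8) = 112.
print(placement_cost({'C1': (0, 0), 'C9': (8, 8)},
                     {'N1': 7},
                     {'N1': ['C1', 'C9']}))
```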


Table 1: NetAr structure upon execution (after initNet)

Field          NetAr[1..10]
ID             1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Weight         7, 5, 4, 3, 2, 8, 2, 5, 9, 7
CellConnected  [0 0 0] for every entry
NumConnected   0 for every entry

Table 2: OrigCellAr structure upon execution

Field           ID  cellPos  numPort  netConnected    offsetX  offsetY
origCellAr[1]   1   [0 0]    3        [1 6 9 -1 -1]   [1 3 2]  [4 4 0]
origCellAr[2]   2   [4 0]    3        [3 5 7 -1 -1]   [1 3 2]  [4 4 0]
origCellAr[3]   3   [8 0]    2        [3 4 -1 -1 -1]  [2 2]    [4 0]
origCellAr[4]   4   [0 4]    2        [2 8 -1 -1 -1]  [2 2]    [4 0]
origCellAr[5]   5   [4 4]    3        [1 3 4 -1 -1]   [1 3 2]  [4 4 0]
origCellAr[6]   6   [8 4]    3        [4 7 -1 -1 -1]  [1 3 2]  [4 4 0]
origCellAr[7]   7   [0 8]    2        [4 7 -1 -1 -1]  [2 2]    [4 0]
origCellAr[8]   8   [4 8]    2        [5 8 -1 -1 -1]  [2 2]    [4 0]
origCellAr[9]   9   [8 8]    5        [1 2 6 8 10]    [1 3 2]  [4 4 0 0]
origCellAr[10]  10  [0 12]   2        [7 9 -1 -1 -1]  [2 2]    [4 0]

Table 3: NetAr structure upon execution (after compNet)

Field          NetAr[1]  NetAr[2]  NetAr[3]  NetAr[4]  NetAr[5]  NetAr[6]  NetAr[7]  NetAr[8]  NetAr[9]  NetAr[10]
ID             1         2         3         4         5         6         7         8         9         10
Weight         7         5         4         3         2         8         2         5         9         7
CellConnected  [1 6 9]   [4 9]     [2 3 6]   [3 6 7]   [2 5 8]   [1 5 9]   [2 7 10]  [4 8 9]   [1 10]    [5 9]
NumConnected   3         2         3         3         3         3         3         3         2         2
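
A sketch of the inversion performed by compNet: the cell-to-nets map read from the input file becomes the net-to-cells map (CellConnected) and per-net count (NumConnected) of Table 3. The excerpt from Circuit I checks one entry against the table.

```python
def comp_net(cell_nets):
    """Invert cell -> nets into net -> cells, and count cells per net."""
    cell_connected = {}
    for cell, nets in cell_nets.items():
        for net in nets:
            cell_connected.setdefault(net, []).append(cell)
    num_connected = {net: len(cells) for net, cells in cell_connected.items()}
    return cell_connected, num_connected

# Circuit I excerpt: cells C1, C6 and C9 all lie on net N1.
cc, nc = comp_net({'C1': ['N1', 'N6', 'N9'],
                   'C6': ['N1', 'N3', 'N4'],
                   'C9': ['N1', 'N2', 'N6', 'N8', 'N10']})
print(cc['N1'], nc['N1'])   # ['C1', 'C6', 'C9'] 3, matching Table 3
```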


4.3.8 PrintDiagram
This module prints the cells at their correct positions on the command prompt. Its input parameters are the origCellAr structure, numCells, maxHeight and maxRowWidth, so each time these parameters change we get a new diagram. As the offsets and cellPos change in origCellAr, the cell positions in the diagram also change and we dynamically get a new diagram. The positions of the nets in these diagrams are also allocated dynamically.
4.3.9 Move2
In a Genetic Algorithm the first step is population creation, or reproduction. This module swaps the positions of two cells, thereby creating new solutions. The generated placement is passed to this module.
4.3.10 Crossover
The crossover operation creates a new solution. Two placements are passed to this module: one is known as the target parent and the other as the passing parent, and an offspring is created that resembles the target parent and has a small portion that is a copy of the passing parent. Initially the offspring is created by making a copy of the target parent; then a site is selected randomly by drawing an x coordinate and a y coordinate into the variables randX and randY. We consider that a maximum of four cells can be selected in a site. Depending on randX and randY, a site is considered: two cells ahead of randX and two cells above randY are taken. The cells from this site of the passing parent are stored in a 1-D array CellsinC, and the cells from the same site of the target parent are stored in a 1-D array CellsinD. We now want to copy the area from the passing parent into the offspring at the same place, but before that the offspring must first move the cells that occupy that area to different locations. To find the cells to be moved out, we check the elements of CellsinD against CellsinC and eliminate the common elements. Thus CellsinD1 contains the cells that have to be moved out, and CellsinC1 stores the new cells that will be introduced at the selected site of the offspring. The moveout module is then called to move out the cells in the offspring, and the copyin module is called to copy the site of the passing parent into the offspring.
4.3.11 Copyin
The copyin module is called from the crossover module to copy the cells from the passing parent to the offspring at the selected site. Hence the crossover module needs to pass the passing parent, the offspring, and the cells of the passing parent at the selected site.

Figure 4: Print Diagram function output
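
A simplified sketch of the crossover described in sections 4.3.10 and 4.3.11, under the assumption that a placement can be flattened to a list of cells in slot order (so the 2-D site becomes a 1-D slice); the real modules operate on the 2-D structures described above.

```python
import random

def crossover(target, passing, site_len=4):
    """Copy the target parent, then graft in one site of the passing parent,
    relocating the displaced cells to the slots the incoming cells vacate."""
    child = target[:]
    start = random.randrange(len(target) - site_len + 1)
    site = passing[start:start + site_len]       # cells at the selected site
    incoming = set(site)
    # Move out: cells of the child's site that are not among the incoming ones.
    displaced = [c for c in child[start:start + site_len] if c not in incoming]
    # Slots outside the site currently holding incoming cells become vacant.
    vacated = [i for i, c in enumerate(child)
               if c in incoming and not start <= i < start + site_len]
    for i, c in zip(vacated, displaced):
        child[i] = c
    # Copy in: the site now matches the passing parent exactly.
    child[start:start + site_len] = site
    return child

p1 = ['C1', 'C2', 'C3', 'C4', 'C5', 'C6']
p2 = ['C6', 'C5', 'C4', 'C3', 'C2', 'C1']
print(crossover(p1, p2))   # always a permutation of the six cells
```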

4.3.12 Select New Population


The population set considered for the input file shown is five, but after crossover we get an offspring, so we have six placements. We therefore need to select the five placements, out of these six, that are carried to the next generation. This selection is done randomly, to prevent convergence to an inferior solution that closely resembles the early best placement.
To randomly select the placements included in the next generation, a 1-D array is created whose size is the population set plus the number of offspring, and each array index is assigned the same value as its index. This is followed by swapping the elements, which results in an array of random elements varying from one to the size of the array, with no element repeated. The new population is then made up by reference to this array: if the element at an index has a value less than or equal to the population size, the placement is fetched from the initial population set; otherwise it corresponds to the offspring generated by crossover.
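
A sketch of this selection step, assuming placements are opaque objects; the shuffled index array is built by swapping, as described, which amounts to a Fisher-Yates shuffle.

```python
import random

def select_new_population(population, offspring, pop_size):
    """Randomly pick pop_size survivors from the old population plus
    the offspring, via a swap-built random permutation of indices."""
    pool = population + offspring
    indices = list(range(len(pool)))
    for i in range(len(indices) - 1, 0, -1):    # Fisher-Yates swapping
        j = random.randint(0, i)
        indices[i], indices[j] = indices[j], indices[i]
    return [pool[i] for i in indices[:pop_size]]
```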
4.3.13 Mutate
The mutate function randomly selects a solution from the population and acts to introduce a new combination into the solution space. A net in the solution is randomly selected, and the cells which are furthest away from each other but attached to the same net are brought closer. This reduces the wire length of the respective net.
4.3.14 Find Cell
In the mutate module we shift the position of one cell by one step so that it comes nearer to another cell connected to the same net. While shifting, however, it may land at the position of a different cell adjacent to it. This module retrieves the particular cell at whose location the shifting cell arrives. We first find the x and y coordinates of the location to which the cell is shifting, and then run a loop comparing these coordinates with the coordinates of each cell; the matching cell's value is returned. The return value is initialized to 999 so that, if no coordinates match, we do not end up with a wrong solution.
4.3.15 Shift
This module performs the shift required by the mutate function. It is a step-by-step procedure in which cellt moves towards cells by interchanging positions with each encountered cell.
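
A sketch of mutate together with the findCell/shift behaviour, under a simplified grid model where a placement maps each cell to integer (x, y) coordinates; the function and variable names are illustrative, not the paper's code.

```python
import random

def mutate(placement, net_cells):
    """Pick a random net, find its two most separated cells, and move one
    of them a single step toward the other, swapping with any occupant
    (the role of the findCell/shift modules)."""
    net = random.choice(list(net_cells))
    cells = net_cells[net]
    if len(cells) < 2:
        return placement

    def manhattan(p, q):
        return (abs(placement[p][0] - placement[q][0]) +
                abs(placement[p][1] - placement[q][1]))

    a, b = max(((p, q) for p in cells for q in cells if p != q),
               key=lambda pq: manhattan(*pq))
    (ax, ay), (bx, by) = placement[a], placement[b]
    if bx != ax:
        step = (ax + (1 if bx > ax else -1), ay)
    else:
        step = (ax, ay + (1 if by > ay else -1))
    occupant = next((c for c, pos in placement.items() if pos == step), None)
    if occupant is not None:
        placement[occupant] = (ax, ay)          # exchange positions
    placement[a] = step
    return placement

# C1 takes one step toward C2, swapping with C3 which sits in between.
print(mutate({'C1': (0, 0), 'C2': (3, 0), 'C3': (1, 0)}, {'N1': ['C1', 'C2']}))
```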

5. Results and Discussion

Circuit I : Results

The proposed technique involves the placement of rectangular modules with different interconnect requirements and hence different placement costs. The placement is based on the concept of the bounding rectangle, which has a direct relationship with the number of nets attached to a cell. The technique has been successfully tried on several test circuits; the results of two of them are shown here. In the first test circuit, the original placement out of the randomly generated population elements has a placement cost of 648. Over successive iterations the algorithm improves the placement, reaching a best cost of 514.

Figure 5: Initial Placement of the Circuit
Figure 6: Placement after One Iteration of GA


Figure 5 shows the initial placement of the circuit under test, with 10 cells and 10 nets. The maximum row width is 12. This placement results from the random and rule-based generation, with no optimization effort performed.
Figure 6 clearly shows that after the first iteration of the GA the placement cost comes down from 648 to 598. This is due to the optimization performed by crossover and mutation, which favors low-cost solutions.

Figure 7: Placement after Two Iterations of GA
Figure 8: Placement after Four Iterations of GA

With a further run of the GA the placement cost reduces to 564, as shown in Figure 7. Figure 8 shows that as the algorithm progresses the population may not evolve in some generations; in those cases the GA must keep running to explore new possibilities in future generations.


Figure 9: Placement after the Fifth and Final Iteration
Figure 10: Initial Placement of Circuit II

Iteration 5 is the last loop for the current circuit; with it done, the GA finally evolves the best placement cost of 514, as shown in Figure 9. With a simulation time of 0.43 seconds, the algorithm places the 10-cell circuit with a reduction in placement cost from 648 to 514.

Circuit II : Results

In the second test circuit, the original placement out of the randomly generated population elements has a placement cost of 480, as shown in Figure 10. Over successive iterations the algorithm improves the placement, reaching a best cost of 442.

Figure 11: Placement after One Iteration of GA
Figure 12: Placement after Two Iterations of GA


Figure 10 shows the initial placement of the circuit under test, with 10 cells and 10 nets. The initial placement cost is 480 and the maximum row width is 16. This placement results from the random and rule-based generation, with no optimization effort performed.
Figure 11 clearly demonstrates one aspect of optimization algorithms: they do not always move from good solutions to better ones. The generated population may have a higher cost than the current one; here the GA operators increase the placement cost from 480 to 621. Later generations, however, remove the bad placements from the current pool and try to evolve only the good ones.

Figure 13: Placement after Four Iterations of GA
Figure 14: Placement after Five Iterations of GA

During the second iteration of the GA the good placements are chosen and further modified, reducing the placement cost from 621 to 504, as shown in Figure 12. The placement cost increases again after the fourth iteration, to 521 (Figure 13), as the algorithm temporarily converges towards a poor placement. Figure 14 clearly shows that after five iterations the placement cost finally converges to the minimum value of 442. The total time taken to place the 10-cell circuit is 0.434 seconds.

6. Conclusion

Since placement is such an important component of circuit layout, it has been examined extensively, and the many diverse approaches that have been successfully proposed can, for the most part, be loosely classified as either constructive or iterative improvement techniques. Genetic algorithms differ from both in one fundamental way: they act on a set of solutions rather than on a single solution. This feature, along with their use of crossover and mutation operators, gives genetic algorithms the unique ability to simultaneously explore and combine diverse placement ideas in a variety of situations. This ability, combined with the probabilistic nature of their remaining operators and functions, allows a speedy directed search of the state space towards those portions with desirable characteristics. Finding this method of operation intuitively appealing, we initiated the effort that resulted in the present Genetic Algorithm.
Our conclusion is that our program gives an optimal solution for instances of smaller size. While working we also faced some problems with the best-placement block, which would need to undergo ten thousand iterations for the best solution; we therefore limited the iterations to a much smaller number, which is apt for a normally available processor.

References
[1] J. P. Cohoon and W. D. Paris, "Genetic Placement", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, November 1987, pp. 956-964.
[2] G. C. Sipakoulis, I. Karafyllidis, and A. Thanailakis, "Genetic Partitioning and Placement for VLSI Circuits", Proceedings of the 6th IEEE International Conference on Electronics, Circuits and Systems, 1999, pp. 1647-1650.
[3] Z. Baruch, O. Creţ, and K. Pusztai, "Genetic Algorithm for Circuit Partitioning", http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.1017&rep=rep1&type=pdf, pp. 1-6.
[4] A. E. Dunlop and B. W. Kernighan, "A Procedure for Placement of Standard-Cell VLSI Circuits", IEEE Transactions on Computer-Aided Design, Vol. 4, 1985, pp. 92-98.
[5] Y. Kimura and K. Ida, "Floorplan Design Problem Using Improved Genetic Algorithm", Artificial Life and Robotics, Vol. 8, No. 2, pp. 123-126, DOI: 10.1007/s10015-004-0298-4.
[6] J. P. Cohoon, S. U. Hegde, W. N. Martin, and D. S. Richards, "Distributed Genetic Algorithms for the Floorplan Design Problem", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 10, No. 4, April 1991, pp. 483-492.
[7] S. Sutanthavibul, E. Shragowitz, and J. B. Rosen, "An Analytical Approach to Floorplan Design and Optimization", Proceedings of the 27th ACM/IEEE Design Automation Conference, 1990, pp. 187-192.
[8] K. Shahookar and P. Mazumder, "A Genetic Approach to Standard Cell Placement Using Meta-Genetic Parameter Optimization", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, May 1990, pp. 500-511.
[9] G. K.-H. Yeap and M. Sarrafzadeh, "A Unified Approach to Floorplan Sizing and Enumeration", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 12, No. 12, December 1993, pp. 1858-1867.
[10] F. J. Kurdahi and A. C. Parker, "Techniques for Area Estimation of VLSI Layouts", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, January 1989, pp. 81-92.
[11] N. Mani and B. Srinivasan, "Using Genetic Algorithm for Slicing Floorplan Area Optimization in Circuit Design", IEEE International Conference on Systems, Man, and Cybernetics, October 1997, pp. 2888-2892.
[12] H. Youssef, S. M. Sait, K. Nassar, and M. S. T. Benten, "Performance Driven Standard-Cell Placement Using the Genetic Algorithm", Fifth Great Lakes Symposium on VLSI, 1995, pp. 124-127.
[13] V. Schnecke and O. Vornberger, "Genetic Design of VLSI Layouts", Proceedings of the First International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA), 1995, pp. 430-435.
[14] D. F. Wong and C. L. Liu, "A New Algorithm for Floorplan Design", Proceedings of the 23rd ACM/IEEE Design Automation Conference, 1986, pp. 101-107.


Prolonging Network Lifetime by Using a Weight-Based Routing Protocol in Mobile Ad Hoc Networks
Hekmat Mohammadzadeh¹ and Shahram Jamali²
¹ Department of Computer, Islamic Azad University, Parsabad Moghan Branch, Parsabad Moghan, Iran
² Department of Electrical and Computer, University of Mohaghegh Ardabili, Ardabili, Iran
¹ Hekmat@iaupmogan.ac.ir, ² Jamali@iust.ac.ir

Abstract
Prolonging the network lifetime is one of the crucial issues for routing protocols in mobile ad hoc networks (MANETs), because MANETs consist of mobile devices powered by limited batteries. The main goal of this paper is to maximize the network life by efficiently utilizing the batteries of mobile nodes. Many protocols have been designed for multicasting in MANETs, of which the On-Demand Multicast Routing Protocol (ODMRP) is one. In basic ODMRP, a multicast receiver always selects routes based on the minimum delay. We believe this can put some routes under heavy load, so that the energy of the nodes on these routes depletes earlier than that of other nodes. This unbalanced load decreases the network lifetime and, as a result, the network throughput. To solve this problem we propose a novel weight-based approach that develops a routing protocol which distributes the traffic uniformly between nodes and selects routes with a high level of energy. This causes the energy of the nodes to deplete at similar rates and increases the lifetime of the network. Simulations illustrate that the proposed approach improves the network lifetime remarkably.

Keywords: Mobile Ad Hoc, Network Lifetime, ODMRP, Multicast, Routing


1. Introduction
A mobile ad hoc network (MANET) consists of a collection of wireless mobile nodes that are capable of communicating with each other without the use of any centralized administration or network infrastructure. The network topology of a MANET may change frequently due to the mobility of nodes. Many protocols have been designed for routing in ad hoc networks, but a good routing protocol should always forward packets close to the shortest path from source to destination and be able to adapt quickly to topology changes. At the application level, MANET users communicate together as teams; applications thus need group communication (multicasting) for data forwarding and real-time traffic. Routing in mobile ad hoc networks, because of frequent topology changes and dynamic group membership, poses special problems and requires robust and flexible mechanisms for discovering and maintaining routes. Because of group-oriented computing, multicasting is an efficient way of providing necessary services for ad hoc applications and is one of the methods most used in MANET routing. Thus, by combining the applications of ad hoc networks with multicasting, we can support a large number of group applications. Multicasting reduces the communication cost for applications that send the same data to many destinations, compared to sending via multiple unicasts.
Constructing and maintaining a multicast tree should be simple, to keep channel overhead low. Many multicast routing protocols have been proposed and designed for ad hoc networks; MAODV and ODMRP try to minimize the communication overhead through an on-demand route discovery process. In this paper we discuss the ODMRP protocol. As stated previously, ODMRP is a mesh-based protocol and applies on-demand procedures to build routes. This protocol uses soft state to maintain multicast group memberships.
In basic ODMRP, a multicast receiver selects routes based on the minimum delay, but such a route may not be stable, which leads to link breakage and decreased network lifetime. In this paper, in the ODMRP route discovery process, instead of using only the minimum delay to select a stable route, we consider the route expiration time and the nodes' residue energy as well. In this scheme, when a source sends a JOIN-QUERY (J.Q), it appends its location, speed, direction, link expiration time, and residue energy. We can therefore determine the minimum energy of each route as well as the route stability.

2. ODMRP Protocol
The On-Demand Multicast Routing Protocol (ODMRP) is a mesh-based protocol that uses a forwarding group (FG) concept; in other words, only a subset of nodes forwards the multicast packets. In this protocol, group membership and multicast routes are established and updated by the source on demand. Consider Figure 1: the source S desires to send packets to a multicast group but has no route to it, so it broadcasts J.Q control packets to the entire network. These J.Q packets are broadcast periodically to refresh the membership information and update the routes. When an intermediate node receives a J.Q packet, it stores the source ID and sequence number in its message cache to detect duplicates. The routing table is updated with the upstream node ID (i.e., backward learning) from the received message, giving the reverse path back to the source node. If the message is not a duplicate and the Time-To-Live (TTL) is greater than zero, it is rebroadcast.

Figure 1. Join Query and Join Reply Propagation


When the J.Q packets reach a multicast receiver, it creates a JOIN-REPLY (J.R) and broadcasts it to its neighbors. When a node receives a J.R, it checks whether the next-hop node ID of one of the entries matches its own ID. If it does, the node realizes that it is on the path to the source and is thus part of the forwarding group, so it sets the FG flag and broadcasts its own J.R built on the matched entries. The J.R is propagated by each forwarding group member in this way until it reaches the multicast source via the shortest path. This process constructs (or updates) the routes from source to destinations and builds a mesh of nodes, the forwarding group. All nodes inside the bubble (multicast members and forwarding group nodes) forward multicast data packets. In basic ODMRP, however, a route is selected based on minimum delay alone; this may cause link breakage and a reduction in the packet delivery ratio (PDR).
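
A sketch of the JOIN-QUERY handling just described, at an intermediate node. The packet fields and the broadcast callback are illustrative assumptions, not the packet format of any particular ODMRP implementation.

```python
message_cache = set()   # (source_id, sequence_no) pairs already seen
routing_table = {}      # source_id -> upstream neighbour (reverse path)

def on_join_query(source_id, sequence_no, upstream_id, ttl, my_id, broadcast):
    """Handle a J.Q at an intermediate node: drop duplicates, learn the
    reverse path, and rebroadcast while the TTL allows."""
    if (source_id, sequence_no) in message_cache:
        return                                  # duplicate: drop silently
    message_cache.add((source_id, sequence_no))
    routing_table[source_id] = upstream_id      # backward learning
    if ttl > 0:
        broadcast(source_id, sequence_no, my_id, ttl - 1)
```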

3. Proposed Algorithm
In this paper, we propose a stable weight-based method in the ODMRP protocol. The goal of this work is to improve routing quality in ODMRP by using the available information of the network. The proposed approach uses a weight function for increasing network lifetime and selecting a stable routing path. Before introducing the proposed approach, we define some parameters that are important for increasing network lifetime and selecting a stable routing path:
1. Residue energy (RES)
2. Route Expiration Time (RET)
3. Delay
These metrics are discussed in section 3.2.

3.1 Route Discovery Process


In basic ODMRP, when the source node sends J.Q packets, it appends its position, speed, direction and mobility information. In addition, we send the Link Expiration Time (LET) of each link for finding a stable route; comparing these values yields the route stability to the destination. These steps extend the basic ODMRP operations. To obtain the nodes' energy, an extra field, MinRES, must be added to the J.Q packet; MinRES is initially set to the first node's RES. When a multicast member receives the J.Q, it calculates its residue energy (RES) and compares it with MinRES. If the RES of the node receiving the J.Q is less than MinRES, it replaces MinRES; otherwise MinRES is kept. When the destination receives the J.Q, it can calculate the stability of the route according to the weight function of section 3.2.2. The algorithm then selects the stable route for sending data.

3.2 Route Selection Process


In this section we first introduce the factors that are important in the weight function; then we define the route weight function of the proposed approach.
3.2.1 The Factors of the Weight Function
The weight function for increasing reliability and selecting a stable routing path includes three factors: residue energy, route expiration time and delay. We describe the three factors below.
3.2.1.1 Residue Energy (RES)
Each node knows its remaining energy and its own position (by using GPS). Furthermore, the source node and the intermediate nodes calculate the transmitting energy E_tx for each packet by equation (1):

    E_tx = (PacketSize × P_tx) / BW    (1)

Note that P_tx is the power required for transmitting a packet and BW is the link bandwidth. The shortest path from source to destination requires minimum energy. Finally, the minimum energy each node needs for data transmission can be obtained from equation (2):

    NEC = n × (E_tx + E_rx)    (2)


The node energy consumption (NEC) is the energy needed for sending the data, calculated before transmitting the data packets according to the size of the file or data. Note that n is the total number of data packets to send and E_rx is the energy needed to receive a data packet (E_tx may be less than E_rx).
After obtaining the NEC, we can calculate the current RES of the node according to the node energy consumption, from equation (3):

    RES_new = RES_old − NEC    (3)
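
A worked example of equations (1)-(3), with illustrative numbers taken partly from the simulation setup of section 4 (1024-byte packets, 2 Mbps bandwidth); the transmit power, packet count, E_rx ratio and initial energy are assumptions.

```python
packet_bits = 1024 * 8              # 1024-byte packet
P_tx, BW = 0.2, 2e6                 # assumed 0.2 W transmit power, 2 Mbps
E_tx = packet_bits * P_tx / BW      # eq. (1): about 0.82 mJ per packet
E_rx = E_tx / 2                     # assumption: receiving costs half of E_tx
n = 100                             # packets to send
NEC = n * (E_tx + E_rx)             # eq. (2): about 0.123 J for the burst
RES_old = 50.0                      # assumed battery energy left, in joules
RES_new = RES_old - NEC             # eq. (3)
print(E_tx, NEC, RES_new)
```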

3.2.1.2 Route Expiration Time (RET)


The link expiration time between two nodes follows from the principle that two neighbors in motion can predict their future disconnection time. For this prediction, basic ODMRP uses several parameters, including speed, direction and radio range, which can be obtained from GPS. We assume that nodes n1 and n2 have equal transmission radius r and are initially within hearing range. Let (xi, yi) and (xj, yj) denote the x-y positions of nodes n1 and n2, respectively, and let Vi and Vj denote their speeds along the directions θi and θj, respectively. Then the duration of the link between nodes n1 and n2 is given by equation (4):

    LET = [ −(ab + cd) + sqrt( (a² + c²) r² − (ad − bc)² ) ] / (a² + c²)    (4)

where

    a = Vi cos θi − Vj cos θj,    b = xi − xj,
    c = Vi sin θi − Vj sin θj,    d = yi − yj.

We can obtain the LET of every link in a feasible route. The RET is then equal to the minimum of the set of LETs along the feasible route, as given by equation (5):

    RET_i = min { LETs of route i }    (5)
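
A sketch of equations (4) and (5). The guards for parallel motion and for nodes that never come within range are added assumptions, since equation (4) is undefined in those cases.

```python
import math

def link_expiration_time(xi, yi, vi, ti, xj, yj, vj, tj, r):
    """Equation (4): predicted time until nodes i and j drift out of the
    common radio range r, given positions, speeds and headings (radians)."""
    a = vi * math.cos(ti) - vj * math.cos(tj)
    b = xi - xj
    c = vi * math.sin(ti) - vj * math.sin(tj)
    d = yi - yj
    if a == 0 and c == 0:
        return math.inf                 # identical velocities: never expires
    disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
    if disc < 0:
        return 0.0                      # the nodes are never within range
    return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)

def route_expiration_time(lets):
    """Equation (5): the weakest link bounds the route."""
    return min(lets)

# Two nodes 100 m apart closing head-on at 10 m/s each, range 250 m:
# they pass each other and separate, leaving range after 17.5 s.
print(link_expiration_time(0, 0, 10, 0, 100, 0, 10, math.pi, 250))
```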

3.2.1.3 Delay
In basic ODMRP, a multicast receiver selects routes based on the minimum delay; in other words, the route taken by the first JOIN-QUERY received is used, and this route has the minimum end-to-end delay for sending the multicast information.
3.2.2 Route Weight Function
The reliability of a feasible route is based on three items: residue energy, route expiration time and delay. Destination nodes calculate the stability of each route using the route weight function of equation (6):

    W_i = C1 × (RET_i / MaxRET_i) + C2 × (MaxDelay_i / Delay_i) + C3 × (RES_i / MaxRES_i),
          if MaxRET_i, Delay_i, MaxRES_i > 0    (6)

where C1 + C2 + C3 = 1 and i specifies the route ID. The values of C1, C2 and C3 can be chosen according to the system's needs. For example, in a military environment route stability is, for special reasons, more important than delay; C1 and C3 should therefore be selected positive and larger than C2 (for example C1 = 0.5, C2 = 0.2, C3 = 0.3).
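
A sketch of equation (6) and the resulting route selection, assuming each candidate route is summarized by its (RET, delay, MinRES) triple; the numbers in the usage example are invented.

```python
def route_weight(ret, delay, res, max_ret, max_delay, max_res,
                 c1=0.5, c2=0.2, c3=0.3):
    """Equation (6), with the example coefficients C1=0.5, C2=0.2, C3=0.3."""
    assert max_ret > 0 and delay > 0 and max_res > 0
    return (c1 * ret / max_ret +
            c2 * max_delay / delay +
            c3 * res / max_res)

def select_route(routes):
    """Pick the route maximizing W_i; routes are (RET, delay, MinRES)."""
    max_ret = max(r[0] for r in routes)
    max_delay = max(r[1] for r in routes)
    max_res = max(r[2] for r in routes)
    return max(routes,
               key=lambda r: route_weight(*r, max_ret, max_delay, max_res))

# Invented numbers: the second route is slower but far more stable and
# energy-rich, so it wins over the minimum-delay route.
print(select_route([(30.0, 0.05, 8.0), (80.0, 0.09, 14.0)]))
```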


Figure 2. Basic ODMRP Route Selection

Figure 3. Proposed Approach Route Selection


According to Figure 2, the basic ODMRP routing protocol selects the minimum-delay route (<S,C,F,J,D>). In Figure 3, the proposed approach, using the energy consumption and the route weight function, selects the stable route (<S,A,E,J,D>).

4. Methodology and Simulation Parameters


To implement the proposed approach we have made some modifications to the ODMRP module of the GloMoSim simulator. GloMoSim is a scalable simulation environment for large wireless communication networks and uses C++ code for simulation. The simulation considers a network with 100 mobile nodes in a space of 1000 × 1000 m². Each node is placed uniformly and its mobility is modeled using the random waypoint model. The traffic has a constant bit rate and the transmission range of each node is 250 meters. The simulation results are reported for a 10-minute simulation time with 2 Mbps bandwidth and a packet size of 1024 bytes. The MAC layer protocol is IEEE 802.11 and the number of sent packets is 100.

5. Simulation Result
To study the impact of our enhancements, we simulated two schemes. Scheme A is the basic ODMRP, which uses the minimum delay as the route selection metric. Scheme B is the proposed protocol (S-ODMRP), which uses the route weight function as the route selection metric. The lifetimes of the protocols are illustrated against mobility speed and number of nodes.
In the first set of experiments, the mobility speed was set constant at 20 m/s and the number of mobile nodes was varied from 10 to 100. In the second set of experiments, the number of nodes was set constant at 100 and the mobility speed was varied from 10 to 70 m/s. The metric of interest is the network lifetime. Network lifetime can be defined as the time taken for the first node, or a fixed percentage of the nodes, to exhaust their energy resources. We define the network lifetime as the total elapsed time from the state of network connectedness to a state in which the network connectivity ratio drops to 30%.
Figures 4 and 5 illustrate the network lifetime of each scheme. Figure 4 shows that for a wide range of mobility speeds, the lifetime of the proposed protocol is higher than that of basic ODMRP. Figure 5, on the other hand, shows that for various numbers of nodes the lifetime of our protocol is better than that of basic ODMRP. The better performance of the proposed protocol is due to the fact that it uses the routes that have a higher level of energy. This leads to balanced consumption of energy across routes and nodes; hence, the protocol experiences fewer route breakages and achieves better performance.


Figure 4. Lifetime vs. Mobility Speed (network lifetime of ODMRP and S-ODMRP as the mobility speed varies from 0 to 70 m/s)

Figure 5. Lifetime vs. Number of Nodes (network lifetime of ODMRP and S-ODMRP as the number of nodes varies from 10 to 100)

6. Conclusion
In this paper we proposed a novel routing protocol as an improvement over the ODMRP routing protocol. It uses an energy-aware weight function to involve the energy of the nodes in the route selection process. The main feature of this protocol is that it distributes the load among routes in consideration of their energy. This causes the load to be distributed uniformly among routes and leads to a long network lifetime. The simulation results indicated that the total delivered data, the network lifetime and the system lifetime of our protocol are better than those of the ODMRP routing protocol.


24
International Journal of Computational Intelligence and Information Security, Vol. 2 No. 1, January 2011

Evolutionary Approach for Unsupervised Classification and Application to Texture Images and Alphabetic Letters

F. Lekhal, M. El Hitmy, O. El Melhaoui and M. Nasri
LABO LEPAS, FS, University of Mohammed I, OUJDA, MOROCCO
lekhal_f@yahoo.fr
Abstract
The selection of the most pertinent attributes and the choice of the optimal number of classes are two very important problems in unsupervised data classification. In order to obtain a more precise classification, it is necessary to solve these problems optimally. We propose in this work a new optimization algorithm for unsupervised classification, based on the evolutionary approach, which optimizes the number of classes and the number of attributes and selects the attribute space by minimizing a cost function that we propose. The proposed approach is validated on two types of data: alphabetic letters, and a set of local oranges, apples and tomatoes. The tests are carried out by simulation and good results are obtained.
Keywords: Unsupervised classification, fuzzy C-means algorithm, number of classes, attributes selection,
genetic algorithm, evolution strategy.


1. Introduction
Data classification consists of regrouping the data into different classes. The data of a same class are all similar according to some criterion, while the data of two different classes are dissimilar according to the same criterion [1, 2]. Data classification has played an important role in resolving problems in pattern recognition, imaging and image segmentation, and has been used in fields such as medicine, biology, industry, telecommunications, etc. Data classification is of unsupervised type if the prior information about the data is entirely or partly unavailable. It is of supervised type if the necessary a priori information about the data is given.
In many applications of unsupervised classification, there are two very important problems: determination of the optimal partition [2, 3, 4] and selection of the most pertinent attribute parameters [5, 6, 7, 8]. In the literature, several studies have been made to solve these two problems.

The most classical algorithm used for determining the partition is the fuzzy C-means algorithm FCM [4, 9]. It works iteratively and requires prior knowledge of the number of classes. If this number is not known, several trials are made and the quality of the results is assessed through a validity criterion, which is minimized. Among the existing validity criteria, Bezdek [10, 11] has proposed two: the fuzzy partition coefficient and the entropy. These two criteria give a measurement of the degree of class overlapping, but they do not express the intrinsic geometrical properties of the classes. Xie and Beni [4, 8] have proposed another criterion that determines the degree of compactness and separability of the fuzzy partitions.

The initial stage of data analysis consists of selecting the most pertinent attributes to better discriminate between the data. In fact, the attributes are not necessarily all pertinent: some may be redundant and others not significant. The optimal choice of these parameters gives rise to a rapid and concise decision and may play, for the classification, the role of a filter against the noise brought about by the non-representative parameters [2]. There are currently several methods for selecting the attribute parameters for data classification [12]; three approaches are possible: the wrapper [13], the filter [14] and the hybrid [15]. Several articles on this subject have been published, involving either new techniques or improved versions of known methods. Several of these methods have been published since the 1970s [15] in statistical pattern recognition, machine learning and data mining, and have been widely applied to many fields such as text categorization, image retrieval, customer relationship management, intrusion detection and genomic analysis.

In this work, we have proposed a new learning algorithm for unsupervised classification, solving the two classification problems of selecting the most pertinent attributes and choosing the optimal number of classes. The algorithm is based on the evolutionary approach; it optimizes the number of classes, the number of attributes to be selected and the attribute space by minimizing a cost function that we propose. The new function is based on a new criterion obtained from a measurement of the compactness and separability of the classes and the degree of correlation between the attributes to be selected.
2. Fuzzy classification
2.1. Descriptive elements
Let's consider a set of M objects {o1, o2, .., oi, .., oM}, characterized by N parameters grouped in an initial row vector Vinit = (a1, a2, .., aj, .., aN). Let Ri = (aij)1≤j≤N be a row vector of RN where the jth component aij is the value taken by aj for the object oi. Ri is called the observation associated with oi. RN is the observation space, or the parameter space. Let EV = {a1, a2, ..., aj, .., aN} be the set of attributes associated with Vinit.
2.2. Fuzzy C-means algorithm.
Let's consider M observations (Ri)1≤i≤M to be associated with C different classes (CLs)1≤s≤C with respective centers (xs)1≤s≤C. The fuzzy C-means algorithm FCM associates with each observation Ri its membership degree µis ∈ [0, 1] to the class CLs. The FCM method consists of determining the class centers which minimize the optimization criterion defined by [2, 3, 9]:


$$J_m = \sum_{i=1}^{M}\sum_{s=1}^{C}(\mu_{is})^{d_f}\,\|R_i - x_s\|^2$$

Under the constraints:

$$\sum_{s=1}^{C}\mu_{is} = 1 \ \text{ for } i = 1 \text{ to } M \qquad\text{and}\qquad 0 < \sum_{i=1}^{M}\mu_{is} < M \ \text{ for } s = 1 \text{ to } C$$

$\|\cdot\|$ is the Euclidean distance, and $d_f$ is the ''fuzzy degree'', often taken equal to 2 [4]. In order to minimize $J_m$, $\mu_{is}$ and $x_s$ must be updated at each iteration according to [2]:

$$\mu_{is} = \frac{\left(\|R_i - x_s\|^2\right)^{\frac{1}{1-d_f}}}{\sum_{k=1}^{C}\left(\|R_i - x_k\|^2\right)^{\frac{1}{1-d_f}}} \qquad\text{and}\qquad x_s = \frac{\sum_{i=1}^{M}(\mu_{is})^{d_f}\,R_i}{\sum_{i=1}^{M}(\mu_{is})^{d_f}}$$

Despite the simplicity and popularity of this algorithm, it suffers from the same drawbacks as C-means does: the number of classes must be known a priori, the initialization problem, and the possibility that the convergence point may get stuck at a local rather than a global optimum [2, 11, 16]. We present in the coming section an evolutionary adaptation of FCM based on an evolution strategy in order to get around these three drawbacks [16, 17].
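For concreteness, here is a minimal sketch of the update rules above, assuming NumPy and df = 2; the random initialization and the convergence tolerance are our own choices, not prescribed by the paper:

    import numpy as np

    def fcm(R, C, df=2.0, iters=100, eps=1e-3):
        # R: M x q observation matrix; returns class centers and memberships.
        M = R.shape[0]
        rng = np.random.default_rng(0)
        x = R[rng.choice(M, C, replace=False)]       # initial class centers
        for _ in range(iters):
            d2 = ((R[:, None, :] - x[None, :, :]) ** 2).sum(-1) + 1e-12
            mu = d2 ** (1.0 / (1.0 - df))            # membership degrees
            mu /= mu.sum(axis=1, keepdims=True)      # each row sums to 1
            x_new = (mu.T ** df) @ R / (mu.T ** df).sum(1, keepdims=True)
            if np.abs(x_new - x).max() < eps:
                break
            x = x_new
        return x, mu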
2.3. Evolutionary Fuzzy C-means algorithm.
A classification method based on the evolutionary fuzzy C-means (EFCM) has been reported in [17]. The method looks for the optimal values of the class centers in an iterative way. The algorithm adapts the population elements, constituted by the class centers (xs)1≤s≤C, so as to minimize a fitness function defined on the search space by [3, 9, 17]:
$$J(chr) = \sum_{i=1}^{M}\sum_{s=1}^{C}(\mu_{is})^{d_f}\,\|R_i - x_s\|^2$$

where

$$\mu_{is} = \frac{\left(\|R_i - x_s\|^2\right)^{\frac{1}{1-d_f}}}{\sum_{k=1}^{C}\left(\|R_i - x_k\|^2\right)^{\frac{1}{1-d_f}}}$$

$$chr = (x_{sj})_{1\le s\le C,\,1\le j\le N} = (x_{11}\dots x_{1N}\ \ x_{21}\dots x_{2N}\ \dots\ x_{C1}\dots x_{CN})$$

chr is a real row vector of size C×N; the genes are the components $(x_{sj})_{1\le j\le N}$ of the center $x_s$.

At each iteration, a new population of centers is generated; it is formed from the best centers of the previous generation. New center values are then produced by using the selection and mutation genetic operators. The new chromosome is obtained from the previous one by [15, 20]:

$$x_s^* = x_s + f_m\,(x_{s0} - x_s)\times N(0,1)$$

where $f_m$ is a weighting factor generally taken between 0.5 and 1 and $x_{s0}$ is the gravity center of the class $CL_s = \{R_i \,/\, \mu_{is} = \max_{1\le r\le C}\mu_{ir}\}$; that is, $CL_s$ is formed by the $R_i$'s for which $\mu_{is}$ is the highest with respect to the other classes, $\mu_{is}$ being the membership degree of the element represented by $R_i$ to the class $CL_s$. The strategic parameter $\sigma' = f_m(x_{s0} - x_s)$ is small when $x_s$ gets near to $x_{s0}$ and large when $x_s$ is far away from $x_{s0}$. $N(0,1)$ is a Gaussian distribution with zero mean and unit standard deviation.
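A sketch of this mutation step (not the authors' code; the crisp assignment of each observation to its highest-membership class follows the definition of CLs above):

    import numpy as np

    def mutate_centers(R, x, mu, fm=0.7, rng=np.random.default_rng()):
        # x: C x q centers; mu: M x C memberships from FCM/EFCM.
        labels = mu.argmax(axis=1)              # CL_s: highest membership
        x_star = x.copy()
        for s in range(x.shape[0]):
            members = R[labels == s]
            if len(members) == 0:
                continue                        # empty class: keep center
            x_s0 = members.mean(axis=0)         # gravity center of CL_s
            x_star[s] = x[s] + fm * (x_s0 - x[s]) * rng.standard_normal()
        return x_star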

2.4. Optimization of the number of classes.


In many problems of partitioning objects into different classes, the choice of the number of classes C is difficult, and it is necessary to define a criterion to determine it [2]. Several criteria have been proposed in the literature [2, 3], including the informational criterion, the MDL, the standard log-likelihood, the Xie and Beni criterion, the entropy test, etc. The Xie and Beni criterion is considered the best [2, 3, 9] in our context; it is based on a measure of compactness and separability of the classes:

$$F_{XB}(C) = \frac{Comp(C)}{Sep(C)}$$

where

$$Comp(C) = \frac{1}{M}\sum_{i=1}^{M}\sum_{s=1}^{C}(\mu_{is})^{d_f}\,\|R_i - x_s\|^2 \qquad\text{and}\qquad Sep(C) = \min_{s\ne s'}\|x_s - x_{s'}\|^2$$

To choose the number of classes by calculating the index $F_{XB}$, it is necessary to run the FCM or EFCM algorithm for various values of C, from $C_{min}$ to $C_{max}$ in steps of 1. The optimal number of classes $C_{opt}$ is the one for which $F_{XB}(C)$ is lowest:

$$C_{opt} = \arg\min_C F_{XB}(C)$$

The criterion is of high reliability and accuracy and has been widely used for fuzzy classification validation [3].
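A sketch of this selection loop, reusing the fcm() sketch given earlier (the Cmin and Cmax values here are placeholders):

    import numpy as np

    def xie_beni(R, x, mu, df=2.0):
        M = R.shape[0]
        d2 = ((R[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        comp = (mu ** df * d2).sum() / M                  # compactness
        dc = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        sep = dc[~np.eye(len(x), dtype=bool)].min()       # separability
        return comp / sep

    def choose_C(R, Cmin=2, Cmax=10, df=2.0):
        scores = {C: xie_beni(R, *fcm(R, C, df), df) for C in range(Cmin, Cmax + 1)}
        return min(scores, key=scores.get)                # C_opt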

3. Attributes selection criteria


Several criteria for selecting the most pertinent attributes have already been published in the literature [5, 6, 7, 8]. These were mainly designed to evaluate the discriminating capacity of a set of attributes. Three approaches are possible: the wrapper [13], the filter [14] and the hybrid [15]. The wrapper approach directly uses the classifier recognition rate as the selection criterion of the attribute space. The procedure used with this criterion is to perform a classification on a test sample of large size and to measure the percentage of well-classified observations; the most discriminating subset is the one for which this percentage is maximal. The filter approach only estimates the discriminating power of a space of attributes from statistical measurements taken from a training sample. Among the existing criteria we quote [2] Trace, Hotelling, Auray, etc., which are all based on a measure of compactness and separability of the classes and give an evaluation of the discriminating power of a space of attributes. Compactness and separability measures require that the classes have already been obtained. The hybrid approach attempts to take advantage of the two other approaches by exploiting their different evaluation criteria in different search stages. In this work we are interested only in the filter approach.

4. Optimal choice of the number of classes, the number of attributes and the
attribute vector for the unsupervised classification by an evolutionary approach.
Attribute selection criteria require that the number of classes be fixed or known a priori, while the validation criteria based on FCM look for the number of classes in an attribute space that is fixed or known a priori. In this work we have combined the two problems of finding the optimal number of classes and the optimal attribute space into one problem where both factors are obtained simultaneously. We have used an index measuring the compactness and separability of the classes to determine a criterion through which we can obtain an optimal number of classes Copt, an optimal number of attributes qopt and the most pertinent qopt attributes, which find the best compromise of compactness and separability of the Copt classes.

4.1. Proposed criteria

Let C be the number of classes, q the number of attributes to be selected (q << N), and Vk* (k ∈ N*) an arbitrary attribute vector of dimension q obtained from Vinit. To estimate the discriminating power of Vk*, which partitions the M objects characterized by the q attributes, of realizations (ri)1≤i≤M, into C classes, we propose a new criterion J based on the notions of compactness and separability of the classes and the correlation between the attributes to be selected. ri is the observation vector formed by the q selected attribute values for the object oi. To calculate J, we proceed as follows:

1- Initially, fix the number of classes C, the number of attributes q and the q attributes to be selected, to obtain the vector Vk*.
2- Use the EFCM method to find the optimal centers (xs)1≤s≤C of the C classes, which best partition the M objects (oi)1≤i≤M.
3- Calculate the two criteria of compactness and separability of the classes, which are defined by:

$$Comp(C, q, V_k^*) = \frac{1}{M}\sum_{i=1}^{M}\sum_{s=1}^{C}(\mu_{is})^{d_f}\,\|r_i - x_s\|^2$$

$$Sep(C, q, V_k^*) = \sum_{\substack{(s,s')=1\\ s\ne s',\ s\prec s'}}^{C}\|x_s - x_{s'}\|^2$$

4- Calculate the correlation coefficient ρ between the q attributes, two by two.

The proposed optimization criterion is defined by:

$$J(C, q, V_k^*) = \frac{Comp(C, q, V_k^*)}{Sep(C, q, V_k^*)} \times \sum_{\substack{(a_j, a_l)\in E_{V_k^*}^2\\ j\le l}} \rho(a_j, a_l)$$

where $\rho(a_j, a_l)$ is the correlation coefficient between the two parameters $a_j$ and $a_l$ of $E_{V_k^*}$.

This criterion J is to be minimized. The C classes, q attributes and Vk* obtained through the minimization of J gather the points of each class in the most compact way, separate the different classes two by two to the maximum extent, and yield attributes of Vk* that are weakly correlated with one another.
The correlation factor makes it possible to avoid information redundancy. Indeed, correlated parameters provide similar information, and considering all the correlated parameters in the selection procedure means an increase in the dimension of the parameter space and in the classifier complexity. This implies an increase in the computation time, a slower decision process and a less precise selection of classes. If the q-dimensional selected attribute space contains k parameters that are pairwise correlated, then k-1 of these parameters do not significantly improve the discrimination, while at the same time the criterion value may increase with their addition. By removing these correlated parameters it is possible to obtain a more discriminating space with a lower dimension.
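As an illustration, the criterion J for one candidate (C, q, Vk*) could be evaluated as in the following sketch, which reuses the fcm() sketch above in place of EFCM; the column-index argument `cols` and the use of absolute correlation values are our own assumptions:

    import numpy as np

    def criterion_J(data, cols, C, df=2.0):
        R = data[:, cols]                            # observations r_i in Vk*
        x, mu = fcm(R, C, df)                        # stands in for EFCM here
        M = R.shape[0]
        d2 = ((R[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        comp = (mu ** df * d2).sum() / M
        dc = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        sep = np.triu(dc, k=1).sum()                 # sum over pairs s < s'
        rho = np.corrcoef(R.T)                       # q x q correlation matrix
        corr = sum(abs(rho[j, l]) for j in range(len(cols))
                   for l in range(j, len(cols)))     # pairs with j <= l
        return comp / sep * corr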

4.2. Proposed coding


Let P be an initial population consisting of maxpop chromosomes. Each chromosome chrk (k=1 to
maxpop) is characterized by a vector Vk composed of N +2 genes, where N is the number of initial attributes:
chrk= (Gk1 Gk2 gk1 …gkj … gkN)


Genes Gk1 and Gk2 are integer numbers representing, respectively, the number of classes C and the number of attributes to be selected q.
Let's consider chrk* = (gk1 …gkj … gkN).
The N genes (gkj)1≤j≤N are binary values, each representing an attribute; they form the chromosome chrk* associated with the vector Vk*, which represents the arbitrary attribute vector extracted from Vinit:

$$g_{kj} = \begin{cases} 1 & \text{if } a_j \text{ is present} \\ 0 & \text{otherwise} \end{cases}$$

Each gene encodes the presence or absence of the parameter aj in the vector Vk*; the value 1 or 0 of the gene means ''selecting aj'' or ''eliminating aj''.
Each chromosome chrk is chosen randomly. The number of classes C defined by the gene Gk1 must belong to the interval [2, Cmax], where Cmax is the maximum number of classes, generally taken equal to $E(\sqrt{M})$ (M is the number of objects to classify and E is the integer part of a decimal number) [2, 3, 4].
Cocquerez et al. [2] recommend a value of q comparable to the number of classes. Postaire [2, 18] indicates that the selection of the most pertinent parameters, in the sense of classification performance, depends on the data to classify and on the execution speed sought. Following these recommendations of Postaire and Cocquerez, for each chromosome k of the population P the number of attributes to be selected q is chosen randomly; this number is defined by the gene Gk2, which must be chosen in the interval [2, 2×Gk1].
The vector Vk encoded by the chromosome chrk can only be a possible solution to the problem if:
1- Gk1 ∈ [2, Cmax]
2- Gk2 ∈ [2, 2×Gk1] (if 2×Gk1 > N, we take 2×Gk1 = N)
3- the number of genes gkj equal to 1 is exactly Gk2.

If a chromosome encoding the vector Vk does not meet these three constraints it is automatically eliminated. To this end, we propose to take the two genes Gk1 and Gk2 so as to verify the first two conditions, and then to change randomly the N genes (gkj)1≤j≤N of this chromosome until the third constraint is met.
As an example, a chromosome of 10 genes where the initial number of attributes N = 8 may be defined
by:
chr= (3 2 0 1 0 0 1 0 0 0)
For this example, the attribute space dimension is equal to 2, and only the attributes a2 and a5 will be
considered, to partition the data into 3 classes.
The optimal chromosome simultaneously gives the optimal number of classes Copt, the optimal number qopt of attributes to be selected and the qopt best attributes.
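A sketch of this random coding under the three constraints (with Cmax = E(√M) as above; the function name is our own):

    import numpy as np

    def random_chromosome(N, M, rng=np.random.default_rng()):
        Cmax = max(2, int(np.sqrt(M)))
        G1 = rng.integers(2, Cmax + 1)               # number of classes C
        q_max = min(2 * G1, N)
        G2 = rng.integers(2, q_max + 1)              # number of attributes q
        genes = np.zeros(N, dtype=int)
        genes[rng.choice(N, G2, replace=False)] = 1  # exactly G2 genes set to 1
        return G1, G2, genes

    # e.g. (3, 2, [0 1 0 0 1 0 0 0]) selects a2 and a5, with C = 3 classes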

4.3. Proposed fitness function.


To calculate the selective value of the chromosome chrk coding Vk, we define the selection function corresponding to the proposed criterion J by:

$$F(chr_k) = \frac{Comp(C, q, V_k^*)}{Sep(C, q, V_k^*)} \times \sum_{\substack{(a_j, a_l)\in E_{V_k^*}^2\\ j\le l}} \rho(a_j, a_l)$$

chrk is the optimal solution if F(chrk) is minimal.

5. Experimental results
To evaluate the performance of our algorithm, we used two types of data: texture images and alphabetic letters.


5.1. Application to the classification of alphabetic letters


5.1.1. Alphabetic letter parameters
Recognizing a letter requires the analysis of its shape and the extraction of the special characteristics which will be exploited for its identification. The characteristic types may be divided into four principal groups [19]: structural, statistical, global transformations, and the superposition of models and correlation. In this work we are only interested in the statistical parameters. Eleven parameters are commonly used in this case [20]: the height-width rate (R), the number of pixels used for the letter image (PN), the gravity center coordinates (Gx, Gy), the letter pixel density (D), the number of connected components in 4-connectivity and 8-connectivity (Cn4, Cn8), the three inertia moments in x, y and xy (Ix, Iy, Ixy), and the orientation angle (OA).

Vinit = (R, PN, Gx, Gy, D, Cn4, Cn8, Ix, Iy, Ixy, OA)
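As an illustration, a few of these statistical parameters can be computed from a binary letter image as in the following sketch (only R, PN, Gx, Gy and D are shown; the convention that 1 marks a letter pixel is an assumption):

    import numpy as np

    def letter_features(img):
        ys, xs = np.nonzero(img)                 # letter pixel coordinates
        PN = len(xs)                             # number of letter pixels
        D = PN / img.size                        # letter pixel density
        Gx, Gy = xs.mean(), ys.mean()            # gravity center coordinates
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        R = h / w                                # height-width rate
        return R, PN, Gx, Gy, D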

5.1.2. Test 1
We have considered for this test 120 alphabetic letters distributed over three classes; each class contains 40 test letters, which come from handwritten samples of the same letter written by the main author (figure 1):

Letter1 ∈ CL1 Letter2 ∈ CL2 Letter3 ∈ CL3

Figure 1: Three samples of alphabetic letters used

The proposed algorithm runs rapidly; the running time was 5 min 24 s under MATLAB on our machine. Figure 2 shows the evolution of the selective value associated with the best chromosome of the current population with respect to the generation number.

Figure 2: Fitness evolution for the proposed criterion.


The optimal chromosome given by the proposed algorithm is:

chropt=(3 2 1 1 0 0 0 0 0 0 0 0 0)

We deduce, from this chromosome:

• Optimal number of classes: Copt = 3; this number coincides with the actual number of classes.
• Optimal number of selected attributes: qopt = 2.
• Optimal attribute space: Vopt = (R, PN).


To better evaluate the performance of this criterion, we conducted a supervised classification of the test alphabetic letters by the EFCM algorithm, with the number of classes Copt and the attribute space Vopt of dimension qopt obtained by the J criterion. The EFCM algorithm converges to the optimal solution after a low number of iterations. The results obtained are shown in figure 3:

Figure 3: Three classes obtained in the (R, PN) space.

The rate of misclassification in the obtained space is 0%, i.e., none of the 120 alphabetic letters is misclassified, which confirms the good performance of the J criterion. The three classes obtained are compact and separable, and the two selected parameters are weakly correlated (the correlation coefficient is ρ(R, PN) = 0.1935).

5.2. Application to the classification of texture images


5.2.1. Texture parameters
A texture is generally defined by a repetition of a basic pattern in different directions of the space. Its
analysis plays an important role in the interpretation of scenes. Many texture characterization methods were
proposed in the literature [2]. Many authors have proposed discriminate parameters for characterizing the
texture.
Haralick [2] has proposed fourteen parameters which were extracted from a cooccurrence matrix. Most
commonly used parameters are: Homogeneity (Hom), Local Homogeneity (HomL), Uniformity (Uni),
Directivity (Dir), Entropy (Ent) and Contrast (Cont). Galloway [2] has proposed five parameters which were
extracted from a run length matrix. These parameters are: Short Run Emphasis (SRE), Long Run Emphasis
(LRE), Gray Level Nonuniformity (GLN), Run Length Nonuniformity (RLN) and Run percentage (RP).
In each of our experiment case study we used the same initial attribute vector Vinit given by:
Vinit = (Dir, Uni, Hom, Cont, HomL, Ent, SRE, LRE, GLN, RLN, RP)
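As an illustration, a normalized cooccurrence matrix and two Haralick-style parameters can be computed as in this sketch (the displacement (dx, dy) = (0, 1) and the 16 gray levels are assumptions consistent with the test below):

    import numpy as np

    def cooccurrence(img, levels=16, dx=0, dy=1):
        # img: 2-D array of integer gray levels in [0, levels).
        P = np.zeros((levels, levels))
        H, W = img.shape
        for i in range(H - dx):
            for j in range(W - dy):
                P[img[i, j], img[i + dx, j + dy]] += 1
        return P / P.sum()                  # normalized cooccurrence matrix

    def contrast(P):
        i, j = np.indices(P.shape)
        return ((i - j) ** 2 * P).sum()

    def entropy(P):
        nz = P[P > 0]
        return -(nz * np.log2(nz)).sum()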

5.2.2. Test 2
We have considered in this experiment 20 oranges, 20 apples and 20 tomatoes of good quality. For each orange, apple and tomato we took 3 texture images from different sides, so we obtain 180 images divided into three classes (figure 4). These images were initially digitized in gray level, encoded on 8 bits (quantized over 256 gray levels), with a size of 256×256 pixels. The number of gray levels is reduced to 16 by the histogram equalization method [2]; this reduces the computation time of the cooccurrence and run length matrices for each texture image, as well as the computation time of the texture parameters extracted from these matrices [2].

Orange ∈ CL1 Apple ∈ CL2 Tomato ∈ CL3


Figure 4: Three samples of texture images used


The proposed algorithm runs rapidly; the running time was 6 min 36 s under MATLAB on our machine. Figure 5 shows the evolution of the selective value associated with the best chromosome of the current population with respect to the generation number.
[Plot: fitness of the best chromosome (0 to 0.25) versus generation number (0 to 30).]

Figure 5: Fitness evolution for the proposed criterion.

The optimal chromosome given by the proposed algorithm is:

chropt=(3 2 0 0 1 0 0 1 0 0 0 0 0)

We deduce, from this chromosome:

• Optimal number of classes: Copt = 3; this number coincides with the actual number of classes (the orange class CL1, the apple class CL2 and the tomato class CL3).
• Optimal number of selected attributes: qopt = 2.
• Optimal attribute space: Vopt = (Hom, Ent).

To better evaluate the performance of this criterion, we conducted a supervised classification of the test texture images by the EFCM algorithm, with the number of classes Copt and the attribute space Vopt of dimension qopt obtained by the J criterion. The EFCM algorithm converges to the optimal solution after a low number of iterations. The results obtained are shown in figure 6:

Figure 6: Three classes obtained in the (Hom, Ent) space.

The three classes obtained are compact and separable, and the two selected parameters are less correlated (the correlation coefficient is ρ(Hom, Ent) = 0.8206). The classification error rate obtained is zero. This confirms the good performance of our algorithm.


5.2.3. Test 3
In this experiment we are interested in the behavior of the proposed algorithm with respect to the nature of the textures to be classified. We have considered four different classes of textures; the first three are the same as in the previous experiment, and the fourth class contains 60 texture images taken from 20 oranges of a different type and quality from those used previously (figure 7).

Orange1 ∈ CL1 Apple ∈ CL2 Tomato ∈ CL3 Orange2 ∈ CL4

Figure 7: Four samples of texture images used

The proposed algorithm runs rapidly; the running time was 9 min 67 s under MATLAB on our machine. Figure 8 shows the evolution of the selective value associated with the best chromosome of the current population with respect to the generation number.

Figure 8: Fitness evolution for the proposed criterion.

The optimal chromosome given by the proposed algorithm is:

chropt=(4 3 1 0 1 0 1 0 0 0 0 0 0)

We deduce, from this chromosome:

• Optimal number of classes: Copt = 4; this number coincides with the actual number of classes.
• Optimal number of selected attributes: qopt = 3.
• Optimal attribute space: Vopt = (Dir, Hom, HomL).

The results of the texture image classification by EFCM are shown in figure 9. The rate of misclassification in the obtained space is 0%, which confirms the good performance of the fitness function F. The four classes obtained are compact and separable, and the three selected parameters are less correlated (correlation coefficient r(Dir, Hom, HomL) = 0.7880).


Figure 9: Four classes obtained in the (Dir, Hom, HomL) space.

6. Discussion
All validation criteria based on the FCM algorithm (for example Xie and Beni, the correlation coefficient, the entropy criterion, etc.) normally look for the optimal number of classes in a known attribute space. Criteria for selecting attributes based on the filter approach (for example Trace, Hotelling, Auray, etc.) require prior knowledge of the number of classes. The advantage of our criterion is that it finds simultaneously the three quantities: the optimal number of classes, the optimal number of attributes and the most pertinent attributes for the unsupervised classification. To achieve this goal, the proposed criterion minimizes the compactness value of the classes, maximizes the separability between the classes and favors attributes that are weakly correlated with one another.

Selecting the most pertinent attributes from an initially large set of attributes has been studied extensively, and comparisons between different selection methods have been performed [9]. For small-scale problems, the genetic algorithm has given good results both in pertinent attribute selection and in computation time.

Many articles use the genetic algorithm for selecting the most pertinent parameters, with different fitness functions. In [21, 22], we proposed a learning algorithm for unsupervised classification based on genetic algorithms. That method optimizes the number of classes and the attribute space by minimizing a Xie and Beni type of cost function: for each chromosome coded by the Vk vector, the number of classes is varied from Cmin to Cmax, and the Copt corresponding to the minimal value of the Xie and Beni criterion is retained. We compared the method of [21, 22] with the method proposed in this paper; we found the same results, but the computation time is far smaller with the present technique. For one example that we tried, the program ran in 5 min with the present method, whereas it took 15 min with the previous one.

7. Conclusion
The unsupervised classification of data requires the choice of the optimal number of classes, and selecting the most pertinent parameters allows a good separation of the classes. We used the evolutionary approach to optimize the choice of the number of classes, the number of selected attributes and the attribute space by minimizing a cost function that we proposed. At each iteration and for each chromosome, we used the EFCM method to find the optimal class centers, which best partition the objects to classify in the selected attribute space. The proposed cost function is based on a measure of the compactness and separability of the classes and of the correlations between attributes. The obtained attributes must be weakly correlated with one another to avoid the problem of redundant information.

The experimental results obtained for the two types of data clearly show the good performance of the proposed algorithm. In each experiment, the method has chosen the optimal number of classes, the optimal number of attributes and the optimal attribute vector, where the attributes are weakly correlated with one another. The algorithm has provided good separability and compactness of the classes in the parameter space.

The experiments conducted in this paper confirm that the proposed algorithm is a learning technique for unsupervised classification. The selected attributes are used to classify the objects by finding the optimal class centers, which are used as prototypes.

The proposed algorithm can be used effectively in quality control problems based on artificial vision, where the choice of pertinent parameters and of the optimal class centers is required to obtain good control.

References

[1] P. Jiang, F. Ren and N. Zheng. ‘A new approach to data clustering using a computational visual
attention model’’. International Journal of Innovative Computing, Information and Control Volume
5, Number 12(A), December 2009
[2] M. Nasri. Contribution à la classification des données par approches evolutionnistes : simulation et
application aux images de textures. Thèse de Doctorat, Université Mohammed premier, Faculté des
sciences Oujda, 2004.
[3] C. Duo, L. Xue and C. Du-Wu. An Adaptive Cluster Validity Index For The Fuzzy C-means, IJCSNS
International Journal of Computer Science and Network Security, VOL.7 No.2, February 2007
[4] D. -W. Kim, K. H. Lee, D. Lee, On Cluster Validity Index for Estimation of the Optimal Number of
Fuzzy Clusters, Pattern Recognition, 2004, 37: 2009–2025
[5] Dash M. and Liu. H.”Consistency-based search in feature selection”. Artificial intelligence, 151: 155-
176, 2003.
[6] Yuehui Chen, Ajith Abrahama and Bo Yanga. ”Feature selection and classification using flexible
neural tree”. Neurocomputing 70, 14 January 2006, 305-313
[7] P. Mitra, C.A. Murthy, and S.K. Pal, ”Unsupervised Feature Selection Using Feature Similarity”.
IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 301-312, Mar. 2002.
[8] H. Liu and L. Yu. ''Toward integrating feature selection algorithms for classification and clustering''. IEEE Transactions on Knowledge and Data Engineering, Vol. 17, No. 4, pp. 491-502, April 2005.
[9] K. Tasdemir and E. Merenyi. A new cluster validity index for prototype based clustering algorithms
based on inter- and intra-cluster density. In Proc. Int’l Joint Conference on Neural Networks, 2007
(IJCNN 2007), Orlando, FL, Aug 12-17, 2007
[10] J. C. Bezdek, Cluster Validity with Fuzzy Sets, J. Cybernet. 1974, 3: 58–72.
[11] J. C. Bezdek, Numerical Taxonomy with Fuzzy Sets, J. Math.Biol. 1974, 1: 57–71.
[12] Kudo M. and Sklansky J. ‘’Comparison of algorithms that select features for pattern classifiers’’.
Pattern Recognition, 33(1): 25-41, 2000.
[13] J.G. Dy and C.E. Brodley, “Feature Subset Selection and Order Identification for Unsupervised
Learning,” Proc. 17th Int’l Conf. Machine Learning, pp. 247-254, 2000.
[14] L. Yu and H. Liu, “Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter
Solution,” Proc. 20th Int’l Conf. Machine Learning, pp. 856-863, 2003.
[15] H. Liu and L. Yu. ''Toward Integrating Feature Selection Algorithms for Classification and Clustering''. IEEE Transactions on Knowledge and Data Engineering, Vol. 17, No. 4, April 2005.
[16] H. Ouariachi. Classification non supervisée de données par réseaux de neurones et une approche
évolutionniste: application à la classification d’images. Thèse soutenu le 20 janvier 2001. Faculté des
sciences oujda, Maroc.
[17] M. Nasri, M. El Hitmy, H. Ouariachi and M. Barboucha. ''Optimization of a fuzzy classification by evolutionary strategies''. In Proceedings of SPIE Conf., 6th International Conference on Quality Control by Artificial Vision, Vol. 5132, pp. 220-230, USA, 2003. Republished as an SME technical paper by the Society of Manufacturing Engineers (SME), Paper number MV03-233, ID TP03PUB135, Dearborn, Michigan, USA, pp. 1-11, 24 June 2003.
[18] Postaire, J. G. : De l'image à la décision : analyse des images numériques et théorie de la décision.
Editions DUNOD, 1987.
[19] Haitaamar, S. ''Segmentation de textes en caractères pour la reconnaissance optique de l'écriture arabe''. Thèse de doctorat, Université El-Hadj Lakhdhar Batna, 2007.


[20] Ghaouth Belkasmi. M. ‘’ Reconnaissance des formes par réseaux de neurones : application aux
caractères arabes’’. Thèse de DESA. Université Mohammed premier Oujda. 2007.
[21] F. Lekhal, M. El Hitmy and O. El Melhaoui. L'optimisation du vecteur attribut et du nombre de
classes pour la classification non supervisée par l'approche évolutionniste. 6éme Rencontre National
des Jeunes Chercheurs en Physique, 24-25 decembre 2009, Page 22, Casa, Maroc.
[22] F. Lekhal, M. El Hitmy, O. El Melhaoui and M. Nasri. ''Genetic approach and Xie and Beni criterion for optimizing the number of classes and the attribute vector: applications for unsupervised classification''. International Journal of Computational Intelligence and Information Security, Vol. 1, No. 9, pp. 28-36, November 2010.


RELIABILITY MEASURES FOR A STOCHASTIC PROCESS WORKING IN DIFFERENT WEATHER CONDITIONS

Anuj Kumar
Lecturer, Dept. of Mathematics, S.G.I.T., Ghaziabad, UP, India
Email: anuj.chdhr1@gmail.com

Abstract
In this paper, the author estimates various reliability parameters, such as the availability and the profit function, for a stochastic process which works in different weather conditions. The system is of non-Markovian nature; the author has introduced some supplementary variables to make it Markovian. The continuity argument and a limiting procedure have been used to form the difference-differential equations for all possible transition states of the system, and the Laplace transform has been used to solve the mathematical model. All failure rates follow the exponential time distribution, whereas all repair rates follow general time distributions. The whole system can also fail due to human error. The mathematical expressions for the various transition-state probabilities, the availability function and the profit function of the system have been computed. To improve the practical utility of the model, the steady-state probabilities of the different transition states and a particular case (when repairs follow the exponential time distribution) have also been obtained. A numerical computation together with its graphical illustration is appended at the end to highlight the important results of the study.

Key Words: Markovian Process, Supplementary Variables, Steady-state Behavior, Laplace Transform etc.


1. Introduction
The whole system under consideration consists of two distinct components working in parallel redundancy [1]. On failure of one component the system works in a reduced-efficiency state [2], and the failed unit can be repaired immediately. When both units have failed, the whole system is in a failed state and has to wait for repair. When the system works in different weather conditions [3], the failure and repair rates [4] differ. The transition-state diagram is depicted in fig-1.
The results obtained in this study are of much importance for various systems of practical utility and can be used as they stand for similar configurations. For example, in the computer era we can find several computer parts with a similar configuration whose performance differs across weather conditions. If we do not care for these parts, we may suffer a big loss, not only of the important data saved on them but also of the money and time needed to remove the problem.
We can also consider the example of an aeroplane, as it consists of various parts of a similar configuration whose performance differs with the weather conditions. If we do not care for these parts, the loss may be not only of money but also of the many lives aboard. Thus, the results obtained in this study can be applied to such sensitive parts to prevent a big loss.
The following assumptions have been associated with this study:
(i) The failure and repair rates differ with different weather conditions.
(ii) After failure of one unit of the system, we can give immediate repair but the system has to wait
for repair in case both units get failed.
(iii) Failures are S-independent.
(iv) All failures and waiting rates follow exponential time distribution whereas all repairs follow
general time distribution.
(v) Repairs are perfect, i.e., the system works like new after repair.
(vi) The whole system can fail due to human initiated errors.

List of notations used in this study is as follows:


λi (i = 1, 2, …, n): failure rate of the first unit of the system when it works in the ith weather condition.
µi (i = 1, 2, …, n): failure rate of the second unit of the system when it works in the ith weather condition.
w: waiting rate for the repair of both units of the system.
h: human error rate.
βi(x)∆ / γi(y)∆ (i = 1, 2, …, n): the first-order probability that one / two units of the system will be repaired in the time interval (x, x+∆) / (y, y+∆), conditioned on no repair up to time x / y, while the system works in the ith weather condition.
αh(z)∆: the first-order probability that the human error will be repaired in the time interval (z, z+∆), conditioned on no repair up to time z.
P0(t): Pr{system is operable at time t}.
Pi1(x,t)∆: Pr{system is degraded due to failure of one unit at time t}; the elapsed repair time for this failure in the ith weather condition lies in the interval (x, x+∆).
Pi2(t): Pr{system has failed at time t due to failure of both units of the system}; it is working in the ith weather condition and has to wait for repair.
Pi2R(y,t)∆: Pr{system is ready for the repair of two units at time t}; the elapsed repair time in the ith weather condition lies in the interval (y, y+∆).
Ph(z,t)∆: Pr{system has failed due to human error at time t}; the elapsed repair time lies in the interval (z, z+∆).


2. Literature Review:
In this section, the author presents the mathematical formulation of the model, its solution, some particular cases and various results related to reliability estimation.

2.1 Formulation of mathematical model


By using the continuity argument and a limiting procedure [5], we obtain the following set of difference-differential equations, continuous in time and discrete in space [6], governing the behavior of the considered model:

$$\left[\frac{d}{dt} + \sum_{i=1}^{n}\lambda_i + h\right]P_0(t) = \sum_{i=1}^{n}\int_0^{\infty} P_i^1(x,t)\,\beta_i(x)\,dx + \sum_{i=1}^{n}\int_0^{\infty} P_i^{2R}(y,t)\,\gamma_i(y)\,dy + \int_0^{\infty} P_h(z,t)\,\alpha_h(z)\,dz \quad (1)$$

$$\left[\frac{\partial}{\partial x} + \frac{\partial}{\partial t} + \mu_i + \beta_i(x)\right]P_i^1(x,t) = 0, \quad \forall i = 1, 2, \dots, n \quad (2)$$

$$\left[\frac{d}{dt} + w\right]P_i^2(t) = \int_0^{\infty} P_i^1(x,t)\,\mu_i\,dx, \quad \forall i = 1, 2, \dots, n \quad (3)$$

$$\left[\frac{\partial}{\partial y} + \frac{\partial}{\partial t} + \gamma_i(y)\right]P_i^{2R}(y,t) = 0, \quad \forall i = 1, 2, \dots, n \quad (4)$$

$$\left[\frac{\partial}{\partial z} + \frac{\partial}{\partial t} + \alpha_h(z)\right]P_h(z,t) = 0 \quad (5)$$

Boundary conditions are:

$$P_i^1(0,t) = \lambda_i P_0(t), \quad \forall i = 1, 2, \dots, n \quad (6)$$

$$P_i^{2R}(0,t) = w\,P_i^2(t), \quad \forall i = 1, 2, \dots, n \quad (7)$$

$$P_h(0,t) = h\,P_0(t) \quad (8)$$

Initial conditions are:

$$P_0(0) = 1, \ \text{otherwise zero} \quad (9)$$


[Fig-1 shows the state-transition diagram. From the good state P0(t), failure of one unit (rate λi) leads to the degraded state Pi1(x,t), from which repair βi(x) returns the system to P0(t); failure of the second unit (rate µi) leads to the failed state Pi2(t), which passes at the waiting rate w to the ready-to-repair state Pi2R(y,t), from which repair γi(y) returns the system to P0(t); human error (rate h) leads to the failed state Ph(z,t), repaired at rate αh(z). States: good, degraded, failed, ready to repair.]

Fig-1 State-Transition Diagram

2.2 Solution of mathematical model


We shall solve the above system of difference-differential equations with the aid of the Laplace transform to obtain the probabilities of the different transition states depicted in fig-1. Taking the Laplace transform [7], [8] of equations (1) through (8), subject to the initial conditions (9), and then solving them one by one, we obtain:


$$\bar{P}_0(s) = \frac{1}{A(s)} \quad (10)$$

$$\bar{P}_i^1(s) = \frac{\lambda_i}{A(s)}\,D_{\beta_i}(s+\mu_i), \quad \forall i = 1, 2, \dots, n \quad (11)$$

$$\bar{P}_i^2(s) = \frac{\lambda_i\mu_i}{A(s)(s+w)}\,D_{\beta_i}(s+\mu_i), \quad \forall i = 1, 2, \dots, n \quad (12)$$

$$\bar{P}_i^{2R}(s) = \frac{\lambda_i\mu_i w}{A(s)(s+w)}\,D_{\beta_i}(s+\mu_i)\,D_{\gamma_i}(s), \quad \forall i = 1, 2, \dots, n \quad (13)$$

$$\bar{P}_h(s) = \frac{h}{A(s)}\,D_{\alpha_h}(s) \quad (14)$$

where $D_a(b) = \dfrac{1 - \bar{S}_a(b)}{b}$ for all $a$ and $b$, $\bar{S}_a(\cdot)$ being the Laplace transform of the repair-time density, and

$$A(s) = s + \sum_{i=1}^{n}\lambda_i + h - \sum_{i=1}^{n}\lambda_i\,\bar{S}_{\beta_i}(s+\mu_i) - h\,\bar{S}_{\alpha_h}(s) - \sum_{i=1}^{n}\frac{\lambda_i\mu_i w}{(s+w)}\,D_{\beta_i}(s+\mu_i)\,\bar{S}_{\gamma_i}(s) \quad (15)$$

It is worth noticing that

$$\bar{P}_0(s) + \sum_{i=1}^{n}\left[\bar{P}_i^1(s) + \bar{P}_i^2(s) + \bar{P}_i^{2R}(s)\right] + \bar{P}_h(s) = \frac{1}{s} \quad (16)$$

2.3 Steady-state behaviour of the system


Using Abel's lemma, viz. $\lim_{t\to\infty} F(t) = \lim_{s\to 0} s\,\bar{F}(s) = F$ (say), provided the limit on the L.H.S. exists, in equations (10) through (14), we have the following steady-state [7] probabilities:

$$P_0 = \frac{1}{A'(0)} \quad (17)$$

$$P_i^1 = \frac{\lambda_i}{A'(0)}\,D_{\beta_i}(\mu_i), \quad \forall i = 1, 2, \dots, n \quad (18)$$

$$P_i^2 = \frac{\lambda_i\mu_i}{w\,A'(0)}\,D_{\beta_i}(\mu_i), \quad \forall i = 1, 2, \dots, n \quad (19)$$

$$P_i^{2R} = \frac{\lambda_i\mu_i}{A'(0)}\,D_{\beta_i}(\mu_i)\,M_{\gamma_i}, \quad \forall i = 1, 2, \dots, n \quad (20)$$

$$P_h = \frac{h}{A'(0)}\,M_{\alpha_h} \quad (21)$$

where $A'(0) = \left[\dfrac{d}{ds}A(s)\right]_{s=0}$ and $M_a = -\bar{S}_a'(0)$ = mean time to repair the $a$th component.


2.4 A particular case


When all repairs follow the exponential time distribution, setting $\bar{S}_a(b) = \dfrac{a}{b+a}$ for all $a$ and $b$ in equations (10) through (14), we obtain the following transition-state probabilities:

$$\bar{P}_0(s) = \frac{1}{B(s)} \quad (22)$$

$$\bar{P}_i^1(s) = \frac{1}{B(s)}\cdot\frac{\lambda_i}{s+\mu_i+\beta_i}, \quad \forall i = 1, 2, \dots, n \quad (23)$$

$$\bar{P}_i^2(s) = \frac{1}{B(s)}\cdot\frac{\lambda_i\mu_i}{(s+w)(s+\mu_i+\beta_i)}, \quad \forall i = 1, 2, \dots, n \quad (24)$$

$$\bar{P}_i^{2R}(s) = \frac{1}{B(s)}\cdot\frac{\lambda_i\mu_i w}{(s+w)(s+\mu_i+\beta_i)(s+\gamma_i)}, \quad \forall i = 1, 2, \dots, n \quad (25)$$

$$\bar{P}_h(s) = \frac{1}{B(s)}\cdot\frac{h}{s+\alpha_h} \quad (26)$$

where

$$B(s) = s + \sum_{i=1}^{n}\lambda_i + h - \sum_{i=1}^{n}\frac{\lambda_i\beta_i}{s+\mu_i+\beta_i} - \frac{h\,\alpha_h}{s+\alpha_h} - \sum_{i=1}^{n}\frac{\lambda_i\mu_i w\,\gamma_i}{(s+w)(s+\mu_i+\beta_i)(s+\gamma_i)}$$

2.5 Up and down state probabilities of the system


We have

$$\bar{P}_{up}(s) = \bar{P}_0(s) + \sum_{i=1}^{n}\bar{P}_i^1(s)$$

Substituting the values on the R.H.S. and taking the inverse Laplace transform, we obtain the up-state probability of the whole system:

$$P_{up}(t) = (1+E)\,e^{-(\lambda+h)t} - E\,e^{-\mu t} \quad (27)$$

where

$$\sum_{i=1}^{n}\lambda_i = \lambda, \qquad \sum_{i=1}^{n}\mu_i = \mu \qquad\text{and}\qquad E = \frac{\lambda}{\mu - \lambda - h} \quad (28)$$

Also, the down-state probability is

$$P_{down}(t) = 1 - P_{up}(t) \quad (29)$$

It is interesting to note here that $P_{up}(0) = 1$.
2.6 Profit function for the system
The profit function for the system can be obtained by the formula

$$PR(t) = C_1\int_0^t P_{up}(t)\,dt - C_2\,t - C_3 \quad (30)$$

where $C_1$ is the revenue per unit time, $C_2$ is the repair cost per unit time and $C_3$ is the setup cost per cycle. Using (27), equation (30) gives:

$$PR(t) = C_1\left[\frac{(1+E)}{(\lambda+h)}\left(1-e^{-(\lambda+h)t}\right) - \frac{E}{\mu}\left(1-e^{-\mu t}\right)\right] - C_2\,t - C_3 \quad (31)$$

where E, λ and µ are as explained earlier.

2.7 Numerical computation


To observe the variations in the values of the availability and profit functions with respect to time t, let us consider the following numerical computation:
λ = 0.001, µ = 0.002, h = 0.02, C1 = Rs. 5.00/unit time, C2 = Re. 1.00/unit time, C3 = Rs. 7.00/setup and t = 0, 1, 2, ….
Using these values in equation (27), one can draw the graph shown in fig-2. Using this numerical computation in equation (31), one can draw the graph shown in fig-3.
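The computation can be reproduced with a short script such as the following sketch of equations (27), (28) and (31) under the numerical values above:

    import math

    lam, mu, h = 0.001, 0.002, 0.02       # lambda, mu, h
    C1, C2, C3 = 5.00, 1.00, 7.00         # revenue, repair cost, setup cost
    E = lam / (mu - lam - h)              # eq. (28)

    def Pup(t):                           # availability, eq. (27)
        return (1 + E) * math.exp(-(lam + h) * t) - E * math.exp(-mu * t)

    def PR(t):                            # profit function, eq. (31)
        return (C1 * ((1 + E) / (lam + h) * (1 - math.exp(-(lam + h) * t))
                      - E / mu * (1 - math.exp(-mu * t)))
                - C2 * t - C3)

    for t in range(0, 11):
        print(t, round(Pup(t), 4), round(PR(t), 2))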

3. Result and discussion

In this study, the author has evaluated some important reliability parameters for a stochastic process working in different weather conditions. The supplementary variables technique and the Laplace transform have been used to formulate and solve the mathematical model. The steady-state behavior of the system and a particular case have also been appended to improve the practical utility of the model, and a numerical computation has been considered to highlight the important results.
Fig-2 reveals that the availability of the system decreases very slowly and uniformly as time t increases. A critical examination of fig-3 shows that initially the profit function is negative, because money is spent to establish the new system, but after t = 2 it becomes positive and increases with time t in an approximately constant manner.

[Fig-2: availability Pup(t) versus time t. Fig-3: profit PR(t) versus time t.]

References

[1] Barlow, R.E; Proschan, F. (1965): “Mathematical Theory of Reliability”, New York; John Wiley.
[2] Chung, W.K. (1988): "A k-out-of-n:G redundant system with dependent failure rates and common cause failures", Microelectron. Reliab., U.K., Vol. 28: 201-203.
[3] Gnedenko, B.V.; Belyayev, Y.K.; Solovyev, A.D. (1969): "Mathematical Methods of Reliability Theory", Academic Press, New York.
[4] Gupta, P.P.; Gupta, R.K. (1986): "Cost analysis of an electronic repairable redundant system with critical human errors", Microelectron. Reliab., U.K., Vol. 26: 417-421.
[5] Nagraja, H.N.; Kannan, N.; Krishnan, N.B. (2004): “Reliability”, Springer Publication.


[6] Pandey, D.; Jacob, Mendus (1995): "Cost analysis, availability and MTTF of a three state standby complex system under common-cause and human failures", Microelectron. Reliab., U.K., Vol. 35: 91-95.
[7] Sharma, S.K.; Sharma, Deepankar; Masood, Monis (2005): "Availability estimation of urea manufacturing fertilizer plant with imperfect switching and environmental failure", Journal of Combinatorics, Information & System Sciences, Vol. 29 (1-4): 135-141.
[8] Sharma, Deepankar; Sharma, Jyoti (2005): "Estimation of reliability parameters for telecommunication system", Journal of Combinatorics, Information & System Sciences, Vol. 29 (1-4): 151-160.


A Survey on Fight Against Unknown Malware Attack


S. Murugan, MCA., M.Phil., CGT., MISTE., (MS).
ACTS, CDAC Knowledge Park, No. 1 Old Madras Road, Bangalore
s.murugan@teacher.com
Mobile: 9902147721

Dr. K. Kuppusamy, Director i/c
Computer Centre, Alagappa University, Karaikudi
kkdiksamy@yahoo.com
Mobile: 9443005711

Abstract

An intrusion detection system (IDS) plays an important role as a device to defend our networks from unknown malware attacks. However, since it still struggles to detect an unknown attack, i.e., a 0-day attack, the ultimate challenge in the intrusion detection field is how to identify such an attack exactly. This paper analyzes various unknown malware activities occurring during networking, internet use or remote connection. Various tools are available for identifying known malware, but they do not detect unknown malware exactly; detection varies according to the connectivity, the tools used and the finding strategies employed. Nevertheless, as with known malware, some unknown malware can be listed according to its abnormal activities and the changes it makes in the system.
First, an unknown malware attack is defined as the occurrence of an organization being actively subjected to a threat. In other words, the presence of a threat "in the wild" does not constitute an unknown malware attack; it is only when the threat is operating against a specific target that the associated organization is considered to be "under attack." An "unknown attack" then is one where the organization's countermeasures are not able to identify the threat it is being exposed to.
Unknown attacks are quickly becoming the next great information security challenge for today's organizations. As the window of time between the disclosure of a new vulnerability and the emergence of unique threats that operate against it continues to diminish, so does the effectiveness of many conventional countermeasures, including patch management.
In this paper we survey, in a bird's-eye-view manner, the various unknown attack methods and the preventions for avoiding them.


Attack and Unknown Attack

First, an attack is defined as the occurrence of an organization being actively subjected to a threat. In other words, the presence of a threat "in the wild" does not constitute an attack; it is only when the threat is operating against a specific target that the associated organization is considered to be "under attack." An "unknown attack" then is one where the organization's countermeasures are not able to identify the threat it is being exposed to. For example, if an organization was confronted with the variant 'Zotob.B' before its anti-virus or intrusion detection/prevention systems received applicable signature updates, then this would be characterized as an unknown attack (assuming that no other installed countermeasures could identify it either). Furthermore, it is important to realize that just because an attack is unknown does not mean that an organization is susceptible to it or that it is unable to protect itself against it. This last aspect and other nuances of unknown attacks will be explored further in subsequent sections of this paper. In fact, two additional terms will prove helpful when that time comes, specifically:

A zero-day vulnerability is one that is unpublished. By definition, all vulnerabilities are zero-day before they are disclosed to the world, but practitioners in the art commonly use the term to refer to unpublished vulnerabilities that are actively exploited in the wild. We further distinguish zero-day vulnerabilities from published vulnerabilities as those for which no patch, upgrade, or solution is yet available from the responsible vendor, although some fail to make this distinction. Short of unplugging from the network and powering off the computer system, no IT professional in charge of security can be sure that the system doesn't contain a vulnerability.

And herein lies the root cause of the zero-day vulnerability fear: how can we protect against a vulnerability we don't know about if we don't know what that vulnerability is? Sure, we can implement defense-in-depth and redundancy and enforce the minimum privilege principle, but at the end of the day, how do we know that we've done enough? In our daily lives, we usually allay our fear of the unknown with knowledge; it's no different in the computer security arena. Once upon a time, most computer break-ins on the Internet (erstwhile, the Arpanet) were the result of zero-day vulnerabilities that a few kept to themselves and that vendors refused to acknowledge or fix. The response to this environment was the full disclosure movement, which aimed to shed light into the dark unknown of vulnerabilities. Practitioners in the field have debated whether this change was for the better or worse ever since (a flame war we will leave for another day), but it suffices to say that currently, most attackers appear to be using previously disclosed vulnerabilities to break into computer systems. Theoretically, these are vulnerabilities we should be protected from, but this theory assumes that there is enough time to develop and put protections in place after disclosure but before coming under attack.
To fight against new unknown malware, there are two potential approaches: proactive and reactive. Proactive approaches focus on eliminating vulnerabilities to prevent any malware; reactive approaches concern actions taken after malware is unleashed. We believe that there is a long way to go before we can achieve zero attacks. Since there are more and more malware variants, and self-propagating malware can spread very rapidly, we need fast, automatic detection. In this work, we aim to develop new techniques and systems to automate the detection of new unknown malware. To tackle this problem, we face two fundamental challenges.
False Alarms: To detect new unknown malware means we cannot use signatures; instead, we must detect it by recognizing certain patterns possessed solely by some malware. Anomaly-based intrusion detection techniques have been studied for many years [4, 18], but they are not widely deployed in practice due to their high false alarm rates. Axelsson [5] used the base-rate fallacy phenomenon to show that, when intrusions are rare, the false alarm rate (i.e., the probability that a benign event causes an alarm) has to be very low to achieve an acceptable Bayesian detection rate (i.e., the probability that an alarm implies an intrusion). In automatic detection and reaction, we tolerate fewer false alarms because actions on false alarms would interrupt normal communication and computation.
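Axelsson's base-rate argument can be made concrete with a one-line application of Bayes' theorem, as in this illustrative sketch (the rates used are examples, not figures from [5]):

    def bayesian_detection_rate(p_intrusion, p_detect, p_false_alarm):
        # P(intrusion | alarm) by Bayes' theorem.
        p_benign = 1.0 - p_intrusion
        return (p_detect * p_intrusion) / (
            p_detect * p_intrusion + p_false_alarm * p_benign)

    # Suppose 1 event in 100,000 is an intrusion and 99% are detected:
    print(bayesian_detection_rate(1e-5, 0.99, 1e-3))  # ~0.0098: ~1% of alarms are real
    print(bayesian_detection_rate(1e-5, 0.99, 1e-6))  # ~0.91: a usable detector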
User Intent: This is a characteristic unique to personal computers. Most benign software running on personal computers shares a common feature: its activity is user-driven. Thus, if we can infer user intent, we can detect break-ins of new unknown malware by identifying activity that the user did not intend. In other words, instead of attempting to model anomalous behavior, we model the normal behavior of benign software: its activity follows user intent. Since users of personal computers usually use input devices such as keyboards and mice to control their computers, we assume that user intent can be inferred from user-driven activity such as key strokes and mouse clicks. Software activity may include network requests, file access, or system calls; we focus on network activity here. The idea is that an outbound network connection that is not requested by a user, or cannot be traced back to some user request, is treated as likely to be malicious. A large class of malware makes such malicious outbound network connections, either for self-propagation (worms) or to disclose user information (spyware/adware). We refer to these malicious, user-unintended, outbound network connections as extrusions. Therefore, we can detect new unknown malware on personal computers by identifying extrusions.
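A toy sketch of this extrusion idea follows; the event format and the 2-second correlation window are our own assumptions, not taken from any of the surveyed systems:

    WINDOW = 2.0  # seconds of allowed lag between user input and connection

    def find_extrusions(input_events, connections, window=WINDOW):
        # input_events: sorted timestamps of key strokes / mouse clicks.
        # connections: list of (timestamp, process, destination) requests.
        extrusions = []
        for t, proc, dest in connections:
            if not any(t - window <= ti <= t for ti in input_events):
                extrusions.append((t, proc, dest))  # user-unintended traffic
        return extrusions

    clicks = [10.0, 10.4, 55.2]
    conns = [(10.9, "browser", "example.org:443"),
             (42.0, "worm.exe", "1.2.3.4:445")]
    print(find_extrusions(clicks, conns))  # -> only the worm.exe connection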
A signature-based IDS works similarly to anti-virus software. It employs a signature database of well-
known attacks, and a successful match with the current input raises an alert. Signatures generally target widely
used applications or systems for which security vulnerabilities are widely advertised. Like anti-virus software,
which fails to identify unknown viruses (either because the database is out of date or because no signature is
available yet), a signature-based IDS fails to detect unknown attacks.
To overcome this limitation, researchers have been developing anomaly-based IDSs. An anomaly-based
IDS works by building a model of normal data/usage patterns and then comparing (using a similarity metric) the
current input with the model. A significant difference is marked as an anomaly. The main feature of an anomaly-
based IDS is its ability to detect previously unknown (or modifications of well-known) attacks as soon as
they take place. An anomaly-based IDS can also adapt to custom-developed systems/applications, which are
widely deployed and for which it is not cost-effective to maintain a set of signatures.
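A minimal sketch of this model-and-compare loop, assuming a toy feature vector (requests per minute, bytes per minute), Euclidean distance as the similarity metric, and an arbitrary threshold, might look like this:

```python
import math

# A minimal sketch of an anomaly-based IDS: build a model of normal usage
# from attack-free training data, then flag any input whose distance to
# the model exceeds a threshold. Feature choice and threshold are
# illustrative assumptions.

def mean_vector(samples):
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Training data: e.g. (requests/min, bytes out/min) during normal use.
normal = [(10, 200), (12, 220), (9, 180), (11, 210)]
model = mean_vector(normal)
threshold = 3 * max(distance(s, model) for s in normal)  # assumed margin

for event in [(11, 205), (400, 90000)]:
    label = "anomaly" if distance(event, model) > threshold else "normal"
    print(event, "->", label)
```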

As a result, an alternative approach, called intrusion detection, has been proposed. The idea is to retrofit
existing systems with security by detecting attacks, preferably in real time, and alerting a system security officer
(SSO) or a security analyst.
Using either definition of an intrusion, we define an Intrusion Detection System (IDS) in the following
way:
Working definition 3 (Intrusion Detection System [Axe05]): An automated system that detects and
raises an alarm for any situation in which an intrusion has taken place or is about to take place.

The general architecture of an IDS is shown in Figure 2.1. The IDS observes the behavior of the monitored
system and collects the audit information, which can subsequently be stored prior to processing. The
processing component uses reference and configuration data (e.g., information about prior intrusions or the
baseline of normal behavior for anomaly-based IDSs), processes the audit information and generates alerts.
While doing this, the processing component may temporarily store data needed for further processing (e.g.,
context, session information). Finally, the output in the form of alerts is processed by either a human analyst,
who can initiate a response to an intrusion, or an automated response system.
The latter approach is particularly appealing as only a fully automated response can prevent attacks in
real time; however, it is potentially dangerous if implemented incorrectly.
In fact, some automated intrusion response systems have been known to cause serious damage,
including preventing legitimate users from using the systems, or even being misused by an intruder to trick the
system into performing a denial-of-service attack against itself. Therefore, automated response systems have to
be used with extreme caution.
In the next two sections we will discuss the two main components of IDSs: the audit collection component
and the processing component.
An Intrusion Detection System (IDS) inspects the activities in a system for suspicious behaviour or
patterns that may indicate system attack or misuse. There are two main categories of intrusion detection
techniques: anomaly detection and misuse detection. The former analyses the information gathered and
compares it to a defined baseline of what is seen as "normal" service behaviour, so it has the ability to learn how
to detect network attacks that are currently unknown. Misuse detection is based on signatures for known attacks,
so it is only as good as the database of attack signatures that it uses for comparison. Misuse detection has a low
false-positive rate but cannot detect novel attacks, whereas anomaly detection can detect unknown attacks but
has a high false-positive rate.
ADAM (Audit Data Analysis and Mining) [4] is an intrusion detector built to detect intrusions using
data mining techniques. It first absorbs training data known to be free of attacks. Next, it uses an algorithm to
group attacks, unknown behaviour, and false alarms. ADAM has several useful capabilities, namely:
o Classifying an item as a known attack
o Classifying an item as a normal event
o Classifying an item as an unknown attack
o Matching audit trail data to the rules it gives rise to

The framework of the Intrusion Detection Model.


Protection
Zero-day protection is the ability to provide protection against zero-day exploits. Zero-day attacks can
also remain undetected after they are launched. Many techniques exist to limit the effectiveness of zero-day
memory corruption vulnerabilities, such as buffer overflows. These protection mechanisms exist in contemporary
operating systems such as Apple's Mac OS X, Microsoft Windows Vista, Sun Microsystems Solaris, Linux,
Unix, and Unix-like environments; Microsoft Windows XP Service Pack 2 includes limited protection against
generic memory corruption vulnerabilities. Desktop and server protection software also exists to mitigate
zero-day buffer overflow vulnerabilities.
"Multiple layers" provides service-agnostic protection and is the first line of defense should an exploit
in any one layer be discovered. An example of this for a particular service is implementing access control lists in
the service itself, restricting network access to it via local server firewalling (i.e. iptables), and then protecting
the entire network with a hardware firewall. All 3 layers provide redundant protection in case a compromise in
any one of them is discovered.
The use of port knocking or single packet authorization daemons may provide effective protection
against zero-day exploits in network services; however, these techniques are not suitable for environments with a
large number of users. Whitelisting also protects effectively against zero-day threats: it allows only
known good applications to access a system, so any new or unknown exploits are denied access.
Although whitelisting is effective against zero-day attacks, unless it is combined with other methods of protection
such as a HIPS or a blacklist of virus definitions, it can sometimes be quite restrictive for the user.
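A minimal sketch of such a hash-based whitelist check is shown below; the digest in the list is a placeholder, and a real deployment would enforce this from a kernel or driver hook rather than a script.

```python
import hashlib

# A minimal sketch of application whitelisting: only binaries whose
# contents match a known-good hash may run; anything new or unknown
# (including a zero-day dropper) is denied by default. The digest below
# is a placeholder, not a real value.

WHITELIST = {
    "9f2c0000000000000000000000000000000000000000000000000000000000a1",
}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    # Default-deny: unknown binaries simply fail the membership test.
    return sha256_of(path) in WHITELIST
```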
Engineers and vendors such as Gama-Sec in Israel and DataClone Labs in Reno, Nevada are attempting
to provide support with the Zeroday Project, which purports to provide information on upcoming attacks and
provide support to vulnerable systems.
Another method to avoid zero-day attacks is to wait for a reasonable period of time before upgrading to
a new major version. Exploits that are discovered in new software are often addressed in a timely manner by
the software developer and fixed by later minor updates. Minor updates to older software that contain security
fixes should always be installed to maximize security. While this method avoids "zero day"
vulnerabilities that are discovered by the zeroth day of the software release cycle, security holes can be
discovered at any time. If they are announced to the public before the software vendor is informed, exploits can
be made on the "zeroth day" of the vulnerability window.

Steps in Learning and Generalization of Unknown Attacks

1. Determine if the observed error is repeatable, based on the connection history file since the last rejuvenation. If
repeatable, declare an attack and continue. If not, return.
2. Determine which connection request (or requests) from the history file caused the observed error.
3. Develop a filter rule to block this connection request pattern, test it, and send it to the content filter. Also block
the associated user ID and IP address.
4. Characterize the observed attack (i.e., classify it according to meaningful types).
5. Shorten the blocking filter, if possible.
a. Determine if the observed attack sequence has an initiating event.
b. If the initiating event is smaller than the observed attack sequence, shorten the blocking filter to block just the
specific initiating event and test it.
6. Based on the characterization and observed attack specifics, generalize the blocking filter
to protect against simple variants of the attack and test it.
7. Return.
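Steps 1 through 3 might be sketched as follows, under assumed data structures for the connection history file; this is an illustration of the flow, not the authors' implementation.

```python
from collections import Counter

# A rough sketch of steps 1-3 above, under assumed data structures: the
# history is a list of (user_id, ip, request) tuples since the last
# rejuvenation, and an error is "repeatable" if the same request pattern
# preceded it more than once.

def learn_blocking_filter(history, error_requests):
    # Step 1: declare an attack only if the observed error is repeatable.
    counts = Counter(req for (_, _, req) in history if req in error_requests)
    repeatable = {req for req, n in counts.items() if n > 1}
    if not repeatable:
        return None  # not repeatable -> return without declaring an attack

    # Step 2: connection requests from the history that caused the error.
    offending = [(uid, ip, req) for (uid, ip, req) in history if req in repeatable]

    # Step 3: filter rule blocking the request pattern, user ID and IP.
    return {
        "block_requests": sorted(repeatable),
        "block_users": sorted({uid for uid, _, _ in offending}),
        "block_ips": sorted({ip for _, ip, _ in offending}),
    }
```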

Examining the relative effectiveness of various countermeasures against unknown attacks can be
simplified by considering them in terms of two broad classifications: negative-model countermeasures and
positive-model countermeasures.

1. Negative-model countermeasures

Negative-Model countermeasures are countermeasures based on identifying those bits of traffic and
communication sessions which are known to be bad, presumably because they are associated with a
threat. Antivirus and intrusion detection/prevention systems are classic examples of this type of
countermeasure.

2. Positive-model countermeasures
Positive-model countermeasures are countermeasures based on identifying those bits of traffic and
communication sessions which are known to be good, and then allowing only them to proceed, thereby
excluding all other traffic and communications. Router ACLs (access control lists) and firewalls are
classic examples of this type of countermeasure.

Negative-model countermeasures
In reality, this is the only type of countermeasure that can identify a specific threat in conjunction with
stopping it. However, by definition this means that negative-model countermeasures are ineffective against
unknown attacks. The problem with the negative-model approach is that it depends on enumerating all of the
possible threats that will ever exist prior to them being built and launched. It should be clear that this is
essentially an impossible task—sort of like trying to count the stars in our galaxy (which is actually an unknown
number somewhere between a few hundred billion and a couple of trillion). Indeed, the magnitude of this
challenge puts the primary factor responsible for the allure of negative model tools in perspective. Specifically, it
really is only a small consolation that substantially all of the enumeration effort is outsourced to the associated
product vendors.

To be fair, however, there is a handful of emerging detection mechanisms being used as extensions of
negative-model tools, such as anti-virus and intrusion detection/prevention systems, that are proving to
be somewhat effective against unknown attacks. In contrast to earlier generations of these tools, which solely
sought to detect each threat individually and specifically, the newer mechanisms are based on identifying various
"signs" of a threat, instead of the actual threat itself. They include heuristics, vulnerability-based signatures, and
anomaly detection algorithms.

• Heuristics are essentially signatures that represent broad-based behaviors that are known to always be bad.
For example, an unauthorized host systematically scanning an unused address space could be considered a
known component of a threat. Further details regarding the exploit mechanism or the specific vulnerability
it is operating against are simply irrelevant in such a scenario. The challenge with heuristics, however, is that the
set of possible signatures is relatively small (estimated at less than 50). Consequently, the degree of threat
coverage that can be provided, while potentially significant, is by no means comprehensive.
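A minimal sketch of one such heuristic is shown below; the dark address range and the alert threshold are assumptions for illustration.

```python
from collections import defaultdict
import ipaddress

# A minimal sketch of the scanning heuristic described above: a source
# that touches many distinct unused ("dark") addresses in a short span is
# flagged regardless of exploit or vulnerability details. The unused
# range and threshold are assumed.

UNUSED = ipaddress.ip_network("10.99.0.0/16")  # assumed dark address space
SCAN_THRESHOLD = 20                            # distinct dark targets

touched = defaultdict(set)  # source -> set of dark addresses contacted

def observe(src, dst):
    if ipaddress.ip_address(dst) in UNUSED:
        touched[src].add(dst)
        if len(touched[src]) >= SCAN_THRESHOLD:
            return f"heuristic alert: {src} is scanning unused address space"
    return None
```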

• Vulnerability-based signatures involve predicting the characteristics of threats on the basis of the
characteristics of the vulnerability they will be seeking to exploit. In other words, once a new vulnerability is
disclosed, researchers develop and encourage deployment of signatures (and possibly implementation of other
mechanisms, such as a specific firewall rule) that anticipate the nature of yet-to-be-created threats. The
shortcomings of this approach are two-fold. First, it is based on a race condition, with the sum of the time for a
threat to be built, released, and spread stacked against the time for signatures to be generated and implemented.
This one is a toss-up, though the previously mentioned trends are increasingly tipping the scales in favor of the
bad guys. The second issue with vulnerability-based signatures is that predicting how a future threat will act is, at
least to some degree, a guessing game. The defenders may have the advantage in many cases, but the bad guys
have proven many times to be innovative, or even just lucky.

• Anomaly detection mechanisms are based on generating a model of "normal" or baseline activity and then
identifying when deviations from this model occur. Again, while potentially helpful, this approach suffers from two
significant shortcomings. First, in real-world computing environments there is generally insufficient (or at least
unreliable) correlation between "abnormal" traffic conditions and the occurrence of an actual attack. For
example, a new business application appearing on the network might be interpreted as a deviation from the
baseline that defines "normal," but this is clearly not an attack. The second issue is that anomaly detection is
based on exceeding some sort of threshold. Unless the trigger point for that threshold is zero, it can
reasonably be assumed that any corresponding attacks will have already progressed to some potentially
significant degree by the time they are detected.

The key point of all this is that even with recent technological advancements, negative-model countermeasures
have significant limitations when it comes to preventing unknown attacks.

Positive-model countermeasures
In contrast, positive-model countermeasures are exceedingly effective at preventing unknown attacks.
The approach of enumerating all legitimate traffic and then denying everything else dramatically reduces an
organization’s attack surface area by inherently eliminating exposure to all sorts of attacks – unknown as well as
known. Admittedly, performing this type of detailed enumeration of what is “legitimate” and needed to support
your networked services can be a challenging task, but on a relative basis it is far more achievable than that
required for the negative model—sort of like trying to count the stars visible to the naked eye from earth (which
is no more than a few thousand). In any event, the main point here deserves to be repeated: positive
countermeasures are highly effective because they are based on enumerating all legitimate traffic and then
automatically denying everything else. Positive-model tools never need to explicitly identify an attack; they simply
stop it as a result of their policy to "deny everything else." Negative-model tools, on the other hand, must be
able to identify the attack; otherwise they cannot stop it.
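The difference can be made concrete with a minimal sketch of a positive-model filter; the enumerated (protocol, port) pairs are assumptions standing in for an organization's legitimate traffic.

```python
# A minimal sketch of a positive-model filter: enumerate legitimate
# traffic and deny everything else, so an attack is stopped without ever
# being identified. The allowed (protocol, port) pairs are assumed.

ALLOWED = {("tcp", 80), ("tcp", 443), ("udp", 53)}  # enumerated legitimate traffic

def admit(proto, port):
    # No signature lookup: anything outside the positive model is dropped,
    # known and unknown attacks alike.
    return (proto, port) in ALLOWED

print(admit("tcp", 443))    # True  - enumerated as legitimate
print(admit("tcp", 31337))  # False - denied by default
```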

As such, positive-model countermeasures are essentially immune to the trends of faster threat
generation and propagation. However, it is important to realize that they are not immune to the trend of threats
evolving up the computing stack. Indeed, the extent of the effectiveness of positive-model countermeasures
depends on the granularity with which legitimate traffic can be specified. For example, if the degree of
granularity is such that HTTP-based traffic can only be "all allowed" or "all disallowed," then it will inevitably
be allowed for very good business reasons, such as to support an e-commerce application. But in doing so, this
will also allow other, unwanted HTTP-based traffic to pass, including the vast collection of threats that take
advantage of this very same protocol.

In fact, this is why many tools based on stateful-packet-filtering technology, despite being positive-
model countermeasures, still have only limited effectiveness against many of today’s unknown attacks. What is
needed instead are positive-model tools that incorporate an in-depth knowledge of how a wide range of
applications work. Ultimately, as long as sufficient accuracy can be maintained, it would also be beneficial for
such tools to be able to dynamically learn about new applications and the traffic that corresponds to their
accepted modes of usage within a particular organization. This would provide even further granularity for
enumerating all legitimate traffic while also reducing some of the administrative burden associated with
manually identifying and specifying what constitutes legitimate traffic.


Signature of a port 80 attack


TCP/IP port 80 is the default port for HTTP traffic. Web servers purposely listen for requests on this
port. A conduit for port 80 must exist in any firewall between the end user's browser, typically somewhere on
the Internet, and the web server for it to be able to do its job. Most firewalls do not inspect each packet looking
for particular exploits, which makes an attack on port 80 difficult to prevent before it reaches the web server.
What are the signs of a typical attack over port 80?

First, such attacks take advantage of programming bugs in code executed by the web server. This code could be
an integral part of the web server itself, or extension code that is executed to perform a particular function.
Generally, the bug is revealed using a malformed URL, such as one that is overly long or improperly
constructed. In the case of Code Red, a buffer overflow in the Microsoft Index Server extension of Internet
Information Server was exploited. Some attacks attempt to access a dummy resource in the web
server's root directory. Others target some other directory with execute privileges that is installed by default
during the web server installation, such as the /SCRIPTS directory of Microsoft's Internet Information
Server.
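A minimal sketch of screening port-80 requests for these signs follows; the length limit and watched paths are illustrative assumptions, not a complete filter.

```python
# A minimal sketch of screening web requests for the signs described
# above: overly long or improperly constructed URLs, and attempts to
# reach executable default directories such as /scripts. The limit and
# patterns are assumed for illustration.

MAX_URL_LEN = 1024
SUSPICIOUS_PATHS = ("/scripts/", "../")  # assumed defaults to watch

def looks_malicious(url):
    if len(url) > MAX_URL_LEN:  # e.g. Code Red style overflow padding
        return True
    lowered = url.lower()
    return any(p in lowered for p in SUSPICIOUS_PATHS)

print(looks_malicious("/index.html"))                 # False
print(looks_malicious("/scripts/..%255c../cmd.exe"))  # True
print(looks_malicious("/default.ida?" + "N" * 4000))  # True
```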
This method is effective because it is not unusual for analysts to install the web server using default
values, a process that is familiar from installing other software products.
Once the server is compromised, the attacker can issue root-level commands by traversing to the system
directory, changing the user context of the web server, and/or uploading trojan code. In the case of Code Red,
code was uploaded to the web server that would then attack nearby web servers using the same technique that
compromised the victim web server. In this way, the worm also created a distributed denial-of-service attack,
which made it difficult to identify and fix the victimized web servers.
An attack on port 80 is similar to a random shoplifting incident. The perpetrator comes in through the
front door just like any other customer. But before you know it, he’s taken advantage of some lapse in security
and has made off with the goods.

Conclusion:
Most anti-malware tools provide solutions for unknown attacks as patch files delivered over the Internet as
updates. Apart from that, intelligent intrusion detection/prevention systems and automated malware
detection/prevention systems provide continual updates without patch management. Port knocking is also an
alternative technique for avoiding unknown malware attacks.

Reference:
[1] Malware Forensics: Detecting the Unknown, Martin Overton, IBM ISS, UK.
[2] A Comprehensive Approach to Detect Unknown Attacks via Intrusion Detection Alerts, Jungsuk Song,
Hayato Ohba, Hiroki Takakura, Yasuo Okabe, Kenji Ohira, and Yongjin Kwon.
[3] Detecting Unknown Network Attacks using Language Models, Konrad Rieck and Pavel Laskov.
[4] Proactively Guarding Against Unknown Web Server Attacks, SANS Institute InfoSec Reading Room.
[5] Worm Poisoning Technology and Application, Bing Wu, Xiaochun Yun and Xiang Cui.
[6] Validation Methods of Suspicious Network Flows for Unknown Attack Detection, Ikkyun Kim, Daewon
Kim, Yangseo Choi, Koohong Kang, Jintae Oh and Jongsoo Jang.
[7] A New Malware Detection Method Based on Raw Information, Qiao-Ling Han, Yu-Jie Hao, Yan Zhang,
Zhi-Peng Lu, Rui Zhang.
[8] Attack Signatures and Internet Traffic Analysis, Michael Ligh.
[9] Learning Unknown Attacks – A Start, James E. Just, James C. Reynolds, Larry A. Clough, Melissa
Danforth, Karl N. Levitt, Ryan Maglich, Jeff Rowe.
[10] Modeling Unknown Web Attacks in Network Anomaly Detection, Liang Guangmin.
[11] Unknown Malware Detection Based on the Full Virtualization and SVM.


Author Biography:

Mr S. MURUGAN is working as ACTS Coordinator, CDAC, Bangalore. He received a BSc in
Physics from Madurai Kamaraj University, Madurai, in 1989, an MCA degree in Computer
Applications from Alagappa University, Karaikudi, Tamilnadu, India, and an MPhil (CS) from
Manonmaniam Sundaranar University, Tirunelveli, Tamilnadu, India. He has 18 years of teaching and
administrative experience at PG level in the field of Computer Science. He has published 6 papers in National
conferences and 2 in International conferences. His research interests include intelligent network security
algorithms and malware prevention and detection mechanisms and algorithms. He has published 8 books and
courseware in the field of Computer Science.

Prof. Dr K. KUPPUSAMY is working as an Associate Professor, Department of Computer Science and
Engineering, Alagappa University, Karaikudi, Tamilnadu, India. He received his Ph.D in Computer
Science and Engineering from Alagappa University, Karaikudi, Tamilnadu in the year 2007. He has 22 years
of teaching experience at PG level in the field of Computer Science. He has presented many papers in
National and International conferences. His areas of research interest include Information/Network Security,
Algorithms, and Neural Networks.


Intrusion Detection and Response for Web Interface Using Honeypots


Seema Verma 1, Tanya Singh 2
1 Reader [Associate Professor], Dept of Electronics, Banasthali, Rajasthan
2 Assistant Professor, Amity Institute of Information Technology, AUUP, UP
seemaverma3@yahoo.com
tanyasingh@aiit.amity.edu

Abstract

In this paper, a new risk-minimizing security model is presented for distributed intrusion detection with
automated response. Previously, systems focused on fixing the problem after it had already occurred,
whereas now systems are being designed to find the problem before it happens and prevent it. This paper
examines the reasons for security problems within and outside an enterprise network and discusses solutions to
these problems using effective network management and traffic routing of data through honeypots. A centralized
data collection system correlates data from multiple distributed honeynets, as shown in the proposed model, and
pattern reading can be done to understand the behaviour of the intruder.

Keywords: Intrusion Detection and Response System, Honeypots, Honeyd@WEB, Network Management,
Data Collection, Traffic Routing


1. Introduction
It has been observed that there is a paradigm shift in the security perspective. The scope has changed from a
technical problem to a business problem within an organization. The cost incurred to secure an enterprise is not
an expenditure but an investment. In other words, practice-based IT security has become process-based business
continuity [1] [2].
There is a steep rise in information warfare attacks and frauds. Even before a method is devised to respond to
a given vulnerability, threat or attack, the hackers or crackers have used the same technology tool to attack the
same enterprise. As quoted by CERT/CC Statistics, 2009, the total number of vulnerabilities was 6,058 in the
first three quarters of 2008. The year 2006 observed the peak in security issues, with 8,064 vulnerabilities
reported directly. The dip in the number for the year 2008 is because some of the enterprises are aware of the
risk in handling their operations and have been using proper ways to secure their networks [3] [4].
According to Cohen et al., if there is a 10-hour delay from detection, 80% of attacks succeed, and given 30
hours, almost all attacks succeed in their goal irrespective of the skill of the security manager [5].

2. What is Intrusion Detection and Response?

According to Haines et al., intrusions, or in other words attacks, are categorized into four different types [6]:

• Probe
• U2R
• R2L
• DoS

1) Probe: unauthorized intruders attempt to scan open ports (services), the OS and web server
programs. A probe is usually the first step leading to another type of attack: after attackers learn the
weaknesses of the target system, they know where to exploit vulnerabilities or even crash the whole system.
The available probe tools range from simply scanning ports to reporting everything about the hosts.

2) U2R (User to Root): an attack in which users try to gain the privileges of the root account.
This attack type can cause catastrophic damage, since root can modify and delete everything in the system.

3) R2L (Remote to Local): an attack in which intruders seek a way to gain access to the network system.
The first step toward gaining root access usually starts with obtaining a local user account, with the root
password cracked later.

4) DoS (Denial of Service): a DoS attack aims to disrupt system services, normally by flooding system
resources to deny service.
Hence, there is a need to develop and deploy a real-time automated intrusion detection and response system that
reads the motives of attackers and responds to them suitably. Intrusion detection functions include [7]:

• Monitoring and analyzing both user and system activities
• Analyzing system configurations
• Assessing system and file integrity
• Ability to recognize patterns typical of attacks
• Analysis of abnormal activity patterns
• Tracking user policy violations

3. Related Work

According to Kabiri et al., there are two major approaches for detecting intrusions: signature-based and
anomaly-based intrusion detection. In the first case, the attack pattern of the intruder is modelled, whereas
in the second case, the behaviour of the network is modelled. There is another system, specification-based
intrusion detection, in which the normal behaviour of the host is specified and consequently modelled; however,
the freedom of the host is limited [8].


According to Carver et al., there are three principal approaches for intrusion response: notification, manual
response and automatic response. Generally, most systems give notification of an intrusion through alarms,
e-mail messages, console alerts or pager activations. A manual response system depends on the capability of the
system administrator, whereas an automatic response system does not wait for the administrator but
automatically responds to intrusive behaviour through decision tables and rule-based systems [9].

Sapon Tanachaiwiwat et al. speak about adaptive intrusion response to minimize risk over multiple network
attacks. Adaptive response strategies are suggested based on alarm confidence, attack frequency, assessed risks
and estimated response costs. Dynamic security is achieved by frequent policy updates against changing attack
patterns or varying network conditions [10].

Yu-Sung Wu et al. provide a method to automatically trigger a response of appropriate severity and to bias
future responses towards those that are found to be more effective. It uses an IDAG (Intrusion Directed Acyclic
Graph) and parallel algorithms to measure response severity [11].

Much has been written about IDRS (Intrusion Detection and Response Systems); however, there is a strong
need for research in the field of intrusion response. An intrusion response system counters attacks that attempt
to compromise the integrity, confidentiality or availability of a resource [12].

According to Stakhanova et al., there are no tools which can actually detect attacks and respond to them.
Hence, honeypots are one of the tools which can be used to supplement existing methods of intrusion response.
A honeypot is a security resource whose value lies in being probed, attacked and compromised. Honeypots are
closely monitored decoys serving several purposes, including the following [13]:

1. They can distract attackers from more valuable machines on a network.
2. They provide early warning about new attacks and exploitation trends.

Honeypots have no production purpose; there is no authorized interaction with them, so any kind of
interaction with a honeypot is a probe, scan or attack. In general there are two types of honeypots: research
and production. In the past, a lot of focus has been given to research honeypots, but the production honeypots
are the ones used to directly secure an organization. Where a NIDS (Network Intrusion Detection System) fails,
honeypots succeed; they can detect even IPv6 activity, which a NIDS cannot [14] [15].

Kuwatly et al. discuss a dynamic honeypot design for intrusion detection that addresses the deployment
of virtual honeypots [13]. Wei Ren et al. have combined the honeynet and network forensics for the purpose of
network investigation [16].

4. Honeypot Approaches:

A honeypot is a program that serves no purpose in production. Even if it is accessed, it is assumed that the
access is for some reason unrelated to the site's purpose and could be a possible attack. Honeyd is one such
program; it simulates an entire environment. A second type of honeypot is the proxypot. The open proxy
honeypot allows Internet clients to connect and make requests to the proxy server for connection to Internet
hosts, even those that are behind the proxy server. This allows server traffic to be examined to detect various
threats, including distributed password account guessing, Nessus web vulnerability scans, and proxy chaining.
There is a third honeypot program, called the Deception Tool Kit, which allows the user to access and configure
each port individually. Honeypots get little traffic, but it is of very high value; the advantage is that this reduces
false positives and negatives. Honeypots are either real standalone computers or software that emulates services
or a network; some have a modified Linux kernel which allows them to play tricks with TCP traffic. The
bandwidth is limited for each class, and the amount of abusive traffic is reduced.


Honeypots are classified in accordance with the level of involvement; the different types of honeypots
are:

1. Honeyd
2. Honeynets
3. Honeytrap

Honeyd is a low-interaction honeypot. Developed by Niels Provos, Honeyd is open source and designed to run
primarily on UNIX systems (though it has been ported to Windows). Honeyd works on the concept of
monitoring unused IP space. Any time it sees a connection attempt to an unused IP, it intercepts the connection
and then interacts with the attacker, pretending to be the victim. By default, Honeyd detects and logs any
connection to any UDP or TCP port. Honeynets are an architecture: an entire network of computers designed to
be attacked. A honeynet is a highly controlled network, one where all activity is controlled and captured. Within
this network we place our intended victims, real computers running real applications. The bad guys find, attack,
and break into these systems on their own initiative. When they do, they do not realize they are within a
honeynet. All of their activity, from encrypted SSH sessions to emails and file uploads, is captured without their
knowing it. This is done by inserting kernel modules on the victim systems that capture all of the attacker's
actions. At the same time, the honeynet controls the attacker's activity. Honeynets do this using a Honeywall
gateway. This gateway allows inbound traffic to the victim systems, but controls the outbound traffic using
intrusion prevention technologies. This gives the attacker the flexibility to interact with the victim systems, but
prevents the attacker from harming other non-honeynet computers. Honeytraps, on the other hand, have the
ability to log attack incidences. However, they are also capable of gathering data on the malware itself: its
binary code, system of delivery, etc. These types of honeypots are also under the GPL license. Honeytrap has the
additional ability of listening to ports and mirroring malicious attacks back to their source after sufficient
information has been gathered [17].
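For illustration, a low-interaction listener in the spirit of Honeyd can be sketched in a few lines; the port and banner are arbitrary choices, and a real deployment would add the data control and capture described above.

```python
import socket
import datetime

# A minimal low-interaction honeypot sketch: bind to an otherwise unused
# port, log every connection attempt, and present a trivial banner. Since
# the service has no production purpose, any interaction is by definition
# a probe, scan or attack. Port and banner are assumed for illustration.

def run_honeypot(host="0.0.0.0", port=2222, banner=b"SSH-2.0-OpenSSH_5.1\r\n"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        while True:
            conn, addr = srv.accept()
            with conn:
                print(datetime.datetime.now(), "probe from", addr)
                conn.sendall(banner)    # pretend to be the victim service
                data = conn.recv(1024)  # capture the attacker's first move
                print("captured:", data[:80])

# run_honeypot()  # requires an unused port; runs until interrupted
```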

4.2 Honeypots for web interface:

There are certain attacks on the web interface, such as cross-site scripting, SNMP vulnerability
exploitation, SQL injection, backdoors and Trojans, as well as attacks on services such as mail servers. Client
honeypots search for the malicious servers that attack the web interface. Honeyd@WEB uses a combination of
deployment strategies, such as "deception ports on production systems" to simulate honeypot services,
substituted for well-known services (for instance HTTP, SMTP, POP, DNS and FTP), and "proximity decoys",
where the honeypot decoys are in close proximity to the production hosts (in the same logical subnet). No matter
how good the emulation is, a skilled attacker can eventually detect its presence. Another disadvantage is that it
will not allow the researcher to capture any additional data associated with the attack other than the initial probe.
Honeyd@WEB used in the intranet can help to detect internal attackers as well as a misconfigured
firewall [20]. The methodology used by it is to run P0f to gather the necessary network information at the
interface; it then runs the ARP daemon to listen to a specified range of unused IP addresses. Honeyd is used to
deploy the virtual honeypots, and templates are made. P0f performs a passive fingerprinting technique based on
information coming from a remote host when it establishes a connection to a system. The passive OS
fingerprinting technique is based on analysing the information sent by a remote host while performing usual
communication tasks, such as a remote party visiting a web page, connections to a machine, or a connection to a
remote host with a browser or by other standard means. In contrast to active fingerprinting with tools such as
Nmap, the process of passive fingerprinting does not generate any additional or unusual traffic; as a result this
process cannot be detected easily. Captured packets contain enough information to identify the remote OS
through the subtle differences between Transport Control Protocol/Internet Protocol (TCP/IP) stacks, and
sometimes certain implementation flaws. The honeypot Hit@Me was a multi-homed host running on the same
computer. One network interface card was used for connectivity to the Internet (the so-called external interface)
and the other served as an administrative interface. Another implementation like Honeyd is the honeytrap. The
honeytrap acts as a firewall: all recognised users are filtered to the production system, while attackers are
contained in the honeytrap. The attackers' activities are monitored, and all the information collected is routed to
another system that is protected by a firewall, to ensure the integrity of the data. The serial architecture forces
the attackers to go through the honeytrap to attack the production system, thus exposing all attackers to the
honeytrap monitoring techniques [18][19][20].


5. Proposed System Architecture:

A model system architecture is proposed using honeypots such that the LAN is monitored by a honeynet
network. The packets are reorganized into individual transport-layer connections between the machines.
Protocol parsing and analysis may help in finding data which may be hiding. The access list applied on the
border router will filter all abusive traffic. Still, there may be incoming attacks through packets. The different
threats are distinguished and automatic classification is done. The incoming traffic is re-routed in accordance
with the security policy generated by the system administrator. This technique may include changing the packet
formatting at the anti-spam router used (inside the intranet). Generally it is seen that separate firewalls are
required for databases, but by using honeypots there is no requirement for database firewalls, which reduces the
complexity of the model.

Pattern recognition mechanisms can be used to analyse the pattern of attack, and countermeasures can
suitably be taken. The use of VLANs further divides the traffic into trusted or untrusted networks.

6. Conclusion
An implementation design for adaptive, real-time automated intrusion detection is devised using honeypots and
effective network management with VLANs. A clustering system based on the Linux operating system is used.
The use of two interfaces helped to perform scanning of all the channels, while another card would transmit and
receive only on the valid channel. The use of multiple cards would alleviate the problem of having no traffic on
the simulated networks, since extra cards could be used to transmit random or simulated traffic. The use of
VLANs helped to segregate the networks into trusted and untrusted networks. The routers used in the VLANs
will be anti-spam routers and will filter all incoming and outgoing traffic. The traffic re-direction will help to
further reduce the abusive traffic on the honeypot server. This model can be used to secure a large enterprise
network.

References

1. Chapter 51, Security Technologies, Internetworking Technologies Handbook, pp. 1-12 (www.cisco.com).
2. Vachon, R. Graziani, (2008) Accessing the WAN, pp. 190-298, Pearson Education.
3. RFC 2196, the Enterprise Security Policy, IETF Site Security Handbook.
4. http://www.cert.org/resiliency/index.html
5. F. B. Cohen, "Simulating Cyber Attacks, Defences, and Consequences", available at
http://all.net/journal/ntb/simulate/simulate.html
6. J. W. Haines, R. P. Lippman, D. J. Fried, E. Trans, S. Boswell, M. A. Zissman (1999), "DARPA Intrusion
Detection Evaluation: Design and Procedures", Technical Report, MIT Lincoln Laboratory.
7. N. Stakhanova, S. Basu, J. Wong, (2007) "A Taxonomy of Intrusion Response Systems", International
Journal of Intrusion Response Systems, Volume 1, pp. 1-2.
8. P. Kabiri and Ali A. Ghorbani, (2005) "Research on Intrusion Detection and Response: A Survey",
International Journal of Network Security, Vol. 1, No. 2, pp. 84-102.
9. Curtis A. Carver Jr., "Intrusion Response Systems: A Survey", Dept. of Computer Science, Texas A&M
University, College Station, TX 77843-3112, USA.


10. Sapon Tanachaiwiwat, Kai Hwang and Yue Chen, (2002) "Adaptive Intrusion Response to Minimize Risk
over Multiple Network Attacks", ACM Transactions on Information and System Security, pp. 1-30.
11. Yu-Sung Wu, Bingrui Foo, Yu-Chun Mao, Saurabh Bagchi, Eugene Spafford, (2005) "ADEPTS: Adaptive
Intrusion Containment and Response using Attack Graphs in an E-Commerce Environment", International
Conference on Dependable Systems and Networks (DSN), Yokohama, Japan.
12. N. Stakhanova, S. Basu, J. Wong, (2007) "A Taxonomy of Intrusion Response Systems", International
Journal of Intrusion Response Systems, Volume 1, Number 1-2.
13. Iyad Kuwatly, Malek Sraj, Zaid Al Masri, and Hassan Artail, "A Dynamic Honeypot Design for Intrusion
Detection", Proceedings of the IEEE/ACS International Conference on Pervasive Services (ICPS '04),
pp. 1-10.
14. Iyatiti Mokube, Michele Adams, (2007) "Honeypots: Concepts, Approaches and Challenges", ACMSE
2007, 45th Annual ACM Southeast Regional Conference, March, pp. 321-326.
15. R. Talabis (2006), "Honeynet Learning: Discovering IT Security", Volume 38, No. 2, pp. 110-114.
16. Wei Ren, Hia Jin, (2005) "Honeynet Based Distributed Adaptive Network Forensics and Active Real Time
Investigation", ACM Symposium on Applied Computing.
17. Lance Spitzner, "Honeypots: Definitions and Value of Honeypots", http://www.tracking-hackers.com/,
2003.
18. T. R. Jackson, J. G. Levine, J. B. Grizzard, H. L. Owen, "An Investigation of a Compromised Host on a
Honeynet Being Used to Increase the Security of a Large Enterprise Network", 5th Annual IEEE
Information Assurance Workshop, 2004, pp. 9-14.
19. J. S. Bhatia, R. Sehgal, Bharat Bhushan, H. Kaur, "Multilayer Cyber Attack Detection through Honeynet",
IEEE Conference on New Technologies, Mobility, and Security, NTMS, 2008, pp. 1-5.
20. Nor Badrul Anuar, Omar Zakaria, and Chong Wei Yao, "Honeypot through Web (Honeyd@WEB): The
Emerging of Security Application Integration", Issues in Informing Science and Information Technology,
Volume 3, 2006.


Survey on Interesting Measures for Association Rules


MA. Jabbar 1, T. Nagalakshmi 2
1 MIEEE, Head of Department, Computer Science and Engineering, Aurora's Technological and Research
Institute, Hyderabad
2 Asst. Professor, Department of Computer Science and Engineering, Aurora's Technological and Research
Institute, Hyderabad

hodcse@atri.edu.in
nlakshmi.t@gmail.com

Abstract

A data mining system has the potential to generate thousands of rules, but only a small fraction of the patterns
generated would actually be of interest to any given user, so the user cannot analyze all the rules. A rule is
interesting if it is easily understood by humans, valid on new or test data with some degree of certainty,
potentially useful, and novel; an interesting pattern represents knowledge. Many different measures of
interestingness have been proposed for generating rules. In this paper we analyze different measures and
compare them using a dataset.

Keywords: Data mining, association rules, measures of interestingness, objective measure, subjective measure


1) Introduction
Among the areas of data mining, the problem of deriving associations from data has received a great deal of
attention. The problem was formulated by Agarwal et al. in 1993 [8]. Association rules are widely used in
various areas such as telecommunication networks, market basket analysis, health data mining etc. Let
I = {I1, I2, ..., Im} be a set of m distinct attributes, T be a transaction that contains a set of items such that
T ⊆ I, and D be a database with different transaction records. An association rule is an implication of the form
X => Y, where X, Y ⊆ I are sets of items called itemsets, and X ∩ Y = Ф. X is called the antecedent and Y the
consequent; the rule means X implies Y. A rule may contain more than one item in the antecedent and the
consequent.

1.1) Overview of Interesting Measures

Interestingness measures are necessary to help select and rank association rule patterns. Each interestingness
measure produces different results, and experts differ on what constitutes a good rule. The primary problem is
the selection of an interestingness measure for a given application domain. Hilderman and Hamilton established
five primary principles that a good interestingness measure should satisfy:
1) The minimum value principle: a uniform distribution is the most uninteresting.
2) The maximum value principle: the most uneven distribution is the most interesting.
3) The skewness principle: the interestingness measure for the most uneven distribution decreases when the
number of classes of tuples increases.
4) The permutation invariance principle: the interestingness of diversity is unrelated to the order of the classes
and is determined only by the distribution of counts.
5) The transfer principle: interestingness increases when a positive transfer is made from the count of one tuple
to another whose count is greater.

1.2) Interesting measures are grouped into two categories:

Objective measures: These measures are based on the structure of the discovered patterns and the statistics
underlying them. Piatetsky-Shapiro proposed rule interestingness principles to objectively evaluate the value of
patterns. These measures effectively quantify the correlation between the antecedent and the consequent:
1) RI = 0 if |A & B| = |A| |B| / N;
2) RI monotonically increases with |A & B| when other parameters are fixed;
3) RI monotonically decreases with |A| or |B| when other parameters are fixed;
where N is the total number of records and |A|, |B| are the numbers of tuples that satisfy A and B respectively.
When RI = 0, A and B are independent and the rule is not interesting.
4) RI monotonically increases with |A| for a fixed confidence factor cf.

Subjective Interesting Measures:

Objective measures may not highlight the most important patterns produced by the data mining system.
Subjective techniques generally operate by comparing a user's beliefs against the patterns discovered by the
data mining algorithm. These measures usually determine whether a pattern is actionable and/or unexpected.
Actionability refers to the organization's ability to do something useful with the discovered pattern; a pattern is
said to be interesting if it is both unexpected and actionable.


Fig. 1. Taxonomy of Interesting Measures

A number of techniques have been used in the past to define subjective domain knowledge:
1) Probabilistic measures: Bayesian approaches have been used to enable conditional probabilities to be
employed.
2) Syntactic distance measure: this measure is based on the degree of distance between the new patterns
and a set of beliefs.
3) Logical contradiction: this technique uses the statistical strength of a rule.

2) Description of Measures

The sample data used for analysis is shown in Table 1 and Table 1.1.
A. Support:
This is the primary measure of significance. Support for an association rule X => Y is the percentage of
transactions in the database that contain X ∪ Y; that is, the probability that X and Y hold together among all the
presented cases. Supp(X => Y) = Supp(X ∪ Y) = P(X ∪ Y)


Table - 1
Transaction ID    Items
T1                I1, I2, I5
T2                I2, I4
T3                I2, I3
T4                I1, I2, I4
T5                I1, I3
T6                I2, I3
T7                I1, I3
T8                I1, I2, I3, I5
T9                I1, I2, I3
T10               I4, I5

Binary format for transactional data:


Table – 1.1

TID I1 I2 I3 I4 I5
T1 1 1 0 0 1
T2 0 1 0 1 0
T3 0 1 1 0 0
T4 1 1 0 1 0
T5 1 0 1 0 0
T6 0 1 1 0 0
T7 1 0 1 0 0
T8 1 1 1 0 1
T9 1 1 1 0 0
T10 0 0 0 1 1
Total 6 7 6 3 3

Support basically uses transaction counts and is therefore called a frequency constraint. Any itemset with a
support greater than a set minimum support threshold is called a frequent itemset. Support possesses the
downward closure property, i.e., all subsets of a frequent itemset are also frequent. The disadvantage of support
is the rare item problem: items which are infrequent are pruned even though they may produce interesting and
valuable rules. Support for the different items is shown in Table 1.2.


Table – 1.2

Items Support
I1 6
I2 7
I3 6
I4 3
I5 3

B. Confidence:
The rule X => Y has a confidence c in the transaction set D if c is the percentage of transactions containing X
that also contain Y. Confidence(X => Y) = P(X and Y) / P(X).
Confidence gives different values for the rules X => Y and Y => X. Confidence is not downward closed and was
developed together with support. As support is used to find the frequent itemsets in the database, confidence
is used to generate rules from the frequent itemsets: it is used to produce rules from frequent itemsets that exceed
a minimum confidence threshold. For example, I1 ∩ I2 => I5 has confidence 2/4 = 50%, I1 ∩ I5 => I2 has
confidence 2/2 = 100%, and I2 ∩ I5 => I1 has confidence 2/2 = 100%. A problem with confidence is that it is
sensitive to the frequency of the consequent (Y) in the database: consequents with high support will
automatically produce high confidence values even if no association exists between the items.
C. Lift:
Rules mined using minimum support and confidence are filtered using their lift values. The lift measure is also
called interest. Lift(X => Y) = P(X and Y) / (P(X) · P(Y)). Its value ranges from 0 to ∞. A value close to 1
implies that X and Y are independent and the rule is not interesting; a value far from 1 indicates that the
evidence of X provides information about Y.
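As a cross-check of these definitions, the following sketch (an illustration, not part of the surveyed work) recomputes support, confidence and lift on the transactions of Table 1:

```python
# Recomputing support, confidence and lift on the transactions of Table 1,
# e.g. the worked example I1,I5 => I2 with confidence 2/2 = 100%.
# Counts are absolute; probabilities are count/N as in the definitions above.

T = [
    {"I1", "I2", "I5"}, {"I2", "I4"}, {"I2", "I3"}, {"I1", "I2", "I4"},
    {"I1", "I3"}, {"I2", "I3"}, {"I1", "I3"}, {"I1", "I2", "I3", "I5"},
    {"I1", "I2", "I3"}, {"I4", "I5"},
]
N = len(T)

def supp(items):
    return sum(1 for t in T if items <= t)  # transactions containing all items

def confidence(x, y):
    return supp(x | y) / supp(x)

def lift(x, y):
    return (supp(x | y) / N) / ((supp(x) / N) * (supp(y) / N))

print(supp({"I1"}))                      # 6, as in Table 1.2
print(confidence({"I1", "I5"}, {"I2"}))  # 1.0, i.e. 100%
print(lift({"I1"}, {"I2"}))              # close to 1 -> weak dependence
```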
D. Correlation Coefficient:
The correlation coefficient is defined as: Corr(X,Y) = Supp(X ∪ Y) / (Supp(X) · Supp(Y)). This measure is used
to mine positive and negative association rules. The value of the correlation coefficient distinguishes three
situations:

Corr(X,Y) > 1 : positive correlation
Corr(X,Y) = 1 : X and Y are independent
Corr(X,Y) < 1 : negative correlation
E. Laplace:
This measure is sometimes used in classification. It is a confidence estimator that takes support into account.
Laplace(X => Y) = (Supp(X ∪ Y) + 1) / (Supp(X) + 2)
F. Leverage:
This measure was proposed by Piatetsky-Shapiro. It measures the difference between the frequency of X and Y
appearing together in the dataset and what would be expected if X and Y were statistically independent.
Leverage(X => Y) = Supp(X ∪ Y) − Supp(X) · Supp(Y). It is also called novelty. This measure also suffers from
the rare item problem. It measures how much more counting is obtained from the co-occurrence of the
antecedent and consequent than expected under independence. It ranges within [-0.25, 0.25].
G. All-Confidence:
It was first introduced by Omiecinski in 2003. Unlike confidence, it satisfies the downward closure property and
can be computed efficiently. It is defined for an itemset as the minimum confidence of all possible rules
generated from the itemset: All-conf(I) = Supp(I) / max item Supp(I), where I is an itemset and max item
Supp(I) is the support of its most frequent item.
H. Conviction:
It is used to overcome some of the weaknesses of confidence and lift. Unlike lift, conviction is sensitive to rule
direction. This measure was introduced by Sergey Brin, Rajeev Motwani, Jeffrey Ullman and Shalom Tsur. It is
defined as:


Conviction(X => Y) = (1 − Supp(Y)) / (1 − Confidence(X => Y)). Its values are in the range [0, +∞). If X and Y
are independent it is equal to 1. Conviction is an asymmetric version of the interest factor. Conviction does not
suffer from the problem of producing misleading rules.
I. Cosine:
This measure was introduced by Tan et al. and can be viewed as a harmonized lift measure. It is used to measure
the distance between the antecedent and consequent when these can be viewed as binary vectors. It is defined as
Cosine(X => Y) = Supp(X ∪ Y) / √(Supp(X) · Supp(Y)). Its value ranges between [0, 1]. The value 1 indicates
that the antecedent and consequent coincide; when they have no overlap, its value is 0.
J. Coverage:
This measure calculates rule coverage; it shows what part of the consequent is covered by the rule. It is defined
as Coverage(X => Y) = Supp(X ∪ Y) / Supp(Y). Its value ranges from 0 to 1.
K. Ф-coefficient:
Like the cosine measure, it is used to measure the association between X and Y. It is analogous to the Pearson
correlation coefficient. It is defined as Ф(X => Y) = leverage(X => Y) / √(Supp(X) · Supp(Y) · (1 − Supp(X)) ·
(1 − Supp(Y))). Its value ranges between [-1, 1].
L. Jaccard coefficient:
This measure is used to find the distance between the antecedent and consequent of an association rule. It
measures the fraction of cases covered by both with respect to the fraction of cases covered by either of them.
Its value ranges between 0 and 1; the value 1 indicates that the antecedent and consequent cover the same cases.
It is defined as Jacc(X => Y) = Supp(X ∪ Y) / (Supp(X) + Supp(Y) − Supp(X ∪ Y))
M. Cross-support ratio:
It is defined for an itemset as the ratio of the support of the least frequent item to the support of the most
frequent item. Cross-support patterns have a ratio smaller than a set threshold; most mined patterns are
cross-support patterns, which contain frequent as well as rare items.

3) Comparison of measures
We compared the various interestingness measures discussed in the previous section on the sample data given
in Table 1. The comparison is given in Table 1.3.
Table 1.3 Comparison of various measures on sample data:
Rule     Support  Confidence  Lift  Laplace  Leverage  Corr-Coeff.  Cosine  Coverage  All-Conf.  Conviction
I1->I2 4 0.66 0.09 0.62 -0.02 0.09 0.61 0.57 0.57 0.88
I1->I3 4 0.66 0.11 0.62 0.04 0.11 0.66 0.66 0.66 1.17
I2->I3 4 0.57 0.09 0.55 -0.02 0.09 0.61 0.66 0.57 0.93
I1->I5 2 0.33 0.11 0.37 -0.01 0.11 0.47 0.66 0.33 1.04
I2->I4 2 0.28 0.09 0.33 -0.01 0.09 0.43 0.66 0.28 0.97
I2->I5 2 0.28 0.09 0.33 -0.01 0.09 0.43 0.66 0.28 0.97

4) Conclusion
No single measure can determine on its own whether a rule is interesting; a combination of different measures
has to be used to decide whether a rule is interesting or not. If the support threshold increases, fewer frequent
itemsets will be found; in the case of confidence, if the threshold value increases, rules which fall below the
minimum threshold will be pruned, so a high threshold yields fewer rules.

References

[1] Edward R. Omiecinski, (2003) "Alternative Interest Measures for Mining Associations in Databases", IEEE
Transactions, Volume 15, Issue 1.
[2] Liaquat Majeed Sheikh, et al., (2004) "Interesting Measures for Mining Association Rules", FAST-NUCES,
Lahore, IEEE Xplore.
[3] K. McGarry, (2005) "A Survey of Interestingness Measures for Knowledge Discovery", The Knowledge
Engineering Review, Vol. 20:1, 39-61.
[4] Paulo J. Azevedo and Alipio M. Jorge, (2007) "Comparing Rule Measures for Predictive Association
Rules", PKDD.
[5] Nils Tuft, Jens Frederik Agger and Jeanett Bruun, "Measures of Association and Effect".
[6] Huebner, Richard A., Norwich, (2009) "Diversity-based Interestingness Measures for Association Rule
Mining", Proceedings of ASBBS, Volume 16, Feb.
[7] Geng, L. & Hamilton, H. J., (2006) "Interestingness Measures for Data Mining: A Survey", ACM
Computing Surveys.
[8] Agarwal, R., Mielinski, T., Swami, A., (1993) "Mining Association Rules between Sets of Items in Large
Databases", ACM SIGMOD Record, 22(2), 207-216.
[9] Honglei Zhu, Zhigang Xu, (2009) "An Effective Algorithm for Mining Positive and Negative Association
Rules", IEEE.


Reliability Assessment for a Single Compressor-Multi Evaporator Type Refrigeration Plant by Employing
Boolean Function Technique

Anuj Kumar
Lecturer, Dept. of Mathematics, S.G.I.T., Ghaziabad, UP, India
Email: anuj.chdhr1@gmail.com

Abstract

The author, in this paper, has considered a refrigeration plant for the analysis of some of its important
performance measures. The author's idea is to take one standby condenser to improve the capability of the
system. This standby condenser is switched online through an imperfect switching device on failure of the main
unit. There are two expansion valves, working in parallel redundancy.
Boolean function technique has been used to formulate and solve the mathematical model. Reliability of the
system has been obtained in two cases: (i) when failure rates follow an exponential time distribution, and
(ii) when failure rates follow a Weibull time distribution. Mean time to failure of the system has also been given.
A numerical example together with a graphical illustration has been appended at the end to highlight the
important results of the study.

Key Words: Refrigeration plant, Boolean function, Algebra of Logics.


1. Introduction
This refrigeration plant has a single compressor with multi-evaporators [4]. These single compressor type refrigeration
plants can be categorized into following:
(a) Multi-evaporator type at multi-temperature.
(b) Multi-evaporator type at same-temperature.
The functioning of evaporator together with expansion valve, is to generate the constant temperature corresponding to
required state. Therefore, the different evaporators can be fixed either for same temperature or for different temperatures.
In this present study, the author’s investigations are based on category (b) “Multi-evaporator type at same temperature”.
System configuration has been shown in fig-1.

The following assumptions have been associated with this model:

(i) The state of every component, and of the whole refrigeration plant, is either good or bad.
(ii) There is no repair facility [5] for a failed unit.
(iii) The reliability of every component is known in advance.
(iv) The states of all components are statistically independent.
(v) The failure times of all components are arbitrary.
(vi) The three evaporators are connected in parallel and contribute the same temperature. This part is of 1-out-of-3: G nature.
(vii) There are two identical condensers connected in standby redundancy. The standby unit is switched online, through an imperfect switching device, after failure of the main unit.

The following notations have been used throughout this study:

x1 / x2 : State of compressor / switching device.
x3, x4 : States of condensers.
x5, x6, x7 : States of evaporators.
x8, x9 : States of expansion valves.
xi' : Negation of xi, for all i = 1, 2, ..., 9.
xi (i = 1, 2, ..., 9) : 0 in the bad state, 1 in the good state.
∧, ∨ : Conjunction, Disjunction.

2. Literature Review:
In this section, the author has carried out the mathematical formulation of the model, its solution, some particular cases and various results related to reliability estimation.

2.1 Formulation of mathematical model


The objective of the system is to contribute the same temperature, generated by the refrigeration plant, with the help of three different evaporators. Using the Boolean function technique [1], the conditions of capability of successful operation of the complex system, expressed in terms of a logical matrix [2], are as follows:

F(x1, x2, ..., x9) = [ x1 x3 x5 x8
                       x1 x3 x6 x8
                       x1 x3 x7 x8
                       x1 x3 x5 x9
                       x1 x3 x6 x9
                       x1 x3 x7 x9
                       x1 x2 x4 x5 x8      ------ (1)
                       x1 x2 x4 x6 x8
                       x1 x2 x4 x7 x8
                       x1 x2 x4 x5 x9
                       x1 x2 x4 x6 x9
                       x1 x2 x4 x7 x9 ]
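Equation (1) can be transcribed directly into executable form; a minimal sketch (assuming 1 denotes the good state and 0 the bad state of each component):

```python
def F(x1, x2, x3, x4, x5, x6, x7, x8, x9):
    """Success function of equation (1): disjunction of the twelve path conjunctions."""
    paths = [
        (x1, x3, x5, x8), (x1, x3, x6, x8), (x1, x3, x7, x8),
        (x1, x3, x5, x9), (x1, x3, x6, x9), (x1, x3, x7, x9),
        (x1, x2, x4, x5, x8), (x1, x2, x4, x6, x8), (x1, x2, x4, x7, x8),
        (x1, x2, x4, x5, x9), (x1, x2, x4, x6, x9), (x1, x2, x4, x7, x9),
    ]
    return any(all(p) for p in paths)

print(F(1, 0, 1, 0, 1, 0, 0, 1, 0))  # main-condenser path up -> True
```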

2.2 Solution of the model


By the application of the algebra of logics [3], equation (1) may be written as:

F(x1, x2, ..., x9) = x1 ∧ f(x2, x3, ..., x9)      ------- (2)

where, f(x2, x3, ..., x9) = [ x3 x5 x8            [ T1
                              x3 x6 x8              T2
                              x3 x7 x8              T3
                              x3 x5 x9              T4
                              x3 x6 x9              T5
                              x3 x7 x9       =      T6       ------- (3)
                              x2 x4 x5 x8           T7
                              x2 x4 x6 x8           T8
                              x2 x4 x7 x8           T9
                              x2 x4 x5 x9           T10
                              x2 x4 x6 x9           T11
                              x2 x4 x7 x9 ]         T12 ]
where,
T1 = x3 x5 x8 ---------- (4)

T2 = x3 x6 x8 ---------- (5)

T3= x3 x7 x8 ---------- (6)

T4 = x3 x5 x9 ---------- (7)

T5 = x3 x6 x9 ---------- (8)


T6 = x3 x7 x9 ---------- (9)

T7 = x2 x4 x5 x8 ---------- (10)

T8 = x2 x4 x6 x8 ---------- (11)

T9 = x2 x4 x7 x8 ---------- (12)

T10= x2 x4 x5 x9 ---------- (13)

T11= x2 x4 x6 x9 ---------- (14)

T12= x2 x4 x7 x9 ---------- (15)

Using the orthogonalization algorithm, equation (3) may be written as:

f(x2, x3, ..., x9) = T1
                     ∨ T1' T2
                     ∨ T1' T2' T3
                     ∨ T1' T2' T3' T4
                     ∨ T1' T2' T3' T4' T5
                     ∨ T1' T2' T3' T4' T5' T6
                     ∨ T1' T2' T3' T4' T5' T6' T7                              ------ (16)
                     ∨ T1' T2' T3' T4' T5' T6' T7' T8
                     ∨ T1' T2' T3' T4' T5' T6' T7' T8' T9
                     ∨ T1' T2' T3' T4' T5' T6' T7' T8' T9' T10
                     ∨ T1' T2' T3' T4' T5' T6' T7' T8' T9' T10' T11
                     ∨ T1' T2' T3' T4' T5' T6' T7' T8' T9' T10' T11' T12

Now,

T1' = x3' ∨ x3 x5' ∨ x3 x5 x8'

so that

T1' T2 = [x3' ∨ x3 x5' ∨ x3 x5 x8'] ∧ (x3 x6 x8) = x3 x5' x6 x8      --- (17)

Similarly,

T1' T2' T3 = x3 x5' x6' x7 x8      --- (18)

T1' T2' T3' T4 = x3 x5 x8' x9      --- (19)


T1' T2' T3' T4' T5 = 0      --- (20)

T1' T2' T3' T4' T5' T6 = 0      --- (21)

T1' T2' T3' T4' T5' T6' T7 = x2 x3' x4 x5 x8      --- (22)

T1' T2' T3' T4' T5' T6' T7' T8 = x2 x3' x4 x5' x6 x8      --- (23)

T1' T2' T3' T4' T5' T6' T7' T8' T9 = x2 x3' x4 x5 x6' x7 x8
                                     ∨ x2 x3' x4 x5' x6' x7 x8
                                     ∨ x2 x3' x4 x5' x6 x7 x8      --- (24)

T1' T2' T3' T4' T5' T6' T7' T8' T9' T10 = x2 x3' x4 x5 x8' x9
                                          ∨ x2 x3' x4 x5 x8 x9      --- (25)

T1' T2' T3' T4' T5' T6' T7' T8' T9' T10' T11 = 0      --- (26)

T1' T2' T3' T4' T5' T6' T7' T8' T9' T10' T11' T12 = x2 x3' x4 x5' x6' x7 x9
                                                    ∨ x2 x3' x4 x5' x6 x7 x9
                                                    ∨ x2 x3' x4 x5 x6' x7 x9      --- (27)

Making use of equations (17) through (27) in equation (16), we obtain:

f(x2, x3, ..., x9) = x3 x5 x8
                     ∨ x3 x5' x6 x8
                     ∨ x3 x5' x6' x7 x8
                     ∨ x3 x5 x8' x9
                     ∨ x2 x3' x4 x5 x8
                     ∨ x2 x3' x4 x5' x6 x8
                     ∨ x2 x3' x4 x5 x6' x7 x8
                     ∨ x2 x3' x4 x5' x6' x7 x8
                     ∨ x2 x3' x4 x5' x6 x7 x8
                     ∨ x2 x3' x4 x5 x8' x9
                     ∨ x2 x3' x4 x5 x8 x9
                     ∨ x2 x3' x4 x5' x6' x7 x9
                     ∨ x2 x3' x4 x5' x6 x7 x9
                     ∨ x2 x3' x4 x5 x6' x7 x9      ---- (28)


In view of equation (28), equation (2) implies:

F(x1, x2, ..., x9) = x1 x3 x5 x8
                     ∨ x1 x3 x5' x6 x8
                     ∨ x1 x3 x5' x6' x7 x8
                     ∨ x1 x3 x5 x8' x9
                     ∨ x1 x2 x3' x4 x5 x8
                     ∨ x1 x2 x3' x4 x5' x6 x8
                     ∨ x1 x2 x3' x4 x5 x6' x7 x8
                     ∨ x1 x2 x3' x4 x5' x6' x7 x8
                     ∨ x1 x2 x3' x4 x5' x6 x7 x8
                     ∨ x1 x2 x3' x4 x5 x8' x9
                     ∨ x1 x2 x3' x4 x5 x8 x9
                     ∨ x1 x2 x3' x4 x5' x6' x7 x9
                     ∨ x1 x2 x3' x4 x5' x6 x7 x9
                     ∨ x1 x2 x3' x4 x5 x6' x7 x9      --- (29)

Since the R.H.S. of equation (29) is a disjunction of pairwise disjoint conjunctions, the reliability of the refrigeration plant is given by:

RS = Pr{F(x1, x2, ..., x9) = 1}

   = R1 [R3R5R8 + R3R6R8(1-R5) + R3R7R8(1-R5)(1-R6) + R3R5R9(1-R8)
      + R2R4R5R8(1-R3) + R2R4R6R8(1-R3)(1-R5) + R2R4R5R8R9(1-R3)
      + R2R4R5R9(1-R3)(1-R8) + R2R4R7R8(1-R3)(1-R5)(1-R6)
      + R2R4R5R7R8(1-R3)(1-R6) + R2R4R6R7R8(1-R3)(1-R5)
      + R2R4R7R9(1-R3)(1-R5)(1-R6) + R2R4R6R7R9(1-R3)(1-R5)
      + R2R4R5R7R9(1-R3)(1-R6)]      --- (30)

where R1, R2, ..., R9 are the reliabilities of the components of the complex system corresponding to the states x1, x2, ..., x9, respectively.
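For experimentation, equation (30) can be transcribed directly into a function; the sketch below assumes illustrative component reliabilities of 0.9 and is not part of the original analysis:

```python
def rs(R):
    """Direct transcription of equation (30); R is a list padded so that R[i] = Ri."""
    _, R1, R2, R3, R4, R5, R6, R7, R8, R9 = R
    return R1 * (
        R3*R5*R8 + R3*R6*R8*(1-R5) + R3*R7*R8*(1-R5)*(1-R6) + R3*R5*R9*(1-R8)
        + R2*R4*R5*R8*(1-R3) + R2*R4*R6*R8*(1-R3)*(1-R5) + R2*R4*R5*R8*R9*(1-R3)
        + R2*R4*R5*R9*(1-R3)*(1-R8) + R2*R4*R7*R8*(1-R3)*(1-R5)*(1-R6)
        + R2*R4*R5*R7*R8*(1-R3)*(1-R6) + R2*R4*R6*R7*R8*(1-R3)*(1-R5)
        + R2*R4*R7*R9*(1-R3)*(1-R5)*(1-R6) + R2*R4*R6*R7*R9*(1-R3)*(1-R5)
        + R2*R4*R5*R7*R9*(1-R3)*(1-R6)
    )

print(rs([None] + [0.9] * 9))  # illustrative: every component at reliability 0.9
```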

2.3 Some particular cases

(i) If the reliability of every component of the complex system is R:

Equation (30) yields:
RS = 2R^8 − R^7 − 7R^6 + 6R^5 + R^4      --- (31)

(ii) When failure rates follow the Weibull distribution:
Let the failure rate of every component of the complex system be a; then the reliability of the complex system at an instant t is given by:
RSW(t) = 2e^(−8at^p) − e^(−7at^p) − 7e^(−6at^p) + 6e^(−5at^p) + e^(−4at^p)      --- (32)
where p is a positive parameter.

(iii) When failure rates follow the exponential time distribution:
The exponential distribution is a particular case of the Weibull distribution for p = 1 and is very useful in various practical problems. Therefore, the reliability of the complex system as a whole at an instant t is given by:
RSE(t) = 2e^(−8at) − e^(−7at) − 7e^(−6at) + 6e^(−5at) + e^(−4at)      --- (33)

Again, the expression for the M.T.T.F. in this case is given by:

M.T.T.F. = ∫0^∞ RSE(t) dt

         = (1/a) [1/4 − 1/7 − 7/6 + 6/5 + 1/4]

         = 0.390476 / a      --- (34)
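Expressions (32)-(34) are straightforward to evaluate numerically; a minimal sketch using the settings of Section 2.4 (a = 0.001, p = 2):

```python
import math

a, p = 0.001, 2  # failure-rate parameter and Weibull shape, as in Section 2.4

def rsw(t):
    """Equation (32): Weibull case, with exponent a * t**p."""
    u = a * t**p
    return 2*math.exp(-8*u) - math.exp(-7*u) - 7*math.exp(-6*u) + 6*math.exp(-5*u) + math.exp(-4*u)

def rse(t):
    """Equation (33): exponential case (p = 1)."""
    u = a * t
    return 2*math.exp(-8*u) - math.exp(-7*u) - 7*math.exp(-6*u) + 6*math.exp(-5*u) + math.exp(-4*u)

def mttf(a):
    """Equation (34): term-by-term integration of (33) gives (2/8 - 1/7 - 7/6 + 6/5 + 1/4)/a."""
    return (2/8 - 1/7 - 7/6 + 6/5 + 1/4) / a  # = 0.390476 / a

for t in range(0, 11):
    print(t, round(rse(t), 6), round(rsw(t), 6))   # points underlying Fig. 2
print(mttf(0.001))                                  # ~390.476, cf. Fig. 3
```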

2.4 Numerical computation

Setting a = 0.001, p = 2 and t = 0, 1, 2, ... in equations (32) and (33), and a = 0.001, 0.002, ... in equation (34), one can sketch the graphs given in Figs. 2 and 3, respectively.

3. Conclusion
In this paper, the author has considered a single compressor, multi-evaporator type refrigeration plant in order to evaluate its performance measures. To improve the system's capability, one standby condenser has been used, with an imperfect switching device bringing the standby unit online. The Boolean function technique has been used to formulate and solve the mathematical model [3]. Some particular cases, i.e., when failure rates follow the Weibull and exponential time distributions [5], have been shown at the end to improve the practical utility of the model. The reliability and M.T.T.F. of the refrigeration plant have been obtained. A numerical example with graphical illustration has been appended to highlight the important results.
Fig. 2 shows that the reliability of the complex system decreases in a uniform manner in the case of the exponential distribution.
Fig. 3 shows that the M.T.T.F. decreases catastrophically in the beginning, but thereafter it decreases in a smooth way as the failure rate a is increased.


[Fig. 1: System configuration - compressor (x1), imperfect switching device (x2), main and standby condensers (x3, x4), Evaporators I-III (x5, x6, x7), and expansion valves (x8, x9)]


[Fig. 2: Time vs Reliability - RSE(t) and RSW(t) plotted against time t]

[Fig. 3: Failure rate a vs MTTF - MTTF plotted against failure rate a = 0.001, 0.002, ..., 0.011]

References

[1] Cluzeau, T., Keller, J. and Schneeweiss, W. (2008): "An Efficient Algorithm for Computing the Reliability of Consecutive-k-Out-Of-n:F Systems", IEEE Trans. on Reliability, Vol. 57(1), 84-87.
[2] Gupta, P. P. and Agarwal, S. C. (1983): "A Boolean Algebra Method for Reliability Calculations", Microelectron. Reliability, Vol. 23, 863-865.
[3] Lai, C. D., Xie, M. and Murthy, D. N. P. (2005): "On Some Recent Modifications of Weibull Distribution", IEEE Trans. on Reliability, Vol. 54(4), 563-569.
[4] Pandey, D. and Jacob, Mendus (1995): "Cost Analysis, Availability and MTTF of a Three State Standby Complex System under Common-Cause and Human Failures", Microelectron. Reliab., U.K., Vol. 35, 91-95.
[5] Zhimin, He., Han, T. L. and Eng, H. O. (2005): "A Probabilistic Approach to Evaluate the Reliability of Piezoelectric Micro-Actuators", IEEE Trans. on Reliability, Vol. 54(1), 44-49.


A Framework for Agent Based Warfare Modeling and Simulation

Kamal Mirzaie and Mehdi N. Fesharaki


Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
k.mirzaie@srbiau.ac.ir, fesharaki@mut.ac.ir

Abstract
Agent-based modeling and simulation is a field of research used to gain insight into military operations. A framework is needed to design the architecture of warfare simulators, and it helps to produce high-quality simulation software. In this paper, we propose a framework for designing the architecture of warfare simulators. The proposed framework is based on agent, network, and process; it determines the main concepts and components of a simulator, and therefore modeling and simulation can be carried out systematically with it.
Keywords: Agent-Based Simulation (ABS), Agent-Based Distillation (ABD), Capra conceptual framework, Complex Adaptive System (CAS), Multi-Agent Systems (MAS)


1. Introduction
Agent-Based Simulation (ABS) to gain insight into military operations is an active field of research, usually known as Agent-Based Distillation (ABD). Researchers in this field are trying to extract the appropriate concepts and mechanisms in order to create simulation models that describe the behavior of warfare. These simulations explore abstract high-level scenarios with different parameters in a plan or operation [1, 2, 3].
With the advent of complex systems theory, researchers have realized that warfare modeling based on complex systems theory is a suitable approach. In this approach, warfare is characterised by nonlinear behaviours, and an engagement is a Complex Adaptive System (CAS), which adapts, evolves and co-evolves with its environment [2, 4, 5]. A complex system is a dynamical system composed of many nonlinearly interacting parts whose overall behaviour stems from some basic set of underlying principles; in other words, emergent behaviour appears as the result of the interactions between the parts [6, 7].
Living systems such as cells, organizations, society, and the earth, in which there are the concepts of life and evolution, are all examples of complex systems. In modern biology, the main feature of living systems is life, and the life process is intertwined with cognition; in this view, all actions and reactions of living systems with the environment are cognitive. In living systems, there are interactions between components for survival and evolution. A living system is a complex system that adapts to its surrounding environment throughout its life for survival and evolution. Adaptation means how a system responds to the changing environment and adapts to it [8].
The modeling of complex systems usually leads to simulation software in which researchers can simulate and test their models and theories [9]. Multi-agent systems (MAS) are a suitable approach for complex systems modeling [7]. Today, there are different methods, frameworks, architectures and methodologies that use the concept of MAS.
Since the design and development of Agent-Based Distillation software amounts to modeling and simulating warfare as a complex system, in this paper we propose a framework for designing and developing ABDs. The proposed framework is based on the Capra conceptual framework, in which any complex phenomenon can be discussed and studied from four perspectives: structure, pattern, process and meaning.
The rest of the paper is organized as follows. First, Agent-Based Distillations (ABDs) are introduced. Then the Capra conceptual framework is explained. After that, the proposed framework for agent-based modeling and simulation is described. Having explained the proposed framework, we use it to describe warfare modeling and simulation. Conclusions are finally drawn.
2. Agent Based Distillation
Agent-based distillations (ABDs) represent an emerging technology within the field of warfare simulation. Researchers have designed and developed ABDs to enable analysts to gain insight into different scenarios. ABD emphasizes the concept of embodiment of the agent in the environment [10] and enables analysts to study emergent behaviour in warfare.
Generally, distillations can be defined as simulators attempting to model warfare scenarios by implementing a small set of rules that allow agents to adapt within each scenario [11, 12]. Distillations are far less detailed than traditional simulations and rely on sensible global behavior emerging naturally, because exploratory analysis requires the development of smaller, low-fidelity, abstract complex-system simulations, which may help analysts develop and verify concepts and principles, and answer "what-if" questions. This simplicity gives distillations the characteristics of speed, transparency, ease of configuration, and the ability to use the systems with minimal training. Today, there are a number of ABDs such as ISAAC [13], EINSTein [14], MANA [15], CROCADILE [16], BactoWars [17], WISDOM-I and WISDOM-II [18].
ISAAC is a skeletal agent-based model of land combat. It has been designed and developed for analyzing nonlinear dynamics in land combat by identifying and exploring emergent behaviours on the battlefield.
EINSTein extends the simulator ISAAC. It provides a user-friendly GUI for the system, which makes it easier for a user to set up scenarios and view what is happening during the simulations.
The main ideas of the simulator MANA have been derived from ISAAC and EINSTein. Although MANA has parameters similar to those of ISAAC and EINSTein, new concepts such as situation awareness, a communication model and a terrain map are introduced.
CROCADILE is an agent-based distillation designed to overcome the limitations on generality and fidelity in ISAAC, EINSTein and MANA.
BactoWars focuses on problem representation. It adopts modern artificial intelligence techniques such as semantic networks and frame theory, and software engineering techniques such as design patterns, to build agents in an ABD.
WISDOM-I was first proposed in 2004 and is similar to other existing ABDs. It improved the movement algorithm and used a relational database to store information during simulations. Version II of WISDOM has been developed based on a novel agent architecture called the Network Centric Multi-Agent Architecture (NCMAA). WISDOM-II allows analysts to study the dynamics of warfare easily, especially for NCW. WISDOM-II not only uses the spirit of CAS in explaining its dynamics, but also centers its design on fundamental concepts in CAS.

3. Capra Conceptual Framework


Living systems, which can be biological or social, are regarded by many researchers as complex systems. Hence, theories and frameworks for living systems can be utilized for understanding and modeling complex systems. In a living system, there are interactions among components for survival and evolution; likewise, a complex system has many components, and the interaction between them forms an emergent complex behavior.
In the study of living systems, we can use the Capra conceptual framework. Capra has presented a unique framework for understanding biological and social phenomena from four perspectives. Three out of the four perspectives are about life, and the fourth one is meaning. The first perspective is pattern: the organization pattern of a living system defines the types of relations among the system components. Structure, the second perspective, is defined as the material embodiment of the system pattern; the structure of a living organism evolves in interaction with its environment. The third perspective is the life process, integrating the pattern and the structure perspectives (Figure 1).
When we try to extend this understanding of cognition to social life, we immediately encounter many phenomena - rules of behavior, values, goals, strategies, intentions, designs and power relations - that often have no role in the non-human world but are essential for human social life. For extending life to the social domain, the meaning perspective is added to the other three (Figure 1). Thus, we can understand social phenomena from four perspectives: pattern, structure, process, and meaning. Culture, for instance, has created and preserved a network (pattern) of communication (process) with embedded meaning. The material embodiment of culture includes art and literary masterpieces (structure) that transfer meaning from one generation to another [8].
[Figure 1: Four perspectives of the Capra conceptual framework - pattern, structure, process, and meaning]

According to the Capra conceptual framework, any complex phenomenon can be discussed and studied from four perspectives. In order to bring these four perspectives closer to the terminology of complex systems modeling, we replace "pattern" with "network" and "structure" with "agent". Therefore, the Capra conceptual framework is redefined in four perspectives: network, agent, process, and meaning (Figure 2).
[Figure 2: Redefinition of the Capra conceptual framework - network, agent, process, and meaning]

4. The Proposed Framework


If, in ABD development, agents are programmed without an underlying, theoretically sound software architecture, it is very difficult to verify and validate them. The proposed framework can be used for software architecture design. According to the Capra conceptual framework, to create the desired meaning, the simulator architecture is designed based on the three perspectives of agent, network and process, and we therefore develop the ABD based on these three perspectives. The three steps in the proposed framework are summarized as follows:
o First step: agent design using the main properties
o Second step: determining and modeling the networks necessary to describe the relations and communication between agents
o Third step: process design and modeling.


4.1 Agents
Agents that model combatants in a battlefield simulation can be represented by vectors of properties. These vectors include personality and characteristic properties (Figure 3). The tendency to approach friends and to attack enemies are examples of agents' personality properties. The characteristic properties can be the radius of vision, firing range, communication range, and rank of agents; a minimal sketch of such a property vector follows Figure 3.
[Figure 3: Personality and characteristic properties of agents]
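A minimal sketch of such a property-vector agent (the field names and values are illustrative assumptions, not those of any particular ABD):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    # Personality properties (illustrative): weights driving behavior decisions
    attraction_to_friends: float
    aggression_to_enemies: float
    # Characteristic properties (illustrative): capabilities and rank
    vision_radius: float
    firing_range: float
    communication_range: float
    rank: int
    side: str           # "blue" or "red"
    alive: bool = True

blue_soldier = Agent(0.6, 0.9, 8.0, 5.0, 12.0, rank=1, side="blue")
```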


4.2 Networks
What makes agents powerful in multi-agent systems are the relations between them: agents maintain relations and communicate with each other in order to achieve a given goal. The relationships and interactions between agents depend on the type of agent, and these relations form networks of agents.
Agents interact with each other in a battlefield simulator, and graphs are suitable for modeling these interactions. Since vision, communication, hierarchy, friendship, enmity, and engagement have been determined as the main relations between agents in the ABD, six graphs model them in the proposed method (Figure 4); a sketch of one possible realization follows the figure.

[Figure 4: Six graphs for modeling the necessary networks - hierarchy, vision, communication, friendship, enmity, and engagement]
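One possible realization of the six relation graphs, sketched under the simplifying assumption that every relation is stored as a symmetric adjacency-set structure keyed by agent id (in a real ABD the hierarchy graph, at least, would be directed):

```python
from collections import defaultdict

RELATIONS = ("hierarchy", "vision", "communication",
             "friendship", "enmity", "engagement")

class BattlefieldNetworks:
    """Six graphs over the same agent population, one per relation type."""
    def __init__(self):
        self.graphs = {rel: defaultdict(set) for rel in RELATIONS}

    def link(self, rel, a, b):
        # Simplification: all relations treated as symmetric here
        self.graphs[rel][a].add(b)
        self.graphs[rel][b].add(a)

    def neighbors(self, rel, a):
        return self.graphs[rel][a]

nets = BattlefieldNetworks()
nets.link("vision", 1, 2)            # agent 1 sees agent 2 (and vice versa)
print(nets.neighbors("vision", 1))   # {2}
```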

4.3 Process
The process perspective combines and intertwines the network and agent perspectives to create meaning. It should be noted that the main cycle of the simulator is based on a process; this cycle determines the main stages of the simulation.
Processes such as OODA, CECA [19], C-OODA [20] and M-OODA [21] can be used.
In developing the simulator using the proposed method, the OODA process was used. As OODA is simple and comprehensible, it has often been used as a common and accepted decision-making model. The OODA utilized in this paper, however, is not a simple linear process; rather, it has intermediary feedback loops (Figure 5).

[Figure 5: OODA loops - Observe, Orient, Decide, Act with intermediary feedback loops]

In the OODA process (Figure 5), the feedback loops give the simulator its adaptation and learning capability.
The main stages of the simulator loop, based on the OODA process, are defined as the four stages of Observe, Orient, Decide, and Act (Figure 6); a compact sketch of this main loop is given after the figure.

[Figure 6: The main stages of the simulator - initialize agents; Observe (hierarchy, vision and communication nets); Orient (friendship and enmity nets); Decide (engagement net); Act (fire and move); driven by a clock, with a learning-and-adaptation feedback into the cycle]
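A compact, runnable sketch of such an OODA-style main loop (the kill probability, ranges, initial positions and movement rule are illustrative assumptions, not the actual mechanics of the simulator):

```python
import math, random

def make_agent(side, x, y):
    # Illustrative property vector; vision/firing ranges are assumptions
    return {"side": side, "x": x, "y": y, "alive": True,
            "vision": 150.0, "firing": 5.0}

def dist(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def simulate(agents, ticks=100):
    """Hypothetical main loop following the Observe/Orient/Decide/Act stages of Figure 6."""
    for _ in range(ticks):
        living = [a for a in agents if a["alive"]]
        for a in living:
            if not a["alive"]:           # may have been killed earlier this tick
                continue
            # Observe: enemies within the vision radius (vision graph)
            seen = [b for b in living if b["alive"] and b["side"] != a["side"]
                    and dist(a, b) <= a["vision"]]
            if not seen:
                continue
            # Orient: classify and pick the nearest visible enemy
            target = min(seen, key=lambda b: dist(a, b))
            # Decide + Act: fire if in range, otherwise move toward the target
            if dist(a, target) <= a["firing"]:
                if random.random() < 0.5:   # assumed single-shot kill probability
                    target["alive"] = False
            else:
                a["x"] += (target["x"] - a["x"]) * 0.1
                a["y"] += (target["y"] - a["y"]) * 0.1
    return agents

agents = ([make_agent("blue", random.uniform(0, 30), random.uniform(0, 30))
           for _ in range(30)]
          + [make_agent("red", random.uniform(70, 100), random.uniform(70, 100))
             for _ in range(30)])
simulate(agents)
```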


5. Evaluation of Scenarios Using the Battlefield Simulator
Simulators help analysts to evaluate scenarios with different parameters. A scenario is a combination of various paths that lead to probable futures; it is modeled with initial parameters and set-ups as well as decision-making rules.
To evaluate scenarios, we need evaluation functions such as the number of living agents after each simulation run, the loss-exchange ratio, and spatial and temporal functions. In this paper, we evaluate three scenarios related to the firing and adaptive movement of agents using the simulator.
In order to evaluate these three scenarios, equation (1) is used:

Eval(Strategy) = (#Blue + 1) / (#Red + 1)      (1)

In (1), #Blue is the number of blue living agents and #Red the number of red living agents at the end of the simulation run. If the value of (1) is higher than 1, blue living agents outnumber red living agents, and vice versa.
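Continuing the sketch above, equation (1) is direct to compute at the end of a run:

```python
def eval_strategy(agents):
    """Equation (1): (#Blue + 1) / (#Red + 1) over living agents at the end of a run."""
    blue = sum(1 for a in agents if a["alive"] and a["side"] == "blue")
    red = sum(1 for a in agents if a["alive"] and a["side"] == "red")
    return (blue + 1) / (red + 1)   # > 1 means blue survivors outnumber red

print(eval_strategy(agents))        # 'agents' from the previous sketch
```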

Figure 7: Battlefield simulator snapshot

Scenario 1: In this scenario, the firing of blue agents is coordinated with other blue agents in such a way that each blue agent fires at one red agent. Red agents use the strategy of firing at the nearest target (without coordination with other red agents), which is a common strategy in most simulations. The scenario is set up with 30 blue and 30 red agents on a 100x100 two-dimensional grid. Figure 8 shows the result of the scenario simulation with 100 iterations. The result implies that the blue strategy (coordination) is not more efficient than the red strategy (without coordination). That is because blue and red agents can spread well in the 100x100 grid; in this case, the strategy of firing at the nearest target (the red strategy) is more efficient. A blue agent that uses the coordinated firing strategy can be killed by red agents that are closer to it, while that blue agent may not fire at any red agent because of the coordinated strategy.


Figure 8: Evaluation function for 100 iterations in scenario 1

Scenario 2: This scenario is set up with a 35x35 two-dimensional grid; the other settings are the same as those in scenario 1. Figure 9 shows that in this scenario, coordinated firing (the blue strategy) is more efficient than firing at the nearest target (the red strategy). In this case, agents are less spread out than the agents in scenario 1. The coordinated firing strategy of blue agents eliminates repeated firing and enables blue agents to shoot more red agents. On the other hand, with the red agents' strategy of firing at the nearest target, some blue agents may be hit by several red agents (repeated firing).

Scenario 3: In the previous scenarios, both forces moved towards each other, and their difference was in the firing strategy. In this scenario, blue agents adapt the direction of their movement according to the movement of neighboring agents in the previous state. Adaptation means moving in the direction in which most neighboring living agents have moved. The neighboring agents are determined by the vision graph (Figure 4). This scenario is the same as scenario 1; the only difference is the movement strategy. The movement strategy of blue agents is adaptive, while red agents move towards the opposing (blue) agents. The simulation results with 100 iterations in Figure 10 reveal that the agents' adaptive movement is more efficient, whereas scenario 1 was less efficient due to the lack of adaptive movement of the blue agents.


Figure 10: Evaluation function for 100 iterations in scenario 3

6. Conclusion and Future Research

The common approach to the evaluation of different scenarios is modeling and simulation, which evaluates the performance of scenarios with different parameters. In this paper, a battlefield simulator was designed based on the three perspectives of agent, network, and process using the proposed framework; designing the simulator with this approach amounts to applying a specific methodology to battlefield simulation. Simulation of the three scenarios showed that the strategy of firing at the nearest target is more efficient in a 100x100 grid, while the coordinated firing strategy is more efficient in a 35x35 grid. Moreover, if agents with the coordinated firing strategy in the 100x100 grid also have adaptive movement, appropriate efficiency is obtained. In future research, other capabilities and dynamics of the battlefield can be simulated and evaluated with different scenarios.

References
[1] Ilachinski, A., (2004), Artificial War: Multiagent-Based Simulation of Combat, Singapore, World Scientific
Publishing Company.
[2] Yang, A., (2006), "A Networked Multi-Agent Combat Model: Emergence Explained", PhD Thesis, University of New South Wales, Australian Defence Force Academy.
[3] Yang, A., Abbass, H. A. and Sarker, R., (2006), "Characterizing Warfare in Red Teaming", IEEE Transactions on Systems, Man, Cybernetics, Part B, Vol. 36 No. 1, pp. 268-285, 2006.
[4] Smith, E. A., (2002), Effects Based Operations: Applying Network Centric Warfare in Peace, Crisis and War, CCRP Publication.
[5] Lauren, M. K., (2000), “Modelling combat using fractals and the statistics of scaling systems”, Military Operations
Research 5, pp. 47–58.
[6] Miller, J. H. and Page, S. E., (2007), Complex Adaptive Systems: An Introduction to Computational Models of Social
Life, Princeton University Press.
[7] Yang, A., and Shan, Y., (2008), Intelligent Complex Adaptive Systems, IGI Publishing.
[8] Capra, F., (2002), The Hidden Connections: Integrating The Biological, Cognitive, And Social Dimensions Of Life
Into A Science Of Sustainability, Doubleday, 2002.
[9] Gilbert, N., and Troitzsch, K. G., (2005), Simulation for the Social Scientist, Open University Press, McGraw-Hill
Education, Second Edition.
[10] Brooks, R. A., (1991), “Intelligence Without Reason”, Proceedings of 12th Int. Joint Conf. on Artificial Intelligence,
Sydney, Australia, pp. 569–595.
[11] Easton, A., Barlow, M., (2002), “CROCADILE: An Agent-based Distillation System Incorporating Aspects of
Constructive Simulation”, Available online at: www.siaa.asn.au/get/2395361059.pdf
[12] Sprague, K. and Dobias, P., (2008), “Behaviour in Simulated Combat: Adaptation and Response to Complex Systems
Factors”, Defence R&D Canada, Centre for Operational Research and Analysis, DRDC CORA TM 2008-044.
[13] Ilachinski, A., (1997), “Irreducible semi-autonomous adaptive combat (isaac): An artificial life approach to land
combat”, Research Memorandum, CRM 97-61, Center for Naval Analyses, Alexandria.
[14] Ilachinski, A., (1999), “Enhanced ISAAC Neural Simulation Toolkit (EINSTein): An Artificial-Life Laboratory for
Exploring Self-Organized Emergence in Land Combat (U) ”,Beta-Test User’s Guide, CIM 610.10, Center for Naval
Analyses.
[15] Lauren, M. K. and Stephen, R.T., (2002), “MANA: Map Aware Non-uniform Automata, A New Zealand approach to
scenario modeling”, Journal of Battlefield Technology, Vol. 5 No. 1, pp. 27-31.

[16] Barlow, M. and Easton, A., (2002), “Crocadile: an open, extensible agent-based distillation engine”, Information and
Security Vol. 8, pp. 17–51.
[17] White G, (2004), “The Mathematical Agent-A Complex Adaptive System Representation in BactoWars”, Proceedings
from the Inaugural Complex Adaptive Systems for Defence, Adelaide, Australia.
[18] Yang, A., Abbass, H. A. and Sarker, R. A., (2005), “WISDOM-II: A Network Centric Model for Warfare”, KES, Vol.
3, pp. 813-819.
[19] Bryant, D. J., (2003), "Critique, Explore, Compare, and Adapt (CECA): A new model for command decision making", Defence R&D Canada - Toronto, Ontario, Technical Report DRDC Toronto TR 2003-105.
[20] Breton, R., (2008), "The modeling of three levels of cognitive controls with the Cognitive-OODA loop framework", Defence R&D Canada - Valcartier, Technical Report, DRDC Valcartier TR 2008-111.
[21] Breton, R. and Rousseau, R., (2008), "The M-OODA: Modelling of the OODA loop as a modular functional system",
Defence R&D Canada – Valcartier, Technical Memorandum, DRDC Valcartier TM 2008-130.


Advances and Issues in Web Personalization Techniques


A.Jebaraj Ratnakumar

Professor and Head, Department of Computer Science and Engineering,


Apollo Engineering College, Chennai, Tamil Nadu, India
E-Mail : ajrk_jeba@yahoo.co.in

Abstract

Web personalization is a process of gathering and storing information about site visitors, analyzing that information, and, based on the analysis, delivering the right information to each user at the right time. Commercial websites increasingly employ personalization to help retain customers and reduce information overload. Web personalization constitutes the mechanisms and technologies necessary to customize information access for the end-user, and it is one of the most important trends in data processing and business. The objectives of this article are as follows: i) to show how personalization can be realized by employing planning and reasoning about actions; ii) to show how personalization based on adaptive hypermedia can be realized and generalized to achieve re-usable, encapsulated personalization functionality for the Semantic Web; iii) to show how rule-based user modeling for the Semantic Web can be achieved.

Keywords : Web Mining, Semantic Web, Personalization, Hypermedia, User Profiling


1 Introduction

The objective of personalization for the delivery of personalized information is fairly straightforward: to deliver information that is relevant to an individual or a group of individuals, in the specified format and layout and at the specified time intervals. When information sources are updated, it is important that the updated information be delivered to individuals. Updated information may be delivered immediately upon updates to the information sources, or based on a schedule specified by the individuals or by a system default.

A second origin of the term personalization is the concept of one-to-one marketing in which a business does marketing
tailored to a group of individual customers rather than to the entire population of its geographical marketing territory. The
motivation for one-to-one marketing is to increase the revenue and decrease the loss for a business by understanding the
needs, habits and lifestyle, preferences, likes and dislikes of its customers, and addressing customers' individual needs and
preferences. The idea is that by understanding the needs, attitudes, and preferences of its customers, a business may tailor
different marketing campaigns, and pricing and distribution strategies for different categories of customers, thereby
becoming more successful in acquiring new customers, retaining existing customers, and selling additional goods and
services to existing customers. Further, a business may be able to reduce financial losses and expenses by cutting the cost of
acquiring new customers, preventing significant defection of existing customers, and detecting and preventing risky
business transactions with some of the customers. There is no doubt that web personalization has attracted more visits from users to the Internet, and it helps to guide users down the path of their interests more effectively.

2. Personalization Process
Personalization aims to provide users with what they need without requiring them to ask for it explicitly. This means that a
personalization system must somehow infer what the user requires based on either previous or current interactions with the
user. This in itself assumes that the system somehow obtains information on the user and infers what his needs are based on
this information.
In terms of the learning task, personalization can be viewed as a:
Prediction Task: A model must be built to predict ratings for items not currently rated by the user. Depending on whether the user ratings are numeric or discrete, the learning task can be viewed as being one of regression or classification.
Selection Task: A model must be built that selects the N most relevant items for a user that the user has not already rated. While this task can be viewed as one of post-processing the list of predictions generated by a prediction model, the method of evaluating a selection-based personalization strategy would be very different from that of a prediction-based strategy. A toy sketch contrasting the two tasks follows.
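In the sketch below, the ratings are hypothetical and the item-mean predictor merely stands in for whatever regression or classification model is actually learned:

```python
ratings = {"alice": {"a": 5, "b": 3}, "bob": {"a": 4, "b": 2, "c": 5}}
items = {"a", "b", "c", "d"}

def predict(user, item):
    """Prediction task: estimate a rating for an item the user has not rated.
    Naively, the item's mean rating across all users (illustrative only)."""
    known = [r[item] for r in ratings.values() if item in r]
    return sum(known) / len(known) if known else 2.5

def select(user, n=2):
    """Selection task: the N most relevant items the user has not already rated."""
    unrated = items - ratings[user].keys()
    return sorted(unrated, key=lambda i: predict(user, i), reverse=True)[:n]

print(predict("alice", "c"), select("alice"))
```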

2.1 User Profiling


Having a thorough grasp of users' needs is essential for personalization. The user profiling process serves this purpose by gathering information about the users. This section focuses on the output of this process: user profiles. User profiles are a collection of information used to describe a particular user. This information plays an important role in any personalization system, as it is the only element in personalization that recognizes the differences between users. Basically, user profiles tell us who the user is, what he likes, and what his level of knowledge is. There are various kinds of information that user profiles can provide; however, the part that interests us most is the information about the interests of each user. Since user profiles are used to describe users, it is not surprising that input from users is vital to their construction. Therefore, we cover two major options for gathering user input, namely explicit input and implicit input, in the last section of this article.

2.2 Personalization: From the World Wide Web to the Semantic Web
Approaches to personalization have been investigated since the early days of the World Wide Web. In order to establish
personalization on the Semantic Web, it is essential to study personalization techniques on the World Wide Web, learn from
research findings and use the possibilities of machine-processable semantic descriptions of Web resources provided in the
Semantic Web (including languages as well as architectures and other technologies) to overcome problems and gaps
reported in the WWW-case.


3 Personalization Techniques
3.1 Planning and Reasoning about Actions

Another line of research has investigated the use of reasoning techniques in order to obtain forms of personalization. The study has involved two application areas: educational systems and the emerging area of web services. In both cases, the idea we explored is to base adaptation on the reasoning capabilities of a rational agent built by means of a declarative language. Concerning the former application domain, the focus was put on the possible uses of three different reasoning techniques, namely planning, temporal projection, and temporal explanation, which have been developed for allowing software agents to build action plans and to verify whether some properties of interest hold after the application of given sequences of actions. In both cases actions are usually not executed in the real world; rather, their execution is simulated "in the mind" of the system, which has to foresee their effects in order to build solutions. A group of agents, called reasoners, works on a dynamic domain description, where the basic actions that can be executed are of the kind "attend course X" and where complex professional expertise can also be described. The effects and conditions of actions are essentially given in terms of a set of abstract competences, which are connected by causal relationships. The set of all the possible competences and of their relations defines an ontology. This multi-level description of the domain brings many advantages. On the one hand, the high modularity of this approach to knowledge description allows course descriptions as well as expertise descriptions to be added, deleted or modified without affecting the system behavior. On the other hand, working at the level of competences is close to human intuition and enables the application of both goal-directed reasoning processes and explanation mechanisms.

The reasoning process that supports the definition of a study plan, aimed at reaching a certain learning goal, computes either over the effects of attending courses (given in terms of competence acquisition, credit gaining, and the like) or over those conditions that make attendance of a course reasonable from the educator's point of view. The logic approach also enables the validation of student-given study plans with respect to some learning goal of interest to the student himself.

The same mechanisms can be used for composing, in an automatic and goal-driven way, learning objects that are represented according to the SCORM standard of representation. Such descriptions, in fact, also account for a semantic annotation in terms of preconditions and learning objectives, which allows the interpretation of learning objects as actions.

Reasoning techniques can also be used for automatically retrieving and composing web services in order to accomplish (complex) tasks of interest, respecting the constraints and the preferences of the user. In this case, the reasoning process works on a declarative description of the communication policies followed by the services, which are to be included in their documentation. The approach that was proposed is set in the Semantic Web field of research and inherits from research in the field of multi-agent systems. By taking the abstraction of web services as software agents that communicate by following predefined, public and sharable interaction protocols, the possible benefits provided by a declarative description of their communicative behavior, in terms of personalization of service selection and composition, have been studied. The approach models the interaction protocols provided by web services by a set of logic clauses representing policies, thus at a high (not network) level. A description by policies is definitely richer than the usual service profile description, which consists of the input, output, precondition and effect properties usually taken into account for matchmaking. Moreover, having a logic specification of the protocol, it is possible to reason about the effects of engaging in specific conversations and, on this basis, to perform many tasks in an automatic way, in particular selection and composition. Actually, the approach that has been proposed can be considered as a second step in the matchmaking process, which narrows a set of already selected services and performs a customization of the interaction with them. Indeed, the problem that this proposal faces can intuitively be described as looking for an answer to the question "Is it possible to make a deal with this service respecting the user's goals?". Given a representation of the service in terms of logic-based interaction policies and a representation of the customer needs as abstract goals, expressed by a logic formula, logic programming reasoning techniques are used for understanding whether the constraints of the customer fit in with the policy of the service. More specifically, we have presented an approach to web service selection and composition that is based on reasoning about conversation protocols, within the framework of an agent language, DyLOG, based on a modal logic of action and beliefs. The approach extends with communication the proposal to model rational agents in [1]. Since the interest was in reasoning about the local mental state's dynamics, this approach differs from other logic-based approaches to communication in multi-agent systems, such as the one taken in [12], where communicative actions affect the global state of a system; the target of these latter approaches is to prove global properties of the overall multi-agent system execution. The focus on the internal specification of interaction protocols for planning dialogue moves is closer to [11], where negotiation protocols, expressed by sets of dialogue constraints, are included in the agent program and used for triggering dialogues that achieve goals. However, such an approach does not support plan extraction, and it cannot exploit information about the others, which is instead supplied by nested beliefs.

3.2 Adaptive Hypermedia Approach to Personalization

In the area of adaptive hypermedia, research has been carried out to understand how personalization and adaptation strategies can be successfully applied in hypertext systems and hypertext-like environments. It has been observed that in the area of adaptive hypermedia and adaptive web-based systems, the focus of developed systems has so far been on closed-world settings. This means that these systems work on a fixed set of resources which are normally known to the system designers at design time (see the discussion on closed corpus adaptive hypermedia [3]). This observation also relates to the fact that the issue of authoring adaptive hypermedia systems is still one of the most important research questions in this area, see e.g. [2]. A generalization of adaptive hypermedia to an Adaptive Web [4] therefore depends on a solution to the closed corpus problem in adaptive hypermedia. Within the Personal Reader project (www.personal-reader.de), we propose an architecture for applying some of the techniques developed in adaptive hypermedia to an open corpus. The core of the Personal Reader framework is a modular framework of components/services for providing the user interface, for mediating between user requests and available personalization services, for user modeling, and for providing personal recommendations and context information [10].

3.3 Rule-based User Modeling: Application to e-Learning domain

User modeling is concerned with gathering, discovering, deducing, and providing knowledge about users, to supply user-centered adaptation components with the information needed in adaptation decisions. In our work, we have studied approaches to rule-based user modeling, particularly in the e-learning domain. In [9], we have proposed an approach for dynamically generating personalized hypertext relations powered by reasoning mechanisms over distributed RDF annotations, and we have shown an example set of reasoning rules that decide on personalized relations to example pages given some page. Several ontologies have been used which correspond to the components of an adaptive hypermedia system: a domain ontology (describing the document space, the relations of documents, and the concepts covered in the domain of this document space), a user ontology (describing learner characteristics), and an observation ontology (modeling the different possible interactions of a user with the hypertext). For generating hypertext structures, a presentation ontology has been introduced. A simplified sketch of this style of rule-based reasoning is given below.
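As a flavor of this style of reasoning, the following deliberately simplified sketch applies one forward-chaining rule over (subject, predicate, object) triples; the vocabulary and the rule are illustrative and are not the actual ontologies or rules of [9]:

```python
# User observations and domain annotations as simple (subject, predicate, object) triples
triples = {
    ("learner1", "hasVisited", "doc_intro"),
    ("doc_intro", "covers", "concept_basics"),
    ("doc_advanced", "requires", "concept_basics"),
}

def derive_recommendations(triples):
    """Illustrative rule: if a learner visited a document covering concept C,
    recommend documents whose prerequisite is C."""
    learned = {(l, c) for (l, p, d) in triples if p == "hasVisited"
               for (d2, p2, c) in triples if p2 == "covers" and d2 == d}
    return {(l, "recommended", doc)
            for (l, c) in learned
            for (doc, p, c2) in triples if p == "requires" and c2 == c}

print(derive_recommendations(triples))
# {('learner1', 'recommended', 'doc_advanced')}
```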


Fig. 1: E-Learning Program

Fig. 1 represents how the user interacts with the e-learning modules on the Web. Based on this general approach of characterizing e-learning with ontological descriptions and reasoning approaches, we have concentrated on approaches to rule-based learner modeling [9, 8] and rule-based methods for learner assessment [5, 6, 7].

4. Personalization and Its Commercial Use

Personalization technologies have been widely adopted in electronic commerce. Its immediate objectives are to understand
users’ preferences and contexts and to deliver highly focused, relevant content matched to their needs. The long-term
objective is to generate more business opportunities and increase customers’ satisfaction.

Firms employ personalization technologies in different ways to generate business opportunities. Some firms use
personalization technologies as recommenders to provide offers to each individual in the hope of generating up-sell and
cross-sell opportunities. The main objective is to maximize online firms’ revenue. A well-known example is Amazon, which
greets the users by name and shortlists some books on the recommendation pages. The recommendations are generated
based on the users’ previous purchases and the preferences of like-minded people. Amazon continues to enhance its
personalization system, and more filtering mechanisms are being added to make the recommendations more useful and
relevant to users’ needs. Personalization technologies are also used to dynamically arrange the index of product pages based
on click-stream analysis to reduce the users’ search efforts. One example is My Yahoo, which personalizes the content
based on users’ profiles. For instance, it provides information on the horoscope for the correct star sign matched with a
person’s date of birth. It also presents the users with an array of choices and allows the users to select what is of interest to
them. The users can personalize not only the content, but also the layout. My Yahoo is considered to be one of the
forerunners among the growing number of personalized websites that have been emerging on the Internet over the last few
years. Personalization is not limited to online shopping; it is also applied to search engines. In 2004, two personalized search engines, A9.com by an Amazon subsidiary and MyJeeves by Ask Jeeves, were launched to let users store individual search results and then provide personalized web searches. Recently, the My Yahoo service has been enhanced with personalized searching. That is, the users can save pages of search results to a 'personal web' and block URLs from appearing on the result list. Google generates personalized offers under the sponsored links shown at the top of the browser.

5 Conclusions

To develop systems which can filter information according to the requirements of the individual, which can learn the needs of users from observations of previous navigation and interaction behavior, and which can continuously adapt to dynamic interests and changing requirements is still one of the challenges in building smart and successful Web applications. Looking at the techniques in adaptive hypermedia, we can see that the re-usability of these techniques is still an unsolved problem. We require a formalism expressing adaptive functionality in a system-independent and re-usable manner, which allows us to apply this adaptive functionality in various contexts, as has been done e.g. for adaptive educational hypermedia systems. Looking at the personalization techniques based on Web mining, we can see that the filtering techniques (content-based, collaborative, demographic, utility-based, knowledge-based, or others) are limited, as they require a critical mass of data before the underlying machine learning algorithms produce results of sufficient quality. The birth of the Semantic Web brought along standard models, languages, and tools for representing and dealing with machine-interpretable semantic descriptions of Web resources, giving a strong new impulse to research on personalization. Just as the current Web is inherently heterogeneous in data formats and data semantics, the Semantic Web will be heterogeneous in its reasoning forms, and the same will hold for personalization systems developed on the Semantic Web. In this article we have analyzed some possible applications of techniques for reasoning about actions and change and of techniques for reasoning about preferences, the so-called defeasible logic; indeed, the availability of a variety of reasoning techniques, all fully integrated with the Web, opens the way to the design and development of forms of interaction and personalization that were unimaginable only a short time ago. To this aim it is necessary to integrate results from many areas, such as Multi-Agent Systems, Security, Trust, Ubiquitous Computing, Ambient Intelligence, Human-Computer Interaction and, of course, Automated Reasoning.

References
[1] Ardissono, L., A. Goy, G. Petrone, and M. Segnan (2002). Personalization in business-to-customer interaction.
Communications of the ACM, 45 (5), 52-53.
[2] Baldoni, M., Giordano, L., Martelli, A., and Patti, V. Programming Rational Agents in a Modal Action Logic. Annals of
Mathematics and Artificial Intelligence, Special issue on Logic-Based Agent Implementation 8, 5 (2004), 597–635.
[3] Bra, P. D., Aerts, A., Smits, D., and Stash, N. AHA! Version 2.0: More Adaptation Flexibility for Authors. In
Proceedings of the AACE ELearn’2002 conference (Oct. 2002).
[4] Brusilovsky, P. Adaptive Hypermedia. User Modeling and User-Adapted Interaction 11 (2001), 87–110.
[5] Brusilovsky, P., and Maybury, M. The Adaptive Web. Communications of the ACM, 2002.
[6] Cheniti-Belcadhi, L., Henze, N., and Braham, R. An Assessment Framework for ELearning in the Semantic Web. In
Proceedings of the Twelfth GI- Workshop on Adaptation and User Modeling in interactive Systems (ABIS 04) (Berlin,
Germany, October 2004).
[7] Cheniti-Belcadhi, L., Henze, N., and Braham, R. A Framework for dynamic Assessment for Adaptive Content
Presentation in Educational Hypermedia. In Proceedings of the IADIS WWW / Internet 2004 Conference (Madrid,
Spain, October 2004).
[8] Cheniti-Belcadhi, L., Henze, N., and Braham, R. Towards a service based architecture for assessment. In Proceedings of the Thirteenth GI-Workshop on Adaptation and User Modeling in interactive Systems (ABIS 05) (Saarbrücken, Germany, October 2005).
[9] Dolog, P., and Schäfer, M. Learner modeling on the semantic web. In International Workshop on Personalization on the Semantic Web PersWeb'05 (Edinburgh, UK, July 2005).
[10] Henze, N., Dolog, P., and Nejdl, W. Reasoning and Ontologies for Personalized e-Learning. Educational Technology &
Society 7, 4 (2004).
[11] Henze, N., and Kriesell, M. Personalization Functionality for the Semantic Web: Architectural Outline and First
Sample Implementation. In 1st International Workshop on Engineering the Adaptive Web (EAW 2004) (Eindhoven,
The Netherlands, 2004).
[12] Manber, U., Patel, A., and Robison, J. (2000). Experience with personalization on Yahoo! Communications of the
ACM, 43 (8), 35-39.
[13] Sadri, F., Toni, F., and Torroni, P. Dialogues for Negotiation: Agent Varieties and Dialogue Sequences. In Proc. of
ATAL’01 (Seattle, WA, 2001).
[14] Shapiro, S., Lespérance, Y., and Levesque, H. J. Specifying communicative multiagent systems. In Agents and Multi-Agent Systems - Formalisms, Methodologies, and Applications (1998), vol. 1441 of LNAI, Springer-Verlag, pp. 1-14.

[15] Wang, W. and I. Benbasat (2005). Integrating TAM with trust to explain online recommendation agent adoption,
Journal of Association for Information Systems, 6 (3), 72-101.


Closed Loop Controlled Bridgeless PFC Converter with Multicore Inductor

M. Gopinath1, S. Ramareddy2
1 Research Scholar, Bharath University, Chennai, India
2 Professor, Jerusalem College of Engg, Chennai, India
mgopinath_10@yahoo.co.in

Abstract
For the sake of energy saving, harmonic current suppression and drive performance improvement, bridgeless PFC circuit topologies are used more and more extensively. The implementation of a bridgeless power factor correction (PFC) boost rectifier with low common-mode noise is presented in this paper, and the proposed method is compared with the conventional method. The proposed implementation employs a unique multiple-winding, multicore inductor to increase the utilization of the magnetic material. Open-loop and closed-loop models are developed and simulated, and the simulation results justify the theoretical analysis.

Index Terms—Boost converter, bridgeless, magnetic integration, power factor correction (PFC).


I. INTRODUCTION
To maximize power supply efficiency, bridgeless power factor correction (PFC) circuit topologies that may reduce the conduction loss by reducing the number of semiconductor components in the line-current path have been introduced. Among them, the bridgeless PFC boost implementations have received the most attention, and bridgeless PFC is one of the options for meeting these new requirements. The main goal of this paper is to present a bridgeless solution that is relatively easy to implement, in the sense that it does not require any specific controller, and whose operation remains very similar to that of a conventional PFC. The major drawback of the conventional bridgeless PFC is the low utilization of switches and magnetic components; the proposed implementation employs a unique multiple-winding, multicore inductor to increase the utilization of the magnetic material. In this paper, a systematic review of the bridgeless PFC boost rectifier is presented, together with the proposed implementation employing the unique multiple-winding, multicore inductor. A performance comparison between the conventional bridgeless PFC boost rectifier and a representative member of the bridgeless PFC boost rectifier family with a multicore inductor is performed. Open-loop and closed-loop controlled models are developed for the proposed system.

Fig. 01: Bridgeless PFC boost converter. Fig. 02: Proposed converter with common-core inductors.

II. BRIDGELESS PFC CONVERTER WITH COMMON-CORE INDUCTORS


The bridgeless PFC boost rectifier in Fig. 01 consists of two boost PFC rectifiers, each operating during one half of the line cycle: one boost rectifier operates while the other is idle. As a result, the utilization of switches and magnetic components is only one-half of that of the conventional PFC boost converter, which utilizes all the components during the entire line cycle. The low utilization of the components may be a serious penalty in terms of weight, power density, and cost. However, the utilization can be improved by minimizing the number of components through component integration. As discussed, the number of components can be reduced by integrating magnetic components such as transformers and inductors on the same core. The utilization of the magnetic components in the circuit in Fig. 01 can be significantly improved by employing a unique multiple-winding, multicore inductor structure. The circuit diagram of this implementation of the dual-boost PFC rectifier is shown in Fig. 02.
Fig.03. Two-winding integrated magnetic device with decoupled energy storage. Fig.04. Simplified symbol of the magnetic device.
As shown in Fig.03, boost inductor LB consists of a first winding, a second winding, and two cores. The first winding (NA) consists of series-connected windings NA1 and NA2. The second winding (NB) consists of series-connected windings NB1 and NB2. Windings NA1 and NB1 are wound on the first core in the same direction, whereas windings NA2 and NB2 are wound on the second core in opposite directions. To facilitate the explanation of the magnetic element, Fig.04 shows the simplified symbol of the integrated magnetic device of Fig.03 with the polarity mark of each winding, and Fig.05 shows the device with the reference directions of the currents and the magnetic flux as current iA flows through winding NA. To make the two windings magnetically independent of each other, windings NA1 and NA2 should have an equal number of turns, i.e., NA1 = NA2; in addition, windings NB1 and NB2 should also have an equal number of turns, i.e., NB1 = NB2. As can be seen in Fig.05, current iA sets up a magnetomotive force NA x iA, and hence a flux φA, in each core. The change of flux φA induces a current in windings NB1 and NB2 in each core. Because of the opposite winding directions and the equal number of turns of NB1 and NB2, the induced currents in windings NB1 and NB2 have opposite directions and equal magnitudes. As a result, the total current of winding NB is zero, i.e., iB = 0. Similarly, current iA is zero when current iB flows in winding NB. Consequently, the first winding and the second winding are magnetically independent and can be used as two different inductors. As shown in Fig.02, during the period when ac input voltage Vac is positive, the boost rectifier that consists of switch S1, diodes D1 and D4, and windings NA1 and NA2 operates to deliver energy to the output, while the boost rectifier that consists of switch S2, diodes D2 and D3, and windings NB1 and NB2 is idle.
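This cancellation can be made explicit. Under the assumptions stated above (ideal coupling, the same flux φA in both cores, and equal turns NB1 = NB2), the voltages induced in the two series-connected halves of winding NB are equal and opposite:

    vB = vB1 + vB2 = NB1 (dφA/dt) - NB2 (dφA/dt) = (NB1 - NB2) (dφA/dt) = 0,

so no net current can circulate in winding NB while winding NA is active.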
Fig.05. Integrated magnetic device.
It should be noted that the two cores on which windings NA1 and NA2 are wound are fully utilized even though windings NB1 and NB2 are idle. Similarly, during the period when ac input voltage Vac is negative, the boost rectifier that consists of switch S2, diodes D2 and D3, and windings NB1 and NB2 operates to deliver energy to the output, while the boost rectifier that consists of switch S1, diodes D1 and D4, and windings NA1 and NA2 is idle. The two cores are then still fully utilized by windings NB1 and NB2 although windings NA1 and NA2 are idle. As a result, the high utilization of the magnetic cores significantly improves power density and reduces the overall weight of the power supply. While windings NA1, NA2, NB1 and NB2 can easily be manufactured with an equal number of turns, the cross-sectional area and permeability of the magnetic cores exhibit small differences within the specified manufacturing tolerances. As a result of these differences in the core parameters, the magnetizing inductances of the two coupled inductors may not be the same, so the cancellation of the currents in the inactive windings (windings NB1 and NB2 during positive line half cycles, and NA1 and NA2 during negative line half cycles) may not be perfect. However, the lack of perfect current cancellation in the inactive windings has virtually no effect on the electromagnetic interference (EMI) performance of the circuit in Fig.02, since return diodes D3 and D4 always provide a low-impedance path for the return current, i.e., they always connect the load directly to the source. The effect of the mismatched magnetizing inductances of the cores is observed as a flow of the switching-frequency component of the return current (ripple current) through the inactive winding. It should also be noted that in bridgeless boost PFC implementations with return diodes, the line-frequency return current divides between the path through a return diode and the path through the inactive switch and inductor according to the low-frequency (dc) impedances of these two paths. According to this analysis, for gapped cores, which typically exhibit magnetizing inductance tolerances in the ±10% range, only about 10% of the ripple current returns through the inactive windings. Such a small current has virtually no effect on the performance of the circuit, i.e., it practically does not affect the efficiency or the EMI of the circuit. Finally, the leakage inductance of the coupled inductors has no effect on the operation and performance of the circuit and can be neglected.
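A rough way to see where the 10% figure comes from (an order-of-magnitude estimate, not part of the original analysis): if the two per-core magnetizing inductances are L1 = L(1 + δ) and L2 = L(1 - δ), the induced voltages in the two halves of the inactive winding no longer cancel exactly, and the residual drive is proportional to the mismatch,

    iB,ripple / iA,ripple ≈ (L1 - L2) / (L1 + L2) = δ,

so a δ = 10% tolerance leaves roughly 10% of the ripple current circulating through the inactive winding.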
III. SIMULATION RESULTS AND ANALYSIS
Simulation is carried out using Matlab Simulink and the results are presented below. The dual-boost converter is shown in Fig.6.a, and the corresponding AC input voltage and current waveforms are shown in Fig.6.b; the current and voltage are almost in phase. The acceptance of this implementation in practical applications is hampered by the high common-mode noise produced by the high-frequency switching of S1 and S2, and the major drawback of the rectifier is the low utilization of switches and magnetic components. The modified dual-boost PFC rectifier with common-core inductors is shown in Fig.7.a. It is assumed that each controlled switch is implemented as a power MOSFET with its inherently slow body diode. The voltages across MOSFETs 1 and 2 are shown in Fig.7.b. The AC input voltage and current are shown in Fig.7.c, and the DC output current and output voltage are shown in Figs.7.d and 7.e respectively. The open-loop controlled dual-boost PFC rectifier with common-core inductors is shown in Fig.8.a. A disturbance in the input voltage, and the resulting disturbance in the DC output voltage, are shown in Figs.8.b and 8.c; in open-loop control there is no controller to correct for this disturbance.
Fig.06.a. Simulation circuit of dual boost converter. Fig.06.b. AC input voltage and current.
Fig.07.a. Simulation circuit of dual boost converter with inductor core. Fig.07.b. Voltage across MOSFETs 1 and 2.
Fig.07.c. Proposed circuit - AC input voltage and current. Fig.07.d. DC output current.
Fig.07.e. DC output voltage.
Fig.08.a. Open loop circuit of dual boost converter with inductor core.
Fig.08.b. AC input voltage with disturbance. Fig.08.c. DC output voltage with disturbance.
The closed-loop system is shown in Fig.9.a. The output voltage is sensed and compared with a reference voltage, and the error is processed by a PI controller. A step rise in input voltage is applied to the closed-loop system. The output of the pulse generator adjusts the switching until the output voltage reaches the set value; as shown in Fig.9.b, the DC voltage settles at the set value. The comparison of input voltage for the conventional and proposed methods is shown in Fig.9.c.
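The voltage loop just described can be sketched in code. The following minimal discrete PI controller in Java only illustrates the control law; the gains kp and ki, the sampling period ts, and the saturation limits are illustrative assumptions, not values taken from the reported Simulink model.

    // Minimal discrete PI controller for the output-voltage loop (illustrative values only).
    public class PiController {
        private final double kp, ki, ts; // proportional gain, integral gain, sampling period [s]
        private double integral = 0.0;   // accumulated integral of the error

        public PiController(double kp, double ki, double ts) {
            this.kp = kp; this.ki = ki; this.ts = ts;
        }

        // vRef: reference DC voltage, vOut: sensed DC output voltage.
        // Returns a duty-cycle command clamped to [0, 1] for the pulse generator.
        public double update(double vRef, double vOut) {
            double error = vRef - vOut;
            integral += error * ts;                 // rectangular (Euler) integration
            double u = kp * error + ki * integral;
            return Math.min(1.0, Math.max(0.0, u)); // saturate to a valid duty cycle
        }
    }

Each sampling period the sensed output voltage is compared with the reference, and the resulting command drives the pulse generator; this is how the closed-loop model rejects the input-voltage step.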
Fig.09.a. Closed loop circuit of dual boost converter with inductor core.
Fig.09.b. Output voltage with disturbance. Fig.09.c. Comparison of input voltage for conventional and proposed methods.
IV. CONCLUSION
A bridgeless PFC converter with improved magnetic utilization has been modeled and simulated using Matlab. Open-loop and closed-loop models were developed and used successfully for simulation. The simulation studies show that the dual-boost PFC rectifier employs a multiple-winding magnetic device to increase the utilization of the magnetic core. The performance of the proposed rectifier was verified in Matlab Simulink. The proposed technique improves the efficiency by approximately 1% compared with the conventional PFC boost rectifier and improves the utilization of the magnetic cores relative to the conventional bridgeless dual-boost rectifier, resulting in a low-cost, high-power-density design.
REFERENCES
[1] W.-Y. Choi, J.-M. Kwon, E.-H. Kim, J.-J. Lee, and B.-H. Kwon, “Bridgeless boost rectifier with low conduction losses and reduced diode reverse-recovery problems,” IEEE Trans. Ind. Electron., vol. 54, no. 2, pp. 769–780, Apr. 2007.
[2] H. Ye, Z. Yang, J. Dai, C. Yan, X. Xin, and J. Ying, “Common mode noise modeling and analysis of dual boost PFC circuit,” in Proc. Int. Telecommunication Energy Conf., Sep. 2004, pp. 575–582.
[3] B. Lu, R. Brown, and M. Soldano, “Bridgeless PFC implementation using one cycle control technique,” in Proc. IEEE Applied Power Electronics Conf., Mar. 2005, pp. 812–817.
[4] P. Kong, S. Wang, and F. C. Lee, “Common mode EMI noise suppression in bridgeless boost PFC converter,” in Proc. CPES Power Electronics Conf., Apr. 2006, pp. 65–70.
[5] L. Huber, Y. Jang, and M. M. Jovanovic, “Performance evaluation of bridgeless PFC boost rectifiers,” IEEE Trans. Power Electron., vol. 23, no. 3, pp. 1381–1390, 2008.
[6] A. F. Souza and I. Barbi, “High power factor rectifier with reduced conduction and commutation losses,” presented at the Int. Telecommun. Energy Conf. (INTELEC), Copenhagen, Denmark, Jun. 1999.
[7] T. Ernö and M. Frisch, “Second generation of PFC solutions,” Power Electron. Eur., no. 7, pp. 33–35, 2004.
[8] J. Sun, K. F. Webb, and V. Mehrotra, “Integrated magnetics for current doubler rectifiers,” IEEE Trans. Power Electron., vol. 19, no. 3, pp. 582–590, Nov. 2004.
[9] P. Xu, M. Ye, P. Wong, and F. C. Lee, “Design of 48-V voltage regulator modules with a novel integrated magnetics,” IEEE Trans. Power Electron., vol. 17, no. 6, pp. 990–998, Nov. 2002.
[10] Y. Jang and M. M. Jovanovic, “A bridgeless PFC boost rectifier with optimized magnetic utilization,” IEEE Trans. Power Electron., vol. 24, no. 1, Jan. 2009.
[11] G.-G. Park, K.-Y. Kwon, and T.-W. Kim, “PFC dual boost converter based on input voltage estimation for DC inverter air conditioner,” Journal of Power Electronics, vol. 10, no. 3, May 2010.
About Authors
M. Gopinath obtained his B.E. degree from Bharathiar University, Coimbatore, in 2002 and his M.Tech degree from Vellore Institute of Technology, Vellore, in 2004. He is presently pursuing his research work at Bharath University, Chennai, and is working as an Assistant Professor in the EEE Department at Ganadipathy Tulsi's Engineering College, Vellore.

S. Ramareddy is Professor and Head of the Electrical Department, Jerusalem College of Engineering, Chennai. He obtained his D.E.E. from S.M.V.M Polytechnic, Tanuku, A.P., his A.M.I.E. in Electrical Engineering from the Institution of Engineers (India), and his M.E. in Power Systems from Anna University. He received his Ph.D. degree in the area of resonant converters from the College of Engineering, Anna University, Chennai. He has published over 20 technical papers in national and international conference proceedings and journals. He secured the A.M.I.E. Institution Gold Medal for obtaining the highest marks, and has received AIMO best project awards and the Vijaya Ratna Award. He has worked at Tata Consulting Engineers, Bangalore, and at Anna University, Chennai. His research interests are in the areas of resonant converters, VLSI and solid-state drives. He is a life member of the Institution of Engineers (India), the Indian Society for India, and the Society of Power Engineers, and a fellow of the Institution of Electronics and Telecommunication Engineers (India). He has published books on power electronics and solid-state circuits.
Advanced Hill Cipher Involving a Pair of Keys
V.U.K. Sastry(1), Aruna Varanasi(2) and S. Udaya Kumar(3)
(1) Department of Computer Science and Engineering, SNIST, Hyderabad, India. vuksastry@rediffmail.com
(2) Department of Computer Science and Engineering, SNIST, Hyderabad, India. varanasi.aruna2002@gmail.com
(3) Department of Computer Science and Engineering, SNIST, Hyderabad, India. uksusarla@rediffmail.com
Abstract
In this paper we develop a symmetric block cipher by generalizing the advanced Hill cipher. The cipher uses two keys K and L, each of size (n/2) x (n/2), and two additional keys d and e, each an integer lying in the interval [0, 255]. From K and d, and from L and e, we construct a pair of involutory matrices A and B. A and B are used as the left and right multiplicands, respectively, of the plaintext P, and a mod 256 operation is carried out in each round of the iteration process. A function called Permute() is included in this cipher to achieve confusion and diffusion. The avalanche effect and the cryptanalysis studied in this investigation show that the cipher is strong and offers thorough security.

Key words: pair of keys, involutory matrix, cryptanalysis, avalanche effect, ciphertext, block cipher.
1. Introduction
The study of the Hill cipher and its variants has attracted the attention of several researchers. Many authors have modified the Hill cipher [1-3] by including iteration and permutation/mixing. In all these investigations the multiplication with the key or keys, coupled with the modular arithmetic, scatters the plaintext, and the permutation/mixing causes marked confusion and diffusion. All these operations enhance the strength of the cipher.
In recent investigations [4-6], we have considered several problems in which we extended the ideas of the Hill cipher to an advanced Hill cipher. In that cipher we used an involutory matrix, i.e., a matrix whose modular arithmetic inverse is the same as the matrix itself. This matrix includes the key, and all its other elements depend upon the key. Using these concepts we discussed various aspects of the advanced Hill cipher.
In the present analysis our objective is to study an advanced Hill cipher that involves a pair of keys: one key is embedded in the involutory matrix acting as the left multiplicand of the plaintext, and the other key is embedded in another involutory matrix acting as the right multiplicand of the plaintext. Our interest here is to examine how the left and right multiplicands of the plaintext raise the strength of the cipher, and how the permutation furthers that strength.
In Section 2, we introduce the development of the cipher and present the algorithms for encryption and decryption. Section 3 is devoted to the illustration of the cipher, in which we also point out the avalanche effect. In Section 4 we discuss the cryptanalysis. Finally, in Section 5, we deal with the computations and conclusions.
2. Development of the cipher

In this cipher, let us represent the plaintext P, the key K and the ciphertext C in the following form:
P = [Pij], i = 1 to n, j = 1 to n, (2.1)
K = [Kij], i = 1 to n/2, j = 1 to n/2, (2.2)
C = [Cij], i = 1 to n, j = 1 to n. (2.3)
On using the basic ideas of the involutory matrix [4], we get

    A = [ A11  A12 ]
        [ A21  A22 ],        (2.4)

where
A11 = K, (2.5)
A22 mod N = -A11 mod N, (2.6)
A12 = [d(I - A11)] mod N, (2.7)
A21 = [λ(I + A11)] mod N, (2.8)
in which I is the identity matrix, d is a positive integer which can be treated as another key, and λ is a positive integer governed by the relation
(dλ) mod N = 1. (2.9)
In the above relations, N is chosen as 256, as the plaintext is converted into decimal numbers by using the EBCDIC code.
In a similar manner we obtain another involutory matrix B by using the relations

    B = [ B11  B12 ]
        [ B21  B22 ],        (2.10)

B11 = L, (2.11)
B22 mod N = -B11 mod N, (2.12)
B12 = [e(I - B11)] mod N, (2.13)
B21 = [µ(I + B11)] mod N, (2.14)
and
(eµ) mod N = 1, (2.15)
in which e plays the role of another key. The relations (2.10) to (2.15) are exactly analogous to (2.4) to (2.9). Here it is to be noted that the two keys K and L, chosen by us, are embedded in the involutory matrices A and B respectively.
The flow charts describing the encryption and the decryption processes are presented in Fig.1.
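It is easy to verify that the matrix A constructed in this way satisfies A^2 mod N = I, which is what allows the same matrix to be used for both encryption and decryption. Since all four blocks are polynomials in A11, they commute, and using (2.5)-(2.9):

    A12 A21 = dλ (I - A11)(I + A11) mod N = (I - A11^2) mod N, since (dλ) mod N = 1,
    so A11^2 + A12 A21 = I mod N, and
    A11 A12 + A12 A22 = d [A11 (I - A11) - (I - A11) A11] mod N = 0,

and similarly for the remaining two blocks; hence A^2 mod N = I. The same argument applies to B with (2.10)-(2.15).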
The corresponding algorithms governing the encryption and the decryption are as follows.

Algorithm for Encryption
1. Read n, P, K, L, r, d, e
2. A11 = K, B11 = L
3. A = involute(A11, d)
   B = involute(B11, e)
4. for i = 1 to r
   {
     P = (A P B) mod 256
     P = Permute(P)
   }
   C = P
5. Write(C)

Algorithm for Decryption
1. Read n, C, K, L, r, d, e
2. A11 = K, B11 = L
3. A = involute(A11, d)
   B = involute(B11, e)
4. for i = 1 to r
   {
     C = IPermute(C)
     C = (A C B) mod 256
   }
5. P = C
6. Write(P)

involute(A11, d)  // This function is used for obtaining A as well as B.
{
  1. A22 = (-A11) mod 256
  2. for i = 1 to 255
     {
       if ((i x d) mod 256 = 1)
       {
         λ = i;
         break;
       }
     }
  3. A12 = [d(I - A11)] mod 256
  4. A21 = [λ(I + A11)] mod 256
  5. Obtain A by using the relation
        A = [ A11  A12 ]
            [ A21  A22 ]
}
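Since the implementation reported in Section 5 is in Java, the function involute() can also be written out concretely. The following is a minimal sketch assembled from relations (2.4)-(2.9); the method and variable names are illustrative, not taken from the authors' program, and d is assumed to be odd so that its inverse mod 256 exists.

    // Build the involutory matrix A (mod 256) of size 2m x 2m from the m x m block A11 and key d.
    static int[][] involute(int[][] a11, int d) {
        int m = a11.length, N = 256;
        int lambda = 0;
        for (int i = 1; i < N; i++)                 // find λ such that (i*d) mod 256 = 1
            if ((i * d) % N == 1) { lambda = i; break; }
        int[][] A = new int[2 * m][2 * m];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++) {
                int id = (i == j) ? 1 : 0;          // entry of the identity matrix I
                A[i][j]         = mod(a11[i][j], N);                  // A11 = K (or L)
                A[i][j + m]     = mod(d * (id - a11[i][j]), N);       // A12 = d(I - A11) mod 256
                A[i + m][j]     = mod(lambda * (id + a11[i][j]), N);  // A21 = λ(I + A11) mod 256
                A[i + m][j + m] = mod(-a11[i][j], N);                 // A22 = -A11 mod 256
            }
        return A;
    }

    static int mod(int x, int n) { return ((x % n) + n) % n; }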
In the process of encryption, in each round of the iteration, we use the function Permute(), which can be explained as follows. After the computation of (APB) mod 256, we get a matrix, say M, of size n x n, in which each number lies in [0, 255]. On representing each number in its binary form, we get a matrix of size n x 8n. The upper half of this matrix, consisting of rows 1 to n/2, is mapped into the left half of a new matrix, so that its elements occupy n rows and 4n columns; the right half of the new matrix is filled with the lower portion, consisting of rows (n/2)+1 to n. This new matrix is then converted back into decimal form by taking the binary bits, eight at a time, in a row-wise manner. For a detailed discussion of the process involved in the function Permute(), we refer to [6].
The function IPermute(), used in the process of decryption, is the reverse of Permute().
The number of rounds r in the iteration process is 16.
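Under the reading of the description above in which both halves are refilled in row-major order (the exact bit ordering is specified in [6]; this Java version is only an illustrative sketch), Permute() can be implemented as follows.

    // Illustrative Permute(): assumes n is even and every entry of m lies in [0, 255].
    static int[][] permute(int[][] m) {
        int n = m.length;
        int[][] bits = new int[n][8 * n];           // step 1: n x n bytes -> n x 8n bits (MSB first)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int b = 0; b < 8; b++)
                    bits[i][8 * j + b] = (m[i][j] >> (7 - b)) & 1;
        int[][] out = new int[n][8 * n];            // step 2: upper half -> left half, lower -> right
        int k = 0;
        for (int i = 0; i < n / 2; i++)
            for (int j = 0; j < 8 * n; j++, k++)
                out[k / (4 * n)][k % (4 * n)] = bits[i][j];
        k = 0;
        for (int i = n / 2; i < n; i++)
            for (int j = 0; j < 8 * n; j++, k++)
                out[k / (4 * n)][4 * n + k % (4 * n)] = bits[i][j];
        int[][] res = new int[n][n];                // step 3: read back eight bits at a time, row-wise
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int b = 0; b < 8; b++)
                    res[i][j] = (res[i][j] << 1) | out[i][8 * j + b];
        return res;
    }

IPermute() is obtained by running the same index mapping in the opposite direction.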
3. Illustration of the cipher
Let us consider the plaintext given below:
Dear brother! you have joined in army for the protection of our Country. Father is around liquor shops for which licence is
given by the Government. Mother is worried with depression. You know Iam doing my B.Tech final year.
(3.1)
Now let us focus our attention on the first 64 characters of the above plaintext. Thus we have
'Dear brother! you have joined in army for the protection of our ' (3.2)
On applying the EBCDIC code, (3.2) can be brought to the form
        [196 133 129 153  64 130 153 150]
        [163 136 133 153  79  64 168 150]
        [164  64 136 129 165 133  64 145]
    P = [150 137 149 133 132  64 137 149]    (3.3)
        [ 64 129 153 148 168  64 134 150]
        [153  64 163 136 133  64 151 153]
        [150 163 133 131 163 137 150 149]
        [ 64 150 134  64 150 164 153  64]
Let the pair of key matrices K and L be taken in the form

        [228  52 145  94]
    K = [199  72  37  99]    (3.4)
        [245  25  59 133]
        [ 72 221  27  94]

and

        [102 210  33  45]
    L = [117 120  89  97]    (3.5)
        [ 79  49  53 223]
        [180 133 254  37]
On using the relations given in Section 2, the involutory matrices A and B are obtained in the form

        [228  52 145  94 115 244 193 254]
        [199  72  37  99  23 151  21 243]
        [245  25  59 133 229 201  26 117]
    A = [ 72 221  27  94 200  77  43 205]    (3.6)
        [ 11 140 159  66  28 204 111 162]
        [137 103 203  45  57 184 219 157]
        [251 151   4 107  11 231 197 123]
        [ 56 147 245 113 184  35 229 162]

and

        [102 210  33  45 215   6 235 111]
        [117 120  89  97 135 157  83 171]
        [ 79  49  53 223 229 155  60  21]
    B = [180 133 254  37 188  55 234 140]    (3.7)
        [235  74 125 217 154  46 223 211]
        [  1 117 213 189 139 136 167 159]
        [ 51  77 158 131 177 207 203  33]
        [100 209  70 206  76 123   2 219]

In obtaining A and B we have taken d = 207 and e = 117 respectively.
On using the encryption algorithm given in Section 2, we get
116 210 244 221 47 133 73 196


 225 183 21 143 201 195 122
69
 
 21 231 200 204 152 201 32 
C = 187  (3.8)
202 108 153 185 194 91 79 80 
254 14 250 60 17 187 61 173
 
 58 248 239 245 156 6 26 196
140 59 16 97 218 76 35 122
 
148 208 88 2 184 118 206 123

Then on using (3.6) to (3.8) and adopting the decryption algorithm, we get back the original plaintext given by (3.3).
Let us now discuss the avalanche effect. To examine it, we change the eleventh character of the plaintext, 'd', to 'e'. As the EBCDIC codes of d and e are 132 and 133 respectively, the two plaintexts differ by one binary bit. On using this modified plaintext, the involutory matrices given by (3.6) and (3.7), and the encryption algorithm of Section 2, we get the corresponding ciphertext, given by
        [ 81  59  39 211 144  86  43  53]
        [142  99 184  27 184 174  49  39]
        [252  59   8 230 166  40 185 161]
    C = [133  48   2 109 100   2 104 191]    (3.9)
        [185 179 110 195  55 235  83 206]
        [228  22  84 128  29 110  81 221]
        [ 50  89 100  40 113 179  24 142]
        [ 43 197  10 194 249  16  21 155]

On comparing (3.8) and (3.9), after converting them into their binary form, we find that the two ciphertexts differ by 283 binary bits (out of 512 bits). This is a very good result.
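The bit-difference counts reported here can be reproduced mechanically; a small Java helper of the following kind (illustrative, not from the authors' program) counts the differing bits between two 8 x 8 ciphertext matrices:

    // Count the number of differing binary bits between two byte matrices (512 bits for 8 x 8).
    static int bitDifference(int[][] c1, int[][] c2) {
        int d = 0;
        for (int i = 0; i < c1.length; i++)
            for (int j = 0; j < c1[i].length; j++)
                d += Integer.bitCount((c1[i][j] ^ c2[i][j]) & 0xFF); // differing bits of each byte
        return d;
    }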
Now let us change K42 from 221 to 220; these two values also differ by one binary bit. On applying the encryption algorithm with the modified key (A changes correspondingly) and all the other required data kept intact, we obtain the corresponding ciphertext C given by

        [111 151 216  48  49 144  80 159]
        [ 78  32 179 163   1  18 238 160]
        [ 94 242  24 108  23 188 243  11]
    C = [127 228 152   3 212 130  26 164]    (3.10)
        [157 125  29 251 248 136 232  35]
        [217 140  73 108 121 228 113  87]
        [169 192 145  52 139 114 205 119]
        [129 248  74 245  14 171 191  87]

On comparing (3.8) and (3.10), it is interesting to note that the two ciphertexts differ by 276 binary bits (out of 512 bits). This also exhibits the strength of the cipher.
4. Cryptanalysis
In the literature of cryptography, the well-known cryptanalytic attacks are:
1) Ciphertext only attack (brute force attack),
2) Known plaintext attack,
3) Chosen plaintext attack,
4) Chosen ciphertext attack.
In this analysis, the size of the key space is 2^(4n^2+16), since the keys K and L are each of size (n/2) x (n/2) bytes, together contributing 4n^2 binary bits, and in addition we have the two keys d and e, each of size 8 binary bits. If the time required for the computation with one key of this key space is assumed to be 10^-7 seconds, then the time required to try all the keys in the key space is

    (2^(4n^2+16) x 10^-7) / (365 x 24 x 60 x 60) years ≈ 2^(4n^2+16) x 3.17 x 10^-15 years.

In the present analysis, as n = 8, this amounts to 2^272 x 3.17 x 10^-15 years, i.e., about 2.4 x 10^67 years. As this is very large, the cipher cannot be broken by the brute-force attack.
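This estimate can be checked with a few lines of Java (a throwaway calculation, not part of the cipher itself):

    import java.math.BigInteger;

    public class KeySpace {
        public static void main(String[] args) {
            int n = 8;
            // K and L contribute 4n^2 key bits; d and e contribute 16 more.
            BigInteger keys = BigInteger.TWO.pow(4 * n * n + 16);
            // At 10^-7 seconds per trial, convert the exhaustive-search time to years.
            double years = keys.doubleValue() * 1e-7 / (365.0 * 24 * 60 * 60);
            System.out.printf("keys = 2^%d, exhaustive search ~ %.2e years%n", 4 * n * n + 16, years);
            // Prints roughly 2.4e67 years.
        }
    }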
In the known-plaintext attack, the attacker knows as many pairs of plaintext and ciphertext as required. However, in this algorithm, as there is a right multiplicand of the plaintext in addition to the left multiplicand, and a mod operation followed by a permutation in each round of the iteration process, the relation between the ciphertext C and the plaintext P assumes the complicated form
C = f((A . . . (A f((APB) mod 256) B) mod 256 . . . B) mod 256), (4.1)
in which f denotes the function Permute(); this notation is used for convenience.
As the plaintext matrix P is surrounded by the right-side involutory matrix B, the mod operation and the permutation, here, unlike in the case of the classical Hill cipher, even though the inverse of P can be obtained, it cannot be transferred to the left side, and hence the cipher cannot be broken.
Regarding the last two attacks, we notice that, apparently, no special choice of the plaintext or the ciphertext is feasible for breaking the cipher.
In the light of the above discussion, we conclude that the cipher cannot be broken by any cryptanalytic attack.
5. Computations and conclusions

The algorithms for the encryption and the decryption of this cipher are implemented in the Java programming language. The entire plaintext given in (3.1) is divided into four blocks, each containing 64 characters; the last block is padded with thirty-two blank characters so that it, too, contains 64 characters. On applying the encryption algorithm to each block separately, we obtain the corresponding ciphertext, as shown below:
116 210 244 221 47 133 73 196
225 183 21 69 143 201 195 122
187 21 231 200 204 152 201 32
202 108 153 185 194 91 79 80
254 14 250 60 17 187 61 173
58 248 239 245 156 6 26 196
140 59 16 97 218 76 35 122
148 208 88 2 184 118 206 123
157 23 183 228 229 180 98 59
185 199 41 38 20 25 129 94
44 140 67 127 202 78 131 153
173 90 33 199 202 211 103 102
244 79 121 247 226 56 178 31
212 246 176 155 69 0 159 86
98 169 108 77 137 68 188 22
67 29 250 57 21 150 78 225
6 45 145 248 171 132 242 194
72 219 223 145 189 16 102 131
2 135 210 191 182 155 252 14
2 22 170 235 9 74 117 152
88 163 43 208 221 51 38 137
245 76 34 161 6 113 238 255
176 175 183 158 33 127 166 166
111 177 214 208 188 163 151 45
60 85 199 233 111 79 13 244
65 110 225 148 140 11 82 38
93 205 18 147 227 178 195 19
166 209 237 33 45 54 65 28
189 41 21 219 180 244 4 147
132 19 192 27 44 68 204 50
0 64 168 243 128 226 202 154
185 17 189 243 221 124 149 121
In this analysis, we have seen that the avalanche effect, studied both by changing one bit in the plaintext and by changing one bit in the key, indicates that the ciphertext undergoes a substantial change; this implies very clearly that the cipher is a strong one. The cryptanalysis carried out by considering the various attacks also confirms that the cipher is a potential one and cannot be broken by any cryptanalytic approach.
In this cipher, as we have four keys, namely K, L, d and e, the strength of the cipher is quite significant. Further, as K and L are embedded in A and B, and as all the binary bits of A, B and P (after the mod 256 operation) undergo a lot of scattering on account of the permutation, the confusion and diffusion caused by this process make the cipher much more effective. It is interesting to note that, as we have four keys, the cipher cannot be deciphered as long as even one of the keys remains secret. This leads to high security of the information under consideration.
References:
[1] William Stallings, Cryptography and Network Security: Principles and Practice, Third edition, Pearson, 2003.
[2] V.U.K. Sastry, S. Udaya Kumar, and A. Vinaya Babu, “A Block Cipher Basing upon Permutation, Substitution, and Iteration”, Journal of Information Privacy and Security, Vol.3, No.1, 2007.
[3] V.U.K. Sastry and N. Ravi Shankar, “Modified Hill Cipher for a Large Block of Plaintext with Interlacing and Iteration”, Journal of Computer Science 4(1), pp.15-20, 2008.
[4] V.U.K. Sastry, Aruna Varanasi, and S. Udaya Kumar, “Advanced Hill Cipher Involving Permutation and Iteration”, paper sent to International Journal of Advanced Research in Computer Science for publication.
[5] V.U.K. Sastry, Aruna Varanasi, and S. Udaya Kumar, “Advanced Hill Cipher Handling the Entire Plaintext as a Single Block”, paper sent to International Journal of Advanced Research in Computer Science for publication.
[6] V.U.K. Sastry, Aruna Varanasi, and S. Udaya Kumar, “Advanced Hill Cipher Involving a Key Applied on Both the Sides of the Plaintext”, paper sent to International Journal of Computational Intelligence and Information Security for publication.
Security Threats and Countermeasures in Mobile Ad Hoc Network
Kamanashis Biswas
Senior Lecturer, CSE Department
102, Shukrabad, Daffodil International University, Dhaka-1207
Email: ananda@daffodilvarsity.edu.bd
Abstract
A Mobile Ad Hoc Network (MANET) is a collection of communication devices, or nodes, that can communicate without any fixed infrastructure or pre-determined organization of available links. The nodes in a MANET are themselves responsible for dynamically discovering other nodes to communicate with. Although the ongoing trend is to adopt ad hoc networks for commercial uses owing to certain unique properties, the main challenge is their vulnerability to security attacks. A number of challenges, such as the open peer-to-peer network architecture, stringent resource constraints, the shared wireless medium, and the dynamic network topology, are posed in MANET. As MANET is quickly spreading owing to its capability of forming temporary networks without the aid of any established infrastructure or centralized administration, security has become a primary concern in providing secure communication. In this study, I identify the existing security threats an ad hoc network faces, the security services required to be achieved, and the countermeasures for attacks in each layer. From this study, I find that the need for a secure routing protocol is still a burning question. There is no general algorithm that defends well against the most commonly known attacks, such as the wormhole and rushing attacks. Some important issues, such as robust key management, trust-based systems, and data security at different layers, still demand more research.
Keywords: MANET, blackhole, wormhole, DoS, routing, TCP ACK storm, capture effect
1. Introduction
Nowadays, the mobile ad hoc network (MANET) is one of the most active recent fields and has received keen attention because of its self-configuration and self-maintenance capabilities [15]. While early research efforts assumed a friendly and cooperative environment and focused on problems such as wireless channel access and multihop routing, security has since become a primary concern in order to provide protected communication between nodes in a potentially hostile environment. Recent wireless research indicates that the wireless MANET presents a larger security problem than conventional wired and wireless networks.
Although mobile ad hoc networks have several advantages over traditional wired networks, they also face a unique set of challenges. Firstly, MANETs face challenges in secure communication. For example, the resource constraints on nodes in ad hoc networks limit the cryptographic measures that can be used to secure messages, so a MANET is susceptible to link attacks ranging from passive eavesdropping to active impersonation, message replay and message distortion. Secondly, mobile nodes without adequate protection are easy to compromise: an attacker can listen to, modify, and attempt to masquerade as one of the legitimate nodes for all the traffic on the wireless communication channel. Thirdly, a static security configuration may not be adequate for the dynamically changing topology; various attacks like DoS (Denial of Service) can easily be launched, flooding the network with spurious routing messages through a malicious node that gives incorrect updating information while pretending it to be a legitimate change of routing information. Finally, lack of cooperation and constrained capability are common in wireless MANETs, which makes anomalies hard to distinguish from normalcy. In general, the wireless MANET is particularly vulnerable due to its fundamental characteristics of open medium, dynamic topology, absence of central authorities, distributed cooperation, and constrained capability [2].
2. Related Work
A considerable amount of research has been done on security challenges and solutions in mobile ad hoc networks. Zhou and Haas have proposed using threshold cryptography for providing security to the network [17]. Hubaux et al. have defined a method that is designed to ensure equal participation among members of the ad hoc group, and that gives each node the authority to issue certificates [3]. Kong et al. [8] have proposed a secure ad hoc routing protocol based on secret sharing; unfortunately, this protocol is based on erroneous assumptions, e.g., that each node cannot impersonate the MAC address of multiple other nodes. Yi et al. have designed a general framework for secure ad hoc routing [16]. Deng et al. have focused on the routing security issues in MANETs and have described a solution to the 'black hole' problem [2]. Sanzgiri et al. have proposed a secure routing protocol, ARAN, which is based on certificates and successfully defeats all identified attacks [13]. Yang et al. have identified the security issues related to multihop network connectivity, discussed the challenges of security design, and reviewed the state-of-the-art security proposals that protect the MANET link-layer and network-layer operations of delivering packets over the multihop wireless channel [15]. In this paper, I isolate the attacks layer by layer and identify their countermeasures.
3. Security Services
The ultimate goal of the security solutions for MANETs is to provide security services, such as confidentiality, integrity, authentication, nonrepudiation, availability and scalability, to mobile users. In order to achieve this goal, a security solution should provide complete protection spanning the entire protocol stack; there is no single mechanism that provides all the security services in MANETs. The first security service that must be achieved is availability, which ensures the survivability of network services despite various attacks such as Denial of Service or Distributed Denial of Service attacks. Confidentiality ensures that transmitted information is readable or accessible only by the legal or authorized party; basically, it protects data from passive attacks. Integrity implies that only authorized parties are allowed to modify the information or messages, and it also ensures that a message being transmitted is never corrupted. Authentication provides the assurance that data are accessed and supplied only by authorized parties; it assures the recipient that a message is from the source it claims to be from. Nonrepudiation prevents either sender or receiver from denying a transmitted message: when a message is sent, the receiver can prove that the message was in fact sent by the alleged sender, and after sending a message the sender can prove that the message was received by the alleged receiver. Besides these common security services, scalability and authorization are two major issues that should be considered in achieving the security goals. Scalability deals with adding a new node to the existing network without giving the attacker any opportunity to compromise that node and gain illegal access. One important point is that securing one part plus securing another part does not make the whole system secure. In the case of authorization, access control mechanisms are used to limit and control access to host systems and applications via communication links.
4. Security Threats in Different Layers of MANET

This section focuses on the different types of security threats that are common to the TCP/IP network model. Attacks on the physical layer, link layer, network layer, transport layer and application layer are identified and described here.
4.1 Threats in Physical Layer

The physical layer must adapt to rapid changes in link characteristics. The most common physical attacks in MANET are eavesdropping, interference, denial of service and jamming, since it is easy to jam or intercept the common radio signal. Eavesdropping is the reading of messages and conversations by unintended receivers. The nodes in a MANET share a wireless medium, and wireless communication uses the RF spectrum and is broadcast by nature, so it can easily be intercepted with receivers tuned to the proper frequency. As a result, transmitted messages can be overheard, and fake messages can be injected into the network. Jamming and interference of radio signals cause messages to be lost or corrupted. A powerful transmitter can generate a signal strong enough to overwhelm the target signal and disrupt communications. Pulse and random noise are the most common types of signal jamming [14].
4.2 Threats in Link Layer

A number of attacks can be launched in the link layer by disrupting the cooperation of the protocols of this layer. The IEEE 802.11 MAC is vulnerable to DoS attacks. To launch a DoS attack, the attacker may exploit the binary exponential backoff scheme: the attacker corrupts others' frames and captures the channel by sending data continuously. Another vulnerability to DoS attacks in the IEEE 802.11 MAC is exposed through the NAV field carried in the RTS/CTS frames. During the RTS/CTS handshake, the sender sends a small RTS frame including the time needed to complete the CTS, data and ACK frames. All the neighbors of the sender and receiver update their NAV field according to the transmission duration they overhear. An attacker in the local neighborhood is thus also aware of the duration of the ongoing transmission, and he/she may transmit a few bits within this period to incur bit errors in a victim's link-layer frame via wireless interference [15]. Another issue in link-layer security is that the first security scheme provided by the IEEE 802.11 standards was Wired Equivalent Privacy (WEP). Basically, it was designed to provide security for WLANs, but it suffers from many design flaws and from weaknesses in the way the RC4 cipher is used. It is well known that WEP is vulnerable to message privacy and message integrity attacks and to probabilistic cipher-key recovery attacks; WEP has now been replaced by AES in 802.11i. Most of the link-layer attacks in MANET are removed by enhancing an existing protocol or proposing a new protocol to thwart such threats; for example, WPA and RSN/AES-CCMP are being developed to improve the cryptographic strength and enhance security. However, attacks using the NAV field of the RTS/CTS frames remain unsolved, and it remains unclear how to defeat such resource-consumption DoS attacks in MANET.
4.3 Threats in Network Layer

The whole communication among the nodes of a MANET mainly depends on the routing protocols, as the nodes also function as routers that discover and maintain routes to other nodes in the network. As a result, the network layer has become the most interesting target for attackers. Two types of network-layer vulnerability can be exploited: routing attacks and packet-forwarding attacks. An attacker can absorb network traffic, inject himself into the path between source and destination, and thus control the network traffic flow; this is known as a routing attack. Another type of attack, the routing table overflow attack, afflicts proactive routing algorithms, which update routing information periodically. To launch a routing table overflow attack, the attacker tries to create routes to nonexistent nodes at the authorized nodes present in the network; he/she can simply send excessive route advertisements to overflow the target system's routing table. The goal is to create enough routes that the creation of new routes is prevented or the implementation of the routing protocol is overwhelmed. The routing cache poisoning attack takes advantage of the promiscuous mode of routing table updating; it occurs when the information stored in routing tables is deleted, altered or injected with false information. Suppose a malicious node M wants to poison routes to node X. M could broadcast spoofed packets with a source route to X via M itself; neighboring nodes that overhear the packets may then add the route to their route caches [14]. Besides these common attacks, more sophisticated and subtle attacks have been identified in MANET: blackhole (or sinkhole), Byzantine, wormhole, rushing attacks, etc.
The wormhole attack is also known as the tunneling attack. A tunneling attack is one in which two or more nodes collaborate to encapsulate and exchange messages between them along existing data routes, as shown in the following figure. Suppose node S wishes to form a route to D and initiates route discovery. When M1 receives a RREQ from S, M1 encapsulates the RREQ and tunnels it to M2 through an existing data route, in this case {M1 --> A --> B --> C --> M2}. When M2 receives the encapsulated RREQ, it forwards it on to D as if it had only traveled {S --> M1 --> M2 --> D}; neither M1 nor M2 updates the packet header. After route discovery, the destination finds two routes from S of unequal length: one of length 5 and another of length 4. If M2 tunnels the RREP back to M1, S will falsely consider the path to D via M1 better than the path to D via A. Thus, tunneling can prevent honest intermediate nodes from correctly incrementing the metric used to measure path lengths.
Figure 1: Wormhole or Tunneling Attack
Unlike the wormhole attack, the blackhole attack is performed in two steps. For example, in the next figure, node 1 wants to send data packets to node 4 and initiates the route discovery process. In the first step, the malicious node exploits the mobile ad hoc routing protocol, such as AODV, to advertise itself as having a valid route to the destination node. In the second step, the attacker consumes the packets and never forwards them. Assume that node 3 is a malicious node: it claims that it has a route to the destination whenever it receives RREQ packets and immediately sends a response to node 1. If the response from node 3 reaches node 1 first, node 1 thinks that the route discovery is complete, ignores all other reply messages, and begins to send data packets via node 3. As a result, all packets passing through the malicious node are consumed or lost [2].
Figure 2: Blackhole Attack
A Byzantine attack can be launched by a single malicious node or by a group of nodes working in cooperation: a compromised intermediate node works alone, or a set of compromised intermediate nodes works in collusion, to form the attack. The compromised nodes may create routing loops, forward packets along a long route instead of the optimal one, or even drop packets. This attack degrades the routing performance and also disrupts the routing services.
Another advanced type of attack is the rushing attack. In the wormhole attack, two colluding attackers form a tunnel to falsify the original route; if the transmission path happens to be fast enough (e.g. a dedicated channel), the tunneled packets can propagate faster than those following a normal multi-hop route, which results in the rushing attack. Basically, it is another form of denial-of-service (DoS) attack that can be launched against all currently proposed on-demand MANET routing protocols, such as ARAN and Ariadne [5].
Energy is a critical parameter in a MANET, and battery-powered devices try to conserve energy by transmitting only when absolutely necessary [2]. The target of the resource consumption attack is to send excessive route-discovery requests or unnecessary packets to the victim node in order to consume its battery life. An attacker or compromised node can thus disrupt the normal functioning of the MANET. This attack is also known as the sleep deprivation attack.
The location disclosure attack is a form of information disclosure attack. The malicious node leaks information regarding the location or the structure of the network and uses that information for further attacks. It gathers node location information, such as a route map, and learns which nodes are situated on the target route. Traffic analysis remains one of the unsolved security attacks against MANETs.
A number of routing protocols used in MANET are vulnerable to the different types of attacks above. For example, in the Ad hoc On-demand Distance Vector (AODV) routing algorithm, the attacker may advertise a route with a smaller distance metric than the original distance, or advertise a routing update with a large sequence number and so invalidate all routing updates from other nodes. In the case of the Dynamic Source Routing (DSR) protocol, it is possible for the attacker to modify the source route listed in the RREQ or RREP packets; deleting a node from the list, switching the order of nodes, or appending a new node to the list are also potential dangers in DSR. Authenticated Routing for Ad hoc Networks (ARAN) is not immune to the rushing attack. ARIADNE is an on-demand secure ad hoc routing protocol based on DSR that implements highly efficient symmetric cryptography; although it is free from RREQ flooding and cache poisoning attacks, it is vulnerable to the wormhole attack and the rushing attack. Another routing protocol, SEAD, also does not cope with the wormhole attack.
4.4 Threats in Transport Layer

The security issues related to the transport layer are authentication, securing end-to-end communications through data encryption, handling delays, packet loss, and so on. The transport layer protocols in a MANET provide end-to-end connection setup, reliable packet delivery, flow control, congestion control, and teardown of end-to-end connections. As with the TCP protocol in the Internet model, the nodes in a MANET are vulnerable to SYN flooding and session hijacking attacks.
The SYN flooding attack is a DoS attack performed by creating a large number of half-open TCP connections with a target node. A TCP connection between two communicating parties is established by completing a three-way handshake, as shown in the following figure. The sender sends a SYN message to the receiver with a randomly generated ISN (Initial Sequence Number). The receiver generates another ISN and sends a SYN message including it as an acknowledgement of the received SYN message. The sender then acknowledges the receiver's message, and in this way the connection is established between the two communicating parties.
Figure 3: TCP SYN Flooding
During a SYN flooding attack, a malicious node sends a large number of SYN packets to the target node, spoofing the return addresses of the SYN packets. When the target machine receives the SYN packets, it sends out SYN-ACK packets and waits for the responses, i.e. the ACK packets. The victim node stores all the pending SYN packets in a fixed-size table while it waits for the acknowledgement of the three-way handshake. These pending connection requests can overflow the buffer and may make the system unavailable for a long time.
Among the other transport layer attacks, session hijacking is a critical one, as it gives a malicious node the opportunity to behave as a legitimate system. Communications are authenticated only at the beginning of the session setup, and the attacker may take advantage of this to commit a session hijacking attack. First, he/she spoofs the IP address of the target machine and determines the correct sequence number; after that, he/she performs a DoS attack on the victim, making the target system unavailable for some time. The attacker then continues the session with the other party as a legitimate system.
Finally, the TCP ACK storm is a very simple transport layer attack. To perform it, the attacker first launches a TCP session hijacking attack. The attacker then sends injected session data, as depicted in Figure 4, and node A acknowledges the received data by sending an ACK packet to node B. Node B is confused, as the packet contains an unexpected sequence number, and tries to resynchronize the TCP session with node A by sending an ACK packet containing the intended sequence number. These steps are repeated again and again, resulting in the TCP ACK storm [14].
Figure 4: TCP ACK Storm
4.5 Threats in Application Layer

Applications need to be designed to handle frequent disconnection and reconnection with peer applications, as well as widely varying delay and packet-loss characteristics [12]. Like the other layers, the application layer is a vulnerable and attractive layer for attackers, because it contains user data and supports many protocols, such as SMTP, HTTP, TELNET and FTP, which have many vulnerabilities and access points for attackers. Various kinds of malicious code, such as viruses, worms, backdoors and Trojan horses, attack both operating systems and user applications, causing computer systems and networks to slow down or even be damaged. An attacker can mount this type of attack in a MANET to seek the information he/she desires [14]. For example, large numbers of computers can be controlled by attackers using worms, and these worms can open backdoors that let intruders bypass all security mechanisms.
5. Countermeasures
The big problem in securing a MANET is that there is no common framework or security model that can keep a MANET free from all these attacks; the only way to provide secure communication is to apply security mechanisms at each individual layer. For example, directional antennas [1] are used at the media access layer to protect against wormhole attacks, while packet leashes [6] are used for network-layer defense. In this section, the layer-wise countermeasures are described. Consider first the physical layer. The physical layer of a MANET is prone to signal jamming, DoS attacks and some passive attacks. Two spread-spectrum technologies can be used to make it difficult to detect or jam signals: spread-spectrum technology changes frequency in a random fashion or spreads the signal over a wider spectrum, which makes its capture difficult. FHSS (Frequency Hopping Spread Spectrum) makes the signal appear to an eavesdropper as unintelligible short-duration impulse noise, while DSSS (Direct Sequence Spread Spectrum) represents each data bit of the original signal by multiple bits in the transmitted signal through an 11-bit Barker code. Both FHSS and DSSS pose difficulties for a malicious user trying to intercept the radio signals: to capture and recover the content of a transmitted signal, the attacker must know the frequency band, the spreading code and the modulation technique. Still, there is a problem: these mechanisms are secure only as long as the hopping pattern or spreading code is unknown to the eavesdropper [14].
The security issues closely related to the link layer are protecting the wireless MAC protocol and providing link-layer security support. One of the vulnerabilities of the link layer is its binary exponential backoff scheme, described in Section 4.2; recently a security extension to 802.11 was proposed in [10], in which the original 802.11 backoff scheme is slightly modified so that the backoff timer at the sender is provided by the receiver instead of the sender setting an arbitrary timer value on its own. As mentioned earlier, the threat of resource consumption (using the NAV field) is still an open challenge, though some schemes have been proposed, such as ERA-802.11 [11]. Finally, the commonly known security fault in the link layer is the weakness of WEP. Fortunately, 802.11i/WPA [7] has mended all the obvious loopholes in WEP, and further countermeasures such as RSN/AES-CCMP are being developed to improve the strength of wireless security.
The network layer is more vulnerable to attacks than all the other layers in MANET, with a wide variety of security threats imposed on it. The use of secure routing protocols provides the first line of defense. Active attacks such as the modification of routing messages can be prevented through source authentication and message integrity mechanisms; for example, digital signatures, message authentication codes (MACs), hashed MACs (HMACs) and one-way HMAC key chains are used for this purpose (a generic sketch of a one-way key chain follows this paragraph). An unalterable and independent physical metric, such as time delay or geographical location, can be used to detect the wormhole attack; for example, packet leashes are used to combat this attack [6]. IPSec, most commonly used at the network layer in the Internet, could be used in MANET to provide a certain level of confidentiality. The secure routing protocol ARAN protects against various attacks such as modification of sequence numbers, modification of hop counts, modification of source routes, spoofing, and fabrication of source routes [13]. The research by Deng et al. [2] presents a solution to overcome the blackhole attack: the ability of an intermediate node to reply to a route request is disabled, so that all reply messages are sent out only by the destination node.
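The one-way key chain mentioned above can be sketched as follows. This is a generic hash-chain construction in Java, not the exact mechanism of any particular routing protocol; the chain length and hash function are illustrative choices.

    import java.security.MessageDigest;

    // Generic one-way key chain: chain[i] = H(chain[i-1]). The last element is published as an
    // authenticated anchor; earlier elements are disclosed one by one and verified by re-hashing.
    public class HashChain {
        static byte[][] buildChain(byte[] seed, int n) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[][] chain = new byte[n + 1][];
            chain[0] = seed;                       // secret seed, kept by the signer
            for (int i = 1; i <= n; i++)
                chain[i] = md.digest(chain[i - 1]);
            return chain;                          // chain[n] is the public anchor
        }

        // Verify that 'disclosed' hashes to 'anchor' after 'steps' applications of H.
        static boolean verify(byte[] disclosed, byte[] anchor, int steps) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] h = disclosed;
            for (int i = 0; i < steps; i++) h = md.digest(h);
            return MessageDigest.isEqual(h, anchor);
        }
    }

Because the hash function is one-way, a node that has authenticated the anchor can verify each newly disclosed key, while an attacker cannot forge the next key in the chain.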
One way to provide message confidentiality at the transport layer is point-to-point or end-to-end communication through data encryption. Though TCP is the main connection-oriented reliable protocol in the Internet, it does not fit well in MANET. TCP feedback (TCP-F) [4], TCP explicit link failure notification (TCP-ELFN) [4], the ad hoc transmission control protocol (ATCP) [4] and the ad hoc transport protocol (ATP) have been developed, but none of them covers the security issues involved in MANET. The Secure Sockets Layer (SSL) [9], Transport Layer Security (TLS) [9] and Private Communications Transport (PCT) [9] protocols were designed on the basis of public-key cryptography to provide secure communications; TLS/SSL provides protection against masquerade attacks, man-in-the-middle attacks, rollback attacks, and replay attacks.
Viruses, worms, backdoors and Trojan horses are the common and challenging application layer attacks in any network. Antivirus software and firewalls provide protection against some of these attacks: antivirus software can detect malicious software, while a firewall can provide access control, user authentication, incoming and outgoing packet filtering, network filtering, accounting services, etc. Anti-spyware software can detect spyware and malicious programs running on the system. Still, a firewall alone is not enough, because in certain situations an attacker can penetrate even the firewall and mount an attack. Another mechanism, the Intrusion Detection System (IDS), is effective at preventing certain attacks, such as attempts to gain unauthorized access to a service or to pretend to be a legitimate user. The application layer can also detect a DoS attack more quickly than the lower layers.
6. Discussions
Mobile ad hoc networks have the ability to set up a network on the fly in a harsh environment, where it may not be possible to deploy a traditional network infrastructure. While ad hoc networks have vast potential, there are still many challenges left to overcome, and security is an important requirement for the deployment of MANET. In this paper, I have reviewed the challenges and solutions concerning security threats in mobile ad hoc networks. A variety of attacks related to the different layers has been presented; this isolation of attacks on the basis of layers makes it easy to understand the possible security flaws in ad hoc networks. The potential countermeasures, whether currently used in wired or wireless networking or newly designed specifically for MANET, are also presented in this study. Finally, it can be said that to provide security in a MANET, security must be ensured for the entire system, since a single weak point may give the attacker the opportunity to gain access to the system and perform malicious tasks.
Significant research in MANET has been ongoing for many years, but it is still at an early stage. Existing solutions are well suited only to specific attacks: they can cope well with known attacks, but many unanticipated or combined attacks remain undiscovered. The resource consumption DoS attack is still unclear to researchers. More research is needed on secure routing protocols, robust key management, trust-based protocols, integrated approaches to routing security, data security at different levels, and cooperation enforcement. Identifying new security threats as well as new countermeasures demands more research in MANET. The table shown below summarizes the findings of this research.


Table 1: Security threats and countermeasures at different layers

Application layer
  Attacks:   Lack of cooperation attacks, malicious code attacks (viruses, worms, backdoors, spyware, Trojan horses), etc.
  Solutions: Cooperation enforcement mechanisms (Nuglets, Confidant, CORE), anti-virus software, firewalls, IDS, etc.

Transport layer
  Attacks:   Session hijacking attack, SYN flooding attack, TCP ACK storm attack, etc.
  Solutions: Authentication and securing of end-to-end or point-to-point communication, use of public key cryptography (SSL, TLS, SET, PCT), etc.

Network layer
  Attacks:   Routing protocol attacks (e.g. on DSR, AODV), cache poisoning, table overflow attacks, wormhole, blackhole, Byzantine, flooding, resource consumption, impersonation, and location disclosure attacks, etc.
  Solutions: Source authentication and message integrity mechanisms to prevent routing message modification, secure routing protocols (e.g. IPSec, ESP, SAR, ARAN) to overcome blackhole and impersonation attacks, packet leashes and the SECTOR mechanism for the wormhole attack, etc.

Data link layer
  Attacks:   Traffic analysis, monitoring, MAC (802.11) disruption, WEP weakness, etc.
  Solutions: No effective mechanism to prevent traffic analysis and monitoring; secure link layer protocols such as LLSP; using WPA, etc.

Physical layer
  Attacks:   Jamming, interception, eavesdropping, etc.
  Solutions: Using spread spectrum mechanisms, e.g. FHSS, DSSS.

7. References
[1] S. Capkun, L. Buttyan, and J.-P. Hubaux, “SECTOR: Secure Tracking of Node Encounters in Multi-hop
Wireless Networks,” Proc. of the ACM Workshop on Security of Ad Hoc and Sensor Networks, 2003.
[2] H. Deng, W. Li, and D. P. Agrawal, “Routing security in wireless ad hoc networks,” IEEE Communications
Magazine, Vol. 40, Oct. 2002, pp. 70-75, ISSN: 0163-6804.
[3] J.-P. Hubaux, L. Buttyan, and S. Capkun, “The quest for security in mobile ad hoc networks,” In Proc. ACM
MOBICOM, Oct. 2001.
[4] H. Hsieh and R. Sivakumar, “Transport over Wireless Networks,” Handbook of Wireless Networks and
Mobile Computing, edited by Ivan Stojmenovic, John Wiley and Sons, Inc., 2002.
[5] Y. Hu, A. Perrig, and D. Johnson, “Ariadne: A Secure On-Demand Routing Protocol for Ad Hoc Networks,”
Proc. of MobiCom 2002, Atlanta, 2002.
[6] Y. Hu, A. Perrig, and D. Johnson, “Packet Leashes: A Defense Against Wormhole Attacks in Wireless Ad
Hoc Networks,” Proc. of IEEE INFOCOM, 2002.
[7] IEEE Std. 802.11i/D30, “Wireless Medium Access Control (MAC) and Physical Layer (PHY)
Specifications: Specification for Enhanced Security,” 2002.
[8] J. Kong et al., “Providing robust and ubiquitous security support for mobile ad-hoc networks,” In Proc. IEEE
ICNP, pp. 251-260, 2001.
[9] C. Kaufman, R. Perlman, and M. Speciner, “Network Security: Private Communication in a Public World,”
Prentice Hall PTR, a division of Pearson Education, Inc., 2002.
[10] P. Kyasanur and N. Vaidya, “Detection and Handling of MAC Layer Misbehavior in Wireless Networks,”
DCC, 2003.
[11] A. Perrig, R. Canetti, J. Tygar, and D. Song, “The TESLA Broadcast Authentication Protocol,” Internet
Draft, 2000.
[12] R. Ramanathan and J. Redi (BBN Technologies), “A brief overview of ad hoc networks: challenges and
directions,” IEEE Communications Magazine, Vol. 40, May 2002, pp. 20-22, ISSN: 0163-6804.
[13] K. Sanzgiri, B. Dahill, B. N. Levine, C. Shields, and E. M. Belding-Royer, “A secure routing protocol for
ad hoc networks,” In Proc. of the 10th IEEE International Conference on Network Protocols, 12-15 Nov.
2002, pp. 78-87, ISSN: 1092-1648.
[14] B. Wu, J. Chen, J. Wu, and M. Cardei, “A Survey of Attacks and Countermeasures in Mobile Ad Hoc
Networks,” Department of Computer Science and Engineering, Florida Atlantic University,
http://student.fau.edu/jchen8/web/papers/SurveyBookchapter.pdf
[15] H. Yang, H. Luo, F. Ye, S. Lu, and L. Zhang, “Security in mobile ad hoc networks: challenges and
solutions,” IEEE Wireless Communications, Vol. 11, pp. 38-47, ISSN: 1536-1284.
[16] S. Yi, P. Naldurg, and R. Kravets, “Security-aware ad hoc routing for wireless networks,” In Proc. ACM
MobiHoc, 2001.
[17] L. Zhou and Z. J. Haas, “Securing ad hoc networks,” IEEE Network, Vol. 13, Nov/Dec 1999, pp. 24-30,
ISSN: 0890-8044.

Kamanashis Biswas, born in 1982, received his postgraduate degree from Blekinge Institute of Technology,
Sweden, in 2007. His field of specialization is security engineering. At present, he is working as a Senior
Lecturer at Daffodil International University, Dhaka, Bangladesh.


A FAILURE-WAITING-REPAIR MODEL OF A STANDBY REDUNDANT COMPLEX SYSTEM

Pravesh Kumar, Research Scholar, Singhania University, Rajasthan, India

Dr. Deepankar Sharma, Prof. & Head, Dept. of Maths, D. J. College of Engg. & Tech.,
Modinagar, India

ABSTRACT
In this paper, the authors consider a complex redundant system for reliability and M.T.T.F. evaluation. The
considered process is non-Markovian, so the authors have used the supplementary variable technique to
formulate the mathematical model of the considered complex system. The model has been solved with the aid
of the Laplace transform, and the various transition state probabilities have been obtained. The ergodic behavior
of the system, some particular cases, the reliability of the system, and the mean time to failure have also been
computed to improve the practical utility of the model. A numerical example with graphical illustrations is
given at the end to highlight the important results.

KEY WORDS: Markovian process, supplementary variables, ergodic behavior, Laplace transform, etc.


1. INTRODUCTION
In this system, there are two identical components in standby redundancy. When the online component fails,
the standby component is brought online with the help of an automatic, imperfect switching device. Each
component contains two identical units working in parallel redundancy (Gupta et al., 1986). On failure of any
one unit, the component works in a reduced-efficiency state (Chung, 1988) and has to wait for repair to begin.
A component fails completely when both of its units stop working, and it then goes immediately to the repair
facility. The state-transition diagram (Nagraja et al., 2004) is shown in Fig-1. All failures follow the exponential
time distribution, whereas all repairs follow general time distributions. The waiting time also follows a general
time distribution.

2. ASSUMPTIONS
The following assumptions are associated with the model:
(i) Initially, all units are operable at full efficiency.
(ii) Only one change can take place in any one state.
(iii) Failures are statistically independent.
(iv) The system has to wait for repair when only one unit has failed.
(v) Repair is perfect, i.e., after repair the units work like new ones.
(vi) All failures follow the exponential time distribution.
(vii) All repairs and waiting follow general time distributions.

3. NOMENCLATURE
P0(t) / P3(t)            : Pr{at time t, the system is operable due to the working of the first / second component}.
P1(x,t)∆ / P5(x,t)∆      : Pr{at time t, the system is in a degraded state due to failure of one unit of the
                           first / second component and has to wait for repair}. The elapsed waiting time
                           lies in (x, x+∆).
P2(y,t)∆ / P6(y,t)∆      : Pr{at time t, the single failed unit of the first / second component is ready for
                           repair}. The elapsed repair time lies in (y, y+∆).
P7(m,t)∆                 : Pr{at time t, the system is failed due to failure of both units of the second
                           component}. The elapsed repair time lies in (m, m+∆).
P4(z,t)∆                 : Pr{at time t, the system is failed due to failure of the switching device}. The
                           elapsed repair time lies in (z, z+∆).
α1 / α2                  : Failure rate of any one unit / of the second unit of either component.
(1 − γ)                  : Failure rate of the switching device.
w(x)                     : General waiting rate.
β1(y)∆ / β2(m)∆ / β(z)∆  : First-order probability that one unit / both units / the switching device will be
                           repaired in the time interval (y, y+∆) / (m, m+∆) / (z, z+∆), conditioned on not
                           being repaired up to the time y / m / z.
\(\bar F(s)\)            : Laplace transform of the function F(t).

4. FORMULATION OF MATHEMATICAL MODEL


Elementary probability considerations and a limiting procedure yield the following set of difference-differential
equations (Sharma, Sharma, and Masood, 2005), governing the behavior of the considered system:

\[
\left[\frac{d}{dt} + \alpha_1\right] P_0(t)
  = \int_0^\infty P_2(y,t)\,\beta_1(y)\,dy
  + \int_0^\infty P_7(m,t)\,\beta_2(m)\,dm
\qquad (1)
\]

\[
\left[\frac{\partial}{\partial x} + \frac{\partial}{\partial t} + \gamma\alpha_2 + w(x)\right] P_1(x,t) = 0
\qquad (2)
\]

\[
\left[\frac{\partial}{\partial y} + \frac{\partial}{\partial t} + \beta_1(y)\right] P_2(y,t) = 0
\qquad (3)
\]

\[
\left[\frac{d}{dt} + (1-\gamma) + \alpha_1\right] P_3(t)
  = \gamma\alpha_2\, P_1(t)
  + \int_0^\infty P_6(y,t)\,\beta_1(y)\,dy
  + \int_0^\infty P_4(z,t)\,\beta(z)\,dz
\qquad (4)
\]

\[
\left[\frac{\partial}{\partial z} + \frac{\partial}{\partial t} + \beta(z)\right] P_4(z,t) = 0
\qquad (5)
\]

[State-transition diagram omitted. Good states: P0(t), P3(t); degraded and waiting for repair: P1(x,t), P5(x,t);
ready for repair: P2(y,t), P6(y,t); failed states: P4(z,t), P7(m,t); transition rates: α1, γα2, α2, (1−γ), w(x),
β1(y), β2(m), β(z).]

Fig-1: State-transition diagram

\[
\left[\frac{\partial}{\partial x} + \frac{\partial}{\partial t} + \alpha_2 + w(x)\right] P_5(x,t) = 0
\qquad (6)
\]

\[
\left[\frac{\partial}{\partial y} + \frac{\partial}{\partial t} + \beta_1(y)\right] P_6(y,t) = 0
\qquad (7)
\]

\[
\left[\frac{\partial}{\partial m} + \frac{\partial}{\partial t} + \beta_2(m)\right] P_7(m,t) = 0
\qquad (8)
\]

Boundary conditions are:

\[
P_1(0,t) = \alpha_1 P_0(t) \qquad (9)
\]
\[
P_2(0,t) = \int_0^\infty P_1(x,t)\,w(x)\,dx \qquad (10)
\]
\[
P_4(0,t) = (1-\gamma)\,P_3(t) \qquad (11)
\]
\[
P_5(0,t) = \alpha_1 P_3(t) \qquad (12)
\]
\[
P_6(0,t) = \int_0^\infty P_5(x,t)\,w(x)\,dx \qquad (13)
\]
\[
P_7(0,t) = \alpha_2 P_5(t) \qquad (14)
\]

Initial conditions are:

\[
P_0(0) = 1; \quad \text{otherwise, all state probabilities are zero.} \qquad (15)
\]

5. SOLUTION OF THE MODEL


Taking the Laplace transforms of equations (1) through (14), subject to the initial conditions (15), and then
solving them one by one (Gnedenko et al., 1969), we obtain:
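As a sketch of the standard supplementary-variable step that the authors summarize here, consider equation (2);
the intermediate notation agrees with the operators D and S defined after equation (23):
\[
\left[\frac{\partial}{\partial x} + s + \gamma\alpha_2 + w(x)\right]\bar P_1(x,s) = 0
\;\Longrightarrow\;
\bar P_1(x,s) = \bar P_1(0,s)\,
  \exp\!\Big(-(s+\gamma\alpha_2)x - \int_0^x w(u)\,du\Big),
\]
and integrating over the elapsed waiting time, using the boundary condition (9),
\[
\bar P_1(s) = \int_0^\infty \bar P_1(x,s)\,dx
            = \alpha_1\,\bar P_0(s)\, D_w(s+\gamma\alpha_2),
\]
which, with \(\bar P_0(s) = 1/B(s)\), is exactly equation (17) below.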
\[
\bar P_0(s) = \frac{1}{B(s)} \qquad (16)
\]
\[
\bar P_1(s) = \frac{\alpha_1}{B(s)}\, D_w(s+\gamma\alpha_2) \qquad (17)
\]
\[
\bar P_2(s) = \frac{\alpha_1}{B(s)}\, \bar S_w(s+\gamma\alpha_2)\, D_{\beta_1}(s) \qquad (18)
\]
\[
\bar P_3(s) = \frac{A(s)}{B(s)} \qquad (19)
\]
\[
\bar P_4(s) = (1-\gamma)\,\frac{A(s)}{B(s)}\, D_\beta(s) \qquad (20)
\]
\[
\bar P_5(s) = \alpha_1\,\frac{A(s)}{B(s)}\, D_w(s+\alpha_2) \qquad (21)
\]
\[
\bar P_6(s) = \alpha_1\,\frac{A(s)}{B(s)}\, \bar S_w(s+\alpha_2)\, D_{\beta_1}(s) \qquad (22)
\]
\[
\bar P_7(s) = \alpha_1\alpha_2\,\frac{A(s)}{B(s)}\, D_w(s+\alpha_2)\, D_{\beta_2}(s) \qquad (23)
\]

where

\[
D_i(j) = \frac{1 - \bar S_i(j)}{j}, \quad \forall\, i \text{ and } j,
\]
\[
A(s) = \frac{\gamma\,\alpha_1\alpha_2\, D_w(s+\gamma\alpha_2)}
            {s + \alpha_1\big[1 - \bar S_w(s+\alpha_2)\,\bar S_{\beta_1}(s)\big] + (1-\gamma)\,s\,D_\beta(s)}
\]

and

\[
B(s) = s + \alpha_1 - \alpha_1\, \bar S_w(s+\gamma\alpha_2)\, \bar S_{\beta_1}(s)
         - \alpha_1\alpha_2\, A(s)\, D_w(s+\alpha_2)\, \bar S_{\beta_2}(s).
\]

It is worth noticing that

\[
\sum_{i=0}^{7} \bar P_i(s) = \frac{1}{s}. \qquad (24)
\]
6. ERGODIC BEHAVIOR OF THE SYSTEM
Using Abel’s lemma, viz. \(\lim_{s \to 0} s\,\bar F(s) = \lim_{t \to \infty} F(t) = F\) (say), provided the limit on
the right-hand side exists, in equations (16) through (23), we can obtain the time-independent state probabilities.
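For example, applying the lemma to equation (16) gives the steady-state probability of the initial good state,
\[
P_0 = \lim_{s \to 0} s\,\bar P_0(s) = \lim_{s \to 0} \frac{s}{B(s)},
\]
and the remaining steady-state probabilities follow in the same way from equations (17) through (23).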


7. PARTICULAR CASES
(i) When repairs and waiting follow exponential time distributions
Setting \(\bar S_i(j) = \dfrac{i}{j+i}\) for all i and j in equations (16) through (23), one may obtain the
following Laplace transforms of the various state probabilities (Pandey and Jacob, 1995) for this case:
\[
\bar P_0(s) = \frac{1}{C(s)} \qquad (25)
\]
\[
\bar P_1(s) = \frac{\alpha_1}{C(s)}\cdot\frac{1}{s+\gamma\alpha_2+w} \qquad (26)
\]
\[
\bar P_2(s) = \frac{\alpha_1}{C(s)}\cdot\frac{w}{s+\gamma\alpha_2+w}\cdot\frac{1}{s+\beta_1} \qquad (27)
\]
\[
\bar P_3(s) = \frac{D(s)}{C(s)} \qquad (28)
\]
\[
\bar P_4(s) = (1-\gamma)\,\frac{D(s)}{C(s)}\cdot\frac{1}{s+\beta} \qquad (29)
\]
\[
\bar P_5(s) = \frac{D(s)}{C(s)}\cdot\frac{\alpha_1}{s+\alpha_2+w} \qquad (30)
\]
\[
\bar P_6(s) = \frac{D(s)}{C(s)}\cdot\frac{\alpha_1}{s+\alpha_2+w}\cdot\frac{w}{s+\beta_1} \qquad (31)
\]
\[
\bar P_7(s) = \frac{D(s)}{C(s)}\cdot\frac{\alpha_1\alpha_2}{s+\alpha_2+w}\cdot\frac{1}{s+\beta_2} \qquad (32)
\]

where

\[
D(s) = \frac{\gamma\,\alpha_1\alpha_2/(s+\gamma\alpha_2+w)}
            {s + \alpha_1\Big[1 - \dfrac{w}{s+\alpha_2+w}\cdot\dfrac{\beta_1}{s+\beta_1}\Big]
               + \dfrac{s(1-\gamma)}{s+\beta}}
\]

and

\[
C(s) = s + \alpha_1 - \frac{\alpha_1 w \beta_1}{(s+\gamma\alpha_2+w)(s+\beta_1)}
         - \frac{\alpha_1\alpha_2\, D(s)\,\beta_2}{(s+\alpha_2+w)(s+\beta_2)}.
\]

(ii) When the switching device is perfect:
Putting γ = 1 in equations (16) through (23), we can obtain all the state probabilities for this case. Note that
P4(t) is zero here.
(iii) When there is no waiting for repair:
Putting w(x) = 0 in equations (16) through (23), we may obtain all the state probabilities for this case.


8. RELIABILITY AND M.T.T.F. EVALUATION


The reliability of the considered system is given by

\[
R(t) = A\,e^{-\alpha_1 t} + B\,e^{-(\gamma\alpha_2+w)t} + C\,e^{-(\alpha_1+1-\gamma)t} \qquad (33)
\]

where

\[
A = 1 + \frac{\alpha_1}{\gamma\alpha_2+w}
      - \frac{\gamma\,\alpha_1\alpha_2}{(\alpha_1-\gamma\alpha_2-w)(1-\gamma)},
\]
\[
B = \frac{\gamma\,\alpha_1\alpha_2}{(\alpha_1+1-\gamma-\gamma\alpha_2-w)(\alpha_1-\gamma\alpha_2-w)}
  - \frac{\alpha_1}{\gamma\alpha_2+w},
\]

and

\[
C = \frac{\gamma\,\alpha_1\alpha_2}{(\alpha_1+1-\gamma-\gamma\alpha_2-w)(1-\gamma)}.
\]

Also, the M.T.T.F. of the system is

\[
M.T.T.F. = \int_0^\infty R(t)\,dt
         = \frac{A}{\alpha_1} + \frac{B}{\gamma\alpha_2+w} + \frac{C}{\alpha_1+1-\gamma} \qquad (34)
\]

where A, B, and C are as given above.
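Since each exponential term of (33) integrates as
\[
\int_0^\infty e^{-kt}\,dt = \frac{1}{k},
\]
equation (34) follows term by term from (33).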

9. NUMERICAL COMPUTATION
For a numerical computation, let us consider the values
α1 = 0.001, α2 = 0.002, w = 0.4, γ = 0.6, and t = 0, 1, 2, ….
By using these values in equations (33) and (34), one can draw the graphs shown in Figs. 2, 3, and 4,
respectively.
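A short numerical sketch of this computation is given below; it simply evaluates equations (33) and (34) at the
parameter values listed above (the variable and function names are ours, introduced for illustration):

    import math

    # Parameter values from Section 9.
    a1, a2, w, g = 0.001, 0.002, 0.4, 0.6

    # Coefficients A, B and C of equation (33).
    A = 1 + a1 / (g * a2 + w) - g * a1 * a2 / ((a1 - g * a2 - w) * (1 - g))
    B = (g * a1 * a2 / ((a1 + 1 - g - g * a2 - w) * (a1 - g * a2 - w))
         - a1 / (g * a2 + w))
    C = g * a1 * a2 / ((a1 + 1 - g - g * a2 - w) * (1 - g))

    def reliability(t):
        """R(t) from equation (33)."""
        return (A * math.exp(-a1 * t)
                + B * math.exp(-(g * a2 + w) * t)
                + C * math.exp(-(a1 + 1 - g) * t))

    # M.T.T.F. from equation (34), the term-wise integral of R(t).
    mttf = A / a1 + B / (g * a2 + w) + C / (a1 + 1 - g)

    print([round(reliability(t), 6) for t in range(11)])  # R(0), ..., R(10)
    print(round(mttf, 4))

Note that the three coefficients sum to one, so R(0) = 1, as expected for a system that starts in the fully
operable state.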

[Figure omitted: plot of R(t) against t.]
Fig-2: Reliability R(t) versus time t


[Figure omitted: plot of M.T.T.F. against γ.]
Fig-3: M.T.T.F. versus γ

[Figure omitted: plot of M.T.T.F. against w.]
Fig-4: M.T.T.F. versus waiting rate w

10. RESULTS AND DISCUSSION


In this study, the authors have analyzed the performance measures of a standby redundant complex system. An
automatic but imperfect switching device is used to bring the standby component online. Supplementary
variables and the Laplace transform have been used to formulate and solve the mathematical model,
respectively. All the transition state probabilities, the steady-state behavior of the system, and some particular
cases have been obtained to improve the practical utility of the model. Graphical illustrations for a numerical
example have been given to highlight the important results of the study.
Fig-2 shows that the reliability of the complex system decreases very smoothly; for t = 0 to 10 its value
changes from 1 to 0.992477. Fig-3 shows that the mean time to failure of the system decreases in an
approximately uniform manner as we decrease the failure rate of the switching device; for a perfect switching
device the M.T.T.F. is 987.5286.
Critical examination of Fig-4 reveals that the M.T.T.F. of the system decreases steadily as we increase the
waiting rate w for repair of a single unit of each component.

REFERENCES

Barlow, R.E.; Proschan, F. (1965): “Mathematical Theory of Reliability”, John Wiley, New York.
Chung, W.K. (1988): “A k-out-of-n:G redundant system with dependent failure rates and common cause
failures”, Microelectronics Reliability, U.K., Vol. 28: 201-203.
Gnedenko, B.V.; Belayer, Y.K.; Soloyar (1969): “Mathematical Methods of Reliability Theory”, Academic
Press, New York.
Gupta, P.P.; Gupta, R.K. (1986): “Cost analysis of an electronic repairable redundant system with critical human
errors”, Microelectronics Reliability, U.K., Vol. 26: 417-421.
Nagraja, H.N.; Kannan, N.; Krishnan, N.B. (2004): “Reliability”, Springer Publication.
Pandey, D.; Jacob, Mendus (1995): “Cost analysis, availability and MTTF of a three state standby complex
system under common-cause and human failures”, Microelectronics Reliability, U.K., Vol. 35: 91-95.
Sharma, S.K.; Sharma, Deepankar; Masood, Monis (2005): “Availability estimation of urea manufacturing
fertilizer plant with imperfect switching and environmental failure”, Journal of Combinatorics,
Information & System Sciences, Vol. 29 (1-4): 135-141.
Sharma, Deepankar; Sharma, Jyoti (2005): “Estimation of reliability parameters for telecommunication
system”, Journal of Combinatorics, Information & System Sciences, Vol. 29 (1-4): 151-160.


IJCIIS Reviewers
A. Govardhan, Jawaharlal Nehru Technological University, India
Ajay Goel, Haryana Institute of Engineering and Technology, India
Ajay Sharma, Raj Kumar Goel Institute of Technology, India
Akshi Kumar, Delhi Technological University, India
Alok Singh Chauhan, Ewing Christian Institute of Management and Technology, India
Amandeep Dhir, Helsinki University of Technology Finland, Denmark Technical University, Denmark
Amit Kumar Rathi, Jaypee University of Engineering and Technology, India
Amol Potgantwar, Sandip Institute of Technology and Research Centre, India
Anand Sharma, MITS, India
Aos Alaa Zaidan Ansaef, Multimedia University, Malaysia
Arash Habibi Lashkari, University of Technology, Malaysia
Arpita Mehta, Christ University, India
Arul Lawrence Selvakumar, Kuppam Engineering College, India
Ayyappan Kalyanasundaram, Rajiv Gandhi College of Engineering and Technology, India
Azadeh Zamanifar, Iran University of Science and Technology University and Niroo Research
Institute, Iran
Bilal Bahaa Zaidan, University of Malaya, Malaysia
Binod Kumar, Lakshmi Narayan College of Technology, India
B. L. Malleswari, GNITS, India
B. Nagraj, Tamilnadu News Prints and Papers, India
Chakresh Kumar, Galgotias College of Engineering and Technology, India
C. Suresh Gnana Dhas, Vel Tech Multitech Dr.Rengarajan Dr.Sagunthla Engg. College, India
C. Sureshkumar, J. K. K. M. College of Technology, India
Deepankar Sharma, D. J. College of Engineering and Technology, India
Dhirendra Pandey, Babasaheb Bhimrao Ambedkar University, India
Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, India
D. S. R. Murthy, SreeNidhi Institute of Science and Technology, India
Hafeez Ullah Amin, KUST Kohat, NWFP, Pakistan
Hanumanthappa Jayappa, University of Mysore, India
Himanshu Aggarwal, Punjabi University, India
Jagdish Lal Raheja, Central Electronics Engineering Research Institute, India
Jatinder Singh, UIET Lalru, India
Iman Grida Ben Yahia, Telecom SudParis, France
Kanwalvir Singh Dhindsa, B. B. S. B. Engineering College, India
K Padmasree, Yogi Vemana University, India
K. V. N. Sunitha, G. Narayanamma Institute of Technology and Science, India
Leszek Sliwko, CITCO Fund Services, Ireland
M. Azath, Anna University, India
Md. Mobarak Hossain, Asian University of Bangladesh, Bangladesh
Mohd Nazri Ismail, University of Kuala Lumpur, Malaysia
Mohammed Salem Binwahlan, Hadhramout University of Science and Technology, Yemen
Mohamed Elshaikh, Universiti Malaysia Perlis, Malaysia
M. Surendra Prasad Babu, Andhra University, India
M. Thiyagarajan, Sastra University, India
Manjaiah D. H., Mangalore University, India


Nahib Zaki Rashed, Menoufia University, Egypt
Nagaraju Aitha, Vaagdevi College of Engineering, India
Natarajan Meghanathan, Jackson State University, USA
Navneet Sikarwar, B. S. A. College of Engineering and Technology, India
N. Jaisankar, VIT University, India
Ojesanmi Olusegun Ayodeji, Ajayi Crowther University, Nigeria
Oluwaseyitanfunmi Osunade, University of Ibadan, Nigeria
Perumal Dananjayan, Pondicherry Engineering College, India
Piyush Kumar Shukla, University Institute of Technology, Bhopal, India
Poonam Garg, Institute of Management Technology, India
Praveen Ranjan Srivastava, BITS, India
Rajesh Kumar, National University of Singapore, Singapore
Rajeshwari Hegde, BMS College of Engineering, India
Rakesh Chandra Gangwar, Beant College of Engineering and Technology, India
Raman Kumar, D A V Institute of Engineering and Technology, India
Raman Maini, University College of Engineering, Punjabi University, India
Ramveer Singh, Raj Kumar Goel Institute of Technology, India
Sateesh Kumar Peddoju, Vaagdevi College of Engineering, India
Shahram Jamali, University of Mohaghegh Ardabili, Iran
Sriman Narayana Iyengar, India
Suhas Manangi, Microsoft, India
Sujisunadaram Sundaram, Anna University, India
Sukumar Senthilkumar, National Institute of Technology, India
S. Murugan, Alagappa University and Centre for Development for Advanced Computing, India
S. S. Mehta, J. N. V. University, India
S. Smys, Karunya University, India
S. V. Rajashekararadhya, Adichunchanagiri Institute of Technology, India
Thipendra P Singh, Sharda University, India
T. Ramanujam, Krishna Engineering College, Ghaziabad, India
T. Venkat Narayana Rao, Hyderabad Institute of Technology and Management, India
Vasavi Bande, Hyderabad Institute of Technology and Management, India
Vishal Bharti, Dronacharya College of Engineering, India
Vuda Sreenivasarao, St. Mary's College of Engineering and Technology, India
V. Umakanta Sastry, Sreenidhi Institute of Science and Technology, India
Yee Ming Chen, Yuan Ze University, Taiwan

