
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014

ISSN 2091-2730


Table of Contents
Topics                                   Page no.
Chief Editor Board                       3-4
Message From Associate Editor            5
Research Papers Collection               6-863






















CHIEF EDITOR BOARD
1. Dr Gokarna Shrestha, Professor, Tribhuwan University, Nepal
2. Dr Chandrasekhar Putcha, Outstanding Professor, University Of California, USA
3. Dr Shashi Kumar Gupta, Professor, IIT Roorkee, India
4. Dr K R K Prasad, Professor and Dean, K.L. University, India
5. Dr Kenneth Derucher, Professor and Former Dean, California State University,Chico, USA
6. Dr Azim Houshyar, Professor, Western Michigan University, Kalamazoo, Michigan, USA
7. Dr Sunil Saigal, Distinguished Professor, New Jersey Institute of Technology, Newark, USA
8. Dr Hota GangaRao, Distinguished Professor and Director, Center for Integration of Composites into
Infrastructure, West Virginia University, Morgantown, WV, USA
9. Dr Bilal M. Ayyub, professor and Director, Center for Technology and Systems Management,
University of Maryland College Park, Maryland, USA
10. Dr Sarh BENZIANE, University Of Oran, Associate Professor, Algeria
11. Dr Mohamed Syed Fofanah, Head, Department of Industrial Technology & Director of Studies, Njala
University, Sierra Leone
12. Dr Radhakrishna Gopala Pillai, Honorary Professor, Institute of Medical Sciences, Kyrgyzstan
13. Dr P.V.Chalapati, Professor, K.L.University, India
14. Dr Ajaya Bhattarai, Tribhuwan University, Professor, Nepal
ASSOCIATE EDITOR IN CHIEF
1. Er. Pragyan Bhattarai , Research Engineer and program co-ordinator, Nepal
ADVISORY EDITORS
1. Mr Leela Mani Poudyal, Chief Secretary, Nepal government, Nepal
2. Mr Sukdev Bhattarai Khatry, Secretary, Central Government, Nepal
3. Mr Janak Shah, Secretary, Central Government, Nepal

4. Mr Mohodatta Timilsina, Executive Secretary, Central Government, Nepal
5. Dr. Manjusha Kulkarni, Asso. Professor, Pune University, India
6. Er. Ranipet Hafeez Basha (Phd Scholar), Vice President, Basha Research Corporation, Kumamoto, Japan
Technical Members
1. Miss Rekha Ghimire, Research Microbiologist, Nepal section representative, Nepal
2. Er. A.V. A Bharat Kumar, Research Engineer, India section representative and program co-ordinator, India
3. Er. Amir Juma, Research Engineer, Uganda section representative and program co-ordinator, Uganda
4. Er. Maharshi Bhaswant, Research Scholar (University of Southern Queensland), Research Biologist, Australia



















Message from the Associate Editor-in-Chief
Let me first of all take this opportunity to wish all our readers a very happy, peaceful and prosperous year ahead.

This is the fourth issue of the second volume of the International Journal of Engineering Research and General Science. A total of 106 research articles are published in it, and I sincerely hope that each one provides significant stimulation to a reasonable segment of our community of readers.

In this issue, we have focused mainly on recent technology and research upgrades, and we welcome more research-oriented ideas in our upcoming issues.

The authors' response to this issue was truly inspiring. We received more papers, from more countries, than for the previous issue, but our technical team and editorial board accepted only a small number of research papers for publication. We have provided editorial feedback for every rejected as well as accepted paper, so that authors can work on the weaknesses and we may accept their papers in the near future. We apologize for the inconvenience caused to the rejected authors, but I hope our editors' feedback helps you discover new horizons for your research work.

I would like to take this opportunity to thank each and every author for their contribution, and to thank the entire International Journal of Engineering Research and General Science (IJERGS) technical team and editorial board for their hard work toward the development of research worldwide through IJERGS.

Last, but not least, my special thanks and gratitude go to all our fellow friends and supporters. Your help is greatly appreciated. I hope our readers find the papers educational as well as engaging. Our team has done a good job; however, this issue may still have some shortcomings, and constructive suggestions for further improvement are warmly welcomed.



Er. Pragyan Bhattarai,
Assistant Editor-in-Chief, P&R,
International Journal of Engineering Research and General Science
E-mail: Pragyan@ijergs.org
Contact no.: +977 9841549341





Active Cardiac Model and its Application on Structure Revealing from Fetal
Ultrasound Sequence
Manikandan M¹, Prof. S. Prabakar²

¹Research Scholar (PG), Department of ECE, Kathir College of Engineering, Coimbatore, India
²Associate Professor, Department of ECE, Kathir College of Engineering, Coimbatore, India
E-mail: maniece022@gmail.com
Abstract: A fetal cardiac defect is one of the highest-risk fetal congenital anomalies and is also one of the primary reasons for newborn death. Detecting the fetal heart structure from ultrasound is important for diagnosis, but it is difficult because of the small size of the heart in early-stage fetuses. Fetal heart abnormalities are the most common congenital anomalies and the leading cause of infant mortality related to birth defects. A novel method is proposed for the detection of the fetal heart structure from ultrasound images. Initial pre-processing is performed to remove noise and enhance the denoised images. A level set method is then applied to the sequence of fetal ultrasound images to segment the region of interest. Because successfully observing the outflow tracts requires special training in fetal cardiac imaging, an active appearance model, designed and trained using ultrasound sequences, is then used to efficiently extract the cardiac structure from an input image. The developed method is efficient and has been verified, validated and appreciated by doctors.

Keywords: Cardiac defects, ultrasound image, level set, appearance model, cardiac structure.
INTRODUCTION
Fetal cardiac defects are among the highest-risk congenital defects: approximately 1% of fetuses suffer from congenital cardiac defects, which are also one of the most important causes of newborn death. The complex anatomy and dynamics of the fetal heart make it a challenging organ to image, and more sophisticated investigation methods are essential to obtain diagnostic information concerning fetal cardiac anatomy and function.

Congenital heart disease (CHD) is a leading cause of infant mortality, with an estimated incidence of about 4-13 per 1000 live births. Despite the well-accepted utility of the four-chamber view, we should be aware of potential diagnostic pitfalls that can prevent timely recognition of CHD. Where technically feasible, routine views of the outflow tracts should be attempted as part of a comprehensive basic cardiac examination: evaluation of the outflow tracts can raise the detection rates for major cardiac abnormalities above those achievable by the four-chamber view alone.

An extended basic examination minimally requires that the normal great vessels are roughly equal in size and that they cross each other. The basic cardiac screening examination relies on the four-chamber view of the fetal heart. This view should not be mistaken for a simple chamber count, because it involves careful evaluation of detailed criteria. To help identify the fetal heart, in this paper we propose a method for detecting the fetal cardiac structure in the four-chamber view.

The remainder of this paper is organized as follows. Section 2 describes how the input ultrasound image sequence is first converted to grayscale and then filtered for higher-level processing; the sequence of images is segmented by a suitable segmentation method and compared with an active appearance model to extract and identify the four chambers of the fetal heart. Section 3 describes the experimental results of the developed method, and conclusions are drawn in Section 4.

Fetal heart diagnosis demands highly skilled operators and is time-consuming for doctors, so many methods have been proposed to assist it. Lassige et al. used a level-set snake based on the fast marching method to measure the size of septal defects in images. Siqueira applied a self-organizing map to fetal heart segmentation to obtain the heart structure. Irving Dindoyal proposed an improved level set algorithm that segments the four chambers of the fetal heart by introducing a shape prior. Bhagwati Charan Patel used an adaptive K-means clustering algorithm to detect microcalcifications in breast image segmentation.

Pedro F. Felzenszwalb developed an object detection system based on mixtures of multiscale deformable part models of highly variable objects, with a discriminative procedure that trains the models from bounding boxes of the objects in a set of images. Cootes developed an efficient direct optimization approach that matches shape and texture simultaneously, resulting in a method that is rapid, accurate and robust. Aysal and his team used the Rayleigh distribution to model the speckle and adopted a robust maximum-likelihood estimation method.

Antunes introduced an automatic segmentation based on geometric models to extract the boundaries of the four chambers; the performance of this technique was compared with three alternative level set functions, break-point segmentation, and contours drawn by a pediatrician. Yagel and Cohen view cardiac activity in 3D/4D fetal echocardiography. Having compared all these methods, we propose the method described below.

II MATERIALS AND METHODS
A novel method is proposed for the detection of the fetal cardiac structure from ultrasound images. A preliminary preprocessing step eliminates noise and enhances the images using median filtering. An effective level set algorithm is then applied to segment the region of interest (ROI). Finally, an active appearance model is used to identify the structure of the fetal heart.
In this section, the proposed technique is described in detail; its flowchart is shown in Figure 1.








Figure 1: Proposed technique (ultrasound images → median filtering → level set segmentation → active appearance model → structured cardiac output)
1. Pre-processing:-
During the last several decades, ultrasound imaging has become a widely used and safe medical diagnostic method. Ultrasound is an oscillating sound pressure wave with a frequency greater than the upper limit of the human hearing range. It is used in many different fields: ultrasonic procedures are used to detect objects and measure distances, and ultrasonic imaging is used in both veterinary and human medicine.

Ultrasound can be used for medical imaging, detection, measurement and cleaning. Humans can hear frequencies up to about 20 kHz, but some animals can detect frequencies beyond 100 kHz, possibly up to 200 kHz. Ultrasound-based diagnostic medical imaging is used to visualize muscles, tendons and many internal organs and to capture their size, structure and any pathological lesions in real-time tomographic images. Ultrasound is used to image fetuses during routine and emergency prenatal care. As currently employed in the medical field, properly performed ultrasound poses no known risk to the patient.

A).Conversion of grayscale:-
Before preprocessing, the input images are converted into grayscale images to enable the application of the filter. The true-color ultrasound images in RGB are converted to grayscale intensity images by eliminating the hue and saturation information while retaining the luminance.
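As a minimal sketch of this conversion (assuming NumPy is available; the 0.299/0.587/0.114 weights are the standard ITU-R BT.601 luminance coefficients used by common rgb2gray implementations, not values stated in the paper):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an HxWx3 RGB image to a grayscale intensity image by
    dropping hue/saturation and keeping luminance (BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Demo on a random stand-in for an ultrasound frame.
frame = np.random.randint(0, 256, (480, 640, 3)).astype(np.float64)
gray = rgb_to_gray(frame)
```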



B). Median Filtering:-
Median filtering is a nonlinear digital filtering technique often used to remove noise; unlike a linear filter, it does not satisfy the superposition property:

median[f(x) + g(x)] ≠ median[f(x)] + median[g(x)]    (1)

It is widely used because it is very effective at removing noise while preserving edges, and it is particularly effective at removing salt-and-pepper noise. The median filter works by moving through the image pixel by pixel, replacing each value with the median value of the adjacent pixels. The pattern of neighbors is called the window, which slides, pixel by pixel, over the complete image. The median is computed by first sorting all the pixel values from the window into numerical order and then replacing the pixel under consideration with the middle value. Such noise reduction is a typical preprocessing step to improve the results of later processing.

Median filtering is very widely used in digital image processing because, under certain circumstances, it preserves edges while removing noise. In median filtering, the neighboring pixels are ranked according to brightness, and the median value becomes the new value for the central pixel. The median is, in a sense, a more robust average than the mean, as it is not affected by outliers. Since the output pixel value is one of the neighboring values, new unrealistic values are not created near edges, and because edges are only minimally degraded, median filters can be applied repeatedly if necessary.
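A minimal NumPy sketch of this sliding-window median (in practice a library routine such as scipy.ndimage.median_filter would be used; the ramp image and noise level below are demonstration choices, not the paper's data):

```python
import numpy as np

def median_filter(img, k=3):
    """Slide a k x k window over the image and replace each pixel with
    the median of its neighborhood; borders are handled by reflection."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Corrupt a smooth ramp with salt-and-pepper noise, then filter it.
img = np.tile(np.arange(64, dtype=np.float64), (64, 1))
noisy = img.copy()
mask = np.random.rand(*img.shape) < 0.05
noisy[mask] = np.random.choice([0.0, 255.0], mask.sum())
restored = median_filter(noisy)
```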

2. Segmentation:
Segmentation is defined as partitioning an image into meaningful portions; it adds structure to a raw image. In medicine, this can involve identifying which portion of an image is a tumor, or separating white matter from grey matter in a brain scan. This section presents a simple implementation of an active contour method using level sets and demonstrates the method's abilities: it presents the formulation of the level set method and the issues in implementing it numerically, then follows with the results of the implementation and closes with areas for further improvement.

The segmentation problem reduces to finding a curve that encloses the region of interest. Intuitively, one could model the curve directly using control points, but the data structures for the curve would then need to be updated as the curve evolves, and control points that drift too close together would have to be merged. There are solutions to these difficulties, but all of these issues can be avoided by using the level set method.

In mathematics, a level set of a real-valued function f of n real variables is a set of the form

L_c(f) = {(x_1, ..., x_n) | f(x_1, ..., x_n) = c}    (2)

that is, the set of points where the function takes a given constant value c.
When the number of variables is two, a level set is generically a curve, called a level curve, contour line, or isoline; a level curve is the set of all real-valued solutions of an equation in two variables x_1 and x_2. When n = 3, a level set is called a level surface, and for higher values of n the level set is a level hypersurface. A level surface is thus the set of all real-valued roots of an equation in three variables x_1, x_2 and x_3, and a level hypersurface is the set of all real-valued roots of an equation in n variables.
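A minimal sketch of definition (2) on a discrete grid, using scikit-image's marching-squares routine to extract a level curve; this illustrates the level-set representation only, not the full contour evolution used for segmentation:

```python
import numpy as np
from skimage import measure

# f(x1, x2) = x1^2 + x2^2 sampled on a grid; its level curves are circles.
x = np.linspace(-2.0, 2.0, 200)
X1, X2 = np.meshgrid(x, x)
F = X1**2 + X2**2

# Extract the level curve {f = c} for c = 1 (approximately the unit
# circle); each contour is an (N, 2) array of (row, col) coordinates.
contours = measure.find_contours(F, level=1.0)
print(len(contours), contours[0].shape)
```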

3. Active appearance model:-
In the field of medical image processing there arises a need to fit the shape of an object. If the object is rigid, matching such a model is not necessary; if, on the other hand, the object is non-rigid, matching is needed. Such matching is carried out by the Active Appearance Model (AAM), which matches a defined set of points to images using their texture information as the matching criterion. In object recognition applications, accurate object alignment has a decisive effect, and the active appearance model is one of the most studied methods for accurately locating objects.

An active appearance model is a computer vision algorithm for matching a statistical model of object shape and appearance to a new image. The models are built during a training phase: a set of images, together with the coordinates of landmarks that appear in all of the images, is provided to the training procedure. The approach is widely used for matching and tracking faces and for medical image analysis. The algorithm uses the difference between the current estimate of appearance and the target image to drive an optimization process. By taking advantage of least squares techniques, it can match new images very rapidly. It is related to the active shape model (ASM); one disadvantage of the ASM is that it only uses shape constraints and does not take advantage of all the available information in the texture across the target object. This can be modeled by means of an AAM.
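A toy sketch of the least-squares appearance matching just described (synthetic data throughout; this is not the authors' trained model): appearance vectors are modeled as a mean plus a linear PCA basis, and the model parameters that best explain a target are found in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": 50 flattened appearance patches of 256 pixels each.
train = rng.normal(size=(50, 256))
mean = train.mean(axis=0)

# Linear appearance basis: top 10 principal directions of the residuals.
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:10]                              # shape (10, 256)

# Matching: minimize ||mean + basis.T @ p - target||^2 over parameters p.
target = rng.normal(size=256)
p, *_ = np.linalg.lstsq(basis.T, target - mean, rcond=None)
residual = np.linalg.norm(target - (mean + basis.T @ p))
print(p.shape, residual)
```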



Consider the optimal training set for the automated segmentation of short-axis left ventricular magnetic resonance (MR) imaging studies in clinical practice based on an active appearance model. The segmentation accuracy varies with the size and composition of the training set, and can be assessed using the degree of similarity and the difference in ejection fraction between automatically detected and manually drawn contours. Including more images in the training set results in better accuracy of the detected contours, with optimum results achieved when 180 images are included.

Using AAM-based contour detection with a mixed model of 80% normal and 20% pathologic images provides good segmentation accuracy in clinical routine. Finally, it is essential to define different AAM models for different vendors of MRI systems.
III RESULTS AND DISCUSSION
In the proposed method, the input image sequence is obtained from an ultrasound sequence; ultrasound image sequences are continuous moving frames. The first step is to divide the successive frames per second into an image sequence, which is easier to process for our purposes. From these multiple image sequences we select twelve suitable images, convert them into grayscale, and remove the unwanted noise present in the input images with the aid of a median filter. The denoised images are then resized, and from that sequence one good image is chosen for the level set segmentation.
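A minimal OpenCV sketch of this preprocessing chain (the file name, frame count and output size are illustrative placeholders, not the paper's actual settings):

```python
import cv2

cap = cv2.VideoCapture("fetal_ultrasound.avi")  # placeholder clip name

frames = []
while len(frames) < 12:                    # keep twelve frames, as above
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale
    denoised = cv2.medianBlur(gray, 3)               # 3x3 median filter
    frames.append(cv2.resize(denoised, (256, 256)))  # resize
cap.release()
```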








Figure 2: Input image sequences

Figure 3: Grayscale image sequences

















Figure 4: Filtered image sequences

Figure 5: Resized image sequences

Figure 6: Level set iteration image












Figure 7: Segmented ROI

CONCLUSION
A novel and efficient method for automated detection of the fetal cardiac structure has been proposed in this paper. After initial preprocessing, the region of interest is successfully segmented, and the final detection of the fetal cardiac structure is implemented by the active appearance model.

REFERENCES:

[1] Yinhui Deng, Yuan Wang and Ping Chen, "Automated Detection of Fetal Cardiac Structure from First-trimester Ultrasound Sequences," 3rd International Conference on Biomedical Engineering and Informatics, 2010.
[2] B. Cohen and I. Dinstein, "New maximum likelihood motion estimation schemes for noisy ultrasound images," Pattern Recognition, vol. 35, pp. 455-463, 2002.
[3] Bhagwati Charan Patel and G. R. Sinha, "An Adaptive K-means Clustering Algorithm for Breast Image Segmentation," International Journal of Computer Applications (0975-8887), vol. 10, no. 4, 2010.
[4] N. H. Silverman, MD, FACC, and M. S. Golbus, MD, "Echocardiographic Techniques for Assessing Normal and Abnormal Fetal Cardiac Anatomy," JACC, vol. 5, no. 1, pp. 20S-9S, January 1985.
[5] I. Dindoyal, T. Lambrou, J. Deng, C. F. Ruff, A. D. Linney and A. Todd-Pokropek (UCL and UCL Hospitals NHS Trust (UCLH), UK), "Level set segmentation of the foetal heart."
[6] P. F. Felzenszwalb, R. B. Girshick, D. McAllester and D. Ramanan, "Object Detection with Discriminatively Trained Part Based Models."
[7] T. Scott, H. Swan, G. Moran, T. Mondal, J. Jones, K. Guram and J. Huff, "Increasing the Detection Rate of Normal Fetal Cardiac Structures: A Real-Time Approach," Journal of Diagnostic Medical Sonography, vol. 24, p. 63, 2008 (published online Feb 21, 2008).
[8] T. F. Cootes, G. J. Edwards and C. J. Taylor, "Active Appearance Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, p. 681, June 2001.
[9] T. C. Aysal and K. E. Barner, "Rayleigh-Maximum-Likelihood Filtering for Speckle Reduction of Ultrasound Images," IEEE Transactions on Medical Imaging, vol. 26, no. 5, 2007.
[10] S. G. Antunes, J. Silvestre Silva and J. B. Santos, "A New Level Set Based Segmentation Method for the Four Cardiac Chambers."
[11] S. Yagel, S. M. Cohen, I. Shapiro and D. V. Valsky, "3D and 4D ultrasound in fetal cardiac scanning: a new look at the fetal heart," Ultrasound Obstet Gynecol, vol. 29, pp. 81-95, 2007 (published online in Wiley InterScience, www.interscience.wiley.com). DOI: 10.1002/uog.3912




Overview of Non-Conventional Energy Sources of India
Sachi Sharma¹

¹Research Scholar (M.E.), LDRP-ITR College, Gandhinagar, India
E-mail: spark_sachi@yahoo.com
Abstract: The energy of a body is its capacity to do work, measured by the total amount of work that the body can do. Everything that happens in the world is an expression of the flow of energy in one of its forms. Today every country draws its energy needs from a variety of sources, which we can broadly categorize as conventional and non-conventional. The conventional sources include the fossil fuels (coal, oil and natural gas) and nuclear power (uranium), while the non-conventional sources, such as sunlight, wind, rain, tides and geothermal heat, are renewable. The energy crisis that began in 1973 caused petroleum supplies to decrease and prices to rise exorbitantly. This crisis forced developing countries to reduce or postpone important development programs so they could purchase petroleum to keep their economies operating, and it created the urgent necessity to find and develop alternative energy sources, such as other fossil fuels (coal, gas), nuclear energy, and renewable energy resources. The consumption of energy is directly proportional to the progress of mankind. With an ever-growing population, improvement in the living standard of humankind, and the industrialization of developing countries, the global demand for energy is expected to increase significantly in the near future. The primary source of energy is fossil fuel; however, these fossil fuel sources are finite, and their rapidly spreading use degrades the environment, causing global warming, urban air pollution and acid rain. This strongly suggests that the time has now come to harness the non-conventional and environmentally friendly energy sources, which is vital for steering global energy supplies onto a sustainable path. This paper describes in brief the non-conventional energy sources and their usage in India.

Keywords: Non-conventional energy, wind energy, hydro energy, Indian power scenario, solar energy, biomass energy, biofuel

1. Introduction
The oil shocks of the 1970s led to spiraling crude oil prices in the world market, which prompted planners to view energy security as an issue of national strategic importance. Energy security has an important bearing on achieving national economic development goals and improving the quality of life of the people. India's dependence on crude oil will continue for most of the 21st century. In addition, global warming, caused largely by greenhouse gas emissions from fossil fuel energy generating systems, is also a major concern. India needs to develop alternative fuels in view of these two concerns. India has a vast supply of renewable energy resources, and it has one of the largest programs in the world for deploying renewable energy products and systems. Indeed, it is the only country in the world to have an exclusive ministry for renewable energy development; the Ministry of New & Renewable Energy Sources (MNRE) supports the implementation of a broad spectrum of programs covering the entire range of new and renewable energies. The program broadly seeks to supplement conventional fossil-fuel-based power and to bring renewable energy, including electricity, to remote rural areas for a variety of applications, such as water pumping for irrigation and drinking water, drying farm produce, improved chulhas and biogas plants, and energy recovery from urban, municipal and industrial wastes. In addition, the exploitation of hydrogen energy, geothermal energy, tidal energy and biofuels for power generation and automotive applications is also planned. Increasing the share of new and renewable energy in the fuel mix is in India's long-term interest. Although the development process may warrant selection of least-cost energy options, strategic and environmental concerns may, on the other hand, demand a greater share for new and renewable energy, even though this option might appear somewhat costlier in the medium term.

2 INDIAN POWER SCENARIO
With high economic growth rates and over 15 percent of the world's population, India is a significant consumer of energy resources. In 2009, India was the fourth largest oil consumer in the world, after the United States, China, and Japan. Despite the global financial crisis, India's energy demand continues to rise. In terms of end use, energy demand in the transport sector is expected to be particularly high, as vehicle ownership, particularly of four-wheel vehicles, is forecast to increase rapidly in the years ahead. India currently has 15,789 MW of installed renewable energy sources out of 157,229 MW of total installed capacity, with the distribution shown below.

1. Thermal power: 64.6 per cent of the total installed capacity, producing 100,598 MW.
2. Hydel power plants come next with 24.7 per cent of the total, an installed capacity of 36,863 MW.
3. Renewable energy sources contribute around 10% of the total power generation in the country, producing 15,789 MW (as on 31.1.2010).

Gross generation: 640 BUs
Per capita consumption: 632 kWh/annum


Among the 16 major states, per capita electricity consumption in Punjab, Gujarat, Haryana, Tamil Nadu, and Maharashtra exceeded 1,000 kWh in 2007-08. On the other hand, for underdeveloped states such as Bihar the figure was as low as 10 kWh.

Energy shortage: about 12%
Peaking shortage: about 13-15%
Electricity demand growing at 8% annually
Capacity addition of about 92,000 MW required in the next 10 years
The challenge is to meet the energy needs in a sustainable manner

However, India's demand/supply gap is 12% on average, and the progressive states see a gap in excess of 15%. Being one of the fastest growing economies, India is expected to see average energy usage per capita increase from 632 kWh per annum today to 1,000 kWh by the beginning of 2013.

The key drivers for renewable energy are the following:
The demand-supply gap, especially as population increases
A large untapped potential
Concern for the environment
The need to strengthen Indias energy security
Pressure on high-emission industry sectors from their shareholders
A viable solution for rural electrification
3 POWER FROM NON CONVENTIONAL ENERGY
India is one of the fastest growing countries in terms of energy consumption. Currently it is the fifth largest consumer of energy in the world, and it will be the third largest by 2030. At the same time, the country is heavily dependent on fossil sources of energy for most of its demand. This has pushed the country to start aggressively pursuing alternative energy sources: solar, wind, biofuels, small hydro and more.


The country has an estimated renewable energy potential of around 85,000 MW from commercially exploitable sources: wind, 45,000 MW; small hydro, 15,000 MW; and biomass/bioenergy, 25,000 MW. In addition, India has the potential to generate 35 MW per square km using solar photovoltaic and solar thermal energy. An addition of 15,000 MW of renewable energy generation capacity has been proposed for the period, of which wind power projects form 70 per cent (10,500 MW) and small hydro projects (SHP) account for 9.3 per cent (1,400 MW).
A) Wind Energy
India's wind power potential has been assessed at 48,500 MW. The current technical potential is estimated at about 13,000 MW, assuming 20% grid penetration, which would increase with the augmentation of grid capacity in potential states. The state-wise gross and technical potentials are given below.
India is implementing the world's largest wind resource assessment program, comprising wind monitoring, wind mapping and complex terrain projects.




This program covers 800 stations in 24 states, with around 200 wind monitoring stations in operation at present. Wind electric generators are being manufactured in the country by a dozen manufacturers through
(i) joint ventures or licensed production,
(ii) subsidiaries of foreign companies under licensed production, and
(iii) Indian companies with their own technology.
The current annual production capacity of domestic wind turbines is about 3,000 MW.
B) Hydro Energy
Hydro power is the largest renewable energy resource being used for the generation of electricity. The 50,000 MW hydro initiative has already been launched and is being vigorously pursued, with DPRs for projects of 33,000 MW capacity already under preparation. Harnessing the hydro potential speedily will also facilitate the economic development of the states, particularly the North-Eastern states, Sikkim, Uttaranchal, Himachal Pradesh and J&K, since a large proportion of our hydro power potential is located in these states. In India, hydro power projects with a station capacity of up to 25 megawatts (MW) each fall under the category of small hydro power (SHP).


With numerous rivers and their tributaries, small hydro presents an excellent renewable energy opportunity in India, with an estimated potential of 15,000 MW of which only 17 per cent has been exploited so far. Over 674 projects aggregating about 2,558.92 MW of generating capacity had been set up in the country as on 31.12.2009. Of the estimated potential of 15,000 MW of small hydro power in the country, 5,415 potential sites with an aggregate capacity of 14,292 MW have been identified. Most of the potential is in the Himalayan states as river-based projects, and in other states on irrigation canals.

Hydel projects call for comparatively larger capital investment; therefore, debt financing of longer tenure would need to be made available for hydro projects. The Central Government is committed to policies that ensure financing of viable hydro projects. State Governments need to review procedures for land acquisition and other approvals/clearances for speedy implementation of hydroelectric projects.

The Central Government will support the State Governments in the expeditious development of their hydroelectric projects by offering the services of Central Public Sector Undertakings such as the National Hydroelectric Power Corporation (NHPC). Land acquisition, resettlement and rehabilitation issues have caused significant delays in hydro projects.
C) Solar Energy
India is a solar-rich country. Located near the equator, it receives a large amount of solar radiation throughout the year given its geographical location. India is also, by area, the 7th largest country in the world.


The average solar radiation received by most parts of India ranges from about 4 to 7 kilowatt-hours per square meter per day, with about 250-300 sunny days in a year. As can be seen from the solar radiation map above, the highest annual solar radiation is received by Rajasthan (desert area) and the lowest by the north-eastern states of India. India has one of the world's largest programmes in solar energy, covering R&D, demonstration and utilization, testing and standardization, and industrial and promotional activities. Processed raw material for solar cells, large-capacity SPV modules, SPV roof tiles, inverters and charge controllers all have good market potential in India, as do advanced solar water heaters, roof-integrated solar air heaters, and solar concentrators for power generation (above 100 kW).

The future is bright for continued PV technology dissemination around the world. PV technology fills a significant need in supplying electricity, creating local jobs and promoting economic development in rural areas, while also having the positive benefit of avoiding the external environmental costs associated with traditional electricity generation technologies. People who choose to pursue a renewable and sustainable energy future now are the ones showing the way forward.
D) Biomass energy
Globally, India is in fourth position in generating power from biomass and, with a huge potential, is poised to become a world leader in its utilization. Biomass power projects with an aggregate capacity of 773.3 MW, spread over more than 100 projects, have been installed in the country. Over the last 15 years, biomass power has become an industry attracting annual investment of over Rs. 1,000 billion and generating more than 9 billion units of electricity per year.
More than 540 million tonnes of crop and plantation residues are produced every year in India, and a large portion is either wasted or used inefficiently.


By using these surplus agricultural residues, conservative estimates suggest that more than 16,000 MW of grid-quality power could be generated from biomass. In addition, about 5,000 MW of power could be produced if all 550 sugar mills in the country switched over to modern techniques of cogeneration.
Thus the estimated biomass power potential is about 21,000 MW.

However, though the energy scenario in India today indicates a growing dependence on the conventional forms of energy, about 32% of the total primary energy use still comes from biomass, and more than 70% of the country's population depends upon it for its energy needs.
E) Energy from Wastes:
The rising piles of garbage in urban areas, caused by rapid urbanization and industrialization throughout India, represent another source of non-conventional energy. An estimated 50 million tonnes of solid waste and approximately 6,000 million cubic meters of liquid waste are generated annually in the urban areas of India. Good potential exists for generating approximately 2,600 MW of power from urban and municipal wastes and approximately 1,300 MW from industrial wastes in India. A total of 48 projects with an aggregate capacity of about 69.62 MWeq have been installed in the country, thereby utilizing only 1.8% of the existing potential.
F) Biofuels:
The GOI recently mandated the blending of 10 percent fuel ethanol into 90 percent gasoline. This mandate has created a demand of approximately 3.6 billion liters of fuel ethanol to extend the blend mandate to the entire country. This significant demand growth creates a tremendous manufacturing opportunity for the fuel ethanol industry seeking to expand its investments internationally.
Conclusion: It is not an exaggeration to state that humanity is facing a choice between a peaceful decision on its common energy future and wars for resources in the near future. The world population is set to grow by 0.9% per year on average, from an estimated 6.7 billion in 2008 to 8.5 billion in 2035 (UNDP, 2009). There is a need to tap and use non-conventional energy sources in India for the survival of future generations. However, it is clear that grid extension in rural areas is often not cost-effective, so decentralized electricity generation from non-conventional energy sources such as small wind, hydro, solar, biomass, biofuels and energy from waste is best suited to provide the much-needed options.

REFERENCES:
[1] Shoumyo Majumdar, "The Current Scenario of Developments in Renewable Energy in India," in Renewable Energy and Energy Efficiency, 2008, pp. 1-32.
[2] Pradeep Chaturvedi, "Renewable Energy in India: Programmes and Case Studies," ISESCO Science and Technology Vision, vol. 1, May 2005, pp. 61-64.
[3] S. K. Patra and P. P. Datta, "Renewable Sources of Energy: Potential and Achievements," Technical Digest, Issue 6.
[4] Peter Meisen, "Overview of Sustainable Renewable Energy Potential in India," GENI, Jan 2010.
[5] G. M. Pillai, WISE, "Indian Wind Energy Outlook 2011," April 2011.
[6] Giorgio Dodero, IPG SRL, "India Energy Handbook," August 2010.
[7] K. P. Sukumaran, "Bioenergy India," Issue 7, Jan-March 2011.
[8] M. S. Swaminathan Research Foundation, "Bioenergy Resources Status in India," PISCES, May 2011.
[9] www.mnes.nic.in
[10] www.wisein.org
[11] www.geni.org
[12] Gp Capt (Retd) K. C. Bhasin, "Plasma Arc Gasification for Waste Management."
[13] U.S. Environmental Protection Agency (2010), "Municipal Solid Waste in the United States: 2009 Facts and Figures," Washington, DC.
[14] NRG Energy, "Plasma Gasification MSW."

























Microstrip Patch Yagi-Uda Array for Millimeter Wave Applications
Mahesh Kumar Aghwariya¹

¹Faculty, Department of Electronics Engineering, THDC Institute of Hydropower Engineering and Technology, Uttarakhand
E-mail: mahi24wings@gmail.com

Abstract: This paper presents a novel design of a microstrip Yagi-Uda array, simulated at a frequency of 6.95 GHz. CST MW Studio software is used for the simulation. Unlike the conventional Yagi-Uda array, in this design the reflector, directors and driven element are realized as microstrip patches of different dimensions on a supporting FR-4 (lossy) dielectric of height 1.6 mm and loss tangent 0.02. The design achieves very high gain and effective radiation efficiency, and the antenna exhibits a very good return loss. This Yagi-Uda antenna shows very good compatibility with microwave circuitry.

Keywords: Microstrip Yagi-Uda antenna, Dielectric constant, Return loss, Back lobe radiations.
Introduction
The increasing growth of the wireless communications industry and of sensor systems demands low-cost, compact antennas that can be printed on a substrate. Printed antennas offer many advantages over standard antennas, such as low manufacturing cost, low profile, ease of integration with monolithic microwave integrated circuits (MMICs), and the ability to be mounted on planar, non-planar and rigid exteriors.

The Yagi-Uda antenna gained its name from the research work of two scientists, Yagi and Uda: Yagi developed the proof of concept, while Uda contributed the design principles [1]. Right from the day of its discovery, the Yagi-Uda antenna has undergone exhaustive investigation in the literature. "Yagi-Uda antenna" is the general term for the Yagi-Uda array, a directional antenna having two kinds of elements: a driven element, which is a dipole, and parasitic elements, namely a reflector and directors [2]. The so-called reflector element is longer, approximately five percent longer than the driven dipole, while the directors are shorter. This type of design improves the antenna's directionality and gain [3]. Being highly directional with good gain, these antennas are also referred to as beam antennas. However, the high gain of the Yagi-Uda antenna is limited to a narrow bandwidth, which confines its usefulness to particular communication bands, including amateur radio. The Yagi-Uda antenna operates on the basis of electromagnetic interaction between the parasitic elements and the single driven element [4][5]. Its simplicity, along with its features, has made it an appropriate choice for both amateur and professional antenna applications [6].

Usually Yagi-Uda arrays have low input impedance and relatively narrow bandwidth; modern well-designed Yagis achieve greater bandwidth, on the order of 5% to more than 15% [7]. This antenna has found applications from short waves to microwave frequencies for over a quarter of a century. It is also widely used in radar and communication systems, as it possesses wide bandwidth, low cross-polarization and good isolation compared to patch antennas. Yagi-Uda antennas also find use in industrial and medical applications.

Design of the Antenna Structure

In the design of a microstrip Yagi-Uda antenna, no simple formulas can be employed, owing to the complex relationship between physical parameters such as element length, spacing and diameter, and performance characteristics such as gain and input impedance.

A schematic diagram of the proposed design is presented in Figure 1. In the proposed antenna, two directors are used to increase the directivity in a particular direction; each director has different dimensions, and the spacing between the directors is not equal. The ground plane of the proposed antenna is used as the reflector [8], designed as a rectangular plate. By varying the height and width of the reflector we can also change the antenna gain and directivity. Feeding is provided at the patch placed between the reflector and the directors. The driven element, reflector and directors are microstrip patches of certain dimensions at certain distances [9]. This Yagi-Uda design is a combination of a patch antenna and a Yagi-Uda array, intended to enhance the antenna parameters.





Fig.1 Front view of the antenna
Antenna dimensions (in millimeters): L=85, W=40, L1=9.5, W1=14, L2=12.6, W2=17, L3=15.6, W3=20.45, L4=, W4=31.8.
Results
The proposed antenna was simulated by using CST simulation software.



Fig.2 Return Loss of Designed Antenna


Fig.3 Smith Chart of Proposed Antenna

Figure 2 shows the simulated return loss of the proposed antenna: at the resonant frequency of 6.95 GHz it achieves a return loss of 29 dB.
Figure 3 shows the Smith chart of the proposed antenna at 6.95 GHz.
Figure 4 shows the directivity of the proposed antenna: at the resonant frequency of 6.95 GHz it achieves a directivity of 7.66 dBi.



Fig.4 Directivity of Proposed Antenna




Fig.5 Gain of Proposed Antenna


Fig.6 E-Field radiation pattern of Proposed Antenna
Figure 5 shows the gain of the proposed antenna: at the resonant frequency of 6.95 GHz it achieves a gain of 4.766 dB.
Figure 6 shows the E-field radiation pattern of the proposed antenna. It is clear that the back lobe radiation is very low.

Fig.7 H-Field radiation pattern of Proposed Antenna

Figure 7 shows the H-field radiation pattern of the proposed antenna, with a back lobe level of -8.4 dB, which is very low.

Table 1.1 lists all the analyzed parameters of the proposed design.

Table 1.1: Analyzed Parameters

Parameter                   Simulated value
Frequency (GHz)             6.95
Return loss (dB)            29
Gain (dBi)                  4.6
Directivity (dBi)           7.3
Radiation efficiency (%)    96
Bandwidth (MHz)             25
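As a quick sanity check on the tabulated matching performance, a short sketch (assuming a standard 50-ohm reference, which the paper does not state explicitly) converting the simulated return loss into the reflection-coefficient magnitude and VSWR:

```python
rl_db = 29.0                      # simulated return loss from Table 1.1
gamma = 10 ** (-rl_db / 20)       # |reflection coefficient| = 10^(-RL/20)
vswr = (1 + gamma) / (1 - gamma)  # voltage standing wave ratio
print(f"|Gamma| = {gamma:.4f}, VSWR = {vswr:.3f}")  # about 0.035 and 1.074
```

A VSWR this close to 1 indicates that the antenna is very well matched at the 6.95 GHz resonance.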
Conclusion
The proposed antenna achieves high gain and high directivity at the resonant frequency, and its radiation efficiency is quite good. The small size and compactness of this antenna make it very useful in a particular band of frequencies. Such antennas are often empirical designs involving an element of trial and error, often starting from an existing design modified according to one's hunch; the result can be checked by direct measurement or by computer simulation.
REFERENCES:
[1] N. Kaneda, Y. Qian and T. Itoh, "A novel Yagi-Uda dipole array fed by a microstrip-to-CPS transition," 1998 Asia-Pacific Microwave Conference Proceedings, Yokohama, Japan, pp. 1413-1416, Dec. 1998.
[2] C. A. Chen and D. K. Cheng, "Optimum Element Lengths for Yagi-Uda Arrays," IEEE Trans. Antennas Propag., vol. AP-23, pp. 8-15, January 1975.
[3] W. L. Stutzman and G. A. Thiele, Antenna Theory and Design. New York: Wiley, 1981.
[4] J. Yu and S. Lim, "A multi-band, closely spaced Yagi antenna with helical-shaped directors," in Proc. IEEE APS Int. Symp., Charleston, SC, pp. 1-4, Jun. 2009.
[5] A. C. Lisboa, D. A. G. Vieira, J. A. Vasconcelos, R. R. Saldanha and R. H. C. Takahashi, "Monotonically Improving Yagi-Uda Conflicting Specifications Using the Dominating Cone Line Search Method," IEEE Transactions on Magnetics, vol. 45, no. 3, pp. 1494-1497, 2009. doi:10.1109/TMAG.2009.2012688
[6] S. R. Best, E. E. Altshuler, A. D. Yaghjian, J. M. McGinthy and T. H. O'Donnell, "An impedance-matched 2-element superdirective array," IEEE Antennas Wireless Propag. Lett., vol. 7, pp. 302-305, 2008.
[7] T. H. O'Donnell and A. D. Yaghjian, "Electrically small superdirective arrays using parasitic elements," in Proc. IEEE APS Int. Symp., Albuquerque, NM, pp. 3111-3114, Jul. 2006.
[8] N. Honma, T. Seki and K. Nishikawa, "Compact planar four-sector antenna comprising microstrip Yagi-Uda arrays in a square configuration," IEEE Antennas Wireless Propag. Lett., vol. 7, pp. 596-598, 2008.
[9] P. R. Grajek, B. Schoenlinner and G. M. Rebeiz, "A 24-GHz high-gain Yagi-Uda antenna array," IEEE Trans. Antennas Propag., vol. 52, no. 5, pp. 1257-1261, May 2004.


Study of Different Risk Management Models and Risk Knowledge Acquisition with WEKA

Kiranpreet Kaur¹, Amandeep Kaur¹, Rupinder Kaur¹

¹Department of Computer Science and Engineering, Guru Nanak Dev University, Amritsar (Pb)
E-mail: sohalkirankaur@gmail.com
ABSTRACT

Software risks can be defined as uncertainty and loss in the project process. Software risk management consists of risk identification, estimation, refinement, mitigation, monitoring and maintenance steps. In this paper, the main focus is on different risk management models and on the importance of automated tools in risk management. With an automated risk management tool, project problem effects that can cause loss in a software project can be predicted in terms of their values on risk factors, and the risk factors can be ranked to observe how much each one, separately, explains the project problem effects. For this purpose, five classification methods for prediction of problem impact and two filter feature selection methods for ranking the importance of risk factors are used in this study.

Keywords: Software risk management model, multi-characters of risk, WEKA tool, risk ranking, risk impact prediction
1. INTRODUCTION

In the real world, success rates of software projects are lower than expected. Software risks that occur during the software development life cycle are one of the most important reasons for these low success rates. A risk is a problem that could cause loss or threaten the success of a project, but which hasn't happened yet. These potential problems might have an adverse impact on the cost, schedule, or technical success of the project, on the quality of the software products, or on project team collaboration. Software risk management applies preventive key steps before the start of new software projects to increase their success rates. These preventive steps identify the software risks and the impact of these risk factors, and they aim to dissipate uncertainty around software issues. Uncertainty can be related to time, budget, labor or any other risk factor that can appear during the software project development life cycle. Therefore, risk management steps should be applied to the software project.

Risk management has the objective of reducing the harm due to risks. As with any other kind of management, risk management employs strategies and plans to meet its objectives. Risk management benefits fall into two categories: direct and indirect. Direct (primary) benefits deal with major risks, people, product and cost. Indirect (secondary) benefits deal with optimization, pragmatic decision making, better process management and alternative approaches. The main objective of risk management is to prevent and control risks before they become corruptive, so risk mitigation, monitoring and maintenance steps are applied during the risk management process. [1]
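As an illustrative sketch of the workflow described above (predicting problem impact with a classifier and ranking risk factors with a filter feature-selection method), here is an analogous example using scikit-learn in place of the WEKA tool; the data, factor names and chosen methods are placeholders, not the study's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(42)

# Placeholder data: 200 projects scored on six hypothetical risk factors,
# labeled with a binary "problem impact" outcome.
factors = ["budget", "schedule", "staff_skill",
           "requirements_churn", "tech_novelty", "team_size"]
X = rng.random((200, 6))
y = (X[:, 1] + X[:, 3] + 0.3 * rng.random(200) > 1.2).astype(int)

# Step 1: a classifier predicts problem impact from risk factor values.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("predicted impact:", clf.predict(X[:5]))

# Step 2: a filter method ranks the risk factors (mutual information,
# analogous to an information-gain attribute evaluator in WEKA).
scores = mutual_info_classif(X, y, random_state=0)
for name, s in sorted(zip(factors, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```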
1.1 Several classical mechanisms of software risk management models

A. Barry Boehm's theory

In the 1980s, Boehm introduced the concept of risk management to the software industry. Boehm divided the software project risk management process into two basic steps: risk assessment and risk control. The first step, risk assessment, includes risk identification, risk analysis and risk prioritization: a risk list is first proposed, the risks on the list are assessed for probability and impact to determine their levels, and the risks are prioritized accordingly; this risk list is the basis of risk control. Once the priorities of the risk factors have been determined, the second step, risk control, follows, comprising risk management planning, risk resolution and risk monitoring. In this step, a response plan must first be developed for each major risk, risk mitigation activities are carried out in accordance with the practical implementation of the plan, and the process is monitored throughout.

Boehm attributes the risk probability and the consequences of risk occurrence to the two parts of "risk exposure". [2]
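Boehm's risk exposure is conventionally written as the product of these two parts; a minimal rendering of the standard textbook formula (not an equation reproduced verbatim from this paper):

```latex
% Risk exposure: probability of the unsatisfactory outcome (UO)
% multiplied by the loss incurred if that outcome occurs.
RE = P(\mathrm{UO}) \times L(\mathrm{UO})
```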


Boehm noticed that the most common IT risks are:
project team members are poorly trained,
temporary planning and project budgets are not realistic,
wrong product features are developed,
interfaces are not user oriented,
testing in real life situation fails.

Not all identified risks should be treated the same: some are more likely to occur, and some, if realized, would have a bigger impact. Risk analysis and management depend on the types of risks being considered. Within the context of the technological and business perspectives, three main elements of software risk can be distinguished: technical, schedule/scope, and cost.

1. Technical risks are associated with the performance of the software product, which includes functionality, quality, reliability and timeliness issues. Even if there are no mid-project changes in scope, unforeseen technical complications can also turn the project upside down. Project managers might know the technologies they are using in the project very well, but surprises are still possible: a component has always worked fine, but when it is integrated with another component, it is a complete mess. The more experienced the technical people are, the lower the risk of unforeseen technical limitations, but this risk is always present.

2. Schedule and scope risks are associated with the schedule and scope of the software product during development. Changes in scope are frequent in IT projects and to some extent they are quite logical: no matter how detailed your specification is, there are always suggestions that come after you have started the implementation. Often these suggestions demand radical changes and require change requests that can turn any schedule upside down. In order to maintain a holistic view of risks, the software manager should view the risks from different viewpoints and thereby get complete information. The scope can also be affected by technical complications: if a given functionality can't be implemented because it is technically impossible, the easiest solution is to skip that functionality, but when other components depend on it, doing so isn't wise.

3. Cost risks are associated with the cost of the software product during software development, including its final delivery, and cover the following issues: budget, nonrecurring costs, recurring costs, fixed costs, variable costs, profit/loss margin, and realism. After the risks are identified, they should be assessed along two dimensions: probability and impact. The project team multiplies these two dimensions together to generate a risk score, so the risks can easily be ranked and ordered, allowing the team and sponsors to discuss how to respond to each risk. The risk score gives a sense of priority among the risks: if, for example, the first risk has a score of $100K and the second of $160K, then the second risk represents a bigger threat to the project's baselines and has the higher priority.

After the risks are identified and assessed, they should be mitigated with one of the response actions, chosen based on the risk type and priority. [4]
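A minimal sketch of this probability-times-impact scoring and ranking, with made-up risks chosen to reproduce the $100K/$160K comparison above:

```python
# Risk score = probability x impact (monetary), as described above.
risks = [
    {"name": "scope creep",         "probability": 0.50, "impact": 200_000},
    {"name": "key staff attrition", "probability": 0.40, "impact": 400_000},
    {"name": "integration failure", "probability": 0.25, "impact": 400_000},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]

# Rank by score: the $160K risk outranks the two $100K risks.
for r in sorted(risks, key=lambda r: -r["score"]):
    print(f'{r["name"]}: ${r["score"]:,.0f}')
```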

B. SEI's Continuous Risk Management (CRM) model

The SEI (Software Engineering Institute), as an authority on software engineering and its application, drew on years of field experience in software project management to create the CRM (Continuous Risk Management) model. The CRM model proposes that attention be paid to risk identification and management at all stages of the software project life cycle, and it divides risk management into five sections that cycle repeatedly: identification, analysis, planning, tracking and control.

CONTINUOUS RISK MANAGEMENT MODEL

SEI's CRM model has seven software risk management principles, namely: (1) a global view; (2) an active strategy; (3) an open communication environment; (4) integrated management; (5) a continuous process; (6) a unified perspective on the product; and (7) team coordination and cooperation. [2]


In software risk management, information flows from risk identification to risk control and then back into risk identification,
cycling continuously in this way. The cycle does not stop until the end of the project; in other words, risk management does not end
before the project does. First, issues are turned into risks and assessed to identify their impact, probability and duration, and the
risks are classified and prioritized; then decisions on actions are made on the basis of the risk information; next, risk indicators
and risk mitigation actions are monitored in real time; and finally, deviations from the risk mitigation plan are corrected. The core
of this model is communication, which means that all parts of the project should strengthen the communication of risks, including
among the various groups, between project phases, and so on.

SEI describes the software risk management process separately for risk identification, risk analysis, risk planning, risk tracking and
risk control, using IDEF0 (Integrated Computer-Aided Manufacturing DEFinition, a standard process definition method) data flow
diagrams that describe each process from two perspectives: an external view showing the controls, inputs, outputs and mechanisms of
the process, and an internal view showing the activities by which the mechanisms transform inputs into outputs. Together these give a
clear description of the mutual interactions among the stages of software risk management. In this model, the controls decide when and
how the inputs are changed; the inputs must meet the entrance criteria of the process; the outputs are the results of the process and
must pass the exit criteria review; and the mechanisms are the methods used in the process. [4]

C. CMMI (Software Capability Maturity Model Integration) in the risk management process areas

CMMI was developed by the SEI on the basis of CMM and is promoted worldwide as an assessment standard for software capability
maturity; it is mainly used to guide software development process improvement and to assess software development capability. The
Risk Management process area in CMMI belongs to Level 3, the Defined level. CMMI suggests three major steps in managing risks:
prepare for risk management, identify and analyze risks, and mitigate risks. It also suggests institutionalizing risk management
(establish an organizational policy, plan the process, train people, manage configurations, involve relevant stakeholders, monitor the
process, collect improvement information, review with higher-level management, etc.).

The core of the model is the risk library: every activity, in achieving its various targets, updates the risk library. The link between
the activity "develop and maintain risk management strategies" and the risk library is a two-way interaction; that is, the risk library
is worked out from the data collected as input by the corresponding preceding activities. [2]


D. MSF Risk Management Model

MSF (Microsoft Solutions Framework) holds the following concept of risk management: risk management must be proactive, it is a
formal, systematic process, and risks should be continuously assessed, monitored and managed until they are resolved or the issues
they raise are handled. The greatest feature of this model is the integration of learning activities into risk management, stressing the
importance of learning from the experience of previous projects. Microsoft's research stated that an investment of merely 5% of the
total budget into risk management could yield a probability of 50-70% of completing the project on time. [2]

E. IEEE risk management standards

It defines the risk management process for the software development life cycle; it applies to software companies in software
development projects and also to individual risks emerging during software development. It defines risk management as a continuous
process that systematically describes and manages risks throughout the software development life cycle, including the following
activities: planning and implementing risk management, managing the project risk list, analyzing risks, monitoring risks, addressing
risks, and assessing the risk management process. [2]

The Institute of Risk Management (IRM), The Association of Insurance and Risk Managers (AIRMIC), and The National Forum for
Risk Management in the Public Sector (ALARM) publish a generic and valuable standard on risk management. The standard contains
these elements: risk definition, risk management, risk assessment, risk analysis, risk evaluation, risk reporting and communication,
risk treatment, and monitoring and review of the risk management process. [3]

F. Collaborative Risk Management

Collaborative risk identification
One of the first activities in a project is defining the project goals and description. This information is very important for
understanding the range and complexity of the project. Usually this process is carried out by the professionals who are closest to the clients, such as, for

instance, the project leader and consultants (or, in some cases, the entire project team). With the goals defined, team members,
according to their skills and experience, can start identifying risks that can affect the project goals, including risks with positive and
negative impact. For each identified risk they categorize the impact and the probability on a scale: low, medium and high. In this
process, project members perform the risk identification alone. This approach may be useful for determining the risk attitude and risk
tolerance of each member or group area, which allows the organization's global risk tolerance to be identified. It also allows future
decisions to be understood and the evolution of the organization's risk tolerance to be monitored. This stage ends with a first draft
of the risk register of each project member, describing probability and impact.

Collaborative risk selection and combination
After generating the preliminary risk records, the project leader analyzes all risks and may change, filter or merge some of them. Then
he, together with the project team, can analyze and identify the risk dependencies (identifying the risks that may be influenced by
other risks). The probability and impact assessment of the risks follows risk dependency theory, which is used to compute the final
combined risk probability and impact. In this way, the project team is able to identify and analyze the risks and evaluate whether their
combination can lead to disproportionate project failure. After the selection and combination, the project team generates the risk
probability matrix according to the scale (low, medium or high). This matrix gives a visual representation of the risk ranking and helps
risk prioritization. The output of this stage is the risk register with the filtered risks sorted by priority.
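A minimal sketch of building such a matrix on the low/medium/high scale follows; the register entries are hypothetical, and the priority used for the final sort is simply the sum of the two scale ranks.

# A sketch of the risk probability/impact matrix on the low/medium/high scale.
LEVELS = ["low", "medium", "high"]

# Hypothetical filtered risk register: (name, probability level, impact level).
risk_register = [
    ("key developer leaves", "medium", "high"),
    ("requirements change late", "high", "medium"),
    ("test environment outage", "low", "high"),
]

# matrix[probability][impact] -> names of the risks that fall in that cell.
matrix = {p: {i: [] for i in LEVELS} for p in LEVELS}
for name, probability, impact in risk_register:
    matrix[probability][impact].append(name)

# Sort the register by a simple combined rank for prioritization.
rank = {level: n for n, level in enumerate(LEVELS)}
for name, probability, impact in sorted(
        risk_register, key=lambda r: rank[r[1]] + rank[r[2]], reverse=True):
    print(f"{name}: probability={probability}, impact={impact}")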

Collaborative risk response strategy
Considering the organization's risk tolerance and appetite, the project sponsors may analyze and decide which risks or opportunities
they want to explore or ignore. They can also add new risks, or delete or combine existing ones, which may require new risk analysis
by the project team. According to the risk matrix, project sponsors may want to monitor a risk/opportunity, reduce the impact of a
risk by taking preventive actions, or enhance the probability/impact of an opportunity. With the project sponsors' decisions about the
identified risks, it becomes possible to analyze relevant risk issues. The decisions in this stage guide the rest of the organization in
terms of RM activities. [6]


Collaborative Risk Management


1.2 MULTI-CHARACTERS OF RISKS

Risks are challenges that can have negative influences on a project unless they are handled properly. The efficiency of risk
management depends upon the cognition of risks. In this section, we examine the characteristics of software risks, including
multi-stages, multi-roles, multi-situations, uncertainty, multi-methods, multi-dimensions, multi-attributes and multi-objects. We call
these the multi-characters of software risks.


Flow Of Risk Management


Multi-stages
According to the software life cycle, software risks may exist in or derive from different stages, i.e., the bidding stage, requirement
analysis stage, source-code writing stage, product delivery stage and maintenance stage. Because software risks exist throughout the
software life cycle, risk management exists throughout the software life cycle too. Potential key risks should especially be identified
and prevented in time, which averts greater potential losses; fewer losses mean more profits. It is necessary for managers to attach
importance to risk management during the development processes and to deal with risks properly.

Multi-roles
A software project involves various roles relative to software risks, from bidding to the delivery and maintenance of the software
product. In the bidding stage, roles include the tender, bidder and supervisor. In the project approval stage, roles may include the
investor, the developer, and an uncertain market with its risks. In the normal development stage, the development team may be a
private, joint-venture or transnational enterprise, and roles then mainly include investors (stockholders), managers, developers, and
customers (the market). In the delivery and maintenance stages, roles include investors, the marketing branch, the development
branch, maintainers, customers, etc. Different roles may bring different risks, and different risks should be dealt with by different
people.

Multi-situations
Different development teams have different development models and different management models, and development environments
vary. For example, there are varied development teams or contractors, such as private, joint-venture or foreign-funded enterprises.
According to their practical situation, they may adopt development models such as waterfall, spiral or prototype, or different
development methods such as structured programming or object-oriented design. Different environments need different kinds of
management of staff members, enterprise image, supervision, etc. Different kinds of development teams may face different kinds of
risks under different development and management models. Risks also exist in different domains, such as flood risk, grassland fire,
medical science and geo-fields.

Uncertainty
A risk may occur, or may not occur. Risks occur with different probabilities at different times and in different environments; risks
are uncertain. If managers deal with risks correctly, the risks may be prevented; if managers pay no attention to risks or deal with
them incorrectly, the risks may bring losses (or sometimes bring fewer profits). Managers should control important risks properly and
prevent their occurrence or reduce their adverse impact during the risk control stage.

Multi-methods
There are many identification methods, such as the Delphi and AHP methods, and many ranking methods, such as risk exposure or
risk matrices.

Multi-dimensions
Software risks are normally identified along different dimensions (categories). For example, software risks have been described with
six dimensions: user, requirement, project complexity, planning & control, team and organization environment; or with three dimensions: project size,

technological experience and project structure; or with five dimensions: technological newness, application size, expertise, application
complexity and organizational environment.
During the risk identification phase, people can identify all the possible risks into one list, or sort the risks into several dimensions
according to their experience and their comprehension of the project and its risks.

Multi-attributes
Risk management uses probability and loss to rank risks. The probability of a risk is sometimes referred to as its occurrence
probability, frequency or likelihood. The impact of a risk is sometimes referred to as its magnitude, loss, severity, etc. Changing the
names does not affect the logic of risk assessment. During the risk identification process, decision-makers identify risks with large
losses or high probabilities. During the risk assessment process, decision-makers evaluate risks according to attributes of the risks or
combinations of those attributes. For example, risk exposure is the product of the probability and loss of a risk, and the exposure
value can be used to rank risks.

Multi-objects
There are many risks in software development. The target of risk management is to deal with most risks, or all the major risks, within
limited project resources. Each risk is an object of management, so we say there are multi-objects for risk management. There may be
a lot of risks in a project, and managers cannot deal with all of them identically given limited human and material resources. It is
necessary to assess the risks and deal with the most important ones first.
Since there are many methods, frameworks and ideas for identifying, evaluating and controlling risks, managers or decision makers
should choose the most suitable method for themselves.


Risk identification is an iterative process that seeks to identify the risks that may affect the project and to document their
characteristics. Currently there are different techniques for identifying risks, such as: brainstorming, Delphi, interviews, SWOT
analysis, checklists, cause-effect diagrams, flowcharts, and influence diagrams. The output of this work is the risk register. [5]

3. Use of Automated Risk Management Tools


In order to offer high-quality software products to the market in time and according to market requirements, it is important to find
computer-based tools with high accuracy to help managers make decisions. Software risk analysis and management can be partially
transformed into data analysis or data mining problems. Automated tools are designed to assist project managers in planning and
setting up projects, assigning resources to tasks, tracking progress, managing budgets, requirements, changes and risks, as well as
analyzing workloads.

Risk analysis and management are usually based on information collected from traditional knowledge, analogy to well-known cases,
common-sense assessment, results of experiments or tests, or reviews of inadvertent exposure. The first task for the automated tools
is to collect historical data to build up a database. Once the database exists, the tool processes the data and mines useful information
to help managers analyze risks and make decisions. There are many applicable methods in machine learning. For example, clustering
techniques are used to assign risk labels to different risks; in each cluster, risks have similar attributes. The association rule method is
used to analyze each cluster to find the relationships between risks and risk factors. Some other artificial intelligence methods (the
k-nearest neighbor approach, ID3 decision trees, neural networks, etc.) are used to build risk assessment models and to predict risks
in software development. On the market, there are many popular software packages for decision making that are also applicable to
risk management in software risk analysis. [2]
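As one concrete sketch of the clustering step just mentioned, and assuming scikit-learn is available, the snippet below groups hypothetical historical risks by their attribute vectors so that each cluster collects risks with similar profiles; association rules could then be mined within each cluster.

# Clustering risks by their attributes; the feature values are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one historical risk:
# [probability, normalized loss, schedule delay in weeks]
risk_data = np.array([
    [0.9, 0.8, 6.0],
    [0.8, 0.9, 5.0],
    [0.2, 0.1, 1.0],
    [0.3, 0.2, 0.5],
    [0.5, 0.5, 3.0],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(risk_data)
# Each risk receives a cluster label; risks sharing a label have similar
# attributes and can share an analysis or mitigation strategy.
print(kmeans.labels_)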

According to Hu and Huang, they randomly divided their software risk dataset into two subsets, 100 samples for training and 20 for
testing. Predictions were then made using a standard multilayer neural network, support vector machines, and a combination of a
genetic algorithm and a neural network, and the results of the three classifiers were compared. The standard neural network can
predict the outcome of software projects with 70% accuracy; SVM achieved a higher accuracy of 80%; and the highest correct
prediction results, 85%, were obtained from the combination of the genetic algorithm and the neural network [7].

According to Amanjot Singh Klair and Raminder Preet Kaur, SVM- and kNN-based approaches could serve as economical, automatic
tools to generate rankings of software by formulating the relationship from training data. They surveyed SVM and kNN models for
various applications and conclude that, for most software quality evaluation problems, the performance of the SVM model is better
than that of the kNN approach [8].


Hu and Zhang published an article about an intelligent model for software project risk prediction in which they compared ANN and
SVM methods. For the ANN method, the probabilities of the two categories of prediction errors are 10% and 15% respectively, while
for the SVM method they are 5% and 10% respectively, which shows that the proposed SVM-based risk prediction model achieves
better performance [9].

Tang and Wang published an article about a software project risk assessment model based on fuzzy theory. The model can measure
the combined impact of risks and resolves the uncertainty involved. They calculate quantitative risk-equivalent data and the semantic
distance between fuzzy numbers, and they combine demand, technology and software performance risks with progress, cost and
software quality [10].

3.1 WEKA data mining tool

Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at
the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. It is a collection of
machine learning algorithms for data mining tasks, applied directly to a dataset. WEKA implements algorithms for data
preprocessing, classification, regression, clustering and association rules, and it also includes visualization tools. New machine
learning schemes can also be developed with this package. Weka supports several standard data mining tasks, more specifically:
data preprocessing, clustering, classification, regression, visualization, and feature selection.

Main features of WEKA include:

- 49 data preprocessing tools
- 76 classification/regression algorithms
- 8 clustering algorithms
- 15 attribute/subset evaluators + 10 search algorithms for feature selection
- 3 algorithms for finding association rules
- 3 graphical user interfaces:
the Explorer, the Experimenter, and the Knowledge Flow


The data file normally used by Weka is in the ARFF file format, which uses special tags to indicate different things in the data file
(foremost: attribute names, attribute types, attribute values and the data). The main interface in Weka is the Explorer. It has a set of
panels, each of which can be used to perform a certain task. Once a dataset has been loaded, the other panels in the Explorer
can be used to perform further analysis. [11]
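A minimal sketch of what such an ARFF file looks like, written from Python, is shown below. The relation and attribute names are hypothetical stand-ins for the risk data discussed later; only the @relation, @attribute and @data tags are ARFF syntax.

# Writing a tiny ARFF file that Weka's Explorer can load directly.
arff_content = """\
@relation software_risk

@attribute regulation_effect numeric
@attribute financial_effect numeric
@attribute severity {Low, Medium, High}

@data
0.8, 0.9, High
0.1, 0.2, Low
"""

with open("software_risk.arff", "w") as f:
    f.write(arff_content)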



3.2 RESULTS AND DISCUSSION

The first aim is to measure the importance of the risk factors using 384 problems and six risk factors. The correlation between
severity and each of the six risk factors is calculated separately by Chi-Squared Statistics and Information Gain approaches to find the
importance level of the risk factors.
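A sketch of these two rankings, assuming scikit-learn, is given below: chi-squared scores and mutual information (the usual stand-in for information gain) are computed per factor. The toy data and the factor names other than those analyzed below are hypothetical placeholders for the Turkcell set.

# Ranking risk factors by chi-squared statistics and information gain.
import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(384, 6))                 # 384 problems x 6 factors
y = rng.choice(["Low", "Medium", "High"], size=384)   # severity class labels

chi2_scores, _ = chi2(X, y)                           # chi-squared statistic per factor
ig_scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)

factors = ["regulation_effect", "financial_effect", "operational_effect",
           "compliance_effect", "employee_effect", "brand_effect"]
for name, c, ig in zip(factors, chi2_scores, ig_scores):
    print(f"{name}: chi2={c:.2f}, IG={ig:.3f}")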

The second aim is to predict the impact of a problem using the model formed from the Turkcell ICT data set. Each problem has
different values on the risk factors, so we can estimate the impact of a problem by building a risk model. The 384 problems are used
as tuples and the 6 risk factors are used as features in our study. The severity value, which is Low, Medium or High, is used as the
class label.

Since the 384 problems have class labels, classification methods are used in our study. Support Vector Machines (SVMs), Naive
Bayes, Decision Tree (J48), k-Nearest Neighbor (kNN) and Multilayer Perceptron Neural Network (MLP) classifiers are used in this
work.

Importance Ranking of Risk Factors

Here, the importance ranking of the risk factors is obtained by feature selection methods, namely information gain and chi-squared
statistics, using the WEKA tool. The importance of the risk factors highlights the most significant factors that determine the impact
of problems. Problem severity can also be predicted with classifiers in the classification phase. We obtain correlation values between
each risk factor and the severity of problems. The ranking of the risk factors according to impact power is given in the table below.
Regulation Effect is the most distinctive and important risk factor for determining problem severity. The IG and χ² approaches give
the same results for the two most distinctive risk factors. This shows that problem values on Regulation Effect and Financial Effect
are more distinctive than the other risk factors for predicting problem severity. By the same logic, Employee Effect and Brand Effect
are less distinctive than the other risk factors, and the IG and χ² approaches also give the same results for the two least distinctive
risk factors. To sum up, if a problem in the project affects regulations in the company or the financial values of the company, the
severity of this problem is high. If our data set consisted of hundreds of risk factors, determining the ranking of the risk factors would
make it possible to discard unimportant factors before the risk evaluation phase.

Correlation values of risk factors




Problem Impact Prediction

The Turkcell data set supplies problem severity values, so the prediction of problem impact becomes a classification problem. The
data set has six features (the six risk factors) and each problem has a class label (its severity value), so forming a training model and
then testing this model with the same data gives an idea of how well problem impact can be predicted.
The 10-fold cross-validation technique is used to obtain accuracy values in the classification phase. It splits the data set randomly into
ten parts, uses nine parts to build the training model, and uses the remaining part as test data; this is repeated ten times to obtain all
classification test results. The classification performance of all five classifiers is measured using Precision, Recall and F-measure
values.
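A sketch of this evaluation protocol, assuming scikit-learn: 10-fold cross-validation of five classifiers analogous to those above (scikit-learn's decision tree stands in for Weka's J48), scored with a macro-averaged F-measure. The random data is only a placeholder for the proprietary Turkcell set.

# 10-fold cross-validation of the five classifier families used in the study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((384, 6))                    # 384 problems x 6 risk factors
y = rng.choice([0, 1, 2], size=384)         # severity: Low / Medium / High

classifiers = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "J48-like tree": DecisionTreeClassifier(),
    "kNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1_macro")
    print(f"{name}: mean F-measure = {scores.mean():.3f}")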



Classification Performance Values Of Classifiers



The highest F-measure value, 97.5 percent, is obtained from the MLP classifier. MLP also gives a higher Kappa statistic than the
other classifiers; it classified 376 problem severity values correctly. The results of SVMs follow those of MLP: the second highest
F-measure value is obtained from the SVM classifier, which also has the second highest Kappa statistic. NB and kNN give the lowest
F-measure and Kappa statistic values. An important point is that the number of instances correctly classified by the J48 classifier is
less than the numbers correctly classified by the kNN and NB classifiers, yet the Kappa statistic and F-measure of J48 are higher than
those of kNN and NB. This shows that the number of correctly classified instances alone is not sufficient for evaluating classification
performance; F-measure and the Kappa statistic are more reliable for non-homogeneous data sets in classification.

4. CONCLUSION:

Software risks that occur during the software development life cycle are one of the most important reasons for the low success rates
of software projects, so it is important to deal with risks before they become destructive. Software risk management therefore
comprises preventive key steps taken before the start of new software projects to increase their success rates. These preventive steps
specify the software risks and the impact of the risk factors, and they aim to dissipate uncertain software issues. In order to offer
high-quality software products to the market in time and according to market requirements, it is important to find computer-based
tools with high accuracy to help managers make decisions. The risk management tools and methods discussed here help project
managers run risk management programs in the most effective and efficient manner.

Acknowledgement

I wish to thank all who directly and indirectly contributed to this paper. First and foremost, I would like to thank Mrs. Amandeep
Kaur for her constant support and encouragement; she kindly read my paper and offered valuable details and guidelines. Second, I
would like to thank all the authors whose papers I referred to for their direct and indirect support in completing my work.


REFERENCES:

1. M. Özgür Cingiz, Ahmet Unudulmaz, Oya Kalıpsız (Computer Engineering Department, Yıldız Technical University),
"Prediction of Project Problem Effects on Software Risk Factors", 12th IEEE International Conference on Intelligent Software
Methodologies, Tools and Techniques, September 22-24, 2013.

2. Pu Tianyin, "Development of software project risk management model review", IEEE, 2011.

3. IRM, "A Risk Management Standard", published by AIRMIC, ALARM, IRM, 2002.

4. Sergey M. Avdoshin, Elena Y. Pesotskaya, "Software Risk Management", IEEE, 2011.

5. Yu Wang, Shun Fu, "A General Cognition to the Multi-characters of Software Risks", International Conference on
Computational and Information Sciences, 2011.

6. Pedro Sá Silva, António Trigo, João Varajão, "Collaborative Risk Management in Software Projects", Eighth International
Conference on the Quality of Information and Communications Technology, 2012.

7. Y. Hu, J. Huang, J. Chen, M. Liu, K. Xie, "Software Project Risk Management Modeling with Neural Network and Support
Vector Machine Approaches", International Conference on Natural Computation, 2007.

8. A. S. Klair, R. P. Kaur, "Software Effort Estimation using SVM and kNN", International Conference on Computer Graphics,
Simulation and Modeling, 2012, Pattaya, Thailand.

9. Y. Hu, X. Zhang, X. Sun, M. Liu, J. Du, "An Intelligent Model for Software Project Risk Prediction", International
Conference on Information Management, 2009.

10. A. Tang, R. Wang, "Software Project Risk Assessment Model Based on Fuzzy Theory", International Conference on Computer
and Communication Technologies in Agriculture Engineering, 2010.

11. http://en.wikipedia.org/wiki/Weka_(machine_learning)
























Cooperative Spectrum Sensing Using Hard Decision Fusion Scheme
Nikhil Arora¹, Rita Mahajan¹
¹PEC University of Technology, Chandigarh, India
E-mail: napj.nikhil@hotmail.com
Abstract - Cooperative spectrum sensing using energy detection is an efficient method of detecting the spectrum holes in a particular
band of interest or channel by combining the information gathered by multiple CR users. In this paper, we study the hard decision
fusion scheme using the Logical AND and Logical OR rules, and give a brief introduction to the soft and quantized fusion schemes.
Simulations compare the ROC (Receiver Operating Characteristic) curves for the above-mentioned schemes and show that the
Logical OR rule has better performance than the Logical AND rule.
Keywords - Cognitive radio (CR), energy detection, cooperative spectrum sensing, fusion scheme, hard decision fusion rule,
centralized sensing, AWGN channel
INTRODUCTION
The demand for ubiquitous wireless service is growing with the proliferation of mobile multimedia communication devices. As a
result, the vast majority of the available spectrum has already been licensed, so it appears that there is little or no room to add any
new services. On the other hand, studies have shown that most of the licensed spectrum is largely under-utilized. [1]
Therefore a radio which can identify and sense radio spectrum conditions, recognize temporarily vacant spectrum and make use of it
has the potential to offer higher-bandwidth services, enhance spectrum efficiency and lessen the need for centralized spectrum
organization. This can be achieved by a radio that makes autonomous decisions about how it accesses spectrum. Cognitive radios
have this potential: they can jump in and out of unused spectrum gaps to increase spectrum efficiency and provide wideband services.
They can improve spectral efficiency by sensing the environment and, in order to preserve the quality of service of the primary user,
filling the discovered gaps of unused licensed spectrum with their own transmissions. Precise spectrum awareness is the main concern
for the cognitive radio system (the secondary user). The proposal is adaptive transmission in unused spectral bands without causing
interference to the primary user. The transmissions of licensed users have to be detected without failure, and the main prerequisite
for adaptive transmission is the detection of vacant frequency bands. The aim is a cognitive radio that is intelligent enough to detect
vacant frequency bands efficiently, obtaining maximum throughput without causing any detrimental harm to the primary user's
quality of service. Therefore, a reliable spectrum sensing technique is needed. Energy detection is simple and serves as a practical
spectrum sensing scheme.
As a key technique to improve spectrum sensing for the Cognitive Radio Network (CRN), cooperative sensing is proposed to combat
sensing problems such as fading, shadowing, and receiver uncertainty. The idea of cooperative spectrum sensing in an RF sensor
network is the collaboration of nodes in deciding the spectrum band used by the transmitters emitting the signal of interest. Nodes
send either their test statistics or their local decisions about the presence of the signal of interest to a decision maker, which can be
another node.
Centralized cooperative spectrum sensing (as shown in Fig 1) can be understood as follows:
- All cooperating CRs perform local spectrum sensing of the channel or frequency individually and send the information to the
Fusion Centre (FC) through reporting channels.

- Then the FC fuses the sensing information (using either hard or soft decision techniques) to decide the vacancy of the spectrum.
- The FC then passes the decision back to the CRs.

Fig 1. Centralized cooperative spectrum sensing.
In this paper we study and implement the Logical AND and Logical OR hard fusion techniques. An energy detection method based
on the Neyman-Pearson criterion [2] is used for local spectrum sensing, and the hard fusion technique is then used for the detection of
the primary user (PU).
The rest of the paper is organized as follows: Section II presents the concept of the two hypotheses (the analytic model), spectrum
sensing through energy detection for a single node, and cooperative spectrum sensing. Section III presents simulation results,
followed by the conclusion in Section IV.
SYSTEM MODEL
Concept of two hypotheses
Spectrum sensing is a key element in a cognitive radio network; in fact, it is the foremost step that needs to be performed for
communication to take place. Spectrum sensing can be reduced to an identification problem, modelled as a hypothesis test [3]. The
sensing equipment has to decide between one of the two hypotheses:

H1: x(n) = s(n) + w(n)   (2.1)

H0: x(n) = w(n)   (2.2)

where s(n) is the signal transmitted by the primary user, x(n) is the signal received by the secondary user, and w(n) is additive white
Gaussian noise with variance σ².


Fig 2.1 Hypothesis problem model

International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

38 www.ijergs.org

As shown in Fig 2.1 above, hypothesis H0 indicates the absence of the primary user, meaning that the frequency band of interest
contains only noise, whereas H1 points towards the presence of the primary user.
Thus, for the two-hypothesis test, the important cases are:

Deciding H1 when the primary user is present, i.e. P(H1/H1), is known as the Probability of Detection (Pd).
Deciding H0 when the primary user is present, i.e. P(H0/H1), is known as the Probability of Miss-Detection (Pm).
Deciding H1 when the primary user is absent, i.e. P(H1/H0), is known as the Probability of False Alarm (Pf).

The probability of detection is of main concern as it gives the probability of correctly sensing the presence of the primary user in the
frequency band. The probability of miss-detection is just the complement of the detection probability. The goal of the sensing
schemes is to maximize the detection probability for a low probability of false alarm.
Energy Detection

If the secondary user cannot gather sufficient information about the PU signal, the optimal detector (owing to its low complexity) is
an energy detector, also called a radiometer [4]. It is a common method for the detection of unknown signals. The block diagram of
the energy detector is shown in Fig 2.2.

Fig 2.2 Energy Detection block diagram

First, the input signal y(t) is filtered with a band pass filter (BPF) in order to limit the noise and to select the bandwidth of interest.
The noise at the output of the filter has a band-limited, flat spectral density. Next comes the energy detector itself, consisting of a
squaring device and a finite-time integrator.
The output signal V from the integrator is

V = (1/T) ∫_0^T |y(t)|² dt   (2.3)

Finally, this output signal V is compared to the threshold λ given by Digham [5] in order to decide whether a signal is present or not.
The threshold is set according to the statistical properties of the output V when only noise is present. The probabilities of detection
Pd and false alarm Pf [6] are given as follows:

Pd = P{V > λ | H1}   (2.4)

Pf = P{V > λ | H0}   (2.5)

From the above expressions, a low Pd results in missing the presence of the primary user with high probability, which in turn
increases the interference to the primary user, while a high Pf results in low spectrum utilization, since false alarms increase the
number of missed opportunities. Since it is easy to implement, recent work on detection of the primary user has generally adopted the
energy detector. However, the performance of the energy detector [7] is susceptible to

uncertainty in the noise power. In order to solve this problem, a pilot tone from the primary transmitter can be used to help improve
the accuracy of the energy detector. The energy detector is also prone to false detections triggered by unintended signals.
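As a quick illustration of Eqs. (2.3)-(2.5), the sketch below forms the discrete-time energy statistic from received samples and compares it with a threshold. The SNR, sample count and threshold value are illustrative choices, not the Digham threshold itself.

# Energy detection on N received samples: V = (1/N) * sum(|x(n)|^2).
import numpy as np

rng = np.random.default_rng(0)
N = 1000                          # samples, as in the simulations below
snr = 10 ** (-10 / 10)            # SNR of -10 dB in linear scale

w = rng.normal(0.0, 1.0, N)                 # w(n): unit-variance AWGN
s = np.sqrt(snr) * rng.normal(0.0, 1.0, N)  # s(n): PU signal at the given SNR
x = s + w                                   # received samples under H1

V = np.mean(np.abs(x) ** 2)       # test statistic, discrete form of Eq. (2.3)
threshold = 1.1                   # illustrative threshold
decision = "H1 (signal present)" if V > threshold else "H0 (noise only)"
print(f"V = {V:.3f} -> {decision}")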
Cooperative spectrum sensing
Under fading or shadowing, the received signal strength can be very low and this can prevent a node from sensing the signal of
interest. Noise can also be a challenge when energy detection is used for spectrum sensing, although there are spectrum sensing
techniques that are robust in the presence of noise, such as feature detection approaches [8]. Due to a low signal-to-noise ratio (SNR)
value, the signal of interest may not be detected.
The idea of cooperative spectrum sensing in an RF sensor network is the collaboration of nodes in deciding the spectrum band used
by the transmitters emitting the signal of interest. Nodes send either their test statistics or their local decisions about the presence of
the signal of interest to a decision maker, which can be another node. Through this cooperation, the unwanted effects of fading,
shadowing and noise can be minimized [8], because a signal that is not detected by one node may be detected by another. Fig. 1
illustrates the cooperation of nodes in the detection of a signal of interest under shadowing and fading conditions. As the number of
collaborating nodes increases, the probability of missed detection for all nodes decreases [9].
Cooperation in spectrum sensing also improves the overall detection sensitivity of an RF sensor network without requiring individual
nodes to have high detection sensitivity [8]. Less sensitive detectors on nodes mean reduced hardware and complexity [8]. The
trade-off for cooperation is more communication overhead [8]. Since the local sensing results of the nodes must be collected at a
decision maker, where the decision is made, a control channel is required between the decision maker and the other nodes [8].
There are three forms of cooperation in spectrum sensing: hard decision (also known as decision fusion), soft decision (also known
as data fusion) and quantized decision. The difference between these forms is the type of information sent to the decision maker.
The following subsections give a detailed introduction to hard decision fusion and brief introductions to the soft decision fusion and
quantized decision fusion schemes.
1. Hard Decision
In the hard decision fusion scheme, the local decisions of the nodes are sent to the decision maker. The main advantage of this
method is that it needs limited bandwidth [10]. The algorithm for this scheme is as follows [9]. Every node first performs local
spectrum sensing and makes a binary decision on whether a signal of interest is present or not by comparing the sensed energy with a
threshold. All nodes send their one-bit decision results to the decision maker. Then, a final decision on the presence of the signal of
interest is made by the decision maker.
The detection probability P_d,k, miss detection probability P_m,k and false alarm probability P_f,k of the k-th CR user over AWGN
channels can be expressed in the following way [4]:

P_d,k = Q_m(√(2γ), √λ)   (2.6)

P_m,k = 1 - P_d,k   (2.7)

P_f,k = Γ(m, λ/2) / Γ(m)   (2.8)

where γ is the signal-to-noise ratio (SNR), λ is the decision threshold, m = TW is the time-bandwidth product, Q_m(·,·) is the
generalized Marcum Q-function, and Γ(·) and Γ(·,·) are the complete and incomplete gamma functions respectively.
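For completeness, here is a sketch of how Eqs. (2.6)-(2.8) can be evaluated numerically, assuming SciPy is available. It relies on the standard identity Q_m(a, b) = 1 - F(b²; 2m, a²), where F is the CDF of a noncentral chi-square variable, and on the regularized upper incomplete gamma function for P_f; the parameter values are illustrative.

# Closed-form Pd, Pm and Pf of the energy detector over an AWGN channel.
from scipy.special import gammaincc
from scipy.stats import ncx2

snr = 10 ** (-10 / 10)    # gamma: SNR of -10 dB
m = 5                     # illustrative time-bandwidth product TW
lam = 12.0                # illustrative decision threshold lambda

Pd = 1 - ncx2.cdf(lam, df=2 * m, nc=2 * snr)   # Eq. (2.6): Q_m(sqrt(2*snr), sqrt(lam))
Pm = 1 - Pd                                    # Eq. (2.7)
Pf = gammaincc(m, lam / 2)                     # Eq. (2.8): Gamma(m, lam/2) / Gamma(m)
print(f"Pd = {Pd:.4f}, Pm = {Pm:.4f}, Pf = {Pf:.4f}")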

Three of the rules used by the decision maker for a final decision are now discussed.

a. Logical-OR Rule
In this rule, if any one of the local decisions sent to the decision maker is a logical one (i.e., any one of the nodes decides that the
signal of interest is present), the final decision made by the decision maker is one (i.e., the decision maker decides that the signal of
interest is present) [11]. The cooperative detection probability Q_d, cooperative false alarm probability Q_f and cooperative miss
detection probability Q_md over K cooperating users are defined as:

Q_d,or = 1 - ∏_{k=1}^{K} (1 - P_d,k)   (2.9)

Q_f,or = 1 - ∏_{k=1}^{K} (1 - P_f,k)   (2.10)

Q_md,or = 1 - Q_d,or   (2.11)

b. Logical-AND Rule
In this rule, if all of the local decisions sent to the decision maker are one (i.e., all of the nodes decide that the signal of interest is
present), the final decision made by the decision maker is one (i.e., the decision maker decides that the signal of interest is
present) [11].

Q_d,and = ∏_{k=1}^{K} P_d,k   (2.12)

Q_f,and = ∏_{k=1}^{K} P_f,k   (2.13)

Q_md,and = 1 - Q_d,and   (2.14)



c. Majority Rule
In this rule, if half or more of the local decisions sent to the decision maker are one (i.e., half or more of the nodes decide
that the signal of interest is present), the final decision made by the decision maker is one (i.e., decision maker decides
that the signal of interest is present) [11].
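The three rules are simple to combine numerically. The sketch below, assuming identical and independent CR users with illustrative local probabilities, evaluates Eqs. (2.9)-(2.14) together with a binomial-tail form of the majority rule; sweeping the local threshold (and hence P_d,k and P_f,k) inside such a loop is one way the complementary ROC curves in the next section can be produced.

# Hard-decision fusion of K one-bit local decisions at the fusion centre.
import numpy as np
from scipy.stats import binom

K = 10
Pd = np.full(K, 0.6)      # local detection probabilities of the K CR users
Pf = np.full(K, 0.1)      # local false alarm probabilities

# Logical-OR rule, Eqs. (2.9)-(2.11): decide H1 if at least one user reports 1.
Qd_or = 1 - np.prod(1 - Pd)
Qf_or = 1 - np.prod(1 - Pf)
Qmd_or = 1 - Qd_or

# Logical-AND rule, Eqs. (2.12)-(2.14): decide H1 only if every user reports 1.
Qd_and = np.prod(Pd)
Qf_and = np.prod(Pf)
Qmd_and = 1 - Qd_and

# Majority rule: decide H1 if half or more of the K users report 1
# (binomial tail, since the users are identical and independent).
Qd_maj = binom.sf(np.ceil(K / 2) - 1, K, Pd[0])

print(f"OR:  Qd={Qd_or:.4f}, Qf={Qf_or:.4f}, Qmd={Qmd_or:.4f}")
print(f"AND: Qd={Qd_and:.6f}, Qf={Qf_and:.2e}, Qmd={Qmd_and:.6f}")
print(f"Majority: Qd={Qd_maj:.4f}")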

2. Soft Combination
In the soft combination scheme, nodes send their sensing information directly to the decision maker without making any
decisions [12]. The decision is made at the decision maker by the use of this information [12]. Soft combination provides
better performance than hard combination, but it requires a wider bandwidth for the control channel [13]. It also requires
more overhead than the hard combination scheme [12].

3. Quantized Fusion
Instead of sending the received signal energy values as in conventional soft schemes, the CRs quantize their observations according
to their received signal energy and the quantization boundaries. The quantized level is then forwarded to the fusion centre, which
sums up all the received quantized levels and compares the result to the fusion threshold [14]. First, the optimization of both uniform
and non-uniform quantization for cooperative spectrum sensing is considered. Then, a low-complexity quantized approach using an
approximated CDF under H1 is investigated. In these schemes, the optimization is based only on H1 in order to minimize the
quantization uncertainty for the PU's signal, and hence improve the detection probability.
SIMULATIONS AND RESULTS
In this section we study the detection performance of the scheme through simulations using complementary ROC curves. First, we
present the performance of energy detection for a single node, i.e. without cooperation. Second, we present the performance of the
hard decision rule using Logical AND, and a comparison of the simulated Logical OR rule with the theoretical Logical AND rule.
For energy detection with a single node (i.e. without cooperation), Fig 3.1 presents the complementary ROC curve between the
probability of false alarm and the probability of miss detection. For the simulation, we use an SNR of -10 dB under the AWGN
channel, considered over 1000 samples.

Fig 3.1 Complementary ROC curve under AWGN channel for single node (i.e. without cooperation).

For the hard decision scheme using the Logical AND rule, Fig 3.2 shows the complementary ROC curve, as discussed in
Section II.C.1.b, under the AWGN channel. For the simulation, we plotted the miss detection probability using the Monte Carlo
technique with 1000 iterations. The number of CR users is 10 in the simulation, each with an SNR of -10 dB, whereas for the
theoretical curves different numbers of CRs are chosen (5 and 10).

Fig 3.2 Complementary ROC curve for the hard decision Logical AND rule under the AWGN channel over 1000 samples.

Fig 3.3 compares the complementary ROC curve of the hard decision Logical OR rule with the theoretical hard decision
Logical AND rule (with 5 and 10 CRs), each user having an SNR of -10 dB, simulated over 1000 Monte Carlo iterations.

Fig 3.3 Complementary ROC curve comparing the Logical OR rule with the Logical AND rule of the hard decision scheme.
Conclusions
In this paper we have studied and implemented cooperative spectrum sensing using the hard decision Logical AND and Logical OR
rules based on energy detection. From the simulations it is evident that the performance of spectrum sensing increases with
cooperation, although there is a trade-off between performance and architecture complexity. The simulation results also show that
the hard decision OR rule has better performance than the
hard decision Logical AND rule. This is because the FC decides in favor of the presence of the primary signal when at least one CR
detects it, whereas in the Logical AND rule all CR users must detect the primary user.
REFERENCES:
1. James O'Daniel Neel, "Analysis and Design of Cognitive Radio Networks and Distributed Radio Resource Management
Algorithms", PhD Dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA, pp. 27, 2006.
2. S. M. Kay, Fundamentals of Statistical Signal Processing and Estimation Theory, Prentice Hall, 1998.
3. Mahmood A. Abdulsattar, "Energy detection technique for spectrum sensing in cognitive radio: a survey", Department of
Electrical Engineering, University of Baghdad, Baghdad, Iraq, International Journal of Computer Networks &
Communications (IJCNC), Vol. 4, No. 5, September 2012.
4. T. Yucek and H. Arslan, "A Survey of Spectrum Sensing Algorithms for Cognitive Radio Applications", IEEE
Communications Surveys & Tutorials, Vol. 11, No. 1, pp. 116-130, 2009.
5. H. Urkowitz, "Energy detection of unknown deterministic signals", Proceedings of the IEEE, vol. 55, no. 4, pp. 523-531,
1967.
6. F. F. Digham, M.-S. Alouini, M. K. Simon, "On the energy detection of unknown signals over fading channels", IEEE
Transactions on Communications, vol. 55, no. 1, pp. 21-24, 2007.
7. D. Cabric, A. Tkachenko and R. W. Brodersen, "Spectrum Sensing Measurements of Pilot, Energy and Collaborative
Detection", IEEE Military Communications Conference, pp. 1-7, 2006.
8. G. Schay, Introduction to Probability with Statistical Applications, Birkhäuser, 2007.
9. M. Grinstead and J. L. Snell, Introduction to Probability, American Mathematical Society, 1998.
10. J. Ma and Y. Li, "Soft combination and detection for cooperative spectrum sensing in cognitive radio networks", in Proc.
IEEE Global Telecomm. Conf., 2007, pp. 3139-3143.
11. B. Wang and K. Liu, "Advances in cognitive radio networks: A survey", IEEE Journal of Selected Topics in Signal
Processing, vol. 5, no. 1, pp. 5-23, 2011.
12. J. Mitola III and G. Q. Maguire Jr., "Cognitive radio: making software radios more personal", IEEE Personal
Communications, vol. 6, no. 4, pp. 13-18, 1999.
13. S. Shobana, R. Saravanan, and R. Muthaiah, "Matched filter based spectrum sensing on cognitive radio for OFDM WLANs",
International Journal of Engineering and Technology, vol. 5, 2013.
14. J. Ma and Y. Li, "Soft combination and detection for cooperative spectrum sensing in cognitive radio networks", in Proc.
IEEE Global Telecomm. Conf., 2007, pp. 3139-3143.















New Unicast Routing Protocol Using Comparative Study of Proactive, Reactive and Hybrid Protocols for MANET
Karan Sood¹, Nagendra Sah¹
¹PEC University of Technology, Chandigarh, India
E-mail: karansood2march@gmail.com
Abstract - Mobile ad-hoc networks (MANETs) are self-configuring networks of nodes connected wirelessly without any form of
centralized administration. This kind of network is currently one of the most important research subjects, due to the huge variety of
applications (emergency, military, etc.). In MANETs, each node acts both as a host and as a router; thus, it must be capable of
forwarding packets to other nodes. The topologies of these networks change frequently. There are three main classes of routing
protocols for MANETs: reactive, proactive and hybrid. By studying the advantages and disadvantages of each one, a new hybrid
routing protocol is proposed. The new scheme utilizes the merits of both reactive and proactive protocols and implements them as a
hybrid approach: it allows a mobile node to flexibly run either a proactive or a reactive routing protocol according to its velocity and
its traffic. The new routing protocol is evaluated qualitatively. To verify its feasibility, a performance comparison with other typical
existing routing protocols [13] is discussed.

Keywords - MANETs, reactive, proactive, hybrid, AODV, OLSR, ZRP, DSR

INTRODUCTION
Mobile ad hoc networks (MANETs) [1][2] are autonomous systems of mobile hosts connected by wireless links. To achieve efficient
communication between the nodes connected to the network, new routing protocols keep appearing, because the traditional routing
protocols for wired networks do not take into account the limitations that appear in the MANET environment.
A lot of routing protocols for MANETs have been proposed in recent years. The IETF is investigating this subject and, for example,
protocols like AODV (Ad hoc On Demand Distance Vector) [4] and OLSR (Optimized Link State Routing protocol) [3] have been
proposed as RFCs (Requests for Comments). However, none of the existing protocols is suitable for all network applications and
contexts. The routing protocols for MANETs can be classified into three groups: reactive, proactive and hybrid.
The proactive protocols are based on the traditional distributed shortest-path protocols. With them, every node maintains in its
routing table a route to every destination in the network. To achieve this, update messages are transmitted periodically by all the
nodes; as a consequence, these protocols present high bandwidth consumption and a large routing overhead. However, as an
advantage, the route to any destination is always available, so the delay is very small.
The reactive protocols determine a route only when necessary, with the source node in charge of the route discovery. As a main
advantage, the routing overhead is small since routes are determined only on demand; as a main disadvantage, the route discovery
introduces a large delay.
The hybrid protocols are adaptive and combine proactive and reactive protocols.
The major part of this work has been to find and study information on the current state of the art in MANETs and the routing
protocols in use (taking into account the advantages and disadvantages of each one depending on the kind of MANET), and to design
a new routing protocol using the acquired knowledge.
In this paper we evaluate the merits and demerits of four existing protocols and try to work out a new routing protocol which uses the
strong points of each. The four protocols considered are AODV, OLSR, DSR [6] and ZRP. The results of these four protocols are
compared and a new theoretical routing protocol is proposed.

MOBILE AD-HOC NETWORKS: MANETS
Mobile Ad-Hoc networks, or MANETs, are mobile wireless networks capable of autonomous operation. Such networks operate
without a base station infrastructure: the nodes cooperate to provide connectivity. A MANET also operates without centralized
administration, with the nodes cooperating to provide services. The figure below illustrates an example of a Mobile Ad-Hoc network.




The main characteristic of MANETs is that the hosts use a wireless medium. In addition, they can move freely, so the network
topology changes constantly, and no previous infrastructure is needed. Another characteristic is that the hosts act as routers.

ROUTING PROTOCOLS FOR MOBILE AD-HOC NETWORKS
As stated above, MANETs need routing protocols different from those of wired networks. There are three types of routing protocols
for MANETs:
Table-driven (proactive) [7]: OLSR, TBRPF [8], DSDV (Dynamic Destination Sequenced Distance Vector) [9], CGSR (Cluster-head
Gateway Switch Routing protocol) [10], WRP (Wireless Routing Protocol), OSPF (Open Shortest Path First) [11] MANET, etc.
Demand-driven (reactive): AODV, DSR, TORA (Temporally Ordered Routing Algorithm) [12], etc.
Hybrid: ZRP (Zone Routing Protocol), HSLS (Hazy Sighted Link State), etc.
In the proactive protocols, each node has a routing table, updated periodically, even when the nodes don't need to forward any
messages.

REACTIVE ROUTING PROTOCOLS
These protocols find the route on demand by flooding the network with Route Request packets. The main characteristics of these
protocols are:
Path-finding process only on demand.
Information exchange only when required.
For route establishment, the network is flooded with requests and replies.

THE DYNAMIC SOURCE ROUTING (DSR)
DSR is a reactive routing protocol that uses source routing: the source node must determine the path of the packet. The path is
attached in the packet header, which allows the information stored in the nodes along the path to be updated. There are no periodic
updates; hence, when a node needs a path to another one, it determines the route with its stored information and with a route
discovery protocol.
THE AD-HOC ON DEMAND DISTANCE VECTOR (AODV)
The AODV protocol is a reactive routing protocol. It is a single-scope protocol and is based on DSDV. The improvement consists of
minimizing the number of broadcasts required to create routes. Since it is an on-demand routing protocol, the nodes which are not on
the selected path need not maintain the route nor participate in the exchange of tables.


PROACTIVE ROUTING PROTOCOLS
These algorithms maintain a fresh list of destinations and their routes by distributing routing tables in the network periodically. The
main characteristics are:
These protocols are extensions of wired network routing protocols.
Every node keeps one or more tables.
Every node maintains the network topology information.
Tables need to be updated frequently.

OPTIMIZED LINK STATE ROUTING (OLSR)
OLSR is a proactive link-state routing protocol. It is a point-to-point routing protocol based on the link-state algorithm: each node
maintains a route to all the other nodes of the ad hoc network. The nodes of the ad hoc network periodically exchange link-state
messages, but OLSR uses the multipoint relaying strategy to minimize the number of messages and the number of nodes that
broadcast the routing messages.



HYBRID ROUTING PROTOCOLS
These protocols are a combination of reactive and proactive routing protocols, trying to overcome the limitations of each. Hybrid
routing protocols have the potential to provide higher scalability than purely reactive or proactive protocols.

THE ZONE ROUTING PROTOCOL (ZRP)
The Zone Routing Protocol is a hybrid routing protocol that combines the advantages of reactive and proactive routing protocols. The
protocol divides the network into different zones, which are the nodes' local neighbourhoods. Each node has its own zone; a node can
lie within multiple overlapping zones, and each zone can be of a different size.
ZRP [5][6] runs three routing protocols:

Intrazone Routing Protocol (IARP)
Interzone Routing Protocol (IERP)
Bordercast Resolution Protocol (BRP)
IARP is a link-state routing protocol. It operates within a zone and learns routes proactively; hence, each node has a routing table for
reaching the nodes within its zone.
IERP uses the border nodes to find a route to a destination node outside the zone; it makes use of the BRP.
BRP is responsible for the forwarding of route requests. When the route discovery process begins, the source node consults its
routing table and, if necessary, starts a route search across different zones to reach the destination.

A NEW ROUTING PROTOCOL FOR MANETS
Since many typical routing protocols have already been proposed, the new protocol uses two existing protocols directly. For
proactive areas, OLSR is utilized because it is very popular and performs well compared with other proactive routing protocols.
Reactive nodes run AODV, since it introduces no additional overhead as the network grows; besides, when mobility is very high,
AODV shows impressive resilience.
PROTOCOL DESCRIPTION
The description of routing protocol is quite easy. Each node checks its velocity and its traffic periodically. If the velocity is smaller
than a threshold X, or the traffic is higher than a threshold Z, then the node will try to join or to create a proactive area. Within this
area, the features to use are the same that in the OLSR. If not, the node will work in reactive mode, using the same features that
AODV. The proactive areas have a limited size in number of nodes. The number of nodes within an area cannot be greater than a
threshold Y. If a node that wants to join an area does not find an area with less than Y nodes, it has to create a new area or it cannot
work in proactive mode. But not all the nodes inside the area work like pure OLSR. There are some nodes that have to work as
gateways to communicate the area with the outside. Similarly, not all the nodes outside the area work in the same way that AODV.
Some of them have special features to allow the communication between reactive and proactive nodes.

ROUTING PROTOCOL PARAMETERS
First of all, there are some parameters that have to be described to understand the operation of it.
V=velocity
Periodically, the node checks its velocity to know if topology changes can happen. The velocity to have into account to switch from an
operation mode to another is the average velocity.
X= threshold velocity=3.5 m/s
A review of different performance studies shows that AODV is better than OLSR over the whole range of mobility from the point of view of throughput, total generated network traffic and resilience. However, when the nodes are semi-static (at very low velocities) OLSR can perform better in terms of end-to-end delay. This is because, in a network with few topology changes, OLSR can almost always provide the shortest available path.
N=number of nodes in the area
N is the number of nodes working in the same area using the proactive features.
Y= threshold number of nodes in an area = 90
The proactive area works in the same way as OLSR. OLSR reduces superfluous forwarding, reduces the size of LS updates, and reduces the table size. However, as the number of nodes in an OLSR area increases, the number of control packets increases. According to the study made on OLSR, the network should not exceed 400 nodes because it generates excessive control packets, and the same study demonstrates that the packet delivery ratio decreases if the number of nodes is bigger than 100.
Therefore, a good threshold for the number of nodes in an OLSR network could be 90. OLSR allows choosing a large value for the number of nodes in a network, but when this value exceeds 100 the performance of the protocol may decrease. With the threshold set at 90 nodes, there is a margin of 10 nodes before reaching this critical point.
T= Traffic
T is the traffic that a node manages. This traffic is data traffic only (no control traffic), and includes both the traffic generated by the node and the traffic routed by the node but generated in other nodes.
Z= threshold value of traffic= 300 kbps
As explained before, when the traffic in the network is high, the nodes need to know the route to the destination as fast as possible. In
this case a proactive routing protocol outperforms the reactive one because it already has the route when necessary.
A NODE OPERATION
A node working with this protocol will use different features depending on its velocity, traffic and environment. Besides the Initial state, six states are defined for a node: R1 (Reactive 1), R2 (Reactive 2), R3 (Reactive 3), P1 (Proactive 1), P2 (Proactive 2) and P3 (Proactive 3).
Initial state: When a node is reset it begins in the initial state. In this state the node must check its velocity and its traffic to decide in which mode it has to work. We define condition 1 as: (V <= X) OR (T > Z). If condition 1 does not hold, then the node will work in the
reactive mode (Reactive 1), but if condition 1 happens, then it will try to work in the proactive mode. Hence, the node will pass to the
Reactive 3 state.
Reactive 1: In this state, the node works using the AODV features. While condition 1 is not fulfilled and the node does not have connectivity with an area, it will remain in the same mode of operation. If the node discovers one or more nodes working in the Proactive 1 or Proactive 2 modes, then it will work in the Reactive 2 mode. If condition 1 is fulfilled, then it will try to work in proactive mode (Reactive 3).


Reactive 2: In this state, the node works using the AODV features, but it must also process the control messages coming from the proactive zone, because it needs these messages to keep the proactive destinations in its routing table. While condition 1 does not hold and connectivity with a node working in the Proactive 1 or Proactive 2 modes continues, the node will remain in the same state. If condition 1 is not fulfilled but the node loses connectivity with the mentioned routers, then it will come back to the Reactive 1 state. If condition 1 occurs, then it will try to work in proactive mode (Reactive 3 state).
Reactive 3: This state exists because, when a node decides that working in proactive mode is better, it must first join or create an area. In this state the node still works using the AODV features, but it also has to generate and process the proactive control messages. If condition 1 no longer holds, the node will come back to the Reactive 1 state. But while condition 1 holds, the node will try to join or to create an area. If it hears another node working in the Reactive 3, Proactive 1 or Proactive 2 modes, then it will join that area unless the number of nodes N in the area is greater than Y. If N > Y, the node remains in the same state, waiting to hear another area with fewer nodes.
Proactive 1: In this state the router works using the OLSR features. If condition 1 is not fulfilled, the node will go to the Reactive 1
state. But when condition 1 is fulfilled, the node will continue working in this state unless it discovers a node working in the Reactive
1 or Reactive 2 states. Then it will go to the Proactive 2 state.
Proactive 2 (Area Border Router): In this state the node works using the OLSR features but it has to understand the reactive routing
messages (RREQ, RREP and RERR) because it needs to have in its routing table all the reactive 2 nodes connected with it. When an
ABR (Area Border Router) receives a reactive routing message (RREQ, RREP or RERR) it must look for the destination. If the
destination is inside its own area, then it answers that message reactively. If not, it forwards it to all the other ABRs of its area. These exit ABRs will change the flags again. If condition 1 is not fulfilled, the node will go to the Reactive 1 state. But while condition
1 holds, the node will continue working in this state unless it loses all connectivity with the nodes working in the Reactive 1 or Reactive 2 states. In that case it will go to the Proactive 1 mode.
A node goes to Initial State from every state when it is reset.
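The state transitions described above can be summarized in code. The sketch below is a simplified Python illustration, assuming the thresholds X = 3.5 m/s, Y = 90 nodes and Z = 300 kbps defined earlier; the helper predicates (hears_proactive_node, hears_reactive_node, area_size) are hypothetical names, and the P3 state and gateway details are omitted.

    # Simplified sketch of the node's mode-selection logic (not the full protocol).
    X, Y, Z = 3.5, 90, 300  # velocity (m/s), max area size (nodes), traffic (kbps)

    def condition1(v, t):
        # Condition 1: the node should try to work in proactive mode.
        return v <= X or t > Z

    def next_state(state, v, t, node):
        if not condition1(v, t):
            # Reactive operation; R2 if a proactive area is within reach.
            return "R2" if node.hears_proactive_node() else "R1"
        if state in ("Initial", "R1", "R2"):
            return "R3"                     # try to join or create an area
        if state == "R3" and node.hears_proactive_node() and node.area_size() <= Y:
            return "P1"                     # joined an area: pure OLSR features
        if state == "P1" and node.hears_reactive_node():
            return "P2"                     # become an Area Border Router
        if state == "P2" and not node.hears_reactive_node():
            return "P1"
        return state                        # otherwise remain in the same state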

SIMULATION
ROUTING PROTOCOL SCALABILITY NETWORK SIZE

The network size vs. throughput graph in the figure plots the per-node average of application-level observations of data received (bps). According to these results, DSR is the best routing protocol when the network grows with this particular configuration. OLSR and AODV perform similarly in the range of 0-100 nodes, but when the number of nodes is greater, AODV performs better.

NODE DENSITY

The Control Overhead curve for the Node Density experiments is shown; the control overhead measurements are normalized. The horizontal axis represents the distance between neighbouring nodes in the grid.
Sparse networks have longer path lengths. Thus, in these networks there are more rebroadcasts of route requests and more route reply packets. For that reason DSR's control overhead increases as the density gets smaller. AODV, however, begins with a high overhead when the node density is high, but uses fewer control packets as the density decreases.

NUMBER OF HOPS


The most surprising result is that the latency for OLSR has the highest values from 1 to 10 hops, and generally the highest slope. For OLSR to lose its innate advantage in latency, network route convergence would have to be slower than route acquisition, and given the high control overhead data collected for this experiment set, it is easy to see that this is the case. Under normal circumstances, however, OLSR is supposed to be the best of the analysed protocols from the point of view of latency.

MOBILITY


AODV is the best here. DSR starts out with higher throughput in the lowest-mobility case, but the DSR optimizations seem less able to handle high mobility; it still manages a second-place finish. OLSR is the third-place finisher: it is somewhat less scalable than DSR, but follows a roughly similar curve of decline. ZRP is the worst in this roundup.
Graph keys: Dark Blue-AODV
Light Blue-ZRP
Pink-DSR
Yellow-OLSR

CONCLUSION
The AODV and DSR protocols will perform better in networks with static traffic and a relatively small number of source and destination pairs per host. In this case, AODV and DSR use fewer resources than OLSR, because the control overhead is small, and they require less bandwidth to maintain the routes. Besides, the routing table is kept small, reducing the computational complexity. Both reactive protocols can be used in resource-critical environments.
The OLSR protocol is more efficient in networks with high density and highly sporadic traffic. Quality metrics are easy to add to the current protocol; hence, it is possible for OLSR to offer QoS. However, OLSR continuously requires some bandwidth in order to receive the topology update messages.
The scalability of both classes of protocols is restricted by their proactive or reactive characteristics: for reactive protocols, it is the flooding overhead in high-mobility and large networks; for the OLSR protocol, it is the size of the routing table and of the topological update messages.
ZRP is supposed to perform well in large networks with low zone overlapping. But in none of the papers considered to write this thesis did ZRP show better performance than the other protocols. Besides, as a disadvantage, there is an optimum zone radius for each environment, as has been studied.
The proposed protocol is expected to outperform the rest of the protocols under study in large networks with nodes having different traffic rates and different degrees of mobility. Each node decides whether it is better to work in proactive or in reactive mode. Hence, every node adjusts the control overhead and the resource usage to its needs.

FUTURE WORK
This report has proposed a routing protocol for MANETs. Once the different existing routing protocols, as well as their advantages and disadvantages, were understood, the objective was to design a new protocol more suitable for networks with nodes moving freely. These networks may be either large or small. The traffic pattern was also taken into account in deciding the features of each node.
Since there was no time to make a quantitative study by means of simulation, only a qualitative analysis was done. Therefore, as future work, the protocol should be programmed, for example in NS-2, to carry out a performance study in comparison with the other protocols already implemented.

REFERENCES:

1. Stuart Kurkowski, Tracy Camp, Michael Colagrosso, MANET Simulation Studies: The Incredibles, MCS Department, Colorado School of Mines, Golden, Colorado, USA
2. I. Chlamtac, M. Conti and J. Liu, Mobile Ad Hoc Networking: Imperatives and Challenges, Ad Hoc Networks Journal, Vol. 1, No. 1, July 2003
3. T. Clausen, P. Jacquet, Optimized Link State Routing Protocol (OLSR), IETF RFC 3626, October 2003
4. C. Perkins, E.Belding-Royer, S. Das, Ad hoc On-Demand Distance Vector (AODV) Routing, IETF RFC 3561, July 2003
5. Zygmunt J. Haas, Senior Member, IEEE, and Marc R. Pearlman, Student Member, IEEE Determining the Optimal
Configuration for the Zone Routing Protocol
6. M. R. Pearlman and Z. J. Haas, Determining the Optimal Configuration for the Zone Routing Protocol, IEEE Journal on
Selected Areas in Communication, 17 (8). pp. 1395-1414, 1999
7. Basu Dev Shivahare, Charu Wahi, Shalini Shivhare, Proactive and Reactive Routing Protocols in Mobile Adhoc Network Using Routing Protocol, ISSN 2250-2459, Volume 2, Issue 3, March 2012
8. R. Ogier, F. Templin, M. Lewis Topology Dissemination Based on Reverse-Path Forwarding (TBRPF) Date: February
2004
9. Hemanth Narra, Yufei Cheng, Egemen K. Çetinkaya, Justin P. Rohrer and James P.G. Sterbenz, Destination-Sequenced Distance Vector (DSDV) Routing Protocol Implementation in ns-3
10. Ching-Chuan Chiang, Hsiao-Kuang Wu, Winston Liu, and Mario Gerla, Routing in Clustered Multihop, Mobile Wireless Networks with Fading Channel, Proceedings of IEEE Singapore International Conference on Networks (SICON 97), pages 197-211, April 1997
11. Available on http://www.cse.wustl.edu/~jain/cse574-08/ftp/ospf.pdf
12. Available on http://www.ietf.org/proceedings/51/I-D/draft-ietf-manet-tora-spec-04.txt
13. Available on http://en.wikipedia.org/wiki/Hazy_Sighted_Link_State_Routing_Protocol
14. Available on http://en.wikipedia.org/wiki/Routing_protocol

















Performance Analysis of Medical Images Using Fractal Image Compression
Akhil Singal¹, Rajni²
¹M.Tech Scholar, ECE, D.C.R.U.S.T, Murthal, Sonepat, Haryana, India
²Assistant Professor, ECE, D.C.R.U.S.T, Murthal, Sonepat, Haryana, India
E-mail- akhilsinglabm@gmail.com
Abstract
Fractal image compression is a new technique in the image compression field that uses a contractive transform whose fixed point is close to the original image. This broad field incorporates a very large number of coding schemes that have been explored and published. The paper gives an introduction and experimental results on image coding based on fractals and the different techniques that can be used for image compression.
Keywords
Fractals, image compression, iterated function system, image encoding, fractal theory
I. INTRODUCTION
With the advance of technology, the need for mass storage and fast communication links has grown. Storing images in less memory leads to a direct reduction in storage cost and faster data transmission. Images are stored on computers as collections of bits representing pixels, the points forming the picture elements. Since the human eye can process large amounts of information (some 8 million bits), many images must be stored in small sizes. Most data contains some amount of redundancy, which can be removed for storage and restored for recovery, but this alone does not lead to high compression ratios. Image compression techniques therefore reduce the number of bits required to store or transmit images without any appreciable loss of data.
The standard methods of image compression come in several varieties. The currently most used method relies on eliminating the high-frequency components of the signal by storing only the low-frequency components (the Discrete Cosine Transform algorithm). This method is used in the JPEG (still images), MPEG (motion video) and H.261 (video telephony on ISDN lines) compression algorithms.
The other technique is fractal compression. It seeks to exploit the affine redundancy present in typical images in order to achieve higher compression ratios while maintaining good image quality. Here, the image is divided into non-overlapping range blocks and overlapping domain blocks, where the dimensions of the domain blocks are greater. Then, for each range block, the most similar domain block is found using the mean squared error (MSE).
The paper is organized as follows. Section 2 briefs the fractal image compression method and explains the iterated function system technique. Section 3 presents the results, and Section 4 draws the conclusion.

II. Fractal Image Compression
1. Fractals
A fractal is a structure made of a number of patterns and forms that can occur at many different sizes within an image. The term fractal was used by B. Mandelbrot to describe the repetitive patterns and structures occurring in an image. The observed structures are very similar to each other with respect to size, orientation, and rotation or flip.
2. Fractal image compression
Let us imagine a photocopy machine that reduces the size of the image by half and copies the image three times [1]. Fig 1 shows the result of the photocopy machine.
Now feed the output back into the machine as input. We will observe that the copies converge, as in Fig 2. This image is called the attractor image, because any initial image will converge to it under repeated copying. This shows that the transformations are contractive in nature, i.e. if a transformation is applied to two points of any image, it must bring them closer together.
In practice the chosen transformation is of the form

    w(x, y) = (Ax + By + E, Cx + Dy + F)

where A = rotation; B, C = magnitude; D = scaling parameters; and E, F = parameters causing a linear translation of the point being operated upon.



Fig 1: A copy machine making reduced copies



Fig 2: First three copies generated by the copying machine.
3. Contractive Transform
A transform w is said to be contractive if, for any two points P1 and P2, the distance
    d(w(P1), w(P2)) < s·d(P1, P2)
for some s < 1, where d is the distance. A contractive map always brings points closer together.
4. Partitioned iterated function system
An iterated function system (IFS) is a collection of affine transforms w1, w2, ..., wn, where W is applied to the input image.
A partitioned IFS (PIFS) is an IFS in which the transforms are restricted to operate on specific subsets of the input image, the domain blocks. Applying the transforms to the domain blocks yields the range blocks.
First and foremost, an IFS will not work for an arbitrary image, as it is based on the self-similarity present in the image and its parts. In order to fractally compress an image, we have to identify the self-similarity in the input image, so that we can express it as a set of transforms.
In order to map a source image onto a desired image using an iterated function system, more than one transformation is often required, and each transform has its relative importance with respect to the others.
5. Algorithm for fractal image compression/ methodology used

1. Load an input image.
2. Partition the input image into non-overlapping square range blocks.
3. Introduce the domain blocks; the size of a domain block is twice the size of a range block.
4. Compute the fractal transform for each block.
5. For each range block, choose a domain block that resembles it with respect to symmetry and looks most
like the range block.
6. Compute the encoding parameters that satisfy the mapping.
7. Write out the compressed data in form of local IFS code (as code book).
8. Apply any of the lossless data compression algorithms to obtain a compressed image.
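Steps 2-6 amount to an exhaustive range/domain matching by mean squared error. The following Python sketch is a minimal illustration of that search (a grayscale image as a NumPy array, 4x4 range blocks, 8x8 domain blocks downsampled by 2x2 averaging, image dimensions assumed divisible by the block sizes); it omits the symmetry operations and the quantization of the encoding parameters.

    import numpy as np

    def encode(img, rb=4):
        # For each rb x rb range block, find the 2rb x 2rb domain block
        # (downsampled to rb x rb) with the smallest mean squared error.
        h, w = img.shape
        db = 2 * rb
        domains = []
        for y in range(0, h - db + 1, db):
            for x in range(0, w - db + 1, db):
                d = img[y:y+db, x:x+db].astype(float)
                d = d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))  # 2x2 averaging
                domains.append(((y, x), d))
        code = []
        for y in range(0, h, rb):
            for x in range(0, w, rb):
                r = img[y:y+rb, x:x+rb].astype(float)
                best = min(domains, key=lambda dom: np.mean((dom[1] - r) ** 2))
                code.append(((y, x), best[0]))  # range block -> best domain block
        return code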

The major problem with fractal-based coding is that the encoder is very complex. The complexity is due to the image block processing required for the range and domain blocks; much computation is also required in the process of mapping range blocks to domain blocks.
On the other hand, one of the most attractive features of fractal image compression is that its decoding process is very simple and easy. The decoder does its work exactly the same way as the fixed block encoder, and consumes much less time than conventional methods. The decoding time generally depends on the number of iterations performed by the decoder, and in this compression technique only a few iterations (about 3-5) are required to reach the fixed-point encoding.
III. Results
a) Test and result images


Fig 3 input image (chest CT scan)

Fig 4 block processed image 4*4

Fig 5 Domain block 8*8
Fig 6 mapped image of 4*4 and 8*8
b) Performance
Image     PSNR value for threshold TH = 0.1
1         23.26
2         23.27
3         23.255
Average   23.26


Table 1 PSNR value at threshold 0.1
Parameter   4*4      8*8      Transformed image
RMS         .7291    .7291    .7291
MEAN        .3371    .3401    .3401
SD          .3273    .3272    .3272
Table 2: Values of the modalities using different block results
The derived results are as expected. Table 1 shows the PSNR values of the entire system used for image processing; with these values of PSNR we obtained a good compression rate and large gains. Table 2 shows the different parameters obtained for the different intermediate results. The matched image shows the non-redundant, useful data in our image that can be used for decision making.
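For reference, the PSNR values reported in Table 1 are conventionally computed from the mean squared error between the original and the reconstructed image; a minimal sketch for 8-bit images:

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        # Peak signal-to-noise ratio in dB, assuming 8-bit pixel values.
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)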

IV. Conclusion
The field of fractal image compression is new, and there is no standardized approach to the technique. The main concept in this compression scheme is to use the IFS to reproduce images. By partitioning an image into [8 8] or [16 16] pixel blocks, the smaller portions are reproduced by fractal transformations. The speed-up in decoding time offered by fractal image compression makes it an interesting technology.
It is evident that there are many applications where the fractal nature of the image can be used for computational purposes.

REFERENCES:
1. Dietmar Saupe, Matthias Ruhl, Evolutionary Fractal Image Compression, IEEE International Conference on Image Processing, Lausanne, Sept. 1996
2. Raouf Hamzaoui, Luigi Grandi, Dietmar Saupe, Daniele Marini, Matthias Ruhl, Optimal Hierarchical Partitions for Fractal Image Compression, IEEE ICIP, Oct. 1998
3. Veenadevi .S.V , A.G.Ananth Fractal Image Compression Of Satellite Imageries International Journal
Of Computer Application Vol 30 Sept. 2011
4. Liangbin Zhang, Lifeng Xi, Hybrid Image Compression Using Fractal-Wavelet Prediction, Proceedings of 5th WSEAS Int. Conference on Information Security, Venice, Nov. 2006
5. Veenadevi.S.V Fractal Image Compression Using Quadtree Decomposition And Huffman Coding An
International Journal SIPIJ Vol3 No. , April 2012
6. Hitashi, Gaganpreet Kaur, Sugandha Sharma, Fractal Image Compression: A Review, IJARCSSE, Vol 2, Issue 2, Feb 2012
7. Pardeep Kr. B.P., Prathap C, Performance Evaluation Using Partitioned Iterated Function System, IOSR Journal of VLSI and Signal Processing (IOSR-JVSP), Vol 2, Issue 5, June 2013
8. Venkata Rama Prasad, Ramesh Babu, Fast Fractal Compression of Satellite and Medical Images Based on Domain-Range Entropy, Journal of Applied Computer Science & Mathematics, No. 9(4)/2010, Suceava
9. Mohammad R.N. Avanaki, Hamid Ahmadinejad, Reza Ebrahimpour Evaluation Of Pure Fractal And
Wavelet-fractal Compression Techniques ICGST-GVIP Journal Vol 9 Aug. 2009 .
10. Sumathi Poobal, G. Ravindran The Performance Of Fractal Image Compression On Different Imaging
Modalities Using Objective Quality Measures IJEST Vol 3 Jan 2011.
11. Ritu Raj Different Transforms For Image Compression IJECSE V2N2-763-772
12. Miraslav Galabov Fractal Image Compression International Conference On Computer System And
Technologies- Compsystech2003.
13. Dan Liu, Peter K. Jimack, A Survey of Parallel Algorithms for Fractal Image Compression
14. Jyoti Bhola, Simarnpreet Kaur Encoding Time Reduction For Wavelet Based Fractal Image
Compression Ijces Vol2 Issue 5 May 2012.
15. Michael F. Barnsley, Fractal Image Compression
16. Sunil Kumar, R.C. Jain, Low Complexity Fractal Based Image Compression Technique, IEEE 1997
17. Rehna V.J ,Jeya Kumar .M.K Hybrid Approaches To Image Coding- A Review IJACSA Vol 2 No. 7,
2011.
18. Akemi Galvez, Andres Iglesias, Setsuo Takato Ketpic Matlab Binding For Efficient Handling Of
Fractal Images International Journal Of Future Generation Communication And Networking Vol 3, No. 2
, June 2010









Future Forecast of Sale of Electronic Devices as E-waste
Aditya Kumar Gautam¹, Amanjot Singh²
¹ ² PEC University of Technology, Chandigarh, India
¹er.akgautam13@gmail.com, ²amanjotsingh271990@yahoo.com
Abstract Electrical and electronic equipment which becomes obsolete comes under the category of e-waste. The rapid pace of technical advancement in society creates problems in handling electronic or electrical devices even before they become obsolete. This paper presents a prediction of the growing trend in the amount of e-waste up to the year 2020. The amount of e-waste generated is increasing at a high rate annually and, if not treated properly, will impact not only the environment but also human lives. Future forecasting of the sale of electronic devices indicates the amount of e-waste to be generated in the upcoming years.
Keywords E-waste, ASSOCHAM, Hazards, MAIT, Forecast, Life span, Recycling.
INTRODUCTION
Waste electrical and electronic equipment (WEEE) describes discarded electrical or electronic equipment [1]. E-waste is a generic word for electrical or electronic devices which become obsolete. Any unwanted or broken device, or one dumped by its owner, comes under the category of e-waste, which includes mobile phones, tablets, computers, laptops, printers, UPSs, routers, televisions, refrigerators, microwave ovens, washing machines, music systems, and toys.
According to the latest report of ASSOCHAM (The Associated Chambers of Commerce and Industry of India), computer equipment accounts for almost 68% of e-waste material, followed by telecommunication equipment (12%), electrical equipment (8%) and medical equipment (7%). Other equipment, including household e-scrap, accounts for the remaining 5%. More than 70 per cent of e-waste contributors are government, public and private industries, while household waste contributes about 15 per cent. Televisions, refrigerators and washing machines make up the majority of e-waste, while computers account for another 20 per cent and mobile phones 2 per cent [14].
The raw materials required in the making of these machines contain not only hazardous substances but also valuable and precious metals. Electronic and electrical equipment uses metal, motor/compressor, cooling, plastic, insulation, glass, liquid crystal display, rubber, wiring/electrical, concrete, transformer, magnetron, textile, circuit board, fluorescent lamp, incandescent lamp, heating element, thermostat, FR/BFR-containing plastic, batteries, CFC/HCFC/HFC/HC, external electric cables, refractory ceramic fibres, radioactive substances and electrolytic capacitors. Typically, e-waste contains metals (40%), plastic (30%), and refractory oxides (30%). The metal scrap consists of copper (20%), iron (8%), tin (4%), nickel (2%), lead (2%), zinc (1%), silver (0.02%), gold (0.1%) and palladium (0.005%). Plastic components are polyethylene, polypropylene, polyesters and polycarbonates [2].
HAZARDS ASSOCIATED WITH E-WASTE
E-waste is hazardous to human health and to the environment if not handled properly; proper handling means a treatment process that follows the 3R rules: Reuse, Recycle, Reduce. Recycling of e-waste is necessary in order to reduce the e-waste generated every year. Landfill dumping of e-waste is not a proper solution: landfilling these waste products is toxic to the environment, leaching dangerous metals such as lead, cadmium and mercury into the surrounding soil and groundwater, and ultimately into humans.

A. Hazardous Substances

Substance | Used in | Hazards
Americium | smoke alarms | carcinogenic
BFRs | flame retardants in plastics | impaired development of the nervous system, thyroid problems and liver problems
Cadmium | light-sensitive resistors, corrosion-resistant alloys for marine and aviation environments, and nickel-cadmium batteries | can cause severe damage to the lungs and kidneys; can leach into the soil, harming microorganisms and disrupting the soil ecosystem
Lead | solder, CRT monitor glass, lead-acid batteries, some formulations of PVC | lead exposure includes impaired cognitive function, behavioural disturbances, attention deficits, hyperactivity, conduct problems and lower IQ
Mercury | fluorescent tubes, tilt switches (mechanical doorbells, thermostats) and flat-screen monitors | sensory impairment, dermatitis, memory loss and muscle weakness; environmental effects in animals include death, reduced fertility, and slower growth and development
Sulphur | lead-acid batteries | liver damage, kidney damage, heart damage, eye and throat irritation; when released into the environment, can create sulphuric acid
Perfluorooctanoic acid (PFOA) | non-stick cookware (PTFE), used as an antistatic additive in industrial applications, and found in electronics | hepatotoxicity, developmental toxicity, immune toxicity, hormonal effects and carcinogenic effects; studies have found increased maternal PFOA levels to be associated with an increased risk of spontaneous abortion (miscarriage) and stillbirth

Table 1. [3] Hazardous substances used in electronic & electrical equipment.
B. Non-Hazardous Substances

Substance | Used in
Aluminium | nearly all electronic goods using more than a few watts of power (heat sinks), electrolytic capacitors
Copper | copper wire, printed circuit board tracks, component leads
Germanium | 1950s-1960s transistorized electronics
Gold | connector plating, primarily in computer equipment
Iron | steel chassis, cases and fixings
Lithium | lithium-ion batteries
Nickel | nickel-cadmium batteries
Silicon | glass, transistors, ICs, printed circuit boards
Tin | solder, coatings on component leads
Zinc | plating for steel parts

Table 2. [3] Non-hazardous substances used in electronic & electrical equipment.
INDIA'S E-WASTE GROWTH
The amount of e-waste generated annually in India in the year 2007 was 3,82,979 MT, including 50,000 MT of imports [4]. As per the study conducted by ASSOCHAM, India is likely to generate e-waste to the extent of 15 lakh metric tonnes (MT) per annum by 2015, from the current level of 12.5 lakh MT per annum, growing at a compound annual growth rate (CAGR) of about 25% [14]. The Indian government passed the E-waste (Management & Handling) Rules, 2011 [13], which aim to channelize the e-waste generated in the country through recycling, recovery and reduction, and which came into effect from 1st May 2012. The reasons behind the annual increase in e-waste are higher per-capita income, the rate of change of technology and peer pressure, which all contribute to an increased rate of obsolescence of electronics. As new technology comes onto the market, the sale of electronic and electrical devices increases and the previous technology is displaced. The rapid change in technology keeps a device stable in the market only for a short span of time, due to which owners dump their devices even before they become obsolete.
According to the latest ASSOCHAM study on e-waste, released on 22nd April 2014, about 4.5 lakh child labourers in the age group of 10-14 are observed to be engaged in various e-waste activities in India, without adequate protection and safeguards, in various yards and recycling workshops [14]. Exposure can cause headache, irritability, nausea, vomiting and eye pain. Unauthorised recyclers may suffer liver, kidney and neurological disorders.
The study conducted by the MAIT (Manufacturers' Association for Information Technology) organization, published in its annual report 2012-13 [5], shows that PC and laptop sales have increased over the years, while other devices such as printers, UPSs and servers show minor growth. The following table contains the numbers of electronic devices sold (in million units) over the past years:
Year Desktop Laptop Printer UPS Server Total
2006-07 5.49 0.85 1.49 2.17 0.09 10.09
2007-08 5.52 1.82 1.60 1.62 0.12 10.68
2008-09 5.28 1.52 1.62 1.52 0.12 10.06
2009-10 5.53 2.51 2.50 2.32 0.10 12.96
2010-11 6.03 3.28 3.13 2.38 0.09 14.91
2011-12 6.71 4.02 2.97 2.55 0.09 16.34
2012-13 6.77 4.40 2.93 2.53 0.09 16.72
Table 3. Previous year sales data (in million) of Desktop, Laptop,Printers, UPS & Server. [5]
LIFE SPAN OF ELECTRONIC EQUIPMENT
The average life of the electronic equipment mentioned above, i.e. the time for which a device will operate before becoming obsolete, is normally different for each type of equipment.
Device Average Life span in Years
Desktop 5
Laptop 4
Printers 4
Server 4
UPS 5
Table 4. Device and their average life span. [6]
But in the Indian scenario the reality is different: the lifetime of a piece of equipment depends on a distribution around the equipment's average lifetime, because equipment is often reused or restored [7].
Equipment | Percentage operating for 2 / 3 / 4 / 5 / 6 years before becoming obsolete
Desktop[6] 25% 50% 25%
Laptop[6] 10% 50% 20% 20%
Printer[8] 20% 40% 30% 10%
UPS[9] 10% 20% 50% 20%
Server[10] 25% 40% 25% 10%
Table 5. Assumed percentage of each device that operates the given number of years before becoming obsolete.
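Combining Tables 3 and 5, the units turning into e-waste in a given year can be estimated by spreading each year's sales over the lifetime distribution. The Python sketch below illustrates this for desktops only, assuming the three desktop percentages of Table 5 fall on the 4-, 5- and 6-year columns (consistent with the 5-year average life in Table 4); this alignment is an assumption, not stated explicitly in the source.

    # Assumed fraction of desktops that become obsolete after n years of use.
    desktop_lifetime = {4: 0.25, 5: 0.50, 6: 0.25}

    # Desktop sales in million units, keyed by the starting year of the
    # fiscal year (from Table 3).
    desktop_sales = {2006: 5.49, 2007: 5.52, 2008: 5.28, 2009: 5.53,
                     2010: 6.03, 2011: 6.71, 2012: 6.77}

    def obsolete_in(year):
        # Million desktop units expected to turn into e-waste in `year`.
        return sum(desktop_sales.get(year - n, 0.0) * frac
                   for n, frac in desktop_lifetime.items())

    print(obsolete_in(2012))  # 2006-08 sales weighted by the 4/5/6-year shares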
METHOD USED FOR FUTURE PREDICTION
The method used for the future prediction of sale values of electronic equipment for the upcoming years is the FORECAST function provided in Microsoft Excel [11]. This function calculates, or predicts, a future value by using existing values: the predicted value is a y-value for a given x-value, where the known values are existing x-values and y-values, and the new value is predicted using linear regression. The most common applications of this function are predicting future sales, inventory requirements, or consumer trends.
Syntax: FORECAST(x, known_y's, known_x's)
where X Required. The data point for which you want to predict a value.
Known_y's Required. The dependent array or range of data.
Known_x's Required. The independent array or range of data.
Remarks:
If x is nonnumeric, FORECAST returns the #VALUE! error value. If known_y's and known_x's are empty or contain a different
number of data points, FORECAST returns the #N/A error value. If the variance of known_x's equals zero, then FORECAST returns
the #DIV/0! error value.
The equation for FORECAST is a + bx, where:

    b = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)²    and    a = ȳ - b·x̄

and x̄ and ȳ are the sample means AVERAGE(known_x's) and AVERAGE(known_y's).
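The same linear-regression prediction can be reproduced outside Excel directly from the formulas above. The Python sketch below is illustrative; the paper's Table 7 values may have been produced with a different x-coding or repeated application of the function, so exact agreement is not claimed.

    def forecast(x, known_y, known_x):
        # Linear-regression prediction equivalent to Excel's FORECAST(x, ys, xs).
        n = len(known_x)
        mx = sum(known_x) / n
        my = sum(known_y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(known_x, known_y))
             / sum((xi - mx) ** 2 for xi in known_x))
        a = my - b * mx
        return a + b * x

    # Desktop sales (million units) for 2006-07 .. 2012-13, coded as x = 1..7:
    years = [1, 2, 3, 4, 5, 6, 7]
    desktop = [5.49, 5.52, 5.28, 5.53, 6.03, 6.71, 6.77]
    print(forecast(8, desktop, years))  # predicted 2013-14 desktop sales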
FUTURE SALES VALUE
Using the above FORECAST function, the sale values of some electronic devices are calculated. The sales forecast for these devices indicates that the rate of sale is increasing annually. But according to the Toxics Link report, only 5% of e-waste is recycled annually in the formal sector [12], which is much lower than the rate at which e-waste increases. Recycling in formal sectors must be encouraged to achieve a higher recycling rate than the present one.
YEAR Desktop Laptop Printer UPS Server Totals
2013-14 6.798336 4.889413 3.009982 2.541475 0.097538 17.336744
2014-15 6.81991 5.362559 3.07198 2.442851 0.100206 17.797506
2015-16 6.836335 5.819978 3.120038 2.383009 0.10115 18.26051
2016-17 6.84884 6.262193 3.157291 2.346698 0.101484 18.716506
2017-18 6.858361 6.68971 3.186167 2.324666 0.101602 19.160506
2018-19 6.86561 7.103017 3.20855 2.311297 0.101644 19.590118
2019-20 6.871129 7.502587 3.2259 2.303186 0.101659 20.004461
Table 7. Future prediction of device sales data up to the year 2020.
The above results are plotted using the MS Excel graph tool.
Figure 1. Graph showing the increment in sale of each device (units in millions vs. year, 2013-14 to 2019-20; series: Desktop, Laptop, Printer, UPS, Server).
CONCLUSION
The future forecast for electronic equipment such as desktops, laptops, printers, UPSs and servers has been calculated: about 20.004461 million units are expected to be sold in the year 2019-20, and these will become e-waste in the years to come. As the units sold in one particular year become obsolete at the end of their average life, recycling is required at the same rate to tackle the e-waste at the right time. Local communities will also directly benefit from a clean environment and better health conditions, due to the reduction or even elimination of inadequately processed e-waste quantities. Moreover, advanced, scientific materials recovery is expected to increase employment rates, create additional markets for salvaged materials, and provide small-scale entrepreneurs with new profitable options.
REFERENCES:

1. Dr. SP. Victor and Shri S. Suresh Kumar, "Roadway for Sustainable Disposal of Scrap Computers in India," IEEE, 2011.
2. P. Gramatyka, R. Nowosielski, P. Sakiewicz, "Recycling of waste electrical and electronic equipment," Journal of Achievements in Materials and Manufacturing Engineering, February 2007.
3. Anil Kumar Saini and Abhishek Taneja, "Managing E-Waste in India: A Review," International Journal of Applied Engineering Research, Vol. 7, No. 11, 2012.
4. Manufacturers Association for Information Technology (MAIT) and Deutsche Gesellschaft fuer Technische Zusammenarbeit (GTZ), "E-waste assessment in India: a quantitative understanding of generation, disposal and recycling of e-waste in India," November 2007.
5. Manufacturers Association for Information Technology (MAIT), "MAIT Annual Report 2012-13," 2013.
6. Pamela Chawala and Neelu Jain, "Generation amount prediction of hazardous substances from computer waste: A case study on India," International Journal of Emerging Technology and Advanced Engineering, Volume 2, Issue 3, March 2012.
7. Emmanouil Dimitrakakis, Evangelos Gidarakos, Subhankar Basu, K.V. Rajeshwari, Rakesh Johri, Bernd Bilitewski and Matthias Schirmer, "Creation of Optimum Knowledge Bank on E-Waste Management in India," EuropeAid Asia Pro Eco programme, 2005.
8. Justin Bousquin, "Life Cycle Analysis in the Printing Industry: A Review," A Research Monograph of the Printing Industry Center at RIT, p. 13, May 2011.
9. JM Christopher, "Energy Star Uninterruptible Power Supply Specification Framework," JT Packard.
10. Randy Perry, Jean S. Bozman, Joseph C. Pucciarelli and Jed Scaramella, "The Cost of Retaining Aging IT Infrastructure," IDC White Paper, pp. 7-8, February 2012.
11. Microsoft, "FORECAST function," [online]. Available: http://office.microsoft.com/en-in/excel-help/forecast-function-HP010342532.aspx, 2014.
12. Toxics Link, "E-Waste: Designing Take Back Systems," A National Workshop Report, p. 3, December 2012.
13. MoEF (Ministry of Environment and Forests), "Draft for E-waste (Management and Handling) Rules, 2011" [online]. Available: http://moef.nic.in/downloads/rules-and-regulations/1035e_eng.pdf, 2011.
14. The Associated Chambers of Commerce and Industry of India (ASSOCHAM) and Frost & Sullivan, "Electronic Waste Management in India," 22 April 2014.












Comprehensive bandwidth by Dielectric Resonator Antenna
Pragya Soni¹
¹Scholar
E-mail- pragyasoni2009@gmail.com
Abstract A novel technique for producing enhanced bandwidth in the microwave and mm-wave regions of the spectrum is presented. A new design of a compact and broadband leaky-wave dielectric resonator antenna is proposed using a coaxial probe feed technique. Two different LWDRAs are designed and their characteristic behaviours are compared. Finally, a parametric study of the second antenna has been done. With the proper design, the resonant behaviour of the antenna is found, over which the leaky-wave DRA produces extended bandwidth.

Keywords DRA, LWDRA, NRD, LWRDRA, BL, BW, HFSS, ANSOFT.
INTRODUCTION

MODERN communication systems require wide bandwidth to support the demand for high data-rate transfer for various multimedia applications. To fulfil this requirement, most wireless mobile systems have to operate at millimeter-wave frequencies [1]-[2]. For ease of space allocation, it is highly desirable to have small-size, low-profile equipment. Hence, the antennas for modern wireless communication systems should be low in profile and efficient at high frequencies.
Dielectric resonator antennas (DRAs) have been the subject of research and investigation due to their highly desirable characteristics, such as small size, light weight and high efficiency in the microwave and mm-wave spectrum. The most popular shapes studied for practical antenna applications have been the cylindrical, rectangular and spherical dielectric resonator antennas, and many more different structures have been reported. The stacked DRA has also been tested [3]-[7], with a resulting increase in bandwidth that is much wider than the bandwidth of microstrip antennas.
Dielectric resonator antennas based on NRD guides have few publications. The technique of the NRD guided antenna was proposed by Yoneyama and Nishida [8]-[10]. Although it is classified as an open dielectric waveguide, it has the attractive feature of no radiation [11]-[13]. However, introducing a suitable perturbation to the NRD guide structure can produce leaky waves that propagate away from the dielectric slab to the open ends. This mechanism makes the NRD guide work as a leaky-wave antenna.
Several techniques have been proposed to generate leaky waves from an NRD guide, such as the foreshortened-sides-of-parallel-metal-plates technique, the asymmetric air gap technique, the trapezoidal dielectric slab technique, and many more.
A leaky-wave rectangular dielectric resonator antenna (LWRDRA) has been designed, excited by a coaxial probe feed mechanism. The LWDRA is parametrically studied and different approaches are presented to achieve an extended bandwidth of nearly 20% at -10 dB. The study also shows the dual-resonance behaviour of the LWRDRA at frequencies of 22.14 GHz and 24.97 GHz. The dependence of the bandwidth (BW) on the various parameters and geometries of the system shows that a higher BW with the desired radiation characteristics can be achieved with such dielectric resonator antennas based on NRD guides. Therefore, it is necessary to extend research and study on this topic, because it can provide an alternative device to achieve wider bandwidth characteristics.
ANTENNA DESIGN
The first proposed design uses a substrate of relative permittivity 2.4 and dimensions 210 mm x 152 mm x 0.6 mm. The upper surface of the substrate has a finite-conductivity layer; this has been done to minimize the back lobe (BL).
The rectangular dielectric resonator used has a relative permittivity of 8.2 and dimensions of 148 mm x 6 mm x 5.2 mm. The top surface of the dielectric resonator is perturbed by embedding a strip of thickness 0.8 mm and length equal to that of the dielectric material. The coaxial probe feed mechanism is used for the excitation of the LWDRA. In Fig. 1(a) the proposed antenna is presented in 3D view, and the proposed first antenna is shown in Fig. 1(b). In the second antenna design the perturbation is increased by modifying the embedded dielectric strip of the first design, removing material of thickness 0.1 mm from the middle of the upper face; this design, termed the second antenna, is shown in Fig. 1(c). The parametric study of the second antenna is done by varying the probe penetration length l into the dielectric material of the resonator, from -0.6 mm to 5.4 mm in steps of 0.6 mm. The antenna designed for the parametric study, showing the probe length l, is shown in Fig. 1(d).
SIMULATIONS
The designed antenna is simulated on the ANSOFT HFSS v10 simulation software and studied in detail to establish the relation between the probe position, the probe length and the height of the dielectric material. The simulation of the LWRDRA has been done in three stages: first, the first antenna was simulated and the results recorded; in the second stage, the second antenna was simulated and its results compared with those of the first antenna; finally, the parametric variation of the project variable l was done.








Figure 1(a) LWRDRA Designed & Simulated on HFSS 3D view, (b) First Antenna Designed, (c) Second Antenna Design and (d)
Second Antenna Design, with probe pin, l set as project variable for parametric study of Antenna
Simulation Result of First Antenna
The S11 vs frequency plot of the first antenna is shown in Fig. 2. It can be seen that the antenna is well matched at 9.89 GHz, having a return loss of -23.01 dB. It has a (-10 dB) bandwidth of 800 MHz, which corresponds to nearly 8.2% in the frequency range 9.3-10.1 GHz.
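The quoted percentage is the fractional bandwidth, i.e. the -10 dB band divided by its centre frequency; for 9.3-10.1 GHz this gives 0.8/9.7, or about 8.2% (the same formula reproduces the 16.54% of the second antenna below). A one-line check:

    f_low, f_high = 9.3, 10.1  # GHz, the -10 dB band of the first antenna
    print(100 * (f_high - f_low) / ((f_high + f_low) / 2))  # ~8.2 %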



Fig. 2 S11 vs Frequency Plot of First Antenna

Simulation Result of Second Antenna
The S11 vs frequency plot of the second antenna is shown in Fig. 3. This graph shows that the second antenna is well matched at a frequency of 24.25 GHz, having a return loss of -28.59 dB. It has a -10 dB bandwidth of 16.54% in the frequency range 22.34-26.37 GHz. This shows a 100% increase in bandwidth compared with the first antenna design.













Fig. 3 S11 vs Frequency Plot of Second Antenna
Simulation Result of the Parametric Study of the Second Antenna, Varying the Probe Penetration Length l into the Dielectric Resonator of the LWRDRA
As discussed in Section II, the parametric variation of the probe length l was done by setting l as a project variable. The simulated results are shown in Figs. 4(a), 4(b) and 4(c). During the variation of the probe length, for l = -0.6 mm to 0.6 mm, a decrease in bandwidth as well as an increase in return loss is observed; the matching frequency shifts to a higher value, and the frequency range over which the bandwidth is calculated also shifts upwards (shown in Fig. 4(a)). For l = 1.2 mm and l = 1.8 mm, there is an increase in bandwidth as well as in resonant frequency. For l = 2.4 mm to 3.6 mm, the resonant frequency tends to decrease and the bandwidth, along with the return loss, starts to increase. At l = 4.2 mm, we observe a dramatic decrease in the return loss to -31.60 dB; the resonant frequency
decreases to a value of 18.63 GHz, and the bandwidth calculated at -10 dB is 8.4%. At l = 4.8 mm, dual-resonance behaviour of the antenna is observed: the LWRDRA resonates in the frequency range 21.62-25.03 GHz, and the bandwidth obtained over this range is nearly 20%. The tabulated results of the parametric variation of probe length l are presented in Table I.
Fig. 5 shows the S11 vs frequency plot of the second antenna at the resonant condition. It is found that at l = 4.8 mm the LWRDRA acts as a dual-resonant leaky-wave antenna. The resonant frequencies of the LWRDRA are 22.14 GHz and 24.97 GHz. The calculated bandwidth for the design at dual resonance is 14.6% (-10 dB), with the overall -10 dB bandwidth of the LWRDRA being nearly 20%.














Fig. 4(a) S11 vs Frequency Plot of Second Antenna, showing parametric variation of probe length l inside the LWDRA: (a) l = -0.6 mm, (b) l = 0 mm, (c) l = 0.6 mm, (d) l = 1.2 mm














Fig. 4(b) S11 vs Frequency Plot of Second Antenna, showing parametric variation of probe length l inside the LWDRA: (a) l = 1.8 mm, (b) l = 2.4 mm, (c) l = 3.0 mm


















Fig. 4(c) S11 vs Frequency Plot of Second Antenna, showing parametric variation of probe length l inside the LWDRA: (a) l = 3.6 mm, (b) l = 4.2 mm, (c) l = 4.8 mm

















Fig. 5 S11 vs Frequency Plot of Second Antenna at l = 4.8 mm, showing the dual-resonance behaviour of the LWRDRA at -10 dB





CONCLUSION
A new, comprehensive dual-resonance LWRDRA has been designed. It is found that the first antenna, which has less perturbation on the dielectric resonator's upper surface, has a bandwidth of 8.2% (-10 dB) and good matching at 9.89 GHz; but when the perturbation is increased, as in the second antenna design, the bandwidth increases to 16.54% (-10 dB) and good matching occurs at 24.25 GHz. Thus, as the perturbation of the dielectric resonator of the LWDRA increases, the bandwidth of the system increases and the matching frequency shifts to a higher value. In the first antenna the perturbation of the upper surface of the dielectric resonator was half that of the second antenna, and the numerical results obtained show that the bandwidth and resonant frequency of the LWRDRA depend on the perturbation of the surface. The analysis suggests a relation of direct proportionality between the perturbation of the dielectric resonator surface and the bandwidth and resonant frequency of the LWRDRA. A further 16% increase in bandwidth is obtained by increasing the probe penetration into the dielectric material. The results of the parametric study of probe penetration length inside the dielectric resonator material demonstrate that the dual-resonance behaviour of the LWRDRA is obtained when the antenna is coaxially fed, the excitation position is at 3/4 of the distance from the center of the resonator, and the penetration length is equal to 0.8 times the height of the rectangular dielectric resonator. By applying this composite technique, extended bandwidth (BW) can be produced.
REFERENCES:
[1] R. Bekkers and J. Smith, Mobile Telecommunication Standards, Regulations and Applications, Artech House Publications,
Chapters 2, 1999.
[2] P. Bedell, Cellular /PCS Management, Mc-Graw Hill, chap. 27, 1999.
[3] A. G. Walsh, S. D. Young, and S. A. Long, An Investigation of Stacked and Embedded Cylindrical Dielectric Resonator
Antennas, IEEE Antennas Wireless Propag. Lett., vol. 5, pp.130-133, 2006.
[4] R.K. Mongia and A. Ittipiboon, Theoretical and Experimental Investigations on Rectangular Dielectric Resonator Antennas, IEEE Trans. Antennas Propag., vol. 45, no. 9, pp. 1348-1356, Sep. 1997.
[5] Debatosh Guha and Yahia M. M. Antar, New Half Hemispherical Dielectric Resonator Antenna for Broadband Monopole-Type Radiation, vol. 54, no. 12, pp. 3621-3627, Dec. 2006.
[6] N. Simons, R. Siushansiana, A. Ittipiboon, and M. Cuhaci, Design and Analysis of Multi Segment Dielectric Resonator
Antenna, IEEE Trans. Antenna Propag., vol. 48, pp.738-742, May 2000.
[7] A. Abumazwed, O. Ahmed and A. R. Sebak, Broadband Half-Cylindrical DRA for Future WLAN Applications, 3rd European Conference on Antennas and Propagation, EuCAP, pp. 389-392, 2009.
[8] T. Yoneyama and S. Nishida, Non-radiative Dielectric Wave Guide for Millimeter Wave Integrated Circuit, IEEE Trans,
Microwave Theory Tech. 29,pp. 1188-1192,1981.
[9] S. M. Shum and K. M. Luk, Stacked Annular Ring Dielectric Resonator Antenna Excited by Axi-symmetric Coaxial Probe, IEEE Trans. Antennas Propag., vol. 43, pp. 889-892, Aug. 1995.
[10] T.Yoneyama, Non radiative Dielectric Waveguide, Infrared and Millimeter Waves, vol.11, K. J. Button (ed)
[11] L. K. Hady, D. Kajfez and A. A. Kishk: Triple Mode Use of a Single Dielectric Resonator,IEEE Transactions on Antennas and
Propagation, Vol. 57, pp. 1325-1335; May 2009.
[12] E. M. O'Connor and S. A. Long, The History of the Development of the Dielectric Resonator Antenna, ICEAA International Conference on Electromagnetics in Advanced Applications, Turin.





















Development of Canned Cycle for CNC Milling Machine
S.N. Sheth¹, Prof. A.N. Rathour²
¹Scholar, Department of Mechanical Engineering, C.U. SHAH College of Engineering & Technology
²Professor, Department of Mechanical Engineering, C.U. SHAH College of Engineering & Technology

Abstract Despite the tremendous development in CNC programming facilities, linear and circular cuts parallel to the coordinate planes continue to be the standard motions of modern CNC machines. However, the increasing industrial demand for parts with intricate shapes cannot be satisfied with only these standard motions, and the proportion of parts not covered by them is growing. We have therefore decided to develop new canned cycles, with the help of macro (parametric) programming, for the hypocycloid, the combined epi-hypocycloid curve and, lastly, the Bezier surface. We found that these canned cycles would be useful for different applications such as the cycloidal speed reducer, the lobe pump rotor and surface milling, without requiring external arrangements like CAD modelling, a CNC interpolator, etc.

Keywords CANNED cycle, MACRO (parametric) programming, hypocycloid, epi-hypocycloid, Bezier surface, cycloidal speed reducer, lobe pump rotor.
reducer, lobe pump rotor.
1. INTRODUCTION
In product development, stages such as conceptual design, prototype making, CAD model construction, tooling design, etc. are involved. Either forward engineering or reverse engineering can be employed. In general, in forward engineering, a 3D solid model of a product is first designed on a CAD platform and the CAD information is obtained. Then the corresponding tool-path information is generated and the product is produced using an appropriate manufacturing process, such as CNC machining. In reverse engineering, however, geometrical information has to be obtained directly from a physical shape and converted into a computable format for other downstream processes. In most cases, a digitizing device is used to get the data, and the output is usually a set of scattered points (called a point cloud). Due to the characteristics of the digitizing device, point clouds can be divided into two main types: regular and irregular. In the former, intervals between adjacent digitizing points are identical, while in the latter they are not. Whichever type of point cloud is obtained, surface fitting techniques are usually applied and a surface model is constructed. Based on the surface model, the CAM tool-path information is generated accordingly. This procedure is termed an indirect machining process, and the technologies for fitting surfaces onto a point cloud are essential to it. However, the surface fitting process is usually time-consuming, although the tool-path information can also be derived directly from the point cloud. Both techniques (forward and reverse) require much external arrangement, and hence more money and time. Why not use the facilities already available internally? A CNC machine provides a facility for programming the tool path according to the required shape of the job, for which a combination of linear, circular and curved tool paths can be used [1].
The raw data could, in a simple case, consist of coordinates for points which should be connected with straight lines or second-degree curves fulfilling certain conditions of smoothness. Regardless of complexity, however, the translation should result in data for the complete path in the form of a piecewise representation of mathematical curves. The curve data obtained from the translator is then converted into small unit steps along the fixed axes; this process is known as interpolation [2].
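As an illustration of this translation step, the parametric equations of a hypocycloid can be reduced to the small linear moves that a standard G01 interpolator accepts. The Python sketch below is only an illustration under assumed parameters (a generic G-code dialect, fixed circle radii, no cutter compensation); it is not the macro-based canned cycle developed in this paper.

    import math

    def hypocycloid_gcode(R=30.0, r=7.5, steps=360, feed=200):
        # Approximate x = (R-r)cos t + r cos(((R-r)/r) t),
        #             y = (R-r)sin t - r sin(((R-r)/r) t)
        # with straight-line G01 segments.
        k = (R - r) / r
        lines = ["G90 G21"]  # absolute positioning, millimetres
        for i in range(steps + 1):
            t = 2 * math.pi * i / steps
            x = (R - r) * math.cos(t) + r * math.cos(k * t)
            y = (R - r) * math.sin(t) - r * math.sin(k * t)
            lines.append("G01 X%.3f Y%.3f F%d" % (x, y, feed))
        return "\n".join(lines)

    print(hypocycloid_gcode())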

2. TOOLS TO BE USED
Since the advent of the computer, the demand for complex shapes has been met by constantly upgrading the shape-generating capabilities of CNC systems and by developing sophisticated CAD/CAM processors capable of reducing complex geometries to long series of linear cuts. Thus, if a particular shape cannot be programmed directly with the standard CNC motions, it is first linearized with the help of a CAD/CAM system, which then encodes the result automatically into an executable NC program [3].
The development and incorporation of tool path generators into CNC systems, based on efficient and accurate curve tracing methods,
capable of satisfying the increasing industrial demand for machining complex-shaped parts, is an important goal in the field of computer-
aided manufacturing. Another frequent demand is met in the field of surface machining. Many sculptured surfaces, as in the cases of
molds, stamping dies, forging tools, rolling shapes, etc., are defined as revolved surfaces with free-form profiles. Despite the
particularity in the definition and the design of these surfaces, the available CAM systems deal with them as with free-form surfaces.
That is, a sequence of straight lines is used to approximate the part surface and voluminous data describing them must be sent to the
CNC machine. [4]
For boundaries formed at the intersection of higher-degree or free-form surfaces, an accurate solution of the boundary machining
problem has hitherto been considered to be beyond the power of traditional curve tracing methods, although related topics such as
general surface intersection methods, problems raised in surface/surface intersection and boundary representation methods have
been extensively addressed by several authors. [5]

Parametric programming, mathematical calculations with do-loop subroutines, macro capabilities and sophisticated canned cycles are
among the strengths of the latest CNC generation, lessening the user's dependence on CAD/CAM. Despite this tremendous development
in programming facilities, however, basic motions in a three-axis CNC machine continue to be executed only by 2D or three-
dimensional (3D) linear and 2D circular interpolators. Other types of motions, implemented by approximating the desired path with
straight-line segments, are accompanied by the drawbacks of acceleration-deceleration cycles on the machine; consequently,
machining inaccuracies arise while the machining time increases substantially.

2.1 An Introduction to CANNED CYCLE
Canned cycles provide a programming method for a CNC machine to accomplish repetitive machining operations using the G/M code
language. Essentially, canned cycles are a set of pre-programmed instructions permanently stored in the machine controller that
automate many of the required repetitive tasks. Their use eliminates the need for many lines of programming, reduces the
programming time and simplifies the whole programming process. [6] All CNC machining controls come with a set of helpful
machining canned cycles. These canned cycles are executed or called upon by entering a certain code together with any required
variable information. Once a canned cycle has been defined it remains active until cancelled; in other words, every time a block
has axis movement programmed, the machining operation of the canned cycle is active as well. Drilling, counter-boring, peck
drilling, pocket or slot machining are all examples of standard canned cycles (for example, the rough turning cycle G71). However,
the standard canned cycles are limited in number and capability, being unable to accommodate the increasing needs of applications
with complex geometries. Multiple functions are defined as a series of functions which allow a machining operation to be repeated
along a given path. The programmer will select the type of machining, which can be a canned cycle or a modal subroutine; these
functions must be defined every time they are used. The programming of canned cycles is readily available in the controllers of
NC/CNC machines for standard forms and geometries. However, for some unknown geometry for which canned cycles are not available in
a particular controller, new ones can be developed. Milling of a particular set of different boundaries in a regular manner demands
the development of a new canned cycle.

2.2 Macro programming
Macro programming provides a means of shortening code and doing repetitive tasks easily and quickly. All of the canned cycles in a
control are nothing but macros. Macros are also extremely useful for families of parts. Repetitive operations can be handled much
as with a subprogram, but a macro will allow you to change conditions without having to edit multiple lines of code. Variables are
used in place of coordinate numbers in the program. Variables are expressed in the program by a numerical value preceded by the
pound sign, e.g. #100, #500. Values can be input manually into a variable register; values can also be assigned via the NC program,
e.g. #510=1.5 or #100=#100+1. Variables can only hold mathematical quantities (numbers).

2.2.1 Program flow function
We need to understand some program flow (control) functions before we do our mathematics, because we need some of these
functions to quickly perform the mathematics. These are the three most commonly used:

IF [compare1 {function} compare2] GOTO [block]: The if-then statement is a conditional jump. If the statement is true then the
GOTO command is executed and a program jump is performed. If the statement is false, then program flow continues with the next
block. There are numerous comparison functions available; the most common are: EQ (==) equals; NE (<>) not equal; LT (<) less
than; GT (>) greater than. Example: N40 IF [#500 EQ 0] GOTO 900 - if variable #500 equals 0 then jump to block 900.

WHILE [compare1 {function} compare2] DO ... END: While the comparison is true, the blocks between DO and END are repeated, with
the comparison checked each time it loops. Example: N10 WHILE [#530 LT 8] DO N40 ... N50 ... N60 ... N70 END N80 ... So long as
#530 is less than 8, blocks N40-N70 are executed repeatedly; when #530 is no longer less than 8, program execution jumps to block
N80 and continues.

GOTO {block or label}: This is an absolute jump to a different block. In Siemens controls the commands are GOTOF and GOTOB,
depending on which way you want the control to search for the block (F = forward, B = backwards). Siemens also supports labels,
while the Fanuc style does not.

Mathematical functions: There are quite a few mathematical functions available to us for macro programming. Some controls offer
more extensive operation sets, but I'll stick with the Fanucese standard set for now. The standard operational order of
equations for Fanucese is: first, functions (trig functions, etc.); second, multiplication, division, AND; third, addition,
subtraction, OR, XOR, etc.

3. TOOL PATH GENERATION TECHNIQUE
Curves and surfaces are mathematically represented explicitly, implicitly or parametrically. Explicit representations of the form y =
f(x), although useful in many applications, are axis dependent, cannot adequately represent multiple-valued functions, and cannot be
used where a constraint involves an infinite derivative. Hence, these are little used in computer graphics or computer aided design.
Implicit representations of the form f(x, y) = 0 and f(x, y, z) = 0 for curves and surfaces, respectively, are capable of representing
multiple-valued functions but are still axis dependent. However, these have a variety of uses in computer graphics and computer aided
design.
3.1 Parametric Curves
Parametric curve representation of the form:

x = f(t);  y = g(t);  z = h(t) (3.1)
where t is a parameter, has extreme flexibility. These are axis independent and can represent multiple-valued functions and
infinite derivatives. These parametric curves have additional degrees of freedom compared to either explicit or implicit formulations.
To see the latter point, an explicit cubic equation is considered:

y = ax^3 + bx^2 + cx + d (3.1.1)

Here, four degrees of freedom exist, one for each of the four constant coefficients a, b, c, d.
Rewriting this equation in parametric form,

x(t) = αt^3 + βt^2 + γt + δ
y(t) = kt^3 + lt^2 + mt + n (3.1.2)

where c_1 ≤ t ≤ c_2.

Here, eight degrees of freedom exist, one for each of the eight constant coefficients α, β, γ, δ, k, l, m, n. Although not
necessary, the parameter range is frequently normalized to 0 ≤ t ≤ 1.

4. SELECTION OF CURVE
Trochoidal milling strategies, for instance, are efficient for industrial applications. A trochoidal tool path is defined as the
combination of a uniform circular motion with a uniform linear motion. As a result, the trajectory radius is continuous, which creates
favorable milling conditions in terms of tool loads and kinematics. Furthermore, full-immersion milling configurations are avoided.
Nevertheless, the tool path length is much higher compared to standard tool paths such as zigzag, because large portions lie outside
the material. Tool path interpolation also has a major influence on the process implementation. Thus, trochoidal tool paths are well
adapted to complex milling cases, such as hard material roughing. [7] Based on the geometric design of the cycloidal speed reducer [8]
and the lobe pump rotor, it was found that the hypotrochoidal curve is useful for the same.

4.1 EPITROCHOID - Definition and Parametric Representation
A curve traced by a point P fixed to a circle with radius r rolling along the outside of a larger, stationary circle with radius R at a
constant rate without slipping, where the point P is at distance h from the center C of the exterior circle.

Fig 4.1.1: Epi-trochoid curve Representation Fig 4.1.2: Epi-trochoid curve generation

The parametric form of the curve is

X = (R + r) cos θ - h cos( ((R + r)/r) θ ) (4.1.1)

Y = (R + r) sin θ - h sin( ((R + r)/r) θ ) (4.1.2)

where R = fixed circle radius (large),
r = rotating circle radius (small),
h = distance of the traced point P from the center C of the rolling circle.
The derivation of the equations is given in the Appendix.

4.2 HYPOTROCHOID - Definition and Parametric Representation

A curve traced by a point P fixed to a circle with radius r rolling along the inside of a larger, stationary circle with radius R at a
constant rate without slipping, where the point P is at distance h from the center C of the interior circle. The name hypotrochoid comes
from the Greek word hypo, which means under, and the Latin word trochus, which means hoop.



Fig 4.2.1: Hypo-trochoid curve Representation Fig 4.2.2: Hypo-trochoid curve generation

The parametric form of the curve is

X = (R - r) cos θ + h cos( ((R - r)/r) θ ) (4.2.1)

Y = (R - r) sin θ - h sin( ((R - r)/r) θ ) (4.2.2)

where R = fixed circle radius (large),
r = rotating circle radius (small),
h = distance of the traced point P from the center C of the rolling circle.
The derivation of the equations follows the same way as for the epi-trochoid in the Appendix.
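As a quick cross-check of equations (4.1.1)-(4.2.2), the short Python sketch below (our illustration, not part of the paper's toolchain) generates points on either curve; R, r, h and the step size are illustrative inputs:

    import math

    def trochoid_points(R, r, h, kind="epi", step_deg=1.0):
        # Epi-trochoid (rolls outside, Eqs. 4.1.1/4.1.2) or hypo-trochoid
        # (rolls inside, Eqs. 4.2.1/4.2.2).
        k = R + r if kind == "epi" else R - r
        sign = -1.0 if kind == "epi" else 1.0    # sign of the h*cos term
        pts = []
        for i in range(int(360.0 / step_deg) + 1):
            t = math.radians(i * step_deg)
            x = k * math.cos(t) + sign * h * math.cos((k / r) * t)
            y = k * math.sin(t) - h * math.sin((k / r) * t)
            pts.append((x, y))
        return pts

    # Emit linear moves along a hypo-trochoid with R = 40, r = 10, h = 10:
    for x, y in trochoid_points(40.0, 10.0, 10.0, kind="hypo", step_deg=5.0):
        print("G01 X%.3f Y%.3f" % (x, y))

A smaller step_deg gives a smoother path at the cost of more blocks; this is the same smoothness/accuracy trade-off noted for the step value in the Conclusions.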

4.3 Combination of EPITROCHOID and HYPOTROCHOID
It is the curve traced by two points P1 and P2 fixed to circles with radius r rolling along the outside and inside, respectively, of a
larger, stationary circle with radius R at a constant rate without slipping, where the points P1 and P2 are at distance h from the
centers C1 and C2 of the exterior and interior circles.

Fig 4.3.1: Combined Epi-trochoid and Hypo-trochoid curve Representation Fig 4.3.2: Combined Epi-trochoid and Hypo-trochoid curve generation








4.4 Alternation of EPITROCHOID and HYPOTROCHOID
It is the curve traced by two points P1 and P2 fixed to circles with radius r rolling alternately, for one complete revolution, along
the outside and inside, respectively, of a larger, stationary circle with radius R at a constant rate without slipping, where the
points P1 and P2 are at distance h from the centers C1 and C2 of the exterior and interior circles.


Fig 4.4.1: Alternate-Combined Epi-trochoid Fig 4.4.2: Alternate-Combined Epi-trochoid
and Hypo-trochoid curve Representation and Hypo-trochoid curve generation

5. PART PROGRAMMING AND SIMULATION
Finally, I found that the alternating cycle of the combined EPITROCHOID and HYPOTROCHOID is useful for designing the lobe pump rotor.
For that, I have decided to develop a new canned cycle with the help of macro programming.
Here, the cutting of a four-tooth lobe pump rotor is programmed and simulated.

Macro Program:
N10 G54 G90 M05
N12 G28 X0 Y0 Z0
N14 M06 T1
N16 G01 Z0 F(FEED)
N18 G41 D1
N20 #501=1 (EPI/HYPO BRANCH FLAG, 1 = EPI-TROCHOID SEGMENT)
N22 #502=1 (SEGMENT CHANGE-OVER FLAG)
N23 #11=0 (DEPTH PASS COUNTER)
N24 #1=-1.5 (ANGLE IN DEGREES)
N25 #2=1 (SEGMENT COUNTER)
N26 #21=(FIX CIRCLE DIAMETER)
N27 #22=(ROLLING CIRCLE DIAMETER)
N28 #24=(ARM LENGTH)
N29 #25=0 (Z LEVEL OF CURRENT PASS)
N30 #28=6 (NUMBER OF DEPTH PASSES)
N31 S(SPINDLE SPEED) M03
N32 #27=#21/#22
N34 #23=#21+#22
N36 #26=#23/#22
N40 #1=#1+1
N42 #31=#26*#1
N44 #10=COS[#31]
N46 #3=#10*#24
N48 #13=COS[#1]
N50 #4=#13*#23
N52 #14=SIN[#1]
N54 #5=#14*#23
N56 #9=SIN[#31]
N58 #6=#9*#24
N60 IF [#501 LT 0] GOTO68
N62 #7=[#4-#3]
N64 #8=[#5-#6]
N66 IF [#501 GT 0] GOTO72
N68 #7=[#4+#3]
N70 #8=[#5-#6]
N72 G01 X#7 Y#8 Z#25 F150
N74 IF [#501 LT 0] GOTO78
N76 IF [#1 LT [360/#27]*#2] GOTO34
N77 IF [#1 GT 360] GOTO100
N78 #501=-1
N80 #23=#21-#22
N82 IF [#502 LT 0] GOTO88
N84 #502=-1
N86 #2=#2+1
N88 IF [#1 LT [360/#27]*#2] GOTO36
N89 #501=1
N90 #502=1
N92 #2=#2+1
N96 IF [#1 GT 360] GOTO100
N98 IF [#1 LE [360/#27]*#2] GOTO34
N100
N101 #27=#21/#22
N102 #501=1
N104 #502=1
N106 #2=1
N108 #1=-1.5
N110 #11=#11+1
N111 #25=#25-0.5
N112 IF [#11 LT #28] GOTO34
N114 G01 Z5 F150
N116 G01 X0 Y0
N118 M30
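
To sanity-check the macro's geometry off the machine, the following Python sketch (our illustration, not part of the paper) emulates the alternating epi/hypo-trochoid segments that blocks N40-N98 trace. R, r and h are sample values only; note that the macro's #21/#22 hold diameters, whereas radii are used here:

    import math

    def lobe_rotor_path(R, r, h, step_deg=1.0):
        # Alternate between the epi-trochoid (k = R + r) and the
        # hypo-trochoid (k = R - r) every 360/(R/r) degrees, as the
        # macro's #501 flag and segment counter #2 do.
        seg = 360.0 / (R / r)
        pts = []
        for i in range(int(360.0 / step_deg) + 1):
            theta = i * step_deg
            epi = int(theta // seg) % 2 == 0    # even segments: epi branch
            k = R + r if epi else R - r
            t = math.radians(theta)
            phi = (k / r) * t                   # corresponds to #31 = #26*#1
            x = k * math.cos(t) + (-h if epi else h) * math.cos(phi)
            y = k * math.sin(t) - h * math.sin(phi)
            pts.append((x, y))
        return pts

    # Four-tooth rotor: take R/r = 2n = 8, e.g. R = 40, r = 5, arm h = 5
    print(len(lobe_rotor_path(40.0, 5.0, 5.0, step_deg=0.5)), "path points")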







Fig 5.1: Four Teeth lobe pump rotor Tool-Path Fig 5.2: Four Teeth lobe pump rotor


Fig 5.3: Four Teeth lobe pump rotor Tool Path Fig 5.4: Four Teeth lobe pump rotor

An unassigned G code can be adopted to specify tool paths and associated feedrate functions. A block citing the preparatory function G
contains one or more words that communicate information on the geometry and traversal rate of a curve. Many of the identifiers have
established meanings within conventional G code part programs, e.g., angular dimensions about the coordinate axes for A, B, C;
spindle speed for S; tool selection for T; feedrate for F; and secondary dimensions parallel to the coordinate axes for U, V, W. G
blocks are modal, i.e., they specify values that will remain in effect until superseded by words of the same type in subsequent G
blocks. [9]

6. CONCLUSIONS
In the present work, a CANNED Cycle FOR lobe pump rotor has been developed with the parametric programming technique using
Macro Programming. A tool path generation program has also been developed and Simulated in CIMCO Edit v7.0 simulation
software. Developed canned cycle is useful to produce any number of teeth of lobe pump rotor with input of some key parameter of
the curve.
- To make n teeth of lobe pump rotor take compression ratio (R/r)=2n
- Selection of radii is depends upon the size of rotor
- Selection of step value decides the smoothness and accuracy of the curve.
Developed CANNED Cycle can be called with G65 at any stage of part programming with input of some key parameter like radii R
and r, speed, feed etc

7. APPENDIX
Derivation of the trochoid parametric equations (the hypo-trochoid case is derived; the epi-trochoid follows in the same way):
As the point C travels through an angle θ, its x-coordinate is defined as
x = (R - r) cos θ, and its y-coordinate is defined as
y = (R - r) sin θ.
The radius of the circle created by the center point is (R - r). As the small circle goes in a circular path from 0 to 2π, it travels in a
counter-clockwise path around the inside of the large circle. However, the point P on the small circle rotates in a clockwise path
around the center point C.
As the center rotates through an angle θ, the point P rotates through an angle φ in the opposite direction. The point P travels in a
circular path about the center of the small circle and therefore has the parametric equations of a circle.
However, since φ goes clockwise,
x = h cos φ and
y = -h sin φ.
Since the inner circle rolls along the inside of the stationary circle without slipping, the arc length rφ must be equal to the arc
length Rθ:
rφ = Rθ
φ = (R/r) θ
However, since the point P rotates about the circle traced by the center of the small circle, which has radius (R - r), φ is equal to
((R - r)/r) θ. Therefore, the equations for the hypotrochoid are
X = (R - r) cos θ + h cos( ((R - r)/r) θ )
Y = (R - r) sin θ - h sin( ((R - r)/r) θ )


7.1 OTHER DEVELOPED CANNED CYCLES

7.1.1 HYPOTROCHOID POCKET:
The most popular method for machining a two-dimensional (2D) pocket is the contour-parallel offset method. 'Contour-parallel
machining' is used to refer to pocketing with contour-parallel tool paths. However, we should note that the productivity of contour-
parallel machining is mainly dependent on the tool-path interval, because an increase in the tool-path interval brings a decrease in the
total length of the tool paths. For die-cavity pocketing, contour-parallel machining is the most popular machining strategy. [10]
The generation of offset curves is a fundamental and well-known problem in CNC machining. The programmed path is the
trajectory of the cutter center (CNC milling) or the center of the cutter's rounded tip (CNC turning), while the machined contour is the
envelope of successive cutter positions. The programmed path is thus offset from the given contour by the cutter radius or the cutter's
tip radius, and we are faced with the problem of generating the offset as a real-time trajectory. [11]


Fig 7.1.1: Hypo-trochoid Pocket

7.1.2 BEZIER SURFACE
For generating a smooth movement in NC machining, parametric curve interpolators have been developed since the 1990s. The input of a
parametric interpolator is a programmed tool path associated with an off-line or real-time scheduled feedrate, and from this a sequence
of reference commands for the servo-controller can be output to coordinate the motion of each drive axis simultaneously. [12] But
without requiring external arrangements like CAD modeling, a CNC interpolator, etc., we can generate the required tool path. Parametric
programming helps to generate the Bezier surface, and for that I have developed a canned cycle for the same.


Fig 7.1.2: Bezier Surface



REFERENCES

1. Biplab Kanti Haldar: "CNC Tool Path Generation For Free Form Surface Machining", thesis submitted in the Faculty of
Engineering and Technology, Jadavpur University, 2010.
2. Per E. Danielsson: "Incremental Curve Generation", IEEE Transactions on Computers, Vol. C-19, No. 9, September 1970.
3. Sotiris L. Omirou: "Space curve interpolation for CNC machines", Journal of Materials Processing Technology 141 (2003)
343-350, Elsevier.
4. Sotiris L. Omirou, Antigoni K. Barouni: "Integration of new programming capabilities into a CNC milling system", Robotics
and Computer-Integrated Manufacturing 21 (2005) 518-527, Elsevier.
5. Sotiris L. Omirou: "A CNC interpolation algorithm for boundary machining", Robotics and Computer-Integrated
Manufacturing 20 (2004) 255-264, Elsevier.
6. Sotiris L. Omirou, Andreas C. Nearchou: "An epitrochoidal pocket - a new canned cycle for CNC milling machines",
Robotics and Computer-Integrated Manufacturing 25 (2009) 73-80, Elsevier.
7. Matthieu Rauch, Emmanuel Duc, Jean-Yves Hascoet: "Improving trochoidal tool paths generation and implementation using
process constraints modelling", International Journal of Machine Tools & Manufacture 49 (2009) 375-383, Elsevier.
8. Yii-Wen Hwang, Chiu-Fan Hsieh: "Geometry Design and Analysis for Trochoidal-type Speed Reducers: with Conjugate
Envelopes", No. 05-CSME-59, B.LC. Accession 2911, 2006.
9. Rida T. Farouki, Jairam Manjunathaiah, Guo-Feng Yuan: "G codes for the specification of Pythagorean-hodograph tool
paths and associated feedrate functions on open-architecture CNC machines", International Journal of Machine Tools &
Manufacture 39 (1999) 123-142, Elsevier.
10. S.C. Park, B.K. Choi: "Uncut free pocketing tool-paths generation using pair-wise offset algorithm", Computer-Aided Design
33 (2001) 739-746, Elsevier.
11. Sotiris L. Omirou: "A locus tracing algorithm for cutter offsetting in CNC machining", Robotics and Computer-Integrated
Manufacturing 20 (2004) 49-55, Elsevier.
12. Yuwen Sun, Yang Zhao, Yurong Bao, Dongming Guo: "A novel adaptive-feedrate interpolation method for NURBS tool path with
drive constraints", International Journal of Machine Tools & Manufacture 77 (2014) 74-81, Elsevier.


















A Study on Automobile Air-Conditioning Based on Absorption Refrigeration
System Using Exhaust Heat of a Vehicle
S. S. Mathapati¹, Mudit Gupta², Sagar Dalimkar²
¹Assistant Professor, Department of Mechanical Engineering, Sinhgad Institute of Technology, Lonavala, Maharashtra
²Scholar, Department of Mechanical Engineering, Sinhgad Institute of Technology, Lonavala, Maharashtra
E-mail: muditgupta9210@gmail.com
ABSTRACT - Energy from the exhaust of an internal combustion engine is used to power an absorption refrigeration system to air-
condition an ordinary passenger vehicle. A feasibility study has been done to find out the energy available from the exhaust gas of a
vehicle. The cooling load for the automobile has been estimated. In this paper a theoretical evaluation of a LiBr-water based absorption
refrigeration system is presented. Mathematical modeling of the system using the EES software is done; also, the effects on the COP of
the system of changes in different parameters have been studied.

Keywords: Automobile Exhaust, Absorption Refrigeration System, Internal Combustion Engine, EES

INTRODUCTION
In a vapour absorption refrigeration system, a physiochemical process replaces the mechanical process of the vapour
compression system by using energy in the form of heat rather than mechanical work. The main advantage of this system
lies in the possibility of utilizing energy from the exhaust gas of the vehicle and also in using an eco-friendly refrigerant
such as water. The vapour absorption system has many favorable characteristics; typically a much smaller electrical input is
required to drive the solution pump as compared to the power requirement of the compressor in the vapour compression system.
Also, fewer moving parts mean a lower noise level, higher reliability and improved durability in the vapour absorption system.
METHODOLOGY
In the vapour absorption refrigeration system shown in FIG 1, the compressor is replaced by an absorber, a pump, a generator and a
pressure-reducing valve. These components in the system perform the same function as the compressor in a VCR system. The
refrigerant vapour from the evaporator is drawn into the absorber where it is absorbed by the weak solution of refrigerant, forming
a strong solution. This strong solution is pumped to the generator where it is heated utilizing the exhaust heat of the vehicle. During
the heating process the refrigerant vapour is driven off the solution and enters the condenser where it is liquefied. The liquid
refrigerant then flows into the evaporator and the cycle is completed.

FIG [1]

MEASURED EXHAUST USEFUL HEAT AND HEAT LOAD CALCULATION
To generate baseline data, the engine is allowed to run at different throttle positions (one-fourth and half) considering engine speed as
the running parameter. The mass flow rate of air, mass flow rate of fuel and temperature of exhaust gas are measured as given in Table
1. For measuring the required data, a plenum chamber (1 m³) with a circular orifice of 32 mm diameter, an inclined tube manometer, a
burette for petrol measurement and a thermocouple for exhaust temperature measurement were installed on the engine. The determination
of the actual load becomes very difficult in vehicle air conditioning because of the variation of the load with the climatic conditions
to which the vehicle is exposed during the course of a long journey. The cooling load of a typical automobile is therefore considered
at steady-state conditions. The cooling capacity is affected by outdoor infiltration into the vehicle and heat gain through panels,
roofs, floors, etc. The cooling load considered in this analysis is given in Table 2. The table shows that the heat load inside the
traveler is about 2 kW. Therefore, a 2 kW air conditioning unit is sufficient to fulfill the cooling requirement.
Throttle position | Eng. speed (rpm) | Air pr. (mm of H2O) | Time for cons. of 25 cc of fuel (sec) | Exh. temp. (°C) | Mass of fuel (kg/s x 10^-5) | Mass of air (kg/s x 10^-4) | Exh. useful energy (kW)
1/4 | 3500 | 7.4 | 40 | 622 | 46 | 64 | 3.98
1/4 | 3000 | 7.9 | 57 | 605 | 32 | 67 | 3.91
1/4 | 2500 | 7.2 | 48 | 566 | 38 | 64 | 3.50
1/4 | 2000 | 5.6 | 42 | 623 | 44 | 56 | 3.49
1/4 | 1500 | 4.9 | 41 | 502 | 45 | 52 | 3.05
Half | 3500 | 14.8 | 34 | 669 | 57 | 91 | 6.02
Half | 3000 | 15.9 | 29 | 615 | 63 | 94 | 5.74
Half | 2500 | 12.3 | 24 | 648 | 71 | 83 | 5.47
Half | 2000 | 9.4 | 32 | 595 | 57 | 73 | 4.51
Half | 1500 | 6.8 | 39 | 508 | 47 | 62 | 3.61

TABLE [1]
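
As a rough consistency check on Table 1 (this check is ours, not the paper's), the exhaust useful energy is well approximated by Q = (m_air + m_fuel) * cp * (T_exh - T_ref). The Python sketch below assumes a petrol density of about 0.74 kg/l, an exhaust cp of about 1.005 kJ/(kg K) and a reference temperature of about 45 °C (none of these constants are stated in the paper) and reproduces most rows of the last column closely:

    RHO_FUEL = 0.74e-3    # kg per cc of petrol (assumed)
    CP_EXH = 1.005        # kJ/(kg K), exhaust specific heat (assumed)
    T_REF = 45.0          # deg C, reference/ambient temperature (assumed)

    def fuel_mass_flow(cc, seconds):
        # Mass flow rate of fuel from the timed burette reading, kg/s
        return cc * RHO_FUEL / seconds

    def exhaust_useful_energy(m_fuel, m_air, t_exh):
        # Q = (m_air + m_fuel) * cp * (T_exh - T_ref), in kW
        return (m_air + m_fuel) * CP_EXH * (t_exh - T_REF)

    # First row of Table 1 (1/4 throttle, 3500 rpm):
    m_f = fuel_mass_flow(25.0, 40.0)               # ~46e-5 kg/s, as tabulated
    print("Q = %.2f kW" % exhaust_useful_energy(m_f, 64e-4, 622.0))  # ~3.98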
Heat load inside the vehicle is calculated as follows. We have considered the passengers in the traveler and calculated the following:

Radiation Load:
Q_rad = S * τ * I_rad * cos θ

Ambient Load:
Q_amb = S * U * (T_s - T_i)

Ventilation Load:
Q_ven = m_ven * (e_o - e_i)

Metabolic Load:
Q_meta = M * A

Overall Heat Load:
Q_AC = Q_rad + Q_amb + Q_ven + Q_meta


Heat Load | Amount of Heat (kJ/hr)
Radiation Load | 85.83
Ambient Load | 422.83
Ventilation Load | 59.54
Metabolic Load | 1356.23
Total | 1924.43 (kJ/hr) or 1.9 kW

TABLE [2]

MODELLING OF ABSORPTION SYSTEM
The following assumptions have been made to model the system.
1. Generator and condenser as well as evaporator and absorber are under same pressure.
2. There are no pressure changes except through the flow restrictors and the pump.
3. Refrigerant vapor leaving the evaporator is saturated pure water.
4. Liquid refrigerant leaving the condenser is saturated.
5. Strong solution leaving the generator is boiling.
6. Weak solution leaving the absorber is saturated.
7. No liquid carryover from evaporator.
8. Flow restrictors are adiabatic.
9. Pump is isentropic.
10. No jacket heat losses


FIG [2]

1st point is saturated water vapor;
2nd point is superheated water vapor;
3rd point is saturated liquid water;
4th point is a vapor-liquid water state;
5th point is saturated liquid solution;
6th point is sub-cooled liquid solution (at P_low);
7th point is sub-cooled liquid solution (at P_high);
8th point is saturated liquid solution;
9th point is sub-cooled liquid solution;
10th point is a vapor-liquid solution state.

2 kW Aqueous Lithium Bromide Absorption System

Assumptions taken:
Condenser temperature = 38 °C
Evaporator temperature = 7 °C
Absorber temperature = 37 °C
Generator temperature = 85 °C
Pressure values are taken from the p-h chart of water as refrigerant for a condensing temperature of 35 °C and an evaporating
temperature of 7 °C:
P_E = 1 kPa
P_C = 5.696 kPa

1. For Evaporator
Process Cycle 4-1
Heat load on the evaporator: Q_E = 2 kW
Q_E = m_R * (h_1 - h_4)
For the defined system,
m_R = m_1 = m_4 = 0.000844 kg/s

2. For Generator
Process Cycle 7-2
Mass balancing of the weak and strong solutions:
m_7 = m_2 + m_8
m_7 * x_7 = m_8 * x_8
m_7 = 0.0101 kg/s
m_8 = 0.00928 kg/s
m_2 = 0.000844 kg/s

Energy balance on the generator:
Q_g = m_2 * h_2 + m_8 * h_8 - m_7 * h_7
With m_2 = 0.0909 * m_8 and m_7 = 1.0909 * m_8,
Q_g = 0.0909 * m_8 * h_2 + m_8 * h_8 - 1.0909 * m_8 * h_7
Q_g = 2.725 kW

For the defined system,
m_8 = m_9 = m_10 = 0.00928 kg/s
m_7 = m_6 = m_5 = 0.01010 kg/s
m_2 = m_3 = m_4 = m_1 = 0.000844 kg/s

3. For Condenser
Process Cycle 2-3
Heat rejected by the condenser: Q_c = m_2 * (h_2 - h_3)
Q_c = 2.113 kW

4. For Absorber
Process Cycle 1-5
Heat rejected by the absorber: Q_a = m_1 * h_1 + m_10 * h_10 - m_5 * h_5
Q_a = 2.567 kW
5. For Solution Heat Exchanger
Process Cycle 6, 9 - 7, 8
Heat transfer: Q_SHEX = m_5 * (h_7 - h_6)
Q_SHEX = 0.416 kW
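
The mass and energy balances above are straightforward to re-check outside EES. The short Python fragment below (ours, not from the paper) reproduces the flow split and shows where the enthalpies h2, h3, h7, h8 (which the paper obtains from EES property routines and does not list) would enter:

    m_R = 0.000844        # kg/s, refrigerant flow (m1 = m2 = m3 = m4)
    m7 = 0.0101           # kg/s, solution pumped to the generator
    m8 = m7 - m_R         # mass balance m7 = m2 + m8

    def generator_duty(h2, h7, h8):
        # Q_g = m2*h2 + m8*h8 - m7*h7 (energy balance on the generator), kW
        return m_R * h2 + m8 * h8 - m7 * h7

    def condenser_duty(h2, h3):
        # Q_c = m2*(h2 - h3), kW, with enthalpies in kJ/kg
        return m_R * (h2 - h3)

    print("m8 = %.5f kg/s" % m8)   # prints ~0.00926, vs 0.00928 in the text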

SYSTEM ANALYSIS

The system analysis is based on certain fixed parameters which are shown in Table No. 3. Using these fixed parameters, the COP, mass flow
rate of refrigerant, mass flow rate of strong solution, mass flow rate of weak solution, and heat transfer in the generator, condenser and
absorber are found using the EES software, and the effect of the generator temperature, evaporator temperature, condenser temperature
and absorber temperature on the system COP is analysed using the EES software.

INPUT PARAMETERS

T_G = Generator Temperature (°C): 85
T_E = Evaporator Temperature (°C): 7
T_C = Condenser Temperature (°C): 35
T_A = Absorber Temperature (°C): 37
Q_E = Load (W): 1720

Table No.3

EES PROGRAM


CONCLUSION


As per the calculations of heat load and heat availability obtained from a vehicle, a 2 kW system is feasible to provide air conditioning
in a vehicle. From the system analysis it is seen that the COP of the system increases with increase in generator temperature and
evaporator temperature, but it reduces with increase in condenser and absorber temperature. There is an optimum value of generator
temperature above which the COP reduces; the COP also increases with increase in the mass flow rate of water (m_w).

REFERENCES:
[1] Ilhami Horuz, "An alternative road transport refrigeration", Journal of Engineering and Environmental Sciences, 22 (1998), 2011-222.
[2] Harish Tiwari, Dr. G. V. Parishwad, "Adsorption refrigeration system for cabin cooling of trucks", International Journal of Emerging
Technology and Advanced Engineering, Oct (2012), Vol. 2, Issue 10.
[3] Satha Aphornratana, Thanarath Sriveerakul, "Experimental studies of a single effect absorption refrigerator using aqueous lithium-
bromide: Effect of operating condition to system performance", Science Direct, 30 Aug (2007), 658-669.
[4] ASHRAE Fundamental Handbook (SI): 2001, Atlanta, USA.
[5] ASHRAE Handbook of Fundamentals, 1997.
[6] K. K. Datta Gupta, D. N. Basu and S. Chakravati, "Optimization study of a solar-operated lithium bromide-water cooling system with
flat plate collectors".
[7] Mohammad Ali Fayazbakhsh and Majid Bahrami, "Comprehensive modeling of vehicle air conditioning loads using heat balance method",
SAE International, 04/08/2013, 2013-01-1507.
[8] Shah Alam, "A proposed model for utilizing exhaust heat to run automobile air conditioner", Joint International Conference on
Sustainable Energy and Environment (SEE 2006), 21-23 Nov (2006), E-011(P).
[9] Florides, G. A., Kalogirou, S. A., Tassou, S. A., Wrobel, L. C., "Design and construction of a LiBr-water absorption machine", Energy
Conversion and Management 44 (2003) 2483-2508.
[10] K. Balaji, R. Senthil Kumar, "Study of vapour absorption system using waste heat in sugar industry", IOSR Journal of Engineering,
Aug (2012), 2250-3021.
[11] G. Vicatos, "A car air-conditioning system based on an absorption refrigeration cycle using energy from exhaust gas of an internal
combustion engine", University of Cape Town.
[12] Guozhen Xie, "Improvement of the performance for an absorption system with lithium bromide water as refrigerant by increasing
absorption pressure", ICEBO (2006), HVAC Technologies for Energy Efficiency, Vol. IV 10-4.










The Transition of Phrase based to Factored based Translation for Tamil
language in SMT Systems
Dr. Ananthi Sheshasaayee¹, Angela Deepa. V. R²
¹Research Supervisor, Department of Computer Science & Application, Quaid-E-Millath Government College for Women (Autonomous), Chennai
²Research Scholar (PG), Department of Computer Science & Application, Quaid-E-Millath Government College for Women (Autonomous), Chennai
E-mail: ananthi.research@gmail.com
Abstract - Machine translation is one of the major and most active areas of natural language processing. Machine
translation (MT) is the automatic translation of one natural language into another using computer-generated instructions. The utility
and power of statistical machine translation (SMT) seem destined to change our technological society in profound and
fundamental ways. The current state-of-the-art approach to statistical machine translation, the so-called phrase-based model, makes
limited use of linguistic information. For a highly agglutinative language like Tamil, developing linguistic tools and a machine
translation system is a challenging task. Therefore, the phrase-based approach is extended to the factored based approach by tightly
integrating additional annotation information at the word level, which encompasses not only tokens but a vector of factors
representing the levels of annotation. The additional linguistic features enabled in the tool will increase the accuracy of SMT
systems. This paper motivates the use of factored translation models in statistical machine translation systems for more reliable
translation of highly morphological languages like Tamil.

Keywords - Statistical machine translation, Automata theory, Artificial intelligence, Data structure, Morphology, Linguistics,
Agglutinative language
INTRODUCTION
The performance of SMT systems for the English to Tamil language pair is affected by two main things: (i) the amount of parallel
data, and (ii) the language difference owing to morphological richness and word order differences due to syntactic divergence [7]. The
availability of parallel data for the English to Tamil pair is limited. The difference in word order and morphological complexity between
the English and Tamil languages leads to intricacy in building the translation models. In SMT systems the current state-of-the-art
approach for the translation model is the phrase-based model. To translate a morphologically rich language like Tamil there is a need to
integrate linguistic information at the word level in the translation model, which includes not only tokens but a vector of factors
representing the levels of annotation. This leads to a new approach, termed the factored based approach.
This paper motivates the importance of using the factored based approach in the translation models of SMT systems for translating
English and Tamil language pairs by integrating it with the state-of-the-art phrase based models. The remaining part of the paper is
organized as follows: Section 2 discusses the various SMT systems built for English-Tamil language pairs. Section 3 portrays
the role of the phrase based approach in translation models. In Sections 4, 5 and 6 the motivation for and an overview of the present
model is given. The paper is concluded in Section 7.
LITERATURE SURVEY

A statistical machine translation system for Sinhala to Tamil was developed by Ruvan Weerasinghe [1]. In this method a
small trilingual parallel corpus was formed which contains news events, culture and politics of Sri Lanka. A semi-automatic
approach was employed to perform sentence boundary detection. The sentences were aligned manually and a total of 4064 sentences
of Sinhala and Tamil were used in this system.
A statistical machine translation system by Ulrich Germann (2001) [18] used a small Tamil-English parallel corpus of about
100,000 words on the Tamil side, built using several translators. As a part of this work, a simple text stemmer for Tamil was built
based on Tamil inflection tables, which helped to increase the performance of the system.
In 2002 Fredric C. Gey [16] assembled a corpus of Tamil news stories from the Thinaboomi website which contains nearly 3000 news
stories in the Tamil language. This corpus has been used to develop statistical machine translation by the Information Sciences
Institute, one of the leading machine translation research organizations.
An interactive approach to developing a web-based English to Tamil machine translation system was proposed by Vasu Renganathan [17].
Google developed a web-based machine translation engine for English to Tamil which is able to identify the source
language automatically.

TRANSLATION MODELS
(i) Word based translation
In the word based translation model [19], words are the translation elements. Word based translation models rely on high fertility rates
and the mapping of a single word to multiple words; fertility here describes how many target words are produced from one source word.
The premise of statistical machine translation is that every sentence t in a target language is a possible translation of a given
sentence e in a source language. Based on the bilingual text corpus and the probability assigned to each sentence, the possible
translation of a given sentence is estimated. Therefore, considering words as the translation units with the applied probabilities,
the first statistical machine translation models, based on words, were built.
(ii) Phrase based translation

For better translation between language pairs, the translation of words is replaced with the translation of phrases. Phrase based
translation models [15] aim to translate whole sequences of words based on their length. Phrases here are not merely linguistic ones
but sequences of words found using statistical methods from corpora. The computation of the translation probabilities takes the
behavior of the phrase into consideration.

Steps involved in the phrase based translation process:

a. The sentence from the source language (E) is grouped into phrases ē - arbitrary contiguous sequences of words.

b. Each phrase ē is translated into a phrase t̄ of the Tamil language.
c. The phrases in the source language are reordered according to the target language (T).

The translation model aims to assign a probability for a given Tamil sentence T and an English sentence E such that T generates E.
The probability model for phrase based translation depends on a translation probability and a distortion probability.
The translation probability for generating the source phrase ē_i from the target phrase t̄_i is φ(ē_i | t̄_i). The distortion
probability d is responsible for the reordering of the source phrases; it is the probability that two consecutive Tamil phrases are
separated in English by an English span of a particular length.
The distortion is parameterized by d(a_i - b_{i-1}), where a_i = start position of the source English phrase generated by the i-th
Tamil phrase, and b_{i-1} = end position of the source English phrase generated by the (i-1)-th Tamil phrase. Calculating the
distortion probabilities thus handles the difference in the order of the words in phrase based models.
The following equation (1) gives the translation model for phrase based machine translation:

P(E | T) = Π_i φ(ē_i | t̄_i) * d(a_i - b_{i-1}) ..(1)
Though phrase based models produce better translations than word-based models, a novel approach is needed for translating longer
units. The lack of linguistic information in phrase based models tends to decrease the translation quality of the systems.
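
To make equation (1) concrete, a toy Python scoring function in the spirit of the phrase based model is sketched below; the phrase table entries (transliterated) and the exponential distortion d(x) = alpha^|x - 1| are invented for illustration and do not come from any trained system:

    # Toy scoring of one segmentation/alignment under Eq. (1).
    phi = {("the book", "puththagam"): 0.5, ("is good", "nalladhu"): 0.4}

    def distortion(a_i, b_prev, alpha=0.5):
        # d(a_i - b_{i-1}): penalise source-side reordering jumps
        return alpha ** abs(a_i - b_prev - 1)

    def phrase_model_prob(pairs, spans, alpha=0.5):
        # pairs[i] = (English phrase, Tamil phrase); spans[i] = (a_i, b_i),
        # the start/end positions of the i-th English phrase
        p, b_prev = 1.0, 0
        for (e, t), (a_i, b_i) in zip(pairs, spans):
            p *= phi.get((e, t), 1e-9) * distortion(a_i, b_prev, alpha)
            b_prev = b_i
        return p

    print(phrase_model_prob([("the book", "puththagam"), ("is good", "nalladhu")],
                            [(1, 2), (3, 4)]))

A real decoder searches over all segmentations and reorderings; this fragment only evaluates one, which is enough to see how the translation and distortion factors multiply.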
MOTIVATING EXAMPLE: MORPHOLOGY

Tamil is one of the longest surviving classical languages in the world. It is morphologically rich and agglutinative [3]. Therefore it
needs deep analysis at the word level to capture the meaning of a word from its morphemes and their categories. The complex
morphological structure of Tamil inflects for person, gender and number markings, and also combines with auxiliaries that indicate
aspect, mood, causation, attitude, etc. in the verb. Each root is affixed with several morphemes to generate a word, and each root
word can take a few thousand inflected word forms.


INCORPORATING LINGUISTIC INFORMATION IN SMT
To model translation systems for a morphologically rich language like Tamil we need to integrate linguistic information into the
translation models at the level of lemmas, grouping the different word forms that share a common lemma. Factored translation
models allow the integration of additional morphological and lexical information at the word level of both the source and the target
languages. Factored translation models [14] are an extension of phrase-based models in which each word is substituted by a vector of
factors such as word, lemma, part-of-speech information, morphology, etc. In order to improve the translation quality the factored
models can be employed with various features like morphological coherence [13, 9, 10, 6, 4, 5], grammatical coherence [11], compound
handling [8] or domain adaptation [2, 12].
DECOMPOSITION OF FACTORED TRANSLATION

i) Factored corpora:
Statistical machine translation system accuracy is based on the size of the parallel corpus. The scarce availability of parallel
sentences for the English-Tamil language pair inhibits the efficiency of SMT systems. Therefore, in order to build the framework of
factored translation, the available parallel corpus is cleaned up to separate the words and punctuation. Pre-processing plays a
predominant role in creating a factored training corpus. For a highly agglutinative language like Tamil the pre-processing is done
through linguistic tools like a POS tagger and morphological analysers (Fig 1, Fig 1.1).
For English, reordering and compounding steps are implemented for the creation of the factored corpus.
Input Output
Word Word
Lemma Lemma
Part-of-speech Part-of-speech
Morphology Morphology
Word class Word class

Fig 1. Representing (source/target) by factors

ii) Mapping steps in Translation models:
The translation model in the factored based models is broken up into three mapping steps:
1. Translation of input lemmas into output lemmas
2. Translation of morphological and POS factors
3. Generation of surface form through the lemma and the linguistic information.






Input Output
Word Word
Lemma Lemma
Part-of-speech Part-of-speech
Morphology Morphology
Fig 1.1. Example factored model

Let us consider an example: the word "book" is different from the word "books". If the system is trained with the word "book", then
while translating, the system identifies it, but it fails to identify the word "books" (the plural form), since the system is not
trained with the linguistic information. Though this problem does not cause much impact for the English language, it shows up as a
significant problem for a morphologically rich language like Tamil.

(In Fig 2: e = source factors, t = target factors; T = translation step, G = generation step.)

Fig 2. The annotated factors of a word in a source language (e) and the translated factors of the source word in the target language (t)

Thus there is a need for a model in which the lemma and the morphological information are translated separately, and which generates
the surface words on the output side from the obtained information. Factored translation models (Fig 2) can ultimately meet this
need. Thus, before training, the sentences of the parallel corpus should be annotated with factors which give linguistic information
such as lemma, part-of-speech, morphology, etc. Translation steps operate on the sentences like the phrase based models, whereas the
generation steps train on the target side of the corpus. For every annotated factor, additional language models are used to train the
system. Models are combined in a log-linear fashion analogous to the different factors and components.
CONCLUSION
This paper describes the significance of factored translation models, which are an extension of the phrase based approach,
integrating additional information from linguistic tools or automated word classes. Moreover, these models can be deployed for morphologically
rich languages like Tamil for better translation quality.


REFERENCES:
[1] Sripirakas, S., A. R. Weerasinghe, and D. L. Herath, "Statistical machine translation of systems for Sinhala-Tamil", Advances in
ICT for Emerging Regions (ICTer), International Conference on, IEEE, 2010.
[2] Niehues, J., Waibel, A., "Domain adaptation in statistical machine translation using factored translation models", In: Proc. of
EAMT, 2010.
[3] Kumar, M. Anand, et al., "A Sequence Labeling Approach to Morphological Analyzer for Tamil Language", International Journal
on Computer Science and Engineering, Volume 02, Issue 06, 2010.
[4] Koehn, P., Haddow, B., Williams, P., Hoang, H., "More linguistic annotation for statistical machine translation", In: Proc. of
WMT and Metrics MATR, Uppsala, Sweden, ACL, pp. 115-120, 2010.
[5] Yeniterzi, R., Oflazer, K., "Syntax-to-Morphology Mapping in Factored Phrase-Based Statistical Machine Translation from
English to Turkish", In: Proc. of ACL, Uppsala, Sweden, ACL, pp. 454-464.
[6] Ramanathan, A., Choudhary, H., Ghosh, A., Bhattacharyya, P., "Case markers and morphology: addressing the crux of the
fluency problem in English-Hindi SMT", In: Proc. of ACL/IJCNLP, Suntec, Singapore, Volume 2, pp. 800-808, 2009.
[7] Koehn, P., Birch, A., and Steinberger, R., "462 Machine Translation Systems for Europe", In: MT Summit XII, 2009.
[8] Stymne, S., "German Compounds in Factored Statistical Machine Translation", In: Ranta, Bengt Nordstrom Aarne, Advances in
Natural Language Processing, Volume 5221 of Lecture Notes in Computer Science, Springer Berlin/Heidelberg, 2008.
[9] Avramidis, E., Koehn, P., "Enriching morphologically poor languages for statistical machine translation", In: Proc. of ACL/HLT,
Columbus, Ohio, ACL, pp. 763-770, 2008.
[10] Badr, I., Zbib, R., Glass, J., "Segmentation for English-to-Arabic statistical machine translation", In: Proc. of ACL/HLT Short
Papers, Columbus, Ohio, ACL, pp. 153-156, 2008.
[11] Birch, A., Osborne, M., Koehn, P., "CCG Supertags in Factored Statistical Machine Translation", In: Proc. of ACL WMT,
Prague, Czech Republic, ACL, pp. 9-16, 2007.
[12] Koehn, P., Schroeder, J., "Experiments in domain adaptation for statistical machine translation", In: Proc. of ACL WMT,
Prague, Czech Republic, ACL, pp. 224-227, 2007.
[13] Bojar, O., "English-to-Czech Factored Machine Translation", In: Proc. of ACL WMT, Prague, Czech Republic, ACL,
pp. 232-239, 2007.
[14] Philipp Koehn and Hieu Hoang, "Factored translation models", In: Proc. EMNLP+CoNLL, Prague, pp. 868-876, 2007.
[15] Philipp Koehn, Franz Josef Och, and Daniel Marcu, "Statistical Phrase-Based Translation", In: Proc. of HLT/NAACL, 2003.
[16] Fredric C. Gey, "Prospects for Machine Translation of the Tamil Language", In: Proc. of Tamil Internet Conference,
California, USA, 2002.
[17] Vasu Renganathan, "An interactive approach to development of English to Tamil machine translation system on the web",
INFITT (TI2002), 2002.
[18] Germann, Ulrich, "Building a statistical machine translation system from scratch: how much bang for the buck can we
expect?", In: Proc. of the Workshop on Data-driven Methods in Machine Translation, Volume 14, Association for
Computational Linguistics, 2001.
[19] Koehn, Philipp, and Kevin Knight, "Knowledge sources for word-level translation models", In: Proc. of the Conference
on Empirical Methods in Natural Language Processing, 2001.

























Analysis and Control of Three Phase Multi level Inverters with Sinusoidal
PWM Feeding Balanced Loads Using MATLAB
Rajesh Kumar Ahuja¹, Amit Kumar²
¹,²Department of Electrical Engineering, YMCA University of Science & Technology, Faridabad, Haryana, India
E-mail: rajeshkrahuja@gmail.com, ymca62@gmail.com

Abstract - Multi level inverters are becoming very attractive for industries due to their high power rating and high voltage rating,
with high efficiency achieved without a transformer. They also improve the overall performance of the system as they produce fewer
harmonics. As the number of levels increases, the quality of the voltage waveform also increases. In this paper the analysis of a three
phase diode clamped multi level inverter has been carried out in MATLAB/Simulink for different levels (3, 5 and 7) at varying
loads. The diode-clamped topology for the multi level inverter (DC-MLI) for different levels is analyzed in terms of the THD content
of the output voltage as well as the output current. The sinusoidal PWM technique is used to generate the gate pulses. The simulation
of three phase three level, five level and seven level inverters is done in MATLAB/Simulink.
I. INTRODUCTION
In multilevel inverters the main dc supply voltage is divided into several smaller sources which are further used to synthesize an ac
voltage source into a staircase or stepped approximation of the desired sinusoidal waveform. Multilevel inverters combine the
individual dc sources at particular times to make a sine wave, and by using more levels to synthesize the sine waveform, the
waveform approaches the desired sine and the total THD is reduced to nearly zero.
Types of multi level inverters
There are mainly three types of multi level inverters:
- Diode-clamped inverter
- Flying capacitor inverter
- Cascaded inverter
In this paper the diode-clamped topology for the multi level inverter [10] is used. The simplest form of this topology is also known as
the neutral point clamped converter. In this there are two pairs of switches (upper and lower).
For an m-level diode clamped multi-level inverter:
No. of power semiconductor switches per phase = 2(m-1)
Clamping diodes per phase = (m-1)(m-2)
DC bus capacitors = (m-1)
where m = no. of levels.
The main role of the capacitors is to divide the main dc voltage into smaller voltages, i.e. for five levels the output takes the
values V_dc/2, V_dc/4, 0, -V_dc/4 and -V_dc/2.
The diode-clamped topology for the multi level inverter (DC-MLI) has a number of advantages, some of which are as follows:
- The THD decreases with the increase in the number of levels.
- A common DC bus is used for all the phases.
- The flow of reactive power can be controlled.

- Control scheme is quite simple.

II. MODULATION TECHNIQUE

The control of a multi level inverter is much more complicated than that of a two level voltage source inverter because of the extra
need for timing the transitions between the voltage levels. Sinusoidal PWM, space vector modulation and harmonic elimination PWM are
some of the main modulation techniques for the multi level inverter, but sinusoidal PWM is found to be the most popular method; space
vector modulation and harmonic elimination PWM are used for some specific applications.
In sinusoidal PWM, the reference (sine) wave is compared with (N-1) triangular waves. The resulting intersection points are used
as the switching instants of the PWM pulses, as shown in Figure 1.


Figure 1 Comparison of Triangular Waves with Sine Wave
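
A minimal Python sketch of this comparison is given below (a level-shifted, phase-disposition variant; the frequencies and modulation index are placeholder values, not the paper's Simulink settings):

    import math

    def spwm_level(t, m_levels=5, f_ref=50.0, f_car=2000.0, m_a=0.9):
        # Level-shifted sinusoidal PWM: the reference sine is compared with
        # (m_levels - 1) stacked triangular carriers; the number of carriers
        # lying below the reference selects the output level.
        n_car = m_levels - 1
        ref = m_a * math.sin(2.0 * math.pi * f_ref * t)   # reference in [-1, 1]
        frac = (t * f_car) % 1.0                          # carrier phase
        tri = 2.0 * frac if frac < 0.5 else 2.0 * (1.0 - frac)  # in [0, 1]
        level = 0
        for band in range(n_car):
            lo = -1.0 + 2.0 * band / n_car                # lower edge of band
            if ref > lo + (2.0 / n_car) * tri:            # shifted carrier
                level += 1
        return level   # 0 .. m_levels-1 steps of the output waveform

    # One fundamental period sampled at 20 kHz:
    levels = [spwm_level(k / 20000.0) for k in range(400)]
    print(min(levels), max(levels))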

III. MATLAB SIMULATION

The main objective of this paper is to generate the gate pulses for the thyristors used in the diode-clamped inverter. The gate
pulses are generated by the SPWM method: when the two waves (sine and triangular) are compared, pulses are generated
which are further used to control the VSI. The MATLAB simulation of the 5-level diode-clamped inverter is shown in Figure 2.

Figure 2 Matlab Simulation of 5- Level Diode-Clamped Inverter
IV. SIMULATION RESULTS
The different level inverters are analyzed for different values of load. An R-L load is taken for all the outputs and the power factor
is taken as 0.8.
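
The THD figures reported below are computed with the MATLAB/Simulink FFT analysis tool. As a reference for the definition used, here is a generic, self-contained Python routine (our illustration, not the paper's code) that evaluates THD on one period of any sampled waveform; it returns roughly 0 % for a pure sine and about 48 % for a square wave:

    import math

    def thd_percent(samples):
        # THD of one period of a sampled waveform via a direct DFT:
        # 100 * sqrt(sum of squared harmonic magnitudes, n >= 2) / fundamental
        N = len(samples)
        def mag(n):
            re = sum(s * math.cos(2*math.pi*n*k/N) for k, s in enumerate(samples))
            im = sum(s * math.sin(2*math.pi*n*k/N) for k, s in enumerate(samples))
            return 2.0 * math.hypot(re, im) / N
        fund = mag(1)
        harm = math.sqrt(sum(mag(n)**2 for n in range(2, N // 2)))
        return 100.0 * harm / fund

    sine = [math.sin(2*math.pi*k/256) for k in range(256)]
    square = [1.0 if s >= 0 else -1.0 for s in sine]
    print("%.2f %%  %.2f %%" % (thd_percent(sine), thd_percent(square)))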
A. Output Phase Voltage of 3-Level Inverter


Figure 3. Output Phase Voltage of 3-Level Inverter

B. Output Current of 3-Level Inverter at 1 KW

Figure 4. Output Current of 3-Level Inverter at 1 KW
C. Output Phase Voltage of 5-Level Inverter


Figure 5. Output Phase Voltage of 5-Level Inverter
D. Output Current of 5-Level Inverter at 1 KW

Figure 6. Output Current of 5-Level Inverter at 1 KW

E. Output Phase Voltage of 7-Level Inverter

Figure 7. Output Phase Voltage of 7-Level Inverter


F. Output Current of 7-Level Inverter at 1 KW

Figure 8. Output Current of 7-Level Inverter at 1 KW


V. RESULTS
The THD for the output voltages and output currents is summarized in the tables below. Table 1 shows the comparison of the THD of
the output currents of the different levels at different loads, and Table 2 shows the comparison of the THD of the output voltages of
the different levels at different loads.

Table No. 1

Level | Quantity | 1 kW | 3 kW | 5 kW | 7 kW | 9 kW
3-Level Inverter | Current | 0.5115 | 1.534 | 2.556 | 3.578 | 4.6
3-Level Inverter | THD | 1.99 % | 1.99 % | 1.99 % | 1.99 % | 1.99 %
5-Level Inverter | Current | 0.5214 | 1.534 | 2.509 | 3.447 | 4.348
5-Level Inverter | THD | 1.14 % | 1.26 % | 1.40 % | 1.56 % | 1.73 %
7-Level Inverter | Current | 0.5227 | 1.563 | 2.602 | 3.641 | 4.678
7-Level Inverter | THD | 0.55 % | 0.55 % | 0.54 % | 0.53 % | 0.53 %

Table No. 2

Level | Quantity | 1 kW | 3 kW | 5 kW | 7 kW | 9 kW
3-Level Inverter | Voltage | 410.8 | 410.6 | 410.5 | 410.5 | 410.4
3-Level Inverter | THD | 57.19 % | 57.27 % | 57.27 % | 57.27 % | 57.26 %
5-Level Inverter | Voltage | 417.7 | 409.5 | 401.5 | 393.8 | 386.34
5-Level Inverter | THD | 28.12 % | 29.69 % | 31.57 % | 33.74 % | 36.22 %
7-Level Inverter | Voltage | 418.5 | 417.2 | 416.7 | 416.4 | 416.2
7-Level Inverter | THD | 18.80 % | 19.41 % | 19.52 % | 19.54 % | 19.55 %
VI. CONCLUSION
This paper has evaluated the sinusoidal PWM technique for the diode-clamped multi level inverter. A Simulink model for the same has
been developed and tested in the MATLAB/Simulink environment for different values of load. The simulation results are compared
and analyzed by plotting the output harmonic spectra of the various output currents and output voltages and computing their total
harmonic distortion (THD); the final comparison is shown in the tables. It is observed that with the increase of level the THD of the
output current and voltage decreases. The THD of the load currents at all levels is less than 5 %, which shows the robustness of the
design of this multilevel inverter. It is clear from the comparison that the harmonics are reduced as the number of levels increases
and hence the overall system efficiency increases.


REFERENCES:
[1] Tim Cunnyngham, "Cascaded Multilevel Inverter for Large Hybrid Electric Vehicle Applications with Variant DC Sources", a thesis
presented for the Master of Science degree, The University of Tennessee, Knoxville, May 2001.
[2] Ilhami Colak, Ersan Kabalci, and Seref Sagiroglu, "The Design and Analysis of a 5-Level Cascaded Voltage Source Inverter with Low
THD", Lisbon, Portugal, pp. 575-580, March 18-20, 2009.
[3] Calais, M., Borle, L. J., Agelidis, V. G., "Analysis of multicarrier PWM methods for a single-phase five level inverter", Power
Electronics Specialists Conference, PESC 2001, IEEE 32nd Annual, Volume 3, pp. 1351-1356, 2001.
[4] Nabae, A., Takahashi, I., Akagi, H., "A New Neutral-Point-Clamped PWM Inverter", IEEE Transactions on Industry Applications,
Vol. IA-17, No. 5, Sep.-Oct. 1981.
[5] D. G. Holmes, T. A. Lipo, "Modern Pulse Width Modulation Techniques for Power Converters", IEEE Press, 2003.
[6] Rajesh Kr Ahuja, Lalit Aggarwal, Pankaj Kumar, "Simulation of Single Phase Multilevel Inverters with Simple Control Strategy Using
MATLAB", International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 2, Issue 10,
October 2013.
[7] B. P. McGrath and D. G. Holmes, "Reduced PWM harmonic distortion for multilevel inverters operating over a wide modulation range",
IEEE Transactions on Power Electronics, Vol. 21, No. 4, pp. 941-949, July 2006.
[8] Zainal Salam and Junaidi Aziz, "Derivation of Switching Angles of the Cascaded Multilevel Voltage Source Inverter Subjected to a New
Pulse Width Modulation Scheme", The Institution of Engineers, Malaysia, Vol. 72, No. 3, September 2009.
[9] Rajesh Kumar Ahuja, Amit Kumar, "Analysis, Design and Control of Sinusoidal PWM Three Phase Voltage Source Inverter Feeding
Balanced Loads at Different Carrier Frequencies Using MATLAB", IJAREEIE, Volume 3, Issue 5, May 2014.
[10] Tim Cunnyngham, "Cascade Multilevel Inverters for Large Hybrid-Electric Vehicle Applications with Variant DC Sources", M.S. thesis,
The University of Tennessee, Knoxville, 2001.
[11] Maheswari, S. Mahendran, Dr. I. Gnanambal, "Implementation of Fundamental Frequency Switching Scheme on Multi Level Cascaded
H-Bridge Inverter Fed Three Phase Induction Motor Drive", Wulfenia Journal, Klagenfurt, Austria, Vol. 19, No. 8, pp. 10-24, 2012.
[12] N. Celanovic and D. Boroyevich, "A comprehensive study of neutral-point voltage balancing problem in three-level neutral-point-clamped
voltage source PWM inverters", IEEE Transactions on Power Electronics, Vol. 15, No. 2, pp. 242-249, 2000.
[13] Y. R. Manjunatha, M. Y. Sanavullah, "Generation of equal step multilevel inverter output using two unequal batteries", International
Journal of Electrical and Power Engineering, Vol. 1, Issue 2, pp. 206-209, 2007.
[14] M. Ayadi, L. El Mbeki, M. A. Fakhfakh, M. Ghariani, R. Nazi, "A Comparison of PWM Strategies for Multilevel Cascaded and Classical
Inverters Applied to the Vectorial Control of Asynchronous Machine", International Review of Electrical Engineering, Vol. 5, No. 5,
pp. 2106-2114, September-October 2010.
[15] R. Lund, M. D. Manjrekar, P. Steimer, T. A. Lipo, "Control strategies for a hybrid seven-level inverter", in Proceedings of the European
Power Electronic Conference, Lausanne, Switzerland, Sep 2009.
[16] V. Kumar Chinnaiyan, Dr. Jovitha Jerome, J. Karpagam, and T. Suresh, "Control techniques for Multilevel Voltage Source Inverters", in
Proceedings of The 8th International Power Engineering Conference (IPEC 2007), Singapore, pp. 1023-1028, 3-6 Dec 2007.
[17] S. K. Pillai, "A First Course on Electrical Drives", 2nd ed., New Age International Publishers, 2004.











Record Values from Size-Biased Pareto Distribution and a Characterization

Shakila Bashir¹, Munir Ahmad²

¹ Assistant Professor, Kinnaird College for Women, Lahore
² Professor, National College of Business Administration & Economics (NCBA&E), Lahore
E-mail: shakilabashir15@gmail.com
Abstract: In this paper, upper record values from the size-biased Pareto distribution (S-BPD) are studied. Several distributional properties of upper record values from the size-biased Pareto distribution, including the probability density function (pdf), cumulative distribution function (cdf), moments, entropy, inverse/negative moments, relations between negative and positive moments, median, mode, joint and conditional pdfs, and conditional mean and variance, are derived. The reliability measures of the upper record values from the S-BPD, such as the survival function, hazard rate function, cumulative hazard rate function and reversed hazard rate, are also discussed. A characterization of the S-BPD based on the conditional expectation of record values is given.
Keywords: S-BPD; distribution function; moments; record values; hazard function; entropy; mgf; cdf; pdf; characterization.
1. INTRODUCTION

Chandler (1952) introduced records as a sequence of random variables such that the random variable at the i-th place is larger (smaller) than the variable at the (i-1)-th place. He called such random variables upper (lower) records in a random sample of size n from some probability distribution. After the introduction of the field, a number of researchers entered this area of statistics. Shorrock (1973) comprehensively discussed record values and record times in a sequence of random variables. Ahsanullah (1979) characterized the exponential distribution by using record values. Ahsanullah (1991) also derived the distributional properties of records from the Lomax distribution. Some moment properties of records were given by Ahsanullah (1992). Balakrishnan and Ahsanullah (1994) established some recurrence relations satisfied by the single and double moments of upper record values from the standard form of the generalized Pareto distribution. Ahsanullah (1997) derived some properties and a characterization of upper record values from the classical Pareto distribution. Sultan and Moshref (2000) obtained the best linear unbiased estimates for the location and scale parameters of record values from the generalized Pareto distribution. Ahsanullah (2010) considered several distributional properties of upper records from the exponential distribution and, based on these distributional properties, presented some characterizations of the exponential distribution. Ahsanullah et al. (2013) discussed a new characterization of the power function distribution based on lower record values.

The pdf f_n(x) of the n-th upper record value X_{U(n)} is

f_n(x) = \frac{[R(x)]^{n-1}}{\Gamma(n)} f(x), \qquad -\infty < x < \infty,   (1.1)

where R(x) = -\ln[1 - F(x)]. The joint pdf of X_{U(i)} and X_{U(j)} is

f_{i,j}(x, y) = \frac{[R(x)]^{i-1}}{\Gamma(i)}\, r(x)\, \frac{[R(y) - R(x)]^{j-i-1}}{\Gamma(j-i)}\, f(y), \qquad -\infty < x < y < \infty, \; j > i,   (1.2)

where r(x) = R'(x) = f(x)/[1 - F(x)]. The conditional pdf of X_{U(j)} given X_{U(i)} = x_i is

f\big(y \mid X_{U(i)} = x_i\big) = \frac{[R(y) - R(x_i)]^{j-i-1}}{(j-i-1)!}\, \frac{f(y)}{1 - F(x_i)}, \qquad -\infty < x_i < y < \infty.   (1.3)

For j = i + 1,

f\big(y \mid X_{U(i)} = x_i\big) = \frac{f(y)}{1 - F(x_i)}, \qquad -\infty < x_i < y < \infty.   (1.4)
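To make the record-value machinery concrete, here is a minimal Python sketch, an illustration added to this text rather than part of the original paper: it extracts the upper records from a simulated sequence, with S-BPD variates generated by inverting the cdf (1.7).

```python
import numpy as np

rng = np.random.default_rng(0)

def upper_records(sequence):
    """Return the upper record values observed along a sequence."""
    records, running_max = [], -np.inf
    for value in sequence:
        if value > running_max:      # a new upper record occurs
            records.append(value)
            running_max = value
    return records

# Inverting (1.7): 1 - F(X) = U  =>  X = alpha * U**(-1/(beta-1)), U ~ Uniform(0,1)
alpha, beta = 1.0, 5.0
sample = alpha * rng.uniform(size=10_000) ** (-1.0 / (beta - 1.0))
print(upper_records(sample)[:6])     # the first few upper record values
```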

1.1 SIZE-BIASED PARETO DISTRIBUTION
When an investigator records observations generated by nature according to a certain stochastic model, the recorded observations will not follow the original distribution unless every observation is given an equal chance of being recorded. Patil and Rao (1978) examined some general models leading to weighted distributions, with weight functions not necessarily bounded by unity, and applied the results to the analysis of data relating to human populations and wildlife management. Sunoj and Maya (2006) introduced some fundamental relationships between the weighted and the original variables in the context of the maintainability function and the inverted repair rate; they also established characterization theorems for specific models such as the power, exponential, Pareto II, beta, and Pearson systems of distributions using the relationships between the original and weighted random variables. Mir and Ahmad (2009) introduced some size-biased probability distributions and their generalizations; these distributions offer a unifying approach for problems where the observations fall in the non-experimental, non-replicated, and non-random categories, and they illustrated some possible uses of size-biased distribution theory on real-life data. A number of papers have appeared during the last ten years implicitly using the concepts of weighted and size-biased sampling distributions.

The probability density function of the weighted Pareto distribution is obtained by applying the weight w(X) = x^m to the classical Pareto pdf:

f_m(x) = (\beta - m)\, \alpha^{\beta - m}\, x^{-(\beta - m + 1)}, \qquad x > \alpha > 0, \; \beta > m.   (1.5)

The special cases m = 1 and m = 2 are named the size-biased (or length-biased) and area-biased distributions, respectively. We define the size-biased Pareto distribution by taking w(X) = x. The probability density function of the size-biased Pareto distribution is

f(x) = (\beta - 1)\, \alpha^{\beta - 1}\, x^{-\beta}, \qquad x > \alpha > 0, \; \beta > 1.   (1.6)

The corresponding cumulative distribution function of the size-biased Pareto distribution is

F(x) = 1 - \alpha^{\beta - 1} x^{1 - \beta}.   (1.7)
In this paper, upper record values from the S-BPD are derived and various of their properties, including a characterization, are discussed. Previously, no research work had been done on weighted distributions in the context of record values, so it is hoped that the findings of this paper will be useful for researchers in different fields of the applied sciences.
2. UPPER RECORD VALUES FROM SIZE-BIASED PARETO DISTRIBUTION (S-BPD)
Let X_{U(1)}, X_{U(2)}, \ldots, X_{U(n)} denote the upper record values arising from iid size-biased Pareto variables. Using equations (1.6) and (1.7), R(x) = (\beta - 1)\ln(x/\alpha), and the probability density function of the n-th upper record X_{U(n)} is given by

f_n(x) = \frac{(\beta - 1)^n \alpha^{\beta - 1}}{\Gamma(n)}\, [\ln(x/\alpha)]^{n-1}\, x^{-\beta}, \qquad x > \alpha > 0.   (2.1)


Fig. 1: pdf plots of upper record values from the S-BPD for n = 2, 3, 4, 5 with \alpha = 1 and (a) \beta = 2, (b) \beta = 3, (c) \beta = 5, (d) \beta = 7.

2.1 PROPERTIES

In this section, some distributional properties of the upper record values from the S-BPD are derived.

2.1.1 MOMENTS

The r-th moment of the n-th upper record value X_{U(n)}, obtained by using (2.1), is

\mu'_r = E\big(X^r_{U(n)}\big) = \alpha^r \left( \frac{\beta - 1}{\beta - r - 1} \right)^n, \qquad r < \beta - 1.   (2.2)

The mean and variance of the upper record values from the S-BPD are, respectively,

\text{Mean} = \alpha \left( \frac{\beta - 1}{\beta - 2} \right)^n,   (2.3)

\text{Variance} = \alpha^2 \left[ \left( \frac{\beta - 1}{\beta - 3} \right)^n - \left( \frac{\beta - 1}{\beta - 2} \right)^{2n} \right], \qquad \beta > 3.   (2.4)
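Since R(X_{U(n)}) = (\beta - 1)\ln(X_{U(n)}/\alpha) follows a standard gamma distribution with shape n, the closed forms (2.3)-(2.4) can be checked numerically. The Python sketch below is illustrative only; the values of alpha, beta and n are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n = 1.0, 6.0, 3          # beta > 3 so the variance is finite

# Closed forms (2.3)-(2.4)
mean_th = alpha * ((beta - 1) / (beta - 2)) ** n
var_th = alpha**2 * (((beta - 1) / (beta - 3)) ** n
                     - ((beta - 1) / (beta - 2)) ** (2 * n))

# Simulation via X_{U(n)} = alpha * exp(G / (beta - 1)), with G ~ Gamma(n, 1)
g = rng.gamma(shape=n, size=200_000)
x = alpha * np.exp(g / (beta - 1.0))
print(f"mean: theory {mean_th:.4f}  simulation {x.mean():.4f}")
print(f"variance: theory {var_th:.4f}  simulation {x.var():.4f}")
```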

The mode of the upper record values from the size-biased Pareto distribution is

x_{\text{mode}(n)} = \alpha \exp\!\left( \frac{n - 1}{\beta} \right).   (2.5)

The inverse/negative moments of the n-th upper record value from the size-biased Pareto distribution are

\mu'_{-r} = E\big(X^{-r}_{U(n)}\big) = \alpha^{-r} \left( \frac{\beta - 1}{\beta + r - 1} \right)^n.   (2.6)


By using equations (2.2) and (2.6), the relation between the negative and positive moments of the n-th upper record value from the S-BPD is

\mu'_{-r} = \alpha^{-2r} \left( \frac{\beta - r - 1}{\beta + r - 1} \right)^n \mu'_r.   (2.7)

The moment generating function of the upper record values from the size-biased Pareto distribution is

M_{X_{U(n)}}(t) = \sum_{k=0}^{\infty} \frac{t^k \alpha^k}{k!} \left( \frac{\beta - 1}{\beta - k - 1} \right)^n.   (2.8)
2.1.2 ENTROPY
The entropy of the n-th record value X_{U(n)} from the S-BPD reduces, on evaluating E\big[-\ln f_n(X_{U(n)})\big] with the gamma representation of R(X_{U(n)}), to

H\big(X_{U(n)}\big) = \ln \Gamma(n) - (n - 1)\psi(n) + \frac{n\beta}{\beta - 1} + \ln\!\left( \frac{\alpha}{\beta - 1} \right),   (2.9)

where \psi(\cdot) denotes the digamma function.

2.1.3 CUMULATIVE DISTRIBUTION FUNCTION

The cumulative distribution function of the upper record values from the S-BPD is

F_n(x) = 1 - \frac{\Gamma\big(n, (\beta - 1)\ln(x/\alpha)\big)}{\Gamma(n)},   (2.10)

where \Gamma(s, a) = \int_a^{\infty} x^{s-1} e^{-x}\, dx is the upper incomplete gamma function.
2.1.4 SURVIVAL AND HAZARD RATE FUNCTION

The survival function of the upper record values from the S-BPD is

S_n(x) = \frac{\Gamma\big(n, (\beta - 1)\ln(x/\alpha)\big)}{\Gamma(n)}.   (2.11)

The hazard rate function is

h_n(x) = \frac{(\beta - 1)^n \alpha^{\beta - 1}\, [\ln(x/\alpha)]^{n-1}\, x^{-\beta}}{\Gamma\big(n, (\beta - 1)\ln(x/\alpha)\big)}.   (2.12)



The cumulative hazard rate function is

H_n(x) = -\ln\!\left[ \frac{\Gamma\big(n, (\beta - 1)\ln(x/\alpha)\big)}{\Gamma(n)} \right].   (2.13)


The reversed hazard rate function is

r_n(x) = \frac{(\beta - 1)^n \alpha^{\beta - 1}\, [\ln(x/\alpha)]^{n-1}\, x^{-\beta}}{\Gamma(n) - \Gamma\big(n, (\beta - 1)\ln(x/\alpha)\big)}.   (2.14)
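The reliability measures (2.11)-(2.14) are easy to evaluate numerically, because scipy's gammaincc is exactly the regularised upper incomplete gamma function \Gamma(n, z)/\Gamma(n) appearing in (2.11). The following sketch is illustrative, with arbitrary parameter values.

```python
import numpy as np
from scipy.special import gammaincc, gamma as gamma_fn

alpha, beta, n = 1.0, 5.0, 3

def pdf(x):
    """Equation (2.1)."""
    z = np.log(x / alpha)
    return ((beta - 1.0) ** n * alpha ** (beta - 1.0)
            * z ** (n - 1) * x ** (-beta) / gamma_fn(n))

def survival(x):
    """Equation (2.11); gammaincc(n, z) = Gamma(n, z) / Gamma(n)."""
    return gammaincc(n, (beta - 1.0) * np.log(x / alpha))

def hazard(x):
    """Equation (2.12), as pdf / survival."""
    return pdf(x) / survival(x)

xs = np.array([1.5, 2.0, 5.0, 10.0])
print(survival(xs))
print(hazard(xs))
```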



Fig. 2: Survival function plots of upper record values from the S-BPD for n = 2, 3, 5, 7 with \alpha = 1 and (a) \beta = 2, (b) \beta = 5.

Fig. 3: Hazard rate plots of upper record values from the S-BPD for n = 2, 3, 5, 7 with \alpha = 1 and (a) \beta = 2, (b) \beta = 5.

3. JOINT AND CONDITIONAL DENSITY FUNCTIONS

The joint probability density function of X_{U(i)} and X_{U(j)} from the S-BPD is

f_{i,j}(x, y) = \frac{(\beta - 1)^j \alpha^{\beta - 1}}{\Gamma(i)\, \Gamma(j - i)}\, [\ln(x/\alpha)]^{i-1}\, [\ln(y/x)]^{j-i-1}\, \frac{y^{-\beta}}{x}, \qquad \alpha < x < y < \infty, \; j > i.   (3.1)

Thus the conditional pdf f_{j|i}(y \mid x) of X_{U(j)} given X_{U(i)} = x is

f_{j|i}(y \mid x) = \frac{(\beta - 1)^{j-i}}{\Gamma(j - i)}\, [\ln(y/x)]^{j-i-1} \left( \frac{x}{y} \right)^{\beta - 1} \frac{1}{y}, \qquad \alpha < x < y < \infty.   (3.2)

The mean of the conditional pdf f_{j|i}(y \mid x) is

E\big(X_{U(j)} \mid X_{U(i)} = x\big) = x \left( \frac{\beta - 1}{\beta - 2} \right)^{j-i}.   (3.3)

The variance of the conditional pdf f_{j|i}(y \mid x) is

\mathrm{Var}\big(X_{U(j)} \mid X_{U(i)} = x\big) = x^2 \left[ \left( \frac{\beta - 1}{\beta - 3} \right)^{j-i} - \left( \frac{\beta - 1}{\beta - 2} \right)^{2(j-i)} \right].   (3.4)






4. CHARACTERIZATION
Using the conditional pdf of X_{U(n+1)} given X_{U(n)} as given in equation (1.4), it can be shown that if X \in P(\alpha, \beta), i.e. X follows the S-BPD, then

E\big(X^2_{U(n+1)} \mid X_{U(n)} = x\big) = \frac{(\beta - 1)\, x^2}{\beta - 3}, \qquad \beta > 3.

The following theorem gives a characterization of the S-BPD using the above result.

Theorem 4.1
Let \{X_n, n \geq 1\} be iid random variables having an absolutely continuous (with respect to Lebesgue measure) cdf F(x), with F(\alpha) = 0 and F(x) < 1 for all finite x > \alpha. We assume further that F(x) is twice differentiable and that E(X_n^2) < \infty, n \geq 1. If, for \beta > 3,

E\big(X^2_{U(n+1)} \mid X_{U(n)} = x\big) = \frac{(\beta - 1)\, x^2}{\beta - 3},   (4.1)

then X \in P(\alpha, \beta).

Proof. Using equation (1.4), condition (4.1) is equivalent to

\int_x^{\infty} y^2 f(y)\, dy = \frac{(\beta - 1)\, x^2}{\beta - 3}\, \bar F(x), \qquad \bar F(x) = 1 - F(x).   (4.2)

Differentiating both sides of equation (4.2) with respect to x, we get

-x^2 f(x) = \frac{2(\beta - 1)\, x}{\beta - 3}\, \bar F(x) - \frac{(\beta - 1)\, x^2}{\beta - 3}\, f(x).   (4.3)

Taking the second derivative of both sides of equation (4.2), we have

-2x f(x) - x^2 f'(x) = \frac{2(\beta - 1)}{\beta - 3}\, \bar F(x) - \frac{4(\beta - 1)\, x}{\beta - 3}\, f(x) - \frac{(\beta - 1)\, x^2}{\beta - 3}\, f'(x).   (4.4)

Substituting y = \bar F(x), y' = -f(x), y'' = -f'(x), equation (4.4) reduces to

x^2 y'' + (\beta + 1)\, x y' + (\beta - 1)\, y = 0.   (4.5)

Equation (4.5) is the well-known Euler-type equation. It has solutions of the form y = x^r, where r must satisfy

r(r - 1) + (\beta + 1)\, r + (\beta - 1) = 0, \qquad \text{i.e.} \qquad r^2 + \beta r + (\beta - 1) = 0.

The roots of this equation are r = 1 - \beta and r = -1. So the solutions are of the type

y = c_1 x^{1 - \beta} \qquad \text{and} \qquad y = c_2 x^{-1},   (4.6)

where c_1 and c_2 are constants. Since E(X^2) exists and y(x) = \bar F(x) = 1 - F(x), we must have

\lim_{x \to \infty} \bar F(x) = 0 \qquad \text{and} \qquad \lim_{x \to \infty} x^2 \bar F(x) = 0.   (4.7)

The solution y = c_1 x^{1-\beta} satisfies both conditions of equation (4.7), since \beta > 3. The solution y = c_2 x^{-1} violates the second condition of (4.7) and is therefore rejected. Matching the boundary condition \bar F(\alpha) = 1 fixes c_1 = \alpha^{\beta - 1}, so that

F(x) = 1 - \alpha^{\beta - 1} x^{1 - \beta}, \qquad x > \alpha > 0, \; \beta > 3,

which is the cdf of the S-BPD. This completes the proof.
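The conditional moment that drives Theorem 4.1 can be verified symbolically. The sympy sketch below, added here for illustration, checks (4.1) for the particular choice beta = 5, where (\beta - 1)x^2/(\beta - 3) = 2x^2.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
beta = sp.Integer(5)                     # any beta > 3 works for this check

# Conditional pdf of the next upper record given X_{U(n)} = x, equation (1.4),
# specialised to the S-BPD: f(y | x) = (beta - 1) x**(beta-1) y**(-beta), y > x
cond_pdf = (beta - 1) * x**(beta - 1) * y**(-beta)

second_moment = sp.integrate(y**2 * cond_pdf, (y, x, sp.oo))
print(sp.simplify(second_moment))        # prints 2*x**2, i.e. (beta-1)x^2/(beta-3)
```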

5. CONCLUSION
In this paper, we developed the distribution of upper record values from the size-biased Pareto distribution. The graphs show that the distribution of upper record values is positively skewed: for large values of n and \beta the pdf is peaked with a longer right tail, while for smaller values of n and \beta the pdf is flatter. We derived the positive and negative moments of the upper record values from the size-biased Pareto distribution and developed a relation between them. The associated cdf, survival function, hazard function, entropy, mgf, median, mode, skewness and kurtosis have been derived. We derived the joint and conditional probability density functions of the i-th and j-th upper record values from the size-biased Pareto distribution and found their conditional mean and variance. The cumulative hazard rate function and reversed hazard rate function for the record values from the size-biased Pareto distribution have also been derived. The plot of the survival function shows a slow decay for small n and \beta, and a faster, bathtub-shaped decay for large \beta. The plot of the hazard function shows an increasing trend for n = 2, while the hazard becomes a decreasing function as n increases. We hope this paper will make a valuable contribution to the enhancement of research in the theory of record values.

REFERENCES:

[1] Adamic, L. A. (2002). Zipf, Power-laws and Pareto - a ranking tutorial. Internet Ecologies Area, Xerox Palo Alto Research Center, Palo Alto, CA 94304 (http://ginger.hpl.hp.com/shl/papers/ranking.html).
[2] Ahsanullah, M. (1979). Characterization of the exponential distribution by record values, Sankhya, Vol. 41, 116-121.
[3] Ahsanullah, M. (1988). Introduction to Record Values. Ginn Press, Needham Heights, Massachusetts.
[4] Ahsanullah, M. (1991). Record values of the Lomax distribution, Statistica Neerlandica, Vol. 41(1), 21-29.
[5] Ahsanullah, M. (1992). Record values of independent and identically distributed continuous random variables, Pak. J. Statist., Vol. 8(2), 9-34.
[6] Ahsanullah, M. (1995). Record Statistics, Nova Science Publishers, USA.
[7] Ahsanullah, M. (1997). On the record values of the classical Pareto distribution. Pak. J. Statist., Vol. 13(1), 9-15.
[8] Ahsanullah, M. (2010). Concomitants of Upper Record Statistics for Bivariate Pseudo-Weibull Distribution, J. Appl. Math., Vol. 5(10), 1379-1388.
[9] Ahsanullah, M. (2010). Some characterizations of the exponential distribution by upper record values, Pak. J. Statist., Vol. 26(1), 69-75.
[10] Ahsanullah, M., Shakila, M., and Golam Kibria, B. M. (2013). A characterization of the power function distribution based on lower record values. ProbStat Forum, Vol. 6, 68-72.
[11] Arnold, B. C., Balakrishnan, N., and Nagaraja, H. N. (1992). A First Course in Order Statistics, John Wiley and Sons, New York.
[12] Balakrishnan, N. and Balasubramanian, K. (1995). A characterization of the geometric distribution based on record values, J. Appl. Statist. Science, Vol. 2(1), 73-87.
[13] Chandler, K. N. (1952). The distribution and frequency of record values, J. Roy. Statist. Soc., Vol. 14, 220-228.
[14] Gustafson, G. and Fransson, A. (2005). The use of the Pareto distribution for fracture transmissivity assessment, Hydrogeology Journal, Vol. 14, 15-20.
[15] Johnson, N. L., Kotz, S., and Balakrishnan, N. (1995). Continuous Univariate Distributions, Vol. 2, Second edition, John Wiley & Sons, New York.
[16] Mir, K. A., and Ahmad, M. (2009). Size-biased distributions and their applications, Pak. J. Statist., Vol. 25(3), 283-294.
[17] Patil, G. P., and Rao, C. R. (1978). Weighted Distributions and Size-Biased Sampling with Applications to Wildlife Populations and Human Families, Biometrics, Vol. 34, 179-189.
[18] Sultan, K. S., and Moshref, M. E. (2000). Record values from the generalized Pareto distribution and associated inference, Metrika, Vol. 51, 105-116.
[19] Sultan, K. S. (2007). Record Values from the Modified Weibull Distribution and Applications, International Mathematical Forum, Vol. 2(41), 2045-2054.
[20] Sunoj, S. M., and Maya, S. S. (2006). Some Properties of Weighted Distributions in the Context of Repairable Systems, Communications in Statistics - Theory and Methods, Vol. 35, 223-228.



























Ultra Wide Band Filter from Defected Ground Structures as Complementary Split Ring Resonator with Simultaneously Double Negative Permittivity and Permeability

Cherinet Seboka Ambaye¹,², Guoping Zhang¹, Yunhu Wu¹,³

¹ College of Physical Science and Technology, Central China Normal University, Wuhan, China
² Department of Physics, Madawalabu University, Bale Robe 217, Ethiopia
³ Department of Physics, Kashi Normal College, Kashi 844000, China
E-mail: chersebo@yahoo.com

Abstract: In this paper, a metamaterial with a single resonator operating in the 3.1 GHz to 10.6 GHz frequency range is designed for developing an ultra-wideband (UWB) filter. During the UWB filter design in the CST Microwave Studio simulation, a defected ground structure (DGS) in the form of a complementary split ring resonator (CSRR) is printed on the metallic plate, and a strip of wire is mounted upon the dielectric substrate, so that simultaneously double negative permittivity (\varepsilon < 0) and permeability (\mu < 0) are extracted. From the simulation result, the UWB bandpass filter has a transmission bandwidth of about 2.48 GHz and a fractional bandwidth of 27.46%, satisfying the minimum requirement of the FCC proposal at the -10 dB transmission bandwidth.

Keywords: Defected Ground Structure and Complementary Split Ring Resonator (DGS-CSRR); Drude-Lorentz model of the harmonic oscillator; negative refractive index; metamaterial resonator; negative permittivity; negative permeability
1. Introduction
Metamaterials are a special category of artificially engineered structures with sub-wavelength unit cells. In recent years, research on metamaterials, especially on left-handed materials (LHMs, i.e. metamaterials with simultaneously negative electrical permittivity (\varepsilon < 0) and magnetic permeability (\mu < 0)), has aroused much interest due to the many intriguing physical properties involved, such as negative refraction [1, 2, 3].
Doubtlessly, the development of research on metamaterials is tightly connected with structure design. As is well known, the difficulty of finding natural materials showing negative permeability is the reason that Veselago's hypothesis on left-handed materials (LHMs) [1] was given a cold shoulder for more than 30 years, until the first realistic left-handed (LH) structure in the microwave regime [2] came out; the unit cell of this structure was actually a combination of thin metallic wires, leading to \varepsilon < 0 [4], and split-ring resonators (SRRs), leading to \mu < 0 [5], both proposed theoretically by Sir J. Pendry. Afterwards, many novel metamaterial designs, especially designs showing negative permeability, were proposed, aiming at more simplified fabrication and testing processes, lower intrinsic losses, and operation at higher frequencies, even in the visible regime [6, 7]. In the microwave frequency regime, we design a metamaterial acting as an ultra-wideband bandpass filter from a defected ground structure of the conducting plane with a complementary split ring resonator (DGS-CSRR) [11, 13], operating within the frequency range of 3.1 GHz to 10.6 GHz [8, 9, 10, 12]. In recent years, ultra-wideband (UWB) technology has received much attention in academic and industrial fields [9, 10]. In order to construct a UWB communication system, many UWB components, including antennas and microwave filters, must be designed and developed. Compared to other parts of a UWB system, the design of a wide-bandwidth BPF with compact size, low insertion loss and wide band rejection is still a challenging task [9]. A UWB system is defined as any radio system that has a 10-dB bandwidth larger than 25% of its center frequency, or a 10-dB bandwidth equal to or larger than 1.5 GHz if the central (resonance) frequency is greater than 6 GHz [9]. The trends that drive recent R&D activities on UWB transmission for commercial communication applications include:
1. High data rate: UWB technology is likely to provide high data rates in short- and medium-range (such as 20 m, 50 m) wireless communications.
2. Less path loss and better immunity to multipath propagation: As UWB spans a very wide frequency range (from very low to very high), it has relatively low material penetration losses. On the other hand, UWB channels exhibit extremely frequency-selective fading, and each received signal contains a large number of resolvable multipath components.
3. Availability of low-cost transceivers: Recent advances in silicon processes and switching speeds make commercial low-cost UWB systems possible.
4. Low transmit power and low interference: For short-range operation, the average transmit power of pulses of duration on the order of one nanosecond with a low duty cycle is very low. With an ultra-wideband spectrum, the power spectral density of UWB signals is extremely low. This gives rise to the potential for UWB systems to coexist with narrow-band radio systems operating in the same spectrum without causing undue interference.
The complementary split ring resonator (CSRR), the dual of the split ring resonator (SRR), has been a very popular resonator, widely used to synthesize metamaterials. Pendry et al. [14] have demonstrated that an array of SRRs exhibits negative permeability near its resonant frequency. Gay-Balmaz et al. [15] studied experimentally and numerically the resonances in individual and coupled split ring resonators. Bonache et al. [16] applied complementary circular split-ring resonators to the design of compact narrow bandpass structures in microstrip technology, which opens the door to a wide range of applications. In this paper, concentric complementary split ring resonators (CSRRs) are produced as a defected ground structure (DGS) in the PEC plane to disturb the shield current distribution, depending on the shape and dimension of the defect. This disturbance of the shield current distribution influences the input impedance and the current flow of the antenna; it can also control the excitation of, and the electromagnetic waves propagating through, the substrate layer. A DGS is any defect etched in the ground plane of the microstrip, and it can increase the effective capacitance and inductance; it has the characteristics of a stop band, a slow-wave effect, and high impedance. DGSs are basically used in microstrip antenna design for different applications such as antenna size reduction, cross-polarization reduction, mutual coupling reduction in antenna arrays, harmonic suppression, etc. The DGS as CSRR is widely used in microwave devices to make the system compact and effective. Jigar M. Patel et al. [17] designed a microstrip patch antenna with a defected ground structure (DGS) for Bluetooth to determine the effect on its application. Microstrip bandpass filters are particularly popular structures because they can be fabricated using printed circuit technology and offer compact size and low-cost integration. An effective way to obtain tight coupling within fabrication limits is to use a defected ground structure (DGS) or an aperture compensation technique, which can realize strong coupling compared with the coupled-line structure. This process modifies the characteristics of the transmission line, such as the line capacitance and inductance; therefore, the DGS is usually used to improve the passband and stopband characteristics, and several methods have been developed using different forms of DGSs. Moreover, reducing size is also the main challenge of filter design for microstrip filters. Several types of resonators have been designed to overcome these problems, such as the stepped-impedance resonator, the meander resonator, and the slow-wave open-loop resonator. Nevertheless, miniaturized resonators lead to a reduced filter size, but do not always improve the spurious response. In recent years, several filter applications at microwave frequencies have been developed by means of metamaterials (MTMs) based on sub-wavelength resonators such as split-ring resonators (SRRs). Because of the small electrical size of the unit cells, the metamaterial-based resonator (MBR) offers a great solution for the design of miniaturized microwave resonators. However, MBRs have usually been used for notch-band and narrow bandpass filters and, furthermore, there is still a vast need for research on the miniaturization of wide-band transceiver components using MBRs [18]. A special sub-class of metamaterials, with both effective parameters negative in a certain frequency band, are the so-called double-negative (DNG) or left-handed (LH) metamaterials. The first theoretical speculation on the existence of DNG media and the prediction of their fundamental properties was made by the Russian physicist Victor Veselago in 1967 [19]. Veselago anticipated the unique electromagnetic properties of DNG media and showed that they support propagating modes of electromagnetic waves, but exhibit a negative propagation constant: the energy still travels forward from the source, while the wave fronts travel toward the source. Consequently, the electric field vector, the magnetic field vector and the wave vector of an electromagnetic wave in a double-negative material form a left-handed triad. Therefore, LH materials are characterized by antiparallel phase and group velocities and exhibit a negative refractive index (NRI). The constitutive parameters are the effective permittivity \varepsilon_{eff} and permeability \mu_{eff}, which are related to the refractive index by

n = \sqrt{\varepsilon_{eff}\, \mu_{eff}}.   (1)

2. Designing an Ultra-Wideband Filter from a Metamaterial Resonator
The design of the basic metamaterial resonator structure is not separate from the design of a microstrip patch antenna on a dielectric substrate mounted on a ground plane, intended to filter or detect the right signals falling upon it over some frequency range. As has been indicated in the literature [20, 21], electromagnetic metamaterials are composite arrays of resonators of sub-wavelength size designed to have specific microwave or optical properties; the properties of metamaterial resonators depend on the geometry of the individual unit cell. Many unique effects, such as negative refractive index, sub-wavelength imaging, cloaking, and perfect absorption, are difficult to achieve with natural materials, but with careful design of noble metals as metamaterials with sub-wavelength dimensions they have been obtained at specified frequencies.
The effective dielectric constant \varepsilon_e used in determining the characteristic impedance Z_0, the guided wavelength \lambda_g, and the feeding length \lambda_g/4 during the design of a microstrip patch antenna, as proposed by L. Yang et al. [22], M. J. Roo-Ons et al. [23], David M. Pozar [24] and K. Tripathi et al. [25], is given by

\varepsilon_e = \frac{\varepsilon_r + 1}{2} + \frac{\varepsilon_r - 1}{2} \left[ 1 + 12 \frac{d}{W} \right]^{-0.5},   (2)

where \varepsilon_r is the relative dielectric constant, W denotes the width of the strip, and d is the thickness of the substrate on which the strip is mounted. The characteristic impedance Z_0 is related to these parameters by the following two equations. For a given W/d \leq 1,

Z_0 = \frac{60}{\sqrt{\varepsilon_e}} \ln\!\left( \frac{8d}{W} + \frac{W}{4d} \right),   (3)

while for a given W/d \geq 1,

Z_0 = \frac{120\pi}{\sqrt{\varepsilon_e}\, \big[ W/d + 1.393 + 0.667 \ln(W/d + 1.444) \big]}.   (4)

For a given characteristic impedance Z_0 and dielectric constant \varepsilon_r, the ratio W/d is calculated by the following equations. For W/d < 2,

\frac{W}{d} = \frac{8 e^A}{e^{2A} - 2},   (5)

and for W/d > 2,

\frac{W}{d} = \frac{2}{\pi} \left\{ B - 1 - \ln(2B - 1) + \frac{\varepsilon_r - 1}{2\varepsilon_r} \left[ \ln(B - 1) + 0.39 - \frac{0.61}{\varepsilon_r} \right] \right\},   (6)

where

A = \frac{Z_0}{60} \sqrt{\frac{\varepsilon_r + 1}{2}} + \frac{\varepsilon_r - 1}{\varepsilon_r + 1} \left( 0.23 + \frac{0.11}{\varepsilon_r} \right), \qquad B = \frac{377\pi}{2 Z_0 \sqrt{\varepsilon_r}}.
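As a minimal sketch of how equations (2)-(4) are used in practice (illustrative only; W and d follow the 2 mm strip width and 2.5 mm substrate thickness quoted in section 4):

```python
import math

def eps_eff(eps_r, W, d):
    """Effective dielectric constant, equation (2)."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * d / W) ** -0.5

def z0(eps_r, W, d):
    """Characteristic impedance, equations (3)-(4)."""
    e = eps_eff(eps_r, W, d)
    u = W / d
    if u <= 1:
        return 60 / math.sqrt(e) * math.log(8 / u + u / 4)
    return 120 * math.pi / (math.sqrt(e) * (u + 1.393 + 0.667 * math.log(u + 1.444)))

print(f"Z0 = {z0(4.5, W=2.0, d=2.5):.1f} ohm")   # Arlon AR 450, eps_r = 4.5
```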
3. Drude-Lorentz Model of Harmonic Oscillator
This model is a system of non-homogeneous Maxwell differential wave equations that helps to visualize the scattering and dispersion of electron gases when they are exposed to an incident electric field E and magnetic field H. The electromagnetic wave equations are summarized for the purpose of this paper as

\nabla \times E = -\frac{\partial B}{\partial t}, \qquad \nabla \times H = \frac{\partial D}{\partial t},
\qquad B = \mu_0 (H + M), \qquad D = \varepsilon_0 \varepsilon_r E = \varepsilon_0 E + P, \qquad P = n p = n e r,   (7)
where the electric flux density D and the magnetic flux density B emerge in response to the corresponding electric field E and magnetic field H propagating in either a conducting or a non-conducting medium; \varepsilon_0 is the permittivity of free space, \varepsilon_r is the relative permittivity of the medium, and \mu_0 is the permeability of free space. P is the polarization, n is the number of electric dipole moments p per unit volume, r is the position vector of the electron, and M is the magnetization. The motion of an electron of effective mass m_{eff} and charge magnitude e, either free or bound in the medium, under the external electromagnetic field is governed by the classical Newton's second law of motion, as explained by W. Cai and V. Shalaev [26], R. Fowles [27], and P. W. Milonni and J. H. Eberly [28, 29]:

m_{eff} \frac{\partial^2 r}{\partial t^2} + m_{eff} \Gamma_e \frac{\partial r}{\partial t} = -e E,   (8)
where \Gamma_e is the electric damping or collision frequency in the Drude-Lorentz model.
The applied electric field varies harmonically with time according to the usual factor e^{-i\omega t}. Assuming that the motion of the electron has the same harmonic time dependence, and combining eq. (8) with the last sub-equation of eq. (7), the following Drude-Lorentz relations are extracted:

\frac{\partial^2 P}{\partial t^2} + \Gamma_e \frac{\partial P}{\partial t} = \varepsilon_0 \omega_p^2 E,   (9)

\varepsilon_{eff}(\omega) = 1 - \frac{\omega_p^2}{\omega(\omega + i\Gamma_e)},   (10)
where \omega_p is named the plasma frequency, at which the density of the electron gas oscillates, and \varepsilon_{eff}(\omega) is the effective permittivity in the frequency domain. Consider conducting wires of radius r, separated by a (the dimension of the unit cell, also called the lattice constant), immersed in an external electric field that drifts the free charges with velocity v_d. Estimating the plasma frequency of the wire medium depends on estimating the effective mass and the effective electron density in the metal. The effective electron density in wires of radius r, arrayed in the dielectric medium and separated from each other by a distance a, is given by

N_{eff} = N \frac{\pi r^2}{a^2},   (11)

where N is the actual electron density in the pure metal. The effective mass resulting from the self-inductance is related to the magnitude of the magnetic vector potential A and the drift velocity v_d by

m_{eff} = \frac{e A}{v_d}.   (12)
According to Ampere's law, the current flow gives rise to an azimuthal magnetic field H around the wire; at radius R,

H(R) = \frac{\pi r^2 N e v_d}{2\pi R}, \qquad \mu_0 H = \nabla \times A,   (13)

where the magnitude of the vector potential is chosen, following Hayt [30] and W. Shalaev [31], as

A(r) = \frac{\mu_0 e N v_d r^2}{2} \ln(a/r).   (14)

Combining eq. (12) and eq. (14), we obtain the effective mass of the electrons in the medium as

m_{eff} = \frac{\mu_0 e^2 N r^2}{2} \ln(a/r).   (15)
Now, with both N_{eff} and m_{eff} available, the plasma frequency of the medium is presented as

\omega_p^2 = \frac{N_{eff}\, e^2}{\varepsilon_0\, m_{eff}} = \frac{2\pi c^2}{a^2 \ln(a/r)},   (16)

where c = 1/\sqrt{\varepsilon_0 \mu_0} is the speed of light in free space and the assumption r \ll a is used.
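For orientation, equation (16) can be evaluated directly. In the sketch below (illustrative; a is taken as the 8 mm cell dimension of section 4 and r is approximated as half the 2 mm strip width, as section 6 does), the simple formula gives a plasma frequency of roughly 10 GHz; the paper itself reports 9.08 GHz from the modified expression (39) with its tabulated parameters.

```python
import math

c = 3.0e8       # speed of light in free space (m/s)
a = 8.0e-3      # unit-cell (lattice) dimension from section 4 (m)
r = 1.0e-3      # effective wire radius ~ half the 2 mm strip width (m)

# Equation (16): omega_p^2 = 2*pi*c^2 / (a^2 * ln(a/r))
omega_p = math.sqrt(2 * math.pi * c**2 / (a**2 * math.log(a / r)))
print(f"f_p = {omega_p / (2 * math.pi) / 1e9:.1f} GHz")
```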
The bound current I_b circulating about a differential area dS establishes a magnetic dipole moment defined as

m = I_b\, dS.   (17)
The magnetization M, the magnetic dipole moment per unit volume established by the bound charges (orbital electrons, electron spin, and nuclear spin), is defined as

M = \lim_{\Delta v \to 0} \frac{1}{\Delta v} \sum_{i=1}^{n \Delta v} m_i.   (18)
Now a metallic square split ring resonator (SRR) of side length \ell, ring width w, thickness t and split gap g is immersed in a time-varying harmonic external magnetic field H perpendicular to the plane of the SRR, as shown in figure 1(a); its simplified equivalent circuit with capacitance C, resistance R and inductance L is shown in figure 1(b). According to Faraday's law, an electromotive force is induced, since a harmonically changing magnetic field threads the plane of the rectangular ring; later this geometry is adapted to figure 2 for the simulation work. Applying Kirchhoff's circuit rule and Faraday's law to figure 1, the following differential equation is obtained:

L \frac{d^2 I_b}{dt^2} + R \frac{dI_b}{dt} + \frac{I_b}{C} = -\frac{d^2 \phi_b}{dt^2},   (19)

Figure 1: (a) A single square split ring resonator (SRR) with the magnetic field H oscillating perpendicular to the SRR plane. (b) The equivalent circuit of the single square SRR.
where I_b is the bound (Amperian) current and \phi_b is the magnetic flux passing through the surface element dS; \phi_b is written mathematically as

\phi_b = \iint_S B \cdot dS = \mu_0 \iint_S H \cdot dS.   (20)
The associated harmonic variations of the bound magnetic current I_b, the magnetization M and the magnetic field intensity H, together with the filling factor F, the magnetic resonance frequency \omega_m, the magnetic damping frequency \Gamma_m, the magnetic susceptibility \chi_m, the relative permeability \mu_r, and the volume V occupied by N unit cells, listed below, are used to derive the frequency-dependent effective magnetic permeability \mu_{eff}(\omega) of eq. (22) from equations (17) to (20):

I_b = I_0 e^{-i\omega t}, \qquad M = M_0 e^{-i\omega t}, \qquad H = H_0 e^{-i\omega t},
\qquad m = I_b \ell^2, \qquad M = \frac{N}{V} m = \chi_m H,
\qquad \omega_m^2 = \frac{1}{LC}, \qquad \Gamma_m = \frac{R}{L}, \qquad F = \frac{\mu_0 N \ell^4}{L V}, \qquad \mu_r = 1 + \chi_m, \qquad \mu_{eff} = \mu_0 \mu_r,   (21)

so that

\mu_{eff}(\omega) = 1 - \frac{F \omega^2}{\omega^2 - \omega_m^2 + i\Gamma_m \omega},   (22)
with the filling factor F (0 < F \leq 1) measuring the portion of the unit-cell volume occupied by the split rings: the larger F, the stronger the magnetic resonance of the system [32, 33].
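A short numerical sketch of the Lorentz-type response (22) follows. Only F near 0.2 comes from the paper (section 7); the resonance and damping values are assumptions chosen for illustration, not the design values.

```python
import numpy as np

F = 0.2            # filling factor estimated in section 7
f_m = 9.0e9        # magnetic resonance frequency (assumed, ~9 GHz as in section 4)
gamma_m = 2.0e8    # magnetic damping rate (assumed)

f = np.linspace(3.1e9, 10.6e9, 500)      # UWB band
w, w_m = 2 * np.pi * f, 2 * np.pi * f_m

# Equation (22)
mu_eff = 1 - F * w**2 / (w**2 - w_m**2 + 1j * gamma_m * w)

neg = f[mu_eff.real < 0]
print(f"Re(mu_eff) < 0 from {neg.min()/1e9:.2f} to {neg.max()/1e9:.2f} GHz")
```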
4. Developing UWB Bandpass Filter
An 8 x 8 mm² PEC plate of thickness 1 mm is prepared. The plate is etched in the form of two square (6 x 6 mm² and 4.88 x 4.88 mm²) complementary split ring resonators (CSRR) so that a negative permeability (\mu < 0) is obtained in the medium. The depth and the width of the etched surfaces are 0.28 mm. A 2.5 mm thick Arlon AR 450 (\varepsilon_r = 4.5) substrate is loaded on the non-etched surface as shown in figure 2, and a 2 mm wide microstrip line is mounted upon the substrate to obtain a negative permittivity (\varepsilon_r < 0) in the medium. The composite structure of the DGS-CSRR and the microstrip line forms a resonator belonging to a new class of metamaterial. This metamaterial can simultaneously have negative permittivity and permeability near the resonance frequency when the sum-of-impedance-zero condition in the circuit is satisfied. In the equivalent circuit model of figure 2, etching the split-ring defective pattern in the ground plane adds a parallel resonant circuit to the equivalent circuit, but L_2 has little contribution to the overall effect of the system [34, 35]; the sum-of-impedance-zero condition is written as
Z_1 + Z_2 = 0,   (23)

where

Z_1 = \left( \frac{1}{j\omega L_T} + j\omega C_{eq} \right)^{-1} = \frac{j\omega L_T}{1 - \omega^2 L_T C_{eq}},   (24)

and

Z_2 = \frac{1}{j\omega C} = -\frac{j}{\omega C}.   (25)

The resonance frequency of the circuit is

f_r = \frac{1}{2\pi \sqrt{L_T (C_{eq} + C)}}.   (26)
5. Theoretical Analysis of UWB Resonator
When an electromagnetic wave is launched into the coplanar waveguide (CPW), propagating along the y direction, the magnetic field directed along the z axis interacts with the DGS-CSRR placed on the back of the CPW. This arrangement produces an induced electromotive force inside the ground plane. Since the ground plane is defected, the flow of induced current is disturbed inside the plane. The disturbance changes the characteristics of the transmission line, such as the equivalent capacitance and inductance, yielding the slow-wave effect and a band-stop property. If the capacitance C contributed by the strip of wire is ignored or suppressed, the resonance frequency f_r in eq. (26) can be rewritten as

f_r = \frac{1}{2\pi \sqrt{L_T C_{eq}}},   (27)
where L_T is the total inductance of the CSRR structure and C_{eq} is the total equivalent capacitance of the structure. This total equivalent capacitance C_{eq} can be evaluated as

C_{eq} = \frac{(C_1 + C_{g1})(C_2 + C_{g2})}{(C_1 + C_{g1}) + (C_2 + C_{g2})},   (28)
where C_1 and C_2 are the capacitances of the upper and lower half portions of the CSRR about an imaginary line passing through the centers of the split gaps g_1 and g_2. The split gaps are incorporated in the model as gap capacitances C_{g1} and C_{g2}. From figure 2, all gaps are identical (C_{g1} = C_{g2} = C_g), where g_1 is the gap of the smaller split ring and g_2 is the gap of the larger split ring. These gaps also affect the total inductance L_T of the structure of the model in figure 2. Since the split gaps have identical dimensions, g_1 = g_2 = g, the gap capacitance is denoted as C_{g1} = C_{g2} = C_g and the series capacitance as C_1 = C_2 = C_0; therefore eq. (28) is modified to

C_{eq} = \frac{C_0 + C_g}{2}.   (29)
Considering the metal thickness t of the strip conductors, the gap capacitances C_{g1} and C_{g2} are represented as

C_{g1} = C_{g2} = C_g = \frac{\varepsilon_0 w t}{g},   (30)

where w is the width of the ring and t is the depth of the etched surface forming the rings; \varepsilon_0 = 8.85 \times 10^{-12} F/m. The distributed capacitances C_1 and C_2 are also functions of the split dimension g_1 = g_2 = g and the average ring dimension a_{avg}:

C_1 = C_2 = (4 a_{avg} - g)\, C_{pul},   (31)

where

a_{avg} = a_{ext} - w - \frac{r}{2},   (32)

r is the gap between the two rings, a_{ext} is the distance from the center to the outer ring, and C_{pul} is the capacitance per unit length, calculated as

C_{pul} = \frac{\sqrt{\varepsilon_r}}{c Z_0}.   (33)
Therefore, substituting the values of C_0 and C_g into eq. (29), we obtain the equivalent capacitance

C_{eq} = \left( 2 a_{avg} - \frac{g}{2} \right) C_{pul} + \frac{\varepsilon_0 w t}{2 g}.   (34)
Hence, the resonance frequency of the squared split ring resonator is derived as

f_r = \frac{1}{2\pi \sqrt{L_T C_{eq}}} = \frac{1}{2\pi} \left[ L_T \left( \left( 2 a_{avg} - \frac{g}{2} \right) C_{pul} + \frac{\varepsilon_0 w t}{2 g} \right) \right]^{-1/2}.   (35)
A simplified formula for the total equivalent inductance L_T of a wire of rectangular cross-section, having finite length \ell and thickness t, is

L_T = 0.0002\, \ell \left[ 2.303 \log_{10}\!\left( \frac{4\ell}{t} \right) - \vartheta \right] \mu H,   (36)

where the constant \vartheta = 2.853 for a wire loop of square geometry, and the length \ell and thickness t are in mm. The evaluation of the wire length is straightforward: \ell = 8 a_{ext} - g for the square geometry [37]. The parameters and the numerical calculation for the DGS-CSRR unit cell, based on figures 2 and 3, are displayed in Table 1.
Table 1: Settled and extracted values of the parameters used in the subsequent simulations.
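Because Table 1 is reproduced as an image in the original and its entries could not be recovered here, the sketch below only shows how equations (30)-(36) and (27) chain together. Every dimension in it is a placeholder assumption of the right order of magnitude, not a value from the paper's table, so the printed frequency is illustrative rather than the paper's result.

```python
import math

eps0 = 8.85e-12          # F/m
c = 3.0e8                # m/s
eps_r, Z0 = 4.5, 50.0    # substrate permittivity; assumed 50-ohm line

# Placeholder geometry in metres (assumptions, not the paper's Table 1 values)
w = 0.28e-3              # ring width
t = 0.28e-3              # etch depth of the rings
g = 0.28e-3              # split-gap width (assumed equal to the ring width)
r = 0.28e-3              # spacing between the two rings (assumed)
a_ext = 3.0e-3           # centre-to-outer-ring distance (assumed)

C_g = eps0 * w * t / g                            # equation (30)
a_avg = a_ext - w - r / 2                         # equation (32)
C_pul = math.sqrt(eps_r) / (c * Z0)               # equation (33)
C_eq = (2 * a_avg - g / 2) * C_pul + C_g / 2      # equation (34)

# Equation (36): lengths in mm, inductance in microhenries
ell_mm = (8 * a_ext - g) * 1e3
L_T = 0.0002 * ell_mm * (2.303 * math.log10(4 * ell_mm / (t * 1e3)) - 2.853) * 1e-6

f_r = 1 / (2 * math.pi * math.sqrt(L_T * C_eq))   # equation (27)
print(f"f_r = {f_r / 1e9:.2f} GHz")
```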


Compact UWB bandpass filters with low insertion loss, compact size, high selectivity, wide bandwidth and good stop-band rejection are required for the next generation of mobile and satellite communication systems. This paper reports on the development of UWB bandpass filters operating in the frequency range of 3.1 GHz to 10.6 GHz, obtained by designing a defected ground structure complementary split ring resonator (DGS-CSRR) in the PEC ground plane so as to obtain the stop-band characteristic, as in figures 2 and 3. The parameters used in the simulation task are listed in Table 1. The limiting sizes of the two waveguide ports are also adjusted, as in figure 3, for better performance. The single DGS-CSRR with a metallic strip on the Arlon 450 substrate is excited by the two ports over the UWB frequency range, as shown in figure 4. Verifying the filtering property of the DGS-CSRR from the result of figure 5, the cut-off frequencies of the transmission curve (S21) at -10 dB lie between 7.62 and 10.01 GHz. The transmission bandwidth within this region is therefore 2.48 GHz about the central frequency of 9.04 GHz, with the insertion loss less than 1 dB. Two weak signals, of -41.75 dB at 8.69 GHz and -50.52 dB at 9.53 GHz, embedded in the transmission bandwidth are rejected by the UWB bandpass filter so as not to affect the performance. The central frequency of the bandwidth obtained from the simulation and the resonance frequency calculated numerically in Table 1 deviate from each other by 0.5%. The UWB bandpass filter has a fractional transmission bandwidth of about 27.46%, as shown in figure 5.

Figure 4: Design and modelling of the single DGS-CSRR with metallic strip in the CST Microwave Studio simulation within the UWB frequency range. From the boundary condition set-up, the electric field is directed along the vertical, while the magnetic field is directed perpendicular to the page.

Figure 5: Frequency response of a single DGS-CSRR with a metallic strip mounted on the Arlon 450 (\varepsilon_r = 4.5) substrate.
6. Extraction of Negative Permittivity from DGS-CSRR
From section 3, the frequency-dependent permittivity of a metamaterial consisting of arrays of wires is given by

\varepsilon_{eff}(\omega) = 1 - \frac{\omega_p^2}{\omega(\omega + i\Gamma_e)},   (37)

where \omega_p is the characteristic plasma frequency and \Gamma_e is the collision frequency in the Drude-Lorentz model. The plasma frequency of the wire medium depends on the number of electrons per unit volume, the effective electron density N_{eff}, and on the effective mass m_{eff} of the electron due to the self-inductance. The empirical formula for the plasma frequency is

\omega_p^2 = \frac{N_{eff}\, e^2}{\varepsilon_0\, m_{eff}} = \frac{2\pi c^2}{a^2 \ln(a/r)}.   (38)

A modified expression for the reduced plasma frequency may then be written as [36, 38]

\omega_p^2 = \frac{2\pi c^2}{a^2 \ln\!\big( a^2 / [4 r (a - r)] \big)},   (39)
where a is the dimension of the DGS-CSRR unit cell and r is the radius of the wire; here we approximate half of the width W of the metallic strip as the radius when extracting the plasma frequency. From the modified equation (39), the electric plasma frequency f_p of the metallic strip is computed as 9.08 GHz. The electric damping frequency \Gamma_e, as extracted from the Drude-Lorentz model discussed before, is

\Gamma_e = \frac{\varepsilon_0 a^2 \omega_p^2}{\pi r^2 \sigma}.   (40)

As silver is used for the metallic strip, the conductivity is \sigma = 6.3 \times 10^7 S/m. Hence the electric damping frequency in terms of the linear plasma frequency f_p is obtained from eq. (40) as

\Gamma_e = \frac{4\pi \varepsilon_0 a^2 f_p^2}{r^2 \sigma}.   (41)
Therefore, the electric damping frequency is computed as 104.2 MHz. From figure 6, we note that negative permittivity (\varepsilon_{eff} < 0) occurs below the plasma frequency. Above the plasma frequency, the effective permittivity is positive and the medium acts as a transparent dielectric. This onset of propagation has been identified with an effective plasma frequency dependent on the wire radius and spacing, with the effective dielectric function following the form of eq. (37). A reduction in \omega_p can be achieved by restricting the current density to the thin wires, which also increases the self-inductance per unit length L [39]. When the conductivity of the wires is large, the plasma frequency has been shown to have the general form

\omega_p^2 = \frac{1}{\varepsilon_0 a^2 L}.

Combining the DGS-CSRR medium, which has a frequency band gap due to a negative permeability, with a thin-wire medium produces a resultant left-handed material in the region where both \varepsilon_{eff} and \mu_{eff} have negative values. Figure 6 demonstrates that the metallic strip, which gives negative permittivity below the plasma frequency and exhibits high-pass characteristics, in combination with the DGS-CSRR, which gives negative permeability and exhibits resonant characteristics, can be used to design a plasmonic metamaterial [40].

Figure 6: Frequency response of the real and imaginary parts of the effective permittivity within the UWB frequency range for the DGS-CSRR with metallic strip loaded on the Arlon 450 substrate.

7. Extraction of Negative Permeability from DGS-CSRR
By combining split ring resonators into a periodic medium such that there is strong (magnetic) coupling between the resonators, unique properties emerge from the composite. In particular, because these resonators respond to the incident magnetic field, the medium can be viewed as having an effective permeability \mu_{eff}. From section 3, by combining Kirchhoff's circuit rule and Faraday's law for a harmonic time-varying magnetic field perpendicular to the plane of the split ring resonator (SRR) indicated in figure 1, the frequency-dependent magnetic permeability \mu_{eff} of the metamaterial is derived as

\mu_{eff}(\omega) = 1 - \frac{F \omega^2}{\omega^2 - \omega_m^2 + i\Gamma_m \omega},   (42)

where F (0 < F \leq 1) is the filling factor, i.e. the fraction of the unit-cell volume occupied by the DGS-CSRR. It is advisable to keep F small to avoid strong magnetic interaction or coupling among adjacent unit cells. If we ignore the material loss, \Gamma_m can be set to zero, and eq. (42) can be rewritten as

\mu_{eff}(\omega) = 1 - \frac{F \omega^2}{\omega^2 - \omega_m^2}.   (43)
The filling factor F can be approximated as

F = \frac{A h}{V_{cell}},   (44)

where A is the area occupied by the DGS-CSRR, h is the thickness of the ring, and V_{cell} is the volume of the unit cell; for this design, F is approximated as 0.2. When the plates of the DGS-CSRR are made of good conductors (i.e., the loss is small), the imaginary part of the effective permeability shown in figure 7 is almost zero and can be ignored. The behaviour of its real part versus linear frequency is plotted in figure 7, where two critical frequencies are seen: f_r, the frequency where the effective permeability diverges, is called the resonant frequency, and the frequency where the effective permeability crosses the \mu_{eff} = 0 axis is called the magnetic plasma frequency f_{mn} of the DGS-CSRRs. As is clearly seen from this figure, the effective permeability \mu_{eff} exhibits asymptotic behaviour in its frequency response, taking extreme values around the resonant frequency. It is highly positive near the lower-frequency side of f_r, whereas, most interestingly and strikingly, it is highly negative near the higher-frequency side of f_r. Throughout a narrow frequency band extending from f_r to f_{mn}, the effective permeability possesses negative values; it becomes less negative as the frequency increases towards f_{mn}, and outside this negative region the effective permeability (relative to that of vacuum) becomes positive and quickly converges to unity.

Figure 7: Frequency response of the real and imaginary parts of the effective permeability within the UWB frequency range for the DGS-CSRR with metallic strip loaded on the Arlon 450 substrate.
8. Negative Refractive Index Materials (NRM)

One of the main objectives at the very beginning of metamaterial research was to construct and verify negative refractive index materials (NRM). Though NRM have not yet been reported to occur naturally, there is no theoretical obstacle which would prevent the existence of such materials. In the paper published in 1968 [41], Veselago predicted that electromagnetic plane waves in a medium having simultaneously negative permittivity and permeability would propagate in a direction opposite to that of the flow of energy. This result follows not from the wave equation, which remains unchanged in the absence of sources, but rather from the individual Maxwell curl equations. The curl equation for the electric field provides an unambiguous "right-hand" (RH) rule between the directions of the electric field E, the magnetic induction B, and the direction of the propagation vector k. The direction of energy flow, however, given by E x H, forms a right-handed system only when the permeability is greater than zero. Where the permeability is negative, the direction of propagation is reversed with respect to the direction of energy flow, the vectors E, H, and k forming a left-handed system; thus, Veselago referred to such materials as "left-handed" (LH).
Veselago went on to argue, using steady-state solutions to Maxwell's equations, that a LH medium has a negative refractive index (n). While there are many examples of systems that can exhibit reversal of phase and group velocities, with associated unusual wave-propagation phenomena, negative group velocity bands in photonic crystals being an example, we show that the designation of a negative refractive index is unique to LH systems. An isotropic negative-index condition has the important property that it exactly reverses the propagation paths of rays within it; thus, LH materials have the potential to form highly efficient low-reflectance surfaces by exactly canceling the scattering properties of other materials.
The absence of naturally occurring materials with negative permeability made further discussion of LH media academic until recently, when a composite medium was demonstrated in which, over a finite frequency band, both the effective permittivity \varepsilon_{eff}(\omega) and the effective permeability \mu_{eff}(\omega) are simultaneously less than zero near the resonance frequency, as shown in figure 8. The composite of the DGS-CSRR with a metallic strip upon the dielectric not only exhibits this extraordinary property, which does not exist naturally, but is also made of elements whose size and spacing are much smaller than the wavelengths in the frequency range of interest. Thus, the composite medium can be considered homogeneous at the wavelengths under consideration. With this practical demonstration, it is now relevant to discuss in more detail the phenomena associated with wave propagation in LH materials, as both novel devices and interesting physics may result.

Figure 8: The frequency response of the negative refractive index within the UWB frequency range of the DGS-CSRR with metallic strip loaded on an Arlon 450 substrate.
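When both material parameters are negative, the refractive index n = sqrt(εμ) must be assigned its negative branch; the usual bookkeeping is to require Im(n) ≥ 0 for a passive medium, which forces Re(n) < 0 in the double-negative band. A minimal sketch of this sign convention, using simple scalar values rather than the retrieved parameters of Figure 8:

```python
import cmath

def refractive_index(eps_r, mu_r):
    """Refractive index with the causal branch: Im(n) >= 0 for a passive
    medium. In the lossless limit a double-negative medium still takes
    the negative root, so Re(n) < 0 when Re(eps) and Re(mu) are both < 0."""
    n = cmath.sqrt(eps_r * mu_r)
    if n.imag < 0 or (n.imag == 0 and eps_r.real < 0 and mu_r.real < 0):
        n = -n
    return n

print(refractive_index(-2.0, -1.0))              # ~(-1.414+0j): negative index
print(refractive_index(-2 + 0.01j, -1 + 0.02j))  # slightly lossy double-negative
print(refractive_index(4.0, 1.0))                # (2+0j): ordinary dielectric
```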
ACKNOWLEDGMENT
I would like to thank Professor Guoping Zhang and Dr. Yunhu Wu of the Lab of Optoelectronics and Information Engineering, College of Physical Science and Technology, Central China Normal University (CCNU), for professional assistance and consultation.

Conclusions
Ultra-wideband (UWB) communication techniques have attracted great interest in both academia and industry in the past few years for applications in short-range wireless mobile systems. This is due to the potential advantages of UWB transmission, such as low power, high rate, immunity to multipath propagation, less complex transceiver hardware, and low interference. However, tremendous R&D efforts are required to face various technical challenges in developing UWB wireless systems, including UWB channel characterization, transceiver design, coexistence and interworking with other narrow-band wireless systems, and design of the link and network layers to benefit from UWB transmission characteristics.
Metamaterials, and especially left-handed metamaterials, present a new paradigm in modern science, which allows the design of novel microwave components with advantageous characteristics and small dimensions. UWB technology, owing to its attractive characteristics such as low complexity, low cost, and extremely high data rates, has been widely used in communication systems. As one of the main issues of UWB systems, the UWB bandpass filter or antenna has received increased attention because of its wide impedance bandwidth, simple structure, and omnidirectional radiation pattern. UWB communication systems use the frequency band 3.1-10.6 GHz, which was approved by the Federal Communications Commission (FCC). From the simulation results, the UWB bandpass filter has a transmission bandwidth of about 2.48 GHz and a fractional bandwidth of about 27.46%, satisfying the minimum requirement of the FCC proposal. All these results are achieved through the widely used method of generating a band-notch function by etching the ground plane to form a defected ground structure with a complementary split ring resonator (DGS-CSRR). This DGS-CSRR, forming a metamaterial, can simultaneously have negative permittivity and permeability near the resonance frequency when the zero-impedance condition in the circuit is satisfied.
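As a quick consistency check on the quoted filter figures (a sketch assuming the standard definition of fractional bandwidth as bandwidth divided by centre frequency):

```python
# Consistency check of the quoted bandpass-filter figures.
bw = 2.48e9          # transmission bandwidth [Hz], from the text
fbw = 0.2746         # fractional bandwidth, from the text
f_center = bw / fbw  # implied centre frequency
print(f"implied centre frequency = {f_center / 1e9:.2f} GHz")  # ~9.03 GHz
assert 3.1e9 <= f_center <= 10.6e9  # lies inside the FCC UWB band
```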
REFERENCES:
[1] V. G. Veselago, "The electrodynamics of substances with simultaneously negative values of ε and μ," Sov. Phys. Usp. 10, 509-514 (1968)
[2] R. A. Shelby, D. R. Smith, and S. Schultz, "Experimental verification of a negative index of refraction," Science 292, 77-79 (2001)
[3] J. B. Pendry, "Negative refraction makes a perfect lens," Phys. Rev. Lett. 85, 3966-3969 (2000)
[4] J. B. Pendry, A. J. Holden, W. J. Stewart, and I. Youngs, "Extremely low frequency plasmons in metallic mesostructures," Phys. Rev. Lett. 76, 4773-4776 (1996)
[5] J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, "Magnetism from conductors and enhanced nonlinear phenomena,"
IEEE Trans. Microwave Theory Tech. 47, 2075-2084 (1999)
[6] S. Linden, C. Enkrich, M. Wegener, J. Zhou, T. Koschny, and C. M. Soukoulis, "Magnetic response of metamaterials at 100
terahertz,"Science 306, 1351-1353 (2004).
[7] G. Dolling, C. Enkrich, M. Wegener, J. F. Zhou, C. M. Soukoulis, and S. Linden, "Cut-wire pairs and plate pairs as magnetic
atoms for optical metamaterials," Opt. Lett.30, 3198-3200 (2005).
[8] Q. Li, Z.-J. Li, C. H. Liang, and B. Wu, "UWB bandpass filter with notched band using DSRR," Electronics Letters, Vol. 46, No. 10, 13th May 2010
[9] Weihua Zhuang, Xuemin (Sherman) Shen, and Qi Bi, "Ultra-wideband wireless communications," Wireless Communications and Mobile Computing, (2003), 3:663-685
[10] Mushtaq A. Alqaisy, Jawad K. Ali, Chandan K. Chakrabarty, and Goh C. Hock, "Design of a Compact Dual-mode Dual-band Microstrip Bandpass Filter Based on Semi-fractal CSRR," Progress In Electromagnetics Research Symposium Proceedings, Stockholm, Sweden, Aug. 12-15, 2013, 699
[11] X. Q. Chen, R. Li, S. J. Shi, Q. Wang, L. Xu, andX. W. Shi, "A Novel Low Pass Filter Using Elliptic Shape Defected Ground
Structure", Progress In Electromagnetics Research B, Vol. 9, 117-126, 2008
[12] R. Movahedinia and M. N. Azarmanesh, "A Novel Planar UWB Monopole Antenna with Variable Frequency Band-Notch Function Based on Etched Slot-Type ELC on the Patch," Microwave and Optical Technology Letters, Vol. 52, No. 1, January 2010
[13] M. Shobeyri, M. H. VadjedSamiei, "Compact Ultra-Wideband Bandpass Filter With Defected Ground Structure," Progress In
Electromagnetics Research Letters, Vol. 4, 25-31, 2008
[14] Pendry, J. B., A. J. Holden, D. J. Robbins, et al., " Magnetism from conductors and enhanced nonlinear phenomena," IEEE Trans.
Microwave Theory Tech., Vol. 47, No. 11, (1999) 2075-2084
[15] Gay-Balmaz, P. and O. J. F. Martin, "Electromagnetic resonances in individual and coupled split-ring resonators," Journal of Applied Physics, Vol. 92, No. 5, (2002) 2929-2936
[16] Bonache, J., F. Martin, F. Falcone, et al., "Application of complementary split-ring resonators to the design of compact narrow band-pass structures in microstrip technology," Microwave and Optical Technology Letters, Vol. 46, No. 5, (2005) 508-512
[17] Jigar M. Patel, Shobhit K. Patel, and Falgun N. Thakkar, "Defected Ground Structure Multiband Microstrip Patch Antenna using Complementary Split Ring Resonator," International Journal of Emerging Trends in Electrical and Electronics, Vol. 3, Issue 2, (May 2013) 14-19
[18] Sarawuth Chaimool and Prayoot Akkaraekthalin, "Miniaturized Wideband Bandpass Filter with Wide Stopband using Metamaterial-based Resonator and Defected Ground Structure," Radioengineering, Vol. 21, No. 2, (June 2012), 611
[19] V. Veselago, "The electrodynamics of substances with simultaneously negative values of ε and μ," Soviet Physics Uspekhi, Vol. 92, No. 3, 517-526, (1967)
[20] Govind Dayal and S. Anantha Ramakrishna, "Design of multi-band metamaterial perfect absorbers with stacked metal-dielectric disks," J. Opt. 15 (2013) 055106 (7pp)
[21] Muamer Kadic, Tiemo Buckmann, Robert Schittny, and Martin Wegener, "Metamaterials beyond electromagnetism," Rep. Prog. Phys. 76 (2013) 126501 (34pp)
[22] L. Yang, Xueshun Shi, Kunfeng Chen, Kai Fu, and Baoshun Zhang, "Analysis of photonic crystal and multi-frequency terahertz microstrip patch antenna," Physica B 431 (2013) 11-14
[23] M. J. Roo-ons, S. V. Shyun, M. Secredynski, and M. J. Amman, "Influence of solar heating on the performance of integrated solar cell microstrip patch antennas," Solar Energy 84 (2010) 1619-1627
[24] D. M. Pozar, "Microwave Engineering," 4th Edition, John Wiley and Sons, Inc., University of Massachusetts at Amherst, USA, 2012, (141-150)
[25] K. Tripathi, S. Srivastava, and H. P. Sinha, "Design and Analysis of Swastik Shape Microstrip Patch Antenna at Glass Epoxy Substrate on L-Band and S-Band," International Journal of Engineering and Innovative Technology (IJEIT), Volume 2, (2013), 37-41
[26] Wenshan Cai and Vladimir Shalaev, "Optical Metamaterials: Fundamentals and Applications," Springer Science+Business Media, LLC, USA (2010) 19-36
[27] G. R. Fowles, "Introduction to Modern Optics," Second Edition, Dover Publications, Inc., New York, 1975, pp. 155-192
[28] Zhenlin Wang, C. T. Chan, Weiyi Zhang, Naiben Ming, and Ping Sheng, "Three-dimensional self-assembly of metal nanoparticles: Possible photonic crystal with a complete gap below the plasma frequency," Physical Review B, Vol. 64, No. 11, 113108
[29] P. W. Milonni and J. H. Eberly, "Laser Physics," John Wiley & Sons, Inc., 2010, pp. 67-73
[30] W. H. Hayt, "Engineering Electromagnetics," 8th Edition, McGraw-Hill Companies, Inc., New York, NY 10020, 2012
[31] W. Cai and V. Shalaev, "Optical Metamaterials: Fundamentals and Applications," Springer Science+Business Media, Stanford & Purdue University, USA, 2010, 64-74
[32] Peter Markos and C. M. Soukoulis, "Wave Propagation: From Electrons to Photonic Crystals and Left-Handed Materials," Princeton University Press, Princeton and Oxford, New Jersey 08540, 2008
[33] Zoran Jaksic, Slobodan Vukovic, Jovan Matovic, and Dragan Tanaskovic, "Negative Refractive Index Metasurfaces for Enhanced Biosensing," Materials 2011, 4, 1-36; doi:10.3390/ma4010001
[34] Ricardo Marques, Ferran Martin, and Mario Sorolla, "Metamaterials with Negative Parameters: Theory, Design, and Microwave Applications," A John Wiley & Sons, Inc. Publication, New Jersey (2008), 166-170
[35] Bian Wu, Bin Li, Tao Su, and Chang-Hong Liang, "Study on Transmission Characteristic of Split-Ring Resonator Defected Ground Structure," PIERS Online, Vol. 2, No. 6, (2006), 710-714
[36] S. I. Maslovski, S. A. Tretyakov, and P. A. Belov, "Wire media with negative effective permittivity: a quasi-static model," Microwave and Optical Technology Letters 35, (2002), 47-51
[37] Nor Muzlifah Mahyuddin and Nur Liyana Abdul Latif, "A 10 GHz Low Phase Noise Split-Ring Resonator Oscillator," International Journal of Information and Electronics Engineering, Vol. 3, No. 6, (November 2013), 584-589, doi:10.7763/IJIEE.2013.V3.384
[38] N. P. Johnson, A. Z. Khokhar, H. M. H. Chong, R. M. De La Rue, and S. McMeekin, "Characterisation at infrared wavelengths of metamaterials formed by thin-film metallic split-ring resonator arrays on silicon," Electronics Letters 42(19), (2006) 1117-1119
[39] D. R. Smith, Willie J. Padilla, D. C. Vier, S. C. Nemat-Nasser, and S. Schultz, "Composite Medium with Simultaneously Negative Permeability and Permittivity," Physical Review Letters, Volume 84, Number 18, 4184-4187 (2000)
[40] Subal Kar, Tapashree Roy, Promit Gangooly, and Souvik Pal, "Analytical Characterization of Cut-Wire and Thin-Wire Structures for Metamaterial Applications," Science and Information Conference 2013, October 7-9, 2013, London, UK
[41] David R. Smith and Norman Kroll, "Negative Refractive Index in Left-Handed Materials," Physical Review Letters, Volume 85, Number 14, 2933-2936 (2 October 2000)







An Investigation on Behavior of Centrally Loaded Shallow Foundation on
Sand Bed Reinforced with Geogrid
S. Panda¹, N. H. S. Ray²
¹Faculty, Department of Civil Engineering, CEB, Bhubaneswar, BPU, Odisha, India
²Faculty, Department of Mechanical Engineering, CEB, Bhubaneswar, BPU, Odisha, India
E-mail: nhs.ray@gmail.com


Abstract- The study pertains to the investigation of the effect of embedment on the load carrying capacity and settlement of strip foundations on sand reinforced with geogrids. Geogrid, being an inextensible reinforcing material, is widely used all over the world, mainly for retaining walls, abutments, slope protection and below foundations in poor soil. Some research work has been done by investigators regarding the optimum placement of geogrids below surface footings; however, work on centrally loaded embedded foundations reinforced with multiple layers of geogrids has not been reported in the literature. Therefore, in this paper attention is paid to the load carrying capacity and settlement behavior of centrally loaded embedded footings reinforced with multiple layers of geogrids. Load tests have been carried out for this purpose. Strip foundations are considered, and loads were applied through an electrically operated hydraulic jack for greater accuracy. Enkagrid PRO-40 has been used as the reinforcing material and sand as the medium. The studies conducted show:
1. In centrally loaded surface footings the load carrying capacity is increased to about 3.55 times by providing geogrid as reinforcement. The load carrying capacity of the reinforced soil increases with increase in the depth of embedment, while the settlement decreases because of the placement of geogrid.
2. The number of layers of geogrid has a significant effect on the load carrying capacity and settlement of foundations. A decrease in the number of geogrid layers decreases the load carrying capacity and increases the settlement of the foundation.

Key words- Geogrid, Reinforcement, Model Footing, Foundation, Embedment, Bearing Capacity, Settlement

I.INTRODUCTION

Reinforced soil is soil in which metallic strips, synthetic materials or geogrids are provided to improve its engineering behavior. The technique of ground improvement by providing reinforcement was in practice even in olden days. Babylonians built ziggurats more than three thousand years ago using the principle of soil reinforcement. A part of the Great Wall of China is also an example of reinforced soil construction. The Dutch and the Romans used soil reinforcing techniques, employing willows and animal hides to reinforce dikes. The basic principles underlying reinforced soil construction were not completely investigated until Henri Vidal of France demonstrated its wide application and developed a rational design procedure. A further modified version of soil reinforcement was conceived by Lee, who suggested a set of design parameters for soil reinforced structures in 1973.
Rising land costs and the decreasing availability of areas for urban infill mean that previously undeveloped areas are now being considered for the siting of new facilities. However, these undeveloped areas often possess weak underlying foundation material, a situation that presents interesting design challenges for geotechnical engineers. To avoid the high cost of deep foundations, modification of the foundation soil or the addition of a structural fill is essential.
Binquet & Lee (1975) investigated the mechanism of using a reinforced earth slab to improve the bearing capacity of granular soils. They tested model strip footings on sand foundations reinforced with wide strips cut from household aluminum foil. An analytical method for estimating the increased bearing capacity based on the tests was also presented. Fragaszy & Lawton also used aluminum reinforcing strips and model strip foundations to study the effects of sand density and reinforcing strip length on bearing capacity.
In this paper, the results of experimental studies on cohesionless soil reinforced with geogrids have been presented. Tests have been conducted with the provision of geogrids in four layers at various spacings, and the results have been compared with the results for the unreinforced condition.

II-EXPERIMENTAL SET UP AND PROCEDURE

2.1 Sample collection
It was decided to study the effect of geogrid placed horizontally in the soil on the load carrying capacity and settlement behavior of a model strip footing placed on the surface as well as at different embedded depths in cohesionless soil. The sand collected from the river bed was made free from roots, organic matter, etc. by washing and cleaning. The sample was oven dried and
properly sieved by passing it through a 1.18 mm IS sieve, the fraction retained on a 75 micron IS sieve being taken to get the required grading. Dry sand was used as the soil medium for the tests as it excludes the effect of moisture and hence the apparent cohesion associated with it. Due to limitations of time and the scope of the present investigation, it was decided to perform the tests using dry sand as the medium; the complexities introduced by the presence of moisture and cohesion have thus been avoided, and the tests have been conducted under a simplified condition.

2.2 Characteristics of sand
Dry sand passing through a 1.18 mm IS sieve and retained on a 75 micron IS sieve has been used. The results of the sieve analysis of the sand used are presented in Table 1. The characteristics of the sand used are as follows:
(i) Specific gravity, G = 2.64
(ii) Maximum void ratio, e_max = 0.92
(iii) Minimum void ratio, e_min = 0.67
(iv) Relative density, I_D = 0.72
(v) Dry density, γ_d = 1.51 g/cm³
The angle of internal friction at the adopted bulk density was found to be 41° 12'. The results of the direct shear test are presented in
the Table 2.
Table-1 Grain size analysis
Table-2 Direct shear test results
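These index properties can be cross-checked against the standard relations I_D = (e_max - e)/(e_max - e_min) and γ_d = G·γ_w/(1 + e). A minimal sketch of the check, assuming γ_w = 1 g/cm³:

```python
# Cross-check of the reported sand index properties.
G, e_max, e_min, I_D = 2.64, 0.92, 0.67, 0.72
gamma_w = 1.0  # unit weight of water [g/cm^3]

e = e_max - I_D * (e_max - e_min)   # in-place void ratio from relative density
gamma_d = G * gamma_w / (1 + e)     # dry density

print(f"void ratio e = {e:.3f}")               # 0.740
print(f"dry density  = {gamma_d:.2f} g/cm^3")  # ~1.52 (reported: 1.51)
```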

2.3 Test tank
A test tank of size 75 cm X 40.5 cm X 61 cm was made in the laboratory for the purpose. The test tank was made of cast iron 6 mm
thick. The side of the box was heavily braced to avoid lateral yielding. The following considerations were taken into account while
deciding the dimensions of the tank.
(i) As per the provisions of IS 1888-1962, the width of the test pit should not be less than 5 times the width of the test plate, so that the failure zones develop freely without any interference from the sides.
(ii) Chummar (1972), in his investigation, suggested that in the case of cohesionless soil the maximum extension of the failure zone is 5B to the sides and 3B below the footing.
By adopting the above tank size for the model footing (8 cm X 36 cm), it was ensured that the failure zones are fully and freely developed without any interference from the sides and bottom of the tank.

2.4 Equipments used

2.4.1 Loading beam with platform
A mild steel channel section of size 152 cm X 20 cm X 10 cm was used for this purpose. A mild steel plate welded to a vertical shaft, passing through a pipe welded to the channel at its mid-span, was used to transfer load to the footing.

2.4.2 Model footing
The model footings used for the laboratory tests were made of mild steel plates of sizes 8 cm X 36 cm X 2.5 cm and 10 cm X 40 cm X 2 cm. One footing was meant for centroidal loading while the remaining three were meant for eccentric loading, the eccentricities being 0.05B, 0.1B and 0.15B respectively. Circular depressions accommodating steel balls were made on the footings at the proper points so that the loading pattern, i.e. centroidal or eccentric, could be maintained. The load was transmitted from the loading platform to the footings through the steel balls. Such an arrangement permitted rotation of the footing about its longitudinal axis.

2.4.3 Dial gauge
Two dial gauges of the following specifications were used during the tests:
Least count: 0.01 mm
Range: 50 mm
Dial gauges were kept in position using a magnetic base placed suitably on a rigid support. As the load was applied settlement
occurred which was recorded by two dial gauges. The average of the two dial gauge readings was taken as the required settlement in
mm.

2.5 Sample preparation
First, the internal dimensions of the tank were measured accurately and the different layers for filling the sand were marked with the help of a marking pen. Knowing the volume of the tank, the weight of sand required to fill it was computed stepwise. Air-dried and sieved sand, as mentioned earlier, was taken. It was found that for a layer of 5 cm the weight of sand required was 23 kg. The sand was placed in eight equal layers of 5 cm each; each layer was compacted properly with a tamping rod to achieve the required density of 1.51 g/cm³. The top surface of the sand in each layer was made smooth with a straight edge and its horizontality was checked with a spirit level. Care was taken that the top surface of each layer was flush with the mark previously made for that layer. For the tests without reinforcement, the footing was placed on the surface and at different embedment depths of 0.25B, 0.5B, 0.75B and 1.0B respectively.
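The 23 kg of sand per 5 cm layer quoted above follows directly from the tank plan area and the target dry density; a quick check:

```python
# Weight of sand needed for one 5 cm layer at the target dry density.
length, width = 75.0, 40.5  # tank plan dimensions [cm]
thickness = 5.0             # layer thickness [cm]
rho_d = 1.51                # target dry density [g/cm^3]

weight_kg = length * width * thickness * rho_d / 1000.0
print(f"sand per layer = {weight_kg:.1f} kg")  # ~22.9 kg (reported: 23 kg)
```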

Fig.1 Geometric parameters of Reinforcement

For the tests with reinforcement, the first geogrid layer was placed at a depth of 0.35B from the base of the footing, the subsequent layers of geogrid being placed at equal spacings of 0.25B as shown in Fig. 1. After placing the geogrids, small weights were put on them to keep the geogrids in position and then the required quantity of sand was poured. Each layer was again compacted properly to achieve the required density. While compacting, care was taken not to disturb the geogrid layers. The compaction was done with the help of a tamping rod. Marks were made at different levels for the compaction of each particular layer; for example, for a 5 cm layer the mark was made at a height of 5 cm from the bottom. The compaction was done by inserting the rod up to the mark, so that the bottom layers were not disturbed.

2.6 Experimental setup
In some tests, after filling the tank with sand and compacting it, the tank was transported with the help of a crane to the position where the load was to be applied. The alignment of the tank was then adjusted by slight longitudinal and lateral movements so that the load would be transferred centrally; for this purpose the tank was allowed to rest on rollers. Later it was realized that in this process the sample might be disturbed, so the test tank was instead fixed at the required position so that the load was transmitted centrally.
In the case of the rigid footing, the footing was fixed in position at the bottom of the vertical shaft by a threaded arrangement provided both in the shaft and the footing, so that the bottom of the shaft and the footing were flush with each other. The footing with the loading assembly was then placed on the top surface of the sand such that the center of the footing coincided with the center of the tank.
In the case of the flexible footing, circular depressions were made both in the footing and in the vertical shaft. The footing was placed on the top surface of the sand so that its center coincided with the center of the tank. A steel ball was placed in the depression of the footing. The loading beam with platform was then placed on the top of the tank, so that the vertical shaft rested on the steel ball.
Then two dial gauges were mounted on the loading platform. The dial gauges were so adjusted that the tip of the stem touched the top
face of the mild steel plate. As the load was applied and settlement occurred the plate moved downward thus pushing the stem from
which settlement was recorded.
Then the load was applied with the help of an electrically operated machine by means of hydraulic jack. The details of the
arrangement for load application are shown in fig.2

Fig.2 Bearing Capacity test setup

2.7 Experimental procedure
(i) The footing with loading assembly was placed on the top surface of the sand.
(ii) The top of the jack was allowed to move down till it just came in contact with the top of the mild steel plate. Then the weight of the footing with the rod was released by loosening the locking screw, which acted as the seating load as per IS 1888-1962.
(iii) The initial readings of dial gauges were noted.
(iv) The load was then applied and the footing was allowed to settle under the applied load intensity. When the required load intensity
was reached settlement observations were taken from the dial gauges.
(v) The next load increment was then applied and the readings of the dial gauges were noted.
(vi) The process of load application was repeated till the footing failed because of excessive settlement, which was also indicated by the proving ring reading.
On completion of the load test, the equipments were removed, the tank was emptied and the tank was again refilled for the next set of
load test.

III- RESULT & DISCUSSION

Results obtained from the laboratory test on two types of model footings of size 10cm X 40cm X 2cm and 8cm X 36cm X 2.5cm with
sand as medium and two types of geogrid sheet as reinforcement placed horizontally have been presented. The detailed procedure of
the load tests conducted on the model footings is highlighted in section 2.7. The load intensity vs. settlement observations have been
presented in figures (3 to 25).

3.1 Footing on homogeneous medium
Load tests were conducted on surface footings of sizes 10 cm X 40 cm X 2 cm and 8 cm X 36 cm X 2.5 cm with sand as the medium, without any reinforcement. The peak load at failure has been found from the graphs of load per unit area vs. the corresponding settlement of the footings. The peak load at failure of the surface footing as well as of the embedded footings has been investigated in both cases, viz. (a) rotation not permitted, (b) rotation permitted.

3.1.1 Rigid footings
It was first decided to conduct the tests throughout the programme without allowing rotation of the foundation, and hence a rigid footing was taken into consideration. Some tests were conducted using rigid footings, but while attempting the tests on eccentrically loaded foundations the experimental setup did not allow it and the experiment was held up. Hence, rotation of the footing was subsequently permitted for smooth running of the experiment. The load settlement curves for the above cases of rigid footing are shown in figs. (3 to 7), and the peak load at failure of the rigid footing placed on the surface of the unreinforced sand bed is found from figs. (3 to 7).





Fig.3 Load settlement curve of (10cm x40cm) strip centrally loaded footing in homogeneous sand bed

Fig.4 Load settlement curve for centrally loaded foundation (10cm x40cm) placed at a depth of 0.25B



Fig.5 Load settlement curve for (10cm x40cm) centrally loaded footing placed at 0.5B depth


Fig.6 Load settlement curve for centrally loaded foundation (10cm x40cm) placed at 0.75B depth in homogeneous sand bed



Fig.7 Load settlement curve for centrally loaded rigid surface footing in homogeneous soil

3.1.2 Flexible footings (Rotation permitted)
A number of tests have been conducted to study the ultimate bearing capacity and the corresponding settlement of the foundation, which are subsequently used as a reference for the analysis of the reinforced condition. The tests on unreinforced sand have been conducted for the surface footing as well as for footings placed at different depths, viz. 0.25B, 0.5B, 0.75B, 1.0B. The peak load at failure of the flexible surface footing is found from fig. 8.



Fig.8 Load settlement curve for (8cm x36cm) strip surface foundations in reinforced soil

The graph showing the plot of load per unit area vs. settlement in all these cases (surface footing as well as embedded footings) has
been shown in the fig.9


Fig.9 Plot of load per unit area vs. settlement of centrally loaded footing in unreinforced sand showing the effect of depth of
embedment

3.2 Effect of embedment on peak load
The peak load at failure of strip footing placed at different depths has been found out from the graph (fig.3 to 7 and fig.10 to 14)


Fig.10 Load settlement curve for (8cm x36cm) strip surface foundations in unreinforced soil


Fig.11 Load settlement curve for (8cm x36cm) strip embedded foundations in unreinforced soil



Fig.12 Load settlement curve for (8cm x36cm) strip embedded foundations in unreinforced soil


Fig.13 Load settlement curve for (8cm x36cm) strip embedded foundations in unreinforced soil




Fig.14 Load settlement curve for (8cm x36cm) strip embedded foundations in unreinforced soil

The ratio of peak load at different embedment to the peak loads at surface footings have been computed and shown in the graph (Fig.9
and Fig.15).


Fig.15 Plot of load per unit area vs. settlement of centrally loaded footing (10cm x40cm)

It is seen that the peak load at failure increases with increase in the depth of embedment, conforming to the reports made by other investigators in the past.

3.3 Footing on reinforced soil
Load tests have been conducted on the model strip footings of sizes 10 cm X 40 cm X 2 cm and 8 cm X 36 cm X 2.5 cm with sand as the medium and geogrids of the types Enkagrid PRO-40 and Enkagrid PRO-80 as reinforcement. Geogrids have been placed in four layers, the topmost layer being placed at 0.35B from the base of the footing and the subsequent layers being placed at equal spacings of 0.25B. The load intensity and settlement observations have been presented in figs. (8 to 12) & (16 to 22). To see the effect of providing geogrids below the foundation on improving the load carrying capacity of the soil, tests have been conducted on sand reinforced with geogrids. Centrally loaded footings have been considered, with the footing placed at different depths (Df = 0, 0.25B, 0.5B, 0.75B, 1.0B).

Fig.16 Load settlement curve for (8cm x36cm) centrally loaded surface foundations in reinforced sand


Fig.17 Load settlement curve for centrally loaded footing in reinforced soil placed at 0.25B below base of footing (rotation permitted)



Fig.18 Load settlement curve of centrally loaded footing in reinforced soil placed at 0.5B below base of footing (rotation permitted)






Fig.19 Load settlement curve of centrally loaded footing in reinforced soil placed at 0.75B below base of footing (rotation permitted)


Fig.20 Load settlement curve of centrally loaded footing in reinforced soil placed at 1.0B below base of footing (rotation permitted)



Fig. 21 Load settlement curve of centrally loaded surface footing with three layers of geogrids

Fig. 22 Load settlement curve of centrally loaded surface footing with four layers of geogrids

Geogrid of variety Enkagrid PRO-40 has been used considering the footing to be rigid, while the type Enkagrid PRO-80 has been used for the flexible footing. The purpose was to see the effect of different types of geogrid as well as to test the footings while allowing rotation, so that the flexible footing can be utilized for eccentric load application. Because of the limited availability of geogrid and the time required for conducting the tests, a limited number of tests have been conducted.

3.3.1 Rigid footing (reinforced soil)
Load tests have been carried out on a centrally loaded strip footing of size 8 cm X 36 cm X 2.5 cm with geogrid variety Enkagrid PRO-40 to see the effect of embedment on reinforced soil. The load settlement curves are shown in figs. (23 to 27).

Fig.23 Load settlement curve for centrally loaded rigid surface footing in reinforced soil



Fig.24 Load settlement curve for centrally loaded rigid foundation in reinforced sand



Fig.25 Load settlement curve of centrally loaded rigid footing in reinforced sand at depth of 0.5B



Fig.26 Load settlement curve for centrally loaded rigid footing in reinforced soil at depth of 0.75B




Fig.27 Load settlement curve of centrally loaded rigid footing on reinforced soil at depth of 1B

The combined plot of load per unit area vs. settlement of the foundation, showing the effect of embedment, is given in fig. 28. The figure shows that with increase in depth of embedment, the peak load at failure increases. The effect of providing reinforcement on the centrally loaded surface footing is shown in fig. 29. From the figure it is seen that the provision of four layers of geogrid below the model strip footing under consideration increases the peak load by 15%, 18%, 25% and 38% respectively for D_f = 0.25B, D_f = 0.5B, D_f = 0.75B and D_f = 1.0B.

Fig.28 Plot of load per unit area vs. Settlement of centrally loaded foundation showing the effect of depth of foundation in reinforced
soil and comparison to unreinforced one

Fig.29 Plot of load per unit area vs. Settlement of centrally loaded strip footing (8cm x36cm) showing the effect of embedment

3.3.2 Flexible footing
Load tests have been conducted on a centrally loaded strip footing of size 8 cm X 36 cm X 2.5 cm using Enkagrid PRO-80 to see the effect of embedment on reinforced soil. The load settlement curves are shown in figs. (16 to 22).
The combined curve of load per unit area vs. settlement of the foundation showing the effect of embedment is given in fig. 29. From the figure it is clear that the peak load at failure increases with increase in the depth of embedment. The figure also shows the effect of providing reinforcement on the centrally loaded footing. The percentage increases in peak load at different depths of embedment due to the provision of four layers of geogrid, as seen from the figure, are 31%, 46%, 105% and 150% respectively for D_f = 0.25B, D_f = 0.5B, D_f = 0.75B and D_f = 1.0B.
The effect of the number of geogrid layers on centrally loaded surface footings has also been investigated. The combined graph showing the variation of load intensity vs. settlement is presented in fig. 30. From the graph it has been observed that with a decrease in the number of layers, the peak load at failure also decreases. From the investigations it has been found that the optimum number of layers is four, which has been adopted in the present investigation.



Fig.30 Plot of load per unit area vs. Settlement of centrally loaded surface foundation on reinforced sand bed showing the effect of
number of geogrid layers

IV-CONCLUSION
The following conclusions are drawn from the tests conducted in the present study, based on the results and discussions presented in the previous section, with regard to embedded foundations on sand reinforced with geogrids and the effect of the number of geogrid layers.
Foundation on homogeneous sand: In centrally loaded foundations on a homogeneous sand bed, as the depth of the foundation is increased, the peak load at failure increases.
Foundation on reinforced sand: In centrally loaded foundations the load carrying capacity increases with increase in the depth of foundation. In surface footings, providing geogrids in four layers increases the load carrying capacity to 3.55 times, whereas with three layers of geogrids the above value reduces to 2.28 and with two layers to 1.82. Fewer geogrid layers give a lower load carrying capacity.
Provision of geogrid in strip foundations increases the load carrying capacity and decreases the settlement of the foundation.

V-SCOPE FOR FURTHER STUDIES
Keeping in view the limitations of time, the available laboratory facilities and the scope of the present investigation, only a part of the problem was experimentally investigated. It is necessary to investigate the peak load at failure and the corresponding settlement in cohesive soil with geogrids as reinforcement. The load carrying capacity and settlement behavior of centroidally loaded footings observed during the experiments need theoretical analysis.
Comprehensive investigation, both experimental and theoretical, of the problem with geogrid as reinforcement is desirable.

REFERENCES:

[1] A. Guido, "Bearing Capacity of Reinforced Sand Sub grades," ASCE, Jr. of Geotechnical Engineering, Vol. 1 - 113, 1987.

[2] Adams & Collin, "Performance of Spread Footings on Sub grades Reinforced with geogrids & Geojacks, "Chi L i , Scott M-Merry, & Everkt C. Lawton (1997).

[3] Binquet & Lee, Bearing Capacity of Reinforced earth Slabs," ASCE, Vol. - 1, G T - 1 2, P P - 1241 - 1255, Jan 1975.

[4] Bowles, E, "Foundation Analysis & Design" MC Graw Hill, Kogakusha, Ltd. (1977).

[5] Chummar A.V, "Bearing Capacity Theory from Experimental Results" Proc ASCE, Jr. Soil Mechanics & Foundation Division 1972.
[6] C.T.Gnanendran & A.P.S. Selvadurai, Strain Measurement & Interpretation of Stabilizing force in a geogrid Reinforcement," Department of Civil Engineering &
Applied Mechanics, MC Gil University, Montrel Canada (2001).

[7] Das et al., Foundation of Geogrid Reinforced sand Effect of Transient Loading". California University, U.S.A. (1998).

[8] Fragaszy & Lawton, "Bearing Capacity of Reinforcement sand Sub grades." Jr.Geotechnical Engg, ASCE (1984).

[9] Henery Vidal, "The developments & Future of reinforced earth", ASCE Symposium on earth reinforcements (1978).

[10] Huang & Hong, "Ultimate bearing capacity & Settlement of footing on reinforced sandy ground." (2000).

[11] Indian Standard Institutions, "Code of Practice".
1498 - Classification & Identification on soils for general Engineering purposes,1888 - Methods of Load test and soils,2720 - Methods of tests on soils,PT - 111 -
Determination of Specific Gravity,Pt - V - Grain Size analysis,Pt. - XIII - Direct Shear Test.

[12] Jewell et al, "Interaction between soil & geogrids," Symp. Polymer Grid Reinforcement in Civil Engg. London, England (1985).

[13] M.A. Mahmoud & F.M. Abdrabbo, "Bearing capacity tests on strip footing resting on reinforced sand subgrades," Canadian Geotechnical Journal, Vol. 26, No. 1-4, 1989.

[14] Meyerhof G.G., "The ultimate bearing capacity of foundations, Geotechnique, 2, No. - 4 (1951).

[15] Omar et al., "Ultimate bearing capacity of rectangular foundations on geogrid reinforced sand". Geotech (1993).

[16] Rhines, W. J, "Elastic-plastic foundation model for punching shear failure," Proc. ASCE, Jr. of Soil Mechanics & Foundations Divisions, Vol. 95, No. SM-3, May 1967.

[17] Sharma & Bolton, "Centrifuge modelling of embankment on soft clay reinforced with geogrid," Cambridge University Engineering Department (1996); reinforced soil & Geotextiles (1998).

[18] Sreekantiah H.R., "Stability of loaded footings on reinforced sand," Proc of FIGC on reinforced soil & Geotextiles Vol. - 1, (1988).

[19] Sridhran A, "Reinforced soil foundation on soft soil," Proc of FIGC on reinforced soil & Geotextiles. Vol. - 1 (1988).

[20] Terzaghi K, "Theoretical Soil Mechanics," John Wiley & Sons Inc, New York, 1943.

[21] Terzaghi & Peck, R.B., "Soil mechanics in Engineering Practice," John Wiley & Sons (1967).

[22] Vesic A.S., "Analysis of ultimate loads of shallow foundations." Proc of ASCE, Soil mechanics & Foundation division Vol. 99, Jan 1973.

[23] V.N.S. Murthy, Soil mechanics & foundation Engineering" Dhanpat Rai & Sons (1988).

[24] Yetimoglu et al, "The bearing capacity of rectangular footings on reinforced sand," PhD, Thesis (1994)






















Design of MEMS Based Electrostatically Controlled Micromirror Using
COMSOL Multiphysics
Pooja Bansal#, Anurag Singh*
#M.Tech Scholar, ECE, OITM, Hisar, India
*Asst. Prof., ECE Dept., OITM, Hisar, India
poojab57@gmail.com, anurag.sangwan@hotmail.com


Abstract- In this paper, we present the design and simulation results of an electrostatically controlled micromirror using COMSOL Multiphysics software. The structural mechanical properties of the actuation mechanism of a square micromirror are studied; lift-off of the structure is achieved using four springs, each simulating a prestressed cantilever beam. A few base materials were introduced: Alumina, Aluminum 3003-H18, Copper and Aluminum. To make the leg (cantilever) more efficient, Steel AISI 4340 was introduced together with the base materials to further reduce the lift-off stress. From the analysis, we conclude that the best material combination is Aluminum 3003-H18 + Steel AISI 4340, which has a lower stress level and the desired lift-off. Another important parameter to determine for this structure is what prestress level is necessary to produce a desired lift-off.
Keywords: Al, COMSOL, DMD, IC, MEMS, Micromirror, 3D
INTRODUCTION
Microelectromechanical systems represent an extraordinary technology that is transforming whole industries and driving the next technological revolution. These devices can replace bulky actuators and sensors with micron-scale equivalents that can be produced in large quantities by the fabrication processes used in integrated circuit (IC) photolithography. This reduces cost, size, weight and power consumption while increasing performance, production volume and functionality by orders of magnitude.
For example, one well-known MEMS device is the accelerometer, now being manufactured using MEMS with low cost, small size and high reliability. Furthermore, it is clear that current MEMS products are simply precursors to greater and more pervasive applications to come, including genetic and disease testing, guidance and navigation systems, power generation, RF devices (especially for cell phone technology), weapon systems, biological and chemical agent detection, and data storage [3,7].
Recently, MEMS-based micromirrors have been applied in optical switches and displays [5, 9]. They are also used in a wide range of applications such as interferometric systems, confocal microscopes [8], wavelength-selective switches, variable optical attenuators, and biomedical imaging [6]. MEMS-based micromirrors have higher operating speed and lower mass compared to traditionally fabricated devices, and the potential for lower cost through the MEMS fabrication process. A successful example of MEMS-based micromirrors is the Texas Instruments digital micromirror device (DMD) [10]. In most applications, electrostatic actuators are preferred because of their low power consumption.
The research refers to the Digital Micromirror Device (DMD), an optical semiconductor which is the core of DLP projection technology, invented by Dr. Larry Hornbeck and Dr. William E. "Ed" Nelson of Texas Instruments in 1987, which used aluminium as the mirror material [10]. This paper is organized as follows. First, we describe the micromirror design and materials used. Next, we present the simulation details and results. Finally, we show the
simulation figures and conclusion. In this paper, we report the design and modeling of a prestressed micromirror using COMSOL Multiphysics version 3.5a [11].
MICROMIRROR DESIGN
This micromirror model uses a 3D structural analysis. The micromirror has a stiff, flat, reflective center portion which is supported by four prestressed plated cantilever springs, as shown in figure 1. To keep the mesh size small and the solution time reasonable, the plated structure is studied with two layers. It is assumed that in the top and bottom layers the plating process creates equal and opposite initial stresses, which makes the model easy to set up.
The purpose of the model is to illustrate the use of prestresses in plated metal layers to create a desired lift-off of a MEMS structure. The model shows the use of the Initial Stresses feature in the Structural Mechanics Module.

Figure 1.Model Geometry
Note in particular that a 3D structure with thin layers, such as the one in this model, leads to a very large unstructured tetrahedral mesh. To avoid this, we first generate a 2D quadrilateral mesh by mesh mapping and then extrude it into 3D to produce a mesh with hexahedral elements, as shown in figures 2 and 3. This way the mesh generator creates structured elements with a high aspect ratio.

Figure 2.The geometry with the 2D mapped mesh.


Figure 3.The geometry after extruding the mesh into 3D.
Table 1 specifies the materials used for the design of this device and their properties, such as Young's modulus and Poisson's ratio. The micromirror was designed using these materials.
SIMULATION DETAILS
The simulation was done using COMSOL, well known as one of the software packages normally used to simulate MEMS devices prior to the fabrication steps; other software such as COVENTOR and ANSYS also exists. Before starting the simulation, all the available micromirror materials were studied and selected from within the COMSOL software; the selected ones are presented in Table 1. In this simulation the initial stress was set at 5 GPa [4], as advised by COMSOL. For the simulation we used a parametric nonlinear solver to model the performance of the micromirror, so this model uses the large-deformation analysis type with both the linear and parametric linear solvers.

TABLE 1: Materials used and their properties

Material            | Young's modulus (E) | Thermal expansion (alpha) | Poisson ratio (nu) | Density (rho)
Alumina             | 300e9 [Pa]          | 8e-6 [1/K]                | 0.222              | 3900 [kg/m3]
Aluminum 3003-H18   | 69e9 [Pa]           | 23.2e-6 [1/K]             | 0.33               | 2730 [kg/m3]
Copper              | 110e9 [Pa]          | 17e-6 [1/K]               | 0.35               | 8700 [kg/m3]
Aluminum            | 70e9 [Pa]           | 23e-6 [1/K]               | 0.33               | 2700 [kg/m3]
Steel AISI 4340     | 205e9 [Pa]          | 12.3e-6 [1/K]             | 0.28               | 7850 [kg/m3]
RESULTS

The cantilever was simulated with different materials as per Table 2. From the table we can observe that different cantilever materials combined with the same micromirror material give different stress levels and lift-off, in accordance with the material characteristics. The observations focus particularly on the stress at the edge of the mirror and on the lift-off. In [1], Hazian Mamat et al. observed that using the same material for the micromirror and the cantilever results in a high stress level and lift-off, and that using structural steel as the micromirror material improves the results. We used Steel AISI 4340 for the micromirror and found a comparatively low stress level and the desired lift-off, as shown in the table. Further simulation with improved cantilever materials dramatically changes the surface deformation and lift-off, which solves the over-stress problem [1].
Figures 4-7 compare the lift-off and stress level for the different material combinations. The steel, being harder than aluminum, deforms less. Table 2 shows the stress level and lift-off of the different material combinations.

TABLE 2: Different materials used with Steel AISI 4340

Cantilever Material | Micromirror Material | Lift-off            | Stress Level
Alumina             | Steel AISI 4340      | Low (2x10^-5)       | Low (2.355e-5)
Aluminum 3003-H18   | Steel AISI 4340      | Good (5x10^-5)      | Low (5.194e-5)
Copper              | Steel AISI 4340      | Very low (1x10^-6)  | Very low (1.202e-6)
Aluminum            | Steel AISI 4340      | High (2x10^-4)      | Medium (2.447e-4)

Figure 5 shows the lift-off and stress level of the micromirror when Aluminum 3003-H18 is used for the cantilever beam and Steel AISI 4340 for the micromirror. It gives a good lift-off and a low stress level of 1.95e-4. This is the best combination of all.

Figure 4: Alumina+Steel AISI 4340

Figure 4 shows the lift-off and stress level of the micromirror when Alumina is used for the cantilever beam and Steel AISI 4340 for the micromirror. It has a low lift-off and a low stress level of 2.335e-5 when a prestress of 10e8 is applied.

Figure 5: Aluminum 3003 H18+Steel AISI 4340



Figure 6: Copper+ Steel AISI 4340

Figure 6 shows the lift-off and stress level of the micromirror when Copper is used for the cantilever beam and Steel AISI 4340 for the micromirror. It has a very low stress level but also a very low lift-off, so it is not suitable.



Figure 7: Aluminum+Steel AISI 4340

Figure 7 shows the lift-off and stress level of the micromirror when Aluminum is used for the cantilever beam and Steel AISI 4340 for the micromirror. It has a medium stress level and a high lift-off. Since a stiff, flat device is required, this combination is not suitable either.
CONCLUSION
In this paper, we have designed a MEMS-based rectangular micromirror using COMSOL Multiphysics software. For the simulation, we used a parametric nonlinear solver to model the performance of the micromirror. We conclude that the stress can be controlled by using other cantilever materials, and we have used different cantilever materials to reduce the stress. The best material combination is Aluminium 3003-H18 + Steel AISI 4340, which has less stress and uniform roughness. It gives an 8.6% improved result compared to the combination of Aluminium 3003-H18 + structural steel advised in [1]. Furthermore, the distance between the mirror and the bottom plate is reasonably small, so the voltage required to actuate the micromirror will be reasonably low. As known theoretically, if the micromirror were far from the electrode, it would bend and create stress at its four legs.

REFERENCES:
[1] Hazian Mamat, Azrul Azlan Hamzah, Azman Jalar, Jumril Yunas and Nurfirdaus A. Rahim, "A COMSOL Model to Analyse the Structural Mechanical Problem in an Electrostatically Controlled Prestressed Micro-Mirror," World Applied Sciences Journal 26 (7): pp. 950-956, 2013
[2] Li, L., Li, R., Lubeigt, W., Uttamchandani, D., "Design, simulation and characterization of a bimorph varifocal micromirror and its application in an optical imaging system," Journal of Microelectromechanical Systems, 22(2): art. no. 6328231, pp: 285-294, 2013
[3] K. Srinivasa Rao, et al., "Overview on Micro-Electro-Mechanical-Systems (MEMS) Technology," Applied Science Research, 1(5), 2011
[4] Viereck, V., Ackermann, J., Li, Q., Jakel, A., Schmid, J. and Hillmer, H., "Sun glasses for buildings based on micro mirror arrays: Technology, control by networked sensors and scaling potential," Networked Sensing Systems, pp. 135-139, 2008
[5] Jin-Chern Chiou, Chin-Fu Kou, and Yung-Jiun Lin, "A Micromirror with Large Static Rotation and Vertical Actuation," IEEE Journal of Selected Topics in Quantum Electronics, Vol. 13, No. 2, pp. 297-303, March/April 2007
[6] W. Piyawattanametha, L. Fan, S. Hsu, M. Fujino, M. C. Wu, P. R. Herz, A. D. Aguirre, Y. Chen, and J. G.
Fujimoto, Two-dimensional endoscopic MEMS scanner for high resolution optical coherence tomography, in
Proc. CLEO, San Francisco, CA, Paper CWS2, , 2004
[7] H. Sato, T. Kakinuma, J. S. Go and S. Shoji, A novel fabrication of in-channel 3-D micromesh structure using
maskless multi-angle exposure and its microfilter application, Proceedings of the IEEE MEMS Conference,
Kyoto,Japan, pp. 223-226., Jan. 2003
[8] K. Murakami, A. Murata, T. Suga, H. Kitagawa, Y. Kamiya, M. Kubo, K. Matsumoto, H. Miyajima, and M. Katashiro, "A miniature confocal optical microscope with MEMS gimbal scanner," in Proc. Transducers, Boston, MA, pp. 587-590, 2003
[9] R. Ryf et al., 1296-port MEMS transparent optical cross connect with 2.07 petabit/s switch capacity, in Tech.
Dig. Opt. Fiber Commun. Conf., Anaheim, CA, Mar., Paper PD-28, 2001
[10] L. J. Hornbeck, "Current status of the digital micromirror device (DMD) for projection television applications," in IEDM Tech. Dig., pp. 381-384, 1993
[11] http://www.google.co.in/url?sa=t&rct=j&q=micromirror%20designed%20using%20comsol%203.5a&source=we
b&cd=3&cad=rja&uact=8&ved=0CDIQFjAC&url=http://www.comsol.com/paper/download/182965/thomas_pos
ter.pdf&ei=njBqU_LZG4SJrgf2y4HQBQ&usg=AFQjCNEuDlmCdk_9Vd6mtBnBvukLVLey3w
http://www.csa.com/discoveryguides/mems/overview.php

Design of Cache Memory Cell for Leakage Power Reduction
Anil Kumar Gautam¹, Mahesh Kumar Aghwariya¹
¹Department of Electronics Engineering, THDC Institute of Hydropower Engineering and Technology, Uttarakhand
E-mail: anilgautam19@gmail.com
Abstract- This paper presents a comparison of the 5T cell with the 6T cell. The leakage power of a conventional 6T cell at 0.18 µm technology has been calculated and found to be 37.32 pW. The same technology has been implemented on the 5T cell, by which the leakage power is reduced by 37.59%. Various leakage reduction techniques such as Auto-backgate Controlled Multi-threshold CMOS (ABC-MTCMOS), Gated V_DD and Dynamic Voltage Scaling (DVS) have been discussed and applied on the conventional 6T cache memory cell; the same have been applied on the 5T cell and the results compared. Mentor Graphics software is used for the simulation of the above-mentioned SRAM cells.

Keywords: Leakage power, Leakage current, Mentor Graphics software, ABC-MTCMOS
INTRODUCTION
In recent years, rapid development in VLSI fabrication has led to decreased device geometries and increased transistor densities of integrated circuits, and circuits with high complexity and very high frequencies have started to emerge. Such circuits consume an excessive amount of power and generate an increased amount of heat. Circuits with excessive power dissipation are more susceptible to run-time failures and present serious reliability problems [1].
The operating voltage of VLSIs is ever decreasing due to the strong need for low power consumption. In order to achieve low-voltage, high-speed operation, the CMOS process tends to be optimized for low-voltage operation using thinner gate oxide and shorter effective channel length. Low-voltage operation is also important in future VLSIs, where scaled MOSFETs can be operated only in low V_DD environments with sufficient reliability [2].
The low power design phenomenon is a growing class of personal computing devices, such as portable desktops, digital pens, audio
and video-based multimedia products, and wireless communications and imaging systems, such as personal digital assistants, personal
communicators and smart cards. These devices and systems demand high-speed, high-throughput computations, complex
functionalities and often real-time processing capabilities. One of the negative side effects of technology scaling is that leakage power
of on chip memory increases dramatically and forms one of the main challenges in future system on a chip (SoC) design. In battery-
supported applications with low duty-cycles, such as the Pico-Radio wireless sensor nodes , cellular phones, or PDAs, leakage power
can dominate system power consumption and determine battery life. Therefore, an efficient memory leakage suppression scheme is
critical for the success of ultra low-power design [3]. Deep sub-micrometer CMOS technologies limit dynamic energy dissipation by scaling down the supply voltage and the threshold voltage VTh, offer a continuously higher level of integration, and assure high speed [4]. One of the advantages of complementary metal oxide semiconductor (CMOS) over competing technologies, such as transistor-transistor logic (TTL) and emitter coupled logic (ECL), has been its lower power dissipation. When not switching, CMOS transistors dissipate negligible amounts of power [5].
Memory circuits form an integral part of every system design as Dynamic RAMs, Static RAMs, Ferroelectric RAMs, ROMs or Flash
Memories, significantly contributing to the system level power consumption. Therefore, reducing the power dissipation in memories
can significantly improve the system power-efficiency, performance, reliability and overall costs. SRAMs have experienced a very
rapid development of low-power low-voltage memory design during recent years due to an increased demand for notebooks, laptops,
hand-held communication devices and IC memory cards.
Semiconductor devices are aggressively scaled each technology generation to achieve high integration density while the supply
voltage is scaled to achieve lower switching energy per device. However, to achieve high performance there is need for scaling of the
transistor threshold voltage . Scaling of transistor threshold voltage is associated with exponential increase in sub-threshold leakage
current [6]. Various techniques have been proposed to reduce the SRAM sub-threshold leakage power. At the circuit level, dynamic control of transistor gate-source and substrate-source back bias was exploited to create low leakage paths during standby periods. At the architectural level, leakage reduction techniques include gating off the supply voltage (VDD) of idle memory sections, or putting less frequently used sections into drowsy standby mode. These approaches exploited the quadratic reduction of leakage power with VDD, and achieved optimal power-performance tradeoffs with the assistance of compiler-level cache activity analysis. To further exploit leakage control on caches with a large utilization ratio, the approach of drowsy caches allocated inactive cache lines to a low-power mode, where VDD was lowered but the memory data preserved [3].
6 Transistor SRAM (Static Random Access Memory) Cell
The six transistor (6T) SRAM is mainly formed by an array of CMOS cells along with a number of peripheral circuits, e.g., row decoder, column decoder, sense amplifier, write buffer, etc. Transistors M1, M2, M3 and M4 comprise a pair of cross-coupled CMOS inverters that use positive feedback to store a value. Transistors M5 and M6 are two pass transistors that allow access to the storage nodes for reading and writing. A wordline is connected to the gates of access transistors M5 and M6, and two bitlines are connected to the drain (source) terminals of the pass transistors. Such cells constitute conventional SRAMs (CV-SRAM) [8].

Figure 1: 6T SRAM cell
READ OPERATION IN 6T SRAM
Consider the data read operation first, assuming that a logic 0 is stored in the cell. The voltage levels in the CMOS SRAM cell at the beginning of the read operation are depicted in Fig. 2. Here, the transistors M2 and M3 are turned off, while the transistors M1 and M4 operate in the linear mode. Thus the internal node voltages are V1 = 0 and V2 = VDD before the cell access transistors M5 and M6 are turned on. After the pass transistors M5 and M6 are turned on by the row selection circuitry, the voltage level of the complementary column C̄ will not show any significant variation, since no current will flow through M6. On the other half of the cell, M5 and M1 will conduct a non-zero current and the voltage level of column C will begin to drop slightly.

Fig. 2: Read operation in 6T SRAM
The column capacitance CC is typically very large; therefore the amount of decrease in the column voltage is limited to a few hundred millivolts during the read phase. The data read circuitry is responsible for detecting this small voltage drop and amplifying it as a stored 0. While M5 and M1 are slowly discharging the column capacitance, the node voltage V1 will increase from its initial value of 0 V. If the W/L ratio of the access transistor M5 is large compared to the W/L ratio of M1, the node voltage V1 may exceed the threshold voltage of M2, forcing an unintended change of the stored state. The design issue for the data read operation is then to guarantee that the voltage V1 does not exceed the threshold voltage of M2, so that the transistor M2 remains turned off during the read phase [10].
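This constraint can be made concrete with the standard textbook current balance (cf. [13]); the following is a sketch of that analysis, not a result stated in this paper. At the read margin, M5 operates in saturation and M1 in the linear region, so with k_i = µnCox(W/L)_i:

$$\frac{k_5}{2}\,\left(V_{DD}-V_1-V_{T,5}\right)^2 \;=\; \frac{k_1}{2}\,\left[\,2\left(V_{DD}-V_{T,1}\right)V_1 - V_1^2\,\right]$$

Imposing V1 ≤ VT,2 in this balance bounds the allowable ratio (W/L)5 / (W/L)1, i.e. the access transistor must be kept sufficiently weak relative to the driver transistor.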
WRITE OPERATION IN 6T SRAM
Consider the write 0 operation, assuming that a logic 1 is stored in the SRAM cell initially. The figure shows the voltage levels in the CMOS SRAM cell at the beginning of the data write operation. The transistors M1 and M4 are turned off while the transistors M2 and M3 operate in the linear mode. Thus the internal node voltages are V1 = VDD and V2 = 0 V before the cell access transistors M5 and M6 are turned on.

Fig. 3: Write operation in 6T SRAM
The column voltage VC is forced to the logic 0 level by the data-write circuitry; thus, we may assume that VC is approximately equal to 0 V. Once the pass transistors M5 and M6 are turned on by the row selection circuitry, we expect that the node voltage V2 remains below the threshold voltage of M1, since the voltage level at node 2 would not be sufficient to turn on M1. To change the stored information, i.e. to force V1 to 0 V and V2 to VDD, the node voltage V1 must be reduced below the threshold voltage of M2, so that M2 turns off first [7].
POWER LEAKAGE REDUCTION IN 6T SRAM

Leakage reduction in the 6T cell has been evaluated with the Gated VDD, ABC-MTCMOS (Auto-Backgate-Controlled Multi-Threshold CMOS) and Dynamic Voltage Scaling techniques; the observed results are given below.
TABLE-1: LEAKAGE POWER AND PERFORMANCE OF THE 6T SRAM CELL

Conventional 6T SRAM Cell Metrics
Read time (WL high up to 100 mV difference in bitlines):  318 ps
Write time (WL up to node flip):                          62 ps
Leakage power / Cell:                                     37.32 pW

Table 1 shows the leakage power dissipation and the read and write times of the 6T SRAM cell.
TABLE-2: LEAKAGE REDUCTION TECHNIQUES APPLIED TO THE 6T CELL

Leakage Reduction Technique    Leakage Power Dissipation/Cell (pW)    Percentage Reduction
Conventional                   37.32                                  -
ABC-MTCMOS                     23.42                                  37.25
DVS                            22.37                                  40.06
Gated VDD                      18.92                                  49.30

Table 2 shows the comparison of leakage power dissipation by applying different leakage reduction techniques on
conventional 6T SRAM cell.
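The percentage-reduction column of Table 2 follows directly from the per-cell leakage figures. A minimal Python sketch (values copied from Table 2) that reproduces the column:

```python
# Reproduce the "Percentage Reduction" column of Table 2 from the
# per-cell leakage figures (all values in pW, taken from the paper).
P_CONVENTIONAL = 37.32
techniques = {"ABC-MTCMOS": 23.42, "DVS": 22.37, "Gated VDD": 18.92}

for name, p in techniques.items():
    reduction = (P_CONVENTIONAL - p) / P_CONVENTIONAL * 100.0
    print(f"{name:>10}: {p:5.2f} pW -> {reduction:5.2f} % reduction")
# -> 37.25 %, 40.06 % and 49.30 %, matching Table 2.
```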
5 Transistor SRAM (Static Random Access Memory) Cell
The 5T SRAM cell is designed in a standard 0.18 µm CMOS technology. The five transistor SRAM cell consists of two cross-coupled inverters, connected back to back, which are used for the storage of the data. One access transistor acts as a switch to cut off the cell from the bitline, which is used for communication with the outside. A single bitline is used for applying the voltage level, and the read/write is done through it.

Figure 4: Five transistor SRAM cell
READ OPERATION IN 5T SRAM
When a read operation is issued, the memory goes through the following steps:
1 - Row and column address decoding: the row address is decoded to select a wordline, and the column address is decoded to connect the selected bitline.
2 - Bitline driving: after the wordline goes to a high voltage, the target cell connects to its bitline. The so-called cell current through the driver or load transistor of the target cell discharges or charges the bitline progressively, resulting in a change in the bitline voltage.
3 - Sensing: after the wordline returns to a low voltage, the sense amplifier (SA) is turned on to amplify the small differential voltage into a full-swing logic signal.
4 - Precharging: at the end of the read operation all bitlines and data-lines are precharged to 1.2 V and the memory array gets ready for the next read/write operation.

Figure 5: Five-Transistor SRAM cell at the onset of read operation (Reading 0)
WRITE OPERATION IN 5T SRAM
When a write operation is issued, the memory array goes through the following steps:
1 - Row and column address decoding: the row address is decoded to select a wordline, and the column address is decoded to connect the selected bitline.
2 - Bitline driving: for a write, the bitline driving proceeds simultaneously with the row and column address decoding by turning on the proper write buffer. After this step, the selected bitline is forced to the '1' or '0' logic level.
3 - Cell flipping: if the value of the stored bit in the target cell is opposite to the value being written, the cell flipping process takes place.
4 - Precharging: at the end of the write operation the bitline is precharged to 1.2 V and the memory array gets ready for the next read/write operation.
POWER LEAKAGE REDUCTION IN 5T SRAM
Leakage reduction in the 5T cell has also been evaluated with the same techniques that were applied to the 6T cell.
TABLE-5: LEAKAGE POWER AND PERFORMANCE OF THE DESIGNED 5T SRAM CELL

Designed 5T SRAM Cell Metrics
Read time (WL high up to 100 mV difference in bitlines):  326 ps
Write time (WL up to node flip):                          96 ps
Leakage power / Cell:                                     23.29 pW

Table 5 shows the leakage power dissipation and the read and write times of the 5T SRAM cell.
TABLE-6: LEAKAGE REDUCTION TECHNIQUES APPLIED TO THE 5T CELL

Leakage Reduction Technique    Leakage Power Dissipation/Cell (pW)    Percentage Reduction
Designed 5T                    23.29                                  -
ABC-MTCMOS                     21.10                                  9.40
DVS                            19.96                                  14.30
Gated VDD                      7.44                                   68.06

Table 6 shows the comparison of leakage power reduction techniques in the 5T SRAM cell.
Results and conclusion
Simulations of the 6T and 5T cells were performed in a 1.8 V, 0.18 µm TSMC CMOS process using Mentor Graphics and TSMC 018 models. Transistor sizes for writing and precharging were optimized for the 5T and 6T cells independently. In addition, 1 pF of bitline capacitance and all extracted cell parasitics were included in the simulations. The 6T cell used for comparison was designed and laid out under the same constraints as the proposed 5T design.
The leakage power of the 6T cell is calculated, and by applying the different leakage reduction techniques the leakage power is reduced much further. Leakage power in the cell has been evaluated by keeping the wordline low to cut off the cell from the bitline. Read time is measured as the time for a 50 to 100 mV difference to develop between the two bitlines. Write time is measured as the time until both storage nodes flip. Writing of a 1 or 0 into the 5T cell is performed by driving the bitline to VDD or VGND respectively, while the wordline is asserted at VDD. As a consequence, for a non-destructive read operation, the bitline is precharged to an intermediate voltage VPC = 1.2 V < VDD = 1.8 V.
TABLE 7: LEAKAGE POWER IN THE 6T AND 5T CELLS

Leakage Reduction    Leakage Power Dissipation/Cell (pW)    Percentage
Technique            6T          5T                         Reduction
Conventional         37.32       23.29                      37.59
ABC-MTCMOS           23.42       21.10                      9.91
DVS                  22.37       19.96                      10.77
Gated-VDD            18.92       7.44                       60.68
Table 7 shows the comparison of leakage power dissipation in the 6T and 5T SRAM cells under the various leakage reduction techniques. Leakage power dissipation for the conventional 6T and the designed 5T SRAM cell has been calculated, and it is seen from the above discussion that leakage power is reduced in the 5T cell compared to the 6T cell. By further applying the different leakage reduction techniques to the 6T and 5T cells, we have seen that leakage power is reduced much further from 6T to 5T. Of all the techniques discussed, DVS is found to be the best, as it reduces leakage comparably to Gated VDD while retaining the cell information.
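As a quick cross-check of Table 7, the reduction column is (P6T - P5T)/P6T for each technique; a short Python sketch with the tabulated values:

```python
# Cross-check Table 7: percentage reduction from 6T to 5T per technique.
pairs = {  # technique: (6T, 5T) leakage in pW, from Table 7
    "Conventional": (37.32, 23.29),
    "ABC-MTCMOS":   (23.42, 21.10),
    "DVS":          (22.37, 19.96),
    "Gated-VDD":    (18.92,  7.44),
}
for tech, (p6, p5) in pairs.items():
    print(f"{tech:>12}: {(p6 - p5) / p6 * 100:5.2f} %")
# -> 37.59, 9.91, 10.77 and 60.68 %, matching the table.
```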

REFERENCES:
[1]. Martin Margala, Low power SRAM circuit design in IEEE international workshop on memory technology, design and
testing, 1999, pp. 115-122.
[2]. Hiroshi Kawaguchi, Yasuhito Itaka and Takayasu Sakurai, Dynamic leakage cut off scheme for low-voltage SRAMs, in IEEE
symposium on vlsi circuits digest of technical papers, 1998, pp. 140-141.
[3]. Huifang Qin, Yu Cao, Dejan Markovic, Andrei Vladimirescu and Jan Rabaey, SRAM leakage suppression by minimizing
standby supply voltage, in IEEE computer society, 2004.
[4]. Fabio Frustaci, Pasquale Corsonello, Stefania Perri, and Giuseppe Cocorullo, Techniques for leakage energy reduction in deep
submicrometer cache memories, in IEEE transactions on very large scale integration (vlsi) systems, vol. 14, no. 11, november 2006,
pp. 1238-1249.

[5]. Nam Sung Kim, Krisztian Flautner,, David Blaauw, and Trevor Mudge, Circuit and microarchitectural techniques for reducing
cache leakage power, in IEEE transactions on very large scale integration (vlsi) systems, vol. 12, no. 2, february 2004, pp. 167-184.

[6]. Amit Agarwal, Hai Li and Kaushik Roy, DRG-Cache: A data retention gated ground cache for low power, 39th Proceedings of Design Automation Conference, 2002, pp. 473-478.
[7]. K. Itoh, VLSI memory chip design, Springer-Verlag, NY, 2001.
[8]. S.M. Jung, J. Jang, W. Cho, J. Moon, K. Kwak, B. Choi, B. Hwang, H. Lim, J. Jeong, J. Kim, and K. Kim, The revolutionary and truly 3-dimensional 25F2 SRAM technology with the smallest S3 (Stacked Single-crystal Si) cell, 0.16 µm², and SSTFT (Stacked Single-crystal Thin Film Transistor) for ultra high density SRAM, Symp. VLSI Tech. Dig. Tech. Papers, June 2004, pp. 228-229.

[9]. H. J. An, H. Y. Nam, H. S. Mo, J. P. Son, B. T. Lim, S. B. Kang, G. H. Han, J. M. Park, K. H. Kim, S. Y. Kim, C. K. Kwak and
H. G. Byun, 64Mb mobile stacked single crystal Si SRAM(S3RAM) with selective dual pumping scheme (SDPS) and multi cell
burn-in scheme (MCBS) for high density and low power SRAM, Symp. VLSI circuits dig. tech. papers, , June 2004, pp. 282283.

[10]. Y. H. Suh, H. Y. Nam, S. B. Kang, B. G. Choi, H. S. Mo, G. H. Han, H. K. Shin, W. R. Jung, H. Lim, C. K. Kwak, and H. G.
Byun, A 256Mb synchronous-burst DDR SRAM with hierarchical bit-line architecture for mobile applications, ISSCC dig. tech.
papers, Feb. 2005, pp. 476477.

[11]. http://www.sun.com/blueprints.

[12]. T. Floyd, Digital Fundamentals, Prentice Hall, ninth edition, 2006.

[13]. Sung Mo Kang ,Yusuf Leblebici , CMOS Digital Integrated Circuits: Analysis and Design , Tata McGraw Hill Third edition ,
2003.

[14]. Neeraj Kr. Shukla, Debasis Mukherjee, Shilpi Birla and R.K. Singh, Leakage current minimization in deep- submicron
conventional single cell SRAM, in IEEE International conference on recent trends in information, telecommunication and computing,
2010, pp. 381-383.

[15]. Arash Azizi Mazreah, Mohammad Taghi Manzuri and Ali Mehrparvar, A high density and low power cache based on novel
SRAM cell, in journal of computers, vol. 4, no. 7, July 2009, pp. 567-575.

[16]. B. Amelifard, F. Fallah, M. Pedram, Reducing the sub-threshold and gate tunneling leakage of SRAM cells using dual-Vt and dual-Tox assessment, in IEEE Proceedings of Design, Automation and Test, 2006, vol. 1, pp. 1-6.

[17]. M. Mamidipaka, K. Khouri, N.Dutt, and M. Abadir, Analytical models for leakage power estimation of memory array
structures, International conference on hardware/software and co-design and system synthesis (CODESISSS), 2004, pp. 146 151.

[18]. J. T. Kao and A. P. Chandrakasan, Dual threshold voltage techniques for low power digital circuits, in IEEE Journal of Solid State Circuits, vol. 35, no. 7, Jul. 2000, pp. 1009-1018.

[19]. M. Powell, S. Yang, B. Falsafi, K. Roy, and T. Vijaykumar, Gated-VDD: A circuit technique to reduce leakage in deep-submicron cache memories, Proceedings IEEE/ACM International Symposium on Low Power Electronics and Design, 2000, pp. 90-95.

[20]. S. Yang, M. Powell, B. Falsafi, K. Roy, and T. Vijaykumar, An integrated circuit/architecture approach to reducing leakage in deep-submicron high performance I-caches, in Proc. IEEE/ACM International Symposium on High-Performance Computer Architecture, 2001, pp. 147-157.

Application of Response Surface Methodology (RSM) for the Removal of Nickel Using Rice Husk Ash as Biosorbent
Sravan Kumar Danda¹, Ch. V. Ramachandramurthy², K. Dayana³, Ch. V. N. Sowjanya¹
¹Research Scholar (M.Tech), Department of Chemical Engineering, Andhra University, A.P, India
²Professor, Department of Chemical Engineering, Andhra University, A.P, India
ABSTRACT
Rice husk ash is a cheap and readily available biosorbent for the removal of nickel ions from aqueous solutions. This investigation comprises equilibrium and kinetics studies of the biosorption of nickel ions from aqueous solutions using rice husk ash powder as a biosorbent in a batch process. The rice husk ash, obtained from a hotel near the Andhra University campus, is dried in sunlight for 15 days. The biosorption is carried out in a batch process by varying four parameters of the solution: pH, dosage, concentration and temperature.
The results indicate that the biosorption of nickel increases with an increase in biosorbent dosage. A significant increase in percentage biosorption of nickel is observed as pH is increased from 2 to 5, and the percentage biosorption decreases beyond pH 5. An increased initial concentration of nickel in the aqueous solution results in a lower percentage of biosorption. The Langmuir, Freundlich and Temkin isotherm models describe the present data very well, indicating favorable biosorption. The biosorption follows pseudo-second-order kinetics.
The present study involves the use of a statistical design to optimize process conditions for maximal biosorption of nickel from aqueous solution using a Box-Behnken Design (BBD) within RSM.
Hence rice husk ash is highly effective as a biosorbent for nickel ions from aqueous solutions and can be considered a versatile, economical, feasible and efficient biosorbent for this purpose.
Keywords: Rice Husk Ash, Biosorption, Nickel, Response Surface Methodology, Box-Behnken Design, Optimization,
Equilibrium isotherms, Low cost biosorbent.

INTRODUCTION:
A vast number of raw materials for industrial processes originate from agricultural activities, which result in the production of
chemical and solid wastes. The chemical wastes arise from the use of pesticides, dyes and fertilizers while the solid wastes
include bagasse, sawdust, rice husk, peanut shell and coffee husk, among others. Interestingly, the agricultural solid wastes can be
converted to adsorption media and used to treat the chemical wastes; a concept of using waste to treat waste. By this concept, the
cost of adsorption material for wastewater treatment, which is a major constraint in wastewater management, is generally reduced.
Due to environmental concerns and the demand for high-quality water, there has been an increase in regulations controlling the
discharge of heavy metals and non-biodegradable toxic compounds into water bodies. This has resulted in developing toxic waste
removal techniques such that only minute quantities remain in the wastewater discharged into water bodies.[1]
Traditional methods for the removal of lead ions from solution include chemical precipitation, ion exchange, electrodialysis and membrane separations. All these methods have various disadvantages, specifically high capital investment and operating cost, incomplete removal, low selectivity and high energy consumption. Therefore, there is a need for a cost-effective treatment method that is capable of removing low concentrations of lead from solution. Over the last decades, biosorption, or sorption of contaminants by sorbents of natural origin, has gained important credibility due to the good performance and low cost of these complexing materials. [2]
Biosorption, which is defined as the accumulation and concentration of pollutants from aqueous solutions by the use of biological
materials, appears to offer a technically feasible and economically attractive approach. The biosorption mechanism of heavy
metals is theorized to be a combination of active and passive transport starting with diffusion of the metal ions to the surface of
the microbial cell. The coordination of metal ions to different functional groups, such as amino, thioether, carboxyl, hydroxyl,
carbonyl, phosphate, phenolic, etc., groups, in or on the algal cell biomass makes it a good biosorbent for removal of heavy metals
from aqueous solutions.[3]
In recent years, biosorption has been suggested as being cheaper and more effective than chemical (precipitation) or physical (ion
exchange and membrane) technologies. Biosorption involves the use of biological materials that form complexes with metal ions
using their ligands or functional groups. Most metal sorption reported in literature is based on bacterial, algal and fungal biomass,
which needs to be cultured, collected from their natural habitats and pre-processed, with the result of additional costs. The use of
biosorbents from numerous lignocellulosic agro wastes is a very constructive approach and has received much attention in
sorption of heavy metals, because they are inexpensive and have high adsorption properties resulted from their ion exchange
capabilities. [4]
Rice husk is an agricultural waste produced as a by-product of the rice milling industry, amounting to more than 100 million tonnes, 96% of which is generated in developing countries. Rice husk is mostly used as a fuel in the boiler furnaces of various industries to produce steam. The ash generated after burning the rice husk in the boiler is called rice husk ash. The R.H.A. was collected from the particulate collection equipment attached upstream of the stack of rice-fired boilers. The ash generated poses a severe disposal problem. The objective of the study in [5] was to explore the possibility of using R.H. and R.H.A. for removing Pb(II) from aqueous solution.
Nickel(II) ion is one such heavy metal frequently encountered in raw wastewater streams from industries such as non-ferrous
metal, mineral processing, paint formulation, electroplating, porcelain enameling, copper sulphate manufacture and steam-electric
power plants. [6]
2. Materials and methods
2.1 Preparation of the adsorbent
Nickel(II) nitrate, Ni(NO3)2.6H2O, is used as the source for the nickel stock solution. All the required solutions are prepared with analytical reagents and double-distilled water. 5.102 g of 98% Ni(NO3)2.6H2O is dissolved in distilled water in a 1.0 L volumetric flask, made up to the mark, to obtain a 1000 ppm (mg/L) nickel stock solution. Synthetic samples of different nickel concentrations are prepared from this stock solution by appropriate dilutions. A 100 mg/L nickel solution is prepared by diluting 10 mL of the 1000 ppm stock with distilled water in a 100 mL volumetric flask up to the mark. Similarly, solutions with different metal concentrations (20 to 200 ppm) are prepared.
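The dilution series follows the usual C1V1 = C2V2 rule. A minimal Python sketch (the helper name is illustrative, not from the paper) giving the volume of 1000 ppm stock required for each target concentration in a 100 mL flask:

```python
# Volume of 1000 ppm Ni(II) stock needed for a diluted sample,
# from the dilution rule C1*V1 = C2*V2 (helper name is illustrative).
def stock_volume_ml(target_ppm, final_ml=100.0, stock_ppm=1000.0):
    return target_ppm * final_ml / stock_ppm

for c in (20, 40, 80, 120, 160, 180, 200):
    print(f"{c:3d} ppm -> {stock_volume_ml(c):5.1f} mL stock, dilute to 100 mL")
```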
2.2 Batch mode adsorption studies
Batch mode adsorption studies for individual metal compounds were carried out to investigate the effect of different parameters such as agitation time, pH, adsorbate concentration, adsorbent dosage, and temperature. A solution containing adsorbate and adsorbent was taken in 250 mL conical flasks and agitated at 180 rpm in a mechanical shaker for predetermined time intervals. The adsorbate was decanted and separated from the adsorbent using filter paper (Whatman No. 1). The filtrates were analyzed in an Atomic Absorption Spectrophotometer.
Table 1. Range of different parameters investigated in the present study

Parameter                                 Values investigated
Agitation time, t, min                    2, 5, 10, 20, 30, 50, 70, 90, 120, 150, and 180
pH of aqueous solution                    2, 3, 4, 5, and 6
Initial nickel concentration, C0, ppm     20, 40, 80, 120, 160, 180, and 200
Adsorbent dosage, w, g                    0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5, and 3
Temperature, K                            283, 293, 303, 313 and 323
3. RESULTS AND DISCUSSION
3.1 Effect of agitation time on biosorption of nickel:
The duration of equilibrium biosorption is defined as the time required for the heavy metal concentration to reach a constant value during biosorption. The equilibrium agitation time is determined by plotting the % biosorption of nickel against agitation time, as shown in Fig. 3.1, for interaction time intervals between 1 and 180 min. For 74 µm particle size and 0.5, 1, and 1.5 gram biosorbent dosages, 47.48% (0.28949 mg/g) of the nickel is biosorbed in the first 5 min. The % biosorption increases briskly up to 90 min, reaching 61.56% (0.375343 mg/g). Beyond 90 min, the % biosorption is constant, indicating the attainment of equilibrium for all dosages of biosorbent.
Fig. 3.1 Effect of agitation time on biosorption of nickel
3.2 Effect of pH of the aqueous solution:
pH controls biosorption by influencing the surface charge of the biosorbent, the degree of ionization and the speciation of the biosorbate. In the present investigation, nickel biosorption data are obtained in the pH range of 2 to 6 of the aqueous solution (C0 = 100 mg/L) using 1 gram of 74 µm size biosorbent. The effect of pH of the aqueous solution on % biosorption of nickel is shown in Fig. 3.2. The % biosorption of nickel increases from 28.19% (2.2725 mg/g) to 52.28% (4.3935 mg/g) as pH is increased from 2 to 5, and decreases beyond pH 5; from pH 5 to 6 the % biosorption falls from 52.28% (4.3935 mg/g) to 47.00% (4.2824 mg/g).
Fig. 3.2 Dependence of % biosorption of nickel on pH
3.3 Effect of initial concentration of nickel:
Fig. 3.3 % biosorption as a function of initial concentration of nickel
The effect of the initial concentration of nickel in the aqueous solution on the percentage biosorption of nickel is shown in Fig. 3.3. The percentage biosorption of nickel decreases from 48.09% (1.6564 mg/g) to 20.19% (4.0602 mg/g) with an increase in C0 from 20 mg/L to 200 mg/L. Such behavior can be attributed to the increase in the amount of biosorbate relative to the unchanging number of available active sites on the biosorbent (since the amount of biosorbent is kept constant).
3.4 Effect of biosorbent dosage:
The percentage biosorption of nickel is plotted against biosorbent dosage for the 74 µm size biosorbent in Fig. 3.4. The biosorption of nickel increases from 34.23% (0.4122 mg/g) to 92.02% (0.1847 mg/g) with an increase in biosorbent dosage from 0.5 to 3 g. Such behavior is expected because, with an increase in biosorbent dosage, more active sites are available for nickel biosorption. Hence all other experiments are conducted at 1.25 gram dosage, where the % biosorption of nickel is 94.56% (0.45552 mg/g).
Fig. 3.4 Dependence of % biosorption of nickel on biosorbent dosage
3.5 Effect of Temperature:
The effect of temperature on the equilibrium metal uptake was significant. The effect of changes in temperature on the nickel uptake is shown in Fig. 3.5. When the temperature was lower than 303 K, nickel uptake increased with increasing temperature, but when the temperature was over 303 K, the trend reversed. This response suggests a different interaction between the ligands on the cell wall and the metal. Below 303 K, chemical biosorption mechanisms played a dominant role in the whole biosorption process, and biosorption was expected to increase with an increase in temperature, while at higher temperatures physical biosorption became the main process. Physical biosorption reactions are normally exothermic, so the extent of biosorption stays roughly constant with further increases in temperature.
Fig. 3.5 Dependence of % biosorption of nickel on temperature
3.6 Freundlich isotherm for adsorption of nickel:
The Freundlich isotherm is drawn between ln Ce and ln qe for the present data, as shown in the figure. The resulting equation has a correlation coefficient of 0.9945. The following equation is obtained from the plot drawn in Fig. 3.6:
ln qe = 0.4024 ln Ce - 0.2718
The slope (n = 0.4024) satisfies the condition 0 < n < 1, indicating favorable biosorption.

Fig.3.6 Freundlich isotherm for biosorption of Nickel
3.7 Langmuir isotherm for adsorption of Nickel:
The Langmuir isotherm drawn for the present data, shown in Fig. 3.7, has good linearity (correlation coefficient R ~ 0.9958), indicating strong binding of nickel ions to the surface of rice husk ash. The separation factor (RL) obtained is 0.6976, which shows favorable adsorption. The following equation is obtained from Fig. 3.7:
Ce/qe = 0.1977 Ce + 8.1535
The isotherm constants for nickel-rice husk ash interactions at 303 K, t = 90 min, Co = 20 mg/L, dp = 74 µm and w = 1.25 g are shown below.
Fig. 3.7 Langmuir isotherm for biosorption of nickel
3.8 Temkin isotherm for adsorption of Nickel:
The present data are analyzed according to the linear form of the Temkin isotherm, and the linear plot is shown in Fig. 3.8. The equation obtained for nickel biosorption is qe = 61.019 ln Ce - 97.183, with a correlation coefficient of 0.9563. The best-fit model is determined based on the linear regression correlation coefficient (R). From Figs. 3.6, 3.7 & 3.8, it is found that the biosorption data are best represented by the Langmuir isotherm, with the highest correlation coefficient of 0.9985, followed by the Freundlich and Temkin isotherms with correlation coefficients of 0.9945 and 0.9692 respectively.
Fig. 3.8 Temkin isotherm for biosorption of Nickel
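The isotherm constants reported in Table 2 below can be recovered from the slopes and intercepts of the linearized fits quoted above. A minimal Python sketch of that back-calculation (assuming the standard linearized forms, with the Freundlich slope labelled n as in the text; Temkin uses B = RT/bT at T = 303 K):

```python
from math import exp

# Back out the isotherm constants of Table 2 from the linearized fits.

# Langmuir: Ce/qe = (1/qm)*Ce + 1/(qm*b)  -> slope 0.1977, intercept 8.1535
slope, intercept = 0.1977, 8.1535
qm = 1.0 / slope                 # ~5.058 mg/g
b = slope / intercept            # ~0.02425 L/g
print(f"Langmuir: qm = {qm:.4f} mg/g, b = {b:.6f} L/g")

# Freundlich (as linearized in the text): ln qe = n*ln Ce + ln Kf
n, ln_Kf = 0.4024, -0.2718
print(f"Freundlich: n = {n}, Kf = {exp(ln_Kf):.3f} mg/g")

# Temkin: qe = B*ln Ce + const, with B = RT/bT  -> B = 61.019
R, T, B = 8.314, 303.0, 61.019
print(f"Temkin: bT = {R * T / B:.2f}")   # ~41.28, as tabulated
```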


Table 2: Isotherm constants

Isotherm     Parameter     At temp 303 K
Langmuir     qm, mg/g      5.05816
             b, L/g        0.024247
             R²            0.9985
Freundlich   n             0.4024
             Kf, mg/g      0.762
             R²            0.9945
Temkin       AT, L/mg      0.09499
             bT            41.2845
             R²            0.9692

3.9 Kinetics of biosorption

Fig. 3.9 First order kinetics for biosorption of nickel

A plot of log(qe - qt) versus t gives a straight line for first order kinetics, facilitating the computation of the adsorption rate constant (Kad).

Fig. 3.10 Second order kinetics for biosorption of nickel

In the present study, the kinetics are investigated with 20 mL of aqueous solution (C0 = 20 mg/L) at 303 K with interaction time intervals of 1 min to 180 min. Lagergren plots of log(qe - qt) versus agitation time (t) for the biosorption of nickel on the 74 µm size rice husk ash over interaction times of 1 to 180 min are drawn in Figs. 3.9 & 3.10. The biosorption follows second order kinetics, as the best-fit line of the t/qt vs. t graph has an R² value of 0.9984.
Table 3: Equations and rate constants

Order                   Equation                          Rate constant       R²
Lagergren first order   log(qe - qt) = -0.0276t - 1.31    0.06356 min⁻¹       0.9753
Pseudo second order     t/qt = 6.5322t + 21.606           1.97 g/(mg·min)     0.9984
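The rate constants in Table 3 follow from the fitted slopes and intercepts: for the Lagergren model k1 = 2.303 × |slope|, and for the pseudo-second-order model qe = 1/slope and k2 = slope²/intercept. A minimal Python sketch:

```python
# Recover the rate constants of Table 3 from the fitted lines (a sketch).

# Lagergren first order: log(qe - qt) = log(qe) - (k1/2.303)*t
slope1 = -0.0276
k1 = -slope1 * 2.303
print(f"k1 = {k1:.5f} 1/min")             # ~0.0636, as tabulated

# Pseudo second order: t/qt = (1/qe)*t + 1/(k2*qe**2)
slope2, intercept2 = 6.5322, 21.606
qe = 1.0 / slope2                          # equilibrium uptake, mg/g
k2 = slope2**2 / intercept2                # = 1/(intercept * qe**2)
print(f"qe = {qe:.4f} mg/g, k2 = {k2:.2f} g/(mg*min)")   # k2 ~ 1.97
```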
3.10 Optimization using Box-Behnken Design:
The experiments were conducted with pH values ranging from 4 to 6, nickel concentrations of 15-25 mg/L, biosorbent dosages of 1 to 1.5 gram and different temperatures, coupled to each other and varied simultaneously to cover the combinations of parameters in the BBD. The levels and ranges chosen for the independent parameters are given in Table 4; the design in Table 5 is employed for the optimization of the parameters.
The regression equation for the optimization of the medium constituents treats the % biosorption of nickel (Y) as a function of pH (X1), initial nickel concentration C0 (X2), biosorbent dosage (X3) and temperature T (X4). Multiple regression analysis of the experimental data has resulted in the following equation for the biosorption of nickel:

Y = -332.126 + 66.68X1 + 8.958X2 + 206.067X3 + 2.923X4 - 6.674X1² - 0.225X2² - 82.487X3² - 0.048X4²   -------- (1)
The result of the above regression model, in the form of an analysis of variance (ANOVA) for Eq. (1), is given in Table 6. The optimal set of conditions for maximum percentage biosorption of nickel is pH = 4.99, biosorbent dosage w = 1.249 gram and initial nickel concentration Co = 19.87428 mg/L; the % biosorption calculated at these optimum conditions is 96.46%. Fig. 3.11 shows the comparison between the % biosorption obtained through experiments and that predicted. The experimental values are in good agreement with the predicted values.
The correlation coefficient (R²) provides a measure of the model's ability to explain the variability in the observed response values. The closer the R² value is to 1, the stronger the model and the better it predicts the response. In the present study the value of the regression coefficient (R² = 0.9998) indicates that only 0.02% of the total variation is not explained by the model. The ANOVA table can be used to test the statistical significance of the ratio of the mean square due to regression to the mean square due to residual error. From that table, it is evident that the F-statistic for the entire model is high. This large value implies that the % removal can be adequately explained by the model equation. Generally, P values lower than 0.05 indicate that a term is statistically significant at the 95% confidence level. The % biosorption predictions from the model are shown in Table 5. From Table 7, it is known that all the squared terms and the linear terms of all the variables are significant (P < 0.05).
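A small Python sketch evaluating the quadratic model of Eq. (1) at the reported optimum; the result (about 96.8%) differs slightly from the quoted 96.46% because the published coefficients are rounded:

```python
# Evaluate the fitted quadratic model of Eq. (1) at the reported optimum
# (small deviations from 96.46 % reflect rounding of the coefficients).
def y_removal(pH, conc, dosage, temp):
    return (-332.126 + 66.68*pH + 8.958*conc + 206.067*dosage + 2.923*temp
            - 6.674*pH**2 - 0.225*conc**2 - 82.487*dosage**2 - 0.048*temp**2)

print(f"{y_removal(4.99538, 19.87428, 1.24909, 30.33469):.2f} % biosorption")
```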
Table 4: Levels of different process variables in coded and un-coded form for % biosorption of nickel using rice husk ash

Variable   Name                               -1     0      +1
X1         pH of aqueous solution             4      5      6
X2         Initial concentration, C0, mg/L    15     20     25
X3         Biosorbent dosage, w, grams        1      1.25   1.5
X4         Temperature, T, K                  293    303    313
Table 5: Results from BBD for nickel biosorption by rice husk ash

Run   pH(X1)   Conc(X2)   Dosage(X3)   Temp(X4)   pH   Conc   Dosage   Temp   % removal   Predicted
1     -1       -1         0            0          4    15     1.25     30     84.66       84.49
2     1        -1         0            0          6    15     1.25     30     84.78       84.37
3     -1       1          0            0          4    25     1.25     30     83.24       83.93
4     1        1          0            0          6    25     1.25     30     83.44       83.8
5     0        0          -1           -1         5    20     1        20     85.72       86.2
6     0        0          1            -1         5    20     1.5      20     86.24       86.12
7     0        0          -1           1          5    20     1        40     86.12       86.84
8     0        0          1            1          5    20     1.5      40     87.38       86.77
9     0        0          0            0          5    20     1.25     30     96.46       96.46
10    -1       0          0            -1         4    20     1.25     20     84.58       84.7
11    1        0          0            -1         6    20     1.25     20     84.76       84.58
12    -1       0          0            1          4    20     1.25     40     85.24       85.35
13    1        0          0            1          6    20     1.25     40     85.51       85.22
14    0        -1         -1           0          5    15     1        30     86.43       85.99
15    0        1          -1           0          5    25     1        30     85.49       85.42
16    0        -1         1            0          5    15     1.5      30     85.16       85.91
17    0        1          1            0          5    25     1.5      30     85.82       85.34
18    0        0          0            0          5    20     1.25     30     96.46       96.46
19    -1       0          -1           0          4    20     1        30     85.68       84.72
20    1        0          -1           0          6    20     1        30     84.36       84.6
21    -1       0          1            0          4    20     1.5      30     84.47       84.65
22    1        0          1            0          6    20     1.5      30     84.28       84.53
23    0        -1         0            -1         5    15     1.25     20     85.91       85.96
24    0        1          0            -1         5    25     1.25     20     85.78       85.4
25    0        -1         0            1          5    15     1.25     40     86.42       86.61
26    0        1          0            1          5    25     1.25     40     86.19       86.04
27    0        0          0            0          5    20     1.25     30     96.46       96.46
28    0        0          0            0          5    20     1.25     30     96.46       96.46
29    0        0          0            0          5    20     1.25     30     96.46       96.46
30    0        0          0            0          5    20     1.25     30     96.46       96.46

Table 6: ANOVA of nickel biosorption for the entire quadratic model

Source of variation   SS         df   Mean square (MS)   F value   P > F
Model                 613.7276   8    152.7468           739.133   0
Error                 4.3398     21   0.2067
Total SS              618.0674   29

df: degrees of freedom; SS: sum of squares; F: F statistic; P: probability.
R² = 0.9896; R² (adj) = 0.9799
Table 7: Estimated regression coefficients for the nickel biosorption onto rice husk ash

Term                    Regression coefficient   Standard error   t(21)      P
Mean/Intercept          -332.126                 7.997247         -41.5300   0.00
(1) pH (L)              66.680                   1.740967         38.3006    0.00
pH (Q)                  -6.674                   0.173601         -38.4454   0.00
(2) Concentration (L)   8.958                    0.278999         32.1076    0.00
Concentration (Q)       -0.225                   0.006944         -32.4546   0.00
(3) Dosage (L)          206.067                  6.963867         29.5908    0.00
Dosage (Q)              -82.487                  2.777622         -29.6969   0.00
(4) Temperature (L)     2.923                    0.104984         27.8423    0.00
Temperature (Q)         -0.048                   0.001736         -27.7528   0.00

(L = linear term, Q = quadratic term; a term would be insignificant for P ≥ 0.05.)

The model is reduced by excluding insignificant terms from Eq. (1); since all terms are significant here, the reduced model retains them all:

Y = -332.126 + 66.68X1 + 8.958X2 + 206.067X3 + 2.923X4 - 6.674X1² - 0.225X2² - 82.487X3² - 0.048X4²   -------- (2)
3.11 Interpretation of residual graphs:
The normal probability plot (NPP) is a graphical technique used for analyzing whether or not a data set is normally distributed. The difference between the observed and predicted values from the regression is termed the residual. Fig. 3.12 exhibits the normal probability plot for the present data. It is evident that the experimental data are reasonably well aligned, implying a normal distribution.

Fig. 3.12 Normal probability plot for % biosorption of nickel
Fig. 3.13 Pareto chart for % biosorption of nickel

Pareto chart
The Pareto chart, presented in Fig. 3.13, is read such that effects crossing the reference line are significant (P < 0.05); the red line on the x-axis marks P = 0.05.
Interaction effects of biosorption variables:
Three-dimensional response surface contour plots [Figs. 3.14(a) & 3.14(b)] exhibit the % biosorption of nickel using rice husk ash for different combinations of the variables. All the plots are delineated as a function of two factors at a time, with the other factors fixed at the zero level. It is evident from the response surface contour plots that the % biosorption is minimal at the low and high levels of the variables. This behavior confirms that there is an optimum of the input variables that maximizes the % biosorption. The vital role played by all the variables in the % biosorption of nickel is seen clearly from the plots. The predicted optimal set of conditions for maximum % biosorption of nickel is:
Biosorbent dosage = 1.24909 gram
Initial nickel ion concentration = 19.87428 mg/L
pH of aqueous solution = 4.99538
% biosorption of nickel = 96.42
The experimental optimum values are compared with those predicted by the BBD in Table 8. The experimental values are in close agreement with those from the BBD.
Table 8: Comparison between optimum values from BBD and experimentation
Critical values; variable: % removal. Predicted value at solution: 96.46917

Variable        Observed (low)   Critical   Observed (high)
pH              4.00000          4.99538    6.00000
Concentration   15.00000         19.87428   25.00000
Dosage          1.00000          1.24909    1.50000
Temp            20.00000         30.33469   40.00000

Surface contour plots.
Fig. 3.14(a) Surface contour plots for the effects of pH, initial concentration and dosage of nickel on % removal
Fig. 3.14(b) Surface contour plots for the effects of temperature and pH, and of dosage and concentration, of nickel on % removal

CONCLUSIONS
Rice husk ash is used as a biosorbent for the removal of nickel ions from aqueous solution. For the biosorption of nickel, the effects of pH, biosorbent dosage, metal ion concentration and contact time were examined, and biosorption isotherm and kinetics studies were carried out. The metal ion concentrations were analyzed using atomic absorption spectroscopy.
1. The equilibrium agitation time for nickel biosorption is 90 min.
2. The % biosorption of nickel increases from 28.19% (2.27 mg/g) to 52.28% (4.39 mg/g) as pH is increased from 2 to 5 and decreases beyond pH 5; the % biosorption falls from pH 5 to 6, reaching 47.01% (4.28 mg/g) from 52.28% (4.39 mg/g). This indicates an optimum pH of 5 for the biosorbent rice husk ash.
3. With an increase in the initial concentration of nickel in the aqueous solution, the percentage biosorption of nickel decreases.
4. The percentage biosorption of nickel increases significantly with biosorbent dosage up to 1.25 grams and thereafter remains constant. The biosorption of nickel increases from 34.23% (0.41 mg/g) to 92.03% (0.18 mg/g) with an increase in biosorbent dosage from 0.5 gram to 1.25 gram.
5. The percentage biosorption of nickel increases significantly with an increase in temperature.
6. The kinetic studies show that the biosorption of nickel is better described by pseudo-second-order kinetics (K2 = 1.97, R² = 0.9984).
The present study involves the use of a statistical design to optimize process conditions for maximal biosorption of nickel from aqueous solution using a BBD within RSM.

REFERENCES:
1. Krishnie Moodley, Ruella Singh, Evans T. Musapatika, Maurice S. Onyango and Aoyi Ochieng, Removal of nickel from wastewater using an agricultural adsorbent, SA Vol. 37, 2011.
2. Srinivasa Rao J., Kesava Rao C. and Prabhakar G., Optimization of biosorption performance of Casuarina leaf powder for the removal of lead using central composite design, 2013.
3. F.A. Abu Al-Rub, M.H. El-Naas, F. Benyahia, I. Ashour, Biosorption of nickel on blank alginate beads, free and immobilized algal cells, 2003.
4. A.G. El-Said, Biosorption of Pb(II) ions from aqueous solutions onto rice husk and its ash, 2010; 6(10).
5. V. Padmavathy, P. Vasudevan, S.C. Dhingra, Biosorption of nickel(II) ions on Baker's yeast, 2002.
6. K. Kishore Kumar, M. Krishna Prasad, B. Sarada, G. V. S. Sarma, Ch. V. R. Murthy, Optimization of Ni(II) removal on Rhizomucor tauricus by using Box-Behnken design, ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 3, May-Jun 2012, pp. 2810-2819.
7. R.H.S.F. Vieira, B. Volesky, Biosorption: a solution to pollution, Int. Microbiol. 3, (2000) 17-24.
8. D. Gialamouidis, M. Mitrakas, M. Liakopoulou-Kyriakides, Biosorption of nickel ions from aqueous solutions by Pseudomonas sp. and Staphylococcus xylosus cells, Desalination 248, (2009), 907-914.
9. M.J. Temkin and V. Pyzhev, Recent modifications to Langmuir isotherms, Acta Physiochim., 12 (1940) 217-222.
10. M. Ozacar, I.A. Sengil, Adsorption of reactive dyes on calcined alunite from aqueous solutions, J. Hazard. Mater. 98, (2003), 211-224.
11. M. Ozacar, I.A. Sengil, Adsorption of reactive dyes on calcined alunite from aqueous solutions, Journal of Hazardous Materials, 98, (2003), 211-224.
12. D. Gialamouidis, M. Mitrakas, M. Liakopoulou-Kyriakides, Equilibrium, thermodynamic and kinetic studies on biosorption of Mn(II) from solution by Pseudomonas sp., Staphylococcus xylosus and Blakeslea trispora cells, Journal of Hazardous Materials 182, (2010), 672-680.


A New Car Selection in the Market using TOPSIS Technique
Srikrishna S¹, Sreenivasulu Reddy. A¹, Vani S¹
¹Department of Mechanical Engineering, Sri Venkateswara University College of Engineering, Tirupati, India
E-mail: seetharamadasubcm@gmail.com

Abstract: Nowadays, purchasing an automobile, especially a car, is a very tough task for customers due to day-to-day changes in various technical and operational parameter specifications like style, life span, fuel economy, suspension and cost. Therefore, to resolve this confusion, a selection procedure is required. TOPSIS is one such selection technique and is adopted for this problem. The technique provides a basis for decision-making processes where there is a limited number of choices but each has a large number of attributes. In this paper some cars with different attributes are considered and the best car is selected using the TOPSIS technique.

Keywords: TOPSIS, MCDM, Car Selection, Normalized decision matrix, Positive and Negative Ideal solutions, Relative closeness, Ranking.
INTRODUCTION
The selection of an automobile is crucial for the purchaser due to the confusion created by the exaggerated publicity of dealers. Choosing just the right one becomes a critical decision-making problem, with the available budget a constraint on which car to buy. Other important criteria in the selection include fuel economy, comfort and convenience features, life span, suspension, style and cost. An appropriate decision-making method for selecting the best car is useful to both customers and manufacturers.
LITERATURE REVIEW
Hwang and Yoon (1981) proposed that the ranking of alternatives be based on the shortest distance from the Positive Ideal Solution (PIS) and the farthest from the Negative Ideal Solution (NIS). Hsu-Shih Shih et al. (2007) investigated an extension of a Multi-Attribute Decision Making (MADM) technique to a group decision environment. Majid Behzadian et al. (2012) gave a state-of-the-art survey of Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) applications.
METHODOLOGY
The objective of this work is to develop the TOPSIS method for car selection. Quantitative and qualitative data were collected for the TOPSIS car selection model, and a seven-step approach was followed to ensure successful implementation.
Selection criteria
Buying a new car is a big decision-making problem and a reflection of customer preference. Since a choice must be made among several cars for a given application, it is necessary to compare their performance characteristics in a proper manner [1]. Some of the main criteria for four-wheelers are fuel economy, quality, life span, style, engine power, engine limits, dimensions of the car and cost of the car. The importance of these criteria is commonly known and thus not elaborated.


Fig.1. Selection criteria of TOPSIS
TOPSIS Method
TOPSIS was first presented by Yoon (1980) and Hwang and Yoon (1981) for solving Multiple Criteria Decision Making (MCDM) problems, based on the concept that the chosen alternative should have the shortest Euclidean distance from the Positive Ideal Solution (PIS) and the farthest from the Negative Ideal Solution (NIS). For instance, the PIS maximizes the benefit and minimizes the cost, whereas the NIS maximizes the cost and minimizes the benefit. It assumes that each criterion is to be either maximized or minimized. TOPSIS is a simple and useful technique for ranking a number of possible alternatives according to closeness to the ideal solution.

The TOPSIS procedure is based on an intuitive and simple idea: the optimal ideal solution, having the maximum benefit, is obtained by selecting the best alternative, which is farthest from the most unsuitable alternative having minimal benefits [3]. The ideal solution would have a rank of 1 (one), while the worst alternative would have a rank approaching 0 (zero). As ideal cars are not probable, each alternative has some intermediate ranking between the ideal solution extremes. Regardless of the absolute accuracy of the rankings, comparison of a number of different cars under the same set of selection criteria allows accurate weighting of relative car suitability and hence optimal car selection.

Mathematically the application of the TOPSIS method involves the following steps.

Step 1: Establish the decision matrix
The first step of the TOPSIS method involves the construction of a Decision Matrix (DM) with elements Xij:
DM = [Xij] ----------- (1)
Where i is the criterion index (i = 1 ... m, m being the number of criteria) and j is the alternative index (j = 1 ... n). The elements C1, C2, ..., Cn refer to the criteria, while L1, L2, ..., Ln refer to the alternatives. The elements of the matrix relate the values of criterion i to alternative j.

Step 2: Calculate a normalised decision matrix
The normalized values form the Normalized Decision Matrix (NDM), which represents the relative performance of the generated design alternatives:
NDM = Rij = Xij / sqrt( Σ (i=1..m) Xij² ) ----------- (2)
Step 3: Determine the weighted decision matrix
Not all of the selection criteria are of equal importance, so weights were introduced from the AHP (Analytic Hierarchy Process) technique to quantify the relative importance of the different selection criteria. The weighted decision matrix is constructed by multiplying each element of each column of the normalized decision matrix by the corresponding weight:
V = Vij = Wj Rij --------- (3)
Step 4: Identify the positive and negative ideal solutions
The positive ideal (A+) and the negative ideal (A-) solutions are defined from the weighted decision matrix via equations (4) and (5) below:
PIS = A+ = {V1+, V2+, ..., Vn+}, where Vj+ = {(max_i Vij if j ∈ J); (min_i Vij if j ∈ J')} ------------ (4)
NIS = A- = {V1-, V2-, ..., Vn-}, where Vj- = {(min_i Vij if j ∈ J); (max_i Vij if j ∈ J')} ------------ (5)
Where J is associated with the beneficial attributes and J' is associated with the non-beneficial attributes.

Step 5:Calculate the separation distance of each competitive alternative from the ideal and non- idealsolution.
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

179 www.ijergs.org

S
+
=

(V
+
j
n
j=1
V
ij
)
2
i = 1,...., m --------- (6)
S

(V

j
n
j=1
V
ij
)
2
i = 1,...., m --------- (7)
Where, i = criterion index, j = alternative index.
Step 6: Measure the relative closeness of each alternative to the ideal solution
For each competitive alternative the relative closeness to the ideal solution is computed:
Ci = S-_i / (S+_i + S-_i),  0 ≤ Ci ≤ 1 --------- (8)
Step 7: Rank the preference order
According to the value of Ci, the higher the relative closeness, the higher the ranking order and hence the better the performance of the alternative. Ranking the preferences in descending order thus allows relatively better performances to be compared.
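A compact Python sketch of the seven steps applied to the decision matrix of Table 2 below; treating cost as the only non-beneficial criterion reproduces the S+, S- and Ci values reported in the Results section:

```python
import math

# TOPSIS on the car-selection data (a sketch of steps 1-7 above).
X = [  # rows: Ertiga, Swift, Indica, Alto 800; cols: style, lifespan, fuel, cost
    [6, 7, 8, 6],
    [8, 7, 8, 7],
    [7, 9, 9, 8],
    [9, 6, 8, 9],
]
w = [0.1, 0.4, 0.3, 0.2]
benefit = [True, True, True, False]   # cost is non-beneficial

# Steps 2-3: vector-normalise each column, then apply the weights.
norms = [math.sqrt(sum(row[j]**2 for row in X)) for j in range(4)]
V = [[w[j] * row[j] / norms[j] for j in range(4)] for row in X]

# Step 4: positive and negative ideal solutions.
A_pos = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
A_neg = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]

# Steps 5-6: separation distances and relative closeness.
for name, row in zip(["Ertiga", "Swift", "Indica", "Alto 800"], V):
    s_pos = math.dist(row, A_pos)
    s_neg = math.dist(row, A_neg)
    print(f"{name:>8}: Ci = {s_neg / (s_pos + s_neg):.2f}")
# -> 0.45, 0.41, 0.74, 0.17: Tata Indica ranks first.
```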

INPUT TABLES
Table 1: Criterion parametric values

Attribute                 Maruti Ertiga      Swift             Tata Indica      Alto 800
Fuel economy (city)       18 km/l            15.2 km/l         20 km/l          16 km/l
Fuel economy (highway)    22.2 km/l          18.6 km/l         24 km/l          21.7 km/l
Style                     Better             Extreme           Good             Good
Life span (average)       10 yrs             12 yrs            10 yrs           8 yrs
Cost (Rs)                 5.99-8.77 lakhs    4.58-6.9 lakhs    4.20-5.3 lakhs   2.5-3.6 lakhs
Table 2: Elements of the decision matrix

Alternative      Style   Lifespan   Fuel economy   Cost
Maruti Ertiga    6       7          8              6
Swift            8       7          8              7
Tata Indica      7       9          9              8
Alto 800         9       6          8              9
Weights          0.1     0.4        0.3            0.2

RESULTS
- Starting from the decision matrix built from the selection criteria, the normalised decision matrix is computed first. According to equation (2), the entries of the Tata Indica row, for example, work out as:

R31 = 7 / (6² + 8² + 7² + 9²)^(1/2) = 0.46
R32 = 9 / (7² + 7² + 9² + 6²)^(1/2) = 0.61
R33 = 9 / (8² + 8² + 9² + 8²)^(1/2) = 0.54
R34 = 8 / (6² + 7² + 8² + 9²)^(1/2) = 0.53
Table 3: Normalised values of the decision matrix

Alternative      Style   Lifespan   Fuel economy   Cost
Maruti Ertiga    0.40    0.48       0.48           0.40
Swift            0.53    0.48       0.48           0.46
Tata Indica      0.46    0.61       0.54           0.53
Alto 800         0.59    0.41       0.48           0.59

- The normalised matrix is then multiplied by the criteria weights. For the Tata Indica row:

V31 = 0.1 × 0.46 = 0.046
V32 = 0.4 × 0.61 = 0.244
V33 = 0.3 × 0.54 = 0.162
V34 = 0.2 × 0.53 = 0.106
Table 4: Weighted values of the decision matrix

Alternative      Style   Lifespan   Fuel economy   Cost
Maruti Ertiga    0.040   0.192      0.144          0.080
Swift            0.053   0.192      0.144          0.092
Tata Indica      0.046   0.244      0.162          0.106
Alto 800         0.059   0.164      0.144          0.118

- The positive ideal (A+) and the negative ideal (A-) solutions are defined according to the weighted decision matrix via equations (4) and (5), where J is associated with the beneficial attributes and J' with the non-beneficial attributes. The separation distances of each competitive alternative from the ideal and non-ideal solutions (equations (6) and (7)) are:
S+ = {0.058; 0.057; 0.029; 0.090}
S- = {0.047; 0.040; 0.083; 0.019}
- For each competitive alternative the relative closeness to the ideal solution is computed (equation (8)):
Ci = {0.45; 0.41; 0.74; 0.17}
- The maximum value indicates the best alternative; any value below 1 is an acceptable condition.

Fig. 2. Histogram of the relative closeness values: Maruti Ertiga 0.45, Swift 0.41, Tata Indica 0.74, Alto 800 0.17
CONCLUSIONS
The proposed procedure for four-wheeler selection finds the best car among those available in the market using a decision-making method. After checking the aggregations of various process parameters under different circumstances, it is observed that the proposed model is rather simple to use and meaningful for aggregating the process parameters. TOPSIS is applied to obtain the final ranking preferences in descending order, thus allowing relative performances to be compared.

From the results it is observed that MARUTI ERTIGA, SWIFT, TATA INDICA and ALTO 800 obtained relative closeness values to the ideal solution of 0.45, 0.41, 0.74 and 0.17 respectively.
It is observed that the INDICA, which has the best relative closeness value, is identified as the best car among those considered.

REFERENCES:
[1] Deng, H., Yeh, C.H., Willis, R.J., "Inter-company comparison using modified TOPSIS with objective weights", Computers & Operations Research, 27, 2000, 963-973.
[2] C. L. Hwang & K. Yoon, Multiple Attribute Decision Making: Methods & Applications, Berlin Heidelberg New York, Springer-Verlag, 1981.
[3] Hu Yonghong, The Improvement of the Application of TOPSIS Method to Comprehensive Evaluation, Mathematics in Practice and Theory, 32(4), 572-575, 2002.
[4] Li Chunhui, Li Aizhen, The Application of TOPSIS Method to Comprehensive Assessment of Environmental Quality, Journal of Geological Hazard and Environmental Preservation, 10(2), 9-13, 1999.
[5] Vimal J., Chaturverdi V., Dubey A.K., Application of TOPSIS method for supplier selection in manufacturing industry, IJREAS, 2(5), 2012, 25-35.
[6] Y. J. Lai, T. Y. Liu, and C. L. Hwang, TOPSIS for MODM, European Journal of Operational Research, 76, 1994, 486-500.
[7] Majid Behzadian, S. Khanmohammadi Otaghsara, Morteza Yazdani, Joshua Ignatius, A review on state-of-the-art survey of TOPSIS applications, Expert Systems with Applications, 39, 2012, 13051-13069.
[8] Jiang J., Chen Y.W., Tang D.W., Chen Y.W. (2010), "TOPSIS with belief structure for group belief multiple criteria decision making", International Journal of Automation and Computing, vol. 7, no. 3, pp. 359-364.
[9] Huang, Y. S., & Li, W. H. (2010). A study on aggregation of TOPSIS ideal solutions for group decision-making. Group Decision and Negotiation. http://dx.doi.org/10.1007/s10726-010-9218-2.
[10] H.S. Byun, K.H. Lee, A decision support system for the selection of a rapid prototyping process using the modified TOPSIS method, International Journal of Advanced Manufacturing Technology, 26(11-12) (2005) 1338-1347.
[11] C. L. Hwang, Y. J. Lai, and T. Y. Liu, A new approach for multiple objective decision making, Computers and Operational Research, 20, pp. 889-899, 1983.
[12] Fazlollahtabar, H., Mahdavi, I., Talebi Ashoori, M., Kaviani, S., & Mahdavi-Amiri, N. (2011). A multi-objective decision-making process of supplier selection and order allocation for multi-period scheduling in an electronic market. International Journal of Advanced Manufacturing Technology, 52, 1039-1052.









Biometric Template Feature Extraction and Matching Using ISEF Edge
Detection and Contouring Based Algorithm
Deven Trivedi¹, Rohit Thanki¹, Ashish Kothari²
¹ PhD Research Scholar, C. U. Shah University, Near Kothariya Village, Wadhwan City, Gujarat, India
² Assistant Professor, Atmiya Institute of Technology & Science, Rajkot, Gujarat, India
Abstract - Biometric authentication systems are now used by many agencies for security purposes. They have become popular because every human presents unique biometric characteristics and biometric recognition can be performed automatically. A biometric authentication system comprises four steps: biometric template acquisition using a sensor, feature extraction, template matching and the authentication decision. In this paper we describe a new approach to biometric template feature extraction and template matching that combines ISEF edge detection with a contour-based recognition algorithm. We exploit the properties of the infinite symmetric exponential filter (ISEF), which operates on the edge points of the template. The approach is applicable to fingerprint, iris and face biometric templates; we apply it here to iris templates, both because relatively few algorithms are available in the literature for iris recognition and because of the popularity of the iris for human identification.

Keywords - Biometric Recognition, Contouring, Edge Detection, Feature Extraction, ISEF, Iris Pattern, Template Matching.
INTRODUCTION
For many years in biometric image recognition, human identification has been performed automatically by various template matching algorithms based on human characteristics such as fingerprint, face, iris, palm print and teeth. Biometric recognition is a challenging topic in the pattern recognition area and is used for enrollment, verification and authentication of biometric templates in a biometric system [1]. In the enrollment process, the system enrolls a human's characteristics, which may be an iris or a fingerprint, into the system database. In the verification process, the system verifies a query human's characteristics against that human's own enrolled biometric characteristics. In the authentication process, the system authenticates a human's characteristics by comparing all enrolled biometric characteristics with the human's own biometric templates stored in the system database [2, 3].

Many algorithms proposed for iris recognition over the last decade are reviewed here. An iris recognition algorithm analyzes the random pattern of the human iris [4]. John Daugman introduced the first iris recognition algorithm and system [5]; the algorithm is based on 2D Gabor wavelets which identify the outer boundaries of the iris and the pupil. The data of this region is converted into a binary value pattern which is used for human identification. When a query iris image is presented by a human, a statistical comparison takes place between the query template and the enrolled template for verification or authentication.

The author in [6] described an iris algorithm using phase-based image matching, in which the phase components of the 2D Discrete Fourier Transform (DFT) of the iris image are used for template matching. The author in [7] developed an open-source iris recognition algorithm based on Daugman's method using MATLAB; it is divided into three steps: automatic segmentation, normalization, and feature encoding and matching. The authors in [8] described an iris recognition algorithm based on a combination of PCA and ICA, and showed that good performance is achieved when PCA and ICA are used to encode the iris image.

The authors in [9] proposed a new iris recognition algorithm using edge detection and the zero crossings of the wavelet transform [10]. They calculated the zero-crossing values at various wavelet resolution levels over concentric circles on the iris; these values are used as model features for comparison with the enrolled feature values. The system can be used under noisy conditions and varying illumination. The authors in [11] present a similar approach based on the zero-crossing discrete dyadic wavelet transform representation, which improved the accuracy of iris recognition. The authors in [12] proposed an iris feature extraction algorithm based on Multiresolution Independent Component Identification (M-ICA); the extracted features are compared with enrolled data using conventional matching algorithms. The authors in [13] proposed a thresholding-based iris recognition algorithm that detects the pupil and the surroundings of the iris image and converts them into a rectangular format; this pattern is matched with enrolled data using self-organizing map networks, and the accuracy of the algorithm is 83%.

The authors in [14, 15] proposed an iris recognition algorithm based on circular symmetric filters, which capture local texture information of the iris image for the construction of a fixed-length feature vector. The reported error rates of the proposed algorithm were
0.01% for False Acceptance and 2.17% for False Rejection. The authors in [16] proposed an iris feature extraction algorithm using the fractal dimension: the iris is divided into small blocks and local fractal features are computed from each block as the iris pattern. The reported results were 91% acceptance for enrolled users and 100% rejection for impostor users. The authors in [17] proposed an iris recognition algorithm using Gabor filters and the wavelet transform; its performance is invariant to translation and rotation and tolerant to illumination changes. The authors in [18] proposed a new algorithm for iris feature extraction in which the iris image is localized using the Hough transform and features are extracted using the instantaneous phase or emergent frequency. The iris pattern is generated by thresholding the frequency and the real and imaginary parts of the phase. Finally, matching is performed using the Hamming distance.

The authors in [19] proposed iris feature extraction using the Haar wavelet transform, where a fourth-level wavelet decomposition is applied to the iris image to obtain an 87-bit feature vector; the recognition rate obtained is around 98.4%. The authors in [20] proposed two iris recognition algorithms based partly on correlation analysis and partly on the median binary code of commensurable regions of the digitized iris image. A similar method for the eye-iris structure using statistical and spectral analysis of color iris images was proposed by the authors in [21], who used Wiener spectra to characterize the iris image. The authors in [22, 23] explained and classified the human iris structure using coherent Fourier spectra of the optical transmission. The authors in [24] proposed an iris recognition algorithm for biometric security with high performance and confidence; the method follows these steps: acquiring the iris pattern, determining the location of the iris boundaries, converting the iris boundary to polar coordinates, extracting the iris code based on wavelet texture analysis, and classifying the iris code. The algorithm uses wavelet transforms for feature analysis and depends on knowledge of the general structure of the human iris. The authors in [25] give a review and comparison of various extraction and recognition algorithms for human iris images.

In this paper we describe an ISEF-based edge detection and contour-based biometric recognition algorithm for iris template feature extraction and iris template matching in a biometric system. The paper is organized as follows: Section 2 briefly describes the ISEF algorithm; Section 3 presents the contour algorithm; Section 4 gives the proposed algorithm; Section 5 gives the experimental results; and Section 6 concludes.
ISEF EDGE DETECTION ALGORITHM [26, 28, 32 AND 34]
Shen and Castan introduced a novel edge detection algorithm based on the infinite symmetric exponential filter (ISEF) [26, 29]. The algorithm is divided into the following steps: recursive filtering in the X direction, recursive filtering in the Y direction, binary Laplacian image, non-maximum suppression, gradient computation, hysteresis thresholding and thinning [26, 28, 32 and 34]. Shen and Castan agree with Canny about the general form of the edge detector: a convolution with a smoothing kernel followed by a search for edge pixels [27, 29]. The steps of the ISEF edge detection algorithm, applied to any biometric image to obtain its edges, are given in Table 1. Figure 1 shows the result of applying the Canny edge detector and the Shen-Castan edge detector to the test biometric image.


(a) Canny Operated

(b) ISEF Operated
Figure 1. Edge Detector Performance on Test Iris Image

Table I
ISEF EDGE DETECTION ALGORITHM [26, 28, 32 AND 34]

Sr. No. Steps
1 Recursive Filtering in X Direction
2 Recursive Filtering in Y Direction
3 Apply Binary Laplacian Technique
4 Apply Non Maxima Suppression
5 Find the Gradient
6 Apply Hysteresis Thresholding
7 Apply Thinning
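To make steps 1 and 2 concrete, the following is a minimal NumPy sketch of the symmetric exponential smoothing realized as a causal plus an anti-causal first-order recursion, applied along X and then Y. It is an illustration under assumptions (the smoothing parameter b and the normalization are ours), not the authors' implementation.

import numpy as np

def isef_1d(x, b=0.9):
    """1-D ISEF smoothing, i.e. an exponential kernel ~ b**|n|,
    realized as a causal plus an anti-causal first-order recursion."""
    n = len(x)
    causal = np.zeros(n)
    anti = np.zeros(n)
    for i in range(n):                       # causal (left-to-right) pass
        prev = causal[i - 1] if i > 0 else 0.0
        causal[i] = (1.0 - b) * x[i] + b * prev
    for i in range(n - 1, -1, -1):           # anti-causal (right-to-left) pass
        nxt = anti[i + 1] if i < n - 1 else 0.0
        anti[i] = (1.0 - b) * x[i] + b * nxt
    # subtract (1-b)*x once so the center sample is not counted twice
    return causal + anti - (1.0 - b) * x

def isef_smooth(img, b=0.9):
    """Steps 1-2 of Table 1: recursive filtering in X, then in Y."""
    rows = np.apply_along_axis(isef_1d, 1, img.astype(float), b)
    return np.apply_along_axis(isef_1d, 0, rows, b)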

CONTOUR ALGORITHM [30 AND 33]
The description of the contour algorithm is taken from the reference documentation [30, 33], which explains how contours are applied to an image and how contour labels are applied to obtain matrix values. The MATLAB contourc function calculates the contour matrix used by the other contour functions; it is a low-level function that is normally not called from the command line. contour, contour3 and contourf return a two-row matrix specifying all the contour lines [30, 33]. The format of the matrix is

C = [value1 xdata(1) xdata(2) ...;
     numv   ydata(1) ydata(2) ...]

The first row of the column that begins each contour-line definition contains the value of the contour (as specified by v and used by clabel). Beneath that value is the number of (x, y) vertices in the contour line. The remaining columns contain the data for the (x, y) pairs [30, 33].
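A short sketch may help in reading this two-row format. The hypothetical Python helper below splits such a matrix into (level, vertices) pairs; it merely mirrors the layout described above and is not part of the cited code.

import numpy as np

def split_contour_matrix(C):
    """Split a MATLAB-style 2 x N contour matrix into (level, vertices) pairs."""
    segments = []
    col = 0
    while col < C.shape[1]:
        level = C[0, col]            # contour level for this line
        count = int(C[1, col])       # number of (x, y) vertices that follow
        verts = C[:, col + 1 : col + 1 + count]  # 2 x count block of x/y data
        segments.append((level, verts))
        col += count + 1
    return segments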
PROPOSED BIOMETRIC RECOGNITION ALGORITHM
Based on the discussion of the ISEF edge detection algorithm and the contour algorithm, we construct the block diagram of the proposed biometric recognition scheme shown in Figure 2. The first step is to acquire the query biometric template from the biometric sensor; the ISEF edge detection algorithm is then applied to the template to extract its biometric features. After obtaining the features in terms of edges, contouring and contour labeling are applied to these edges to obtain the contour matrix, whose values are used as the features for comparison. The contour matrix values are compared with the enrolled contour matrix values of the human, stored at the time of the enrollment process, and the score between the two contour matrices determines the decision about human authentication. The proposed iris recognition outline is given in Table 2, and a code sketch of these steps follows the table.

Figure 2. Block Diagram of Proposed Biometric Recognition Algorithm

Table II
PROPOSED BIOMETRIC RECOGNITION ALGORITHM

Step No. Action Taken
1 Acquire Biometric Template
2 Apply ISEF Edge Detection Algorithm on Biometric Template
3 Apply Image Contouring on Extracted Edges of Biometric Template
4 Label the Contours to get Contour Matrix Values of Biometric Template
5 Use the Contour Matrix Values as Features of Biometric Template for Comparison
6 Match Extracted Digits of Contour Matrix of Feature of Biometric Template with Digits of Contour Matrix of Feature of Database Biometric Template
7 Decision on Biometric Recognition and Human Identification
EXPERIMENTAL RESULTS
The performance of the proposed biometric template recognition algorithm is evaluated using iris templates from the CASIA iris database [31], samples of which are shown in Figure 3. The selected iris image size is M x N = 128 x 128 pixels.


Figure 3. Sample Test Iris Images from CASIA Database: (a) I1, (b) I2
An automated iris recognition system consists of two main stages: feature extraction and feature matching. In this paper the extracted features are the iris contours, and feature matching takes place between the contour values of the query template and the enrolled template. The edges could be detected by any template-based edge detector, but the Shen-Castan infinite symmetric exponential filter is an optimal edge detector, like the Canny detector, and gives an optimally filtered image [26, 34]. First the whole iris image is filtered by the recursive ISEF filter in the X direction and then in the Y direction [26, 28 and 34].

As shown in Fig. 4 (a) and (b), the recursive filter is applied in the X direction and then in the Y direction on the iris image. Fig. 5 shows the binary Laplacian iris image, derived from the filtered and original images. Fig. 6 is obtained after applying the gradient to the binary Laplacian image. Fig. 7 is generated by applying the complete chain of recursive filtering in X and Y, binary Laplacian image, non-maximum suppression, gradient, hysteresis thresholding and thinning to the iris image. Finally, contour code is applied to the ISEF edge-detected iris image and to the original iris image, as shown in Figs. 8 and 9 respectively.



(a) Recursive Filter in X Direction

(b) Recursive Filter in Y Direction
Figure 4. Recursive Filtering on Test Iris Image I1


Figure 5. Binary Laplacian Test Iris Image I1

Figure 6. Gradient Test Iris Image I1


Figure 7. ISEF Operated Test Iris Image I1

Figure 8. Contour on ISEF Operated Test Iris Image I1


Figure 9. Contour on Original Test Iris Image I1
Applying the contour codes to the images of Figs. 8 and 9 produced more than 10000 columns of contour data; we have taken only columns 4001 to 4007. The contour matrix values of the original iris image and the ISEF-operated iris image are given in Tables 3 and 4 respectively.
Table III
CONTOUR MATRIX VALUES OF COLUMNS 4001 TO 4007 FOR ORIGINAL IRIS TEST IMAGE I1

111.1 111.1 111.0 110.0 109.0 108.0 107.9
6.00 5.00 4.88 4.88 4.88 4.50 4.00

Table IV
CONTOUR MATRIX VALUES OF COLUMNS 4001 TO 4007 FOR ISEF OPERATED IRIS TEST IMAGE I1

56.0 56.0 0.33 74.0 73.7 74.0 74.3
68.3 68.3 5.0 127.3 126.7 17.0 127.3

These columns are then ready for iris template matching. The contour matrix values of the query iris image are compared with those of the enrolled iris image; if the difference is zero the human is identified, otherwise the human is not recognized. Thus, when the iris image of an enrolled human is presented as a query at the sensor, almost all the columns match and the Euclidean distance between the digits of the query contour matrix and those of the enrolled contour matrix is zero. Table 5 shows the contour matrix values of iris test image I2 of Figure 3, which differ from those of iris test image I1; this indicates that the contour matrix values of different iris images are different and can be used as the recognition decision for human identification. For performance analysis of the proposed algorithm we took 50 iris images from the database, chose a contouring size of 5 with a threshold value of less than 100, and considered image 50 as the enrolled iris data in the system database. The results in Table 6 show that when image one arrives as a query the matching percentage is 100, and otherwise it is less than 85.00 percent.
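The matching rule just described (Euclidean distance between the digits of the two contour matrices, zero for the enrolled user) can be sketched as follows. How the distance maps to the percentages of Table 6 is not specified in the paper, so the normalization here is a hypothetical choice.

import numpy as np

def match_percentage(query_cols, enrolled_cols):
    """Compare two blocks of contour-matrix columns (e.g. columns 4001-4007)
    and return a similarity percentage based on Euclidean distance."""
    q = np.asarray(query_cols, dtype=float).ravel()
    e = np.asarray(enrolled_cols, dtype=float).ravel()
    dist = np.linalg.norm(q - e)       # zero when the templates match exactly
    scale = np.linalg.norm(e) or 1.0   # hypothetical normalization choice
    return max(0.0, 100.0 * (1.0 - dist / scale))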

Table V
CONTOUR MATRIX VALUES OF COLUMNS 4001 TO 4007 FOR ISEF OPERATED IRIS TEST IMAGE I2

50.0 51.0 52.0 53.0 54.0 55.0 56.0
101.7 101.7 101.7 101.7 101.7 101.7 101.7

Table VI
MATCHING PERCENTAGE OF PROPOSED RECOGNITION ALGORITHM FOR TEST IRIS IMAGE I1

Image Number Matching Percentages
1 100
10 84.40
20 30.10
30 29.40
40 29.40
50 25.10

ACKNOWLEDGMENT
The authors are highly thankful to the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), China, for providing the iris image database.
CONCLUSION
In this paper we have proposed a novel algorithm for iris recognition based on ISEF edge detection and a contour-based algorithm. Applying the proposed recognition algorithm to a test iris image yields more than 10000 columns of contour data, from which we chose columns 4001 to 4010 for matching. When the query iris image belongs to an enrolled human's iris, almost all the columns match nearly exactly; otherwise the columns differ. The proposed algorithm performs well for iris recognition in noisy environments because recursive filtering is applied before edge detection.

REFERENCES:

[1] A. Jain and A. Kumar, Biometric Recognition: An Overview, Second Generation Biometrics: The Ethical, Legal and Social
Context, E. Mordini and D. Tzovaras (Eds.), Springer, 2012, pp. 49-79.
[2] A. Jain, A. Ross and S. Prabhakar, An Introduction to Biometric Recognition, IEEE Transactions on Circuits and Systems for
Video Technology, Special Issue on Image and Video Based Biometrics, vol. 14, no. 1, January 2004, pp. 4-20.
[3] Biometrics and Standards, ITU-T Technology Watch Report, December 2009.
[4] D. Walker, Image Recognition Biometric Technologies Make Strides, 2006.
[5] J. Daugman, How Iris Recognition Works, IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1,
January 2004, pp. 21-30.
[6] K. Miyazawa, K. Lto, T. Aoki, K. Kobayashi and H. Nakajima, A Phase Based Iris Recognition Algorithm, D. Zhang and A. K.
Jain (Eds.), ICB 2006, LNCS 3832, Springer Verlag Berlin Heidelberg, 2005, pp. 356-365.
[7] L. Masek and P. Kovesi, Recognition of Human Iris Patterns for Biometric Identification, MS Dissertation, School of Computer
Science and Software Engineering, University of Western Australia, 2003.
[8] V. Dorairaj, N. Schmid, and G. Fahmy, Performance Evaluation of Iris-based Recognition System Implementing PCA and ICA
Encoding Techniques, In Defense and Security, International Society for Optics and Photonics, March 2005, pp. 51-58.
[9] W. Boles and B. Boashash, Human Identification Technique Using Images of the Iris and Wavelet Transform, IEEE
Transactions on Signal Processing, vol. 46, no.4, 1998, pp. 1185-1188.
[10] S. Mallat, Zero-crossing of a Wavelet Transform, IEEE Transactions on Information Theory, vol. 37, no. 14, 1991, pp. 1019-
1033.
[11] C. Sanchez, R. Sanchez and D. Roche, Iris Recognition for Biometric Identification Using Dyadic Wavelet Transform Zero-Crossing, Proceedings of the IEEE 35th International Conference on Security Technology, Camahan, 2001, pp. 272-277.
[12] N. Seuung, P. Kwanghuk, L. Chulhy, and J. Kim, Multiresolution Independent Component Identification, Proceedings of the
2002 International Technical Conference on Circuits, Systems, Computers and Communications, Phuket, Thailand, 2002.
[13] J. Dargham, A. Chekima, F. Chung and L. Liam, Iris Recognition Using Self Organizing Neural Network, Student Conference
on Research and Development, 2002, pp. 169-172.
[14] L. Ma, W. Tieniu and Yunhong, Iris Recognition Based on Multichannel Gabor Filtering, Proceedings of the International
Conference on Asian Conference on Computer Vision, 2002, pp. 1-5.
[15] L. Ma, W. Tieniu and Yunhong, Iris Recognition Using Circular Symmetric Filters, Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, 2002, pp. 414-417.
[16] W. Chen and Y. Yuan, A Novel Personal Biometric Authentication Technique Using Human Iris Based on Fractal Dimension
Features, Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2003.
[17] Z. Yong, T. Tieniu and Y. Wang, Biometric Personal Identification Based on Iris Patterns, Proceedings of the IEEE
International Conference on Pattern Recognition, 2000, pp. 2801-2804.
[18] C. Tisse, L. Torres and M. Robert, Person Identification Technique Using Human Iris Recognition, Proceedings of the 15th International Conference on Vision Interface, 2002.
[19] S. Lim, K. Lee, O. Byeon and T. Kim, Efficient Iris Recognition through Improvement of Feature Vector and Classifier,
Journal of Electronics and Telecommunication Research Institute, vol. 23, no. 2, 2001, pp. 61-70.
[20] M. Labor and P. Jaroslav, Alternatives of the Statistical Evaluation of the Human Iris Structure, Proceedings of the SPIE, vol.
4356, 2001, pp. 385-393.
[21] E. Gurianov, A. Ximnyakov and A. Galanzha, Iris Patterns Characterization by use of Wiener Spectra Analysis: Potentialities
and Restrictions, Proceedings of the SPIE, vol. 4242, 2001, pp. 286-290.
[22] P. Kois, A. Muron and P. Jaroslav, Human Iris Structure by the Method of Coherent Optical Fourier Transform, Proceedings of
the SPIE, vol. 4356, 2001, pp. 394-400.
[23] A. Muron, P. Kois and J. Pospisil, Identification of Persons by Means of the Fourier Spectra of the Optical Transmission Binary
Models of the Human Irises, Optics Communications, vol. 192, 2001, pp. 161-167.
[24] M. Jafar and A. Haussanien, An Iris Recognition System to Enhance E-Security Environment Based on Wavelet Theory, AMO
Advanced Modeling and Optimization, vol. 5, no. 2, 2003, pp. 93-104.
[25] M. Vatsa, R. Singh and P. Gupta, Comparison of Iris Recognition Algorithms, Proceedings of the ICISIP 2004, 2004, pp. 354-
358.
[26] S. Castan, J. Zhao and J. Shen, New Edge Detection Methods Based on Exponential Filter, Proceedings of the 10th International Conference on Pattern Recognition, vol. 1, issue 16, June 1990, pp. 709-711.
[27] J. Canny, A Computational Approach to Edge Detector, IEEE Transactions on PAMI, 1986, pp. 679-698.
[28] K. Pithadiya, C. Modi, J. Chauhan and K. Jain, Performance Evaluation of ISEF and Canny Edge Detector in Acrylic Fiber
Quality Control Production, Proceedings of National Conference on Innovations in Mechatronics Engineering , G. H. Patel
College of Engineering & Technology, Vallabh Vidyanagar, 2009, pp. 89-92.
[29] D. Marr and E. Hildreth, Theory of Edge Detection, Proceedings of the Royal Society of London, Series B. Biological
Sciences, 207(1167), 1980, pp. 187-217.
[30] www.mathworks.com/help/matlab/ref/contour.html, for contour algorithm
[31] CASIA Iris Image Database (version 1.0), http://www.sinobiometircs.com/casiairis.html
[32] A. Martin and S. Tosunoglu, Image Processing Techniques for Machine Vision, Miami, Florida, 2000
[33] http://nf.nci.org.au/facilities/software/Matlab/pdf_doc/matlab/graphg.pdf.pan.txt , for contour algorithm.
[34] A. Solanki, K. Jain and N. Desai, ISEF Based Identification of RCT/Filling in Dental Caries of Decayed Tooth, International
Journal Image Processing (IJIP), vol. 7, issue 2, 2013, pp. 149-162


















Ram Control Block of Vector Display Processor
N. Agarwala¹
¹ Lecturer, Department of EEE, School of Science and Engineering, Southeast University, Dhaka, Bangladesh
Abstract - A Vector Display Processor allows efficient encoding of topology, and as a result operations that require topological information, e.g. proximity and network analysis, can be performed more efficiently [1]. In this work a RAM Control Block, which is a part of the Vector Display Processor, has been designed; the block was synthesized with its test bench, and the generated simulation waveform was checked against the expected inputs and outputs.
Keywords VDP (Vector Display Processor); Ram Control Block; Draw Block.
1. Introduction
The Vector Display Processor consists of two blocks: the Draw Block and the RAM Control Block. The RAM Control Block is the interface between the Draw Block and the VRAM. On such a display, data can be represented at its original resolution and form without generalization. Graphic output is usually more aesthetically pleasing; since most data, e.g. hard-copy maps, is in vector form, no data conversion is required and the accurate geographic location of the data is maintained [1]. Synthesizability is one of the most important design issues to keep in mind when writing VHDL for hardware: the code must be synthesizable. The top-level (Vector Display Processor) design integrates several parts that together behave as a single device; the RAM Control Block is one of these parts and itself contains several modules that behave as a single block.

2. Methods
2.1 Design Description:
This is the most important part of the work, as all the processes are based on this section. Several aspects were taken into account: the specification, the Finite State Machine (FSM), the code description, the waveform, and so on.
2.1.1 Specifications:
Inputs:
X and Y: These are 6-bit vectors; the first 4 bits of each are used for the current word address and the other 2 bits identify the bit position at which to update the data.
Pen: Pen has 2 bits to give the color which are listed below:

Input Color
00 Black
01 White
10 Invert
11 Illegal

Draw pixel: This is a single-bit input which comes from the Draw Block; it tells the RAM Control Block to update the data according to the pen input and perform the pixel-drawing operation.

Flush: Flush is a 1-bit input which is used to write the data held in data_reg (stored_ram_word) to RAM at the address held in address_reg (current_word_addr).

vdout: This is a 16-bit input which comes from the VRAM; its value is stored in data_reg.
Outputs:

ack: This is a 1 bit output which tells the Draw Block whether the RAM Control Block is free or not.

vaddr: This is a 7-bit output, taken from address_reg or from (x, y), which goes to the VRAM.


Vdin: This is a 16 bit output which goes to the VRAM.

Vwrite: This is the output which goes to the VRAM.
Registers:

Address_reg (Current_word_addr): This is an 8-bit register formed from the first 4 bits of each of the inputs (x, y); it stores the current word address.

Data_reg (stored_ram_word): This is a 16-bit register which stores the data corresponding to the 16 pixel bits of the current word.

2.1.2 The Finite State machine:


Fig 1: Flow Chart of FSM
The Finite State machine consists of four states which are described below:

rest_state: This is the idle state, in which the RAM Control Block has no work to do. In this state Reset = 1, meaning that all previous values are reset, and ack = 1, meaning that the RAM Control Block is ready to accept data from the Draw Block. In the rest state two commands might arrive: Drawpixel = 1 or Flush = 1. If it receives
Drawpixel = 1, it checks one more condition: whether (x, y) equals the current RAM word. If yes, it goes directly to up_data_state; if no, it goes to Wr_state. If the command is Flush = 1, it immediately goes to the write state.

Wr_state: This is the state after rest which is used to write data to the RAM. The system can enter Wr_state for two reasons: Flush = 1, or (x, y) differing from the current RAM word. In the case of Flush it simply flushes the contents of the two registers to RAM; in the other case it takes the data from the two registers and writes it to RAM. After performing the operation, if write_over = 1 and write_read_update = 0, it goes back to the idle state and sets ack = 1, meaning that the RAM Control Block has performed all the operations and is waiting for the next operation from the Draw Block.

R_state: This is the state entered after Wr_state when the command Write_over = 1 arrives. In this state the only work is to read the data from the RAM. After this state the data read from the RAM needs to be updated, so whenever R_state completes the machine goes to up_data_state to update the data.

Update_data_state: This is the next state after R_state. After updating the data the machine returns to the rest state.
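The four states and their transitions can be summarized as a small behavioral model. The sketch below is a Python rendering of the flow chart as described in the text, with assumed signal encodings; it is not the VHDL implementation.

def next_state(state, drawpixel, flush, xy_matches_current_word,
               write_over, write_read_update):
    """Behavioral sketch of the FSM transitions described above."""
    if state == "rest_state":
        if flush:
            return "wr_state"
        if drawpixel:
            return "up_data_state" if xy_matches_current_word else "wr_state"
        return "rest_state"
    if state == "wr_state":
        if write_over and not write_read_update:
            return "rest_state"          # write finished, ack raised
        if write_over:
            return "r_state"             # a read/update must follow the write
        return "wr_state"
    if state == "r_state":
        return "up_data_state"           # read done, update the fetched word
    if state == "up_data_state":
        return "rest_state"
    raise ValueError(state)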

2.1.3 Description of Code:

The code consists of one Entity and an Architecture

ENTITY RAM_CONTROL_BLOCK: In this Entity all the input and output ports are define. Some are for the interfacing
with the Draw Block and some are for VRAM.

ARCHITECTURE RAM_CONTROL_BLOCK_BEHAVIOUR: This Architecture consists of FIVE processes which are
described below:

PSTATE_PROC: This is a short process which depends on the clock. When the clock value is 1 it moves from present_state to next_state; when reset is 1 it makes the next state the rest_state.

TIME: In this process all the necessary variables and constants are declared as integers, and some of them are assigned fixed values. Different formulas are used to calculate the timing for the frequency measurements, and the operations and the number of clock cycles for both read and write are determined for all four states.

STATE_TRANSITION: This process handles the transitions between states and depends on the value of data_bit_sig, which is the combination of drawpixel and flush. If data_bit_sig = 00 the machine stays in rest_state. If data_bit_sig = 01 it goes to wr_state. If data_bit_sig = 10 it checks whether word_sig = addr_reg; if they are equal it goes to up_data_state, otherwise to wr_state. After performing the operation in wr_state it goes either to rest_state (if all other operations are finished) or to r_state. In r_state it reads the data, then goes to up_data_state, and after updating the data it returns to rest_state.

DATA_RAM_WORD: In this process the value of data_reg is always updated depending on the value of x1, which is calculated in every state.

ADDRESS_REG: In this process a for loop is used to obtain the value of word_sig, and the value of vaddr is updated with the value of data_reg.






3. Results:
3.1.1 Waveform:












Fig 2: Waveform of Ram Control Block
In this waveform we can see the values of all the inputs and outputs. The input values are varied so that the expected outputs can be checked in all cases.

3.1.2. The Synthesis Result:











Fig 3: Synthesis Result of Ram Control Block



The RAM CONTROL BLOCK is synthesized.

4. Conclusion

I have tested the RAM Control Block using a test bench I created and checked the inputs and outputs in the simulation waveform. To complete the full Vector Display Processor, the next step will be to build the Draw Block and integrate it with this one. I hope to finish this work very soon.

ACKNOWLEDGEMENT:
I would like to thank Dr. Tom Clarke for his enormous support during this work. I also want to thank Raj and Kim for their help in checking my results with their test bench.

REFERENCE:
[1] Buckley, D.J., BGIS Introduction to GIS, BGIS-SANBI.


















Comparative Study of Type-1 and Type-2 Fuzzy Systems
Neetu Gupta¹
¹ Assistant Professor, GIMET Amritsar
E-mail: gmetneetugupta@gmail.com

Abstract - Type-2 fuzzy sets (T2 FSs), originally introduced by Zadeh [3], provide additional design degrees of freedom in Mamdani and TSK fuzzy logic systems (FLSs), which can be very useful when such systems are used in situations where many uncertainties are present [4]. The implementation of a type-2 FLS involves the operations of fuzzification, inference and output processing. We focus on output processing, which consists of type reduction and defuzzification; type-reduction methods are extended versions of type-1 defuzzification methods. In this paper we present a comparison of the two classes of fuzzy logic systems.


Key Words - fuzzy logic systems, interval sets, uncertainties, membership function, defuzzification.



I. INTRODUCTION
In this paper we introduce a new class of fuzzy logic systems, type-2 fuzzy logic systems, in which the antecedent or consequent membership functions are type-2 fuzzy sets. The concept of a type-2 fuzzy set was introduced by Zadeh [1] as an extension of the concept of an ordinary fuzzy set (henceforth called a type-1 fuzzy set). Such sets are fuzzy sets whose membership grades are themselves type-1 fuzzy sets; they are very useful in circumstances where it is difficult to determine an exact membership function for a fuzzy set, and hence for incorporating uncertainties. Quite often, the knowledge used to construct rules in a fuzzy logic system (FLS) is uncertain. This uncertainty leads to rules having uncertain antecedents and/or consequents, which in turn translates into uncertain antecedent and/or consequent membership functions. For example:
1) A fuzzy logic modulation classifier described in [2] centers type-1 Gaussian membership functions at constellation points on the in-phase/quadrature plane. In practice the constellation points drift, which is analogous to a Gaussian membership function (MF) with an uncertain mean. A type-2 formulation can capture this drift.
2) Previous applications of FL to forecasting do not account for noise in the training data. In forecasting, since the antecedents and consequents are the same variable, the uncertainty during training exists on both the antecedents and the consequents. If we have information about the level of uncertainty, it can be used when we model the antecedents and consequents as type-2 sets.
3) When rules are collected by surveying experts, if we first query the experts about the locations and spreads of the fuzzy sets associated with antecedent and consequent terms, it is very likely that we will get different answers from each expert [4]. This leads to statistical uncertainties about the locations and spreads of the antecedent and consequent fuzzy sets. Such uncertainties can be incorporated into the descriptions of these sets using type-2 membership functions. In addition, experts often give different answers to the same rule-question; this results in rules that have the same antecedents but different consequents. In such a case it is also possible to represent the output of the FLS built from these rules as a fuzzy set rather than a crisp number. This too can be achieved within the type-2 framework [6].

2 TYPE 1 FLS

In a type-1 FLS, the inference engine combines rules and gives a mapping from input type-1 fuzzy sets to output type-1 fuzzy sets. Multiple antecedents in rules are connected by the t-norm (corresponding to intersection of sets). The membership grades in the input sets are combined with those in the output sets using the sup-star composition. Multiple rules may be combined using the t-conorm operation (corresponding to union of sets) or during defuzzification by weighted summation. In the type-2 case the inference process is very similar: the inference engine combines rules and gives a mapping from input type-2 fuzzy sets to output type-2 fuzzy sets. To do this one needs to find unions and intersections of type-2 sets, as well as compositions of type-2 relations.
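As an illustration of the type-1 case just described, the following minimal sketch performs Mamdani min-max inference and centroid defuzzification on a sampled output universe. The universe, rule output sets and firing levels are assumed example values, not taken from any specific system.

import numpy as np

# Output universe and two rule consequent sets (assumed Gaussians).
y = np.linspace(0.0, 10.0, 101)
consequents = [np.exp(-0.5 * ((y - 3) / 1.0) ** 2),
               np.exp(-0.5 * ((y - 7) / 1.5) ** 2)]
firing = [0.8, 0.4]   # antecedent firing levels from the t-norm (assumed)

# min is used as the t-norm (clipping), max as the t-conorm (aggregation).
clipped = [np.minimum(f, mu) for f, mu in zip(firing, consequents)]
aggregate = np.maximum.reduce(clipped)

# Centroid defuzzification yields the crisp (type-0) output.
y_crisp = (y * aggregate).sum() / aggregate.sum()
print(round(y_crisp, 3))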
In a type-1 FLS, the defuzzifier produces a crisp output from the fuzzy set that is the output of the inference engine, i.e., a type-0 (crisp) output is obtained from a type-1 set. In the type-2 case, the output of the inference engine is a type-2 set, so we use extended versions (via Zadeh's extension principle [5], [7]) of the type-1 defuzzification methods. This extended defuzzification gives a type-1 fuzzy set. Since this operation takes us from the type-2 output sets of the FLS to a type-1 set, we call it type reduction, and we call the set so obtained a type-reduced set.

To obtain a crisp output from a type-2 FLS, we can defuzzify the type-reduced set. The most natural way of doing this seems to be to find the centroid of the type-reduced set; however, other possibilities exist, such as choosing the point of highest membership in the type-reduced set.


From our discussions so far, we see that in order to develop a type-2 FLS, one needs to be able to: 1) perform the set-theoretic operations of union, intersection and complement on type-2 sets [8]; 2) know the properties (e.g., commutativity, associativity, identity laws) of membership grades of type-2 sets [8]; 3) deal with type-2 fuzzy relations and their compositions [8]; and 4) perform type reduction and defuzzification to obtain a set-valued or crisp output from the FLS [8], [7].
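For requirement 1), the interval special case is easy to illustrate: when an interval type-2 set is represented by its lower and upper membership functions (its footprint of uncertainty), union and intersection under max/min reduce to pointwise max/min on both bounds. The sketch below covers only this interval case, with assumed example footprints; it is not a general T2 FS implementation.

import numpy as np

def t2_intersection(a_low, a_up, b_low, b_up):
    # meet of interval memberships [l1,u1] and [l2,u2] under the min t-norm
    return np.minimum(a_low, b_low), np.minimum(a_up, b_up)

def t2_union(a_low, a_up, b_low, b_up):
    # join of interval memberships under the max t-conorm
    return np.maximum(a_low, b_low), np.maximum(a_up, b_up)

x = np.linspace(0, 1, 5)
a_low, a_up = 0.5 * x, x              # footprint of set A (assumed)
b_low, b_up = 0.5 * (1 - x), 1 - x    # footprint of set B (assumed)
print(t2_intersection(a_low, a_up, b_low, b_up))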




Fig. 1. Type-1 FLS.


3 TYPE 2 FLS


Type-2 fuzzy sets allow us to handle linguistic uncertainties, as typified by the adage "words can mean different things to different people" [20]. A fuzzy relation of higher type (e.g., type-2) has been regarded as one way to increase the fuzziness of a relation and, according to Hisdal [11], "increased fuzziness in a description means increased ability to handle inexact information in a logically correct manner". According to John [12], type-2 fuzzy sets allow for "linguistic grades of membership, assisting in knowledge representation, and they also offer improvement on inferencing with type-1 sets".







Fig. 2. The structure of a type-2 FLS.


Fig. 2 shows the structure of a type-2 FLS. It is very similar to the structure of a type-1 FLS [26]; for a type-1 FLS the output processing block contains only the defuzzifier. We assume that the reader is familiar with type-1 FLSs, so here we focus only on the similarities and differences between the two systems.

The fuzzifier maps the crisp input into a fuzzy set. This fuzzy set can in general be a type-2 set; however, in this paper we consider only singleton fuzzification, for which the input fuzzy set has only a single point of nonzero membership.

Fig. 3 shows an example of product and minimum inference for an arbitrary single-input single-output type-2 FLS using Gaussian type-2 sets. Uncertainty in the primary membership grades of a type-2 MF forms a dark region that we call the footprint of uncertainty of the type-2 MF. The footprint of uncertainty represents the union of all primary memberships; darker areas indicate higher secondary memberships. The principal membership function, i.e., the set of primary memberships having secondary membership equal to one, is indicated with a thick line.




Fig. 3. Illustrations of product and minimum inference in the type-2 case. (a) Gaussian type-2 antecedent set for a single-input system; the membership of a certain input x = 4 in the principal membership function is also shown. (b) Consequent set corresponding to the antecedent set shown in (a). (c) Scaled consequent set for x = 4 using product inference; observe that the secondary membership functions of the consequent set also change depending upon the standard deviation of the membership grade of x. (d) Clipped consequent set for x = 4 using minimum inference.

REFERENCES:
[1] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-1", Inform. Sci., vol. 8, pp. 199-249, 1975.
[2] W. Wei and J. M. Mendel, "A fuzzy logic method for modulation classification in nonideal environments", IEEE Trans. Fuzzy Syst., vol. 7, pp. 333-344, June 1999.
[3] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-1", Inform. Sci., vol. 8, pp. 199-249, 1975.
[4] ---, "Type-2 fuzzy sets: Some questions and answers", IEEE Connections, vol. 1, pp. 10-13, Aug. 2003.
[5] O. Castillo, P. Melin, "Intelligent systems with interval type-2 fuzzy logic", International Journal of Innovative Computing, Information and Control, 4(4), 2008, pp. 771-783.
[6] S. Coupland, R. John, "A fast geometric method for defuzzification of type-2 fuzzy sets", IEEE Transactions on Fuzzy Systems, 16(4), 2008, pp. 929-941; S. Coupland, R. John, "New geometric inference techniques for type-2 fuzzy sets", International Journal of Approximate Reasoning, 49(1), 2008, pp. 198-211.
[7] D. Dubois, H. Prade, Fuzzy Sets and Systems: Theory and Applications, Academic Press, Inc., New York, 1980.
[8] H. Hagras, "Type-2 fuzzy logic controllers: a way forward for fuzzy systems in real world environments", Lecture Notes in Computer Science (5050), 2008, pp. 181-200.
[9] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, New York: Wiley, 1973.
[10] S. Ghosh, Q. Razouqi, H. J. Schumacher, and A. Celmins, "A survey of recent advances in fuzzy logic in telecommunications networks and new challenges", IEEE Trans. Fuzzy Syst., vol. 6, pp. 443-447, Aug. 1998.
[11] E. Hisdal, "The IF THEN ELSE statement and interval-valued fuzzy sets of higher type", Int. J. Man-Machine Studies, vol. 15, pp. 385-455, 1981.
[12] R. I. John, "Type 2 fuzzy sets: An appraisal of theory and applications", Int. J. Uncertainty, Fuzziness, Knowledge-Based Syst., vol. 6, no. 6, pp. 563-576, Dec. 1998.
[13] R. I. John, P. R. Innocent, and M. R. Barnes, "Type 2 fuzzy sets and neuro-fuzzy clustering of radiographic tibia images", IEEE Int. Conf. Fuzzy Syst., Anchorage, AK, May 1998, pp. 1373-1376.
[14] N. N. Karnik and J. M. Mendel, "Introduction to Type-2 Fuzzy Logic Systems", presented at IEEE FUZZ Conf., Anchorage, AK, May 1998.
[15] ---, "Type-2 Fuzzy Logic Systems: Type-Reduction", presented at IEEE Syst., Man, Cybern. Conf., San Diego, CA, Oct. 1998.
[16] ---, "An introduction to type-2 fuzzy logic systems", Univ. Southern California, Rep., Oct. 1998.
[17] ---, "Applications of type-2 fuzzy logic systems: Handling the uncertainty associated with surveys", presented at FUZZ-IEEE Conf., Seoul, Korea, Aug. 1999.
[18] ---, "Applications of type-2 fuzzy logic systems to forecasting of time-series", Inform. Sci., to be published.
[19] ---, "Operations on type-2 fuzzy sets", Fuzzy Sets Syst., to be published.
[20] ---, "Centroid of a type-2 fuzzy set", Inform. Sci., to be published.
[21] G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Englewood Cliffs, NJ: Prentice-Hall, 1995.

















Design of High Speed Multiplier using Vedic Mathematics
Surbhi Bhardwaj¹, Ashwin Singh Dodan¹
¹ Scholar, Department of VLSI Design, Centre for Development of Advanced Computing (CDAC), Noida, India
E-mail: meetsurbhi.bhardwaj@gmail.com

Abstract - With the advancement of technology, processors are required to have high speed. Multiplication is a critical operation in Digital Signal Processing (DSP) applications (such as the DFT, FFT and convolution), in the Arithmetic and Logic Unit (ALU), and in the Multiply and Accumulate (MAC) unit (which is basically a multiplier itself). High-speed multiplication is thus an essential requirement for increasing processor performance. In this paper we present a multiplier in which the basic multiplication is performed using one of the techniques of Vedic Mathematics, and the accumulation of partial products is done using a specific design. Both the design and the Vedic technique result in a high-speed multiplier. Vedic Mathematics is based on 16 sutras, of which we use the "Urdhva Tiryagbhyam" sutra; in this technique the intermediate products are generated in parallel, which makes multiplication faster. We have synthesized our design using the Xilinx ISE tool and compared its speed with the "Modified Booth Wallace Multiplier", the "High Speed Vedic Multiplier" by Ramesh Pushpangadam and the "Vedic Mathematics based Multiply Accumulate Unit" by Kabiraj Sethi. Our proposed design shows better speed.
Keywords - Vedic Mathematics, Urdhva Tiryagbhyam, VLSI, Verilog, Modified Booth Wallace Multiplier, ALU, Booth Multiplier
INTRODUCTION
The speed of a processor determines its performance, and high-speed processing is an essential requirement for all systems. Multiplication is a significant operation in digital signal processors and the ALU, so the demand for high-speed multiplication is continuously increasing in modern VLSI design. Our research focuses on high-speed multiplication using Vedic Mathematics. Earlier, multipliers such as the Booth multiplier [1], the Modified Booth multiplier [2, 3] and array multipliers [4] were considered for high-speed multiplication, but these multipliers involve a large number of intermediate steps, which reduces their speed as the number of bits increases.

For high-speed multiplication, as well as to increase the performance of the multiplier, Vedic Mathematics techniques are used [7-11]. The Sanskrit word Veda is derived from the root Vid, meaning to know without limit. Swami Bharati Krishna Tirtha culled a set of 16 Sutras (aphorisms) and 13 Sub-Sutras (corollaries) from the Atharva Veda. He developed methods and techniques for amplifying the principles contained in the aphorisms and their corollaries, and called it Vedic Mathematics [5]. The Sutras apply to and cover almost every branch of mathematics; they apply even to complex problems involving a large number of mathematical operations.

Of these 16 sutras, our emphasis is on the Urdhva Tiryagbhyam technique, which is based on vertical and crosswise multiplication. Vedic Mathematics is highly coherent and unified: multiplication can be reversed to achieve one-line division, and squaring can be reversed to generate the square root. In Vedic Mathematics the partial products are generated in parallel, which increases the speed of operation [6]. In this paper we propose a design for the accumulation of these intermediate products with minimal delay.

Section II deals with the Urdhva Tiryagbhyam technique used for the basic multiplication, Section III describes the basic multiplier architecture, Section IV describes our proposed design, and Section V deals with the results and their comparison with other multipliers and Vedic designs.

I. VEDIC TECHNIQUE-URDHAV TIRYAGBHYAM
Among all the techniques used in Vedic Mathematics for multiplication, Urdhva Tiryagbhyam is the most preferred. Urdhva Tiryagbhyam means vertically and crosswise multiplication. It was devised for fast and convenient multiplication of decimal numbers, and in our design we use this ability for the multiplication of binary numbers. The partial products are generated in parallel, which provides fast multiplication. The biggest advantage is that it can be implemented with a reduced number of AND gates, full adders and half adders.

We first consider an example showing the multiplication of the two decimal numbers 123 and 456, as shown in Fig. 1. First we take the product of the least significant digits of the multiplier and multiplicand; the least significant digit of the result, 8 in this case, is stored and a carry, 1, is generated for the next step. In the next step the two least significant digits are multiplied crosswise and their products are added with the previous carry. Similarly, in the next step all the digits are multiplied crosswise and their products are summed with the previous carry. In the following step the two most significant digits are multiplied crosswise and the results are added in the same manner. Finally, the most significant digits of the multiplier and multiplicand are multiplied and the result is added with the previously generated carry to obtain the final result.
Fig.1. Line Diagram for multiplication of decimal numbers
We can apply this sutra in the same manner to binary numbers. As shown in Fig. 2, the bits of the multiplier and multiplicand are multiplied crosswise and added with the previous carry to generate the result of each step. The final result is obtained by concatenating the results from each step with the carry of the last step. For convenience, bits are represented by circles in the figure.


Fig.2. Line Diagram for multiplication of binary numbers
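In software terms, the vertical-and-crosswise scheme amounts to forming all column-wise crosswise products and propagating carries. The following Python sketch (our illustration, not the hardware design) models it for binary inputs, LSB first; in hardware the crosswise partial products of all columns form in parallel, and the loop below only models the column sums.

def urdhva_multiply(a_bits, b_bits):
    """Urdhva Tiryagbhyam on two equal-length bit lists, LSB first."""
    n = len(a_bits)
    result, carry = [], 0
    for col in range(2 * n - 1):
        total = carry
        for i in range(n):
            j = col - i
            if 0 <= j < n:
                total += a_bits[i] * b_bits[j]   # crosswise AND terms
        result.append(total & 1)                 # result bit of this step
        carry = total >> 1                       # carry into the next column
    while carry:                                 # append remaining carry bits
        result.append(carry & 1)
        carry >>= 1
    return result

# 11 (1011b) x 13 (1101b) = 143
bits = urdhva_multiply([1, 1, 0, 1], [1, 0, 1, 1])
print(sum(b << i for i, b in enumerate(bits)))   # -> 143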


II. BASIC MULTIPLIER ARCHITECTURE
The 2*2 multiplier architecture is obtained using two half adders and four AND gates, as shown in Fig. 3. The product X0Y0 is given directly to the output. X1Y0 and X0Y1 are added using the first half adder; the sum goes directly to the output and the carry is added with the product X1Y1 using the second half adder.
S0 = X0*Y0
C1S1 = X0*Y1 + X1*Y0
C2S2 = X1*Y1 + C1
Result = {C2 S2 S1 S0}
where Sn and Cn are the sum and carry outputs respectively.


Fig.3. Architecture of 2*2 Multiplier
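As a quick check of these equations, here is a minimal gate-level Python model of the 2*2 block of Fig. 3. The bit packing of the result is our assumption for illustration, and the assertion verifies the model against ordinary multiplication.

def half_adder(a, b):
    return a ^ b, a & b              # (sum, carry)

def vedic_2x2(x, y):
    """Gate-level model of the 2x2 block (x, y are 2-bit integers)."""
    x0, x1 = x & 1, (x >> 1) & 1
    y0, y1 = y & 1, (y >> 1) & 1
    s0 = x0 & y0                               # vertical product
    s1, c1 = half_adder(x0 & y1, x1 & y0)      # crosswise products
    s2, c2 = half_adder(x1 & y1, c1)           # vertical product + carry
    return (c2 << 3) | (s2 << 2) | (s1 << 1) | s0

assert all(vedic_2x2(x, y) == x * y for x in range(4) for y in range(4))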
III. PROPOSED DESIGN
The architecture of the 4*4 multiplier consists of four 2*2 multipliers, a 4-bit carry save adder, a 5-bit adder and a 2-bit adder, as shown in Fig. 4. This design performs the accumulation of partial products in such a way that the delay is reduced compared to other multipliers; in addition, the partial products are generated in parallel, which reduces the delay further.

Fig.4. Architecture of 4*4 Multiplier
The least significant two bits of the first 2*2 multiplier are given directly to the output as bits P1P0. The most significant two bits of the same multiplier are concatenated with the least significant two bits of the fourth multiplier, and the resulting value is added with the 4-bit outputs of the second and third multipliers using the carry save adder. The sum and carry outputs of the carry save adder are added using a 5-bit adder. The least significant four bits of the 5-bit adder output are given at the output as
bits P5P4P3P2. The most significant (6th) bit is appended with 0 and added with the most significant two bits of the fourth multiplier using the 2-bit adder; the result gives bits P7P6 of the output.
The total critical path delay equals the delay of 3 full adders and 6 half adders. In the same manner we can implement 8*8 and 16*16 multipliers using four 4*4 and four 8*8 multipliers respectively.
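The composition described above can likewise be modeled to confirm the data path. The sketch below reuses vedic_2x2() from the previous snippet and models the carry-save, 5-bit and 2-bit adder stages with plain integer additions; it illustrates the wiring of Fig. 4, not the synthesized hardware.

def vedic_4x4(x, y):
    """Model of the 4x4 composition of Fig. 4 from four 2x2 blocks."""
    m0 = vedic_2x2(x & 3, y & 3)             # low x low
    m1 = vedic_2x2(x >> 2, y & 3)            # high x low
    m2 = vedic_2x2(x & 3, y >> 2)            # low x high
    m3 = vedic_2x2(x >> 2, y >> 2)           # high x high
    p10 = m0 & 3                             # P1P0 straight to the output
    concat = ((m3 & 3) << 2) | (m0 >> 2)     # {M3[1:0], M0[3:2]}
    mid = concat + m1 + m2                   # carry-save + 5-bit adder stage
    p5432 = mid & 0xF                        # P5..P2
    p76 = (mid >> 4) + (m3 >> 2)             # 2-bit adder with M3[3:2]
    return (p76 << 6) | (p5432 << 2) | p10

assert all(vedic_4x4(x, y) == x * y for x in range(16) for y in range(16))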

IV. RESULTS AND COMPARISON
We have simulated our design using ModelSim-Altera 6.4a; the coding is done in Verilog. The simulation results of the 4*4, 8*8 and 16*16 multipliers are shown in Fig. 5, Fig. 6 and Fig. 7 respectively. We have synthesized our designs using Xilinx ISE Suite 14.3 and obtained the delay using Xilinx PlanAhead 14.3; the results are shown in TABLE 1.

Fig.5. Simulation result of 4*4 Multiplier

Fig.6. Simulation result of 8*8 Multiplier

Fig.7. Simulation result of 16*16 Multiplier

Table 1: Synthesis Result of Proposed Design
Device: SPARTAN3 XC3S50-4 | No. of Slices | No. of 4 I/p LUTs | No. of Bonded IOBs | Delay (ns)
4*4   | 16  | 29  | 16 | 11.695
8*8   | 84  | 149 | 32 | 18.532
16*16 | 373 | 661 | 64 | 30.659

We have compared our results with the "Modified Booth Wallace Multiplier" [7, 10], the "High Speed Vedic Multiplier" by Ramesh Pushpangadam [10] and the "Vedic Mathematics based Multiply Accumulate Unit" by Kabiraj Sethi [7]. The results, shown in TABLE 2, indicate that our multiplier design is much faster than the other multipliers; its delay is much smaller than that of the other designs.

Table 2: Comparison of maximum combinational pad-to-pad delay (ns)
Device: SPARTAN3 XC3S50-4 | Modified Booth Wallace Multiplier [7,10] | Ramesh Pushpangadam [10] | Kabiraj Sethi [7] | Proposed Design
4*4   | NA     | NA     | 17.45 | 11.695
8*8   | 25.756 | 25.175 | 25.06 | 18.532
16*16 | 59.238 | 37.507 | 36.09 | 30.659

CONCLUSION
We have designed a multiplier which is highly efficient in terms of speed. The basic multiplier architecture is based on the Vedic technique, and accumulation is done using a carry save adder, which gives better performance. On comparison with other multipliers we have found that our design works with much less delay. For future work, its performance within an ALU can be tested, or it can be compared with other Vedic or conventional designs.


REFERENCES:
[1] A. D. Booth, "A Signed Binary Multiplication Technique", Quarterly Journal of Mechanics and Applied Mathematics, vol. 4, no. 2, pp. 236-240, Oxford University Press, 1951.
[2] Soojin Kim and Kyeongsoon Cho, "Design of High-speed Modified Booth Multipliers Operating at GHz Ranges", World Academy of Science, Engineering and Technology, 61, 2010.
[3] Shaik Kalisha Baba and D. Rajaramesh, "Design and Implementation of Advanced Modified Booth Encoding Multiplier", International Journal of Engineering Science Invention, August 2013.
[4] J. Rabaey, A. Chandrakasan, B. Nikolic, Digital Integrated Circuits, Second Edition, 2003.
[5] Swami Bharati Krishna Tirtha, Vedic Mathematics, Motilal Banarsidass Publishers, Delhi, 1965.
[6] Maharaja, J.S.S.B.K.T., Vedic Mathematics, Motilal Banarsidass Publishers Pvt. Ltd., Delhi, 2009.
[7] Devika Jain, Kabiraj Sethi, and Rutuparna Panda, "Vedic Mathematics Based Multiply Accumulate Unit", International Conference on Computational Intelligence & Communication Systems, 2011.
[8] Prof. J. M. Rudagi, Vishwanath Ambli, Vishwanath Munavalli, Ravindra Patil, and Vinay Kumar Sajjan, "Design and Implementation of Efficient Multiplier Using Vedic Mathematics", Proc. of Int. Conf. on Advances in Recent Technologies in Communication and Computing, 2011.
[9] Sushma R. Huddar, Sudhir Rao Rupanagudi, Kalpana M., and Surabhi Mohan, "Novel High Speed Vedic Mathematics Multiplier Using Compressors", International Multiconference on Automation, Computing, Communication, Control and Compressed Sensing, 2013.
[10] R. Pushpangadan, V. Sukumaran, R. Innocent, D. Sasikumar, and V. Sundar, "High Speed Vedic Multiplier for Digital Signal Processors", IETE Journal of Research, vol. 55, pp. 282-286, 2009.
[11] M. Ramalatha, K. Deena Dayalan, S. Deborah Priya, "High Speed Energy Efficient ALU Design Using Vedic Multiplication Techniques", Advances in Computational Tools for Engineering Applications, IEEE Proc., 2009, pp. 600-603.
















Modeling and Characterization of Tunable Piezoelectric Actuator
Meenu Pruthi¹, Anurag Singh²
¹ Research Scholar (M.Tech), ECE, OITM
² Assistant Professor, ECE Department, OITM
E-mail: menu.pruthi815@gmail.com

Abstract - The focus of this paper is to study the resonance frequency behavior of piezoelectric MEMS through the modeling and characterization of a piezoelectric resonator in COMSOL Multiphysics. An investigative relation was developed based on the shift in resonance frequency caused by the addition of a different material on the PZT. The theoretical analysis is done with a user-friendly SPICE Circuit Editor interface constructed for easy introduction of design dimensions, material parameter values and force signal stimuli. A piezoelectric device can actuate a cantilever beam simply by applying an AC voltage across the device. The cantilever beam itself has resonant modes that cause peaks in vibration when the frequency of the applied voltage passes the resonance frequency of each mode. If another piezoelectric device is attached to the cantilever, it is possible to tune the resonance by connecting that device to a passive external circuit. In this model different materials are used for the design of the tunable actuator; we have observed the displacement-versus-frequency curves of the various materials, and the best material found from the analysis is Lead Zirconate Titanate. The model investigates how the external circuit influences the resonance peaks of the cantilever beam and also improves the quality factor.
Keyword MEMS, Piezoelectric effect, Lead Zirconate Titanate(PZT-5A),Resonance,COMSOL,Tunable,deformation
I. INTRODUCTION

MEMS (Micro-Electro-Mechanical Systems) technology is a capable technology for low-loss, high-linearity applications [1]-[4]. Piezoelectrically transduced micro resonators have become an attractive research topic in ultra-mass detectors, bio-sensors, RF filters and high frequency micro oscillators. Compared with electrostatically actuated and sensed capacitive silicon micro resonators, piezoelectrically transduced microresonators exhibit better power handling capacity, since a low driving voltage of several hundred millivolts is enough for resonator actuation, which facilitates the integration of microresonators with CMOS signal processing circuits. The main advantage of MEMS resonators lies in their possible integration onto silicon based IC platforms.
High-mode vibration can improve the mass detection sensitivity of a resonant cantilever under atmospheric pressure by suppressing the air damping effect [5]. High mode vibration can be successfully achieved by the proposed structure, and a greater Q-factor can be obtained as expected in the pursuit of better mass detection sensitivity. However, the measured Q-factors are still lower than the theoretical calculations. High mode vibration results in large vibration amplitude at the position where the cantilever and actuator connect. This leads to large vibration amplitude in the PZT actuator, which in turn induces additional energy dissipation, as analyzed in reference [6]. Besides, large vibration amplitude at the actuation hinge also tends to decrease Qsup, because energy may dissipate easily through the substrate.
In robotics, resonance has been recognized as an important phenomenon that can be used to increase power transmission to a load, reduce the effort of actuators, and achieve large amplitude motion for cyclic tasks, such as running (e.g., [8], [9]), flapping (e.g., [10], [11]), or fin-based swimming (e.g., [12]). Variable stiffness and resonance can be intimately connected because the ability to vary actuator stiffness provides the ability to tune a robotic system's resonant frequencies.

Quality factor (Q) is one of the most important characteristics of MEMS resonators, especially if they are used to build sensors based on frequency monitoring. The corresponding frequency resolution, and thus the system's sensitivity, is then directly linked to Q: the higher the value of Q, the higher the micro-system's performance. These MEMS resonators are indeed found in many applications where high sensitivity is needed, such as inertial sensors and mass sensors. To get high Q-values, these micro-systems generally rely on vacuum packaging, air damping being an important limitation on the quality factor [13].
Another possibility to get high Q-values could be to externally increase the quality factor. An interesting technique to artificially improve the quality factor, called parametric amplification, consists in modulating the structure's stiffness at a harmonic frequency of the device's resonant frequency. This modulation results in an increase of the oscillation amplitude at the device's resonant frequency and thus an increase of Q.

II. THEORETICAL CONSIDERATION
The actuator consists of a thin bar of silicon with an active piezoelectric device below the bar and a second passive piezoelectric device on top, as shown in Figure 1. These devices are located at one end of the actuator. The piezoelectric material is lead zirconate titanate (PZT), and each of the devices has two electrical connections to an external circuit, realized with the Floating Potential boundary condition of the Piezo Plane Strain application mode.


Figure1: Geometry of a Tunable Piezoelectric Actuator Based Resonator

III. MODELING OF PIEZOELECTRIC ACTUATOR
Because the fundamental resonance mode is the mode of deformation with maximum displacement, the relevant mode shapes were modeled. The results relate the displacement obtained to the various piezoelectric materials while the material of the cantilever beam is fixed as single crystal silicon, as shown below. The deformation is thus shown by the displacement obtained while varying the frequency.
Figure2 : Simulation of displacement of Quartz


Figure 2 shows the displacement of the piezoelectric actuator when the cantilever beam is of silicon and the piezoelectric device is of quartz. The displacement is 4.842e-20 m, which is very low.

Figure3: Simulation of displacement of Zinc Oxide

Figure 3 shows the displacement of the piezoelectric actuator when the cantilever beam is of silicon and the piezoelectric device is of ZnO. The displacement is 1.003e-8 m, which is low.
Figure4: Simulation of displacement of Aluminium Nitride

Figure 4 shows the displacement of the piezoelectric actuator when the cantilever beam is of silicon and the piezoelectric device is of AlN. The displacement is 2.532e-9 m, which is low.



Figure5: Simulation of displacement of Lead Zirconate Titanate(PZT-5A)

Figure 5 shows the displacement of the piezoelectric actuator when the cantilever beam is of silicon and the piezoelectric device is of PZT-5A. The displacement is 3.547e-7 m, the best among all the materials.
Figures 2-5 show the simulated output of the piezoelectric actuator, with the variation of displacement along its boundary indicated by the colour profile given in each plot; the maximum amplitude of vibration is obtained with lead zirconate titanate. Animation can also help to show the maximum deformation of the different materials.
IV. EXPERIMENTAL RESULTS
The analysis of the actuator is performed through a frequency sweep from 200 kHz up to 1 MHz while logging the displacement amplitude in the y-direction. The vibration shows several resonance peaks in this range. The external inductance for this sweep was 50 mH.
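The peaks are read off from the logged sweep data. As an illustration of this post-processing step (a sketch only; the file name and column layout are hypothetical, not from the original study), the following Python code locates resonance peaks as local maxima of the displacement amplitude:

import numpy as np
from scipy.signal import find_peaks

# Hypothetical export of the COMSOL sweep: column 0 = frequency (Hz),
# column 1 = y-displacement amplitude (m), swept from 200 kHz to 1 MHz.
data = np.loadtxt("sweep_50mH.txt")
freq, disp = data[:, 0], data[:, 1]

# Local maxima of the amplitude correspond to resonance peaks.
peaks, _ = find_peaks(disp)
for i in peaks:
    print(f"resonance near {freq[i] / 1e3:.0f} kHz, amplitude {disp[i]:.3e} m")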


Figure6: Plot of Displacement and Frequency of Quartz material

Figure 6 shows the frequency response when quartz is used as the piezoelectric device material in the actuator.

Figure7: Plot of Displacement and Frequency of Zinc Oxide material

Figure 7 shows the frequency response when ZnO is used as the piezoelectric device material in the actuator.




Figure8: Plot of Displacement and Frequency of Aluminum Nitride material

Figure 8 shows the frequency response when AlN is used as the piezoelectric device material in the actuator.



Figure9: Plot of Displacement and Frequency of Lead Zirconate Titanate(PZT-5A) material
Figure 9 shows the frequency response when PZT-5A is used as the piezoelectric device material in the actuator.



Figure 10: A comparison of amplitude versus frequency for two inductance values in the external circuit, 50 mH (blue curve) and 60 mH (red curve).
Figures 6-9 show the frequency response of quartz, zinc oxide, aluminum nitride and lead zirconate titanate respectively. As the frequency increases, the displacement of the cantilever beam also increases, and the maximum amplitude of displacement marks the resonance of the piezoelectric resonator. Figure 10 shows that tuning is possible: on changing the inductance to 60 mH, only the resonance near 660 kHz is affected by the inductance.
The spike is caused by a resonance between the capacitance of the piezoelectric device and the inductance of the external circuit. The resonant frequency of an LC circuit is
$$f = \frac{1}{2\pi\sqrt{LC}}$$
Because the values of L and f are known, it is possible to roughly estimate the capacitance of the piezoelectric device.
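As a rough illustration of this estimate (a sketch, using the 660 kHz spike quoted above and the 50 mH external inductance), the LC relation can be inverted for C in Python:

import math

L = 50e-3   # external inductance (H)
f = 660e3   # observed spike frequency (Hz)

# From f = 1 / (2*pi*sqrt(L*C)), solve for the device capacitance C.
C = 1.0 / (L * (2 * math.pi * f) ** 2)
print(f"estimated device capacitance: {C * 1e12:.2f} pF")  # about 1.16 pF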










Figure 11: Analysis of the frequency response (displacement vs. frequency) of the various materials: PZT (50 mH), quartz (50 mH), AlN (50 mH), ZnO (50 mH) and PZT (60 mH). The displacement axis spans 0 to 4.00E-07 m and the frequency axis spans 0 to 1.00E+06 Hz.

In Figure 11 the displacement in the y-direction changes as the frequency increases. For the ZnO, quartz and AlN materials the displacement is small, whereas for the PZT material the displacement is large, so the total energy stored is maximum. Also, tuning is possible by varying the inductance.

V. CONCLUSION
We conclude that all the materials, i.e. quartz, zinc oxide, aluminum nitride and lead zirconate titanate (PZT-5A), show changes in displacement as the frequency changes. Quartz is not used owing to its low piezoelectric coefficient, but it is nevertheless an interesting material because of its high Q factor. Tuning with the external circuit is possible with lead zirconate titanate by varying the inductance, whereby the resonance frequency is shifted towards the lower side; the desired frequency range can be obtained by changing the parameter values in the solver parameters. The analysis was done using the high end software COMSOL Multiphysics. One important goal is to be able to predict the Q factor of the structure and to have accurate design guidelines to minimize the energy losses.
REFERENCES:
[1] R. Lifshitz and M. L. Roukes, "Thermoelastic damping in micro- and nanomechanical systems", Physical Review B, vol. 61, no. 8, Feb. 2000, pp. 5600-5609.
[2] T. V. Roszhart, "The effect of thermoelastic internal friction on the Q of micromachined silicon resonators", Tech. Dig. Solid-State Sens. Actuator Workshop, Hilton Head, SC, 1990, pp. 13-16.
[3] Srikar Vengallatore, "Analysis of thermoelastic damping in laminated composite micromechanical beam resonators", J. Micromech. Microeng., 2005, pp. 2398-2404.
[4] M. Zamanian and S. E. Khadem, Mechanical & Aerospace Engineering Department, Tarbiat Modares University, P.O. Box 14115-177, Tehran, Iran, "Analysis of thermoelastic damping in microresonators by considering the stretching effect", International Journal of Mechanical Sciences, 2010.
[5] F. R. Blom, S. Bouwstra, M. Elwenspoek, and J. H. J. Fluitman, "Dependence of the quality factor of micromachined silicon beam resonators on pressure and geometry", J. Vac. Sci. Technol. B, vol. 10, pp. 19-26, 1992.
[6] J. Lu, T. Ikehara, Y. Zhang, R. Maeda, and T. Mihara, "Energy dissipation mechanisms in lead zirconate titanate thin film transduced microcantilevers", Jpn. J. Appl. Phys., vol. 45, pp. 8795-8800, 2006.
[7] M. H. Raibert, Legged Robots That Balance. Cambridge, MA: MIT Press, 1986.
[8] J. Hurst and A. Rizzi, "Series compliance for an efficient running gait", IEEE Robot. Autom. Mag., vol. 15, no. 3, pp. 42-51, Sep. 2008.
[9] K. K. Issac and S. K. Agrawal, "An investigation into the use of springs and wing motions to minimize the power expended by a pigeon-sized mechanical bird for steady flight", Trans. Amer. Assoc. Mech. Eng. J. Mech. Des., vol. 129, no. 4, pp. 381-389, 2007.
[10] J. Yan, R. Wood, S. Avadhanula, M. Sitti, and R. Fearing, "Towards flapping wing control for a micromechanical flying insect", in Proc. IEEE Int. Conf. Robot. Autom., 2001, vol. 4, pp. 3901-3908.
[11] P. Valdivia y Alvarado and K. Youcef-Toumi, "Design of machines with compliant bodies for biomimetic locomotion in liquid environments", Trans. Amer. Assoc. Mech. Eng. J. Dyn. Syst., Meas. Control, vol. 128, no. 1, pp. 3-13, 2006.
[12] B. Le Foulgoc, T. Bourouina, O. Le Traon, A. Bosseboeuf, F. Marty, C. Brluzeau, J.-P. Grandchamp and S. Masson, "Highly decoupled single-crystal silicon resonators: an approach for the intrinsic quality", Journal of Micromechanics and Microengineering, vol. 16, pp. S45-S53, 2006.





TCP Traffic Based Performance Comparison of MANET Routing Protocols
Dinesh Kumar¹, Mr. Anil Yadav², Dr. Mukesh Sharma³
¹Scholar (M.Tech), Computer Science, T.I.T&S, Bhiwani
Department of Computer Science, T.I.T&S, Bhiwani
E-mail- Kumar.jangra@gmail.com

Abstract:
In a Mobile Ad hoc Network (MANET), no fixed infrastructure is available. Wireless hosts are free to move from one location to another without any centralized administration, so the topology changes rapidly and unpredictably. Every node operates as a router as well as an end system. Routing in MANETs has been a challenging task ever since wireless networks came into existence, the major reason being the continuous changes in network topology caused by the high degree of node mobility. MANET routing protocols fall mainly into two classes: proactive (or table-driven) routing protocols and reactive (or on-demand) routing protocols. In this paper, we have analyzed various random based mobility models, namely the Random Waypoint model, Random Walk model, Random Direction model and Probabilistic Random Walk model, using the AODV and DSDV protocols in Network Simulator (NS 2.35). The performance of the MANET mobility models has been compared by varying the number of nodes, the type of traffic (CBR, TCP) and the maximum speed of the nodes. The comparative conclusions are drawn on the basis of various performance metrics: Routing Overhead (packets), Packet Delivery Fraction (%), Normalized Routing Load, Average End-to-End Delay (milliseconds) and Packet Loss (%).

Keywords:
Mobile Ad hoc, AODV, DSDV, TCP, CBR, routing overhead, packet delivery fraction, End-to-End delay, normalized
routing load.

1 Introduction:
Wireless technology has existed since the 1970s and is advancing every day. Because of the pervasive use of the internet at present, wireless technology has reached new heights. Today we see two kinds of wireless networks. The first is a wireless network built on top of a wired network, creating a reliable infrastructure wireless network; the wireless nodes are connected to base stations, which in turn connect to the wired network. An example of this is the cellular phone network, where a phone connects to the base station with the best signal quality. The second type of wireless technology is where no infrastructure [1] exists at all except the participating mobile nodes. This is called an infrastructure-less wireless network or an ad hoc network. The term "ad hoc" means something which is not fixed or not organized, i.e. dynamic. Recent developments such as Bluetooth introduced a fresh type of wireless systems frequently known as mobile ad hoc networks.
A MANET is an autonomous group of mobile users that communicate over reasonably slow wireless links. The network
topology may vary rapidly and unpredictably over time because the nodes are mobile. The network is decentralized where

all network activity, including discovering the topology and delivering messages, must be executed by the nodes themselves; hence routing functionality has to be incorporated into the mobile nodes. A mobile ad hoc network is a collection of independent mobile nodes that can communicate with each other via radio waves. Mobile nodes can communicate directly with nodes that are within radio range of each other, whereas other nodes need the help of intermediate nodes to route their packets. These networks are fully distributed and can work at any place without the aid of any infrastructure. This property makes these networks highly robust.
In the late 1980s, a Mobile Ad hoc Networking (MANET) Working Group was formed within the Internet [1] Engineering Task Force (IETF) to standardize the protocols and functional specifications and to develop a routing framework for IP-based protocols in ad hoc networks. A number of protocols have been developed since then, basically classified as proactive/table-driven and reactive/on-demand routing protocols, with their respective advantages and disadvantages, but currently no standard exists for ad hoc network routing and the work is still in progress. Routing is therefore one of the most important issues for ad hoc networks. The area of ad hoc networking has been receiving increasing attention among researchers in recent years, and the work presented in this paper is expected to provide useful input to the routing mechanism in ad hoc networks.

2 Protocol Descriptions
2.1 Ad hoc On Demand Distance Vector (AODV)
The AODV routing algorithm is a source initiated, on demand driven routing protocol. Since the routing is on demand, a route is only traced when a source node wants to establish communication with a specific destination, and the route remains established as long as it is needed for further communication. Another feature of AODV is its use of a destination sequence number for every route entry. This number is included in the RREQ (Route Request) of any node that desires to send data, and these numbers are used to ensure the freshness of routing information; for instance, a requesting node always chooses the route with the greatest sequence number to communicate with its destination node. Once a fresh path is found, a RREP (Route Reply) is sent back to the requesting node. AODV also has the necessary mechanism to inform network nodes of any possible link break that might have occurred in the network.
2.2 Destination Sequenced Distance Vector (DSDV)
The Destination Sequenced Distance Vector routing protocol is a proactive routing protocol which is a modification of the conventional Bellman-Ford routing algorithm. This protocol adds a new attribute, the sequence number, to each route table entry at each node. A routing table is maintained at each node, and with this table a node transmits packets to other nodes in the network. This protocol was motivated by the need for data exchange along changing and arbitrary paths of interconnection which may not be close to any base station.
3 Simulation
Both routing techniques were simulated in the same environment using Network Simulator (ns-2). Both AODV and DSDV were tested with TCP traffic. The algorithms were tested using 50 nodes in a simulation area of 1000 m by 1000 m, where node locations change randomly. The number of connections used at a time is 30, and the speed of the nodes varies from 1 m/s to 10 m/s. Using TCP traffic we calculate the performance of these two protocols for different random based mobility models, i.e.:
Random Waypoint (RWP)

Random Walk (RW)
Random Direction (RD)
Probabilistic Random Walk (PRW)
4 Simulation results
The results of our simulation are presented in this section. First we discuss the results of both the AODV and DSDV protocols for the different metrics, and then we compare the two protocols.
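For reference, the sketch below (Python; the raw counters would in practice be extracted from the NS-2 trace files, which are not shown here) computes the performance metrics used in this section from packet counts, following their standard definitions:

def manet_metrics(sent, received, routing_pkts, total_delay_ms):
    # Standard MANET performance metrics from trace-file counters.
    pdf = 100.0 * received / sent            # Packet Delivery Fraction (%)
    nrl = routing_pkts / received            # Normalized Routing Load
    loss = 100.0 * (sent - received) / sent  # Packet Loss (%)
    delay = total_delay_ms / received        # Average End-to-End Delay (ms)
    return pdf, nrl, loss, delay

# Example with made-up counters:
print(manet_metrics(sent=1000, received=930, routing_pkts=420,
                    total_delay_ms=55800.0))
# -> (93.0, 0.4516..., 7.0, 60.0)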

4.1 AODV Result
4.1.1 Routing Overhead (packets)


Fig 1 Routing Overhead vs. Speed of Nodes
From fig. 1 we conclude that every mobility model suffers more variation in routing overhead as mobility increases. The Random Waypoint model generates the minimum overhead packets for every type of mobility, while the Probabilistic Random Walk model generates the highest routing load during the transfer of data packets from source node to destination node.


4.1.2 Packet Delivery Fraction (%)



Fig 2 Packet Delivery Fraction vs. Speed of Nodes
Fig. 2 shows that for the AODV protocol with TCP traffic, the Random Walk model gives better performance at low speed. At high speed, the Random Direction model is better than the other models.

4.1.3 Normalized Routing Load


Fig 3 Normalized Routing Load vs. Speed of Nodes
Fig. 3 indicates that for the AODV protocol with TCP traffic, the Random Waypoint model generates the minimum number of routing packets for the transmission of data packets at all speeds, while Random Direction generates the highest routing load.

4.1.4 Average End-to-End Delay


Fig 4 End-to-End Delay vs. Speed of Nodes
Fig. 4 shows that for the AODV protocol with TCP traffic, the Probabilistic Random Walk model gives better performance, taking the minimum time to transmit the data packets to the destination at both high and low speeds. As the speed increases, the Random Walk model's performance degrades considerably and it suffers the highest delay.


4.1.5 Packet Loss (%).


Fig 5 Packet Loss vs Speed of Nodes
Fig. 5 shows that for the AODV protocol with TCP traffic, the Random Walk model performs better at low speeds. At higher speeds, the Random Direction model has the minimum packet loss compared with the other mobility models, while the Random Walk model performs poorly as the speed increases.

4.2 DSDV Result

4.2.1 Routing Overhead (packets)


Fig 6 Routing Overhead vs. Speed of Nodes
Fig. 6 indicates that there is less variation in routing overhead for DSDV with the change in node mobility, for all models, as compared to the AODV protocol. Random Walk shows minimum overhead at 1.5 m/s and 3 m/s, while Random Direction gives better performance at 2 m/s and 2.5 m/s.


4.2.2 Packet Delivery Fraction (%)



Fig 7 Packet Delivery Fraction vs. Speed of Nodes
Fig. 7 shows that for the DSDV protocol with TCP traffic, the Random Direction model performs better at low speed with maximum packet delivery, while Random Walk is good for high speeds.

4.2.3 Normalized Routing Load


Fig 8 Normalized Routing Load vs. Speed of Nodes
Fig. 8 shows that for the DSDV protocol with TCP traffic, at speeds of 1.5 m/s and 3 m/s the Random Walk model generates the minimum routing load, while the Random Direction model performs better at 2 m/s and 2.5 m/s, generating the minimum number of routing packets.

4.2.4 Average End-to-End Delay



Fig 9 End-to-End Delay vs. Speed of Nodes
Fig. 9 shows that for the DSDV protocol with TCP traffic, the end-to-end delay is higher for every model compared with the AODV protocol. Here, the Random Direction model performs better at both low and high speeds, while the Random Walk model again performs very poorly, taking the longest time to send data packets from one end to the other.

4.2.5 Packet Loss (%).


Fig 10 Packet Loss vs Speed of Nodes
Fig. 10 shows that for the DSDV protocol with TCP traffic, the packet loss is much lower than with AODV. Random Direction has the minimum packet loss at low speed, while at high speed Random Walk performs better with minimum packet losses.

5 Comparison & Conclusion:

The comparison of both protocols for the different random mobility models is shown in the following table.

For both protocols, i.e. AODV and DSDV, the Random Walk model has the best performance, as the results shown in the table indicate.


6 Future works:
In this paper four random mobility models have been compared using the AODV and DSDV protocols. This work can be extended in the following directions:
Investigation of other MANET mobility models using different protocols under different types of traffic, such as CBR.
Different numbers of nodes and different node speeds.
REFERENCES:
[1] E.M. Royer & C.E. Perkins, "An Implementation Study of the AODV Routing Protocol", Proceedings of the IEEE Wireless Communications and Networking Conference, Chicago, IL, September 2000
[2] B.C. Lesiuk, "Routing in Ad Hoc Networks of Mobile Hosts", Available Online: http://phantom.me.uvic.ca/clesiuk/thesis/reports/adhoc/adhoc.html#E16E2
[3] Andrea Goldsmith, Wireless Communications, Cambridge University Press, 2005
[4] Bing Lin and I. Chlamtac, Wireless and Mobile Network Architectures, Wiley, 2000
[5] S.K. Sarkar, T.G. Basawaraju and C. Puttamadappa, Ad hoc Mobile Wireless Networks: Principles, Protocols and Applications, Auerbach Publications, pp. 1, 2008
[6] C.E. Perkins, E.M. Royer & S. Das, "Ad Hoc On Demand Distance Vector (AODV) Routing", IETF Internet draft, draft-ietf-manet-aodv-08.txt, March 2001
[7] C.E. Perkins & E.M. Royer, "Ad-hoc On-Demand Distance Vector Routing", Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, LA, February 1999, pp. 90-100
[8] E.M. Royer & C.K. Toh, "A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks", IEEE Personal Communications Magazine, April 1999, pp. 46-55
[9] D. Comer, Internetworking with TCP/IP Volume 1, Prentice Hall, 2000








To Analyze Joule Heating in Thermal Expansion with Copper Beryllium Alloy
Raman Babbar¹, Anurag Singh²
¹M.Tech Scholar, ECE, OITM
²Asst. Professor, ECE Dept., OITM
E-mail- babr.raman@gmail.com

Abstract- Nowadays biomedical, industrial and electrical applications such as optical switches, thermostats and bimetallic strip systems, which are parts of actuation and sensing components, are realized using thermal expansion actuators fabricated with Micro Electro Mechanical Systems (MEMS) technology. This paper studies, through a COMSOL model, the Joule heating properties of the actuation mechanism of a comb shaped thermal expansion actuator together with the displacement produced in the device. The device is made of a copper beryllium alloy, UNS C17500. The device becomes more efficient when UNS C17510 or UNS C26000 is used instead of the UNS C17500 listed above, further increasing the displacement.
Keywords: MEMS, RF MEMS, COMSOL, Bimetallic Strip, Joule Heating, Copper beryllium alloy, Thermostat

INTRODUCTION
MEMS has been identified as one of the most promising technologies for the 21st century and has the potential to revolutionize both industrial and consumer products by combining silicon-based microelectronics with micromachining technology. Its techniques and microsystem-based devices have the potential to dramatically affect all of our lives and the way we live [6]. The rapid growth of MEMS technology has generated a host of diverse developments in many different fields, such as RF MEMS, optical MEMS, biomedical science, and electrical and mechanical engineering [7]. In the electrical domain there are bimetallic strip systems [2], which are used in air conditioners, electric irons, thermostats, etc. Over the last two decades, optical fiber sensors have also seen increased acceptance and widespread use in scientific research and in diversified engineering applications [3], [1]. The principle of the bimetallic strip is based on thermal expansion [4].
DESIGNING
A thermal expansion actuator is a type of actuator in which the material expands when the temperature changes; a short illustrative sketch of the underlying strain relation follows the list below. The device is made of a copper beryllium alloy. The thermal balance consists of a balance of fluxes at steady state, with the heat flux given by conduction only. The heat source is a constant heat source of 1×10^8 W/m^3. The air cooling at the boundaries is expressed using a constant heat transfer coefficient of 10 W/(m^2·K) and an ambient temperature of 298 K. The expression for thermal expansion requires a strain reference temperature for the copper beryllium alloy, which in this case is 293 K [5]. In this model we use two sets of physics:
- A thermal balance with a heat source in the device, originating from Joule heating (ohmic heating). Air cooling is applied on the boundaries except at the positions where the device is attached to a solid frame, where an insulation condition is set.
- A force balance for the structural analysis with a volume load caused by thermal expansion. The device is fixed at the positions where it is attached to a solid frame, as shown in Figure 1.
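The V-shaped displacement behaviour reported below follows directly from the thermal strain relation ε = α(T - T_ref): the strain magnitude, and hence the displacement, grows with |T - T_ref|. The Python sketch below illustrates this; the expansion coefficient is only a placeholder, not the exact COMSOL material datum, and 273 K is used as the zero-displacement point because that is where the reported displacement is minimum:

# Thermal strain about the temperature of minimum displacement (273 K in
# the reported results); alpha is an illustrative value for a copper alloy.
alpha = 17e-6   # 1/K, placeholder thermal expansion coefficient
T_ref = 273.0   # K

for T in (50, 273, 298, 450):
    strain = alpha * (T - T_ref)
    print(f"T = {T:3d} K -> strain = {strain:+.2e} (magnitude {abs(strain):.2e})")
# The magnitude is smallest at T_ref and grows on both sides, which is
# why the displacement-temperature curves below are V-shaped.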

Figure1: Model Geometry of the Device
The displacement produced in the device is a function of temperature and of the heat source simultaneously. In this paper we study the effect of temperature on displacement. The displacement is minimum at 273 K (0 °C), and as the temperature moves away from this value (either increasing or decreasing) the displacement increases.
RESULT
(a) Thermal Expansion with UNS C17500
With 298 K as the external temperature, the maximum displacement is 5×10^-8 m. Figure 2 shows the temperature distribution in the device: the heat source increases the temperature to 323 K from the ambient temperature of 298 K, and the temperature varies by less than 1/100 of a degree across the device.

Figure2: Temperature distribution of the Device at 298K
Figure 3 shows the displacement of a curve that follows the top inner edges of the device from left to right.


Figure 3: Displacement Vs Position graph

Figure 3 plots the displacement produced along the top inner edges of the device against position along the edges.


Table 1: Displacement at different temperatures

Temperature (K)    Displacement, UNS C17500 (m)
450                3.00E-07
400                2.20E-07
350                1.40E-07
325                1.00E-07
298                5.00E-08
273                9.00E-09
250                3.00E-08
200                1.10E-07
150                1.80E-07
100                2.50E-07
50                 3.00E-07

The above table shows the displacement produced in the device over a range of temperatures. The displacement is minimum at 273 K; when the temperature is either above or below 273 K the displacement increases. The same displacement value (3.00E-07 m) occurs at two different temperatures (450 K and 50 K). This is shown in the next figure.

Figure 4: Temperature vs. displacement graph for UNS C17500 (displacement in m against temperature in K)

Figure 4 shows the relation between temperature and the displacement produced at the top inner edges of the device. The graph is V-shaped, showing that 273 K acts as the reference temperature: at this temperature the displacement is minimum, and the displacement increases on both sides of the reference temperature.
(b) Thermal Expansion with UNS C17510
When UNS C17510 is loaded into the device, it shows more displacement than the base material. In this case the displacement increases to 1×10^-8 m at 273 K (the reference temperature). At 298 K the displacement is 5.5×10^-8 m with the new material, whereas with the base material the displacement is 5×10^-8 m at the same 298 K. This increase is due to the variation in the chemical composition of the alloys.
Figure 5: Temperature distribution of the Device at 298 K

Figure 6 shows the displacement of a curve that follows the top inner edges of the device from left to right.


Figure 6 Displacement vs Position graph

Figure 6 plots the displacement produced along the top inner edges of the device against position along the edges.



Table 2: Displacement at different temperatures

Temperature (K)    Displacement, UNS C17510 (m)
450 3.50E-07
400 2.50E-07
350 1.40E-07
325 1.00E-07
298 5.50E-08
273 1.00E-08
250 3.00E-08
200 1.20E-07
150 2.00E-07
100 3.00E-07
50 4.00E-07


The above table shows the displacement produced in the device over a range of temperatures. The displacement is minimum at 273 K; when the temperature is either above or below 273 K the displacement increases. The displacement increases by a greater factor in this case: the displacement is 3.5×10^-7 m at 450 K when UNS C17510 is used, against 3×10^-7 m with the base material.

Figure 7: Temperature vs. displacement graph for UNS C17510 (displacement in m, from 0 to 4.50E-07, against temperature in K)

Figure 7 shows the relation between temperature and the displacement produced at the top inner edges of the device. The graph is V-shaped, with 273 K as the reference temperature: at this temperature the displacement is minimum, and the displacement increases on both sides of the reference temperature.

(c) Thermal Expansion with UNS C26000
When UNS C26000 is loaded into the device, it shows more displacement than the two alloys listed above. In this case the displacement increases to 1.2×10^-8 m at 273 K (the reference temperature). At 298 K the displacement is 7×10^-8 m with UNS C26000, whereas with UNS C17510 the displacement is 5.5×10^-8 m and with the base material it is 5×10^-8 m at the same 298 K. This increase is due to the variation in the chemical composition of the alloys.
Figure 8: Temperature distribution of the Device at 298 K
Figure 9 shows the displacement of a curve that follows the top inner edges of the device from left to right.

Figure 9: Displacement Vs Position graph

The above graph plots the displacement produced along the top inner edges of the device against position along the edges.

Table 3: Displacement at different temperatures

Temperature (K)    Displacement, UNS C26000 (m)
450 4.00E-07
400 3.00E-07
350 1.80E-07
325 1.20E-07
298 7.00E-08
273 1.20E-08
250 3.50E-08
200 1.20E-07
150 2.00E-07
100 3.00E-07
50 3.50E-07


The above table shows the displacement produced in the device over a range of temperatures. The displacement is minimum at 273 K; when the temperature is either above or below 273 K the displacement increases. The displacement increases by a greater factor in this case: the displacement at 450 K is 4×10^-7 m with UNS C26000, against 3.5×10^-7 m with UNS C17510 and 3×10^-7 m with the base material.


Figure 10: Temperature vs. displacement graph for UNS C26000
Figure 10 shows the relation between temperature and the displacement produced at the top inner edges of the device. The graph is V-shaped, with 273 K as the reference temperature: at this temperature the displacement is minimum, and the displacement increases on both sides of the reference temperature.


(d) COMPARATIVE ANALYSIS
When we compare all three materials listed above, we note that the displacement is largest in the last case, as shown in the table.
Table 4: Displacement at different temperatures

Temperature (K)   Displacement UNS C17500 (m)   Displacement UNS C17510 (m)   Displacement UNS C26000 (m)
450 3.00E-07 3.50E-07 4.00E-07
400 2.20E-07 2.50E-07 3.00E-07
350 1.40E-07 1.40E-07 1.80E-07
325 1.00E-07 1.00E-07 1.20E-07
298 5.00E-08 5.50E-08 7.00E-08
273 9.00E-09 1.00E-08 1.20E-08
250 3.00E-08 3.00E-08 3.50E-08
200 1.10E-07 1.20E-07 1.20E-07
150 1.80E-07 2.00E-07 2.00E-07
100 2.50E-07 3.00E-07 3.00E-07
50 3.00E-07 4.00E-07 3.50E-07
The above table shows the displacement produced at the top inner edges of the device at different temperatures.
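The percentage improvements quoted in the conclusion can be reproduced (to rounding) from Table 4; the Python sketch below, with the values copied from the table, does this for the two extreme temperatures:

# Displacements (m) at 450 K (hot region) and 50 K (cold region), Table 4.
base   = {"450K": 3.00e-07, "50K": 3.00e-07}   # UNS C17500
c17510 = {"450K": 3.50e-07, "50K": 4.00e-07}
c26000 = {"450K": 4.00e-07, "50K": 3.50e-07}

for T in ("450K", "50K"):
    for name, alloy in (("UNS C17510", c17510), ("UNS C26000", c26000)):
        gain = 100.0 * (alloy[T] - base[T]) / base[T]
        print(f"{T}, {name}: {gain:.0f}% more displacement than UNS C17500")
# 450K: C17510 about +17%, C26000 +33%; 50K: C17510 +33%, C26000 about +17%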

Figure 11: Temperature vs. displacement graph
Figure 11 shows the temperature vs. displacement variation for all three materials.

CONCLUSION

From the results it is concluded that, of the three materials used above, UNS C26000 shows the best result, because in this case the displacement increases by the greatest factor; this is due to the variation in the chemical composition of the alloy. It is noted that in the hot region UNS C26000 (triangle markers in Figure 11) is better: in this region the displacement is increased by 33% when UNS C26000 is used and by 16% when UNS C17510 is used. In the cold region the result is reversed: the displacement is increased by 16% when UNS C26000 is used and

by 33% when UNS C17510 is used. So it is concluded that in the hot region UNS C26000 is better and in the cold region UNS C17510 is better.
PROPOSED FUTURE WORK
In future we wish to redesign the thermal expansion actuator from a straight design into a serpentine/zigzag shape. This would change the displacement at the top inner edges of the device, so we would redesign the device accordingly.
REFERENCES:
[1] Behrens, V., Honig, T., Kraus, A., "Influence of the contact material on the performance of temperature-dependent switching controllers in household appliances", IEEE journal of electrical contacts, pp. 235-239, 2000.
[2] D. A. Krohn, Fiber Optic Sensor, Fundamental and Application, 3rd ed. New York: ISA, 2000.
[3] J. Dakin and B. Culshaw, Optical Fiber Sensors: Applications, Analysis and Future Trends. Boston, MA: Artech
House, 1997.
[4] Abe, Osamu, Taketa, Yoshiaki , Effects of substrate thermal expansion coefficient on the physical and electrical properties of thick film
resistors IEEE journal of Electronic Manufacturing Technology Symposium, pp 259-262, 1989.
[5] WWW.COMSOL.CO.IN/MODEL/JOULE-HEATING-IN-A-MEMS-DEVICE-1840
[6] WWW.SPS.AERO/KEY_COMSPACE.../TSA002_NEMS_WHITE_PAPER
[7] WWW.MEMS-ISSYS.COM/































Detection of High Rate STBC in Frequency Selective m-Nakagami Fading Environment
Devendra Kumar¹, Nandkishor S Vansdadiya¹, Amit Kumar Kohli¹
¹Scholar, Electronics and Communication Dept., Thapar University, Patiala
E-mail- devchauhan151188@gmail.com

Abstract- In this article we are concerned with the performance of a high rate space time block code (STBC) scheme in a frequency selective fading environment. Due to the large channel delay spread in a frequency selective channel, inter symbol interference (ISI) occurs. In high rate STBC, ISI arises from the loss of the quasi-static assumption, and the classical zero forcing (ZF) receiver produces an error floor in the bit error rate (BER) performance. In this article we propose and evaluate a low complexity zero forcing receiver, which reduces the complexity of the receiver for the detection of high rate STBC, and we evaluate its performance in an m-Nakagami fading environment.
Keywords- Space time block code (STBC), Inter symbol interference (ISI), Zero forcing (ZF).
Introduction
A simple and powerful diversity technique using two transmit antennas was first proposed by Alamouti. The space time block coding (STBC) scheme has been proposed for several wireless applications due to its many attractive features [1]. First, at full transmission rate it achieves full spatial diversity for any signal constellation with 2 transmit antennas. Second, it does not require channel side information at the transmitter. Third, maximum likelihood decoding of STBC is done by simple linear processing. The range and data rate of wireless networks is limited; to enhance the data rates and the quality, multiple antennas can be used at the receiver to obtain diversity. By utilizing multiple antennas at transmitter and receiver, significant capacity advantages can be obtained in a wireless system. In a Multiple Input Multiple Output (MIMO) system, multiple transmit and receive antennas can elevate the capacity of the transmission link, and this extra capacity can be utilized to enlarge the diversity gain of the system. This resulted in the development of Lucent's Bell-Labs layered space-time (BLAST) architecture [5]-[6] and space time block codes (STBCs) [1], [7] to attain some of this capacity.
A high rate (rate = 5/4) full diversity orthogonal STBC for QAM and 2 transmit antennas (Tx) is obtained by expanding the signalling set from the set of quaternions used in the Alamouti [1] code. To preserve full diversity, selective power scaling of the information symbols is used, while maximizing the coding gain (CG) and minimizing the transmitted signal peak to minimum power ratio (PMPR). Analytically, we derive the optimum power scaling

factor and we seen that it achives better performance with the help of rotation of constellation points, decoding
is performed using low complexity maximum likelihood decoding algorithm[2].
After studying the literature we came to know that in [3] They have designed high rate STBC system, by not
considering loss of quasi-static assumption due to frequency selectivity phenomenon of channel. Due to
frequency selectivity of channel causes ISI which results in error floor in bit error rate. In section I (A) we
describe the high rate STBC system for frequency flat case, in section I(B) to compact the effect of ISI in
frequency selective environment, we proposed low comlexity zero forcing which reduces the complexity of
equalizer. In section II and III, we describe the simulation results in m-Nakagami fading environment and
Conclusion respectively.
System Model of High rate STBC:
The simplest complex orthogonal design is the 2x2 code

$$G_1(x_1,x_2)=\begin{pmatrix} x_1 & x_2 \\ -x_2^{*} & x_1^{*} \end{pmatrix}$$

found by Alamouti [1], where $(\cdot)^{*}$ denotes the complex conjugate. This code accomplishes rate-1 at full diversity. The correspondence between Alamouti matrices and quaternions is that the set of Alamouti matrices is closed under inversion, addition and multiplication. Consider the set $G_2$ of 2x2 orthogonal matrices given by

$$G_2(x_1,x_2)=G_1(x_1,x_2)\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

Transmission model:
Different antennas are represented by the columns of $G_1$, different time slots by its rows, and two symbols are transmitted in the frequency selective fading environment. We use QAM modulation in this system; the space-time matrix transmitted is either $G_1$ or $G_2$ according to an extra information bit of 1 or 0, respectively, as shown in Fig. 1. To regain full diversity, a strategy based on rotation of the information symbols has been proposed. In this paper, we assume the information symbols in $G_2$ are divided by a real scalar $K$ ($K>1$) to ensure full diversity, hence the name selective power scaling. For a QAM constellation of unit radius, scaling leads to an overall signal constellation consisting of two concentric circles of radius 1 and $1/K$ [4]. The optimum power scaling factor $K$ must ensure full diversity for the high-rate STBC.









Fig. 1 Block diagram of rate-5/4 STBC for QAM Modulation (information bits d0-d4 are mapped through two QAM mappers onto G1(.) or G2(.), selected by the extra bit d0, and transmitted on antennas Tx1 and Tx2)

As $K>1$, the average transmitted power is reduced as compared to the case of no scaling. Two important criteria for selecting $K$ are maximizing the CG and minimizing the PMPR due to power scaling. We select $K_{opt}=3$, as proposed in [3].
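The two-circle constellation produced by this scaling is easy to visualize. The sketch below (Python/NumPy, not from the original paper; an illustrative QPSK ring stands in for the unit-radius QAM constellation) scales the inner ring by 1/K_opt:

import numpy as np

K_opt = 3.0
# Unit-radius QPSK points standing in for a QAM ring.
ring = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

outer = ring          # symbols sent via G1, radius 1
inner = ring / K_opt  # symbols sent via G2, radius 1/K_opt

print("outer radius:", np.abs(outer[0]))  # 1.0
print("inner radius:", np.abs(inner[0]))  # ~0.333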
Received signal model of High rate STBC:
Time domain representation of the received signal at the receiver is

$$\begin{pmatrix} r_{t} \\ r_{t+1}^{*} \end{pmatrix}=\begin{pmatrix} h_{1,t} & h_{2,t} \\ h_{2,t+1}^{*} & -h_{1,t+1}^{*} \end{pmatrix}\begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix}+\begin{pmatrix} n_{t} \\ n_{t+1}^{*} \end{pmatrix}, \qquad \mathbf{R}=\mathbf{H}\mathbf{X}+\mathbf{N}' \qquad (1)$$

and, for a code word transmitted in the form of $G_2$,

$$\begin{pmatrix} r_{t} \\ r_{t+1}^{*} \end{pmatrix}=\frac{1}{K_{opt}}\begin{pmatrix} h_{1,t} & -h_{2,t} \\ -h_{2,t+1}^{*} & -h_{1,t+1}^{*} \end{pmatrix}\begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix}+\begin{pmatrix} n_{t} \\ n_{t+1}^{*} \end{pmatrix}, \qquad \mathbf{R}_{opt}=\mathbf{H}_{opt}\mathbf{X}+\mathbf{N}' \qquad (2)$$


where $\mathbf{R}$ and $\mathbf{R}_{opt}$ are the matrix representations of the time domain received signals corresponding to code words transmitted in the form of $G_1$ and $G_2$, respectively, and $h_{i,t}$, $i=1,2$, is the channel path gain from transmitter $i$ at time instant $t$.
For frequency flat condition:
Due to the frequency flat nature of the channel, $h_{1,t}=h_{1,t+1}$ and $h_{2,t}=h_{2,t+1}$. Therefore the received signal of eq. (1) becomes

$$\begin{pmatrix} r_{t} \\ r_{t+1}^{*} \end{pmatrix}=\begin{pmatrix} h_{1,t} & h_{2,t} \\ h_{2,t}^{*} & -h_{1,t}^{*} \end{pmatrix}\begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix}+\begin{pmatrix} n_{t} \\ n_{t+1}^{*} \end{pmatrix}, \qquad \mathbf{R}=\mathbf{H}\mathbf{X}+\mathbf{N}' \qquad (3)$$

and matched filtering gives

$$\tilde{\mathbf{R}}=\mathbf{H}^{H}(\mathbf{H}\mathbf{X}+\mathbf{N}') \qquad (4)$$

where

$$\mathbf{H}^{H}\mathbf{H}=\begin{pmatrix} |h_{1,t}|^{2}+|h_{2,t}|^{2} & 0 \\ 0 & |h_{1,t}|^{2}+|h_{2,t}|^{2} \end{pmatrix}.$$

Here the off-diagonal elements of $\mathbf{H}^{H}\mathbf{H}$ are zero, so there is no inter symbol interference. Similarly, from eq. (2),

$$\mathbf{R}_{opt}=\mathbf{H}_{opt}\mathbf{X}+\mathbf{N}' \qquad (5)$$

$$\tilde{\mathbf{R}}_{opt}=\mathbf{H}_{opt}^{H}(\mathbf{H}_{opt}\mathbf{X}+\mathbf{N}') \qquad (6)$$

where

$$\mathbf{H}_{opt}^{H}\mathbf{H}_{opt}=\frac{1}{K_{opt}^{2}}\begin{pmatrix} |h_{1,t}|^{2}+|h_{2,t}|^{2} & 0 \\ 0 & |h_{1,t}|^{2}+|h_{2,t}|^{2} \end{pmatrix}.$$

Here, too, the off-diagonal elements of $\mathbf{H}_{opt}^{H}\mathbf{H}_{opt}$ are zero, so there is no inter symbol interference.
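As a quick numerical check of this diagonality (a sketch in Python/NumPy using the channel matrix form of eq. (1); it is not part of the original paper):

import numpy as np

rng = np.random.default_rng(0)
# Frequency flat: the gains at t and t+1 coincide, h_{i,t} = h_{i,t+1}.
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)

H = np.array([[h1, h2],
              [np.conj(h2), -np.conj(h1)]])

print(np.round(H.conj().T @ H, 10))
# The product is (|h1|^2 + |h2|^2) * I: the off-diagonal terms vanish,
# so x1 and x2 can be detected independently (no ISI).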

Proposed Low complexity ZF receiver:
Due to the frequency selective nature of the channel, the quasi-static assumption is lost and ISI arises: now $h_{1,t}\neq h_{1,t+1}$ and $h_{2,t}\neq h_{2,t+1}$. The received signal of eq. (1) is still

$$\mathbf{R}=\mathbf{H}\mathbf{X}+\mathbf{N}', \qquad \tilde{\mathbf{R}}=\mathbf{H}^{H}(\mathbf{H}\mathbf{X}+\mathbf{N}')$$

but now

$$\mathbf{H}^{H}\mathbf{H}=\begin{pmatrix} |h_{1,t}|^{2}+|h_{2,t+1}|^{2} & e_{1} \\ e_{2} & |h_{2,t}|^{2}+|h_{1,t+1}|^{2} \end{pmatrix}$$

with $e_{1}=h_{1,t}^{*}h_{2,t}-h_{2,t+1}h_{1,t+1}^{*}$ and $e_{2}=h_{2,t}^{*}h_{1,t}-h_{1,t+1}h_{2,t+1}^{*}$. The off-diagonal elements of $\mathbf{H}^{H}\mathbf{H}$ are not zero, so there is inter symbol interference due to the loss of the quasi-static assumption. To mitigate the effect of ISI we propose Low Complexity Zero Forcing (LZF).

$$\mathbf{R}_{LZF}=\tilde{\mathbf{H}}_{LZF}(\mathbf{H}\mathbf{X}+\mathbf{N}') \qquad (7)$$

$$\hat{\mathbf{X}}=(\tilde{\mathbf{H}}_{LZF}\mathbf{H})^{-1}\mathbf{R}_{LZF} \qquad (8)$$

where

$$\tilde{\mathbf{H}}_{LZF}=\begin{pmatrix} h_{1,t}^{*} & L_{t}\,h_{2,t+1} \\ h_{2,t}^{*} & -L_{t}^{*}\,h_{1,t+1} \end{pmatrix}, \qquad L_{t}=\frac{h_{1,t}^{*}\,h_{2,t}}{h_{1,t+1}^{*}\,h_{2,t+1}},$$

so that

$$\tilde{\mathbf{H}}_{LZF}\mathbf{H}=\begin{pmatrix} |h_{1,t}|^{2}+L_{t}|h_{2,t+1}|^{2} & 0 \\ 0 & |h_{2,t}|^{2}+L_{t}^{*}|h_{1,t+1}|^{2} \end{pmatrix}.$$


Similarly, from eq. (2),

$$\mathbf{R}_{opt}=\mathbf{H}_{opt}\mathbf{X}+\mathbf{N}', \qquad \tilde{\mathbf{R}}_{opt}=\mathbf{H}_{opt}^{H}(\mathbf{H}_{opt}\mathbf{X}+\mathbf{N}')$$

where now

$$\mathbf{H}_{opt}^{H}\mathbf{H}_{opt}=\begin{pmatrix} \frac{1}{K_{opt}^{2}}\left(|h_{1,t}|^{2}+|h_{2,t+1}|^{2}\right) & e_{1}^{opt} \\ e_{2}^{opt} & \frac{1}{K_{opt}^{2}}\left(|h_{2,t}|^{2}+|h_{1,t+1}|^{2}\right) \end{pmatrix}$$

with $e_{1}^{opt}=\frac{1}{K_{opt}^{2}}\left(h_{2,t+1}h_{1,t+1}^{*}-h_{1,t}^{*}h_{2,t}\right)$ and $e_{2}^{opt}=\frac{1}{K_{opt}^{2}}\left(h_{1,t+1}h_{2,t+1}^{*}-h_{2,t}^{*}h_{1,t}\right)$. The off-diagonal elements of $\mathbf{H}_{opt}^{H}\mathbf{H}_{opt}$ are not zero, so there is inter symbol interference due to the loss of the quasi-static assumption. To mitigate the effect of ISI we again apply the proposed Low Complexity Zero Forcing (LZF):

$$\mathbf{R}_{LZF}^{opt}=\tilde{\mathbf{H}}_{LZF}^{opt}(\mathbf{H}_{opt}\mathbf{X}+\mathbf{N}') \qquad (9)$$

$$\hat{\mathbf{X}}_{opt}=(\tilde{\mathbf{H}}_{LZF}^{opt}\mathbf{H}_{opt})^{-1}\mathbf{R}_{LZF}^{opt} \qquad (10)$$

where

$$\tilde{\mathbf{H}}_{LZF}^{opt}=\frac{1}{K_{opt}}\begin{pmatrix} h_{1,t}^{*} & -L_{t}\,h_{2,t+1} \\ -h_{2,t}^{*} & -L_{t}^{*}\,h_{1,t+1} \end{pmatrix}$$

so that

$$\tilde{\mathbf{H}}_{LZF}^{opt}\mathbf{H}_{opt}=\frac{1}{K_{opt}^{2}}\begin{pmatrix} |h_{1,t}|^{2}+L_{t}|h_{2,t+1}|^{2} & 0 \\ 0 & |h_{2,t}|^{2}+L_{t}^{*}|h_{1,t+1}|^{2} \end{pmatrix}.$$



Applying low complexity zero forcing in eq. (7) and eq. (9), we generate two candidate solutions, namely $\hat{\mathbf{X}}$ and $\hat{\mathbf{X}}_{opt}$, which are compared using the metrics $\|\mathbf{R}-\mathbf{H}\hat{\mathbf{X}}\|^{2}$ and $\|\mathbf{R}_{opt}-\mathbf{H}_{opt}\hat{\mathbf{X}}_{opt}\|^{2}$. The decoding of $d_{0}$ follows directly once the decision between $\hat{\mathbf{X}}$ and $\hat{\mathbf{X}}_{opt}$ is made.
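To make the behaviour of the proposed filter concrete, the sketch below (Python/NumPy) checks numerically that the low complexity filter renders the effective matrix diagonal even when the channel changes between the two symbol periods. It follows the equations as reconstructed above, so the exact matrix forms should be read as our interpretation of the source rather than verbatim from it:

import numpy as np

rng = np.random.default_rng(1)
# Frequency selective: the gains differ at t and t+1.
h1t, h2t, h1t1, h2t1 = rng.normal(size=4) + 1j * rng.normal(size=4)

H = np.array([[h1t, h2t],
              [np.conj(h2t1), -np.conj(h1t1)]])

# Scaling factor L_t chosen to null the off-diagonal terms of H_LZF @ H.
Lt = (np.conj(h1t) * h2t) / (np.conj(h1t1) * h2t1)
H_LZF = np.array([[np.conj(h1t), Lt * h2t1],
                  [np.conj(h2t), -np.conj(Lt) * h1t1]])

print(np.round(H_LZF @ H, 10))  # off-diagonal entries are (numerically) zero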
SIMULATION RESULTS:
Simulation results shown in this paper are obtained using MATLAB v7.5.0. The symbol error rate performance of the two-transmitter, one-receiver antenna system (high rate STBC) was investigated through computer simulation. We assume that channel state information (CSI) is perfectly known at the receiver.
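The paper does not list its channel-generation code; a common way to synthesize Nakagami-m fading amplitudes (a Python sketch, using the standard square-root-of-Gamma construction) is:

import numpy as np

rng = np.random.default_rng(42)

def nakagami_gain(m, omega, size):
    # Nakagami-m amplitude: square root of a Gamma random variable
    # with shape m and scale omega/m, so that E[h^2] = omega.
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

# m = 1 reduces to Rayleigh fading; m < 1 is more severe, m > 1 milder.
for m in (0.5, 1.0, 2.0):
    h = nakagami_gain(m, omega=1.0, size=100_000)
    print(f"m = {m}: mean power {np.mean(h**2):.3f} (should be close to 1.0)")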


Fig. 2 Performance of Alamouti STBC and High rate STBC in m-Nakagami fading channel for frequency flat
condition
The Rayleigh fading channel is a special case of the Nakagami-m fading channel with m = 1. Fast fading occurs if the coherence time is smaller than the symbol duration of the signal (T_s > T_c); this is the case when m < 1. Such channels become time varying, and within a symbol duration rapid changes occur in the impulse response of the channel. In STBC systems an assumption is made that the channel impulse response remains the same for two consecutive symbols; practically this does not hold for m < 1. Therefore the performance of STBC or high rate STBC degrades for m < 1 as compared to m >= 1.



Fig. 3 Performance of proposed receiver for Alamouti STBC and High rate STBC in m-Nakagami fading
channel for frequency selective condition.

Fig. 2 shows comparisons of high rate STBC and Alamouti STBC for different values of m under the frequency flat condition. We see that at lower values of SNR, for m > 1, the effective throughput is higher for both high rate STBC and full rate STBC as compared to m = 1, while the effective throughput for m < 1 is lower than for m = 1 for both systems. We also see that the effective throughput of the high rate scheme becomes constant at 2.5 at high SNR, whereas the full rate STBC reaches an effective throughput of 2 at high SNR.
Fig. 3 shows the performance of the proposed low complexity zero forcing receiver under frequency selectivity of the channel. At high values of SNR (e.g., 6 dB to 20 dB) the proposed low complexity ZF receiver gives better effective throughput for m < 1; for m = 1 the effective throughput lies between those for m < 1 and m > 1. At lower values of SNR (e.g., 0 dB to 6 dB) the proposed receiver achieves better effective throughput for m > 1.

With the help of the proposed receiver, at higher values of SNR the full rate STBC achieves an effective throughput close to that of the frequency flat condition while reducing complexity at the receiver. For m = 1 the full rate STBC achieves better performance compared to m < 1 and m > 1.
Conclusion:
We proposed a low complexity zero forcing receiver which achieves better effective throughput as compared to full diversity STBC for m > 1. The proposed receiver reduces receiver complexity by making the off-diagonal elements of the matched filter output zero. The approach can be further extended to the rate-9/8 STBC proposed in [3].
REFERENCES:
[1] S. M. Alamouti, "A simple transmit diversity technique for wireless communications", IEEE J. Select. Areas Commun., vol. 16, pp. 1451-1458, Oct. 1998.
[2] V. Tarokh, H. Jafarkhani, and A. R. Calderbank, "Space-time block codes from orthogonal designs", IEEE Trans. Inform. Theory, vol. 45, pp. 1456-1467, July 1999.
[3] Sushanta Das, Naofal Al-Dhahir and Robert Calderbank, "Novel full-diversity high-rate STBC for 2 and 4 transmit antennas", IEEE Commun. Lett., vol. 10, pp. 171-173, March 2006.
[4] W. Su and X.-G. Xia, "Signal constellations for quasi-orthogonal space time block codes with full diversity", IEEE Trans. Inform. Theory, vol. 50, pp. 2331-2347, Oct. 2004.
[5] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas", Wirel. Pers. Commun., vol. 6, no. 3, pp. 311-335, Mar. 1998.
[6] G. J. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multi element antennas", Bell Labs Tech. J., vol. 1, no. 2, pp. 41-59, Autumn 1996.
[7] E. Lindskog and A. Paulraj, "A transmit diversity scheme for channels with interference", in Proc. IEEE ICC, New Orleans, LA, Jun. 2000, vol. 1, pp. 307-311.





Perspectives of Transport and Disposal of Municipal Solid Waste in Srinagar City
Niyaz Ahmad Khan¹
¹Lecturer, Higher Education

Abstract
Transport and disposal of municipal solid waste is one of the major environmental problems of Srinagar city; improper transport and disposal of municipal solid waste (MSW) causes hazards to inhabitants. The amount of solid waste generated in the world is steadily increasing, and every government is currently focusing on methods to approach the challenge. This paper presents a case study on municipal solid waste transport and disposal in the city of Srinagar in Jammu and Kashmir, India, and its practice as lessons learnt. Srinagar has a land area of approximately 279 sq. km with a population of 12.03 lakh. Over the past three decades, MSW generation in Srinagar has increased tremendously, from 180 tons in 1981 to 530 tons in 2011, largely as a result of rapid population growth and economic development in the country. The daily per capita generation of municipal solid waste in India ranges from about 100 g in small towns to 500 g in large towns, and in Srinagar it is 271 g. Currently 65-70% of the municipal solid waste generated in Srinagar city is collected by the door to door collection method and street bin systems and is transported for dumping to an open landfill site at Syedpora Achan, about 6 km from the center of Srinagar city; the remaining 30-35% of the waste is dumped illegally into depressions, river embankments and unattended open spaces, or is locally burnt by individuals or Safai Karamcharis, creating a nuisance for the public as well as acting as breeding grounds for disease. In order to provide proper transport and disposal of municipal solid waste in Srinagar, this study recommends that clear goals and timeframes be established, that the duties and responsibilities of the local government, NGOs and the Srinagar Municipal Authority be defined, and that funding be allocated in order to produce an effective waste management framework in the city.
Key words: Srinagar city, Achan, landfill disposal, solid waste management, Safai Karamcharis, municipal wards, Srinagar municipality.
Introduction:
Municipal solid waste is a heterogeneous mixture of paper, plastic, cloth, metal, glass, organic matter, etc. generated from
households, commercial establishments and markets. The proportion of different constituents of waste varies from season
to season and place to place, depending on the lifestyle, food habits, standards of living, the extent of industrial and
commercial activities in the area, etc (Katju, 2006). Solid wastes comprise all the wastes arising from human and animal
activities that are normally solid, discarded as useless or unwanted. Solid wastes are those organic and inorganic waste
materials produced by various activities of the society, which have lost their value to the first user. Improper transport and
disposal of solid wastes pollutes all the vital components of the living environment (i.e., air, land and water) at local and
global levels. There has been a significant increase in MSW (municipal solid waste) generation in Srinagar in the last
few decades. This is largely because of rapid population growth and economic development. Poor collection and
inadequate transportation are responsible for the accumulation of MSW at every nook and corner. According to
Tchobanoglous et al. (1993), solid waste management may be defined as the discipline associated with the control of
generation, storage, collection, transfer and transport, processing and disposal of wastes in a manner that is in accord with
the best principles of public health, economics, engineering, conservation, aesthetics, and other environmental
considerations that are also responsive to public attitudes. Management of municipal solid waste continues to remain one
of the most neglected areas of urban development in India, and the same is the case in Srinagar city. Municipal solid waste generation in Srinagar has increased from 180 tons to 530 tons within the last 30 years. This tremendous increase in MSW has placed great pressure on the Govt. and the Srinagar municipality for proper collection, transport and disposal of the waste.
Srinagar city has been divided into 24 municipal wards from which garbage is collected by using door-to-door collection
and street bin systems.
The significance of waste transportation has increased manifold due to the increase in population, area and per capita waste generation. Transportation of solid waste involves several steps that are necessary for proper disposal. In Srinagar Municipality, transportation of waste is done in many ways. Waste is transferred from the source to secondary storage/community bins by hand carts, wheel barrows and tricycles for onward transfer to the dumping site. From the transfer point, the waste is then lifted and removed by the collection vehicles through loading operations; this loading is both manual and mechanical, depending on the type of MSW and the location. At this point the waste is in transit storage and remains so during the transportation operation for a few hours. The waste is then hauled and ultimately reaches the final disposal site the same day, where it is unloaded, and the collection vehicles return to the generation site for the next load.
Municipal solid waste is regularly disposed of at the open dumpsite of Srinagar city, the Syedpora Achan landfill site, situated at a distance of 6 Kms from the city centre. For maximizing the efficiency and effectiveness of this service, it is essential to tackle this problem systematically and scientifically, going through all aspects of solid waste management, including door-to-door waste collection, transportation of waste and development of landfill sites, in a cost-effective and eco-friendly manner that may ensure an adequate level of sanitation services to all classes of citizens.
The scope of the study is limited to the transport and disposal practices in operation, including manpower, organization and maintenance, collection, transfer and transportation, processing/disposal, and the selection of viable alternative strategies for modernization of MSWM in the city.
Study Area:
Srinagar, the summer capital of Jammu and Kashmir State, is situated in the heart of the oval-shaped Valley of Kashmir.
Srinagar is located in the northernmost part of India, between 74°-56′ and 75°-79′ East longitude and 33°-18′ and 34°-45′ North latitude. The Srinagar municipal area, which covered 12.80 Sq. Kms in 1901, increased to 24.52 Sq. Kms in 1951, 41.44 Sq. Kms in 1961 and 103 Sq. Kms in 1971. The city recorded widespread expansion from 1971 onwards and its area has increased to 279 Sq. Kms at present.
The population of Srinagar city, which was 6.06 lacs in 1981, has increased to 12.03 lacs in 2011. Due to its location, pronounced primacy, migration and rapid development, the city has recorded an accelerated pace of growth. In addition, being the capital city and centre of all major functions, its floating population (both incoming and outgoing) is also very high. Compared to its population growth, the provision of infrastructure facilities and basic services has been disproportionate, resulting in over-straining of already inadequate infrastructure and deficiency in basic services such as the sewerage/drainage system, water supply, sewage treatment and appropriate solid waste management. The Srinagar municipality provides regular solid waste management services in about 200.49 Sq. Kms of the 279 Sq. Kms area, which accounts for nearly 72% of the city area. For the convenience of conducting a study of solid waste management services, Srinagar city was divided into 5 zones, which are mentioned below:
I. Inner city.
II. Planned colonies.
III. Unplanned extensions.
IV. Settlements in water bodies.
V. Outlying urban fringes/recent extensions


MAP OF SRINAGAR CITY
Methodology:
The methodology adopted for collecting data on the generation, transport and disposal of municipal solid waste included collection of information pertaining to solid waste management (SWM) from published documents, data available with the agencies, and consultations. Primary surveys were conducted, especially for the assessment of the physical characteristics of the waste, along with consultations to understand the felt needs and priorities of the communities and the key stakeholders. In addition, secondary data were collected on the existing facilities, such as sweeping staff, implements for primary collection, storage capacities, transportation facilities, the existing institutional and organizational framework for SWM, operational and maintenance costs of SWM, major waste generating sources, existing collection routes and collection frequency.
Door-to-door sample collection was carried out for 3 days to assess waste generation at the household level. The house owners were given storage bags in the morning to store their waste; the bags were collected the next day and their contents weighed separately on a weighing balance.
Interactions were held with NGOs, town area committees and notified area committees engaged in the management and handling of municipal solid waste. Questionnaires on the inventory of municipal solid waste generation, collection, treatment and disposal were given to municipalities, NGOs and all other concerned agencies, with a request to return them after completion. After the data had been collected, they were analyzed and inferences were drawn with the help of various statistical measures.


The objectives of the present work are to gather information regarding recovery, disposal and recycling of solid waste, to know the human work force involved in solid waste transport and disposal, and to see the co-operation between the sub-urban areas and the municipal department in relation to the disposal of the waste.
Waste Generation:
The municipal authorities in Srinagar do not weigh the refuse vehicles regularly but estimate the quantities on the basis of the number of trips made by the collection vehicles. It is estimated that the solid waste generated in small, medium and large cities and towns in India is about 0.1 kg, 0.3-0.4 kg and 0.5 kg per capita per day, respectively. Studies carried out by the National Environmental Engineering Research Institute (1976) indicated that the per capita generation rate increases with the size of the city and varies between 0.3 and 0.6 kg/day. The quantity of solid waste actually generated at various sources and that reaching local dumps and the final dumping site for disposal are not the same; the difference is determined by the efficiency of collection and transportation of waste, retrieval of recyclable material at different levels and other factors. This study involves assessment of the quantity of waste generated at various functional levels, viz. household level, beat/local dump level, municipal ward level and zone level, besides the quantity of waste actually reaching the landfill site at Achan. Per capita waste generation for each household was calculated by dividing the average quantity of waste generated per day by the number of family members. An interesting finding of the study is that per capita waste generation in the various sample areas varies from 142 grams to 396 grams, with an average of 271 grams, which is close to the finding of NEERI and is shown in Table 1.
Table 1: Zone-wise per capita daily generation of solid waste:

S.No.  Zone  Description                                  MSW (gms)
1.     A     Inner City                                   264
2.     B     Planned Colonies                             323
3.     C     Unplanned Colonies                           248
4.     D     Settlements in Water Bodies                  271
5.     E     Outlying Urban Fringes/Recent Extensions     249

The zone-wise figures indicate that the maximum waste is generated in the planned colonies, which form medium-density areas. Generation is comparatively less in the outlying fringes, unplanned colonies and high-density inner areas. The waste generated in planned colonies is maximum because these areas are inhabited by economically richer people as compared to the other zones.
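To make the household-level arithmetic concrete, the short sketch below (Python) reproduces the calculation described above: the average daily household waste over the sampling days divided by family size. The weighings and family size here are hypothetical placeholders, not the actual survey data.

```python
# Per capita MSW generation: average daily household waste (g) divided by
# the number of family members. Figures below are hypothetical placeholders.

def per_capita_generation(daily_waste_g, family_size):
    """Return per capita waste generation in grams/person/day."""
    return daily_waste_g / family_size

weighings_g = [1350, 1480, 1240]   # three days of door-to-door weighings
family_size = 5

avg_daily_g = sum(weighings_g) / len(weighings_g)
print(round(per_capita_generation(avg_daily_g, family_size)))  # -> 271
```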
Present system for transportation of solid waste in Srinagar Municipality:
Municipal solid waste collected in community bins and other places is transported to the Achan dumping site using a variety of vehicles. Most of these vehicles make a number of trips every day to the disposal site through specified routes. The transfer of waste from community bins to the disposal site is done using a variety of vehicles, such as conventional trucks of non-tipping and tipping type, tractors with detachable trailers, and hydraulic lifting systems which directly lift the waste or relatively large containers to the disposal site. In Srinagar Municipality, transportation accounts for more than 75 per cent of the total expenditure on solid waste management. The existing transportation fleet of Srinagar Municipality for solid waste management is shown in Table 2:
Table 2: List of transport fleet vehicles/machines of Srinagar Municipality:

S.No.    Type of Vehicle                                      Existing Number    Actually Required
(i)      Mini Truck                                           05                 05
(ii)     Truck-Tipper                                         24                 34
(iii)    Hook Trailer (transfer station)                      --                 08
(iv)     1. Refuse Collector                                  01                 10
         2. Refuse Collector Bins                             20                 400
(v)      1. Dumper Placer vehicle                             12                 25
         2. Dumper Bins                                       110                400
(vii)    Tricycle                                             20                 500
(viii)   Hand Carts                                           500                500
(ix)     Wheel Barrows                                        1000               150
(x)      Containerized handcarts                              --                 2000
(xi)     Front-End Loader                                     20                 20
(xii)    TATA ACE for door-to-door collection of waste        --                 20
(xiii)   Road Sweeping Machines                               --                 04
(xiv)    Compactors for dumping site                          01                 03
(xv)     Snow clearance Dozer (mini dozer suited to
         clearing snow in lanes and by-lanes)                 --                 04
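Read against Table 2, the shortfall per vehicle type follows directly from the two columns. The sketch below tabulates the gap; figures are copied from the table (with "--" treated as an existing count of zero) and only a subset of rows is shown for brevity.

```python
# Fleet shortfall per vehicle type, taken from Table 2
# ("--" in the table is treated as an existing count of zero).
fleet = {  # type: (existing, actually required)
    "Truck-Tipper": (24, 34),
    "Hook Trailer (transfer station)": (0, 8),
    "Refuse Collector Bins": (20, 400),
    "Dumper Bins": (110, 400),
    "Tricycle": (20, 500),
    "Containerized handcarts": (0, 2000),
    "Compactors for dumping site": (1, 3),
}

for vehicle, (existing, required) in fleet.items():
    if required > existing:
        print(f"{vehicle}: short by {required - existing}")
```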

During the field survey of solid waste management, a number of transportation problems were identified in Srinagar Municipality:
inadequate transportation vehicles, resulting in delayed clearance of waste; lack of monitoring of vehicle trips and of the weight of waste carried; poor maintenance of vehicles and machinery, resulting in a large percentage of vehicles remaining off the road; lack of planning and poor supervision; use of open trucks and overloading of vehicles, resulting in littering of waste during transportation; and inadequate workshop facilities for maintaining vehicles and other machines. For augmentation of the municipal transport fleet, the corporation has to purchase modern garbage handling machinery and equipment. The Housing and Urban Development department has to provide funds so that some of the machinery and equipment required for collection and transportation of solid waste can be purchased and the solid waste management system improved.
Prevalent Administrative Set-up for Solid Waste Management:
In the prevalent solid waste management organizational set-up, sanitation, transportation and other related spheres are executed under the control of the Chief Sanitation Officer and Transport Officer, in coordination with the Executive Officer and Health Officer, who carry out solid waste collection and transportation through a team of Sanitary Inspectors, Sanitary Supervisors and Safia Karamcharis. The manpower available at present in Srinagar Municipality for solid waste management is given in Table 3:
Table 3: Existing Manpower Deployment for MSW Management

S.No.  Designation                                  No. of Posts
1.     Health Officer                               1
2.     Chief Sanitation Officer                     1
3.     Ward Officer                                 25
4.     Compost Officer                              1
5.     Sanitary Supervisors                         136
6.     Sanitary Supervisor-Compost                  9
7.     Sanitary Inspectors                          25
8.     Chauffeurs/Drivers                           62
9.     Cleaners                                     45
10.    Compost Coolies                              5
11.    Safiawalas (regular staff)                   1765
12.    Safiawalas/Mashkie (consolidated staff)      343
13.    Safiawalas (daily wager staff)               285

For proper solid waste management and decentralization of administration, Srinagar Municipality has been divided into 24 wards. One ward officer is in charge of each ward, assisted by one Sanitary Inspector in carrying out solid waste management activities. Non-governmental organizations also operate in some selected areas (e.g., Dal, Pir Bagh Co-operative Colony and Rawalpora) and carry out the collection of solid waste from houseboats/households. Thus, Srinagar Municipality is inadequately equipped to deal with the growing problems of solid waste management: the existing organizational set-up is under-staffed, under-trained and shouldering heavy responsibilities. In the absence of primary storage and appropriate secondary storage facilities for collection of MSW, street sweeping has attained considerable importance in upkeeping the health of Srinagar city. Safia Karamcharis sweep the streets and roads using long-handled brooms and collect heaps of waste at suitable locations on the road sides. Thereafter, the waste is transferred onto wheel barrows or tricycles and taken to the nearest community bin or open collection point for onward transfer to the disposal site. In Srinagar, at many places sweepers carry out the routine
sweeping on a beat basis, each beat consisting of 100 to 200 households including main roads, link roads and lanes. The sweeper-to-population ratio in Srinagar is 1.25 sweepers per 1000 persons.
The existing strength is not sufficient to cater to the requirements of 100% door-to-door collection of waste and sweeping of the city roads, which consist of 174 km of main roads and 860 running km of other roads. This shortage obviously affects the overall waste collection and road sweeping performance of the Srinagar Municipal Corporation. At present, about 530 tonnes of solid waste are produced within Srinagar municipal limits, out of which 382 tonnes are taken care of daily by the Srinagar municipality and disposed of at the Syedpora Achan dumping site without any resource recovery. The remaining 148 tonnes of waste is dumped illegally into depressions, river embankments and unattended open spaces, or is locally burnt by individuals or Safia Karamcharis.
Disposal of Municipal Solid Waste:
Srinagar city generates large quantities of waste, which is unscientifically and indiscriminately disposed of. At present, about 530 tonnes of solid waste is produced within Srinagar municipal limits, out of which 382 tonnes are taken care of daily by the Srinagar municipality and disposed of at the Syedpora Achan dumping site without any resource recovery. The remaining 148 tonnes of waste is dumped illegally into depressions, nallahs, river embankments and unattended open spaces, or is locally burnt by individuals or Safia Karamcharis. Municipal solid waste is regularly disposed of at the open dumpsite of Srinagar city, the Syedpora Achan landfill site, situated at a distance of 6 Kms from the city centre, spread over an area of about 30.63 hectares and in operation since 1985.
A detailed study of the dumping sites was also carried out to analyze the prevalent dumping practices at Syedpora Achan and at other ad hoc dumping sites within the city. At the Achan dumping site, proper separation and segregation of waste is not practiced. Rag pickers indulge in the activity for personal gain, collecting heaps of the few saleable items and spoiling the surroundings. The rags, papers, packing materials, polythene bags and other materials find their way to the Achan landfill and create problems of compaction. This has further added to the problem of midway dumping within the Achan landfill site, blocking the entry to the site. The waste is brought to the Achan landfill site by vehicles and is leveled occasionally by bulldozers. Proper soil cover is not applied at the site; as a result, the site has become a major public nuisance.
A detailed survey of the Syedpora Achan landfill site was carried out to get information about the existing dumping practices in Srinagar. The study of the landfill site revealed the following facts:
A landfill site must necessarily be away from inhabited areas and should be easily accessible and approachable. The Syedpora Achan site has an appropriate location; however, it has a difficult approach through a built-up area. The expansion of residential houses has started towards the landfill site, which needs to be checked.
In landfilling, separation and shredding are very important requisites. At Syedpora Achan and the ad hoc dumping sites of Srinagar Municipality, separation and shredding are not being properly done. At the Achan dumping site, soil cover is not applied properly; huge heaps of soil have been brought to the site but, unfortunately, are not spread over the waste. Soil cover is also not used at the ad hoc sites, which have become a public nuisance.
The waste is brought to the Achan dumping site by tippers/tractors, while it is brought to the ad hoc dumping sites by wheel barrows/hand carts. After tipping at Achan, the waste is leveled occasionally by bulldozers. In the absence of soil cover and shredding, whatever little compaction is achieved is done by trucks plying progressively over the extending waste surface. In the absence of appropriate compaction and leveling, exposed wastes were observed at the site.
The site presents a picture of heaps of waste with stray animals, birds and unauthorized rag pickers moving over the heaps. The odor from the site invades inhabited areas, and dust and smoke flying from the site often engulf large areas around it, posing the danger of a number of health disorders and damage to standing crops.

The Achan landfill site is spread over an area of 30.63 hectares of land, and dumping of municipal solid waste has been in process there since 1985. During the last 25 years, 45% of the area has been filled. It has enough capacity to serve the purpose for the next 10-15 years if the same process continues; however, if the dumping process is modernized and disposal of waste is done in a more scientific manner, it has sufficient capacity to absorb the full volume of the city's waste for about the next 20-25 years.

Municipal Administration at the Dumping Site:
At the dumping site there is a Chowkidar hut/record room, where vehicles coming to the landfill site for dumping of waste are recorded; about 17 staff members have been posted at this site. However, there is no technical municipal staff to take care of dumping and development of the site on scientific lines. The number of waste trucks coming daily to the Achan dumping site is about 48 trucks/day. In the absence of proper landfill development facilities, heaps of waste have piled up, which restrict the maneuverability of vehicles and compel them to unload the waste at the entry point. The manpower available at the dumping site is given in Table 5:
Table 5: Manpower available at site:

S.No.  Designation              Existing number
1.     Sanitary Supervisor      2
2.     Driver/Cleaner           2
3.     Sanitary workers         8
4.     Chowkidar                1
5.     Disinfectants            1
6.     Pump Operator            1
7.     Anti Rag-picker Squad    2
       Total                    17
The manpower available at the site is insufficient and is not properly trained in the proper procedures or in the use and maintenance of equipment. The workers are also unaware of the health hazards caused by unhygienic handling of waste. Lack of education and training severely limits proper disposal of waste.
The equipment available at the site includes two D50 chain dozers, provided by J&K ERA and used for compacting and leveling of garbage, one JCB-DX-2007 loader (used for earth moving and soil
covering at the dumping site), and one tipper used for carrying earth inside the Achan dumping site for soil cover. This equipment is not sufficient to carry out proper compaction, leveling and soil covering of the waste at the site.
Conclusions and Recommendations:
Srinagar city generates a large amount of MSW. In 2011, 530 tons of MSW was generated, the average generation rate being 271 gm/capita/day. The major factors contributing to increasing MSW generation are urban population growth and good economic conditions. Srinagar city, the largest urban agglomeration of the Jammu and Kashmir State, generates a large magnitude of waste. Srinagar Municipality is able to take care of 70% of the waste daily; the rest remains unattended or is disposed of without authorization in open spaces, depressions, nallahs and water bodies. This inadequacy in the management of solid waste has generated a lot of problems, which have inflicted irretrievable damage on the environment and worsened the sanitation condition of the city.
The rapid and accelerated growth has brought radical transformations in the city, especially in the state of the physical environment and infrastructure; civic services in particular have been under tremendous strain. Therefore, there is an urgent and undeniable need to improve the present solid waste transport and disposal system through modernization and adoption of appropriate technologies. Based on the field studies on various aspects of the transport and disposal of solid waste in Srinagar, the following conclusions and recommendations have been drawn.
Srinagar Municipality has a long history of providing solid waste management services, dating back to 1886. The area of Srinagar city has increased from 82.88 Sq. Kms in 1971 to 202 Sq. Kms in 2000 and to 279 Sq. Kms at present, and has been divided into 24 municipal wards. The population of Srinagar has increased from 4.23 lacs in 1971 to 6.06 lacs in 1981 and 12.03 lacs in 2011, including the floating population. The magnitude of waste generation at present is 530 tons/day, with an average per capita waste generation of 271 gm/capita/day. The anticipated population for 2021 has been estimated at 24.93 lacs, which would generate about 1271 tons of MSW daily at an average generation of 510 gm per capita per day, as the arithmetic sketch below verifies.
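A minimal check of that projection arithmetic (population times per capita rate), using only the figures quoted above:

```python
# Cross-check of the 2021 projection quoted above.
population_2021 = 24.93 * 1e5          # 24.93 lacs of people
per_capita_g_per_day = 510             # projected gm/capita/day
tons_per_day = population_2021 * per_capita_g_per_day / 1e6  # grams -> tonnes
print(round(tons_per_day))             # -> 1271 tons of MSW daily
```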
MSW management is carried out at the ward level through unskilled and professionally unqualified personnel. Lack of monitoring, accountability and co-ordination are common features of the SWM staff, which reduce their efficiency and performance. Shift and night sweeping is completely missing in the city. The house-to-house collection system extends to a small number of households, covering less than 5 per cent of the population of the city. The manpower distribution at the ward level is not rational or standardized in accordance with population and waste generation. Out of the total number of primary collection points, 41 are covered and 270 uncovered. The open community bins, which are mostly along the road sides, generate a public nuisance.
Of the total waste generated, 70% is regularly collected and taken to the Achan dumping site, while 30% is either dumped at ad hoc dumping sites or remains unattended, generating unhealthy conditions in the city. Transportation vehicles are inadequate, with longer hauling distances due to unplanned routing. The recycle, reuse and reduce concept is also missing, and there is a lack of public awareness and poor participation of citizens in municipal solid waste management. At Syedpora Achan and the ad hoc dumping sites of Srinagar Municipality, separation and shredding are not properly practiced and appropriate compaction and leveling are absent. The sites present a picture of heaps of waste, with stray animals, birds and unauthorized rag pickers moving over the heaps.
The present system of waste disposal in Srinagar Municipality comprises collection and transportation followed by insanitary landfilling. This disposal system is becoming increasingly costly for the Srinagar Municipality, especially with the increase in transportation expenditure. Unfortunately, no processing plant for municipal solid waste has so far been established in Srinagar. The informal sector, one of the principal sectors recovering municipal waste, is therefore exploited to the maximum, a situation that raises a number of complicated issues that need to be resolved and redressed.
Wrong vehicle selection, shortage of collection vehicles, inadequate transfer points and traffic congestion are the factors affecting collection efficiency, resulting in low waste collection rates. The vehicles used for transportation are not designed as per requirements. In Srinagar city, proper garages are not provided to protect the vehicles from heat and rain. These vehicles are highly capital-intensive and, due to inadequate budgets, older vehicles are deployed for solid waste transportation, resulting in uneconomic operation of the system. The maintenance facilities for these vehicles are inadequate, which adversely affects the operational schedule of waste transport. During the daytime the traffic in the city remains very congested, so from many parts of the city a waste collection truck can make only one journey to the disposal site in one shift.
Disposal of waste by the SMC is only through landfilling at the Achan dumping site, and no resource recovery from waste is attempted; the recycle, reuse and reduce concept is missing. Dumping at Achan is indiscriminate and unscientific, without compaction, shredding, separation or leveling, and without measures to control the bird menace, the entry of rag pickers, or water and air pollution; basic dumping site facilities are also missing at Achan. Legal support for enforcement of MSW services is weak and difficult to apply. There is a lack of public awareness and poor participation of citizens in municipal solid waste management.
Environmentally sound facilities for the treatment and disposal of MSW are in great shortage. The current administrative system, in which the municipal body shares both the handling and the legislative duties for SWM, is a major disadvantage.
MSW shall be collected, stored, segregated, transported and disposed of separately, without mixing with bio-medical, slaughter and construction/demolition waste. Srinagar Municipality shall provide separate space for disposal of bio-medical hazardous waste and carcasses. It shall extend collection and transportation services on a cost-recovery basis and the user-pays principle.
Vehicles used for transportation of waste shall be covered; waste should not be visible to the public nor exposed to the open environment, so as to prevent its scattering. The storage facilities set up by the municipal authority shall be attended daily for clearing of waste, and bins or containers, wherever placed, shall be cleaned before they start overflowing. Transportation vehicles shall be so designed that multiple handling of waste prior to final disposal is avoided.
Landfilling shall be restricted to non-biodegradable, inert waste and other waste that is not suitable either for recycling or for biological processing. Landfilling shall also be carried out for residues of waste processing facilities as well as pre-processing rejects from such facilities. Under unavoidable circumstances, or until alternate facilities are installed, landfilling shall be done following proper norms. Landfill sites shall meet the specifications laid down in the standards.
Srinagar Municipality shall provide covered community bins/garbage sheds to avoid public nuisance and ensure effective collection of waste. It shall give incentives to dealers to prepare specially designed storage bins which pose no problem of odor, leakage of moisture or access by birds and animals, bearing a mark such as "Use Me" or "Reduce, Reuse and Recycle". Citizens shall be motivated to form ward-wise committees and representatives for efficient functioning and accountability. A mechanical composting plant shall be installed at Syedpora Achan for disposal of biodegradable waste; the compost obtained shall be sold, and Srinagar Municipality should arrange for the marketing of the product. Adequate legal backing should be provided to enforce and implement sanitation laws, to protect SWM workers and to make punitive measures more operational.



REFERENCES:
[1] Agarwal, K.C. (1998), Environmental Pollution: Causes, Effects and Control, Nidhi Publishers (India), Bikaner, PP: 135-138.
[2] Alabaster and Graham (1995), "Waste Minimization for Developing Countries: Can We Afford to Neglect It?", Habitat Debate (1) 3.
[3] Berkun, M., Aras, E. and Nemlioglu, S. (2005), "Disposal of solid waste in Istanbul and along the Black Sea coast of Turkey", Waste Management, (25), PP: 847-855.
[4] Brown and Caldwell (1980), Alameda County Solid Waste Management Authority, Solid Waste Management Plan.
[5] Bhattarai, R.C. (2000), "Solid Waste Management and Economics of Recycling: A Case of Kathmandu Metro City", Economic Journal of Development Issues, 1(2), PP: 90-106.
[6] Bhide, A.D. and Sundaresan, B.B. (2001), Solid Waste Management: Collection, Processing and Disposal, Mudrashilpa Offset Printers, Nagpur.
[7] Bhoyar, R.V., Titus, S.K., Bhide, A.D. and Khanna, P. (1996), "Municipal and Industrial Solid Waste Management in India", Journal of IAEM, (23), PP: 53-64.
[8] CPCB (1999), Status of Solid Waste Generation, Collection, Treatment and Disposal in Metro Cities, Central Pollution Control Board, Delhi.
[9] CPHERI, Nagpur (1973), Solid Waste in India, Final Report.
[10] Data, M. (1997), Waste Disposal in Engineering Landfills, Narrasa Publishing House, New Delhi, PP: 201-205.
[11] Dhameja, S.K. (2002), Environmental Engineering and Management, S.K. Kataria and Sons, New Delhi, PP: 177-180.
[12] Dhere, Chandrasekhar, B. Pawar, Pratapsingh, B. Pardeshi and Dhanraj, A. Patil (2008), "Municipal solid waste disposal in Pune city: An analysis of air and groundwater pollution", Current Science, (95) 6.
[13] Goel, S. and Hazra, T. (2009), "Solid waste management in Kolkata, India: Practices and challenges", Waste Management, (29), PP: 470-478.
[14] Gotoh, S. (1989), "Issues and factors to be considered for improvement of solid waste management in Asian metropolises", Regional Development Dialogue, 10(3), PP: 1-12.
[15] Hamer, G. (2003), "Solid waste treatment and disposal: effects on public health and environmental safety", Biotechnology Advances, (22), PP: 71-79.
[16] Hongtao, W. and Yongfeng, Nie (2001), "Municipal Solid Waste Characteristics and Management in China", J. Air & Waste Manage. Assoc., (51), PP: 250-263.
[17] Imam, A., Mohammed, B., Wilson, D.C. and Cheeseman, C.R. (2008), "Solid waste management in Abuja, Nigeria", Waste Management, 28(2), PP: 468-472.
[18] Kansal, A. (2002), "Solid Waste Management Strategies for India", IJEP, 22(4), PP: 444-448.
[19] Kumar, S. and Gaikwad, S.A. (2004), "Municipal solid waste management in Indian urban centres: An approach for betterment", in Urban Development Debates in the New Millennium (Gupta, K.R., Ed.), Atlantic Publishers & Distributors, New Delhi, PP: 101-111.
[20] Kurian, Joseph (2002), "Perspectives of Solid Waste Management in India", International Symposium on the Technology and Management of the Treatment & Reuse of Municipal Solid Waste, Shanghai, China.
[21] Katju, C.V. (2006), Solid Waste Management: World Bank Report 1994, web page: www.devalt.org/newsletter/jun04/lead.htm.
[22] Mamdouh, A., El-Messery, Gaber, A.Z. Ismail and Anwaar, K. Arafa (2009), "Evaluation of Municipal Solid Waste Management in Egyptian Rural Areas", Journal of the Egyptian Public Health Association, (84) 1 & 2.
[23] McLean, M. (1971), Planning for Solid Waste Management, Planning Advisory Service Report (275), American Society of Planning Officials, Chicago.
[24] MoEF (2000), Municipal Solid Wastes (Management and Handling) Rules, Ministry of Environment and Forests, Government of India, New Delhi.
[25] Mufeed Sharholy, Kafeel Ahmad, Gauhar Mahmood and R.C. Trivedi (2008), "Municipal solid waste management in Indian cities: A review", Waste Management, (28) 2, PP: 459-467.
[26] NEERI (1976), "Solid waste management in Indian cities: present status", Technical Digest, Nagpur, PP: 51.
[27] Newsletter (2005), published by J&K State Pollution Control Board, Srinagar, PP: 12-30.
[28] Patil, D.A., Pawar, C.B. and Dhere, A.M. (2006), Environment Education, Phadke Publication, Kolhapur.
[29] Shekdar, A.V. (1999), "Municipal solid waste management: the Indian experience", Journal IAEM, (27), PP: 100-108.
[30] Sharholy, M., Ahmad, K., Mahmood, G. and Trivedi, R.C. (2008), "Municipal solid waste management in Indian cities: A review", Waste Management, 28(2), PP: 459-467.
[31] Singhal, S. and Pandey, S. (2001), "Solid Waste Management in India: Status and Future Directions", TERI Information Monitor on Environmental Science, (6) 1, PP: 1-4.
[32] Shuchi Gupta, Krishna Mohan, Rajkumar Prasad, Sujata Gupta and Arun Kansal (1998), "Solid waste management in India: options and opportunities", Resources, Conservation and Recycling, (24) 2, PP: 137-154.
[33] Solid Waste Management (2004), a newsletter published by the Directorate of Ecology and Environment.
[34] Sudhir, V., Muraleedharan, V.R. and Srinivasan, G. (1996), "Integrated solid waste management in urban India: A critical operational research framework", Socio-Economic Planning Sciences, (30) 3, PP: 163-181.
[35] Tchobanoglous, G., Theisen, H. and Vigil, S. (1993), Integrated Solid Waste Management: Engineering Principles and Management Issues, International Ed., McGraw-Hill Book Co., Singapore, PP: 12-43.
[36] T.V. Ramachandra and Shruthi Bachamanda (2007), "Environmental audit of Municipal Solid Waste Management", Int. J. Environmental Technology and Management, (7).
[37] Yadav, Kushal Pal S. (2007), "Pandora's garbage can", Down To Earth, PP: 20-21.

Novel Synthesis, Characterization and Antimicrobial Activities of Silver Nanoparticles in Room-Temperature Ionic Liquids
K. Rajathi 1, A. Rajendran 2
1 Research and Development Centre, Bharathiar University, Coimbatore, Tamil Nadu, India
2 Department of Chemistry, Sri Theagaraya College, Chennai, Tamil Nadu, India
E-mail: annamalai_rajendran2000@yahoo.com
ABSTRACT - Stable silver nanoparticles were successfully synthesized by chemical reduction of silver nitrate in the ionic liquids 1-ethyl-3-methylimidazolium tetrafluoroborate [EMIM]BF4 and 1-ethyl-3-methylimidazolium hexafluorophosphate [EMIM]PF6 at room temperature. The silver nanoparticles were characterized for size and shape by X-ray diffraction (XRD) and scanning electron microscopy (SEM), which indicated a size range of 50 to 55 nm. The antimicrobial activity of the silver nanoparticles against three gram-negative and three gram-positive bacteria was investigated. It appeared that [EMIM]PF6 and its Ag nanoparticles are the most effective products against the tested bacterial strains, compared with [EMIM]BF4 and its Ag nanoparticles.
Keywords: silver nanoparticles, ionic liquids, gram-negative bacteria, gram-positive bacteria, antimicrobial activity, scanning electron microscopy, X-ray diffraction.

1. INTRODUCTION
The development of cleaner technologies is a major emphasis in green chemistry. Among the several aspects of green chemistry, the reduction or replacement of volatile organic solvents in the reaction medium is of utmost importance. The use of a large excess of conventional volatile solvents to conduct a chemical reaction creates ecological and economic concerns, so the search for a nonvolatile and recyclable alternative holds a key role in this field of research. The use of fused organic salts, consisting of ions, is now emerging as a possible alternative. A proper choice of cations and anions is required to achieve ionic salts that are liquid at room temperature, appropriately termed room-temperature ionic liquids (RTILs). Common RTILs consist of N,N-dialkylimidazolium, alkylammonium, alkylphosphonium or N-alkylimidazolium cations [1]. Most of these ionic salts are good solvents for a wide range of organic and inorganic materials and are sufficiently stable to air, moisture and heat. Ionic liquids are polar (but consist of poorly coordinating ions) and immiscible with a number of organic solvents, and therefore provide polar alternatives for biphasic systems.
Aqueous-mediated reactions offer useful and more environmentally friendly alternatives to their harmful organic solvent versions and have received increasing interest in recent years. Furthermore, water has unique physical and chemical properties, and by its utilization it is possible to realize reactivity or selectivity that cannot be attained in organic solvents. Water is the most abundant, cheapest and least toxic chemical in nature. It has a high dielectric constant and cohesive energy density compared to organic solvents. It also has special effects on reactions, arising from inter- and intramolecular non-covalent interactions, leading to novel solvation and assembly processes. Water as a reaction medium has been utilized for a large number of organic reactions [2].
Room-temperature ionic liquids (RTILs) have attracted intensive interest in recent years as replacements for classical molecular solvents in fundamental research and in applications including separation, catalysis and organic synthesis [3].
Nanoparticles have been extensively investigated owing to their unique physical properties, chemical reactivity and potential applications with high academic and industrial impact [4]. Silver nanoparticles may have an important advantage over conventional antibiotics in that they kill all pathogenic microorganisms, and no organism has been reported to readily develop resistance to them. Researchers believe that the potential of colloidal silver is just beginning to be discovered.
In the present work, an attempt has been made to synthesize silver nanoparticles in the ionic liquids [EMIM]BF4 and [EMIM]PF6 with sodium citrate as the reducing agent. The effects of the reduction temperature and the precursor concentration on the size of the silver nanoparticles were investigated. The silver nanoparticles synthesized in this manner were characterized by XRD and SEM analyses, and their antimicrobial activities were screened against three gram-positive and three gram-negative bacteria.
2. EXPERIMENTAL
2.1 Materials
All chemicals were of AR grade. They were purchased from Merck and SD Fine Chemicals Limited and used without further purification. All solvents and reagents were used as received, and all reactions were run in oven-dried glassware. The homogeneity of the products was checked on TLC plates coated with silica gel-G and visualized by exposure to iodine vapors.
2.2 Instruments
The 1H-NMR and 13C-NMR spectra were recorded in CDCl3 and DMSO-d6 on a Jeol JNN ECX 400P spectrometer. The IR spectra were obtained on a Varian 800 FT-IR, as thin films or for solid samples. The nanoparticles were characterized by powder X-ray diffraction (XRD) and scanning electron microscopy (SEM). The phase, purity and crystallite size of the silver nanoparticles were studied by XRD; the peak broadening in the patterns indicates that the silver nanoparticles were very small in size. In addition to identification of the crystalline phases, the XRD data were used to estimate the size of the constituent crystallites by the Scherrer equation. The average particle size D was determined from D = Kλ/(β cos θ), where λ is the wavelength of the X-ray radiation (0.15406 nm), K is the Scherrer constant (K = 0.9), θ is the diffraction angle and β is the full width at half maximum of the peak. The XRD patterns were recorded on a Philips X'pert X-ray diffractometer with Cu Kα radiation (λ = 0.15406 nm), employing a scan rate of 1°/min in the 2θ range from 20° to 80°. Surface morphology and the distribution of particles were characterized by a LEO 1430VP scanning electron microscope (SEM) using an accelerating voltage of 15 kV.
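As an illustration of the Scherrer calculation above, the sketch below computes D for a single diffraction peak. The peak position and FWHM used here are illustrative values only, not the measured data from this study.

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle from 2-theta
    beta = math.radians(fwhm_deg)               # FWHM converted to radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative Ag peak near 2-theta = 38.1 deg with an assumed FWHM of 0.16 deg
print(round(scherrer_size_nm(38.1, 0.16)))  # -> 53 nm, of the order reported
```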
2.3 Synthesis of ionic liquid [EMIM]Br [5]
1-Methylimidazole (41.0 g, 0.5 mol) was added dropwise to bromoethane (54.5 g, 0.5 mol) in a 500 mL three-neck round-bottom flask equipped with a reflux condenser and a magnetic stirrer, and cooled in an ice-bath, as the reaction is highly exothermic. Having been vigorously stirred for 5 h, the mixture was kept at room temperature until it turned completely solid. The solid was pounded to pieces and washed four times, each with 50 mL trichloroethane. The product obtained, [EMIM]Br (87.9 g), was dried under vacuum at 70 °C for 24 h and characterized as follows. FT-IR (neat): 3155, 3105, 2927, 2857, 1572, 1460, 1169, 837, 753, 620 cm-1; 1H-NMR (400 MHz, CDCl3): δ 9.66 (s, 1H), 7.42 (t, 1H), 7.18 (t, 1H), 4.11 (t, 2H), 3.79 (s, 3H), 1.27 (t, 3H); 13C-NMR (75 MHz): δ 136.24, 123.59, 121.94, 44.91, 36.42, 15.46.
2.4 Synthesis of ionic liquid [EMIM]BF4 [6]
To a solution of the crude imidazolium bromide (19.2 g, 0.1 mol) obtained from the above reaction in acetone (70 mL) was added sodium tetrafluoroborate (10.90 g, 0.1 mol). The reaction mixture was stirred for 24 h at room temperature. The resulting mixture was filtered through a pad of aluminum oxide to remove the sodium salt and color. Evaporation of the solvent under reduced pressure afforded the corresponding imidazolium tetrafluoroborate.
FT-IR (neat): 3153, 3102, 2990, 2878, 2833, 2078, 1633, 1571, 1457, 1388, 1301, 1022, 842, 760, 619, 523 cm-1; 1H-NMR: δ 1.41 (t, J = 7.3, 3H), 3.85 (s, 3H), 4.14 (q, J = 7.3, 2H), 7.35 (s, 1H), 7.41 (s, 1H), 8.55 (s, 1H); 13C-NMR: δ 15.2, 36.0, 45.1, 122.2, 123.8, 135.9.
2.5 Synthesis of ionic liquid [EMIM]PF6 [1]
In a typical synthesis of 1-ethyl-3-methylimidazolium hexafluorophosphate, [EMIM]Br (19.2 g, 0.1 mol) was transferred to a round-bottom flask, followed by the addition of 40 mL deionized water. An aqueous solution of 65% KPF6, in an equimolar ratio with the bromide, was added slowly to minimize the amount of heat generated. As the KPF6 was added, two phases formed, with [EMIM]PF6 occupying the bottom phase and the potassium halide by-product (KBr) the upper phase. The upper phase was decanted and the remaining product was washed with water several times. The resulting product was then dried at 70 °C on a vacuum line for 4 h to give the desired product. FT-IR (neat): 3175, 3132, 2987, 2888, 1615, 1578, 1466, 1342, 1293, 1174, 835, 752, 644, 557, 433 cm-1; 1H-NMR: δ 1.411 (t, J = 7.3, 3H), 3.852 (s, 3H), 4.14 (q, J = 7.3, 2H), 7.648 (s, 1H), 7.727 (s, 1H), 9.068 (s, 1H); 13C-NMR: δ 15.2, 36.0, 45.1, 122.2, 123.8, 135.9.
2.6 Synthesis of Silver Nanoparticles [7]
Two solutions were prepared in ionic liquid: silver nitrate at a concentration of 0.03 mol/L and sodium citrate at a concentration of 0.02-0.05 mol/L. The sodium citrate solution was then added dropwise into the AgNO3 solution under vigorous stirring at a given temperature in the range of 25-40 °C. The addition was conducted over 0.5 h and the solution was continuously stirred for another 3 h. After adjusting the pH to 8.0, a brown colloid containing silver nanoparticles formed. The products were separated by centrifugation, washed with absolute ethanol several times, then vacuum-dried at 60 °C for 48 h for further characterization.
2.7. Antimicrobial Activity (Broth dilution assay) [8]
A series of fifteen test tubes was filled with 0.5 mL sterilized nutrient broth. Test tubes 2-14 then received an additional 0.5 mL of the sample, serially diluted to create a concentration sequence from 500 to 0.06 µg. The first test tube served as the control. All the test tubes received 0.5 mL of inoculum, were vortexed well and incubated for 24 h at 37 °C. The resulting turbidity was observed, and after 24 h the minimum inhibitory concentration (MIC) was determined as the concentration at which growth was no longer visible, assessed by optical density readings at 600 nm.
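A minimal sketch of the twofold dilution series and MIC readout described above. The tube count follows the text; the OD600 growth threshold and the example readings are illustrative assumptions, not the study's data.

```python
# Twofold serial dilution from 500 down to ~0.06 micrograms, and a
# simplified MIC readout from OD600 readings.
concentrations = [500 / 2 ** i for i in range(14)]  # 500, 250, ..., ~0.06

def mic(conc_series, od600, threshold=0.05):
    """Lowest concentration showing no visible growth (OD600 < threshold).
    The threshold value is an illustrative assumption."""
    inhibited = [c for c, od in zip(conc_series, od600) if od < threshold]
    return min(inhibited) if inhibited else None

# Example: growth suppressed only in the five most concentrated tubes
readings = [0.01] * 5 + [0.6] * 9
print(mic(concentrations, readings))  # -> 31.25
```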
3. RESULTS AND DISCUSSION
3.1. FTIR Spectra
FT-IR spectra of pure [EMIM]BF4 and of Ag(0) in [EMIM]BF4 ionic liquid solution are presented in Figures 1 and 2, and the main peak frequencies are listed in Table 1. In Figure 1, the bands at 3153 and 3102 cm-1 are assigned to the C-H stretching vibration of the imidazole ring. The bands at 2990-2878 and 1633 cm-1 are due to the stretching vibrations of the C-H bonds of the alkyl chains and of the C=C groups, respectively. The bands at 1571 and 1457 cm-1 are due to the imidazole ring skeleton stretching vibration. The bands at 1301 and 1022 cm-1 are due to the in-plane deformation vibration and the stretching vibration of the imidazole ring C-H, respectively. The bands at 842 and 760 cm-1 probably originate from the m-substituted imidazole ring. Compared with pure [EMIM]BF4, several significant changes are observed in the FTIR spectrum of Ag(0) in [EMIM]BF4 solution: (i) the two C-H stretching vibration bands of the imidazole ring are up-shifted by 7 and 16 cm-1; (ii) the stretching vibrations of the alkyl-chain C-H bands and the C=C group are absent, and there are changes in the imidazole ring skeleton stretching vibration and in the in-plane deformation and stretching vibrations of the imidazole ring C-H. Figures 3 and 4 display the FT-IR spectra of pure [EMIM]PF6 and of Ag(0) in [EMIM]PF6, respectively. These changes of bands demonstrate that Ag(0) affects the electron cloud density of the imidazole ring. Based on the analysis of the FT-IR spectra, it is concluded that there are strong interactions between the RTILs and AgNO3, and that these interactions are focused on the imidazole ring of the RTILs. The present findings are similar to results previously reported [9].
3.2. XRD analysis
The phase, purity and crystallite size of the Ag nanoparticles were studied by XRD (Figures 5 and 6). The typical diffraction patterns show that the Ag nanoparticles prepared in the two different ionic liquids are crystalline in nature, of high purity and free of impurities. The crystallite size of the Ag nanoparticles from [EMIM]BF4 and [EMIM]PF6, determined from the most intense diffraction peak (101) using the Debye-Scherrer equation, was 51 and 55 nm, respectively. The relative intensities of the diffraction peaks of the Ag nanoparticles prepared in the different ionic liquids deviate from one another, suggesting that each ionic liquid promotes a different nanostructure along a certain growth direction of the material.
3.3. SEM analysis
The morphologies and dispersity of the synthesized nanostructured Ag nanoparticles from the ionic liquids are shown in Figures 7 and 8. It can be seen from the SEM images that the Ag nanoparticles from [EMIM]BF4 exhibit a well-defined nanostructure composed of nanosized, regular and uniform spherical particles; Figure 7 shows silver nanoparticles with an average particle size of 64 nm. The Ag nanoparticles synthesized from [EMIM]PF6 also exhibit a spherical morphology (Figure 8); the mean diameter read from the nanoscale bar of the SEM images is around 70 nm, larger than that of the Ag nanostructures produced from [EMIM]BF4. The SEM analysis clearly indicates that the ionic liquids, with their different characteristics, produced Ag nanoparticles with well-defined, extended and ordered morphology without any agglomeration or aggregation. The average size of the Ag nanostructures obtained from [EMIM]PF6 is greater than that from [EMIM]BF4.
3.4. Antimicrobial Activities
A preliminary investigation of the antibacterial activities of the pure ILs and the nanoparticles was performed through measurements of minimum inhibitory concentrations (MIC), expressed in µg/mL, based on a previous study [10]. The efficiency of the Ag nanoparticles stabilized by the ILs was evaluated against bacterial strains in the same way; the values after one day of exposure are shown in Table 2.
Six microorganisms were chosen as test strains: three gram-negative bacteria (Escherichia coli, Pseudomonas aeruginosa and Aeromonas hydrophila) and three gram-positive bacteria (Staphylococcus aureus, Micrococcus luteus and Bacillus cereus).
In view of the results, it appeared that [EMIM]PF6 and its nanoparticles are the most effective products against the tested bacterial strains, compared with [EMIM]BF4 and its nanoparticles. The decreasing orders of antimicrobial activity are listed below.
Staphylococcus aureus : 2 = 2a > 1 > 1a
Micrococcus luteus : 2 > 2a > 1a > 1
Bacillus cereus : 2 > 2a > 1a = 1
Escherichia coli : 2 > 2a > 1 > 1a
Pseudomonas aeruginosa : 2a > 2 > 1 = 1a
Aeromonas hydrophila : 2a > 2 > 1 > 1a
1 - [EMIM]BF4; 1a - Ag(0) of [EMIM]BF4; 2 - [EMIM]PF6; 2a - Ag(0) of [EMIM]PF6

ACKNOWLEDGEMENTS

The authors immensely thank the principal and the management of Sir Theagaraya College, Chennai-21, and Govt. Arts College, Thiruvannamalai, Tamil Nadu, for their constant encouragement and support.

REFERENCES:
[1] Welton, T., "Room-Temperature Ionic Liquids. Solvents for Synthesis and Catalysis", Chemical Reviews, 99, 2071, 1999.
[2] Naik, S., Bhattacharjya, G., Talukdar, B. and Patel, B.K., "Chemoselective Acylation of Amines in Aqueous Media", European Journal of Organic Chemistry, 2004(6), 1254, 2004.
[3] Dzyuba, S.V. and Bartsch, R.A., "Recent Advances in Applications of Room-Temperature Ionic Liquid/Supercritical CO2 Systems", Angewandte Chemie International Edition, 42, 148, 2003.
[4] Quake, S.R. and Scherer, A., "From Micro- to Nanofabrication with Soft Materials", Science, 290, 1536, 2000.
[5] Elliot Ennis and Handy, S.T., "Facile Route to C2-Substituted Imidazolium Ionic Liquids", Molecules, 14, 2235, 2009.
[6] Min, G.H., Yim, T., Lee, H.Y., Huh, D.H., Lee, E., Mun, J., Oh, S.M. and Kim, Y.G., "Synthesis and Properties of Ionic Liquids: Imidazolium Tetrafluoroborates with Unsaturated Side Chains", Bulletin of the Korean Chemical Society, 27(6), 847, 2006.
[7] Jing, A.N., De-song, W. and Xiao-yan, Y., "Synthesis of Stable Silver Nanoparticles with Antimicrobial Activities in Room-temperature Ionic Liquids", Chemical Research in Chinese Universities, 25(4), 421, 2009.
[8] Canillac, N. and Mourey, A., "Antibacterial activity of the essential oil of Picea excelsa on Listeria, Staphylococcus aureus and coliform bacteria", Food Microbiology, 18(3), 261, 2001.
[9] Zhu, J., Shen, Y., Xie, A., Qiu, L., Zhang, Q. and Zhang, S., "Photoinduced synthesis of anisotropic gold nanoparticles in room-temperature ionic liquid", Journal of Physical Chemistry C, 111, 7629, 2007.
[10] Demberelnyamba, D., Kim, K.S., Choi, S., Park, S.Y., Lee, H., Kim, S.J. and Yoo, I.D., "Synthesis and antimicrobial properties of imidazolium and pyrrolidinonium salts", Bioorganic & Medicinal Chemistry, 12, 853, 2004.

Table 1. Frequencies of FTIR absorption bands for pure [EMIM]BF4, Ag(0) in [EMIM]BF4, pure [EMIM]PF6 and Ag(0) in [EMIM]PF6

Pure [EMIM]BF4     Ag(0) in [EMIM]BF4   Pure [EMIM]PF6     Ag(0) in [EMIM]PF6   Assignment
3153, 3102         3160, 3118           3175, 3132         3173, 3124           C-H of imidazole ring stretching vibration
2990, 2833, 2078   135, 1742            2987, 2888         2979, 2312           C-H of alkyl chain stretching vibration
1633               1647                 1615               1647                 C=C stretching vibration
1571, 1457         1518, 1462, 1423     1578, 1466, 1400   1517, 1462, 1424     imidazole ring skeleton stretching vibration
1301               1166                 1342, 1293         1338                 C-H of imidazole ring in-plane deformation vibration
1022               1052                 1025               1167                 stretching vibration
842, 760           727                  835, 752           827, 622             m-substituted imidazole ring

Table 2. The MIC of silver nanoparticle solutions stabilized by ionic liquids

                                            MIC (µg/mL) against tested organisms (bacteria)
Compound number   Compound                  SA       ML      BC      EC      PA      AH
1                 [EMIM]BF4                 31.25    15.63   15.63   3.91    3.91    31.25
1a                Ag(0) in [EMIM]BF4        250      7.81    15.63   250     3.91    125
2                 [EMIM]PF6                 3.91     1.95    1.95    0.98    15.63   15.63
2a                Ag(0) in [EMIM]PF6        3.91     3.91    3.91    3.91    0.98    3.91

SA - Staphylococcus aureus; ML - Micrococcus luteus; BC - Bacillus cereus; EC - Escherichia coli; PA - Pseudomonas aeruginosa; AH - Aeromonas hydrophila


Figure 1: FTIR spectrum of synthesized [EMIM]BF4 ionic liquid


Figure 2: FTIR spectrum of synthesized Ag(0) in [EMIM]BF4 ionic liquid


Figure 3: FTIR spectrum of synthesized [EMIM]PF6 ionic liquid
[FTIR spectrum plot: transmittance (%) vs. wavenumber (cm⁻¹), 500-3500 cm⁻¹; labeled bands at 3433.72, 3153.75, 3103.71, 2989.45, 2878.31, 2829.41, 2074.44, 1634.04, 1571.48, 1458.91, 1389.84, 1335.28, 1301.50, 1031.43, 842.05, 760.26, 620.35, 527.39 and 440.41 cm⁻¹]



Figure 4: FTIR spectra of synthesized Ag(0) in [EMIM]PF6 ionic liquid



Figure 5: XRD pattern for Ag(0) in [EMIM]BF4 ionic liquid


Figure 6: XRD pattern for Ag(0) in [EMIM]PF6 ionic liquid


Figure 7: The SEM image for Ag(0) in [EMIM]BF4 ionic liquid




Figure 8: The SEM image for Ag(0) in [EMIM]PF6 ionic liquid





Performance Evaluation of Time Reversed Space Time Block Codes in Nakagami-m Fading Channel
Subhashini Dhiman, Surbhi Sharma
Department of Electronics and Communication, Thapar University, Patiala
E-mail: Subhashini.dhiman@gmail.com

Abstract: A two-transmit, one-receive antenna design was presented by Alamouti in [5], where the channel coefficients at adjacent time intervals are assumed to be the same. When the channel suffers from intersymbol interference (ISI) due to large delay spread, Time Reversal Space Time Block Codes (TR-STBC) achieve better performance [8]. In a frequency-selective Multiple Input Multiple Output (MIMO) channel environment, loss of the quasi-static assumption produces ISI in TR-STBC. In this paper, a low-complexity receiver is evaluated to mitigate the effect of the intersymbol interference caused by the loss of the quasi-static assumption in TR-STBC in a Nakagami-m fading channel.
Keywords: Space time block codes (STBC), Time Reversal Space Time Block Codes (TR-STBC), intersymbol interference (ISI), Multiple Input Multiple Output (MIMO), fast fading, Nakagami channel, Orthogonal frequency division multiplexing (OFDM)
INTRODUCTION
Wireless communication has emerged as one of the fastest growing sectors of the communications industry. Wireless networks widely used today comprise wireless local area networks, cellular networks, personal area networks and wireless sensor networks. The use of wireless communication for data applications such as internet and multimedia access has increased, so the demand for reliable high-data-rate services is rising quickly. However, it is hard to achieve reliable wireless transmission due to the time-varying multipath fading of the wireless channel, and the range and data rate of wireless networks are limited. To enhance data rates and quality, multiple antennas can be used at the receiver to obtain diversity. By utilizing multiple antennas at the transmitter and receiver, significant capacity advantages can be obtained in a wireless system: in a Multiple Input Multiple Output (MIMO) system, multiple transmit and receive antennas can elevate the capacity of the transmission link, and this extra capacity can be utilized to enlarge the diversity gain of the system. This led to the development of Lucent's Bell Labs layered space-time (BLAST) architecture [1]-[4] and space time block codes (STBCs) [5]-[7] to attain some of this capacity. Space time coding utilizes diversity and coding gains to achieve high-data-rate transmission. STBCs gained popularity because of their capability to provide simple linear processing for maximum-likelihood decoding at the receiver.
Time reversal space time block codes (TR-STBC)
The STBC scheme presented by Alamouti in [5] is a transmit diversity scheme in which two transmit antennas and one receive antenna are used. The scheme was proposed for flat fading channels where the fading is assumed to be constant over two consecutive symbols, but the same approach was later applied to frequency-selective channels. In particular, methods such as time reversal [8], OFDM [9], [10], and single-carrier frequency-domain equalization [11]-[13] have gained attention. Both the OFDM and SC-FDE schemes, however, depend on the transmission of a cyclic prefix, which makes the channel matrix circulant; this characteristic diagonalizes the matrices by FFT and permits effective equalization in the frequency domain. In contrast, TR-STBC applies Alamouti's scheme on blocks instead of symbols in the time domain. At the receiver, a spatiotemporal matched filter is used for transforming the received signal into block decoding and permits perfect decoupling between the blocks [8], [13].

TR-STBC system model
TR-STBC extends the Alamouti transmission scheme to frequency-selective channels by encoding normally arranged and time-reversed blocks of symbols together [8], [14]. The data stream $y(t)$ is divided into two separate streams, $y_1(t)$ and $y_2(t)$, which are transmitted from the first and the second antenna in alternating time intervals. In the first time interval, $y_1(t)$ is transmitted from antenna 1 and $y_2(t)$ from antenna 2, so the corresponding received signal is

$$r_1(t) = h_{1,t}\,y_1(t) + h_{2,t}\,y_2(t) + n_1(t) \qquad (1)$$

where $h_{i,t}$ is the channel between transmit antenna $i$ and the receive antenna, and $n_1(t)$ is the noise sample in the first time interval. In the second time interval, $-\bar{y}_2^{*}(t)$ is transmitted from antenna 1 and $\bar{y}_1^{*}(t)$ from antenna 2, where $(\cdot)^{*}$ denotes the complex conjugate and $\bar{(\cdot)}$ the time-reversed signal. The received signal is then

$$r_2(t) = -h_{1,t+1}\,\bar{y}_2^{*}(t) + h_{2,t+1}\,\bar{y}_1^{*}(t) + n_2(t) \qquad (2)$$

where $n_2(t)$ is the noise sample in the second time interval.

Case 1: Slow fading. For fading that is slow in time we have $h_{1,t} = h_{1,t+1}$ and $h_{2,t} = h_{2,t+1}$, so the time-reversed conjugate of (2) can be written as

$$\bar{r}_2^{*}(t) = \bar{h}_{2,t}^{*}\,y_1(t) - \bar{h}_{1,t}^{*}\,y_2(t) + \bar{n}_2^{*}(t)$$

where $\bar{h}_i$ is the time-reversed version of $h_i$. So we can rewrite the received signals in matrix form:

$$\begin{pmatrix} r_1(t) \\ \bar{r}_2^{*}(t) \end{pmatrix} = \begin{pmatrix} h_{1,t} & h_{2,t} \\ \bar{h}_{2,t}^{*} & -\bar{h}_{1,t}^{*} \end{pmatrix} \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} + \begin{pmatrix} n_1(t) \\ \bar{n}_2^{*}(t) \end{pmatrix}, \quad \text{where } H = \begin{pmatrix} h_{1,t} & h_{2,t} \\ \bar{h}_{2,t}^{*} & -\bar{h}_{1,t}^{*} \end{pmatrix}$$

At the receiver, the received signal is multiplied by $H^{H}$ and a decoupled matched-filter output is produced:

$$z(t) = H^{H} r(t) = H^{H}H\,y(t) + H^{H}n(t)$$

which perfectly decouples the decoding of $y_1(t)$ and $y_2(t)$, since all off-diagonal terms of $H^{H}H$ are zero:

$$H^{H}H = \begin{pmatrix} h_{1,t}^{H}h_{1,t} + h_{2,t}^{H}h_{2,t} & 0 \\ 0 & h_{1,t}^{H}h_{1,t} + h_{2,t}^{H}h_{2,t} \end{pmatrix} = \begin{pmatrix} J & 0 \\ 0 & J \end{pmatrix}$$

So the received signal can be written as $z_1(t) = J\,y_1(t) + \tilde{n}_1(t)$ and $z_2(t) = J\,y_2(t) + \tilde{n}_2(t)$, and $y_1(t)$ and $y_2(t)$ can be separately decoded.
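The decoupling property above is easy to sanity-check numerically. The following sketch (an illustration for a single flat-fading tap per antenna, not code from the paper) confirms that multiplying by $H^H$ yields $J\,y$ exactly when the channel is constant over both intervals.

```python
# Sketch (single flat tap per antenna, not code from the paper): verify that
# H^H H is diagonal when the channel is constant over both time intervals.
import numpy as np

rng = np.random.default_rng(0)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # slow-fading taps
y = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)            # two QPSK symbols

H = np.array([[h1, h2],
              [np.conj(h2), -np.conj(h1)]])             # matrix H above
r = H @ y                                               # noiseless reception

z = H.conj().T @ r                                      # matched filtering
J = abs(h1) ** 2 + abs(h2) ** 2
print(np.allclose(z, J * y))                            # True: decoupled
```

Because $H^H H = J I$, both streams are scaled by the same real gain $J$, which is what allows separate symbol-by-symbol detection in the slow-fading case.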

Case 2: Fast fading. In this case $h_{1,t} \neq h_{1,t+1}$ and $h_{2,t} \neq h_{2,t+1}$, so in matrix form it can be written as

$$\begin{pmatrix} r_1(t) \\ \bar{r}_2^{*}(t) \end{pmatrix} = \begin{pmatrix} h_{1,t} & h_{2,t} \\ \bar{h}_{2,t+1}^{*} & -\bar{h}_{1,t+1}^{*} \end{pmatrix} \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} + \begin{pmatrix} n_1(t) \\ \bar{n}_2^{*}(t) \end{pmatrix}$$

In this case the output of the matched filter is

$$z(t) = H^{H}r(t) = \begin{pmatrix} h_{1,t}^{H}h_{1,t} + \bar{h}_{2,t+1}^{H}\bar{h}_{2,t+1} & h_{1,t}^{H}h_{2,t} - \bar{h}_{2,t+1}^{H}\bar{h}_{1,t+1} \\ h_{2,t}^{H}h_{1,t} - \bar{h}_{1,t+1}^{H}\bar{h}_{2,t+1} & h_{2,t}^{H}h_{2,t} + \bar{h}_{1,t+1}^{H}\bar{h}_{1,t+1} \end{pmatrix} \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} + H^{H}\begin{pmatrix} n_1(t) \\ \bar{n}_2^{*}(t) \end{pmatrix}$$

which can be written as

$$H^{H}H = \begin{pmatrix} h_{1,t}^{H}h_{1,t} + \bar{h}_{2,t+1}^{H}\bar{h}_{2,t+1} & \varepsilon \\ \varepsilon^{H} & h_{2,t}^{H}h_{2,t} + \bar{h}_{1,t+1}^{H}\bar{h}_{1,t+1} \end{pmatrix}$$

As the off-diagonal terms are not zero, the received signals cannot be decoupled separately. Here the off-diagonal terms, i.e. $\varepsilon$, represent the interference.
Proposed scheme: To remove the ISI in fast fading, we propose a low-complexity zero-forcing receiver:

$$H_{PZF} = \begin{pmatrix} h_{1,t} & h_{2,t} \\ \bar{h}_{2,t+1}^{*}/P_t & -\bar{h}_{1,t+1}^{*}/P_t^{*} \end{pmatrix}, \qquad \text{where } P_t = \frac{\bar{h}_{2,t+1}^{*}\,\bar{h}_{1,t+1}}{h_{1,t}\,h_{2,t}^{*}}$$

Further,

$$H_{PZF}^{H}H = \begin{pmatrix} |h_{1,t}|^{2} + |\bar{h}_{2,t+1}|^{2}/P_t^{*} & \varepsilon_1 \\ \varepsilon_2 & |h_{2,t}|^{2} + |\bar{h}_{1,t+1}|^{2}/P_t \end{pmatrix}$$

where $\varepsilon_1 = h_{1,t}^{*}\,h_{2,t} - \bar{h}_{2,t+1}\,\bar{h}_{1,t+1}^{*}/P_t^{*}$; by substituting the value of $P_t$ in the above equation, $\varepsilon_1 = 0$. Also, $\varepsilon_2 = h_{2,t}^{*}\,h_{1,t} - \bar{h}_{1,t+1}\,\bar{h}_{2,t+1}^{*}/P_t$, which reduces to zero after substituting the value of $P_t^{*}$ in the above equation.

Hence the off-diagonal terms become zero, so the ISI is reduced in the fast-fading environment; however, this scheme also reduces the diversity gain.

Therefore,

$$z(t) = H_{PZF}^{H}H\,y(t) + \tilde{n}(t)$$

and

$$\hat{y}(t) = \left(H_{PZF}^{H}H\right)^{-1} z(t)$$

where $\hat{y}(t)$ is the estimated data stream. So decoding of $y_1(t)$ and $y_2(t)$ can be done at the receiver.
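As a hedged numerical illustration of the discussion above (not the authors' implementation), the sketch below shows that in fast fading the matched-filter Gram matrix carries a nonzero off-diagonal interference term, while a zero-forcing solve removes the ISI exactly in the noiseless case:

```python
# Sketch (illustrative, not the authors' exact receiver): in fast fading the
# matched filter leaves an interference term eps, while a zero-forcing solve
# removes the ISI exactly (noise-free case shown).
import numpy as np

rng = np.random.default_rng(1)
h1, h2, g1, g2 = rng.normal(size=4) + 1j * rng.normal(size=4)
y = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)

H = np.array([[h1, h2],
              [np.conj(g2), -np.conj(g1)]])   # g_i: second-interval taps != h_i
r = H @ y

gram = H.conj().T @ H                  # matched-filter Gram matrix
print(abs(gram[0, 1]) > 1e-12)         # True: nonzero off-diagonal ISI term

y_hat = np.linalg.solve(H, r)          # zero-forcing estimate
print(np.allclose(y_hat, y))           # True: interference removed
```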
Simulation results:
The bit error rate performance of TR-STBC for two transmit and one receive antenna is studied. The performance of TR-STBC is evaluated for fast fading in a Nakagami-m fading channel for different values of the shape factor and compared with the classical zero-forcing receiver. The proposed scheme reduces the computational complexity at the receiver. The proposed low-complexity receiver gives the same results as the classical zero-forcing receiver for m = 1, i.e., for the Rayleigh channel; it gives better performance than classical zero forcing for m > 1, and its performance degrades for m < 1.
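For readers wishing to reproduce such curves, a common way to draw Nakagami-m channel taps is to take the square root of a Gamma variate; the generator below is an assumption, since the paper does not describe its own:

```python
# Sketch: one common way to generate Nakagami-m fading envelopes for BER
# simulations (assumed; the paper does not specify its generator). The
# envelope is the square root of a Gamma(m, Omega/m) variate; m = 1 reduces
# to Rayleigh fading, matching the observation above.
import numpy as np

def nakagami_taps(m, omega=1.0, n=100_000, rng=np.random.default_rng(2)):
    """Complex channel taps with Nakagami-m envelope and uniform phase."""
    envelope = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return envelope * np.exp(1j * phase)

for m in (0.5, 1.0, 2.0):  # shape factors below, at, and above Rayleigh
    h = nakagami_taps(m)
    print(f"m={m}: mean power = {np.mean(np.abs(h)**2):.3f}")  # approx omega
```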


Figure 1: Time-reversal STBC performance for different fading channels.
CONCLUSION
The high-speed mobile environment results in a fast-fading channel in wireless communication. The proposed scheme mitigates the effect of interference in the fast-fading environment and reduces the computational complexity at the receiver. The performance of the proposed low-complexity receiver is identical to that of the classical zero-forcing receiver for shape factor m = 1.
REFERENCES:
[1] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas", Wirel. Pers. Commun., vol. 6, no. 3, pp. 311-335, Mar. 1998.
[2] I. E. Telatar, "Capacity of multi-antenna Gaussian channels", AT&T Bell Labs Intern. Rep., Jun. 1995.
[3] I. E. Telatar, "Capacity of multi-antenna Gaussian channels", Eur. Trans. Telecommun., vol. 10, no. 6, pp. 585-595, Nov./Dec. 1999.
[4] G. J. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas", Bell Labs Tech. J., vol. 1, no. 2, pp. 41-59, Autumn 1996.
[5] S. M. Alamouti, "A simple transmit diversity technique for wireless communications", IEEE J. Sel. Areas Commun., vol. 16, no. 8, pp. 1451-1458, Oct. 1998.
[6] V. Tarokh, H. Jafarkhani, and A. R. Calderbank, "Space-time block coding for wireless communications: performance results", IEEE J. Sel. Areas Commun., vol. 17, no. 3, pp. 451-460, Mar. 1999.
[7] V. Tarokh, H. Jafarkhani, and A. R. Calderbank, "Space-time block codes from orthogonal designs", IEEE Trans. Inf. Theory, vol. 45, no. 5, pp. 1456-1467, Jul. 1999.
[8] E. Lindskog and A. Paulraj, "A transmit diversity scheme for channels with intersymbol interference", in Proc. IEEE ICC, New Orleans, LA, Jun. 2000, vol. 1, pp. 307-311.
[9] Z. Liu, G. Giannakis, A. Scaglione, and S. Barbarossa, "Decoding and equalization of unknown multipath channels based on block precoding and transmit-antenna diversity", in Proc. 33rd Asilomar Conf. Signals, Syst., Comput., Oct. 1999, vol. 2, pp. 1557-1561.
[10] H. Bolcskei and A. J. Paulraj, "Space-frequency coded broadband OFDM systems", in Proc. IEEE WCNC, Chicago, IL, Sep. 2000, pp. 1-6.
[11] N. Al-Dhahir, "Single-carrier frequency-domain equalization for space-time block-coded transmissions over frequency-selective fading channels", IEEE Commun. Lett., vol. 5, no. 7, pp. 304-306, Jul. 2000.
[12] S. Zhou and G. Giannakis, "Single-carrier space-time block-coded transmissions over frequency-selective fading channels", IEEE Trans. Inf. Theory, vol. 49, no. 1, pp. 164-179, Jan. 2003.
[13] N. Al-Dhahir, "Overview and comparison of equalization schemes for space-time-coded signals with application to EDGE", IEEE Trans. Signal Process., vol. 50, no. 10, pp. 2477-2488, Oct. 2002.
[14] S. Geirhofer, L. Tong, and A. Scaglione, "Time-reversal space-time coding for doubly-selective channels", in Proc. IEEE WCNC, Las Vegas, NV, 2006, pp. 1638-1643.


CFD Analysis of Electrolyte Flow Pattern in Pulse ECM and to Optimize MRR for Circular Tool
Anamika Mishra¹, D B Jadhav², P V Jadhav³
¹Research Scholar, Mechanical Engineering Department
²Assistant Professor, Mechanical Engineering Department
³Associate Professor, Production Engineering Department
Bharti Vidyapeeth Deemed University College of Engineering, Pune-43
E-mail: anamika_mishra@hotmail.com

ABSTRACT: ECM has emerged as one of the major non-conventional machining techniques. It is based on Faraday's laws of electrolysis and is highly efficient due to its zero-tool-wear characteristic. Occurrence of passivation is the major problem faced in ECM. In the present work, a study of the flow pattern of the electrolyte has been performed so that the distribution of the machining variables can be predicted accurately and passivation can thereby be minimized.
A tool was modeled in the Pro-E design modeler and the study was carried out under steady state with turbulence. The model was simulated for various inlet pressures. The results obtained showed that the flow velocity decreases as the electrolyte moves towards the work piece and increases at the outlet. The turbulent kinetic energy and turbulent eddy dissipation rate profiles exhibit higher turbulence at pressures of 1.0 kg/cm² and 1.4 kg/cm², whereas at 1.2 kg/cm² turbulence is almost negligible. The MRR is affected most by the tool feed rate, followed by voltage, and least by the electrolyte pressure. The optimized combination A2B2C2 gives the best material removal rate (MRR). Hence, from the computational simulation and experimental results, 1.2 kg/cm² was found to be the optimum value for pressure.
Keywords: PECM, CFD, flow pattern analysis, MRR, ECM, hole making process, CFD analysis of electrolyte flow pattern
INTRODUCTION
Electrochemical machining is one of the most potent non-conventional machining techniques used to machine high-strength, heat-resistant materials. It is considered a reverse of electroplating and is based on the principle of electrolysis. As there is no contact between tool and work piece at the time of machining, it results in zero tool wear. It has been widely used in the automobile, turbo-machinery, aerospace, aeronautics, defense and medical industries because of its various advantages, such as negligible tool wear, high-precision machining of difficult-to-cut materials, and low thermal and mechanical stress on the work piece. There are, though, a few disadvantages, such as hydrogen bubble generation and its effect on the Material Removal Rate (MRR), the complexity of the tool geometry and its effect on various process parameters, and the difficulty of predicting the electrolyte flow pattern and its impact, which have been investigated by various researchers. For a complicated work piece it is very difficult to know the distribution of the machining variables within the inter-electrode gap (IEG). Study of the flow pattern of the electrolyte can predict the machining variable distribution accurately, and thus passivation, the major drawback in electrochemical machining of complicated shapes, can be avoided [1]. Many researchers have presented experimental and analytical studies related to the material removal mechanism and current density distribution in ECM using different tool shapes and different software, but they could not predict the flow pattern accurately [2]. Once the flow pattern is known, it is easy to design the tool and avoid passivation. With this background and the state of the art studied, the salient objectives of the present study are:
1) Analysis of the flow pattern of the electrolyte.
2) Determination of the effect of various parameters on the MRR (material removal rate) and surface roughness.
3) Optimization of the results.
ELECTROCHEMICAL MACHINING SET UP
In Electrochemical Machining (ECM), a high-current, low-voltage DC power supply connects a conducting tool and work piece. The shaped tool is connected to the negative (-ve) terminal and the work piece to the positive (+ve) terminal, which are the cathode and anode respectively. A conducting electrolyte flows through a small gap that is maintained between the tool and work piece, thus providing the necessary path for electrolysis. As the direction of electron flow is from the work piece to the tool, material is removed from the work piece in a reverse image of the tool. The several components of the ECM setup are shown in Figure 1.



Work-Piece
The work-piece is a conducting material which acts as the anode; it is connected to the positive terminal of the pulse power supply. Generally, materials with a very high hardness or a very low machinability are used as work-piece materials in ECM, as the process removes material independently of hardness.
Tool
The tool, acting as the cathode, is connected to the negative terminal. A tool material used for electrochemical machining should have good electrical and thermal conductivity, easy machinability, resistance to chemicals, good stiffness, and be easily obtainable. The most commonly used tool materials are copper, brass, stainless steel etc.

Fig. 1 Schematic diagram of various elements of ECM setup
Electrolyte
The electrolyte is a conducting fluid which plays a very vital role in electrochemical machining. An electrolyte in electrochemical machining completes the electrical circuit, allowing the passage of current (i.e., it acts as a conductor), sustains the required electrochemical reactions, and acts as a coolant that carries away the waste products. The selection of the electrolyte is based upon the work-piece material, the tool material and the application. It must also have good chemical stability; apart from this, it should be inexpensive, safe, and as non-corrosive as possible.
Power Supply
A pulse DC power supply with a low voltage and a high current is used to minimize the loss of electricity. Currents of the order of 50-40,000 A and voltages of the order of 4-30 V are generally applied to overcome the resistance at the gap.
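Since the process is governed by Faraday's laws of electrolysis, the theoretical mass removal rate can be estimated directly from the machining current. The sketch below uses illustrative values (an iron work-piece, 100 A, 90% current efficiency) that are assumptions, not figures from this paper:

```python
# Sketch: theoretical (Faraday) material removal rate implied by the
# electrolysis principle described above. The work-piece material, current
# and efficiency below are illustrative assumptions only.
F = 96485.0  # Faraday constant, C/mol

def faraday_mrr_g_per_min(current_A, atomic_weight, valency, efficiency=1.0):
    """Mass removal rate in g/min from Faraday's law: m_dot = eta*A*I/(Z*F)."""
    return efficiency * atomic_weight * current_A / (valency * F) * 60.0

# Iron: A = 55.85 g/mol, dissolving as Fe -> Fe2+ (Z = 2), at 100 A, 90% eff.
print(f"{faraday_mrr_g_per_min(100.0, 55.85, 2, 0.9):.3f} g/min")
```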
MODELLING & SIMULATION
To machine the work piece into the required shape, the tool should be designed properly. The shape of the tool affects the critical parameters of machining and also the MRR.
Geometrical modelling: The modeling is done using the PRO-E Design Modeler. The model used for the simulation study in the present work is a cylindrical tool with a central through hole having a diameter of 2 mm and a height of 100 mm. The centre of the hole is fixed at (0, 0, 0) in XYZ coordinates. A cubical block of 100 mm length, 35 mm width and 5 mm height is used as the work piece. The electrolyte used for this simulation is NaNO3 solution; it flows with a constant diameter from the inlet of the tool. The complete physical model of the work-piece-tool set up is shown in Figure 2.
Importing model to ANSYS: A model prepared in ProE cannot be opened directly in ANSYS; it has to be converted into a compatible format like IGES or STEP/STP for further processing. The ProE model of the tool and work-piece assembly is first converted into STEP/STP format and then imported into ANSYS ICEM CFD 14.5.



Fig.2: Physical model of work-piece-tool set up
Geometry checking: After importing the geometry into ICEM CFD, we repair the geometry, checking for errors like incomplete surfaces, holes and gaps by building the diagnostic topology. These errors should be rectified, as FLUENT does not tolerate them. After rectifying all the errors we can proceed to part naming.
Part naming and material points: After importing the model from Pro-E, the parts are assigned names for identification. Material points are created to indicate the fluid volume or solid volume. The different names of the parts are shown in the model tree; Fig. 3 shows the different parts and their names.

Fig.3: Model with part naming Fig.4: Meshed box model with part naming
Meshing: Meshing is used to discretize a spatial domain into simple geometric elements such as triangles (in 2D) or tetrahedra (in 3D) to obtain the numerical solution. After importing the geometry and naming the parts we set the parameters for meshing. First we have to decide which type of meshing to use, (a) structured mesh or (b) unstructured mesh, based on the application and the complexity of the geometry. In the present work an unstructured mesh is used, as the model is not too complex and it also takes less time for calculation and analysis. The quality of the mesh is a relevant factor for the appropriate geometry of the model and the accuracy of the results; it can be expressed in terms of orthogonal quality. If the orthogonal quality is > 0, the mesh quality is good and better results are obtained, while if it is < 0 the mesh gives bad results. Tetrahedral elements are used for meshing the geometry, as they provide more automatic solutions with the ability to add mesh controls to improve accuracy in critical regions [17]. We select the part mesh setup to set the proper mesh size for the different parts of the model, to capture the proper physics and the important features involved. The box structure outside the tool-work-piece setup is generated to capture the air volume present in the atmosphere. The next important step is to create prism elements over the wall surface; as the flow pattern of the electrolyte is to be analysed, the layer is created only over the electrolyte fluid volume.
After meshing, the mesh is checked for the different kinds of errors which can create problems at the time of analysis in FLUENT [17]:
(a) Duplicate elements (b) Uncovered faces (c) Missing internal faces (d) Volume orientation (e) Surface orientation (f) Hanging elements (g) Multiple edges (h) Triangle boxes (i) Single edges (j) Non-manifold vertices (k) Unconnected vertices.
Errors related to multiple edges and unconnected vertices are ignored, as they do not create any problem while importing the model into FLUENT.



Fig.5: Volume mesh at cut plane Fig.6: Prism layer at wall surface
Boundary conditions: The mesh generated in ICEM is then imported into FLUENT as a .msh file. Before setting the boundary conditions it is necessary to set proper dimensional units so that proper results are achieved.
Model: In the model setup we activate the multiphase volume-of-fluid mode, as we are considering two volumes: air and electrolyte. The energy equation is also activated, as a temperature profile is required in the present work. As we are working above a Reynolds number of 4000, the flow is turbulent. The k-ε and k-ω models are the two options available for turbulent flow. The k-ε model with the realizable wall function is selected, as it accurately predicts the spreading rate of both planar and round jets and also provides superior performance for flows involving rotation, boundary layers under strong adverse pressure gradients, separation and recirculation [17].
Material: In the material setup we create the materials to be used as solid and fluid volumes; in our work copper and steel are used as the solid materials for the tool and work-piece respectively, and electrolyte and air as the fluid materials. Air is defined as a fluid volume because it is present in the atmosphere, and the electrolyte because it circulates inside the tool.
The input values for the analysis are as follows. For the inlet zone we select the type as pressure-inlet and the box bottom as pressure-outlet. In the inlet conditions, pressures of 1.0, 1.2 and 1.4 kg/cm² respectively are inserted. In the specification method we give the intensity as 5 and the hydraulic diameter as 0.02 m. For the inlet thermal conditions the temperature of the air is taken as the ambient temperature, i.e., 300 K. The outlet is set as an interior type, the box-bottom as pressure-outlet, and the gauge pressure at the outlet surface is 0. In the specification method we give the backflow intensity as 5 and the backflow hydraulic diameter as 0.02 m.
RESULTS AND DISCUSSION
This section deals with the analysis of the results of the three models generated in ANSYS Fluent. It shows the crucial parameters affecting the overall ECM process in terms of contours, from which the variation of these parameters in the IEG and their effects can be predicted. It also describes the various experimental results obtained.
Critical parameters analyzed in simulation:
Volume Fraction Profile
Figures 7, 8 and 9 show the volume fraction profiles generated at different pressures. The inlet pressures for this simulation study were taken as 1.0 kg/cm², 1.2 kg/cm² and 1.4 kg/cm² respectively. The volume fraction contours shown are the volume fraction of the sodium nitrate electrolyte within the IEG. As the figures show, the volume fraction of the electrolyte is higher at the center of the hole and decreases towards the outer side; its value differs between the models at different pressures.

Fig.7: Volume fraction at pressure 1.0 kg/cm²  Fig.8: Volume fraction at pressure 1.2 kg/cm²



Fig.9: Volume fraction at pressure 1.4 kg/cm²

Velocity Profile
Figures 10, 11 and 12 show the velocity profiles for the model at inlet pressures of 1.0 kg/cm², 1.2 kg/cm² and 1.4 kg/cm² respectively. The velocity profile at 1.0 kg/cm² pressure, shown in Fig. 10, indicates that the velocity of the electrolyte increases from the hole to the boundary due to the reduction in the flow area. The velocity of the electrolyte within the IEG is 10.03 m/s, which is less than the outlet velocity; so as the fluid flows towards the work-piece, the velocity decreases. There is a slight change in the velocity within the IEG at different pressures.

Fig.10: Velocity profile at pressure 1.0 kg/cm²  Fig.11: Velocity profile at pressure 1.2 kg/cm²


Fig.12: Velocity profile at pressure 1.4 kg/cm²

Pressure Profile
Figures 13, 14 and 15 describe the pressure contours in the inter-electrode gap on the plane of the work-piece for the model at the different inlet pressures of 1.0 kg/cm², 1.2 kg/cm² and 1.4 kg/cm² respectively. The pressure profiles describe the variation in pressure at the IEG on the plane of the machining area. All cases show that the pressure is higher at the center of the hole and decreases towards the boundary. The pressure increases from the inlet to the outlet, and the pressure within the IEG is higher compared to the inlet pressure.



Fig.13: Pressure profile at inlet pressure 1.0 kg/cm²  Fig.14: Pressure profile at inlet pressure 1.2 kg/cm²


Fig.15: Pressure profile at inlet pressure 1.4 kg/cm²

Turbulent Kinetic Energy Profile
Figures 16, 17 and 18 show the turbulent kinetic energy contours within the IEG for the model at the different pressures.

Fig.16: Turbulent kinetic energy profile at 1.0 kg/cm²  Fig.17: Turbulent kinetic energy profile at 1.2 kg/cm²

Fig.18: Turbulent kinetic energy profile at pressure 1.4 kg/cm²


Turbulence in the k-ε model depends on the turbulent kinetic energy (k) and the turbulent eddy dissipation (ε). Turbulence is directly related to surface roughness: if the turbulence within the IEG is greater, the roughness of the machined surface will also be greater. The turbulent kinetic energy determines the energy in the turbulence; it is produced by fluid shear, friction or buoyancy, or through external forcing at low-frequency eddy scales. At 1.0 kg/cm² pressure the kinetic energy varies from 3.294×10⁻¹ m²/s² to 1.776 m²/s². In the second case the variation of the kinetic energy distribution is less than in the first case; the values vary from 3.264×10⁻¹ m²/s² to 1.75 m²/s². At 1.4 kg/cm² pressure the kinetic energy varies from 3.86×10⁻¹ m²/s² to 2.069 m²/s², which is greater than in the first and second cases.
From the above discussion it can be observed that the kinetic energy within the IEG is lowest at 1.2 kg/cm² pressure. As shown in Figure 17, the turbulent kinetic energy is low, so there is less turbulence; and with lower turbulence, a better machined surface is obtained.
Turbulent Eddy Dissipation Profile
Turbulent eddy dissipation gives a quantitative measure of the turbulence. Figures 19, 20 and 21 present the turbulent eddy dissipation profiles for the model over the pressure range 1.0-1.4 kg/cm².

Fig.19: Turbulent eddy dissipation at 1.0 kg/cm²  Fig.20: Turbulent eddy dissipation at 1.2 kg/cm²

At 1.0 kg/cm² pressure the eddy dissipation varies from 2.22×10² m²/s³ to 1.0542×10⁴ m²/s³. In the second case the variation of the distribution is less than in the first; the values range from 2.19×10² m²/s³ to 1.0168×10⁴ m²/s³. At 1.4 kg/cm² pressure the eddy dissipation varies from 2.81×10² m²/s³ to 1.3553×10⁴ m²/s³, which is much greater than in the first and second cases.

Fig.21: Turbulent eddy dissipation at 1.4 kg/cm²

It can be seen that at 1.2 kg/cm² pressure, the value of the turbulent eddy dissipation within the IEG is lowest.
Experimental results
After conducting the DOE as per the Taguchi method using an L9 orthogonal array with two repetitions, the following results/responses were obtained for PECM.
Table 1: Result table

| Test no | Response, 1st repetition | Response, 2nd repetition | Test response total | Mean | S/N ratio |
| E1 | 0.017 | 0.018 | 0.035 | 0.0175 | -35.15 |
| E2 | 0.046 | 0.052 | 0.098 | 0.049 | -26.24 |
| E3 | 0.023 | 0.018 | 0.041 | 0.0205 | -33.95 |
| E4 | 0.038 | 0.042 | 0.080 | 0.040 | -31.95 |
| E5 | 0.072 | 0.075 | 0.147 | 0.0735 | -22.67 |
| E6 | 0.033 | 0.039 | 0.072 | 0.036 | -28.96 |
| E7 | 0.037 | 0.032 | 0.069 | 0.0345 | -29.31 |
| E8 | 0.039 | 0.042 | 0.081 | 0.0405 | -27.86 |
| E9 | 0.063 | 0.058 | 0.121 | 0.0605 | -24.38 |
Mean change in MRR
A1 = 0.035 + 0.098 + 0.041
A2 = 0.080 + 0.147 + 0.072
A3 = 0.069 + 0.081 + 0.121
Dividing A1, A2 and A3 by 6 (three factor combinations × two repetitions) gives the mean change in MRR under the conditions A1, A2 and A3. Thus:
A1 = 0.174/6 = 0.029
A2 = 0.298/6 = 0.0498
A3 = 0.270/6 = 0.045
The mean change in MRR under the conditions B1, B2, B3, C1, C2 and C3 is calculated similarly.
Signal-to-Noise Ratio
The Taguchi method stresses the importance of studying the response variation using the signal-to-noise (S/N) ratio, which results in minimization of the quality characteristic variation due to uncontrollable parameters. The metal removal rate was considered as the quality characteristic with the concept of "the larger the better". The S/N ratio for the larger-the-better case is S/N = -10 log(mean square deviation):

$$S/N = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right)$$

The larger-is-better S/N ratio is used when there is no predetermined target value, and the larger the value of the characteristic, the better the MRR. The S/N ratio and the mean change under the conditions A1, A2, ..., C2 and C3 were calculated and are presented in Table 2.
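As a check, the S/N entry of test E1 in Table 1 can be reproduced from its two repetitions with the larger-the-better formula above:

```python
# Sketch: reproducing the larger-the-better S/N ratio from the two
# repetitions of test E1 in Table 1 (responses 0.017 and 0.018 g/min).
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10*log10( (1/n) * sum(1/y_i^2) )."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

print(f"{sn_larger_is_better([0.017, 0.018]):.2f} dB")  # -35.15, as in Table 1
```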
Table 2: Mean change and S/N ratio for individual factors

| Factor | Total result | Mean change | S/N ratio |
| A1 | 0.174 | 0.029 | -31.78 |
| A2 | 0.298 | 0.0498 | -27.86 |
| A3 | 0.270 | 0.045 | -27.18 |
| B1 | 0.1836 | 0.0306 | -32.13 |
| B2 | 0.3258 | 0.0543 | -25.59 |
| B3 | 0.234 | 0.039 | -29.10 |
| C1 | 0.1878 | 0.0313 | -29.49 |
| C2 | 0.298 | 0.0498 | -27.52 |
| C3 | 0.2568 | 0.0428 | -28.64 |
Main effect plots
The main effect plots of MRR vs. voltage, MRR vs. feed rate and MRR vs. electrolyte pressure, and of the S/N ratio vs. voltage, feed rate and electrolyte pressure, for all the values obtained from MINITAB, are shown in Figures 22, 23, 24 and 25.

[Plots: Main Effects Plot (data means) for MRRG (g/min), showing mean of MRRG (g/min) vs. voltage (V), levels 1-3, and vs. tool feed rate (f, mm/min), levels 1-3]

Fig.22: Effect of voltage on MRR Fig.23: Effect of tool feed rate on MRR

[Plots: Main Effects Plot (data means) for MRRG (g/min), showing mean of MRRG (g/min) vs. electrolyte pressure (kg/cm²), levels 1-3; and Main Effects Plot (data means) for S/N ratios, showing mean of S/N ratios vs. voltage (V), tool feed rate (f, mm/min) and electrolyte pressure (kg/cm²); signal-to-noise: larger is better]
Fig.24: Effect of electrolyte pressure on MRR Fig.25: Effect of process parameters on S/N Ratio
Analysis of Variance
The relative magnitude of the effect of the different factors can be obtained by the decomposition of variance, called Analysis of Variance (ANOVA).
Overall mean = 0.0413
Total sum of squares: SSTO = 0.005172
Treatment sums of squares: SSTR_A = 0.001423, SSTR_B = 0.001732, SSTR_C = 0.001047
Total treatment sum of squares = 0.004202
Error sum of squares: SSE = 0.00097
As we know, SSTO = SSTR + SSE: 0.004202 + 0.00097 = 0.005172 (verified)
Table 3: ANOVA table
| Parameter | DOF | SS | V | F | P (%) |
| A (Voltage) | 2 | 0.001423 | 0.0007115 | 7.12 | 27.51 |
| B (Feed rate) | 2 | 0.001732 | 0.000866 | 8.66 | 33.48 |
| C (Pressure) | 2 | 0.001047 | 0.0005235 | 5.235 | 20.24 |
| E (Error) | 9 | 0.00097 | 0.0001 | 1 | 18.75 |
| Total | 15 | 0.005172 | | | |

In the ANOVA, the F-ratio is used to determine the significance of each factor. Percent (%) is defined as the significance rate of a process parameter on the metal removal rate. The percentages show that the applied voltage, feed rate and electrolyte pressure have significant effects on the MRR. It can be observed from the table that the applied voltage (A), feed rate (B) and electrolyte pressure (C) affect the material removal rate by 27.51%, 33.48% and 20.24% respectively in the pulse electrochemical machining of SS 304L.
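The percent contributions in Table 3 are simply each sum of squares divided by the total sum of squares, which the short sketch below reproduces (small differences are rounding):

```python
# Sketch: percent contributions in Table 3 follow from the treatment sums
# of squares divided by the total sum of squares.
ss = {"A (Voltage)": 0.001423, "B (Feed rate)": 0.001732,
      "C (Pressure)": 0.001047, "E (Error)": 0.00097}
ss_total = sum(ss.values())  # 0.005172, matching SSTO above

for factor, s in ss.items():
    print(f"{factor}: {100.0 * s / ss_total:.2f} %")
# A: 27.51 %, B: 33.49 %, C: 20.24 %, E: 18.76 % (cf. Table 3)
```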
CONCLUSIONS
Three-dimensional two-phase flow pattern analysis of electrochemical machining with a circular (hollow) tool provides a fundamental idea of the velocity distribution, pressure pattern, turbulence etc. in the IEG. A cubical stainless steel work piece, a circular copper tool and 15% sodium nitrate solution as electrolyte were considered in this analysis. The tool was modeled using the Design Modeler of PRO-E and analyzed in ANSYS FLUENT 14.5. To get consistent and good results, the model was meshed with a fine mesh resolution. The model was analyzed with inlet pressures of 1.0 kg/cm², 1.2 kg/cm² and 1.4 kg/cm² respectively.
Major conclusions:
1) The flow velocity decreases as the electrolyte moves towards the work-piece and increases at the outlet.
2) The turbulent kinetic energy and turbulent eddy dissipation rate profiles exhibit higher turbulence at pressures of 1.0 kg/cm² and 1.4 kg/cm², whereas at 1.2 kg/cm² pressure turbulence is almost negligible.
3) The MRR is affected most by the tool feed rate, followed by voltage, and least by the electrolyte pressure.
4) The optimized combination A2B2C2 gives the best material removal rate (MRR).

5) Hence, from the computational simulation and experimental results, 1.2 kg/cm² was found to be the optimum value for pressure.
REFERENCES:
[1] Usharani Rath, "Two phase flow analysis in electrochemical machining for L-shaped tool: a CFD approach", M.Tech project report (2013), National Institute of Technology, Rourkela, Odisha, India.
[2] Baburaj, M., "CFD analysis of flow pattern in electrochemical machining for L-shaped tool", M.Tech project report (2012), National Institute of Technology, Rourkela, Odisha, India.
[3] Benedict, Gary F., "Nontraditional Manufacturing Processes", Marcel Dekker, Inc., 270 Madison Avenue, New York.
[4] Ghosh, A. and Mallik, A.K., "Manufacturing Science", Second Edition, East-West Press Private Limited, New Delhi, India (2010).
[5] Sekar T., Marappan R., "Improving Material Removal Rate of Electrochemical Machining by Using Rotating Tool".
[6] H.S. Beravala, R.S. Barot, A.B. Pandey, G.D. Karhadkar, "Development of predictive mathematical model of process parameters in electrochemical machining process", National Conference on Recent Trends in Engineering & Technology (2011).
[7] Rama Rao S., Padmanabhan G., "Application of Taguchi methods and ANOVA in optimization of process parameters for metal removal rate in electrochemical machining of Al/5%SiC composites", International Journal of Engineering Research and Applications, Vol. 2, pp. 192-197 (2012).
[8] Suresh H. Surekar, Sudhir G. Bhatwadekar, Wasudev G. Kharche, Dayanand S. Bilgi, "Determination of principle component affecting material removal rate in electrochemical machining process", International Journal of Engineering Science and Technology, Vol. 4, pp. 2402-2408 (2012).
[9] J. Pattavanitch, S. Hinduja, J. Atkinson, "Modelling of the electrochemical machining process by the boundary element method", CIRP Annals - Manufacturing Technology, Vol. 59, pp. 243-246 (2010).
[10] M.H. Wang, D. Zhu, "Simulation of fabrication for gas turbine blade turbulated cooling hole in ECM based on FEM", Journal of Materials Processing Technology, Vol. 209, pp. 1747-1751 (2009).
[11] Mohan Sen, H.S. Shan, "A review of electrochemical macro- to micro-hole drilling processes", International Journal of Machine Tools & Manufacture, Vol. 45, pp. 137-152 (2005).
[12] Evgueny I. Filatov, "The numerical simulation of the unsteady ECM process", Journal of Materials Processing Technology, Vol. 109, pp. 327-332 (2001).
[13] Jerzy Kozak, "Computer simulation system for electrochemical shaping", Journal of Materials Processing Technology, Vol. 109, pp. 354-359 (2001).
[14] Upendra Behera, P.J. Paul, S. Kasthurirengan, R. Karunanithi, S.N. Ram, K. Dinesh, S. Jacob, "CFD analysis and experimental investigations towards optimizing the parameters of Ranque-Hilsch vortex tube".
[15] Rui Wu, Danwen Zhang and Juan Sun, "3-D flow field of cathode design for NC precision electrochemical machining integer impeller based on CFD", Research Journal of Applied Sciences, Engineering and Technology, Vol. 3, pp. 1007-1013 (2011).
[16] Krishna Mohan Singh, R.N. Mall, "Analysis of optimum corner radius of electrolyte flow path in ECM using CFD", International Journal of Engineering Research & Technology, Vol. 2, pp. 617-635 (2013).
[17] Sian, S., "CFD analysis of flow pattern in electrochemical machining", B.Tech project report (2011), National Institute of Technology Rourkela, Odisha, India.
[18] ANSYS Training Manual, Inventory Number: 002600, 1st Edition, ANSYS Release 12.0, published 28 April 2009.
[19] Product Data Sheet, AK Steel, UNS S30400/UNS S30403.







Face Recognition using Principal Component Analysis with DCT
Kiran D. Kadam
E&TC Department, Dr. D.Y. Patil College of Engineering, Pune University, Ambi-Pune
E-mail: kiran3181kk@gmail.com

Abstract: Face recognition (FR) is a challenging issue due to variation in expression, pose, illumination, aging etc. In this paper a hybrid combination of principal component analysis (PCA) and discrete cosine transform (DCT) is used to build an accurate face recognition system. Face recognition systems are used for many applications, from security access to video indexing by content. The method increases efficiency by extracting meaningful features, increases the recognition rate of the system, and is easy to implement. This paper proposes a methodology for improving the recognition rate of a face recognition system. Standard databases such as FACES 94 and ORL are used for the experiments, and the results show that the proposed system achieves more accurate face recognition than either individual method.

Keywords: DCT, FACES 94 database, face recognition, feature extraction, Mydatabase, ORL database, PCA, recognition rate
INTRODUCTION
In recent years, automatic face recognition has become a popular area of research; an excellent survey on the topic appeared recently in [1]. Recognition, verification and identification of faces from still images or video data have a wide range of commercial applications, including video indexing of large databases, security access and other multimedia applications. As one of the most successful applications of image analysis and understanding, face recognition has received significant attention, especially during the past several years.
Generally, feature extraction and classification are the two fundamental operations in any face recognition system, and to improve recognition performance it is necessary to enhance these operations. Feature extraction reduces the dimensionality of the images using linear or non-linear transformations of face images with successive feature selection, so that an exact feature representation is possible. However, there are problems such as lighting conditions, illumination, varying backgrounds, aging and individual variation that complicate feature extraction from human faces.
In this paper PCA is used for identification and pattern recognition. Pattern recognition is very difficult, particularly when the input data (images) have very high dimensions. In such a case PCA is a very powerful tool to explore the data, since it operates by reducing their dimensions considerably. The advantages of using PCA are that data can be compressed without losing useful information and that dimensions can be reduced.
At least two reasons account for this trend: first, face recognition is widely used in real-life applications and second, feasible technologies have become available after many years of research. The range of face recognition applications is very assorted: face-based video indexing and browsing engines, multimedia management, human-computer interaction, biometric identity authentication, surveillance, image and film processing, and criminal identification. Face recognition is based on biometric study for identity authentication. Compared with existing identification technologies such as fingerprint and iris recognition, face recognition has several characteristics that are useful for consumer applications, such as non-intrusive and user-friendly interfaces, low-cost sensors, easy setup, and active identification. Face recognition methods can be divided into the following categories: holistic matching methods, feature-based matching methods and hybrid methods. The holistic methods use the whole face as input; Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA) belong to this class of methods. The PCA algorithm was first used for face recognition by M. Turk and A. Pentland [2] in 1991 at the MIT Media Lab. Applying principal component analysis (PCA) involves evaluation of the covariance matrix and computation of its eigenvalues.
The proposed method is based on a hybrid combination of PCA and DCT, and face recognition is done by feature extraction using PCA and DCT. Redundant information interference is eliminated by normalization. PCA is used for feature extraction and dimension reduction. In general, for PCA-based face recognition, increasing the number of signatures increases the recognition rate; however, the recognition rate saturates after a certain amount of increase. Classification is done using algorithms such as Euclidean distance, Hamming distance etc. After these algorithms, the final recognition result is displayed, showing whether the face matches or not, and the percentage recognition rate is calculated.
Presently there are two types of face detection techniques: geometrical face detectors and holistic-based face detectors. A geometric face detector extracts local features such as the locations and local statistics of the eyes, nose and mouth. A holistic-based detector extracts a holistic representation of the whole face region and has robust recognition performance under noise, blurring and partial occlusion. Principal component analysis (PCA) is a holistic-based approach.



2. FACE RECOGNITION

Face recognition is a research hotspot in the fields of computer vision and pattern recognition, and is widely used in human-computer interaction, security validation, etc. Up to now, almost all the techniques have been based on multiple samples; but in some special situations, such as passport verification and ID card verification, only one image can be obtained per person, and these techniques may fail.
Principal Component Analysis (PCA), proposed by Turk [2], is one of the most important single-sample face recognition methods; it can exactly express every face image via a linear combination of eigenvectors.
Currently DCT is widely used in the field of face recognition. It uses the discrete cosine transformation to eliminate the redundancies in an image and extract the most significant elements (i.e., coefficients) in order to use them for recognition. The discrete cosine transform (DCT) transforms a spatial-domain signal to the frequency domain.

Fig.1 Face recognition system
2.1 Face recognition problem
The challenge of face recognition is the rapid and accurate identification or classification of a query image [3]. Difficulties in face recognition include identifying similar faces (inter-class similarity) and intra-class variability such as head pose, illumination condition, facial expression and aging effects. A face recognition technique should be able to produce results within a reasonable time [4]. In human-robot interaction, real-time response time is critical [10]. Besides, it also enables computer systems to recognize facial expressions and infer emotions from them in real time [11].

2.2 Feature extraction
In the fields of pattern recognition and data mining, feature extraction is very important. It extracts a meaningful feature subset from the original data by some rules, to reduce machine training time and space complexity, in order to achieve the goal of dimensionality reduction. In feature extraction the input data are transformed into a set of features, and the new reduced representation contains most of the important information of the original data [5]. In any face recognition system, feature extraction is a key step. Feature extraction is a process which transforms the data from the primary space into a feature space and represents them in a lower-dimensional space with fewer effective characters. Many methods of feature extraction have been proposed, such as knowledge-based methods, feature-invariant approaches, template matching methods, and appearance-based methods. Among all these methods, the eigenface algorithm, the most widely used linear-mapping method based on PCA (Principal Component Analysis), is useful for face recognition.

3. PRINCIPAL COMPONENT ANALYSIS (PCA)

The technique used to reduce dimensionality, which can be applied to both compression and recognition problems, is Principal Component Analysis (PCA). PCA is also known as the Hotelling transform, eigenspace projection, or the Karhunen-Loeve (KL) transformation [6]. In PCA the original image data are transformed into a subspace of Principal Components (PCs) such that the first orthogonal dimension of this subspace captures the greatest amount of variance among the images, and the last dimension captures the least, based on the statistical characteristics of the targets [7].
Principal Component Analysis (PCA) is a popular transformation system whose result is not directly related to a single feature component of the original sample. PCA performs feature extraction by capturing the most variable data components of the samples and selecting a number of important individuals from all the feature components. PCA has been successfully used in face recognition, image denoising, data compression, data mining and machine learning. The implementation of the PCA method in face recognition is called the eigenfaces technique [12].

Calculation and subtraction of the average
The average image $\Psi$ is calculated and subtracted from all the images:

$$\Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i, \qquad \Phi_i = \Gamma_i - \Psi$$

where M is the number of images, $\Gamma_i$ is the input image and $\Phi_i$ indicates its difference from the average.

Calculation of the covariance matrix
The covariance matrix of the data set is calculated using the following formula:

$$C = \frac{1}{M}\sum_{i=1}^{M}\Phi_i \Phi_i^{T}$$

Calculation of the eigenvectors and eigenvalues
Only the M' eigenfaces $u_k$ of highest eigenvalues are actually needed to produce a complete basis for the face space. A new input face image $\Gamma$ is transformed into its eigenface components by the simple operation

$$w_k = u_k^{T}(\Gamma - \Psi), \qquad k = 1, 2, \ldots, M'$$

The $w_k$ are called weights and form a vector $\Omega^{T} = [w_1, w_2, w_3, \ldots, w_{M'}]$.
The feature vector descriptor is then used in a standard face recognition algorithm.
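A minimal sketch of these eigenface steps is given below; the training matrix is random placeholder data (an assumption, since no database ships with the paper), and it uses the standard small-matrix trick to avoid forming the full D×D covariance:

```python
# Sketch of the eigenface computation described above (standard PCA steps;
# the training matrix here is random placeholder data, not a real database).
import numpy as np

rng = np.random.default_rng(3)
M, D = 20, 180 * 200                  # M training images, D pixels each
gamma = rng.random((M, D))            # rows = vectorized face images

psi = gamma.mean(axis=0)              # average face Psi
phi = gamma - psi                     # differences from average, Phi_i

# Small-matrix trick: eigenvectors of the M x M matrix phi phi^T map back
# to the eigenfaces of the D x D covariance matrix.
small = phi @ phi.T / M
vals, vecs = np.linalg.eigh(small)
order = np.argsort(vals)[::-1][:10]   # keep M' = 10 largest eigenvalues
eigenfaces = (phi.T @ vecs[:, order]).T
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

weights = phi @ eigenfaces.T          # w_k = u_k^T (Gamma - Psi) per image
print(weights.shape)                  # (20, 10) feature vectors Omega
```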

4. DISCRETE COSINE TRANSFORM (DCT)

The discrete cosine transform (DCT) is used to transform a signal from the spatial domain into the frequency domain; a signal in the frequency domain contains the same information as that in the spatial domain. For an N×N image f(x, y), the 2-D DCT is

$$C(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\cos\!\left[\frac{(2x+1)u\pi}{2N}\right]\cos\!\left[\frac{(2y+1)v\pi}{2N}\right]$$

and the IDCT is expressed as

$$f(x,y)=\sum_{u=0}^{N-1}\sum_{v=0}^{N-1}\alpha(u)\,\alpha(v)\,C(u,v)\cos\!\left[\frac{(2x+1)u\pi}{2N}\right]\cos\!\left[\frac{(2y+1)v\pi}{2N}\right]$$

where $\alpha(0)=\sqrt{1/N}$ and $\alpha(u)=\sqrt{2/N}$ for $u \neq 0$.
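A short sketch of DCT-based compression of this kind follows; the 8×8 retained block is an assumed illustration, not a parameter taken from the paper:

```python
# Sketch: 2-D DCT of an image and truncation to the low-frequency block that
# hybrid DCT+PCA pipelines typically keep (the 8x8 block size is assumed).
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
img = rng.random((64, 64))            # placeholder grayscale image

coeffs = dctn(img, norm="ortho")      # spatial -> frequency domain
kept = np.zeros_like(coeffs)
kept[:8, :8] = coeffs[:8, :8]         # most significant (low-frequency) terms

approx = idctn(kept, norm="ortho")    # back to the spatial domain
err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"relative error keeping 64/4096 coefficients: {err:.2f}")
```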

5. HYBRID METHOD

A hybrid method is a combination of two individual methods, used to improve performance; recognition rates are slightly higher than with the individual methods. In this paper the two technologies, PCA and DCT, are combined. PCA and DCT have certain mathematical similarities, since both aim to reduce the dimensions of the data. Initially DCT is used to compress the input image, then PCA is applied to reduce the dimensions, and the final recognition or classification is done using the Euclidean distance formula. It should be noted that this requires less memory, which makes it advantageous for databases of significant size.
5.1 The complete process of face recognition system
Fig.2 Algorithm flowchart

Distance Matching (Detection)
In this paper, the nearest-neighbour classifier with Euclidean distance is used for classification. The Euclidean distance measures the distance from the probed feature vector to the reference feature vectors in the gallery; two vectors are close to each other when the distance between them is minimal. It is defined as:

$$d(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$
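A minimal nearest-neighbour matcher using this distance might look as follows (an illustrative sketch, not the paper's code):

```python
# Sketch: nearest-neighbour matching with the Euclidean distance, as used
# for the final classification step above.
import numpy as np

def nearest_neighbour(probe, gallery):
    """Return index of the gallery feature vector closest to the probe."""
    d = np.linalg.norm(gallery - probe, axis=1)  # Euclidean distances
    return int(np.argmin(d)), float(d.min())

gallery = np.array([[0.0, 1.0], [2.0, 2.0], [5.0, 1.0]])  # reference vectors
idx, dist = nearest_neighbour(np.array([1.9, 2.2]), gallery)
print(idx, round(dist, 3))  # 1 0.224
```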



6. EXPERIMENTAL RESULTS

Experiments were performed to evaluate the performance of PCA with DCT as a face recognition system on standard databases such as FACES 94 and ORL. FACES 94 contains 153 individuals at 180 by 200 pixel image resolution, with images of male and female subjects in separate directories, a plain green background, head turn, tilt and slant with very minor variation in these attributes, and no image blurring. The ORL database consists of 400 images of 40 individuals, with 10 different images of each person, and includes variations in facial expression and illumination. Mydatabase was created with 60 images of 6 individuals at 180 by 200 pixel image resolution.



Fig.3 FACES 94, ORL and Mydatabase database
6.1 Experimental setup

In order to evaluate the performance of PCA and DCT, code for each algorithm was written in Matlab. These algorithms were tested using the standard FACES 94 and ORL databases and Mydatabase [9]. After testing on the standard databases, we tested on the database created by the author.

6.2 Result discussion

The overall experimental results show that the combination of PCA with DCT gives better recognition rates than simple PCA. We tested PCA with DCT on the standard databases FACES 94 and ORL, which achieved accuracy levels of 99.90% and 94.70% respectively. We also tested it on Mydatabase, which gave a recognition rate of 95%. This method is especially useful for recognizing faces with expression disturbance.

Table 1. Dataset Description

| Database Name | Sample Number | Total Images |
| ATT | 40 | 400 |
| FACES94 | 153 | 3040 |
| Mydatabase | 10 | 60 |

Table 2. Recognition Rate

| Dataset name | PCA | PCA+DCT |
| ATT | 91.30% | 94.70% |
| FACES94 | 99.90% | 99.90% |
| Mydatabase | 87.00% | 95% |

7. CONCLUSION

In this paper, we have presented a new, fast method that combines DCT and PCA. PCA is a very fast algorithm with fairly high robustness, and DCT is used to reduce the time for recognizing output images. We can therefore conclude that the combination of PCA and DCT offers higher recognition rates; this face recognition method shows improvement in its parameters in comparison to the existing method.

ACKNOWLEDGEMENTS
This work was supported in part by the Electronics Department of Dr. D.Y. Patil College of Engineering, Ambi-Pune. The author would like to thank the anonymous reviewers and the editor for their constructive comments.

REFERENCES:
[1] Dashun Que, Bi Chen, Jin Hu, "A Novel Single Training Sample Face Recognition Algorithm Based on Modular Weighted (2D)²PCA", School of Information Technology, Wuhan University of Technology, Wuhan 430063, P. R. China.
[2] M. Turk and A. Pentland, "Eigenfaces for recognition", Journal of Cognitive Neuroscience, pp. 71-86, 1991.
[3] K. E. Gates, "Fast and Accurate Face Recognition Using Support Vector Machines", Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, pp. 163-163.
[4] S. Palanivel, B. S. Venkatesh, and B. Yegnanarayana, "Real Time Face Recognition System Using Autoassociative Neural Network Models", 2003.
[5] L. Xie and J. Li, "A Novel Feature Extraction Method Assembled with PCA and ICA for Network Intrusion Detection", 2009 International Forum on Computer Science-Technology and Applications, vol. 3, 2009, pp. 31-34.
[6] M. Karg, R. Jenke, W. Seiberl, K. K, A. Schwirtz, and M. Buss, "A Comparison of PCA, KPCA and LDA for Feature Extraction to Recognize Affect in Gait Kinematics", 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009, pp. 1-6.
[7] Ö. Toygar and A. Acan, "Face Recognition Using PCA, LDA and ICA Approaches on Colored Images", Journal of Electrical & Electronic Engineering, vol. 3, 2003, pp. 735-743.
[8] Z. M. Hafed and Martin D. Levine, "Face Recognition Using the Discrete Cosine Transform", International Journal of Computer Vision, 43(3), 2001, pp. 167-188.
[9] Available at: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
[10] C. Cruz, L. E. Sucar, and E. F. Morales, "Real-Time Face Recognition for Human-Robot Interaction", 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition, Sep. 2008, pp. 1-6.
[11] P. Michel and R. El Kaliouby, "Real Time Facial Expression Recognition in Video Using Support Vector Machines", Proceedings of the 5th International Conference on Multimodal Interfaces, ICMI '03, 2003, p. 258.
[12] C. Li, Y. Diao, H. Ma, and Y. Li, "A Statistical PCA Method for Face Recognition", Second International Symposium on Intelligent Information Technology Application, vol. 3, Dec. 2008, pp. 376-380.


Time Efficient Equations to Solve Calculations of Five Using Recursion Method
Sahana S Bhandari¹, Shreyas Srinath¹

¹Department of Information Science and Engineering, Dayananda Sagar College of Engineering, Bangalore, India
E-mail- sahana.unique@gmail.com

Abstract — In this paper, the shortest methods to solve calculations involving numbers ending with five are presented. Several facts related to such calculations are proposed, through which the entire calculation is reduced to the level of an eye blink. There are many methods in Vedic Mathematics to multiply any two numbers, but they are time consuming since they are not specifically meant for numbers ending with five. This paper describes a method to find the cube of a number ending with five accurately and very fast, and also describes the shortest method to multiply two numbers ending with five. Using these formulas, calculations involving two numbers ending with five can be solved easily. The method can also be used in the field of math coprocessors in computers. The algorithm has been tested in Matlab (version 2012a) and can be implemented on a VLSI chip for faster multiplication.

Keywords — Vedic Mathematics, multiplier, VLSI, digital logic.
INTRODUCTION
We have been doing some things in our lives since grade 1, yet we often do not understand the origin of those basics. One of those basic things is calculations involving numbers ending with five. We have been finding cubes of numbers ending with five for a long time without knowing that the answer can end in only four different numbers. Similarly, we are unaware of many facts that are reflected in this paper. No matter how big the numbers are, the formula holds good for all numbers ending with five. There are many methods in Vedic Mathematics to multiply any two numbers; they are time consuming since they are not specifically meant for numbers ending with five. These formulas describe, for the first time, a method to find the answer to any kind of calculation involving numbers ending with five in one step. This method has also led to a method for multiplying N numbers in one step, i.e. multiplying three or more numbers at once. It can be developed into a math coprocessor by designing the algorithm, which reduces time, area and power in the coprocessor.
TO FIND THE CUBE OF A NUMBER ENDING WITH FIVE
There are quite a few methods to find the square of a number ending with five. What if we want to find the cube of such a number? Either we can find the square of the number and multiply the square by the number again, or we can apply the Universal Multiplication Equation twice. Both are two-step processes, which is time consuming, and the chance of making a mistake is higher. This drawback can be overcome by using the Recursion formula: the two-step calculation is reduced to one step, which is faster than any other method. The simple formula to find the cube of a number ending with five is

$\dfrac{X(4X^2 + 6X + 3)}{4}$    (1)

This equation can be used only for numbers ending with five, i.e. numbers of the form (X5). To find the cube of such a number, we substitute the value of X in Eq. (1). The answer obtained from Eq. (1) forms the first part; to get the final answer, we simply write that answer followed by one of the numbers from Table 1, chosen according to the remainder. To start with, we follow these steps:
1. Take any number of the form (X5).

Example: (85)³. Here X = 8.
2. Substitute the value of X in the equation to get the first part of the answer.

Example:

$\dfrac{X(4X^2 + 6X + 3)}{4}$
= 8(4(8)² + 6(8) + 3)/4
= 8(4 × 64 + 48 + 3)/4
= 8(256 + 48 + 3)/4
= 8 × 307/4
= 2456/4
= 614
3. Ignore the decimal part and consider only the whole-number part.
4. The second part of the answer is obtained on a remainder basis.
5. Divide X by 4 and check the remainder.
Remainder Answer
0 125
1 375
2 625
3 875
Table 1. Recursive remainder

6. When any number is divided by 4, the remainder can only be 0, 1, 2 or 3.
Example: when 8 is divided by 4, the remainder is 0.

7. Check the table for the second part of the answer; here, check the answer corresponding to remainder zero.

So the second part of the answer is 125.

Therefore, the final answer is 614125

(995)³

Here X = 99.
Substituting the value of X in the equation:

$\dfrac{X(4X^2 + 6X + 3)}{4}$
= 99(4(99)² + 6(99) + 3)/4
= 99(4 × 9801 + 594 + 3)/4
= 99(39204 + 594 + 3)/4
= 99 × 39801/4
= 3940299/4
= 985074.75

So the first part of the answer is 985074.
Dividing 99 by 4 gives remainder 3; checking the remainder table, the answer corresponding to remainder 3 is 875, so the second part of the answer is 875.
Therefore, the final answer is 985074875.
If the decimal part of the first part of the answer and the remainder are observed together, a relation can be found between them, which is given in Table 2.


Remainder Decimal
0 0
1 0.25
2 0.5
3 0.75
Table 2. Recursive Remainder

If we observe this table, the second part of the answer can also be obtained on a decimal basis, which can be used as a verification technique. There is no direct method to find the cube of a number, but this equation provides the result directly. The main advantage of this equation is that a 3-digit calculation is reduced to a 2-digit calculation and a 4-digit calculation to a 3-digit one, which increases accuracy and speed. To find the cube of a number ending with 5 in the traditional school way, we must find the square of the number and then multiply the square by the same number again, which is time consuming and error prone. We have been finding cubes of numbers ending with 5 since class 3 or 4, yet we may never have observed that such a cube can only end with 125, 375, 625 or 875. This equation reveals that fact.
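Since the paper reports a Matlab implementation but does not list code, the following Python sketch of the cube rule is only an illustration; the function name and table constant are mine, while the formula and remainder table are Eq. (1) and Table 1 above.

```python
# Cube of a number ending in 5, written as 10*X + 5 (e.g. 85 -> X = 8).
# First part: floor(X * (4*X**2 + 6*X + 3) / 4); last three digits from Table 1.
REMAINDER_TABLE = {0: 125, 1: 375, 2: 625, 3: 875}

def cube_of_x5(x):
    first = (x * (4 * x * x + 6 * x + 3)) // 4   # whole-number part of Eq. (1)
    second = REMAINDER_TABLE[x % 4]              # second part, by remainder
    return first * 1000 + second

assert cube_of_x5(8) == 85 ** 3       # 614125
assert cube_of_x5(99) == 995 ** 3     # 985074875
```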
MULTIPLICATION OF TWO NUMBERS ENDING WITH 5
This part of the paper describes the method to multiply two numbers ending with five. This calculation could also be solved using the Universal Multiplication Equation, but that is not as efficient as the Recursion method and the chance of making a mistake is higher; the complexity is reduced by using the recursion method. The simple formula to find the product of two numbers ending with 5 is

$\dfrac{2XY + X + Y}{2}$    (2)

This equation can be used only for two numbers ending with five, i.e. (X5) and (Y5). The values of X and Y are substituted in Eq. (2), and the answer obtained from Eq. (2) is clubbed with 25 or 75 to get the final answer. To start with, we follow these steps:

1. The multiplication should be of the form (X5) × (Y5).
2. (X5) and (Y5) are two numbers ending with 5.

Example: 135 × 165

Here X = 13 and Y = 16, or X = 16 and Y = 13; the commutative property holds good.

3. Substitute the values of X and Y in the above equation to get the first part of the answer.

Example:

(2 × 16 × 13 + 16 + 13)/2
= (2 × 208 + 16 + 13)/2
= (416 + 29)/2
= 445/2
= 222.5

4. Ignore the decimal part and take the whole number as the first part of the answer.
5. Take the difference of X and Y.

Example: 16 - 13 = 3, which is odd.

6. It is not necessary to take the difference of 16 and 13; it is enough to take the difference of their last digits, 6 and 3, which is 3. Our aim is not to find the difference itself but only the last digit of the difference, to judge whether it is odd or even, and the last digits of X and Y suffice for that.

7. If the difference is even, the second part of the answer is 25; otherwise it is 75.

Example: here the difference is odd, so the answer ends with 75. If the difference had been even, the answer would have ended with 25.

Therefore, the final answer is 22275.

Here again, a 3-digit calculation is reduced to a 2-digit calculation and a 4-digit calculation to a 3-digit one, which increases accuracy and speed.
Suppose you get a question where you need to multiply two numbers ending with 5. Example: you need to multiply 4525854465 and you have the options

a) 3866454165 b) 3866454135
c) 3866454185 d) 3866454125

Since the product of two numbers ending with 5 can end only in 25 or 75, option (d) is the only possible answer; the last-digit rule settles the question without doing the multiplication.
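A sketch of the two-factor rule in Python (the function name is illustrative): Eq. (2) supplies the leading digits and the parity of X - Y supplies the last two.

```python
# Product of (X5) and (Y5), i.e. (10X+5)*(10Y+5): first part is
# floor((2XY + X + Y)/2); last two digits are 25 if X - Y is even, else 75.
def multiply_x5_y5(x, y):
    first = (2 * x * y + x + y) // 2
    second = 25 if (x - y) % 2 == 0 else 75
    return first * 100 + second

assert multiply_x5_y5(13, 16) == 135 * 165   # 22275
assert multiply_x5_y5(16, 13) == 135 * 165   # commutative, as noted above
```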

This equation can be extended specifically to the multiplication of any number ending with five by 25. Such a product could also be computed by the method given above, but with the extended method the multiplication is faster and more efficient. Since this method is very simple, it is illustrated through an example.

1. Take a calculation of the form 25 × (X5).

Example: 25 × 85. Here X = 8.

2. Divide X by 4 to get the first part of the answer.

Example: 8/4 = 2.

3. The second part of the answer is obtained by the remainder rule.
Remainder Answer
0 125
1 375
2 625
3 875
Table 3. Recursive remainder

Example: 8/4 leaves remainder 0.
4. Check Table 3 against remainder 0 to get the second part of the answer.

So the second part of the answer is 125.
Therefore, the final answer is 2125.

25 × 1234567895:
Here X = 123456789.
Dividing 123456789 by 4 gives 30864197, so the first part of the answer is 30864197; the division leaves remainder 1.
From Table 3, the second part of the answer is 375, since the remainder is 1 and the answer corresponding to 1 is 375.
Therefore, the final answer is 30864197375, which is even beyond the calculator's limit.
Hence from this method we come to know that 25 multiplied by any number ending with five can end only with 125, 375, 625 and 875.

25 × 5 = 0125    25 × 15 = 0375    25 × 25 = 0625    25 × 35 = 0875
25 × 45 = 1125   25 × 55 = 1375    25 × 65 = 1625    25 × 75 = 1875
25 × 85 = 2125   25 × 95 = 2375    25 × 105 = 2625   25 × 115 = 2875
25 × 125 = 3125  25 × 135 = 3375   25 × 145 = 3625   25 × 155 = 3875
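The 25 × (X5) shortcut is equally mechanical; a sketch under the same conventions (the table is Table 3, duplicated here so the snippet is self-contained):

```python
# 25 * (10X + 5): first part is X // 4; last three digits come from Table 3
# via X mod 4 (the same table as the cube rule).
TABLE3 = {0: 125, 1: 375, 2: 625, 3: 875}

def multiply_25_x5(x):
    return (x // 4) * 1000 + TABLE3[x % 4]

assert multiply_25_x5(8) == 25 * 85                    # 2125
assert multiply_25_x5(123456789) == 25 * 1234567895    # 30864197375
```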
CONCLUSION
It can be concluded that the Time Efficient Equation to Solve Calculations of Five Using Recursion is an efficient method of multiplication, because there is otherwise no single equation to multiply two such numbers. We generally multiply numbers using the traditional method, which is time consuming and error prone, unlike this equation. It has wide application not only in hand calculation but also in math coprocessors and VLSI, owing to its efficiency. Results can be synthesized using this method and compared with the results of array and Booth multipliers. The equation can be used to develop applications for faster, more efficient output.
ACKNOWLEDGMENT
I would like to express my gratitude and appreciation to all those who made it possible to complete this paper, including the authors mentioned in the references; without them, this technical paper would have taken years off my life. It would not have been possible without the kind support and help of many individuals and organizations, and I extend my sincere thanks to all of them. I would like to express my gratitude towards my parents for their kind cooperation and encouragement, which helped me in the completion of this paper, and most especially to my family and friends, whose patient love enabled me to complete the task. And especially to God, who made all things possible.

REFERENCES:
[1] S. A. Rahim, Lecture on Math Magic, MICE Group, Mangalore (India), 2007.
[2] Himanshu Thapliyal and Hamid R. Arabnia, "Time-Area-Power Efficient Multiplier and Square Architecture Based on Ancient Indian Vedic Mathematics", IEEE, 2009.
[3] Gensuke Goto, "High Speed Digital Parallel Multiplier", United States Patent 5,465,226, November 7, 1995.
[4] Tam Anh Chu, "Booth Multiplier with Low Power High Performance Input Circuitry", US Patent 6,393,454 B1, May 21, 2002.
[5] http://www.fastmaths.com
[6] W. B. Vasantha Kandasamy and Florentin Smarandache, Vedic Mathematics: A Fuzzy & Neutrosophic Analysis, 2006.










Detecting Wormhole Nodes in WSN using Data Trackers
Harleen Kaur¹, Neetu Gupta²

¹Research Scholar (M.Tech), ECE, Global Institute of Management and Emerging Technology
²Asst. Professor, Global Institute of Management and Emerging Technology
E-mail- harleen.kaur15@yahoo.com

Abstract- A wormhole attack can destabilize or disable a wireless sensor network. In a typical wormhole attack, the attacker receives packets at one point in the network, forwards them through a link with less latency than the network's own links, and relays them to another point in the network. This paper describes the taxonomy of the wormhole attack and presents several wormhole attack scenarios.
Keywords- Wireless sensor network, wormhole detection, ad hoc network, tunnel, latency, wireless sensor nodes, malicious node.
INTRODUCTION
The basic wireless sensor network [1] consists of a large number of sensor nodes densely deployed over a sensor field. All nodes are connected by radio frequency, infrared or another medium, without any wired connection; this type of network is called a wireless sensor network, shown in Fig. 1.1 below. A WSN node contains a micro-controller, an interface circuit between the sensor node and the battery, and a radio transceiver with an antenna for generating the radio waves through which the nodes communicate and perform operations [2].

Fig.1.1: General Wireless Sensor Network

With the rapid development of wireless technology, ad hoc networks have emerged and attracted attention from industrial and academic research projects. Ad hoc networks are vulnerable to attacks for many reasons; a particularly severe security attack is called the wormhole attack [3], [4], [5]. During the attack [6], an adversary receives packets at one location in the network and tunnels them to another location, where the packets are resent into the network. The remainder of this paper is organized as follows: Section II gives the taxonomy and basic definition of the wormhole attack, Section III presents a survey on the wormhole attack, and Section IV presents the conclusion.

WORMHOLE ATTACK
In the wormhole attack, an attacker receives packets in one part of the network over a low-latency link and tunnels them to a different part. The simplest instance of this attack is a single node situated between two other nodes, forwarding the messages between the two of them.


Fig.2.1: Wormhole Attack
Depending on whether the attackers are visible on the route, on the packet-forwarding behavior of the wormhole nodes, and on their tendency to hide or show their identities, wormholes are classified into three types: closed, half open and open, as shown in Fig. 1.3.
1. Open Wormhole
In this mode, the nodes source (S), destination (D) and wormhole ends M1 and M2 are visible, while A and B are kept hidden. The malicious nodes include themselves in the packet header, following the route discovery procedure, so the network is aware of their presence on the route.
2. Half-Open Wormhole
The malicious node M1 near the source (S) is visible, while the second end M2 is kept hidden. To tunnel the packets sent by S for D from one side to the other over the path S-M1-D, the attacker does not modify the contents of the packet but simply rebroadcasts it.
3. Closed Wormhole
The identities of all the intermediate nodes (M1, A, B, M2) on the path from S to D are kept hidden. In this scenario both source and destination believe themselves to be just one hop away from each other; thus fake neighbors are created.

Fig.1.3: Representation of Open, Half-Open and Closed Wormhole
A. Taxonomy of Wormhole Attack
Wormhole attacks can be classified, based on the implementation technique used to launch them and the number of nodes involved in establishing the wormhole, into the following types:
1. Wormhole using Packet Encapsulation
Legitimate nodes exist between two malicious nodes, and the data packets are encapsulated between the malicious nodes, so the hops traversed inside the tunnel are not counted. Hence, routing protocols that use hop count for path selection are particularly susceptible to encapsulation-based wormhole attacks.

2. Wormhole using High-quality/Out-of-band Channel
In this mode, the wormhole attack is launched over a high-quality, single-hop, out-of-band link (called a tunnel) between the malicious nodes. This tunnel can be achieved, for example, by using a direct wired link or a long-range directional wireless link.

3. Wormhole using High-power Transmission Capability
Here a single malicious node with high-power transmission capability increases its chance of being on the routes established between source and destination, without the involvement of a second malicious node. When the malicious node receives an RREQ, it broadcasts the request at a high power level; any node that hears the high-power broadcast rebroadcasts the RREQ towards the destination [11].

4. Wormhole using Packet Relay
In this attack, one or more malicious nodes relay the data packets of two distant sensor nodes to convince them that they are neighbors. This kind of attack is also called a "replay-based attack".

5. Wormhole using Protocol Distortion
In this mode, one malicious node tries to attract network traffic by distorting the routing protocol. Routing protocols based on the shortest delay, instead of the smallest hop count, are at risk of wormhole attacks using protocol distortion.
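Many of the detectors surveyed in the next section reduce to a sanity check on claimed one-hop links: a neighbor whose measured round-trip time implies a distance beyond radio range is suspect, since a wormhole tunnel relays packets between distant points. The following is a minimal, hypothetical sketch of such a check; the constants and function name are illustrative and are not taken from any surveyed protocol.

```python
# Flag suspicious "neighbors": a genuine one-hop link should show a
# round-trip time consistent with radio range, while a wormhole tunnel
# between distant nodes tends to violate this. Thresholds are illustrative.
SPEED_OF_LIGHT = 3e8        # m/s
RADIO_RANGE = 50.0          # assumed maximum one-hop distance, metres
PROCESSING_SLACK = 2e-4     # allowance for MAC/processing delay, seconds

def suspicious_neighbors(rtt_samples):
    """rtt_samples: {neighbor_id: measured round-trip time in seconds}."""
    flagged = []
    for node, rtt in rtt_samples.items():
        # distance implied by the echo time, after subtracting slack
        implied_distance = (rtt - PROCESSING_SLACK) * SPEED_OF_LIGHT / 2
        if implied_distance > RADIO_RANGE:   # farther than radio range: tunnel?
            flagged.append(node)
    return flagged
```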
LITERATURE REVIEW

[7] (2005): A lightweight countermeasure for the wormhole attack, called LITEWORP, which is particularly suitable for resource-constrained multihop wireless networks. Simulation results show that every wormhole is detected and isolated within a very short period of time, and that packet loss is lower when LITEWORP is applied.

[8] (2006): A severe attack on ad hoc and location-based network routing protocols that is particularly challenging to defend against. Presents a general mechanism, called packet leashes, for detecting and thus defending against wormhole attacks, together with a specific protocol, called TIK, that implements leashes. Topology-based wormhole detection is also discussed, and it is shown that it is impossible for these approaches to detect some wormhole topologies.

[9] (2009): This paper describes different modes and classes with an attack graph used to illustrate the sequence of events in each mode. The attack is presented as a two-phase process launched by one or several malicious nodes. To illustrate the attack's effect, simulation results for two modes of the attack are presented.

[10] (2011): A routing protocol, WHOP, for detecting wormholes of large tunnel length without the use of any extra hardware such as directional antennas or clock synchronization. WHOP uses an additional Hound packet and does not require changes to the existing AODV protocol. Simulation results show that WHOP is quite effective in detecting wormholes of large tunnel lengths.

[11] (2012): This paper argues that security emerges as a central requirement as mobile ad hoc network applications are deployed, and that the wormhole attack forms a serious threat in wireless networks: it enables an attacker with limited resources and no cryptographic material to wreak havoc, and is possible even if the attacker has not compromised any hosts and even if all communication provides authenticity and confidentiality.

[12] (2013): This paper presents simulation results based on packet reception ratio, packet drop ratio and throughput, providing a higher level of security; the routing attack on wireless sensor networks can be defended against using the MintRoute protocol.

[13] (2013): In this paper an alternative path from the source to the second hop is used, and the number of hops is calculated to detect the wormhole. The technique is localized, requires only a small overhead, and has no special requirements such as location information or accurate synchronization between nodes.

CONCLUSION
The intent of this paper is to throw light on wormhole attacks in WSNs. The paper provides a detailed description of the categories of wormhole attack and reviews studies of the wormhole attack in different scenarios.

REFERENCES:
[1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey of sensor networks", IEEE Communications, vol. 40, no. 8, pp. 102-114, 2002.
[2] Kashyap Patel and T. Manoranjitham, "Detection of Wormhole attack in wireless sensor network", International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Vol. 2, Issue 5, May 2013.
[3] C. Karlof and D. Wagner, Secure Routing in Sensor Networks: Attacks and Countermeasures, in 1st IEEE International
Workshop on Sensor Network Protocols and Applications (WSNA), 2003, pp. 113-127.
[4] Y. C. Hu, A. Perrig, and D. B. Johnson, "Packet Leashes: A Defence against Wormhole Attacks in Wireless Networks", in 22nd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), 2003, pp. 1976-1986.
[5] L. Hu and D. Evans, Using Directional Antennas to Prevent Wormhole Attacks, in Network and Distributed System Security
Symposium (NDSS), San Diego, 2004.
[6] K. Lee, H. Jeon, and D. Kim, Wormhole Detection Method based on Location in Wireless Ad-Hoc Networks, in New
Technologies, Mobility and Security: Springer Netherlands, 2007, pp. 361-372.
[7] Issa Khalil, Saurabh Bagchi, Ness B. Shroff, LITEWORP: A Lightweight Countermeasure for the Wormhole Attack in Multihop
Wireless Network Proceedings of the 2005 International Conference on Dependable Systems and Networks, 0-7695-2282-3, IEEE
2005.
[8] Yih-Chun Hu,Adrian Perrig ,David B.Johnson, Wormhole attacks in wireless networks IEEE Journal on Selected Areas in
Communications, Vol. 24, NO.2,February2006, pp. 0733-8716.
[9] Marianne Azer, Sherif El-Kassas,Magdy El-Soudani, A Full Image of the Wormhole Attacks towards introducing Complex
Wormhole Attacks in wireless Ad Hoc Networks , International Journal of Computer Science and Information Security, Vol. 1, No.
1, May 2009.

[10] Saurabh Gupta, Subrat Kar, and S. Dharmaraja, "WHOP: Wormhole Attack Detection Protocol using Hound Packet", IEEE International Conference on Innovations in Information Technology, 2011.
[11] Bintu Kadhiwala and Harsh Shah, Exploration of Wormhole Attack with its Detection and Prevention Techniques in Wireless
Ad-hoc Networks, International Conference in Recent Trends in Information Technology and Computer Science (ICRTITCS - 2012)
Proceedings published in International Journal of Computer Applications (IJCA) (0975 8887).
[12] Kashyap Patel and T. Manoranjitham, "Detection of Wormhole attack in wireless sensor network", International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Vol. 2, Issue 5, May 2013.
[13] Devendra Singh, Kushwaha Ashish Khare, J. L .Rana, Improved Trustful Routing Protocol to Detect Wormhole Attack in
MANET International Journal of Computer Applications (0975 8887), Volume 62 No.7, January 2013



















Analysis and Study of Quality Factor for Simple Fixed Beam MEMS
Resonator
Meenu Pruthi¹, Anurag Singh²

¹Research Scholar (M.Tech), ECE Department, OITM
²Asst. Professor, ECE Department, OITM
E-mail- menu.pruthi815@gmail.com
Abstract — This paper studies the quality factor of MEMS resonators by varying the material of the beam. Modeling and simulation of thermoelastic damping (TED) is an important issue in the development of actuators, MEMS resonators and filters, and the energy dissipation caused by TED strongly affects the Q factor. Here we use the materials Ge, GaAs, PolySi and single-crystal Si; of these, single-crystal Si shows the best Q factor at its eigenfrequency (6.304492e5 Hz). Modeling and simulation of the TED effect on the resonators are done using the COMSOL Multiphysics software. The effect of material properties on the Q factor is thus studied for simple fixed-fixed beam resonators.

Keywords — MEMS, NEMS, eigenfrequency analysis, simple fixed beam resonators, COMSOL, displacement.

I. INTRODUCTION

Thermoelastic damping has been identified as an important loss mechanism in MEMS resonators [1]-[4]. With the advent of microelectromechanical systems (MEMS) technology, MEMS resonators with low weight, small size, low energy consumption and high durability have been extensively utilised in various sensing and wireless communication applications such as accelerometers, gyroscopes, oscillators and filters [1].
The main advantage of MEMS resonators lies in their possible integration onto silicon-based IC platforms. Silicon MEMS resonators are positioned as potential competitors to quartz crystal resonators [5], [6]. However, to compete with the mature, well-established quartz technology, silicon MEMS resonators must first provide the same or better performance characteristics. For all these applications, it is important to design and fabricate microelectromechanical resonators with very high quality factors (Q factors), i.e. very little energy loss. The Q factor is defined as the ratio of total system energy to the dissipation that occurs through various damping mechanisms. Thermoelastic damping is considered one of the most important sources of energy dissipation, caused by the irreversible heat flow in oscillating structures at the micro scale. In this study, the Q factor under thermoelastic damping is investigated in various RF MEMS resonators, because a high quality factor directly translates to a high signal-to-noise ratio, high resolution and low power consumption, while a low Q implies greater dissipation of energy and results in reduced sensitivity, degraded spectral purity and increased power consumption [7]. It is therefore desirable to eliminate, or mitigate, as many mechanisms of dissipation as possible. Various energy dissipation mechanisms exist in microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS) [6]: air damping, squeezed-film damping, acoustic radiation from the supports of the beam (also called anchor or clamping losses), damping due to crystallographic defects (such as dislocations and grain boundaries) and thermoelastic damping [8]. Some of these sources of energy loss are considered extrinsic, in that they can be altered by changing the design or operating conditions; for example, operating the device in vacuum and designing non-intrusive supports reduce air damping and clamping losses, respectively. However, intrinsic sources of dissipation, such as thermoelastic damping, impose a strict upper limit on the attainable quality factor of a resonator.


II. THERMOELASTIC DAMPING
Zener predicted that thermoelastic losses may limit the maximum Q factor of a resonator [9]. Basically, the principle of thermoelastic damping is the following: when a mechanical structure vibrates, there are regions where compressive stress occurs and others where tensile stress occurs, alternating cyclically at the vibration frequency. Accordingly, compressed regions heat up and stretched regions cool down, so a temperature gradient is established between different regions of the system.

However, to set the mechanical system in vibration, energy must be provided, leading to a non-equilibrium state with an excess of energy. Disregarding thermoelastic damping, the vibration could persist indefinitely in an elastic body perfectly isolated from its environment. However, local temperature gradients lead to irreversible flow of heat, a dissipation mechanism that attenuates the vibration until complete rest is achieved. Heat flow through a thermal resistance results in power dissipation, which is a Q-limiting energy loss mechanism. This loss is most prominent when the period of the resonator is of the same order as the thermal time constant across the beam. From a thermodynamic standpoint, TED can be viewed as follows: the initial flexing of the beam causes the temperature profile of the beam to become more ordered, and as the beam re-establishes equilibrium this order is lost, resulting in an irrecoverable increase in entropy, which is an energy loss [10].
III. SIMPLE FIXED-FIXED TYPE BEAM RESONATORS

The resonator is a silicon beam of length 400 μm, height 12 μm, and width 20 μm, as shown in Fig. 1. The beam is fixed at both ends and vibrates in a flexural mode in the z direction (that is, along the smallest dimension). The model assumes that the vibration takes place in vacuum, so there is no transfer of heat from the free boundaries; the model also assumes that the contact boundaries are thermally insulated [8].


Figure 1: Geometry of a simple fixed-fixed type beam resonator.

A high Q value is a key requirement for a MEMS resonator: it is essential that the resonator vibrates consistently at the desired frequency and requires as little energy as possible to maintain its vibration. These features are characterized by the resonator's Q value, a measure of the sharpness of its spectrum's peak. There are several equivalent ways to define the Q value, for example

$Q = 2\pi \dfrac{W_0}{\Delta W} = \dfrac{\omega_0}{2\gamma} = \dfrac{\omega_0}{\Delta \omega}$

where $W_0$ is the total stored vibrational energy, $\Delta W$ is the energy lost per cycle, $\omega_0$ is the natural angular frequency, $\gamma$ is the damping factor (the vibration decays exponentially as $e^{-\gamma t}$), and $\Delta \omega$ is the half-power width of the spectrum.

In order to improve the resonator, the designer needs to consider all aspects that introduce damping and noise into the system. For example, resonators are usually run in vacuum to minimize the effects of air and squeeze-film damping.
For simple structures, researchers have developed analytical expressions to estimate thermoelastic damping. According to Zener [11] and [12], the Q value for a resonator with a single thermal mode can be calculated as

$Q = \dfrac{\rho C_p}{E \alpha^2 T_0} \cdot \dfrac{1 + (\omega\tau)^2}{\omega\tau}$

where $E$ is the Young's modulus, $\alpha$ is the thermal expansion coefficient, $T_0$ is the resonator temperature at rest, $\rho$ is the density, $C_p$ is the heat capacity of the material, $\omega$ is the vibration angular frequency, and $\tau$ is the thermal relaxation time of the system. It is easy to see that, to obtain a good Q value, the system must be designed so that $\omega$ is as far from $1/\tau$ as possible.
The natural frequency of a beam clamped at both ends can be calculated as [1]

$f_0 = \dfrac{a_0^2 h}{4\pi L^2}\sqrt{\dfrac{E}{3\rho}}$

where $a_0$ equals 4.730, $h$ and $L$ are the thickness and length of the beam, respectively, and $E$ and $\rho$ are material parameters as above.
The thermal relaxation time of the beam is given by

$\tau = \dfrac{\rho C_p h^2}{\pi^2 \kappa}$

where $\kappa$ is the thermal conductivity and the other parameters are as above.
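The three formulas can be checked numerically. The sketch below uses the beam geometry given above and the single-crystal Si entries of Table I; the heat capacity, thermal conductivity and rest temperature are typical handbook values assumed here because the paper's table omits them, so the outputs are indicative only.

```python
# Zener's single-mode estimate of the thermoelastic Q for the clamped-clamped
# beam above. Si values for Cp, kappa and T0 are assumed handbook figures.
import math

L, h = 400e-6, 12e-6          # beam length and thickness (m)
E, rho = 1.57e11, 2330.0      # Young's modulus (Pa), density (kg/m^3)
alpha = 2.6e-6                # thermal expansion coefficient (1/K)
Cp, kappa, T0 = 700.0, 130.0, 300.0   # J/(kg K), W/(m K), K -- assumed

a0 = 4.730                    # first clamped-clamped mode constant
f0 = (a0**2 * h) / (4 * math.pi * L**2) * math.sqrt(E / (3 * rho))
omega = 2 * math.pi * f0
tau = rho * Cp * h**2 / (math.pi**2 * kappa)
Q = (rho * Cp) / (E * alpha**2 * T0) * (1 + (omega * tau)**2) / (omega * tau)

# Prints roughly f0 = 633 kHz and Q near 1e4, in line with Table I below.
print(f"f0 = {f0/1e3:.1f} kHz, tau = {tau*1e6:.3f} us, Q = {Q:.0f}")
```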
To gain information about the quality of the resonator, we need its natural frequency and Q value. To find them, run an eigenfrequency analysis to obtain the eigenvalues of the system; for a system with damping, the eigenvalue contains information about both the natural frequency and the Q value [6]. Fig. 2 shows the variation of the TED factor with eigenfrequency. From the analysis it is clear that at some particular frequency the internal friction (TED factor) is maximum, and this corresponds to the maximum dissipation of the resonator.



Figure 2: TED factor versus eigenfrequency of a simple fixed-fixed beam resonator.

The Q factor of a simple fixed-fixed type resonator is highly material dependent. It depends on parameters such as Young's modulus (E), the thermal expansion coefficient (α), the density of the material (ρ) and Poisson's ratio (ν). The variation of the Q factor with thermoelastic damping (TED) is summarized in Table I.

Property          Ge          GaAs        PolySi      Single Crystal Si
E (Pa)            1.03E+11    8.59E+10    1.60E+11    1.57E+11
ν                 0.26        0.31        0.22        0.3
α (1/K)           5.90E-06    5.70E-06    2.60E-06    2.60E-06
ρ (kg/m³)         5323        5316        2320        2330
Eigenfrequency    3.32E+03    8.45E+05    6.39E+05    6.30E+05
Q with TED        9.25E+05    4116.153    1.01E+04    10169.89

Table I: Variation of the Q factor (with TED effect) across materials.
It is seen that, compared to Ge, GaAs and PolySi, single-crystal Si provides a better Q value and less thermoelastic damping; the performance of the single-crystal Si resonator in terms of Q factor with TED is better than the others.
IV. SIMULATION RESULTS
Various materials show variation in eigenfrequency with temperature, so the quality of the material must be analysed for the proper design of a MEMS resonator. The following simulations were performed:


Figure 3: First eigenmode and temperature distribution of the Ge material.
Figure 3 shows the variation in temperature of Ge according to the eigenfrequency; the quality factor analysed using TED is 9.245036e5.

Figure 4: First eigenmode and temperature distribution of the GaAs material.
Figure 4 shows the variation in temperature of GaAs according to the eigenfrequency; the quality factor analysed using TED is 4116.152562.

Figure 5: First eigenmode and temperature distribution of the PolySi material.
Figure 5 shows the variation in temperature of PolySi according to the eigenfrequency; the quality factor analysed using TED is 10076.460279.

Figure 6: Simulated output of a simple fixed-fixed beam resonator (2D), showing the first eigenmode and temperature distribution (eigenfrequency = 630.449 kHz).
Figure 6 shows the variation in temperature of single-crystal Si according to the eigenfrequency; the quality factor analysed using TED is 10169.891942.
V. CONCLUSION
We conclude that all the materials, i.e. Ge, GaAs, PolySi and single-crystal Si, show changes in quality factor as the eigenfrequency changes. Using the TED factor, the quality of each material was analysed. Single-crystal Si is an interesting material because of its high Q factor, i.e. 10170; PolySi shows a better Q value than GaAs and Ge but still falls slightly short of single-crystal Si. Due to its high quality factor, single-crystal Si is used in tunable piezoelectric actuators. The analysis was done using the high-end software COMSOL Multiphysics. One important goal is to be able to predict the Q factor of the structure and to have accurate design guidelines to reduce the energy losses.
REFERENCES:
[1] R. Lifshitz and M. L. Roukes, "Thermoelastic damping in micro- and nanomechanical systems", Physical Review B, vol. 61, no. 8, Feb. 2000, pp. 5600-5609.
[2] T.V. Roszhart, The effect of thermoelastic internal friction on the Q of micromachined silicon resonator ,Tech.Dig.Solid-
State Sens Actutaor Workshop,Hilton Head, SC,1990,13-16.
[3] Srikar Vengallatore, Analysis of thermoelastic damping in laminated composite micromechanical beam
resonator,J.Micromech.Microeng.(2005), 2398-2404.
[4] B. Le Foulgoc., Highly decoupled single-crystal silicon resonators: an approach for the intrinsic quality factor, J.
Micromech. Microeng. 16 (2006), S45-S53.
[5] Weinberg, M.S.; Cunningham, B.T.; Clapp,C.W. Modeling flexural plate wave devices,Journal of Microelectromechanical
Systems , vo1.9,no.3 , p. 370-9 Publisher: IEEE , Sept. 2000.
[6] Amy Duwel, Rob N. Candler, Thomas W. Kenny, and Mathew Varghese Engineering MEMS Resonators With Low
Thermoelastic Damping Journal of Mcroelectromechanical systems.Vol. 15, No.6, December 2006.
[7] Sairam Prabhakar and Srikar Vengallatore, Thermoelastic damping in Hollow and Slotted Microresonators Journal of
Microelectromechanical systems, Vol. 18, No. 3, June 2009.
[8] Jinling Yang, Takahito Ono, and Masayoshi Esashi Energy Dissipation in Submicrometer Thick Single-Crystal Silicon
Cantilevers Journal of Microelectromechanical systems, Vol. 11, N0.6 ,December 2002.
[9] C. Zener, "Internal Friction in Solids, I: Theory of Internal Friction in Reeds", Phys. Rev., 52, pp. 230-235, 1937.
[10] J. Yan, R. Wood, S. Avadhanula, M. Sitti, and R. Fearing, "Towards flapping wing control for a micromechanical flying insect", in Proc. IEEE Int. Conf. Robot. Autom., 2001, vol. 4, pp. 3901-3908.
[11] A. Duwel, R. N. Candler, T. W. Kenny, and M. Varghese, Journal of Microelectromechanical Systems, vol. 15, no. 6, pp. 1437-1445, 2006.
[12] S. Gupta, Estimation of Thermo-Elastic Dissipation in MEMS, MSc. Thesis, Dept. Mechanical Engineering, Indian Institute
of Science, Bangalore, July 2004

Image Encryption using Different Techniques for High Security
Transmission over a Network
Mohammad Sajid Qamruddin Khizrai¹, Prof. S. T. Bodkhe²

¹Research Scholar (PG), Priyadarshini Institute of Engineering & Technology, Dept. of Computer Science and Engg, Nagpur, India
²Professor, Priyadarshini Institute of Engineering & Technology, Dept. of Computer Science and Engg, Nagpur, India
E-mail ID- sajid4u0023@gmail.com

1. ABSTRACT
A digital image is a collection of pixels with different intensity values; each image is an n×m array of pixels (where n and m are the numbers of rows and columns). When we transfer a digital image from source to destination through a network, it needs to be encrypted at the source side and decrypted at the destination side. Encryption is the process of hiding the information while it is transferred through a network, and decryption is the process of extracting the information from the encrypted data; for this, we need encryption and decryption algorithms.
Security of data and information is very important in today's world, and everybody wants a secure network for transmitting information. Even in a well-secured network there is a chance of data being hacked: most banks and other organizations where data security is important are well secured, yet online fraud still occurs. So we need more secure data in a high-security environment. Generally we work in a highly secure environment and data is also secured with an encryption and decryption technique, but such techniques use only one encryption and decryption key.
Keywords — Image encryption with high security, image security, high-security encryption and decryption
2. INTRODUCTION
As the world changes, technology is also changing rapidly. With the advancement of network technology, large amounts of multimedia information are transmitted over the Internet conveniently. Various confidential data, such as government, military and banking data, space and geographical images taken from satellites, and commercially important documents, are transmitted over the Internet. When using secret information, we need more secure information-hiding techniques.

In our new method, we secure the information sixteen (16) times, or in general 2^n times (where n relates to the number of split parts), instead of once in a single transmission; more split blocks mean more secure information.
3. RELATED WORKS.

Information security has been practised since ancient times, with different people using different techniques to secure their data. The following are some techniques used for securing images, from ancient times to the present:
A. Steganography
B. Water Marking Technique
C. Visual Cryptography
D. Without Sharing Keys Techniques




A) Steganography

The word steganography comes from the Greek "steganos", meaning covered or secret, and "graphy", meaning writing or drawing; steganography is, literally, covered writing. The main idea of covering information, or steganography, is to enable secure communication in a completely undetectable manner and to avoid drawing suspicion to the transmission of hidden data [4]. During the transmission process, these methods change structure and features so as not to be identifiable by the human eye. Digital videos, images, sound files and other computer files that contain perceptually important information can be used as covers or carriers to hide secret messages; after embedding a message into the cover image, a so-called stego image is obtained.
In [2], security, capacity and robustness are the three aspects affecting steganography and its usefulness. Capacity refers to the amount of information that can be hidden in the cover medium; security relates to an eavesdropper's inability to detect hidden information; and robustness is the amount of modification the stego medium can withstand before an adversary can destroy the hidden information. The concept of mosaic images in [1] was created perfectly and has been widely used. Four types of mosaic images, namely crystallization mosaic, ancient mosaic, photo mosaic and puzzle image mosaic, are proposed in [2]. In the first two types, the source image is split into tiles which are then reconstructed by painting; they are named tile images. The other two types involve obtaining a target image and, with the help of a database, a cover image; they may be called multi-picture mosaics.

B) Water Marking Technique

Watermarking is another technique used to hide data in a digital image. Digital watermarking is a process of embedding (hiding) marks, typically invisible, that can be extracted only by the owners for authentication. This is the technology used in [15] so that the image cannot be misused by unauthorized users. It allows embedding without distortion, keeping much better stego-image quality, and guarantees efficient and reliable retrieval of the secret file in a secure manner. Digital watermarking finds wide application in security, authentication, copyright protection and all walks of Internet applications. There has been effective growth in developing techniques to discourage the unauthorized duplication of applications and data, and the watermarking technique is one that is feasible and designed to protect applications and related data. The term "cover" describes the original message in which we hide our secret message, data file or image file. Invisible watermarking and visible watermarking are the two important types of this technology. The main objective is to reduce unauthorized duplication of applications and data and to provide copyright protection, security and authentication to all walks of Internet applications.
C) Visual Cryptography
Visual cryptography is a special encryption technique used to hide information in images in such a way that the encrypted image can be decrypted by the human eye if the correct key image is used. The technique was proposed by Naor and Shamir in 1994 [5]. It uses two transparent images: one contains the secret information and the other contains random pixels. It is not possible to get the secret information from either image alone; both layers (transparent images) are required to recover the actual information. The easiest way to implement visual cryptography is to print the two layers onto transparent sheets.
D) Without sharing Keys Techniques
The authors in [11] secure an image for transmission without sharing the encryption key, at the cost of extra transmissions for a single image. In [11] the image is encrypted with a private key and sent, without sharing the key, to the receiver; after receiving the encrypted image, the receiver encrypts it again with its own key and sends it back to the first sender; the first sender removes the first encryption key and sends the image again to the opponent, who already holds its own key and can now finally decrypt the image. Thus different people apply different techniques for securing their information.
4. Proposed Research Methodology
4.1) Encryption Process

In the encryption process of this research methodology, we read an image (A), Fig. (a), and, using a suitable command or algorithm, divide the image into J×J parts, i.e. (2×2, 4×4) parts. Each part of the image is treated as a single image: Splitted Image 1, Splitted Image 2, Splitted Image 3, ..., Splitted Image J.


Fig(a) (Original Image)

Fig(b) (Splitted Image)
The output of the above, i.e. Fig. (b), is Splitted Image 1, Splitted Image 2, ..., Splitted Image J, and each part of the image is treated as a single image. Using a different encryption algorithm for each part, we encrypt every image; the encrypted images are Encrypted Part 1, Encrypted Part 2, Encrypted Part 3, ..., Encrypted Part J, shown in Fig. (c).


Fig(c) (Splitted & Encrypted Image)

Fig (d) (Combined Encrypted Image)
After that we have two options:

I. Transfer all the encrypted sub-images (Encrypted Part 1, Encrypted Part 2, ..., Encrypted Part J), shown in Fig. (c), to the receiver side.

OR

II. Merge (combine) all the encrypted images (Encrypted Part 1, Encrypted Part 2, ..., Encrypted Part J) into a single encrypted image, Fig. (d), which we call image (A1), for transfer.

Now we transfer the image (A1) from one location (source) to another location (destination).
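A minimal sketch of the split-encrypt-merge idea (option II): the paper leaves the per-block cipher open, so a per-block XOR keystream stands in here for whichever sixteen algorithms would actually be used; the block count, key values and image size are illustrative.

```python
# Split an image into J x J blocks, encrypt each block with its own key,
# and merge back into one "encrypted image". XOR is its own inverse, so
# the same routine also decrypts when given the same keys.
import numpy as np

J = 4  # 4 x 4 split -> 16 blocks, hence 16 independent keys

def blocks(img, J):
    h, w = img.shape[0] // J, img.shape[1] // J
    return [(r, c, img[r*h:(r+1)*h, c*w:(c+1)*w]) for r in range(J) for c in range(J)]

def process(img, keys):
    out = img.copy()
    for (r, c, blk), key in zip(blocks(img, J), keys):
        # per-block keystream seeded by that block's key
        stream = np.random.default_rng(key).integers(0, 256, blk.shape, dtype=np.uint8)
        out[r*blk.shape[0]:(r+1)*blk.shape[0],
            c*blk.shape[1]:(c+1)*blk.shape[1]] = blk ^ stream
    return out

image = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
keys = list(range(100, 100 + J * J))        # 16 block keys (illustrative)
cipher = process(image, keys)               # combined encrypted image, Fig. (d)
assert np.array_equal(process(cipher, keys), image)   # decryption, Fig. (g)
```

An attacker who recovers one key exposes only one of the sixteen blocks, which is the security argument the comparison below makes.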




4.2) Decryption Process

Here we receive the encrypted parts from the source side through option (I), decrypt each part of the image as shown in Fig. (f), and construct a single image as shown in Fig. (g).

OR

We receive the image from option (II) and divide it into its sixteen parts (Encrypted Part 1, Encrypted Part 2, ..., Encrypted Part J), which are still in encrypted form, as shown in Fig. (e). We then apply the decryption algorithm to each of the sixteen encrypted parts, obtaining the decrypted parts (Decrypted Part 1, Decrypted Part 2, ..., Decrypted Part J) shown in Fig. (f).

Fig(e) (Splitted & Encrypted Image)

Fig(f) (Splitted & decrypted Image)

Now we combine all of the decrypted parts into a single image, shown in Fig. (g), i.e. the original image.




Fig(g) ( Original Image )


5. DIFFERENCE BETWEEN
i) Existing Encryption Method and
ii) Proposed Encryption Method

Existing Encryption Method:
1) It is encrypted using a single key.
2) It is less secure, as it is encrypted with a single key.
3) It takes less time for encryption and decryption.
4) If it is hacked, then after N iterations with different keys (if a key succeeds) the attacker is able to view the whole image.

Proposed Encryption Method:
1) It is encrypted using sixteen keys.
2) It is more secure, as it is encrypted sixteen times, unlike any other encryption algorithm.
3) It takes more time for encryption and decryption, but is more secure.
4) If it is hacked, then after N iterations with different keys (if a key succeeds) the attacker is able to view only a single part of the image.
6 ACKNOWLEDGMENT
I acknowledge the sincere and long-lasting support of my project guide Prof. S. T. Bodkhe and the other professors of the Computer Science Department, who gave me healthy suggestions and helpful discussions.

7 CONCLUSION
Thus we have increased the security of an image for transmission over a network up to sixteen (16) times, or in general 2^n times (where n relates to the number of split parts), instead of once in a single transmission; more split blocks mean more secure information.
8 FUTURE SCOPE
Our future work will mainly focus on the study and analysis of further security improvements. Security can be increased by splitting the image into more parts and applying different algorithms within a single image. Applying more algorithms takes more time for encryption and decryption, but it is more secure than the present method; one problem, however, is that different algorithms with different key sizes can cause difficulties.
REFERENCES:

[1] R. Silvers and M. Hawley, Photomosaics. New York: Henry Holt, 1997.
[2] S. Battiato, G. M. Farinella, and G. Gallo, "Digital mosaic framework: An overview", Eurograph. Comput. Graph. Forum, Vol. 26, no. 4, pp. 794-812, Dec. 2007.
[3] Y. Dobashi, T. Haga, H. Johan, and T. Nishita, "A method for creating mosaic images using Voronoi diagrams", in Proc. Eurographics, Saarbrucken, Germany, Sep. 2002, pp. 341-348.
[4] John Blesswin, Rema, and Jenifer Jose, 978-1-4244-9799-7/11/$26.00 © 2011 IEEE.
[5] Moni Naor and Adi Shamir, "Visual Cryptography", EUROCRYPT, 1994.

[6] Jonathan Weir and WeiQi Yan, "Resolution Variant Visual Cryptography for Street View of Google Maps", Queen's University Belfast, Belfast, BT7 1NN.
[7] Koo Kang, in IEEE Transactions on Image Processing, vol. 20, no. 1, January 2011.
[8] Jayanta Kumar Pal, J. K. Mandal, and Kousik Dasgupta, in (IJNSA), Vol. 2, No. 4, October 2010.
[9] Debasish Jena and Sanjay Kumar Jena, 978-0-7695-3516-6/08/$25.00 © 2008 IEEE, DOI 10.1109/ICACC.2009.109.
[10] Zhi Zhou, "Halftone Visual Cryptography", IEEE Transactions on Image Processing, vol. 15, no. 8, August 2006, p. 2441.
[11] Abdul Razzaque and Narendra Thakur, International Journal of Engineering Research & Technology (IJERT), Vol. 1, Issue 5, July 2012, ISSN: 2278-0181.
[12] N. Madhumidha and Dr. S. Chandramathi, Bonfring International Journal of Advances in Image Processing, Vol. 2, Special Issue 1, Part 2, February 2012, p. 63.
[13] E. Myodo, S. Sakazawa, and Y. Takishima, "Visual cryptography based on void-and-cluster halftoning technique", in Proc. IEEE Int. Conf. Image Process., 2006, pp. 97-100.
[14] Tsung-Yuan Liu and Wen-Hsiang Tsai, IEEE Transactions on Image Processing, vol. 19, no. 5, May 2010.
[15] Ahmad Salameh Abusukhon, "Block Cipher Encryption For Text-To-Image Algorithm", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 3, 2013, pp. 50-59, ISSN Print: 0976-6367, ISSN Online: 0976-6375.















Neighboring Optimal Solution for Fuzzy Travelling Salesman Problem
D. Stephen Dingar¹, K. Thiripura Sundari²

¹Research Scholar (PG), Research Department of Mathematics, TBML College, Porayar, India
²Research Scholar (PG), Department of Mathematics, Poompuhar College, Melaiyur, India
E-mail- Ksundari_1982@yahoo.com

Abstract - A new method is introduced to find the fuzzy optimal solution of fuzzy travelling salesman problems. In this method, intuitionistic trapezoidal fuzzy numbers are used to find the fuzzy optimal solution. The proposed method also provides, for some fuzzy salesman problems, solutions very near the optimum, called fuzzy neighbouring optimal solutions. A relevant numerical example is included.

Key words - Intuitionistic fuzzy number, intuitionistic trapezoidal fuzzy number, fuzzy salesman algorithm, fuzzy optimal solution

1. INTRODUCTION
The travelling salesman problem is a well-known NP-hard problem in combinatorial optimization. In its ordinary form, a map of cities is given to the salesman, who has to visit all the cities exactly once and return to the starting point, completing the tour in such a way that its length is the shortest among all possible tours for this map. The data consist of weights assigned to the edges of a finite complete graph, and the objective is to find a cycle passing through all the vertices of the graph with minimum total weight. There are different approaches for solving the travelling salesman problem, and almost every new approach for solving engineering and optimization problems has been tried on it. Many methods have been developed, consisting of heuristic methods and population-based optimization algorithms. Exact methods like cutting planes and branch and bound can optimally solve only small problems, whereas heuristic methods such as 2-opt, 3-opt, Markov chains, simulated annealing and tabu search are good for large problems. Population-based optimization algorithms are a kind of nature-inspired optimization algorithm; the natural systems and creatures working and developing in nature are an interesting and valuable source of inspiration for designing and inventing new systems and algorithms in different fields of science and technology. Particle Swarm Optimization, Neural Networks, Evolutionary Computation, Ant Systems, etc. are a few of the problem-solving techniques inspired by observing nature. Travelling salesman problems in crisp and fuzzy environments have received great attention in recent years [1-11]. With the use of LR fuzzy numbers, the computational effort required to solve fuzzy assignment problems and fuzzy travelling salesman problems is considerably reduced [12].

In this paper, we introduce a new method for finding a fuzzy optimal solution, as well as alternative solutions very near to the fuzzy optimal solution, for a given fuzzy travelling salesman problem. Section 2 recalls the definition of the intuitionistic trapezoidal fuzzy number and some operations; Section 3 presents the fuzzy travelling salesman problem and algorithm; Section 4 gives a numerical example; and Section 5 presents the conclusion.

2. PRELIMINARIES

In this section, some basic definitions and arithmetic operations are reviewed.


2.1. INTUITIONISTIC FUZZY NUMBER

Let a set $X$ be fixed. An IFS $\tilde{A}$ in $X$ is an object of the form $\tilde{A} = \{(x, \mu_{\tilde{A}}(x), \nu_{\tilde{A}}(x)) : x \in X\}$, where $\mu_{\tilde{A}} : X \to [0,1]$ and $\nu_{\tilde{A}} : X \to [0,1]$ define the degree of membership and the degree of non-membership, respectively, of the element $x \in X$ to the set $\tilde{A}$, which is a subset of $X$; for every element $x \in X$, $0 \le \mu_{\tilde{A}}(x) + \nu_{\tilde{A}}(x) \le 1$.
2.2. DEFINITION
An IFS $\tilde{A}$, defined on the universal set of real numbers $\mathbb{R}$, is said to be a generalized IFN if its membership and non-membership functions have the following characteristics:
(i) $\mu_{\tilde{A}}(x) : \mathbb{R} \to [0, 1]$ is continuous.
(ii) $\mu_{\tilde{A}}(x) = 0$ for all $x \in (-\infty, a_1] \cup [a_4, \infty)$.
(iii) $\mu_{\tilde{A}}(x)$ is strictly increasing on $[a_1, a_2]$ and strictly decreasing on $[a_3, a_4]$.
(iv) $\mu_{\tilde{A}}(x) = w_1$ for all $x \in [a_2, a_3]$.
(v) $\nu_{\tilde{A}}(x) : \mathbb{R} \to [0, 1]$ is continuous.
(vi) $\nu_{\tilde{A}}(x) = w_2$ for all $x \in [b_2, b_3]$.
(vii) $\nu_{\tilde{A}}(x)$ is strictly decreasing on $[b_1, b_2]$ and strictly increasing on $[b_3, b_4]$.
(viii) $\nu_{\tilde{A}}(x) = w_1$ for all $x \in (-\infty, b_1] \cup [b_4, \infty)$, where $w = w_1 + w_2$ and $0 < w \le 1$.
2.3. DEFINITION
A generalized intuitionistic fuzzy number $\tilde{A}$ is said to be a generalized trapezoidal intuitionistic fuzzy number with parameters $b_1 \le a_1 \le b_2 \le a_2 \le a_3 \le b_3 \le a_4 \le b_4$, denoted by $\tilde{A} = (b_1, a_1, b_2, a_2, a_3, b_3, a_4, b_4; w_1, w_2)$, if its membership and non-membership functions are given by

$$\mu_{\tilde{A}}(x) = \begin{cases} \frac{w_1 (x - a_1)}{a_2 - a_1}, & a_1 \le x \le a_2 \\ w_1, & a_2 \le x \le a_3 \\ \frac{w_1 (a_4 - x)}{a_4 - a_3}, & a_3 \le x \le a_4 \\ 0, & \text{otherwise} \end{cases}$$

and

$$\nu_{\tilde{A}}(x) = \begin{cases} \frac{w_2 (b_2 - x)}{b_2 - b_1}, & b_1 \le x \le b_2 \\ w_2, & b_2 \le x \le b_3 \\ \frac{w_2 (x - b_3)}{b_4 - b_3}, & b_3 \le x \le b_4 \\ 1, & \text{otherwise} \end{cases}$$

The generalized trapezoidal intuitionistic fuzzy number is denoted by $\tilde{A} = (b_1, a_1, b_2, a_2, a_3, b_3, a_4, b_4; w_1, w_2)$. Fig. 1 shows the membership and non-membership functions of the GITrFN.
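As an illustration, a small Python sketch evaluating the membership function above (parameter order and the height $w_1$ follow this section's notation; the helper name is ours):

```python
def mu(x, a1, a2, a3, a4, w1):
    """Membership degree of x in a generalized trapezoidal IFN."""
    if a1 <= x <= a2:
        return w1 * (x - a1) / (a2 - a1)   # rising edge
    if a2 < x <= a3:
        return w1                          # plateau at height w1
    if a3 < x <= a4:
        return w1 * (a4 - x) / (a4 - a3)   # falling edge
    return 0.0                             # outside the support
```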


2.4 DEFINITION
We define a ranking function $R : F(\mathbb{R}) \to \mathbb{R}$ which maps each fuzzy number into the real line, where $F(\mathbb{R})$ represents the set of all intuitionistic trapezoidal fuzzy numbers. If $R$ is any linear ranking function, then

$$R(\tilde{A}) = \frac{b_1 + a_1 + b_2 + a_2 + a_3 + b_3 + a_4 + b_4}{8}.$$
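As an illustration, a minimal Python sketch of this ranking function (the 8-tuple ordering follows the notation above; the helper name is ours):

```python
def rank(a):
    """Linear ranking R of an intuitionistic trapezoidal fuzzy number,
    given as the 8-tuple (b1, a1, b2, a2, a3, b3, a4, b4)."""
    return sum(a) / 8.0

# Example: the cost entry (-3, -1, 0, 2, 3, 4, 5, 6) from Section 4 ranks to 2.
print(rank((-3, -1, 0, 2, 3, 4, 5, 6)))  # 2.0
```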
2.5 ARITHMETIC OPERATIONS
In this section, arithmetic operations between two intuitionistic trapezoidal fuzzy numbers, defined on the universal set of real numbers $\mathbb{R}$, are reviewed. Let $\tilde{A} = (b_1, a_1, b_2, a_2, a_3, b_3, a_4, b_4)$ and $\tilde{B} = (f_1, e_1, f_2, e_2, e_3, f_3, e_4, f_4)$ be intuitionistic trapezoidal fuzzy numbers. The operations are as follows:

- Image: $-\tilde{A} = (-b_4, -a_4, -b_3, -a_3, -b_2, -a_2, -b_1, -a_1)$.
- $\tilde{A} + \tilde{B} = (b_1+f_1,\ a_1+e_1,\ b_2+f_2,\ a_2+e_2,\ a_3+e_3,\ b_3+f_3,\ a_4+e_4,\ b_4+f_4)$.
- $\tilde{A} - \tilde{B} = (b_1-f_4,\ a_1-e_4,\ b_2-f_3,\ a_2-e_3,\ a_3-e_2,\ b_3-f_2,\ a_4-e_1,\ b_4-f_1)$.
- If $\lambda$ is any scalar, then $\lambda\tilde{A} = (\lambda b_1, \lambda a_1, \lambda b_2, \lambda a_2, \lambda a_3, \lambda b_3, \lambda a_4, \lambda b_4)$ for $\lambda > 0$, and $\lambda\tilde{A} = (\lambda b_4, \lambda a_4, \lambda b_3, \lambda a_3, \lambda b_2, \lambda a_2, \lambda b_1, \lambda a_1)$ for $\lambda < 0$.
- $\tilde{A} \otimes \tilde{B} = \lambda\tilde{A}$, applying the scalar multiplication rule above (which distinguishes $\lambda \ge 0$ and $\lambda < 0$), where $\lambda = R(\tilde{B}) = (f_1 + e_1 + f_2 + e_2 + e_3 + f_3 + e_4 + f_4)/8$.
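A minimal Python sketch of the image, addition and scalar-multiplication rules above (tuples ordered as $(b_1, a_1, b_2, a_2, a_3, b_3, a_4, b_4)$; helper names are ours):

```python
def image(a):
    """Image -A~: negate every component and reverse the 8-tuple."""
    return tuple(-x for x in reversed(a))

def add(a, b):
    """Component-wise addition of two intuitionistic trapezoidal fuzzy numbers."""
    return tuple(x + y for x, y in zip(a, b))

def scale(lam, a):
    """Scalar multiplication; a negative scalar also reverses the tuple."""
    return tuple(lam * x for x in (a if lam > 0 else reversed(a)))
```

Subtraction then follows as `add(a, image(b))`, which reproduces the rule stated above.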

3. FUZZY TRAVELLING SALESMAN PROBLEMS
The fuzzy travelling salesman problem is very similar to the fuzzy assignment problem except that, in the former, there is an additional restriction. Suppose a fuzzy salesman has to visit n cities. He wishes to start from a particular city, visit each city once, and then return to his starting point. The objective is to select the sequence in which the cities are visited in such a way that his total fuzzy travelling time is minimized. Since the salesman has to visit all n cities, the fuzzy optimal solution remains independent of the selection of the starting point.
The mathematical form of the fuzzy travelling salesman problem is given below:

Minimize $\tilde{Z} = \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} \tilde{c}_{ij}\, x_{ijk}$

subject to

$\sum_{i=1}^{n} \sum_{j=1}^{n} x_{ijk} = 1, \quad k = 1, 2, \ldots, n$

$\sum_{j=1}^{n} \sum_{k=1}^{n} x_{ijk} = 1, \quad i = 1, 2, \ldots, n$

$\sum_{i=1}^{n} \sum_{k=1}^{n} x_{ijk} = 1, \quad j = 1, 2, \ldots, n$

$\sum_{i=1}^{n} x_{ijk} = \sum_{m=1}^{n} x_{jm(k+1)}$ for all $j$ and $k$ (a salesman who arrives at city $j$ at step $k$ leaves it at step $k+1$),

$x_{ijk} = \begin{cases} 1, & \text{if the salesman travels from city } i \text{ to city } j \text{ at step } k \\ 0, & \text{otherwise} \end{cases}$

where $i$, $j$ and $k$ are integers that vary between 1 and $n$.
A fuzzy assignment in a row is said to be a minimum fuzzy assignment if the fuzzy cost of the assignment is minimum in that row.
A tour of a fuzzy travelling salesman problem is said to be a minimum tour if it contains one or more minimum fuzzy assignments.
3.1 ALGORITHM
Step 1: Find the minimum assignments for each row in the fuzzy cost matrix, below and above the leading diagonal elements.
Step 2: Find all possible minimum tours and their fuzzy costs.
Step 3: Find the minimum of all the fuzzy costs of the possible minimum tours, say $\tilde{Z}$.
Step 4: The tour corresponding to $\tilde{Z}$ is the fuzzy optimal tour and $\tilde{Z}$ is the fuzzy optimal value of the tour.
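A minimal Python sketch of Steps 2 to 4 (assuming the `rank` and `add` helpers defined in Section 2; `cost` is a dictionary mapping ordered city pairs to fuzzy cost 8-tuples, and plain enumeration of all cycles stands in for the row-minimum screening of Step 1):

```python
from itertools import permutations

def best_tours(cities, cost):
    """Enumerate closed tours from the first city, add the fuzzy edge costs,
    and return (tour, fuzzy cost, rank) sorted by rank: the head is the
    fuzzy optimal tour, the next entries are its neighbouring solutions."""
    start, rest = cities[0], list(cities[1:])
    results = []
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)
        z = (0,) * 8                        # fuzzy zero
        for u, v in zip(tour, tour[1:]):
            z = add(z, cost[u, v])          # fuzzy addition of edge costs
        results.append((tour, z, rank(z)))
    return sorted(results, key=lambda r: r[2])
```

On the cost matrix of Section 4 this enumeration reproduces the ranks 16, 19 and 23 reported below.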
4. EXAMPLE
Consider the following fuzzy travelling salesman problem, whose fuzzy cost matrix is given below; the objective is to minimize the fuzzy cost of the cycle.
A B C D
A - (-3,-1,0,2,3,4,5,6) (1,2,3,4,6,7,8,9) (-10,-6,5,6,10,15,17,19)
B (-3,-1,0,2,3,4,5,6) - (-3,0,2,3,4,5,6,7) (-6,4,6,8,10,12,14,16)
C (1,2,3,4,6,7,8,9) (-3,0,2,3,4,5,6,7) - (0,1,2,3,5,6,7,8)
D (-10,-6,5,6,10,15,17,19) (-6,4,6,8,10,12,14,16) (0,1,2,3,5,6,7,8) -

The minimum fuzzy costs in each row and their elements are given below:

$R(\tilde{c}_{12}) = 2$, $R(\tilde{c}_{13}) = 5$, $R(\tilde{c}_{14}) = 7$; $R(\tilde{c}_{21}) = 2$, $R(\tilde{c}_{23}) = 3$, $R(\tilde{c}_{24}) = 8$; $R(\tilde{c}_{31}) = 5$, $R(\tilde{c}_{32}) = 3$, $R(\tilde{c}_{34}) = 4$; $R(\tilde{c}_{41}) = 7$, $R(\tilde{c}_{42}) = 8$, $R(\tilde{c}_{43}) = 4$.

1st row: $\tilde{c}_{12}$ : A→B; 2nd row: $\tilde{c}_{23}$ : B→C; 3rd row: $\tilde{c}_{34}$ : C→D.
All possible cycles which contain one or more minimum elements are given below:

Cycle 1: A→B, B→C, C→D, D→A
Cycle 2: A→B, B→D, D→C, C→A
Cycle 3: A→C, C→B, B→D, D→A
Cycle 4: A→C, C→D, D→B, B→A
Cycle 5: A→D, D→C, C→B, B→A
Cycle 6: A→D, D→B, B→C, C→A

The fuzzy cost $\tilde{Z}$ of each of the minimum tours, together with its rank $R(\tilde{Z})$, is given below:

Cycle | Tour | $\tilde{Z}$ | $R(\tilde{Z})$
1 | A→B→C→D→A | (-16, -6, 9, 14, 22, 30, 35, 40) | 16
2 | A→B→D→C→A | (-8, 6, 11, 17, 24, 29, 34, 39) | 19
3 | A→C→B→D→A | (-18, 0, 16, 21, 30, 39, 45, 51) | 23
4 | A→C→D→B→A | (-8, 6, 11, 17, 24, 29, 34, 39) | 19
5 | A→D→C→B→A | (-16, -6, 9, 14, 22, 30, 35, 40) | 16
6 | A→D→B→C→A | (-18, 0, 16, 21, 30, 39, 45, 51) | 23

The best tours are cycles 1 (A→B→C→D→A) and 5 (A→D→C→B→A); the minimum total distance travelled is 16.
The satisfactory (neighbouring) tours are cycles 2 and 4; the total distance travelled is 19.
The worst tours are cycles 3 and 6; the total distance travelled is 23.
5. CONCLUSION
Using the proposed method, we can solve a fuzzy travelling salesman problem. The proposed method is very easy to understand and apply, and it provides not only a fuzzy optimal solution for the problem but also a list of alternative solutions which are very near to the fuzzy optimal solution of the problem.

REFERENCES:
[1] Andreae, T. 2001. On the travelling salesman problem restricted to inputs satisfying a relaxed triangle
inequality.Networks, 38: 59-67.
[2] Blaser, M., Manthey, B., and Sgall, J.2006. An improved approximation algorithm for the asymmetric TSP
with strengthened triangle inequality. Journal of Discrete Algorithms, 4: 623-632.
[3] Bockenhauer, H. J., Hromkovic, J., Klasing, R., Seibert, S., and Unger, W. 2002. Towards the notion of stability of approximation for hard optimization tasks and the travelling salesman problem. Theoretical Computer Science, 285: 3-24.
[4] Chandran, L. S. and Ram, L. S. 2007. On the relationship between ATSP and the cycle cover problem.
Theoretical Computer Science, 370: 218-228.
[5] Crisan, G. C. and Nechita, E. 2008. Solving Fuzzy TSP with Ant Algorithms. International Journal of Computers, Communications and Control, III (Suppl., Proceedings of ICCCC 2008), 228-231.
[6] Fischer, R. and Richter, K. 1982. Solving a multiobjective travelling salesman problem by Dynamic
programming. Optimization, 13:247-252.
[7] Melamed, I. I. and Sigal, I. K. 1997. The linear convolution of criteria in the bicriteria travelling salesman
problem. Computational Mathematics and Mathematical Physics, 37: 902-905.
[8] Padberg, M. and Rinaldi, G. 1987.Optimization of a 532-city symmetric travelling salesman problem by
branch and cut. Operations Research Letters, 6:1-7.
[9] Rehmat, A., Saeed H., and Cheema, M.S. 2007. Fuzzy multi-objective linear programming approach for
travelling salesman problem. Pakistan Journal of Statistics and Operation Research, 3: 87-98.
[10] Sengupta, A. and Pal, T. K. 2009. Fuzzy Preference Ordering of Interval Numbers in Decision Problems. Berlin.
[11] Sigal, I. K. 1994. An algorithm for solving large-scale travelling salesman problem and its numerical
implementation. USSR Computational Mathematics and Mathematical Physics, 27: 121-127.
[12] Zimmermann, H. J. 1996. Fuzzy Set Theory and its Applications. Boston.













Max-Relay Selection in Cooperative Wireless Networks with Data
Compression
Alok M. Jain¹, Neeraj Tiwari²
¹Research Scholar, Department of ECE, TIT, Bhopal
²Assistant Professor, Department of ECE, TIT, Bhopal
E-mail: alok.jain012@gmail.com

Abstract - Secure wireless communication has been an important field of research. A max-ratio relay selection technique has been introduced to secure transmission in buffer-aided cooperative wireless networks: data are transmitted from source to relay and from relay to destination, and an eavesdropper can intercept the data on both hops. A data buffer is assumed to be available at each relay, so that the best source-to-relay or relay-to-destination link can be selected. Regarding the eavesdropper's channel strength, two cases are considered: exact knowledge of the channel gains and knowledge of only the average gains. This paper proposes two additions, a data security scheme and a fast communication scheme: the RC6 block cipher for data security, and RLE (Run Length Encoding) for data compression and fast communication in the cooperative wireless network. Both schemes are proposed to improve the performance and security of wireless communication.
Keywords - Max-Ratio Relay Selection, Cooperative Wireless Network, Secure Wireless Communication, Buffer, RC6 Block
Cipher, RLE (Run Length Encoding), Data Compression.
INTRODUCTION
Max-ratio relay selection is a very useful method for securing a wireless network. Finite-size data are transmitted over the source-to-relay and relay-to-destination links. Generally, relay nodes are used to improve the coverage, reliability and quality of service of a wireless network [1]. Selection amplify-and-forward (AF) relaying is another scheme in cooperative wireless networks to improve the performance of wireless communication; in this scheme the source-to-relay (S-R) link varies with time and a diversity gain is obtained [2].

In [3], two relay nodes are used to increase security against eavesdroppers: the first relay operates in conventional mode, while the second relay is used to create intentional interference at the eavesdropper nodes. This approach improves security and protects the network against jamming problems; a hybrid method is proposed for switching between jamming and non-jamming modes [3]. In relay-based wireless communication, the relay node receives a message from a source node, processes it and forwards it to the destination node. An adaptive relay selection scheme has been proposed with protocols for wireless networks, which yields useful gains in robustness and energy efficiency [4].

Output rate and timing are the two main factors analysed in cooperative wireless networks. The goals are to increase spectral efficiency, mitigate error propagation, and maximize the network lifetime. To achieve this, distributed optimal relay selection in wireless cooperative networks has been used; the obtained relay-selection policy reduces the computation and implementation complexity [5].

A simple distributed method can be used to find the end-to-end path between source and destination without the space-time coding and coordination among terminals that other distributed methods require; the benefits of cooperative diversity are obtained with a simple software and hardware implementation [6]. The term cooperative communications relates to combating multipath fading effects so as to improve adaptivity, reliability and network throughput in wireless networks; simulations achieve near-optimal performance in both diversity gain and channel efficiency [7]. Physical-layer Network Coding (PNC) can reduce the effect of interference, with throughput results for one-dimensional networks and a throughput bound for two-dimensional networks; the throughput of wireless ad hoc networks can be improved by such transmission schemes [8].

Generally, data compression is used to reduce electronic space, i.e. the number of data bits used to represent a piece of information, by eliminating the repetition of identical sets of data bits (redundancy) in an audio/video, graphic, or text data file. Data compression involves encoding information using fewer bits than the original representation. An improved test data compression scheme, based on a combination of test data compatibility and a dictionary for multi-scan designs, has been used to reduce test data volume and thus test cost [9]. Compression is useful because it helps reduce resource usage, such as data storage space or transmission capacity, although a data compression scheme is not always beneficial for energy conservation. A new adaptive compression arbitration system has been introduced which uses new prediction modeling and adaptation; this energy-efficient arbitration mechanism enhances the performance of compression algorithms [10].
Traditionally, compression is a way to reduce the number of bits in a frame while retaining its meaning. It reduces transmission cost, latency and bandwidth, and data compression can also reduce the number of intermediate nodes in wireless networks. For wireless communication, different data compression methods have been proposed, i.e. Distributed Source Modeling (DSM), Distributed Transform Coding (DTC), Distributed Source Coding (DSC) and Compressive Sensing (CS) [11].
SYSTEM MODEL
To enhance the performance of wireless communication, relay selection is one of the most important issues. To address it, we adopt max-ratio relay selection with minimum distance. Relay selection can improve the secrecy capacity by maximizing the ratio of the legitimate signal channel gain to the eavesdropper channel gain [1]. The relay selection scheme is based on the observation that the eavesdropper intercepts signals from both the source and relay nodes, as shown in Fig. 1.
Fig.1. Relay selection system model in secure transmission for wireless communication with eavesdropper.
An eavesdropper placed between the source and the destination intercepts the data arriving on the source links. For the wireless data transmission scheme, the instantaneous secrecy capacity $C_k(t)$ of the overall system is obtained from the gap between the capacities of the legitimate link and of the eavesdropper link, whose source-to-eavesdropper channel gain is denoted $E_s |h_{se}(t)|^2$. In this buffer-aided relay selection approach to secure transmission, the eavesdropper can intercept signals from both the source and relay nodes. The data are transmitted over the source-to-relay and relay-to-destination links selected according to the signal-to-eavesdropper channel gain ratio, and a finite-size buffer is available at each relay in the cooperative wireless network.
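As an illustration of the selection rule (a Python sketch under our reading of max-ratio selection, not the exact algorithm of [1]; the function name, buffer convention and toy numbers are ours):

```python
def max_ratio_select(sr_gain, rd_gain, se_gain, re_gain, buf, buf_size):
    """Pick the (hop, relay) pair with the largest legitimate-to-eavesdropper
    channel gain ratio among all feasible links.

    sr_gain[k], rd_gain[k]: source->relay k and relay k->destination gains;
    se_gain, re_gain[k]:    source->eavesdropper and relay k->eavesdropper gains;
    buf[k]: packets queued at relay k (S->R needs buffer space, R->D needs data).
    """
    candidates = []
    for k in range(len(sr_gain)):
        if buf[k] < buf_size:                  # relay k can still receive
            candidates.append(('S->R', k, sr_gain[k] / se_gain))
        if buf[k] > 0:                         # relay k has a packet to forward
            candidates.append(('R->D', k, rd_gain[k] / re_gain[k]))
    return max(candidates, key=lambda c: c[2]) if candidates else None

# Toy example with two relays: the relay-1 -> destination hop wins (ratio 7.5).
print(max_ratio_select([1.2, 0.8], [0.9, 1.5], 0.3, [0.4, 0.2], [1, 3], 4))
```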

PROPOSED WORK
A basic characteristic of data compression is that it transforms a string of characters in some representation into a new string of bits which contains the same information but whose length is as small as possible. Data compression is also used in backup utilities, spreadsheet applications and database management systems. Some types of data, such as bit-mapped graphics, can be compressed to a small fraction of their normal size. Wireless networks can support data compression techniques, which are generally used to save energy and increase network capacity; here, data compression is proposed to increase the data rate in the wireless network. Data compression falls into two categories: lossless data compression and lossy data compression.
1. Lossless data compression
With lossless compression, the compressed data can be restored to exactly its original value; no information is lost. Lossless data compression algorithms usually exploit statistical redundancy to represent data more concisely without losing information, so the process is reversible. Lossless compression is possible because most real-world data has statistical redundancy. It is used in many applications: for example, in the ZIP file format and in the GNU tool gzip. It is also used as a component within lossy data compression technologies.

2. Lossy data compression
Lossy data compression reduces data by identifying unnecessary information and removing it. With lossy compression, a substantial amount of data reduction is often possible before the result is sufficiently degraded to be noticed by the user. Lossy compression permits reconstruction only of an approximation of the original data, though this usually allows much higher compression rates; it is commonly used to compress audio, video and still images.
In this paper, we propose the Run-Length Encoding method for data compression. Run-length coding (RLE) is a very simple and well-known method of data compression.
A. Run-Length Encoding:
Run-Length Encoding is a data compression algorithm that is supported by bitmap file formats such as TIFF, BMP and PCX. RLE is a simple form of data compression in which runs of data are stored as a single data value and a count, rather than as the original run. RLE can compress any type of data regardless of its information content, but the content of the data affects the compression ratio achieved. RLE is very easy to implement and quick to execute; it works by reducing the size of a repeating string of data, and such a repeating string is known as a run. RLE is also used in a graphics file format supported by CompuServe for compressing black-and-white images. RLE is a lossless type of compression; it cannot achieve great compression ratios, but its simplicity is a strong point.

Run-Length Encoding is based on the replacement of a long sequence of the same symbol by a shorter sequence, and is a good introduction to data compression techniques. A run of repeated symbols is replaced by a shorter sequence containing one or more symbols, the length information and sometimes an escape symbol, as sketched below.
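A minimal Python sketch of byte-level RLE as described above (the (count, value) pair format is one common choice, not the specific variant used by TIFF/BMP/PCX):

```python
def rle_encode(data: bytes) -> bytes:
    """Replace each run of a repeated byte by a (count, value) pair."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1                      # extend the run (count fits one byte)
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Expand (count, value) pairs back into the original byte string."""
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

sample = b"AAAAABBBCCCCCCCCCD"
assert rle_decode(rle_encode(sample)) == sample    # lossless round trip
```

As the round trip shows, no information is lost; the scheme only pays off when runs are common, which is why the data content affects the achieved ratio.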








Fig.2. Basic flow chart of Run Length Encoding method.

B. RC6 Block Cipher:
A block cipher can be viewed as a set of code books, where every key produces a different code book; the encryption of a plaintext block is the corresponding ciphertext block entry in the code book. RC6 (Rivest Cipher 6) is a symmetric-key block cipher derived from RC5; it is a simple, fast and secure cipher that was submitted as an AES (Advanced Encryption Standard) candidate. RC6 is the newer version of the RC5 block cipher, and RC5 uses data-dependent rotations to achieve a high level of security. RC6 is one of a family of encryption algorithms: it is commonly used with a block size of 128 bits and supports key sizes of 128, 192 and 256 bits but, like RC5, it is fully parameterizable. The RC6 block cipher is shown in Fig. 3.
RC6 provides a simple cipher that has received numerous evaluations and offers adequate security in a small package. RC6, like RC5, consists of three components: a key expansion algorithm, an encryption algorithm, and a decryption algorithm. It is specified as RC6-w/r/b, where w is the word size, r is the non-negative number of rounds, and b is the byte size of the encryption key. RC6 makes use of data-dependent rotations. It is based on seven primitive operations; normally, only six operations are counted as primitive.






Fig.3. The RC6 Block Cipher.
However, the parallel assignment is also primitive and is an essential operation in RC6. The addition, subtraction and multiplication operations use two's complement representation; integer multiplication is used to increase diffusion per round and to increase the speed of the cipher. The parts of run-length encoding algorithms that differ between implementations are the decisions made based on the type of data being decoded (such as the length of data runs). RLE schemes used to encode bitmap graphics are usually divided into classes by the type of atomic (that is, most fundamental) elements that they encode; the three classes used by most graphics file formats are bit-, byte- and pixel-level RLE.
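To make the round structure concrete, here is a minimal Python sketch of the RC6 encryption pass for w = 32 (our illustration; it assumes the round keys S[0..2r+3] have already been produced by the RC6 key-expansion algorithm, which is omitted):

```python
MASK = 0xFFFFFFFF  # 32-bit word arithmetic

def rotl(x, y, w=32):
    """Rotate the w-bit word x left by the low lg(w) bits of y."""
    y %= w
    return ((x << y) | (x >> (w - y))) & MASK if y else x & MASK

def rc6_encrypt_block(A, B, C, D, S, r=20):
    """Encrypt one 128-bit block, given as four 32-bit words, with round keys S."""
    B = (B + S[0]) & MASK
    D = (D + S[1]) & MASK
    for i in range(1, r + 1):
        t = rotl((B * (2 * B + 1)) & MASK, 5)     # quadratic mix of B, rotated by lg w
        u = rotl((D * (2 * D + 1)) & MASK, 5)
        A = (rotl(A ^ t, u) + S[2 * i]) & MASK    # data-dependent rotation by u
        C = (rotl(C ^ u, t) + S[2 * i + 1]) & MASK
        A, B, C, D = B, C, D, A                   # the parallel assignment
    A = (A + S[2 * r + 2]) & MASK
    C = (C + S[2 * r + 3]) & MASK
    return A, B, C, D
```

Decryption applies the same steps in reverse order, with subtractions and right rotations.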
DISCUSSION
We consider the two most important cases for max-ratio relay selection, i.e. exact knowledge of the eavesdropping channel (case 1) and knowledge of only the average channel gains of the eavesdropping channel (case 2). Fig. 4 plots the target secrecy capacity on the x-axis against the secrecy outage probability on the y-axis; it shows the secrecy outage probability of the max-ratio scheme for cases 1 and 2.
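As a rough illustration of how such outage curves can be produced (our own Monte Carlo sketch over Rayleigh-faded links, not the simulation setup of [1]; all names and numbers are ours), an outage is counted whenever the instantaneous secrecy capacity falls below the target secrecy rate:

```python
import math
import random

def secrecy_outage_prob(snr_main, snr_eve, target_rs, trials=100_000):
    """Estimate P[Cs < target_rs] under Rayleigh fading on both links,
    where Cs = max(0, log2(1 + snr_main*g_m) - log2(1 + snr_eve*g_e))."""
    outages = 0
    for _ in range(trials):
        g_m = random.expovariate(1.0)   # |h|^2 is exponential under Rayleigh fading
        g_e = random.expovariate(1.0)
        cs = math.log2(1 + snr_main * g_m) - math.log2(1 + snr_eve * g_e)
        if max(cs, 0.0) < target_rs:
            outages += 1
    return outages / trials

# Sweep target_rs to trace one curve of the kind shown in Fig. 4.
print(secrecy_outage_prob(snr_main=100, snr_eve=10, target_rs=1.0))
```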













Fig.4. The secrecy outage probabilities of the max-ratio scheme for cases 1 and 2.


Similarly, Fig. 5 shows the secrecy outage probability versus signal-to-noise ratio, where the gain is 30 dB and the target secrecy capacity is unity.

Case 1 Case 2
Fig.5. The secrecy outage probabilities vs Signal-to-Noise Ratio for cases 1 and 2.
ACKNOWLEDGEMENT
I would like to express my thanks to the Department of Electronics and Communication Engineering, TIT Bhopal, for allowing us to undertake this work and to present our findings as our contribution to the development of knowledge in the field of wireless communication, and for their generous help in various ways towards the completion of this paper.
CONCLUSION
In this paper we proposed a max-ratio relay selection policy for cooperative wireless networks with data compression techniques, together with a maximum-hops-with-minimum-distance scheme. A buffer is present in each relay node and is used to pass data from one relay to another; the relay with the largest gain ratio among all available source-to-relay and relay-to-destination paths is selected. We proposed the RLE (Run Length Encoding) data compression method to reduce the data size in the wireless network and increase the communication speed, and the RC6 block cipher for security of the data in the cooperative wireless network. Both schemes are proposed to improve the security and efficiency of wireless communication.

REFERENCES:
[1] Gaojie Chen, Zhao Tian, Yu Gong, Zhi Chen, and Jonathon A. Chambers, Max-Ratio Relay Selection in Secure Buffer-Aided
Cooperative Wireless Networks, IEEE Transactions on Information Forensics and Security, Vol. 9, No. 4, April 2014.
[2] Jeehoon Lee, Minjoong Rim, and Kiseon Kim, On the Outage Performance of Selection Amplify-and-Forward Relaying
Scheme, IEEE Communications Letters, Vol. 18, No. 3, March 2014.
[3] Ioannis Krikidis, John S. Thompson, and Steve McLaughlin, Relay Selection for Secure Cooperative Networks with Jamming,
IEEE Transactions On Wireless Communications, Vol. 8, No. 10, October 2009.
[4] Helmut Adam, Christian Bettstetter, and Sidi Mohammad Senouci, Adaptive Relay Selection in Cooperative Wireless
Network, IEEE International Symposium on Personal, Indoor and Mobile Radio Communication (PIMRC), Cannes, France,
September 15-18, 2008.
[5] Yifei Wei, F. Richard Yu, and Mei Song, Distributed Optimal Relay Selection in Wireless Cooperative Networks With Finite-
State Markov Channels, IEEE Transactions on Vehicular Technology, Vol. 59, No. 5, June 2010.
[6] Aggelos Bletsas, Andrew Lippman, and David P. Reed, A Simple Distributed Method for Relay Selection in Cooperative Diversity Wireless Networks, based on Reciprocity and Channel Measurements, Vehicular Technology Conference, 2005 (VTC 2005-Spring), IEEE 61st, Vol. 3, 30 May-1 June 2005.

[7] V. Rajaravivarma, E. Lord, and J. Barker, Data compression techniques in image compression for multimedia systems,
Southcon/96. Conference Record, 25-27 Jun 1996.
[8] Xican Yang, Jian Li, Changliang Xie, and Li Li, Throughput Gain of Random Wireless Networks with Physical-Layer Network Coding, Tsinghua Science and Technology, ISSN 1007-0214, pp. 161-171, Vol. 17, No. 2, April 2012.
[9] LIN Teng, FENG Jianhua, and Wang Yangyuan, Improved Data Compression Scheme for Multi-Scan Designs, Tsinghua
Science And Technology ISSN 1007-0214 16/49 pp89-94 Vol 12, Number S1, July 2007.
[10] Ying Beihua, Liu Yongpan, and Wang Hui, Improved Adaptive Compression Arbitration System for Wireless Sensor Networks, Tsinghua Science and Technology, ISSN 1007-0214, pp. 202-208, Vol. 15, No. 2, April 2010.
[11] You-Chiun Wang, Yao-Yu Hsieh, and Yu-Chee Tseng, Compression and Storage Schemes in a Sensor Network with Spatial
and Temporal Coding Techniques, Vehicular Technology Conference, 2008. VTC Spring 2008. IEEE, 11-14 May 2008.
[12] Zhenzhen Gao, Yu-Han Yang, and K. J. Ray Liu, Anti-Eavesdropping Space-Time Network Coding for Cooperative
Communications, IEEE Transactions on Wireless Communications, Accepted For Publication.
[13] M.VidyaSagar, and J.S. Rose Victor, Modified Run Length Encoding Scheme for High Data Compression Rate, International
Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Vol 2, Issue 12, December 2013.
[14] T. A. Welch, A technique for high-performance data compression, Computer, 17(6): 8-19, 1984.
[15] Scott Hauck, and William D. Wilson, Runlength Compression Techniques for FPGA Configurations, IEEE Symposium on
FPGAs for Custom Computing Machines, 1999.
[16] M. J. Neely Energy Optimal Control for time varying wireless networks, IEEE Transactions on Information Theory,
52(7):29152934, 2006.
[17] Gordon Cormack and Nigel Horspool, "Data Compression using Dynamic Markov Modeling," Computer Journal 30:6
(December 1987).
[18] Cleary, J.; Witten, I. (April 1984). "Data Compression Using Adaptive Coding and Partial String Matching," IEEE Trans. Commun. 32 (4): 396-402. doi:10.1109/TCOM.1984














Protecting Source and Sink Nodes Location Privacy against Adversaries in
Sensor Network: A Survey
Pavitha N¹, S.N. Shelke²
¹PG Scholar, Sinhgad Academy of Engineering, Pune, Maharashtra, India
²Assistant Professor, Sinhgad Academy of Engineering, Pune, Maharashtra, India
E-mail: pavithanrai@gmail.com

Abstract- Due to the open nature of a sensor network, it is relatively easy for an adversary to eavesdrop and trace packet movement in
the network in order to capture the source and destination physically. Many security protocols have been developed to provide
confidentiality for the content of messages whereas contextual information usually remains exposed. Such contextual information can
be exploited by an adversary to derive sensitive information such as the locations of monitored objects and data sinks in the field. This
paper is a survey of various techniques to provide location privacy in sensor networks, for both the source node and the sink node.
Keywords - sensor network, location privacy.
I. INTRODUCTION
Sensor networks have been extensively used in many applications because of their ease of installation, cost efficiency and portability. A WSN is usually composed of hundreds or thousands of sensor nodes. These sensor nodes are often densely deployed
in a sensor field and have the capability to collect data and route data back to a base station (BS). A sensor consists of four basic parts:
a sensing unit, a processing unit, a transceiver unit, and a power unit. It may also have additional application- dependent components
such as a location finding system, power generator, and mobilizer. Sensing units are usually composed of two subunits: sensors and
analog-to-digital converters (ADCs). The ADCs convert the analog signals produced by the sensors to digital signals based on the
observed phenomenon. The processing unit, which is generally associated with a small storage unit, manages the procedures that make
the sensor node collaborate with the other nodes.
A transceiver unit connects the node to the network. One of the most important units is the power unit. A power unit may be
finite (e.g., a single battery) or may be supported by power scavenging devices (e.g., solar cells). Most of the sensor network routing
techniques and sensing tasks require knowledge of location, which is provided by a location finding system. Finally, a mobilizer may
sometimes be needed to move the sensor node, depending on the application.
II. NETWORK MODEL
Usually, sensor nodes are deployed in a designated area by an authority such as the government or a military unit and then automatically form a network through wireless communications. Sensor nodes can be either static or dynamic according to application
requirements. One or several base stations (BSs) are deployed together with the network. A BS can be either static or mobile. Sensor
nodes keep monitoring the network area after being deployed. After an event of interest occurs, one of the surrounding sensor nodes
can detect it, generate a report, and transmit the report to a BS through multihop wireless links. Collaboration can be carried out if

multiple surrounding nodes detect the same event. In this case, one of them generates a final report after collaborating with the other
nodes. The BS can process the report and then forward it through either high-quality wireless or wired links to the external world for
further processing. The WSN authority can send commands or queries to a BS, which spreads those commands or queries into the
network. Hence, a BS acts as a gateway between the WSN and the external world. An example is illustrated in Figure 1 [17].
Because a WSN consists of a large number of sensor nodes, each sensor node is usually limited in its resources due to cost considerations in manufacturing. For example, MICA2 MPR400CB, which is the most popular sensor node platform, has only 128 KB of program memory and an 8-bit ATmega128L CPU. Its data rate is 38.4 kbaud over 500 feet, and it is powered by only two AA batteries. These constrained resources cannot support complicated applications. On the other hand, BSs are usually well designed and have more resources because they are directly attached to the external world [17].

Figure 1: A wireless Sensor Network
III. SECURITY ISSUES IN SENSOR NETWORK
Privacy is one of the most important problems in wireless sensor networks due to the open nature of wireless communication,
which makes it very easy for adversaries to eavesdrop. When deployed in critical applications, mechanisms must be in place to secure
a WSN. Security issues associated with WSNs can be categorized into two broad classes: content-related security, and contextual
security. Content-related security deals with security issues related to the content of data traversing the sensor network such as data
secrecy, integrity, and key exchange. Numerous efforts have recently been dedicated to content-related security issues, such as secure
routing, key management and establishment, access control, and data aggregation. In many cases, it does not suffice to just address the
content-related security issues. Suppose a sensitive event triggers a packet being sent over the network; while the content of the packet
is encrypted, knowing which node sends the packet reveals the location where the event occurs. Contextual security is thus concerned
with protecting such contextual information associated with data collection and transmission.
One of the ways to increase the reliability and range of WSNs is to employ multi-hop routing, whose concept is to forward a packet to the destination along a different path in case of node failure. But the critical issue of providing security and privacy in WSNs still remains; therefore, preserving the location privacy of the source node remains critical. Wireless sensor

networks are used in many areas, such as military surveillance, where the possibility of eavesdropping on the traffic to get hold of sensitive information is high. Exploitation of such information can cause economic losses or endanger human lives. To protect such information, researchers are finding new ways to provide standard security services such as availability, integrity, confidentiality and authentication. The exchange of information between sensors can disclose sensitive information which reveals the location of critical modules present in the network.

Figure 2: Threats in military surveillance
Figure 2 shows WSNs deployed in a military observation area. In this figure, soldier 1 is sending trusted data to soldier 2 via many intermediate nodes; soldier 2 is the sink node. A spy present in the same network tries to intercept the data by compromising one of the intermediary nodes. The nodes may reveal trusted data to the adversary, such as the location of the source, the location of the sink, or the positions of the armed forces in the locality.

Figure 3: Threats in monitoring endangered animals
Figure 3 shows the deployment of a sensor network to monitor endangered animals in a forest. An event is generated whenever an animal is spotted in the monitored area. A hunter who gathers this information may capture or kill the endangered animal. These scenarios depict the high vulnerability of WSNs, which stems from the open wireless medium used to transmit information from source to destination.


IV. SOURCE LOCATION PRIVACY TECHNIQUES
Flooding technique [16]
In flooding, a message originator transmits its message to each of its neighbours, who in turn retransmit the message to each
of their neighbours. Although flooding is known to have performance drawbacks, it nonetheless remains a popular technique for
relaying information due to its ease of implementation, and the fact that minor modifications allow it to perform relatively well.
Fake packet generation [5]
Fake packet generation creates fake sources whenever a sender notifies the sink that it has real data to send. The fake senders
are away from the real source and approximately at the same distance from the sink as the real sender.
Phantom single-path routing [5]
Phantom single-path routing achieves location privacy by making every packet walk along a random path before being delivered to the sink, as sketched below.
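A minimal Python sketch of the idea (our illustration, not the exact protocol of [5]; `graph` is an adjacency-list dictionary and the final delivery phase is abbreviated):

```python
import random

def phantom_route(graph, source, sink, walk_hops=5):
    """Random-walk for walk_hops to a 'phantom' source, then hand the packet
    to the network's normal delivery; an adversary tracing packets backwards
    converges on the phantom node rather than on the real source."""
    node, path = source, [source]
    for _ in range(walk_hops):                 # random-walk phase
        node = random.choice(graph[node])
        path.append(node)
    path.append(sink)                          # stand-in for the delivery phase
    return path
```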


Figure 4:Phantom routing
Cyclic entrapment [2]
Cyclic entrapment creates looping paths at various places in the network to fool the adversary into following these loops
repeatedly and thereby increase the safety period.


Figure 5: Cyclic entrapment
V. SINK LOCATION PRIVACY TECHNIQUES
Location Privacy Routing (LPR) [14]
A technique called Location Privacy Routing (LPR) is used along with fake packet injection: randomized routing confuses the packet tracer, and the injected fake packets make the transmission pattern appear completely random. Careful monitoring of packet sending times may still allow an adversary to gain information about the data traffic flows.
Randomized Routing with Hidden Address (RRHA) [12]
As the name suggests, the identity and location of the sink are kept private in the network so that the sink is not revealed and does not become the target of attacks. The destination addresses of the packets are kept hidden so that the attacker cannot obtain the location of the sink even when reading the header fields of the packets, and the packets are forwarded along different random paths. RRHA provides strong protection of sink privacy against both active and passive attackers.
Bidirectional Tree Scheme (BT) [11]
This scheme is used to protect end-to-end location privacy in a sensor network. The real messages travel along the shortest route from the source to the sink node, while branches are created along the source side of this route to carry dummy messages from leaf nodes, which makes the adversary deviate from the real route and helps protect the source location privacy.
Secure location verification using randomly selected base stations [7]

This method selects a random set of base stations and assumes that their positions are known instead of hiding them, but it hides which particular base stations are being used in a specific execution of the location determination protocol. Even if the positions of the base stations are known, an invader has at most a 50% chance of succeeding in one trial.
Base station Location Anonymity and Security Technique (BLAST) [10]
BLAST aims to secure the base station against both packet tracing and traffic analysis attacks and provides good privacy against a global attacker. The network is divided into blast nodes and ordinary nodes, with the receiver located somewhere near the blast nodes. The source node sends a packet to one of the blast nodes, and the packet is then retransmitted inside the blast region. The adversary is unaware of the communication between the blast node and the actual receiver; hence, the location privacy of the receiver is maintained.
BLAST with Clustering [1]
The whole sensor network is divided into small groups called clusters using an efficient clustering algorithm; a cluster contains many members and a cluster head. An efficient shortest path algorithm is used to send data from the source node to the blast node, and the packet is then retransmitted within the blast security ring using varying transmission power depending on the location of the sink node. In this approach the sink node is always present within the security ring of blast nodes, so an adversary with global knowledge of the network traffic can easily defeat the scheme: it only needs to identify the region of high activity to locate the destination.
VI. CONCLUSION
Providing privacy for contextual information, such as the location of the source or sink node, is very important in sensor networks. An adversary can use location information to attack either the source node or the destination node. In this paper, we have studied different approaches for providing location privacy for the source node and the sink node against adversaries in sensor networks.

REFERENCES:
[1] Priti C. Shahare, Nekita A. Chavhan An Approach to Secure Sink nodes Location Privacy in Wireless Sensor Networks Fourth
Intl Conf. on Communication Systems and Network Technologies 2014. pp 748-751.
[2] Y. Ouyang, Z. Le, G. Chen, J. Ford, and F. Makedon, Entrapping Adversaries for Source Protection in Sensor Networks, Proc.
Intl Conf. World of Wireless, Mobile, and Multimedia Networking (WoWMoM 06), June 2006.
[3] V. Rini, and K. Janani, Securing the Location Privacy in wireless Sensor Networks, International Journal of Engineering
Research & Technology (IJERT), Vol. 2 Issue 1, January- 2013.pp.1-4.
[4] Ying Jian, Liang Zhang, and Shigang Chen, Protecting Receiver Location Privacy in Wireless Sensor Networks, IEEE
INFOCOM 2007 proceedings. pp. 1955-1963.
[5] P. Kamat, Y. Zhang, W. Trappe, and C. Ozturk, Enhancing Source-Location Privacy in Sensor Network Routing, Proc. Intl
Conf. Distributed Computing Systems (ICDCS 05), June 2005.

[6] Chinnu Mary George and Teslin Jacob, Privacy Towards Base Station In Wireless Sensor Networks Against a Global
Eavesdropper A Survey, International Journal of Computer Science and Management Research, Vol 2, Issue, February 2013. pp.
1493-1497.
[7] Matthew Holiday, Subbarayan Venkatesan, and Neeraj Mittal, Secure Location Verification with Randomly-Selected Base
Stations, Intl Conf. on Distributed Computing Systems Workshops 2011. pp. 119-122.
[8] Mohamed Younis, and ZhongRen, Effect of Mobility and Count of Base stations on the Anonymity of Wireless Sensor
Networks, Department of Computer Science and Electrical Engineering, USA, 2011. pp. 436-441.
[9] Mauro Conti, Bruno Crispo, and Jeroen Willemsen, Providing Source Location Privacy in Wireless Sensor Networks: A Survey,
IEEE Communications Surveys & Tutorials, 2013.
[10] Venkata Praneeth, Dharma P. Agrawal, Varma Gottumukkala, Vaibhav Pandit, and Hailong Li, Base-station Location
Anonymity and Security Technique (BLAST) for Wireless Sensor Networks, First IEEE Intl Workshop on Security and Forensics in
Communication Systems, 2012 IEEE.
[11] W. Lou, and H. Chen, From nowhere to somewhere: protecting end-to end location privacy in wireless sensor networks, 2010.
[12] E. Ngai, On providing sink anonymity for sensor networks, in Proceedings of the 2009 International Conference on Wireless Communications and Mobile Computing: Connecting the World Wirelessly. ACM, 2009, pp. 269-273.
[13] Yong Wang, Yuyan Xue, and Byrav Ramamurthy, A Key Management Protocol for Wireless Sensor Networks with Multiple
Base Stations, IEEE Communications ICC proceedings. 2008. pp.1625-1629.
[14] Y. Jian, L. Zhang, S. Chen, and Z. Zhang, A novel scheme for protecting receiver's location privacy in wireless sensor networks, IEEE Transactions on Wireless Communications, vol. 7, no. 10, pp. 3769-3779, 2008.
[15] K. Mehta, M. Wright, and D. Liu, Location privacy in sensor networks against a global eavesdropper, IEEE International Conference on Network Protocols, IEEE, 2007, pp. 314-323.
[16] C. Ozturk, Y. Zhang, and W. Trappe, Source Location Privacy in Energy-Constrained Sensor Network Routing, Proc.
Workshop Security of Ad Hoc and Sensor Networks (SASN 04), Oct. 2004.
[17] Yun Zhou, Yuguang Fang, and Yanchao Zhang, Securing Wireless Sensor Networks: A Survey, IEEE Communications Surveys, 2008.





Using Wavelet for Finding Fault Place and Neural Network for Types of Fault in Transmission Lines
Mohammad Ali Adelian; Rahul S. Desai, Assistant Professor
E-mail: Ma_adelian@yahoo.com, Tel: 0097507638844

Abstract - Transmission lines are subject to faults, which may be single phase, double phase, or three phase to ground. Various schemes exist in which modern relays work with reclosers to protect the faulted phases, and accurate selection of the faulted phase is required. This paper presents a scheme for the detection and classification of faults on a transmission line. The scheme uses the wavelet transform and a neural network together as a suitable way to solve the problem. The wavelet transform is a strong, fast and accurate mathematical tool for analysing transient signals on transmission lines, while the artificial neural network can discriminate between measured signals whose associated patterns differ. This is done with a specific algorithm: a time-frequency analysis of the fault transients is performed with the help of the wavelet transform, and the result is passed to the artificial neural network to identify which phase is faulted. MATLAB software is used to simulate the fault signals and verify the correctness of the algorithm. Different fault types are given to the software, and the results show where the fault occurred and which phase is affected.

Keywords - neural network, wavelet transform, fault identification and classification, transmission line.
INTRODUCTION
Transmission lines of a given length carry a shared voltage and current and are used to transfer electrical energy accurately, reliably and securely. Parallel lines exist in different configurations, and the effect of mutual coupling makes their protection a challenging problem. Statistically, about 80% of the faults on transmission lines are transient in nature.
Abnormal transient overvoltages cause breakdown of the air surrounding the insulator. If the supply is interrupted, such faults can disappear and the arc is allowed to de-ionize. Auto-reclosing equipment then plays its role of restoring the transmission line to service subsequent to the tripping of the associated circuit breakers due to the fault [1].
Since most of the faults on transmission lines are single line to ground faults, relaying systems should be in a position to distinguish between the faulted phases. For this purpose, there should be an algorithm which correctly distinguishes single line to ground faults, for which a single pole is tripped, from other faults, for which three-phase tripping is initiated.
One of the most important requirements is to select the right phase, so as to avoid unnecessary three-phase tripping. It is also important to minimize the possibility of single-phase faults spreading to other phases, because when this happens the clearance of single phase to earth faults takes more time and high-speed decision making is required. In addition, there are some other benefits:
1. High speed of selecting the right phase
2. High speed clearance
3. Reducing the level of post arc gas
4. Reducing the dead time to achieve satisfactory extinction of the secondary arc [2].
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

327 www.ijergs.org

Some benefits related to single-phase tripping and reclosing are:
I) Considerable improvement of transient-state stability.
II) Improved system reliability and availability when remote generating stations are connected to the load centre by one or two transmission lines.
III) Reduced switching overvoltages.
IV) Reduced shaft oscillations of large thermal units [3].
A common type of protection is the distance relay, which is based on measuring the fundamental-frequency positive-sequence impedance of the line. Besides detecting the fault zone and providing directional discrimination, the measuring elements of the distance relay also perform the job of faulted phase selection. However, ground distance units may operate for double phase to ground faults, and phase distance units may operate for ground faults very close to the relay location.
Planners could not rely on the distance relay alone to determine the fault type, so different techniques, such as the wavelet transform and neural networks, have been used to find the faulted phase in EHV/UHV transmission lines, and these techniques have been developed over the years.
Methodology:
Different methods can help to find the location of a fault as well as its type. Here we use the wavelet transform as one of the best tools to find the fault location: a signal propagates along the transmission line, and by measuring the time at which the reflected signal returns, the location where the fault happened is found. We use another tool, the neural network, to find the fault type. When the model is run, a coding part works together with the model, so in the MATLAB command line we can see the fault location, within some tolerance, and the fault type. To choose the fault location for the output of the program, we only need to change the lengths of the sending- and receiving-end line sections; their total must equal 300, because the transmission line is taken to be 300 km long.
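The paper's implementation uses MATLAB/Simulink. Purely as an illustration of the signal-processing chain (our own Python sketch: the signal arrays, labels and network size are hypothetical), the energies of discrete-wavelet detail coefficients can serve as input features to a small neural network classifier, e.g. with PyWavelets and scikit-learn:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def fault_features(phase_currents, wavelet="db4", level=3):
    """Energy of the detail coefficients of each phase current; the
    high-frequency content of a fault transient concentrates here."""
    feats = []
    for x in phase_currents:                   # one array per phase (A, B, C)
        coeffs = pywt.wavedec(x, wavelet, level=level)
        feats += [float(np.sum(c ** 2)) for c in coeffs[1:]]  # skip approximation
    return feats

# Hypothetical training data: rows of X are feature vectors from simulated
# faults, y holds fault-type labels such as "LG", "LL", "LLG", "LLL".
X = [fault_features(np.random.randn(3, 512)) for _ in range(40)]
y = ["LG", "LL", "LLG", "LLL"] * 10
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
```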

Modeling:

Figure 7.1 shows the model used to simulate the transmission line. As can be seen, two three-phase sources are connected to the transmission line, one at each end. Both sources have the same rating of 400 kV; the remaining quantities are listed below.
Three-phase sources:
Parameters of the left three-phase source:
Phase-to-phase rms voltage (V): 400e3
Phase angle of phase A (degrees): 0; Frequency (Hz): 50; Internal connection: Yg
3-phase short-circuit level at base voltage (VA): 250e6
Base voltage (Vrms ph-ph): 400e3
X/R ratio: 12.37/2.46
Parameters of the right three-phase source:
Phase-to-phase rms voltage (V): 400e3
Phase angle of phase A (degrees): -15; Frequency (Hz): 50; Internal connection: Yg
3-phase short-circuit level at base voltage (VA): 1915e6
Base voltage (Vrms ph-ph): 400e3
X/R ratio: 12.37/2.46


Figure 7.1 modeling of transmission line
Circuit breaker:
Another element in this figure is the circuit breaker; both circuit breakers share the same parameter values, given below. A short description of this block: connect it in series with the three-phase element you want to switch. You can define the breaker timing directly from the dialog box or apply an external logical signal; if you check the 'External control' box, the external control input will appear. Parameters for both circuit breakers:
Transition times (s): [0.3]; Breaker resistance Ron (ohms): 0.001; Snubber resistance Rp (ohms): 1e6
Snubber capacitance Cp (farad): inf; Initial status of breakers: closed
Three-phase series RLC load:
Another part is the three-phase series RLC load; the values are the same on both sides and are given below.
Configuration: Y grounded
Nominal phase-to-phase voltage Vn (Vrms): 400e3
Nominal frequency fn (Hz): 50; Active power P (W): 100e6
Inductive reactive power QL (positive var): 0; Capacitive reactive power QC: 0
Distributed parameters line:
Another part is the distributed parameters line. The parameter values are almost the same for the two line sections; the difference is only in the line length in km, because the total line is 300 km and, during modeling, we choose the fault location, so the lengths of the two blocks must sum to 300. For example, to place the fault at 28 km, the length of the other section must be set to 272 km. The remaining parameters stay the same during modeling, but each one can be changed if a different output signal is needed. The description of this block: it implements an N-phase distributed parameter line model. The RLC parameters are specified by [N×N] matrices. To model a two-, three-, or six-phase symmetrical line you can either specify complete [N×N] matrices or simply enter sequence parameter vectors: the positive- and zero-sequence parameters for a two-phase or three-phase transposed line, plus the mutual zero sequence for a six-phase transposed line (2 coupled 3-phase lines). This block has the following parameter values:
Number of phases [N]: 3; Frequency used for RLC specification (Hz): 50
Resistance per unit length (ohms/km) [N×N matrix] or [r1 r0 r0m]: [0.0298 0.162]
Inductance per unit length (H/km) [N×N matrix] or [l1 l0 l0m]: [1.05e-3 3.94e-3]
Capacitance per unit length (F/km) [N×N matrix] or [c1 c0 c0m]: [12.74e-9 7.751e-9]

Line length (km): selectable; the lengths of the two sections must total 300 km, since the transmission line is taken to be 300 km long. Measurements: phase-to-ground voltages.
Three-phase V-I measurement:
Another part of the figure is the three-phase V-I measurement block. As can be seen, there are two of them, one on the right and one on the left. Their settings are almost the same; the difference is only in the signal labels: on the left the current label is Iabc and the voltage label is Vabc, while on the right they are Iabc1 and Vabc1. The block description: ideal three-phase voltage and current measurements; the block can output the voltages and currents in per-unit values or in volts and amperes.
Three-phase fault:
Another part of the figure is the three-phase fault block. With this block we can choose different types of fault and also the ground resistance. In the block, the different phases (phase A, phase B, phase C) and the ground connection can be selected in any combination, with or without ground. The block description: use this block to program a fault (short circuit) between any phase and the ground. You can define the fault timing directly from the dialog box or apply an external logical signal; if you check the 'External control' box, the external control input will appear. Parameters:
Fault resistance Ron (ohms): 8; Transition status [1, 0, 1 ...]: [1 0]; Transition times (s): [0.04 0.042]
Snubber resistance Rp (ohms): 1e6; Snubber capacitance Cp (farad): inf; Measurement: none
There is another part of the model, shown in Figure 7.2; it consists of two sub-parts that work together to feed their signals to the scope to display the result.

Figure 7.2 voltage and current block to scope

As can be seen in Figure 7.2, two parts carry the voltage and the current and pass their signals to the scope to show the result; two further blocks connected to the voltage and current are the three-phase V-I measurement blocks, whose configuration and connections are shown in Figure 7.3.

Figure 7.3 three phase VI measurement


Figure 7.5 no-fault condition in the transmission line; Figure 7.6 voltage, current and wavelet signals based on the wavelet transform

Figure 7.8 performance; Figure 7.9 gradient and validation performance; Figure 7.10 output of the program

When there is no fault in the system, with the help of the neural network and the wavelet transform, the output of the code after running shows that there is no fault in the transmission line, as can be seen in Figure 7.10.

Figure 7.13 LG fault (fault between phase A and ground); Figure 7.17 voltage, current and wavelet signals based on the wavelet transform


Figure 7.15 performance; Figure 7.16 gradient and validation performance; Figure 7.18 output for single phase to ground

After running the program, the output gives the result with the respective location at which the fault happened; this result has some tolerance. It is shown in Figure 7.18, where the correct phase and location are reported.

This type of fault is seen when two different phases make a connection with each other; it is shown in Figure 7.49.

Figure 7.49 LL fault (fault between phase A and phase B)
Figure 7.53 Voltage, current and wavelet signal based on wavelet transform

Figure 7.51 Performance
Figure 7.52 Gradient and validation performance
Figure 7.54 Output of double phase (phase-to-phase) fault



Figure 7.67 LLL fault (fault between phase A, phase B and phase C)
Figure 7.71 Voltage, current and wavelet signal based on wavelet transform

Here, in figures 9 and 12, we will see the results for the two-phase-to-ground and three-phase-to-ground faults.
Figure 7.72 Output of three phase fault (phases with each other)


In this stage, we choose phases A and B, connected to each other and to ground, and take the output result. The result is shown in figure 7.31.


Figure 7.31 LLG fault (fault between phases A and B to ground)
Figure 7.35 Voltage, current and wavelet signal based on wavelet transform


Figure 7.33 Performance
Figure 7.34 Gradient and validation performance
Figure 7.36 Output of double phase to ground fault
As can be seen in figure 7.36, the correct phases are selected, with some tolerance in the location of the fault.


Figure 12. LLLG fault (phase A, phase B and phase C to ground) on the transmission line
Figure 7.77 Voltage, current and wavelet signal based on wavelet transform


Figure 7.75 Performance
Figure 7.76 Gradient and validation performance
Figure 7.78 Output of three phase to ground fault

Analysis of results

The simulation was carried out on a 300 km transmission line for different fault types and fault locations, using the MATLAB simulation software. As we have seen, there is some tolerance in finding the place of the fault: each modelled case shows some percentage error, which is collected in the following tables.
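The tabulated percentage errors appear to be normalized by the total line length; a hedged reconstruction of the error measure (the normalization by the 300 km line length is inferred from the table values, not stated explicitly) is

\[ \text{Percentage error} = \frac{\lvert \text{measured fault location} - \text{actual fault distance} \rvert}{300\ \text{km}} \times 100 \]

For example, for the first row of Table 7.1, \( \lvert 24.39 - 24 \rvert / 300 \times 100 \approx 0.13\% \), close to the reported 0.153%.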


Table 7.1 Percentage errors as a function of fault distance and fault resistance for the ANN chosen for one line to ground fault location

Serial No | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 20 Ω] | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 60 Ω]
1 | 24 | 24.39 | 0.153 | 49 | 50.46 | 0.51
2 | 74 | 74.48 | 0.187 | 99 | 100.02 | 0.33
3 | 124 | 124.02 | 0.03 | 149 | 152.03 | 1.05
4 | 174 | 174.08 | 0.02 | 198 | 200.57 | 0.79
5 | 224 | 224.81 | 0.203 | 248 | 253.79 | 1.63

Table 7.2 Percentage errors as a function of fault distance and fault resistance for the ANN chosen for double line to ground fault location

Serial No | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 20 Ω] | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 60 Ω]
1 | 24 | 24.43 | 0.167 | 49 | 52.76 | 1.25
2 | 74 | 74.17 | 0.05 | 99 | 100.02 | 1.03
3 | 124 | 124.09 | 0.026 | 149 | 151.03 | 0.68
4 | 174 | 174.15 | 0.043 | 198 | 200.89 | 0.89
5 | 224 | 224.29 | 0.11 | 248 | 253.79 | 1.52

Table 7.3 Percentage errors as a function of fault distance and fault resistance for the ANN chosen for double line (phase-to-phase) fault location

Serial No | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 20 Ω] | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 60 Ω]
1 | 24 | 24.03 | 0.012 | 49 | 50.16 | 0.29
2 | 74 | 74.29 | 0.12 | 99 | 100.42 | 0.74
3 | 124 | 124.57 | 0.123 | 149 | 151.03 | 1.11
4 | 174 | 174.13 | 0.038 | 198 | 200.89 | 0.55
5 | 224 | 224.74 | 0.265 | 248 | 254.19 | 1.63







Table 5.5 Percentage errors as a function of fault distance and fault resistance for the ANN chosen for three phase fault location

Serial No | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 20 Ω] | Fault Distance (km) | Measured Fault Location (km) | Percentage Error [Rf = 60 Ω]
1 | 24 | 24.41 | 0.16 | 49 | 50.31 | 0.37
2 | 74 | 74.16 | 0.046 | 99 | 102.02 | 1.009
3 | 124 | 124.42 | 0.25 | 149 | 151.27 | 0.69
4 | 174 | 174.59 | 0.20 | 198 | 200.89 | 0.53
5 | 224 | 224.36 | 0.1433 | 248 | 252.74 | 1.18

Acknowledgment
I am very grateful to my institute, Bharati Vidyapeeth Deemed University College of Engineering, Pune, and to my guide Prof. Rahul S. Desai, Assistant Professor, as well as the other faculty and associates of the Electrical Engineering department who directly or indirectly helped me with this work. This work was done by a research scholar of the Department of Electrical Engineering, Bharati Vidyapeeth Deemed University College of Engineering, Pune.

CONCLUSIONS
This thesis worked on finding different types of fault in transmission lines with the help of two different techniques: the neural network is used to identify the type of fault, while the wavelet transform is used to find the place of the fault. All types of fault are studied and modelled in this thesis, and both the place and the type of fault can be chosen in the model. All modelling is done considering a transmission line of 300 km length. As seen in the modelling, we used the (10-20-10-5-5) neural network, which means it has 10 inputs, 20 neurons in the first hidden layer, 10 neurons in the second hidden layer, a 5-neuron output layer, and 5 outputs. The shape of the neural network can differ depending on the type of network, but this topology is used here. The important contribution of this thesis is finding the place of the fault, which has been done here.
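As a rough illustration of this 10-20-10-5 topology, the following MATLAB Neural Network Toolbox sketch builds and trains such a network; the training data and parameter values are placeholders, not the data used in this thesis.

% Minimal sketch: feedforward network with 10 inputs, hidden layers of
% 20 and 10 neurons, and a 5-neuron output layer (placeholder data).
X = rand(10, 500);                 % 500 placeholder input patterns
T = rand(5, 500);                  % matching placeholder targets
net = feedforwardnet([20 10]);     % two hidden layers: 20 and 10 neurons
net.trainParam.epochs = 1000;      % illustrative training settings
net.trainParam.goal   = 1e-5;
net = train(net, X, T);            % output layer sized from T (5 outputs)
Y = net(X);                        % network response to the inputs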

REFERENCES:
[1] Das R, Novosel D, Review of fault location techniques for transmission and sub transmission lines. Proceedings of 54th
Annual Georgia Tech Protective Relaying Conference, 2000.
[2] IEEE guide for determining fault location on AC transmission and distribution lines. IEEE Power Engineering Society Publ., New
York, IEEE Std C37.114, 2005.
[3] Saha MM, Das R, Verho P, Novosel D, Review of fault location techniques for distribution systems, Proceedings of Power
Systems and Communications Infrastructure for the Future Conference, Beijing, 2002, 6p.
[4] Eriksson L, Saha MM, Rockefeller GD, An accurate fault locator with compensation for apparent reactance in the fault resistance
resulting from remote-end feed, IEEE Trans on PAS 104(2), 1985, pp. 424-436.
[5] Saha MM, Izykowski J, Rosolowski E, Fault Location on Power Networks, Springer publications, 2010.
[6] Magnago FH, Abur A, Advanced techniques for transmission and distribution system fault location, Proceedings of CIGRE
Study committee 34 Colloquium and Meeting, Florence, 1999, paper 215.
[7] Tang Y, Wang HF, Aggarwal RK et al., Fault indicators in transmission and distribution systems, Proceedings of International
conference on Electric Utility Deregulation and Restructuring and Power Technologies DRPT, 2000, pp. 238-243.
[8] Reddy MJ, Mohanta DK, Adaptive-neuro-fuzzy inference system approach for transmission line fault classification and location
incorporating effects of power swings, Proceedings of IET Generation, Transmission and Distribution, 2008, pp. 235-244.
[9] Alessandro Ferrero, Silvia Sangiovanni, Ennio Zappitelli, A fuzzy-set approach to fault-type identification in digital relaying,
Transmission and Distribution conference, Proceedings of the IEEE Power Engineering Society, 1994, pp. 269-275.
[10] Cook V, Fundamental aspects of fault location algorithms used in distance protection, Proceedings of IEE Conference 133(6),
1986, pp. 359-368.

[11] Cook V, Analysis of Distance Protection, Research Studies Press Ltd., John Wiley & Sons, Inc., New York, 1985.
[12] Network Protection & Automation Guide, T&D Energy Automation & Information, Alstom, France.
[13] Wright A, Christopoulos C, Electrical Power System Protection, Chapman & Hall publications, London, 1993.
[14] Ziegler G, Numerical Distance Protection, Principles and Applications, Siemens AG, Publicis MCD Verlag, Erlangen, 2006.
[15] Djuric MB, Radojevic ZM, Terzija VV, Distance Protection and fault location utilizing only phase current phasors, IEEE
Transactions of Power Delivery 13(4), 1998, pp. 1020-1026.
[16] Eriksson L, Saha MM, Rockefeller GD, An accurate fault locator with compensation for apparent reactance in the fault
resistance resulting from remote-end feed, IEEE Trans on PAS 104(2), 1985, pp. 424-436.
[17] Kasztenny B, Sharples D, Asaro V, Distance Relays and capacitive voltage transformers balancing speed and transient
overreach, Proceedings of 55th Annual Georgia Tech Protective Relaying Conference, 2001.
[18] Zhang Y, Zhang Q, Song W et al., Transmission line fault location for double phaseto- earth fault on non-direct-ground neutral
system, IEEE Transactions on Power Delivery 15(2), 2000, pp. 520-524.
[19] Girgis AA, Hart DG, Peterson WL, A new fault location techniques for two and three terminal lines, IEEE Transactions on
Power Delivery 7(1), 1992, pp. 98-107.
[20] Saha MM, Izykowski J, Rosolowski E, A method of fault location based on measurements from impedance relays at the line ends, Proceedings of the 8th International Conference on Developments in Power Systems Protection DPSP, IEE CP500, 2004, pp. 176-179.
[21] Wanjing Xiu, Yuan Liao, Accurate transmission line fault location considering shunt capacitances without utilizing line
parameters, Electric Power components and Systems, 2012.
[22] Yuan Liao, Generalized fault location methods for overhead electric distribution systems, IEEE Transactions on Power
Delivery, vol. 26, no. 1, pp. 53-64, Jan 2011.
[23] Yuan Liao, Ning Kang, Fault Location algorithms without utilizing line parameters based on distributed parameter line model,
IEEE Transactions on Power Delivery, vol. 24, no. 2, pp. 579-584, Apr 2009.
[24] Karl Zimmerman, David Costello, Impedance-based fault location experience, Schweitzer Engineering Laboratories, Inc.
Pullman, WA USA.
[25] T. Takagi, Y. Yamakoshi, M. Yamaura, R. Kondou, and T. Matsushima, Development of a New Type Fault Locator Using the
One-Terminal Voltage and Current Data, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-101, No. 8, August 1982,
pp. 2892-2898.
[26] Edmund O. Schweitzer, III, A Review of Impedance-Based Fault Locating experience, Proceedings of the 15th Annual
Western Protective Relay Conference, Spokane, WA, October 24-27, 1988.
[27] Aurangzeb M, Crossley PA, Gale P, Fault location using high frequency travelling waves measured at a single location on
transmission line, Proceedings of 7th International Conference on Developments in Power System Protection DPSP, IEE CP479, 2001, pp. 403-406.
[28] Bo ZQ, Weller G, Redfern MA, Accurate fault location technique for distribution system using fault-generated high frequency
transient voltage signals, IEEE Proceedings of Generation, Transmission and Distribution 146(1), 1999, pp. 73-79.
[29] Silva M, Oleskovicz M, Coury DV, A fault locator for transmission lines using travelling waves and wavelet transform theory,
Proceedings of 8th International conference on Developments in Power System Protection DPSP, IEE CP500, 2004, pp. 212-215.
[30] El-Sharkawi M, Niebur D, A tutorial course on artificial neural networks with applications to Power systems, IEEE Publ. No.
96TP 112-0, 1996.
[31] Pao YH, Sobajic DJ, Autonomous Feature Discovery of Clearing time assessment, Symposium of Expert System Applications
to Power Systems, Stockholm Helsinki, Aug 1988, pp. 5.22-5.27.
[32] Dalstein T, Kulicke B, Neural network approach to fault classification for highspeed protective relaying, IEEE Transactions on
Power Delivery, vol. 4, 1995, pp. 1002 1009.
[33] Kezunovic M, Rikalo I, Sobajic DJ, Real-time and Off-line Transmission Line Fault Classification Using Neural Networks,
Engineering Intelligent Systems, vol. 10, 1996, pp. 57-63.
[34] Bouthiba T, Fault location in EHV transmission lines using artificial neural networks, Int. J. Appl. Math. Comput. Sci., 2004,
Vol. 14, No. 1, pp. 69-78.
[35] Sanaye-Pasand M, Kharashadi-Zadeh H, An extended ANN-based high speed accurate distance protection algorithm, Electric
Power and Energy Systems, vol. 28, no. 6, 2006, pp. 387-395.
[36] Bhalja B.R, Maheshwari R.P., High resistance faults on two terminal parallel transmission line: Analysis, simulation studies, and
an adaptive distance relaying scheme, IEEE Trans. Power Delivery, vol. 22, no. 2, 2007, pp. 801-812.
[37] Venkatesan R, Balamurugan B, A real-time hardware fault detector using an artificial neural network for distance protection,
IEEE Trans. on Power Delivery, vol. 16, no. 1, 2007, pp. 75-82.
[38] Lahiri U, Pradhan A.K, Mukhopadhyaya S, Modular neural-network based directional relay for transmission line protection,
IEEE Trans. on Power Delivery, vol. 20, no. 4, 2005, pp. 2154-2155.
[39] Cichoki A, Unbehauen R, Neural networks for optimization and signal processing, John Wiley & Sons, Inc., 1993, New York.
[40] Haykin S, Neural Networks. A comprehensive foundation, Macmillan Collage Publishing Company, Inc., 1994, New York.

[41] Kezunovic M, A survey of neural net applications to protective relaying and fault analysis. International Journal of Engineering
Intelligent Systems for Electronics, Engineering and Communications 5(4), 1997, pp. 185-192.
[42] El-Sharkawi M, Niebur D, A tutorial course on artificial neural networks with applications to Power systems, IEEE Publ. No.
96TP 112-0, 1996.
[43] Akke M, Thorp JT, Some improvements in the three-phase differential equation algorithm for fast transmission line protection,
IEEE Transactions on Power Delivery, vol. 13, 1998, pp. 66-72.
[44] Howard Demuth, Mark Beale, Martin Hagan, The MathWorks users guide for MATLAB and Simulink, Neural Networks
Toolbox 6.

[45] S.M. El Safty and M.A. Sharkas, Identification of Transmission line faults using Wavelet Analysis, IEEE Transactions on
Industrial Applications, ID: 0-7803-8294-3/04, 2004.
[46] Fernando H. Magnago and Ali Abur, Fault Location Using Wavelets, IEEE Transactions on Power Delivery, Vol. 13, No. 4,
pp.1475-1480,1998.
[47] Amara Graps, An Introduction to Wavelets, IEEE Computational Science & Engineering, pp.50-61, 1995.
[48] Matthew N.O. Sadiku, Cajetan M. Akujuobi and Raymond C. Garcia, An Introduction to Wavelets in Electromagnetics, IEEE Microwave Magazine, pp. 63-72, 2005; Ching-Lien Huang, Application of Morlet Wavelets to Supervise Power System Disturbances, IEEE Transactions on Power Delivery, Vol. 14, No. 1, pp. 235-243, 1999.
[49] R.N. Mahanty, P.B. Dutta Gupta, A fuzzy logic based fault classification approach using current samples only, EPSR, pp. 501-507, 14 Feb 2006.
















A Network Overview of Massive MIMO for 5G Wireless Cellular: System
Model and Potentials
Ramya Ranjan Choudhury
Assistant Professor (ETC), Trident Academy of Technology, Bhubaneswar, Odisha, India
E-mail: ramyaranjan@gmail.com

Abstract - This research article presents an overview of massive MIMO systems and their signal processing applications in future networks, unlocking aspects of the fifth generation (5G) of cellular communication. The key technologies include the integration of MIMO with emerging technologies such as device-to-device support, heterogeneous networks, and base-centric architectures for the millimeter wave range, towards a future 5G wireless cellular standard. The system model is also illustrated, and a direction is identified for meeting future high data-rate and bandwidth needs by employing massive MIMO cellular networks alongside current wireless technologies.
Keywords - 5G, massive MIMO, base station, antenna arrays, D2D, millimeter wave, cell, heterogeneous network
INTRODUCTION
In communications, MIMO means multiple-input and multiple-output, realized by combinations of multiple transmitters/receivers or antennas at both ends of a digital communication system. It can be viewed as a replica of a smart antenna array group. In wireless communications, MIMO is an evolving technology that offers a considerable increase in data bandwidth without any extra transmission power. Due to these properties, MIMO technology is a vital aspect of modern cellular and wireless communication standards. Emerging fields that employ it include WiMAX, HSPA+, 5G cellular, energy-efficient satellites, etc.

Figure 1: Block diagram of SISO and MIMO systems
Massive MIMO
It has been observed that massive MIMO networks can provide higher performance than conventional multi-user MIMO, since the many antennas used are much smarter. Massive MIMO systems can be described as the multi-user MIMO scenario in which the number of terminals is much smaller than the number of BS (base station) antennas. In a rich scattering environment, the merits of massive MIMO technology can be developed further by using simple ZF (zero forcing) or MRT (maximum ratio transmission) processing. In practice, for orthogonal channels, the reception and transmission of data is limited by the channel coherence time. If more than one base station (antenna array) exists in this scenario, the channels to the various machines remain close to orthogonal, preserving optimal multiplexing. It can be argued that, in the current context of disruptive emerging technologies, massive MIMO is the best choice for the next generation of wireless evolution towards 5G.

MASSIVE-MIMO MODELLING FOR 5G
Let us consider a massive MIMO downlink system with a single BS (base station) and N users, where the BS has $A_T$ antennas for transmission and user $k$ has $A_R^k$ antennas for reception.


Figure 2: Massive-MIMO system model with k users and N base stations
If $d_k$ is the data stream of the $k$-th user, the total number of data streams (the sum over all $N$ users) can be written as

\[ d = \sum_{k=1}^{N} d_k \]

The total number of receive antennas is given by

\[ A_R = \sum_{k=1}^{N} A_R^k \]

Clearly, we have chosen $A_R > A_T$. Using a fading channel between the common BS and the massive MIMO users, the channel matrix of the $k$-th user is given by $H_k \in \mathbb{C}^{A_R^k \times A_T}$. It is assumed that the channel $H_k$ is quasi-static in nature, i.e., constant over the interval considered. Let $s_k \in \mathbb{C}^{d_k}$ be the transmit signal of the $k$-th user; the receive matrix is then given by $P_k \in \mathbb{C}^{A_R^k \times d_k}$. If $w_k$ is the white Gaussian noise of the channel, the total received signal power $P_R$ is given by

\[ P_R = \sum_{k=1}^{N} \mathbb{E}\left[ \left\| P_k^{H} \left( H_k \sum_{i=1}^{N} M_i s_i + w_k \right) \right\|^2 \right] \]

where $P_k$ is the receive matrix of the $k$-th user and $M_i$ is the beamforming matrix of the $i$-th user in the antenna array. Clearly, $M_i \in \mathbb{C}^{A_T \times d_i}$.



Figure 3: Massive-MIMO services provided to number of users by employing 2048, 4096 and 8192 Antenna Arrays (AA)
Massive MIMO proposes, for this model, employing a very large number of antennas to multiplex information signals to several machines, utilizing device-to-device (D2D) links on each time-frequency access scheme (TDM/FDD); the focus must be on optimizing the energy radiated towards the intended directions while minimizing intra- and inter-cell interference. Figure 3 highlights the comparison of cellular services provided, in terms of data rate gain, for various antenna arrays in a massive MIMO application against a 4x4 baseline for subscribers in a single cell cluster. Deploying 8192 antennas in a massive MIMO system gives the highest user efficiency. Service with 2048 antennas corresponds to the classically adopted simple MIMO scheme, where both the 5th and 50th percentile of full efficiency are achieved, and the system with 4096 antennas is the intermediate case with near-optimal service. Thus, by increasing the number of antennas in the array together with advanced signal processing tools, a huge amount of information could be transmitted, which will be the requirement of 5G cellular.

MASSIVE-MIMO AND 5G CELLULAR

Present research challenges in massive MIMO include estimating how critical coherent channel knowledge is. Propagation impairments for massive MIMO in the present context could also be calculated experimentally, testing for channel orthogonality; this could be implemented at lower cost in terms of the hardware power consumption of each antenna. In the present scenario, 5G has several merits over 4G:
i) Non-bulky equipment
ii) Directive antennas
iii) Coherent angle spread of the propagation

Single-user MIMO uses a limited number of antennas, which fits the current cellular standard; massive MIMO, by contrast, is not so limited if TDD (Time Division Duplex) is incorporated to enable channel characterization. In this scenario, massive MIMO can employ multiple distributed antennas with which a small town, a university campus or a city could be covered.


Figure 4: Integration of various emerging technologies towards 5G wireless system.
A. Millimeter Wave (mm-Wave)

The frequencies currently in use for cellular lie in the range of roughly 600 MHz to 1600 MHz. This small range can hardly be exploited for future-generation wireless access systems merely by refarming it. Higher spectrum in the GHz and THz ranges could be deployed by utilizing cognitive radio techniques. This highly potential field is exploited at wavelengths in the millimeter range, hence the term millimeter wave. Today, different cellular and wireless firms want a radical increase in capacity for the emerging traffic that has to be carried in the coming years, beyond the fourth generation wireless standard Long Term Evolution (4G LTE). Around 2020, cellular networks will face very high speech and data traffic, and thereby higher capacity demands for data rate and bandwidth. For the future 5G wireless generation, mobile data rates must increase up to the multi-gigabit-per-second (Gbps) range, which can only be achieved by using the millimeter wave spectrum with steerable antennas. This would support 5G cellular backhaul communications in addition to integration of worldwide fidelity in wireless services. Since massive MIMO is a spatial processing technique with orthogonal polarization and adaptive beamforming, these smaller millimeter wavelengths are suitable frequencies for it. Highly populated geographical regions could be covered by 4G+ to 5G technologies by setting up backhaul links using massive MIMO where the bandwidth challenge is greatest. The cost per base station will reduce significantly due to innovative co-operative MIMO architectures, thereby minimizing interference between relays and serving base stations.

Figure 5: A satellite-cellular communication system showing uplink and downlink

The wireless operators will reduce cellular coverage areas to pico and femto cells to generate spatial reuse. Since cellular networks will face gigantic traffic (data and speech) over the next ten to twenty years, a huge challenge will be the harmonization of frequency bands by the ITU up to the GHz and THz ranges; this will lower the cost of service and roaming. Mobile network operators are planning to fulfil future needs by combining resources to share spectrum, a solution which would be beneficial beyond 2020.

B. Base-centric architectures.

For 5G evolution, base-centric architectures will have a major role to play in wireless communication. The uplink and downlink concepts must be integrated with the wireless data channels for better servicing of data flows with different priorities towards the sets of nodes within the wireless network.


Figure 6: Base-centric architecture employing small cell for N users.

Wireless designs in this concept are based on the axiomatic role of cells as the basic building-block units of radio network access. In a base-centric design, both control and traffic signals are transmitted over the same downlink and its corresponding uplink connection; for the denser networks of the future, some vital changes must be made for 5G. The increase in transmit power at base stations is a major issue for denser coverage areas. This base-centred architecture would employ massive MIMO for decoupling the uplink and downlink, and would thus allow the link data to flow through various sets of nodes. Virtual radio access networks (RAN) will have nodes, with hardware allocated for handling the processing associated with each node. Dynamic hardware resource allocation in a base-centred mode must depend on a matrix defined by the network operator. Architectural network design in this context should compensate for multi-hop by imposing partial centralization via aggregation of resources.

C. Device-to-Device (D2D) Native Support

Cell phones and local small-cell wireless networks are deciding factors for smart proxy call caching, redefining new aspects of device support through the use of massive MIMO. 5G wireless cellular must employ base-centric architectural structures and invent new device support so that human devices can easily communicate, even conveying virtual emotions.

Table 1: Features of Device-to-Device support

D2D Support | Features and examples
Real-time operation with low latency | Demands reliable data transfer within a given time; vehicle-to-device connectivity improving traffic via alert and control messages
Massive device inter-connection | Some D2D services might require over 10 device connections; devices typically operating at hundreds per base station, e.g. smart grids and meter sensors
Higher reliability linkage | Safer and more reliable than wired standards; virtual and operational wireless link every time and everywhere



Figure 7: Device-to-device Ad-hoc connections in present scenario

Data transmitted over the several possible contexts of heterogeneous networks relies greatly on the sets of device-to-device support, which is also discussed in the next section. These network sets must provide full connectivity to a given machine within a cellular session. Wireless systems have become necessities like water and electricity, and so must be treated as commodities, which brings new types of requirements; these can be met by employing massive MIMO modelling. In voice-centric systems, a call is established even when the two parties are in close proximity, and in such co-location situations several devices often share multimedia content. A single hop is usually established, yet the infrastructure path wastes signalling resources: transmission powers of several watts on both downlink and uplink are consumed to achieve a few milliwatts per device. The battery therefore drains, and interference increases because the same signalling resources are occupied everywhere. This can be minimized by focusing on the accompanying overheads and controlling the estimation of the used wireless channel by employing massive MIMO, which can enhance the capacity for 5G-based D2D. Researchers studying these 4G+ systems must ensure that a green network is designed with public safety in mind.

D. Heterogeneous Networks

Base stations are rapidly becoming denser, driven by the rise of heterogeneous networks. While heterogeneous networks were already standardized in 4G, the architecture for next-generation massive MIMO deployments would be designed to support 5G networks. Heterogeneous networks represent a novel networking paradigm based on the idea of deploying short-range, low-power, and low-cost base stations that operate in conjunction with the main macro-cellular network infrastructure. 5G networks would provide high data rates, allow offloading traffic from the macro cell, and provide dedicated capacity to homes, enterprises, or urban hotspots. As the number of wireless cellular devices continues to explode, the traffic demand in wireless communication systems is also increasing; it is expected that traffic demand will increase up to twenty times by 2020 compared with 2014.

One of the main challenges of heterogeneous networks is planning and managing multi-layer, dense networks with high traffic loads. The tools used today for network planning, interference management and network optimization require too much manual intervention and are not scalable enough for advanced heterogeneous networks. Self-organizing networks (SON) enable operators to automatically manage operational aspects and optimize performance in their networks, and to avoid squandering staff resources on micromanaging their radio access networks. In denser networks, automation reduces the potential for errors and frees up precious resources to focus on the more important activities of network design, management and operation. Mobile networks continue to become faster and capable of transporting more traffic, thanks to the increased efficiency and wider deployment of 3G and 4G technologies now and 5G in the future. SON also introduces network-performance optimization processes that are too granular or too fast for manual intervention, and these bring benefits not only to multi-layer networks but also to the macro-dominated networks of today. SON can be thought of as a toolbox of solutions.

Yet performance improvements are not sufficient to meet the increase in traffic load driven by more subscribers, more applications, and more devices. To meet subscribers' demand for ubiquitous and reliable broadband connections, operators have to do more than expand their networks; they have to embrace a deep, qualitative change in the way they plan, deploy and operate their networks. Heterogeneous networks are central to this change: they capture the multiple, convergent dimensions along which networks have started to evolve gradually. The move toward heterogeneous networks is driven by a combination of market forces, capacity limitations in the existing infrastructure, and new technologies that enable operators to deploy and manage dense, multi-layer networks that increasingly include small cells. Operators can choose which ones to adopt and when, depending on their needs, their strategies, and the maturity of the solutions. SON standardization efforts started with 3GPP Release 8 but are still ongoing, so there is a varying level of maturity among tools, in terms of both specifications and the commercial availability of products. The focus of SON

standardization has gradually moved from the macro-cell layer to the small-cell layer, as the small-cell market expands and ecosystem
players encounter the challenges that small cells introduce in mobile network.


Figure 8: A typical heterogeneous network

Operators expect Heterogeneous Networks to deliver a higher capacity density, increase spectrum efficiency, and improve the
subscriber experience, while lowering the per-bit cost of transporting traffic. Achieving these goals is necessary, but it will not be
easy. Operators and vendors are jointly working to ensure a smooth transition to Heterogeneous Networks, but the process will require
time, effort and the establishment of a robust ecosystem. In the process, mobile networks will become more complex.

E. Multiple Cell-Cluster and applications to Smarter Machines (Wireless devices)

For the multi-user MIMO downlink in a single and in clustered multiple cells, we consider the situation in which the total number of
receive antennas of the served users is larger than the number of transmit antennas of the serving base station (BS).

Figure 9: Clustered cellular scenario with a virtual controller for full Base Station coordination within each cluster.

This situation is relevant for many scenarios. For instance, in multi-user MIMO broadcast channels, the BS simultaneously serves as many users as possible and hence a large total number of receive antennas. In the clustered cellular scenario, each cluster has a virtual controller owing to the full BS coordination within the cluster, which is shown in figure 9. Newer technologies which could be added to current scenarios are Li-Fi (Light Fidelity), WiZig+, etc. It must be noted that the power consumption of assembled A/D (analog-to-digital) converters at frequencies from 300 MHz to 30 GHz has been considered in this section. It has been found that these cost- and energy-related parts must adopt massive MIMO technology to achieve higher efficiency.

of these vital parametric changes is solved by objectives of massive MIMO counterparts. It is argued that 5G systems must not follow
2G-4G network designs, but must integrate previously used architectures into new paradigms to exploit Machine intelligence by
layering various protocol stack for Device-to-Device (D2D) connectivity or by introducing smart caching discussed in previous
section .While this Each of these designs require a change at the layered node level component change by implying architectural level
multi-hop for massive MIMO based next generation wireless cellular Earlier the generations from 2G to 4G were built on the design
primitive by completing control at the infrastructural level of site. Some probabilistic approaches can be assumed b unleashed by
allowing the devices to play smart roles and, then think to enhance 5Gs design accounting for an increase in machines intelligence at
end users level. These technologies are named as
a. Higher interference rejection.
b. Intelligence for smarter machines
c. User level local caching

CONCLUSION
From this review paper it is concluded that the adoption of massive MIMO for 5G is an evolutionary challenge which would bring major changes to component design for cellular systems. The graphical study of antenna arrays shows that more and more users can be provided with services in denser cellular networks. The system model describes how emerging technologies such as these would have potential functions for transmission and reception purposes. The massive MIMO technique would bring more efficiency to present cellular systems as the number of antennas is increased, together with the advanced signal processing tools laid out in the downlink model. Massive MIMO may require major architectural changes, in particular in the design of macro base stations, and it may also lead to new types of deployments.


REFERENCES:
[1] H. Huh, G. Caire, H. C. Papadopoulos, and S. A. Ramprashad, Achieving Massive MIMO Spectral Efficiency with a Not-so-
Large Number of Antennas IEEE Trans. Wireless Communications, vol. 11, no. 9, pp. 3226-3239, Sept. 2012.

[2] R. C. de Lamare Massive MIMO Systems: Signal Processing Challenges and Research Trends, URSI Radio Science Bulletin,
Dec. 2013.

[3] E. G. Larsson, F. Tufvesson, O. Edfors, and T. L. Marzetta, Massive MIMO for Next Generation Wireless Systems, IEEE
Commun. Mag., vol. 52, no. 2, pp. 186-195, Feb. 2014.

[4] Rangan, S.; Rappaport, T.S.; Erkip, E. "Millimeter-Wave Cellular Wireless Networks: Potentials and Challenges", Proceedings of
the IEEE, On page(s): 366 - 385 Volume: 102, Issue: 3, March 2014

[5] F. Rusek, D. Persson, B. K. Lau, E. G. Larsson, T. L. Marzetta, O. Edfors, and F. Tufvesson, Scaling up MIMO: Opportunities
and Challenges with Very Large Arrays, IEEE Signal Proces. Mag., vol. 30, no. 1, pp. 40-46, Jan. 2013.

[6] C. Studer and E. G. Larsson, PAR-Aware Large-Scale Multi-User MIMO-OFDM Downlink, IEEE J. Sel. Areas Commun, vol.
31, no. 2, pp. 303-313, Feb. 2013.

[7] O. N. Alrabadi, E. Tsakalaki, H. Huang, and G. F. Pedersen, Beamforming via Large and Dense Antenna Arrays above a
Clutter, IEEE J. Sel. Areas Commun, vol. 31, no. 2, pp. 314-325, Feb. 2013.

[8] R. Aggarwal, C. E. Koksal, and P. Schniter, On the Design of Large Scale Wireless Systems, IEEE J. Sel. Areas Commun, vol.
31, no. 2, pp. 215-225, Feb. 2013.


[9] B. Yin, M. Wu, G. Wang, C. Dick, J. R. Cavallaro, and C. Studer, A 3.8 Gb/s Large-scale MIMO Detector for 3GPP LTE-
Advanced, Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2014.

[10] Federico Boccardi, Robert W. Heath Jr., Angel Lozano,Thomas L. Marzetta, Bell and Petar Popovski, Five Disruptive
Technology Directions for 5G, Communications Magazine, IEEE Volume:52 , Issue: 2, February 2014 Page(s):74 - 80

[11] R.C. de Lamare, R. Sampaio-Neto, Minimum mean-squared error iterative successive parallel arbitrated decision feedback
detectors for DS-CDMA systems, IEEE Trans. Commun., vol. 56, no. 5, May 2008,

[12] J. Zhang, X. Yuan, and L. Ping, Hermitian precoding for distributed MIMO systems with individual channel state information,
IEEE J. Sel. Areas Commun., vol. 31, no. 2, pp. 241-250, Feb. 2013.

[13] F. Rusek, D. Persson, B. Lau, E. Larsson, T. Marzetta, O. Edfors and F. Tufvesson, Scaling up MIMO: Opportunities, and
challenges with very large arrays, IEEE Signal Processing Mag., vol. 30, no. 1, pp.40-60, Jan. 2013.

[14] T. Rappaport et al., Millimeter wave mobile communications for 5G cellular: It will work!, IEEE Access, vol. 1, pp. 335-349, 2013.

[15] J. Jose, A. Ashikhmin, T. L. Marzetta, S. Vishwanath, Pilot Contamination and Precoding in Multi-Cell TDD Systems, IEEE
Transactions on Wireless Communications, vol.10, no.8, pp. 2640-2651, August 2011.

[16] A. Ozgur, O. Leveque, and D. Tse, Spatial Degrees of Freedom of Large Distributed MIMO Systems and Wireless Ad Hoc
Networks, IEEE J. Sel. Areas Commun, vol. 31, no. 2, pp. 202-214, Feb. 2013.

[17] H. Q. Ngo, E. G. Larsson, and T. L. Marzetta, Energy and spectral efficiency of very large multiuser MIMO systems, IEEE
Trans. Commun., vol. 61, no. 4, pp. 1436-1449, Apr. 2013.

[18] H. Yang and T. L. Marzetta, Performance of conjugate and zeroforcing beamforming in large-scale antenna systems, IEEE J.
Sel. Areas Commun., vol. 31, no. 2, pp. 172-179, Feb. 2013.

[19] T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed., Englewood Cliffs, NJ, USA: Prentice-Hall, 2002.

[20] P. Li and R. D. Murch, Multiple Output Selection-LAS Algorithm in Large MIMO Systems, IEEE Commun. Lett., vol. 14, no.
5, pp. 399-401, May 2010.
[21] E. Bjornson, M. Kountouris, M. Debbah, Massive MIMO and Small Cells: Improving Energy Efficiency by Optimal Soft-Cell
Coordination, in Proc. ICT, May 2013

[22] J. W. Choi, A. C. Singer, J Lee, N. I. Cho, Improved linear softinput soft-output detection via soft feedback successive
interference cancellation, IEEE Trans. Commun., vol.58, no.3, pp.986-996, March 2010.

[23] M. J. Wainwright, T. S. Jaakkola, and A.S. Willsky, A new class of upper bounds on the log partition function ,IEEE Trans.
Information Theory, vol. 51, no. 7, pp. 2313 - 2335, July 2005.

[24] H. Wymeersch, F. Penna and V. Savic, Uniformly Reweighted Belief Propagation for Estimation and Detection in Wireless
Networks, IEEE Trans. Wireless Communications, vol. PP, No. 99, pp. 1-9, Feb. 2012.
[25] T. S. Rappaport, E. Ben-Dor, J. N. Murdock, and Y. Qiao, 38 GHz and 60 GHz Angle-dependent Propagation for Cellular and Peer-to-peer Wireless Communications, in Proc. IEEE Int. Conf. Commun., Jun. 2012, pp. 4568-4573.

[26] F. Rusek, D. Persson, B. Lau, E. Larsson, T. Marzetta, O. Edfors, and F. Tufvesson, Scaling up MIMO: Opportunities and challenges with very large arrays, IEEE Signal Process. Mag., vol. 30, no. 1, pp. 40-60, Jan. 2013.


[27] A. F. Molisch, M. Steinbauer, M. Toeltsch, E. Bonek, and R. Thoma, Capacity of MIMO systems based on measured wireless channels, IEEE JSAC, vol. 20, no. 3, pp. 561-569, Apr. 2002.

[28] S. Rajagopal, S. Abu-Surra, Z. Pi, and F. Khan, Antenna array design for multi-Gbps mmwave mobile broadband communication, in Proc. IEEE Global Telecommun. Conf., Dec. 2011, pp. 1-6.

[29] Spatial Channel Model for Multiple Input Multiple Output(MIMO) Simulations (Release 10), Standard 3GPP TR 25.996, Mar.
2011.
[30] T. L. Marzetta, Non-cooperative cellular wireless with unlimited numbers of base station antennas, IEEE Trans. on Wireless
Communications, Vol. 9, No. 11, pp. 3590-3600, Nov. 2010.

[31] Guidelines for Evaluation of Radio Interference Technologies for IMT-Advanced, Standard ITU-R M.2135, 2008



















Various Issues in Computerized Speech Recognition Systems
Shally Gujral, Monika Tuteja, Baljit Kaur
Electronics and Communication Department, PTU, Jalandhar, Anand College of Engineering and Management, Kapurthala
E-mail: gujralshally81@gmail.com, 09878235636

INTRODUCTION
Speech recognition is the translation of spoken words into text. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", "speech to text", or just "STT". Some SR systems use "training", where an individual speaker reads sections of text into the SR system. These systems analyze the person's specific voice and use it to fine-tune the recognition of that person's speech, resulting in more accurate transcription. Speech recognition is thus the process of converting a speech signal to a sequence of words by means of an algorithm implemented as a computer program.
1.1. Basic Model of Speech Recognition: Research in speech processing and communication was, for the most part, motivated by people's desire to build mechanical models to emulate human verbal communication capabilities. Speech is the most natural form of human communication, and speech processing has been one of the most exciting areas of signal processing [1]. The main goal of the speech recognition area is to develop techniques and systems for speech input to machines. Speech is the primary means of communication between humans. This paper reviews major highlights of the last few decades in the research and development of speech recognition, so as to provide a technological perspective. Although much technological progress has been made, many research issues remain to be tackled.


Fig 1 A Speech recognition system

TYPES OF SPEECH RECOGNITION SYSTEMS

A. Speaker dependent - A number of voice recognition systems are available on the market. The most powerful can recognize thousands of words. However, they generally require an extended training session during which the computer system becomes accustomed to a particular voice and accent; such systems are said to be speaker dependent [2]. A speaker-dependent system is developed to operate for a single speaker. These systems are usually easier to develop, cheaper to buy and more accurate, but not as flexible as speaker-adaptive or speaker-independent systems. Speaker-dependent software works by learning the unique characteristics of a single person's voice, in a way similar to voice recognition. New users must first "train" the software by speaking to it, so the computer can analyze how the person talks. This often means users have to read a few pages of text to the computer before they can use the speech recognition software.
B. Speaker independent - A speaker-independent system is developed to operate for any speaker of a particular type (e.g. American English). These systems are the most difficult to develop, are the most expensive, and their accuracy is lower than that of speaker-dependent systems; however, they are more flexible. Speaker-independent software is designed to recognize anyone's voice, so no training is involved. This means it is the only real option for applications such as interactive voice response systems, where businesses cannot ask callers to read pages of text before using the system. The downside is that speaker-independent software is generally less accurate than speaker-dependent software.
C. Speaker adaptive - A third variation of speaker models is now emerging, called speaker adaptive. Speaker-adaptive systems usually begin with a speaker-independent model and adjust the model more closely to each individual during a brief training period.
3. AUTOMATIC SPEECH RECOGNITION SYSTEM CLASSIFICATION:
The following tree structure emphasizes the speech processing applications. Depending on the chosen criterion, Automatic Speech Recognition systems can be classified as shown in figure 2.



Fig. 2 Speech Processing Classification
4. RELEVANT ISSUES OF ASR DESIGN: The main issues on which recognition accuracy depends are presented in Table 1.
Table 1: Relevant issues of ASR design
Environment | Type of noise; signal/noise ratio; working conditions
Transducer | Microphone; telephone
Channel | Band amplitude; distortion; echo
Speakers | Speaker dependence/independence; sex; age; physical and psychical state
Speech styles | Voice tone (quiet, normal, shouted); isolated words or continuous speech; read or spontaneous speech; speed
Vocabulary | Characteristics of available training data; specific or generic vocabulary


Table 2 Speech Recognition Techniques

Technique | Representation | Recognition Function
Acoustic-phonetic approach | Spectral analysis with feature detection; phonemes/segmentation and labelling | Probabilistic lexical access procedure
Pattern recognition approach (template, DTW, VQ) | Speech samples, pixels and curves; sets of sequences of spectral vectors; sets of spectral vector features | Correlation distance measure; dynamic warping optimal algorithm; clustering function
Neural network | Speech features/perceptrons/rules/units/procedures | Network function
Support vector machine | Kernel-based features | Maximal margin hyperplane, radial basis
Artificial intelligence approach | Knowledge based | -

5. APPROACHES TO SPEECH RECOGNITION: Basically, there exist three approaches to speech recognition [3]: A. Acoustic Phonetic Approach, B. Pattern Recognition Approach, and C. Artificial Intelligence Approach.

A. ACOUSTIC PHONETIC APPROACH:

The earliest approaches to speech recognition were based on finding speech sounds and providing appropriate labels to these sounds.
This is the basis of the acoustic phonetic approach, which postulates that there exist finite, distinctive phonetic units (phonemes) in
spoken language and that these units are broadly characterized by a set of acoustics properties that are manifested in the speech signal
over time. Even though, the acoustic properties of phonetic units are highly variable, both with speakers and with neighbouring
sounds, it is assumed in the acoustic-phonetic approach that the rules governing the variability are straightforward and can be readily
learned by a machine. The first step in the acoustic phonetic approach is a spectral analysis of the speech combined with a feature
detection that converts the spectral measurements to a set of features that describe the broad acoustic properties of the different
phonetic units[4]. The next step is a segmentation and labelling phase in which the speech signal is segmented into stable acoustic
regions, followed by attaching one or more phonetic labels to each segmented region, resulting in a phoneme lattice characterization
of the speech. The last step in this approach attempts to determine a valid word (or string of words) from the phonetic label sequences
produced by the segmentation to labelling. In the validation process, linguistic constraints on the task (i.e., the vocabulary, the syntax,
and other semantic rules) are invoked in order to access the lexicon for word decoding based on the phoneme lattice. The acoustic
phonetic approach has not been widely used in most commercial applications [5]. Table 2 above broadly summarizes the different speech recognition techniques.
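As a toy illustration of the first step described above, the MATLAB sketch below computes two classic frame-level features, short-time energy and zero-crossing rate; the waveform, frame length and hop are assumptions chosen only for the example.

% Minimal sketch: frame-level feature detection (assumed 25 ms / 10 ms framing).
fs   = 16000;                      % sampling rate (assumed)
x    = randn(fs, 1);               % placeholder 1 s waveform
L    = round(0.025*fs);            % frame length: 25 ms
hop  = round(0.010*fs);            % frame hop: 10 ms
nFrm = floor((length(x) - L)/hop) + 1;
E = zeros(nFrm, 1); Z = zeros(nFrm, 1);
for m = 1:nFrm
    frame = x((m-1)*hop + (1:L));  % one analysis frame
    E(m) = sum(frame.^2);          % short-time energy
    Z(m) = sum(abs(diff(sign(frame)))) / (2*L);  % zero-crossing rate
end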

B. PATTERN RECOGNITION APPROACH:

The pattern-matching approach (Itakura 1975; Rabiner 1989; Rabiner and Juang 1993) involves two essential steps, namely pattern
training and pattern comparison. The essential feature of this approach is that it uses a well formulated mathematical framework and
establishes consistent speech pattern representations, for reliable pattern
comparison, from a set of labeled training samples via a formal training algorithm. A speech pattern representation can be in the form
of a speech template or a statistical model (e.g., a HIDDEN MARKOV MODEL or HMM) and can be applied to a sound (smaller
than a word), a word, or a phrase. In the pattern-comparison stage of the approach, a direct comparison is made between the unknown speech (the speech to be recognized) and each possible pattern learned in the training stage, in order to determine the identity of the unknown according to the goodness of match of the patterns. The pattern-matching approach has become the predominant method for speech recognition over the last six decades [6]. Within it, there exist four methods, discussed below:

1. Template Based Approach:

The template-based approach to speech recognition has provided a family of techniques that have advanced the field considerably during
the last decades. A collection of prototypical speech patterns are stored as reference patterns representing the dictionary of candidates
words. Recognition is then carried out by matching an unknown spoken utterance with each of these references templates and
selecting the category of the best matching pattern. Each word must have its own full reference template; template preparation and
matching become prohibitively expensive or impractical as vocabulary size increases beyond a few hundred words. One key idea in
template method is to derive typical sequences of speech frames for a pattern (a word) via some averaging procedure, and to rely on
the use of local spectral distance measures to compare patterns. Another key idea is to use some form of dynamic programming to
temporally align patterns to account for differences in speaking rates across talkers as well as across repetitions of the word by the
same talker.

2. Stochastic Approach:

Stochastic modelling [7] entails the use of probabilistic models to deal with uncertain or incomplete information. In speech
recognition, uncertainty and incompleteness arise from many sources; for example, confusable sounds, speaker variability, contextual effects, and homophone words. Thus, stochastic models are a particularly suitable approach to speech recognition. The most popular stochastic approach today is hidden Markov modelling. A hidden Markov model is characterized by a finite-state Markov chain and a set of output distributions. The transition parameters in the Markov chain model temporal variability, while the parameters in the output distributions model spectral variability. These two types of variability are the essence of speech recognition.
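To make this concrete, here is a minimal MATLAB sketch of the forward algorithm, which scores an observation sequence against a discrete-output HMM; the 2-state model parameters are invented purely for illustration.

% Minimal sketch: forward algorithm, P(O | model) for a discrete-output HMM.
A   = [0.7 0.3; 0.4 0.6];     % state transition probabilities (assumed)
B   = [0.5 0.4 0.1;           % output distribution of state 1 (assumed)
       0.1 0.3 0.6];          % output distribution of state 2 (assumed)
pi0 = [0.6; 0.4];             % initial state probabilities (assumed)
O   = [1 3 2 3];              % observed symbol sequence
alpha = pi0 .* B(:, O(1));    % initialization
for t = 2:length(O)
    alpha = (A' * alpha) .* B(:, O(t));   % induction step
end
likelihood = sum(alpha);      % termination: P(O | model)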

3. Dynamic Time Warping (DTW):

Dynamic time warping is an algorithm for measuring similarity between two sequences which may vary in time or speed. For
instance, similarities in walking patterns would be detected, even if in one video, the person was walking slowly and if in another, he
or she were walking more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW
has been applied to video, audio, and graphics; indeed, any data which can be turned into a linear representation can be analyzed with
DTW. A well known application has been automatic speech recognition, to cope with different speaking speeds. In general, DTW is a
method that allows a computer to find an optimal match between two given sequences (e.g. time series) with certain restrictions. The
sequences are "warped" non-linearly in the time dimension to determine a measure of their similarity independent of certain non-
linear variations in the time dimension. This sequence alignment method is often used in the context of hidden Markov models.
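A compact MATLAB implementation of the DTW distance between two feature sequences, written out directly rather than relying on any toolbox routine; the two test sequences are placeholders.

% Minimal sketch: dynamic time warping distance between sequences a and b.
a = [1 2 3 4 3 2]; b = [1 1 2 3 4 4 3 2];        % placeholder sequences
n = numel(a); m = numel(b);
D = inf(n+1, m+1); D(1,1) = 0;                   % cumulative cost matrix
for i = 1:n
    for j = 1:m
        cost = abs(a(i) - b(j));                 % local distance
        D(i+1,j+1) = cost + min([D(i,j+1), D(i+1,j), D(i,j)]);
    end
end
dtwDist = D(n+1, m+1);                           % optimal alignment cost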

4. Vector Quantization (VQ):

Vector Quantization (VQ) [8] is often applied to ASR. It is useful for speech coders, i.e., efficient data reduction. Since transmission
rate is not a major issue for ASR, the utility of VQ here lies in the efficiency of using compact codebooks for reference models and
codebook searches in place of more costly evaluation methods. The test speech is evaluated by all codebooks and ASR chooses the
word whose codebook yields the lowest distance measure. In basic VQ, codebooks have no explicit time information, since codebook
entries are not ordered and can come from any part of the training words. However, some indirect durational cues are preserved
because the codebook entries are chosen to minimize average distance across all training frames, and frames corresponding to longer
acoustic segments (e.g., vowels) are more frequent in the training data. Such segments are thus more likely to specify code words than
less frequent consonant frames, especially with small codebooks. Code words nonetheless exist for constant frames because such
frames would otherwise contribute large frame distances to the codebook. Often a few code words suffice to represent many frames
during relatively steady sections of vowels, thus allowing more code words to represent short, dynamic portions of the words. This
relative emphasis that VQ puts on speech transients can be an advantage over other ASR comparison methods for vocabularies of
similar words.
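A sketch of VQ-based word scoring in MATLAB, building the codebook with k-means (Statistics Toolbox); the feature dimensionality, codebook size and random data are arbitrary assumptions made for illustration.

% Minimal sketch: train a 64-entry codebook per word, then score a test
% utterance by its average nearest-code-word distance.
trainFeats = randn(2000, 12);            % placeholder training frames
K = 64;                                  % codebook size (assumed)
[~, codebook] = kmeans(trainFeats, K);   % K x 12 code words
testFeats = randn(300, 12);              % placeholder test-utterance frames
d = pdist2(testFeats, codebook);         % frame-to-code-word distances
score = mean(min(d, [], 2));             % lower score = better match
% The word whose codebook yields the lowest score is chosen.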

C. Artificial Intelligence Approach (Knowledge Based Approach):

The Artificial Intelligence approach [9] is a hybrid of the acoustic phonetic approach and pattern recognition approach. In this, it
exploits the ideas and concepts of Acoustic phonetic and pattern recognition methods. Knowledge based approach uses the
information regarding linguistics, phonetics and spectrograms. Some speech researchers developed recognition systems that used acoustic-phonetic knowledge to develop classification rules for speech sounds, but these provided little insight into human speech processing, thereby making error analysis and knowledge-based system enhancement difficult. On the other hand, a large body of linguistic and

phonetic literature provided insights and understanding to human speech processing. In its pure form, knowledge engineering design
involves the direct and explicit incorporation of experts speech knowledge into a recognition system. This knowledge is usually
derived from careful study of spectrograms and is incorporated using rules or procedures. Pure knowledge engineering was also
motivated by the interest and research in expert systems. However, this approach had only limited success, largely due to the difficulty
in quantifying expert knowledge. Another difficult problem is the integration of many levels of human knowledge: phonetics, phonotactics, lexical access, syntax, semantics and pragmatics. Likewise, combining independent and asynchronous knowledge
sources optimally remains an unsolved problem. In more indirect forms, knowledge has also been used to guide the design of the
models and algorithms of other techniques such as template matching and stochastic modelling. This form of knowledge application
makes an important distinction between knowledge and algorithms. Algorithms enable us to solve problems. Knowledge enables the
algorithms to work better. This form of knowledge based system enhancement has contributed considerably to the design of all
successful strategies reported. It plays an important role in the selection of a suitable input representation, the definition of units of
speech, or the design of the recognition algorithm itself.

D. Connectionist Approaches (Artificial Neural Networks):

The connectionist approach [10] (Lesser et al. 1975; Lippmann 1987) attempts to mechanize the recognition procedure according to the way a person applies intelligence in visualizing, analysing, and characterizing speech based on a set of measured acoustic features. Among the techniques used within this class of methods are the use of an expert system (e.g., a neural network) that integrates phonemic, lexical, syntactic, semantic, and even pragmatic knowledge for segmentation and labelling, and the use of tools such as artificial neural networks for learning the relationships among phonetic events. The focus in this approach has been mostly on the representation of knowledge and the integration of knowledge sources. This method has not been widely used in commercial systems. Connectionist modelling of speech is the youngest development in speech recognition and still the subject of much controversy.

E. Support Vector Machine (SVM):

One of the powerful tools for pattern recognition that uses a discriminative approach is the SVM [9]. SVMs use linear and nonlinear separating hyper-planes for data classification. However, since SVMs can only classify fixed-length data vectors, this method cannot be readily applied to tasks involving variable-length data classification; the variable-length data has to be transformed to fixed-length vectors before SVMs can be used. An SVM is a generalized linear classifier with maximum-margin fitting functions. This fitting function provides regularization, which helps the classifier generalize better; the classifier tends to ignore many of the features. Conventional statistical and neural network methods control model complexity by using a small number of features (the problem dimensionality or the number of hidden units), whereas an SVM controls model complexity by controlling the VC dimension of its model. This method is independent of dimensionality and can utilize spaces of very high dimension, which permits the construction of a very large number of non-linear features with adaptive feature selection performed during training.
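As a rough illustration of the fixed-length requirement and the maximum-margin classifier, the sketch below reduces each variable-length utterance to a fixed-length vector and trains a scikit-learn SVM; the reduction scheme, dimensions and random placeholder data are assumptions for illustration only.

```python
# Illustrative only: SVM classification of fixed-length speech feature vectors.
import numpy as np
from sklearn.svm import SVC

def to_fixed_length(frames):
    """Reduce a variable-length (n_frames x dim) feature sequence to one vector."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

rng = np.random.default_rng(0)
# Placeholder data: 40 utterances of varying content, each mapped to 26 dims.
X = np.stack([to_fixed_length(rng.normal(size=(30, 13))) for _ in range(40)])
y = rng.integers(0, 2, size=40)            # two word classes

clf = SVC(kernel="rbf", C=1.0)             # maximum-margin classifier
clf.fit(X, y)
print(clf.predict(X[:5]))
```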

6. CURRENT AND FUTURE USES OF SPEECH RECOGNITION SYSTEMS:
Speech recognition is currently used in many fields. The Voice Recognition System for the Visually Impaired [10] highlights the Mg Sys Visi system, which can access the World Wide Web by browsing the Internet, checking, sending and receiving email, searching the Internet, and listening to the content of the search, all through voice commands given to the system. In addition, the system is built with a translator that converts HTML code to voice, and voice to Braille and then to text again. The system comprises five modules, namely Automatic Speech Recognition (ASR), Text-to-Speech (TTS), Search engine, Print (Text-Braille) and Translator (Text-to-Braille and Braille-to-Text). Originally designed and developed for visually impaired learners, it can also be used by other users with special needs, such as the elderly and physically impaired learners. Another application is speech recognition in Radiology Information Systems: the radiology report is the fundamental means by which radiologists communicate with clinicians and patients, and the traditional method of generating reports is time consuming and expensive; recent advances in computer hardware and software technology have improved the speech recognition systems used for radiology reporting [6]. A further example is the integration of robust voice recognition and navigation systems on mobile robots [7], and there are many other fields in which speech recognition can be used.

7. CONCLUSIONS:
This paper introduces the basics of speech recognition technology and also highlights the differences between different speech recognition systems. The most common algorithms used for speech recognition are also discussed, along with their current and future uses.

REFERENCES:
[1] Dat Tat Tran, Fuzzy Approaches to Speech and Speaker Recognition, a thesis submitted for the degree of Doctor of Philosophy of the University of Canberra.


[2] R.K. Moore, "Twenty things we still don't know about speech", Proc. CRIM/FORWISS Workshop on Progress and Prospects of Speech Research and Technology, 1994.

[3] Behrang P., Dept. of Info. Science, UKM, Selangor, Malaysia. hani_p114@yahoo.com

[4] Choo W.O., UTAR, Kampar, Perak, Malaysia. kenny@yahoo.com, "Voice Recognition System for the Visually Impaired: Virtual Cognitive Approach", IEEE, 2008.

[5] Xinxin Wang, Feiran Wu, Zhiqian Ye, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China, "The Application of Speech Recognition in Radiology Information System", IEEE, 2010.

[6] Huu-Cong Nguyen, Shim-Byoung, Chang-Hak Kang, Dong-Jun Park and Sung-Hyun Han, Division of Mechanical System Eng., Graduate School, Kyungnam University, Masan, Korea, "Integration of Robust Voice Recognition and Navigation System on Mobile Robot", ICROS-SICE International Joint Conference, 2009.

[7] O. Khalifa, S. Khan, M.R. Islam, M. Faizal and D. Dol, "Text Independent Automatic Speaker Recognition", 3rd International Conference on Electrical & Computer Engineering, Dhaka, Bangladesh, 28-30 December 2004, pp. 561-564.

[8] C.R. Buchanan, "Informatics Research Proposal: Modeling the Semantics of Sound", School of Informatics, University of Edinburgh, United Kingdom, March 2005. http://ozanmut.sitemynet.com/asr.htm, retrieved in November 2005.

[9] D. Jurafsky, "Speech Recognition and Synthesis: Acoustic Modeling", Winter 2005.

[10] M. Jackson, "Automatic Speech Recognition: Human Computer Interface for Kinyarwanda Language", Master Thesis, Faculty of Computing and Information Technology, Makerere University, 2005.

[11] M.R. Hasan, M. Jamil, and M.G. Saifur Rahman, "Speaker Identification Using Mel-Frequency Cepstral Coefficients", 3rd International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, 2004, pp. 565-568.

[12] http://project.uet.itgo.com/speech.htm

[13] http://www.speech.be.philips.com/index.htm

Assessment of Physico-Chemical Parameters of Upper Lake Bhopal, M.P.,
India
Muzaffar U Zaman Khan1, Ishtiyaq Majeed Ganaie1
1Lecturer, Higher Education
E-mail- Muzaffarkhan722@gmail.com

Abstract: The present study assesses the various physico-chemical parameters of Upper Lake Bhopal. For the analysis, the methodology given in APHA (1995) was followed. The results revealed higher values for some parameters, such as free CO2, indicating a higher trophic status of the lake, as was also reported by Wanganeo and Wanganeo (2006). Chloride values were also recorded on the higher side, indicating that the lake waters are fed with sewage and other run-off materials from the catchment area. The calcium and magnesium hardness revealed the less hard waters of the lake. The pH values recorded were of near neutral to alkaline range, suggesting well buffered lake waters.
Key Words: Physico-chemical parameters, APHA, Sewage, Free CO2, Chloride, Trophic status, pH values.
Introduction: Water is one of the most important natural resources available to mankind. Knowing the importance of water for the sustenance of life, the need for conservation of water bodies, especially fresh water bodies, is being realised everywhere in the world. Our planet is sometimes known as the water planet, as 2/3rd of the earth's surface is covered by water. However, only 1% of the water resource is available as fresh water, i.e. surface water, rivers, lakes, streams and ground water, for human consumption and other useful activities.
Lakes also prove a useful source of fresh water in various parts of the world, and hence it becomes necessary to check and maintain their water quality for healthy survival. Lakes have been at the center of human attention: several cities, industrial infrastructure and other complexes have been built in the vicinity of lakes, rivers and other water bodies, and the development of human communities has deteriorated lake and river water quality. Bearing this in mind, it is essential to analyse and understand the quality of surface water for various purposes such as drinking, agriculture and industry.
In the current study, some of the important physico-chemical characteristics of Upper Lake Bhopal were analysed and studied in order to get an idea of its water quality, as the lake is an important source of water, especially for drinking purposes, for the urban population of Bhopal city.
Study Area: Bhopal, the picturesque capital of the state of Madhya Pradesh, is also known as the City of Lakes on account of the large number of water bodies present in and around it. The Upper Lake, also known as Badah Talab, is the source of drinking water for the urban population. It is surrounded by Van Vihar National Park on the south, human settlements on the east and north, and agricultural fields on the west. The water of the Upper Lake was used for drinking purposes up to the year 1947 without any treatment, which shows that the water quality was very good. After Bhopal became the capital of Madhya Pradesh in 1956, the city experienced a tremendous population inflow and consequent rapid urban development, which adversely affected the lake. The Upper Lake is arguably the oldest man-made lake in India; it was created by Raja Bhoj in the 11th century by constructing an earthen dam across the Kolans River. The Upper Lake is a major source of potable water for the people of the city of Bhopal, Madhya Pradesh, India. For the present work, water samples were taken from two sites of the Upper Lake, named Site-I, at the shore of the lake, and Site-II, at the center of the lake.






Climate: Bhopal experiences a tropical climate, with the Tropic of Cancer passing through the state. It has hot summers, with air temperature varying between 40-45 °C, and moderate winters. The maximum temperature recorded during the season is 45 °C.

Methodology
The methods employed for the analysis of the various physico-chemical characteristics of water were followed from APHA (1995).

Temperature:
The atmospheric temperature at the sampling site was recorded with the help of a Celsius thermometer, avoiding exposure of its mercury bulb to direct sunlight. Water temperature was recorded by immersing the thermometer into the sampler soon after it was taken (along with the sample) out of the water. In order to estimate the depth-wise distribution of temperature, samples were collected vertically from top to bottom at regular depth intervals of one meter with the help of a Ruttner sampler.

Transparency:
A standard Secchi disc (diameter 20 cm), tied to a graduated nylon rope, was used for obtaining the extent of light penetration in water. The mean of the depths at which the Secchi disc disappeared and then re-appeared was taken as the transparency of the water.

Hydrogen ion concentration (pH):
It was measured by a digital pH meter (Systronics).

Electrical conductivity:
The electrical conductivity was measured by a digital conductivity meter.


Dissolved oxygen (DO):
The modified Winkler's method as given in APHA (1995) was followed for determination of the DO content. To a sample collected in a 250 ml glass bottle, 1 ml each of manganous sulphate solution and alkaline iodide-azide solution was added, one after the other, with separate pipettes. The precipitate (manganous hydroxide floc) formed was dissolved after about five minutes with the help of concentrated sulphuric acid. The fixed samples were carried to the laboratory, where they were titrated against 0.025 N sodium thiosulphate solution, using starch solution as indicator. The end point was noted at the first disappearance of the blue colour. The amount of DO present was then calculated by using the formula:
DO (mg/l) = Volume of the titrant x 0.2 x 1000 / Volume of sample
where the value 0.2 represents that 1 ml of sodium thiosulphate is equivalent to 0.2 mg of oxygen.

Free carbon dioxide:
The free CO2 content of the sample was determined by titrating the sample against 0.227 N sodium hydroxide titrant, using phenolphthalein as indicator, until a faint pink colour developed. The CO2 present was calculated by using the formula given in APHA (1995) as:
Free CO2 (mg/l) = Volume of titrant used x 1000 / Volume of sample

Total hardness:
The total hardness of a water sample was estimated by titrating it against 0.01 M EDTA titrant in the presence of ammonium buffer solution and Eriochrome Black-T as an indicator. Titration was continued till the colour of the sample changed from wine red to blue. The total hardness was then calculated by the formula:
Total hardness (mg/l as CaCO3) = Volume of titrant used (V1) x 1000 / Volume of sample

Calcium hardness:
For this purpose, an aliquot of the water sample, after treatment with N/10 NaOH followed by a pinch of murexide indicator, was titrated against 0.01 M EDTA solution until the colour changed from salmon pink to purple at the end point. Titration was then stopped and the volume of titrant used was noted. The calcium hardness was calculated by using the formula given below:
Calcium hardness (mg/l as CaCO3) = Volume of titrant used (V2) x 1000 x 1.05 (mol. wt. of CaCO3) / Volume of sample

Magnesium hardness:
The formula given in APHA (1995) was used to estimate the magnesium content of the water sample. The formula is given as:
Magnesium content (mg/l) = (V1 - V2) x 1000 / Volume of sample
where V1 = volume of EDTA titrant used for estimation of total hardness

and V2 = volume of EDTA titrant used for estimation of calcium hardness.

Alkalinity:
For estimation of phenolphthalein alkalinity (i.e., alkalinity due to OH and CO3), a sample volume of 50 ml was titrated against 0.02 N H2SO4 in the presence of phenolphthalein indicator till the disappearance of the pink colour, and the volume of titrant used was noted. Then, for estimation of total alkalinity (i.e., alkalinity due to OH, CO3 and HCO3), the same sample was titrated further with the 0.02 N H2SO4 in the presence of methyl orange indicator till the colour changed from yellow to orange, and the total volume of titrant was noted. On the other hand, when no pink colour formed after the addition of phenolphthalein indicator, the sample was run through the same procedure following the addition of methyl orange indicator, as mentioned above for total alkalinity. The phenolphthalein alkalinity (P) and total alkalinity (T) were then calculated by using the formula given below:
Phenolphthalein alkalinity (P) as mg/l CaCO3 = Volume of titrant used x 1000 / Volume of sample.
Chloride:
To 50 ml of water sample, 2-3 drops of potassium chromate indicator were added. Once the yellow colour formed, the sample was titrated against standard silver nitrate solution (0.0141 N) till a faint brick-red colour formed. The chloride content of the sample was then calculated in accordance with the formula given in APHA (1995):
Chloride (mg/l) = Volume of titrant used x 35.46 x 0.0141 x 1000 / Volume of sample
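Because each determination above ends in a one-line formula, the calculations can be transcribed directly; the helper functions below restate the APHA (1995) formulas exactly as quoted in this section (the constants 0.2, 1.05, 35.46 and 0.0141 are taken from the text as given, not independently verified).

```python
# The titration formulas quoted above, restated as plain arithmetic (all mg/l).

def dissolved_oxygen(v_titrant_ml, v_sample_ml):
    # 1 ml of 0.025 N sodium thiosulphate is equivalent to 0.2 mg of oxygen.
    return v_titrant_ml * 0.2 * 1000.0 / v_sample_ml

def free_co2(v_titrant_ml, v_sample_ml):
    return v_titrant_ml * 1000.0 / v_sample_ml

def total_hardness(v1_ml, v_sample_ml):            # mg/l as CaCO3
    return v1_ml * 1000.0 / v_sample_ml

def calcium_hardness(v2_ml, v_sample_ml):          # mg/l as CaCO3
    return v2_ml * 1000.0 * 1.05 / v_sample_ml

def magnesium(v1_ml, v2_ml, v_sample_ml):
    return (v1_ml - v2_ml) * 1000.0 / v_sample_ml

def chloride(v_titrant_ml, v_sample_ml):
    return v_titrant_ml * 35.46 * 0.0141 * 1000.0 / v_sample_ml

# Example: 4.2 ml thiosulphate titrant on a 100 ml aliquot -> 8.4 mg/l DO.
print(dissolved_oxygen(4.2, 100.0))
```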
RESULTS
The results obtained for the various physico-chemical parameters are shown in the tables below, from Table 1 to Table 11:

Table 1 Showing variation in Air and Water temperature (°C) at two sites of Upper Lake
Site I Site II
Air Water Air Water
Maximum 40.0 33.0 40.0 31.0
Minimum 30.0 24.0 30.5 25.0
Average 35.7 27.1 36.1 27.1





International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

358 www.ijergs.org

Table 2 Showing variation in Secchi transparency (m) at two sites of Upper Lake
Site I Site II
Maximum 1.3 1.5
Minimum 0.8 0.8
Average 1.0 1.2



Table 3 Showing variation in Total Dissolved Solids (mg/l) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 120 120 130 150
Minimum 80.0 90.0 90.0 120
Average 110 110 118 132



Table 4 Showing variation in Conductivity (µS) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 190 200 210 250
Minimum 120 140 140 160
Average 170 180 188 204

Table 5 Showing variation in pH at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 8.9 9.2 8.8 8.0
Minimum 8.6 6.7 7.8 7.2
Average 8.8 8.3 8.2 7.7

Table 6 Showing variation in D.O (mg/l) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 12.5 16.0 9.6 4.4
Minimum 7.6 5.0 4.4 0.0
Average 9.8 10.1 6.1 1.7

Table 7 Showing variation in Free CO2 (mg/l) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 22.0 24.0 14.0 26.0
Minimum 10.0 4.0 10.0 14.0
Average 13.6 13.8 12.0 18.8




Table 8 Showing variation in Calcium Hardness(mg/l) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 81.0 71.0 79.8 88.2
Minimum 51.0 65.1 54.6 54.6
Average 61.5 64.2 68.9 73.1

Table 9 Showing variation in Magnesium(mg/l) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 8.3 6.1 7.4 7.6
Minimum 0.2 0.3 3.0 4.0
Average 4.5 3.8 5.2 5.7

Table 10 Showing variation in Total Alkalinity (mg/l) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 112 122 116 192
Minimum 96 102 88 102
Average 101.2 110 104 130




Table 11 Showing variation in Chloride (mg/l) at two sites of Upper Lake
Site I Site II
Surface Surface Middle Bottom
Maximum 26.0 21.0 23.0 36.0
Minimum 15.0 14.0 14.0 20.0
Average 20.8 18.6 19.4 24.8

DISCUSSION: The current study was conducted for a period of three months, from February to May 2007, to investigate the various physico-chemical characteristics of Upper Lake Bhopal. Fluctuations in the physico-chemical characteristics affect the biological diversity. The limno-chemistry and limno-biology of various Indian fresh water bodies and wetlands have been studied and reported by various workers. During the present investigation, water temperature at Site-I ranged from 24 °C to 33 °C, while at Site-II it ranged from 25 °C to 31 °C. The rise in atmospheric temperature enhanced the evaporation rate, which resulted in loss of water and a consequent reduction in water depth. From February onwards, the atmospheric temperature recorded a gradual increase, with a corresponding rise in surface water temperature as well. Such a phenomenon has also been recorded by Wanganeo et al. (1984 and 2006) in temperate lakes. Transparency is an important physical parameter in an aquatic ecosystem and directly affects productivity. Even though the water body is shallow and overgrazed with macrophytes, its transparency values were relatively high, signifying that the euphotic zone extends up to the bottom at certain places. Wanganeo et al. (1997) also recorded high Secchi transparency in the Upper Lake. A uniform distribution of total dissolved solids was found at both sites of the Upper Lake, and the total dissolved solids were found to be of moderate nature; Wanganeo (1984 and 2006) also recorded such results. The conductivity values recorded were of moderate range in the present system, and there was not much difference between bottom and surface conductivity values at Site-II; similar results were recorded by Wanganeo (2006). The pH values recorded during the present investigation were generally of near neutral to alkaline range, suggesting that the lake water was well buffered throughout the period. Wanganeo (1984) related high pH values (towards the alkaline side) to enhancement of the photosynthetic rate. Relatively high values of dissolved oxygen were recorded in the present study. At Site-II a slight reduction in dissolved oxygen was observed, which was in no way a matter of concern, as even at that value both flora and fauna could comfortably survive. The high Secchi values were found to be responsible for enhancing the photosynthesis of autotrophs in deeper water, resulting in the highly oxygenated waters of the Upper Lake; such results were also recorded by Wanganeo et al. (1997). During the present investigation, higher values of free carbon dioxide were recorded at both sites of the Upper Lake: the maximum value recorded at Site-I was 22.0 mg/l, and the maximum recorded at Site-II was 26.0 mg/l. The increase in free carbon dioxide values at both sites of the Upper Lake indicates a higher trophic status. Higher values of free carbon dioxide were also recorded by Wanganeo and Wanganeo (2006) while studying the variation in zooplankton population in two morphologically dissimilar rural lakes of the Kashmir Himalayas. The calcium and magnesium hardness values revealed the less hard waters of the Upper Lake in comparison to other water bodies in the vicinity of the present water body. During the present investigation, the chloride was in the range of 15-26 mg/l at Site-I and 13-36 mg/l at Site-II. Chloride values in the present study were not alarming, though the slight enhancement recorded in the values suggests timely measures for stopping the entry of sewage and other run-off materials from the catchment area.

REFERENCES:
[1] APHA (1995): Standard Methods for the Examination of Water and Waste Water, 19th edition, American Public Health Association, Washington D.C.

[2] Bhatnagar, Chhaya, Sharma, Vinita, Jani, Karnika, Gill and Nidhi (2007). Plankton and Ichthyo Fauna of Jhamri Dam, Udaipur, Rajasthan, C.P.-31. NSL 2007, 236-238.
[3] Cole, C.A. (1979). Text Book of Limnology, II Edn., C.V. Mosby Co., London, 321 pp.
[4] Gannon, J.E. and Stemberger, R.S. (1976). Trans. Amer. Micros. Soc. 97: 16-35.
[5] Horn, W. and Benndorf, J. (1980). Field investigation and model simulation of the dynamics of zooplankton population in fresh waters. Int. Revue, Ges. Hydrobiol. 65(2): 209-222.
[6] Kulshrestha, S. K., Adholia, U. N., Khan, A. A., Bhatnagar, A., Saxena, M. and Baghel, M. (1989). Pollution study on river Kshipra with special reference to macro benthos. J. Nat. Com. 1-2, 1989, 85-92.
[7] Odum, E. P. (1971). Fundamentals of Ecology, 3rd Ed., W. B. Saunders Co., Philadelphia, 574 pp.
[8] Sharma, B. K. (1998). In Faunal Diversity of India (Eds. J. R. B. Alfred, A. K. Das and A. K. Sanyal). Zool. Surv. India, Envir. Centre, 57-70.
[9] Tundisi, M. T. and Tundisi, J. G. (1976). Oceanologia (Berl.). 25: 265-270.
[10] Wanganeo, A. and Wanganeo, R. (2006). Variation in zooplankton population in two morphologically dissimilar rural lakes in Kashmir Himalayas. PROC. NAT. ACAD. SCI. INDIA, 76 (B), III, 2006, 222-239.
[11] Wanganeo, A., Dima, A. C., Kaul, V. and Wanganeo, R. (1984): Limnological study of a Kashmir Himalayan lotic system. Jr. Aq. Biol. 2 (1): 1-6.
[12] Wanganeo, A., Wanganeo, R. and Pani, S. (1997). Summer dissolved oxygen regimes in a tropical Vindhyan lake in relation to its conservation strategy. Bionature 17(1): 7-11.
[13] Waters, T.F. (1987). Adv. Ecol. Res., 10: 11-164.
[14] Wetzel, R. G. (1975). Limnology. W. B. Saunders Company, Philadelphia, Pennsylvania: 743 pp.

Application of 7 Quality Control (7 QC) Tools for Continuous Improvement of
Manufacturing Processes
Varsha M. Magar1, Dr. Vilas B. Shinde2
1Research Scholar (PG), Department of Mechanical Engineering, Datta Meghe College of Engineering, Mumbai University
2Professor, Department of Mechanical Engineering, Datta Meghe College of Engineering, Mumbai University
E-mail- var31jul@rediffmail.com

Abstract: In this paper a review of the systematic use of the 7 QC tools is presented. The main aim of this paper is to provide an easy introduction to the 7 QC tools and to improve the quality level of manufacturing processes by applying them. QC tools are the means for collecting data, analyzing data, identifying root causes and measuring the results; these tools are related to numerical data processing. All of these tools together provide process tracking and analysis that can be very helpful for quality improvements, and they make quality improvements easier to see, implement and track.
The work shows that continuous use of these tools upgrades the personnel characteristics of the people involved. It enhances their ability to think, generate ideas, solve problems and do proper planning. The development of people improves the internal environment of the organization, which plays a major role in the total quality culture.
Keywords: QC tools, continuous improvement, manufacturing processes, quality control, root cause analysis, PDCA, efficiency
INTRODUCTION
The 7 QC tools are simple statistical tools used for problem solving. These tools were either developed in Japan or introduced to Japan by quality gurus such as Deming and Juran. In terms of importance, these are the most useful; Kaoru Ishikawa has stated that these 7 tools can be used to solve 95 percent of all problems. These tools have been the foundation of Japan's astonishing industrial resurgence after the Second World War.
For solving quality problems, the seven QC tools used are the Pareto diagram, cause-and-effect diagram, histogram, control charts, scatter diagrams, graphs and check sheets. All these tools are important tools used widely in the manufacturing field to monitor the overall operation and for continuous process improvement. These tools are used to find root causes and eliminate them, so that the manufacturing process can be improved. The modes of defects on the production line are investigated through direct observation on the production line and statistical tools.
Methodology
For solving quality problems, the following seven QC tools are required:
1. Pareto Diagram
2. Cause & Effect Diagram
3. Histogram
4. Control Charts
5. Scatter Diagrams
6. Graphs

7. Check Sheets
1) Pareto Diagram
Pareto Diagram is a tool that arranges items in the order of the magnitude of their contribution, thereby identifying the few items exerting maximum influence. This tool is used in SPC and quality improvement for prioritizing projects for improvement, prioritizing the setting up of corrective action teams to solve problems, identifying products on which most complaints are received, identifying the nature of complaints occurring most often, identifying the most frequent causes for rejections, or for other similar purposes. The origin of the tool lies in the observation by the Italian economist Vilfredo Pareto that a large portion of wealth was in the hands of a few people. He observed that such a distribution pattern was common in most fields. The Pareto principle, also known as the 80/20 rule, is used in the field of materials management for ABC analysis: 20% of the items purchased by a company account for 80% of the value; these constitute the A items, on which maximum attention is paid. Dr. Juran suggested the use of this principle in quality control for separating the "vital few" problems from the "trivial many", now called the "useful many".
Procedure:
The steps in the preparation of a Pareto Diagram are as follows (a small plotting sketch is given after the list):
1. From the available data calculate the contribution of each individual item.
2. Arrange the items in descending order of their individual contributions. If there are too many items contributing a small percentage
of the contribution, group them together as "others". It is obvious that "others" will contribute more than a few single individual
items. Still it is kept last in the new order of items.
3. Tabulate the items, their contributions in absolute number as well as in percent of total and cumulative contribution of the items.
4. Draw X and Y axes. The various items are represented on the X-axis. Unlike other graphs, Pareto Diagrams have two Y-axes: one on the left representing numbers and one on the right representing the percent contributions. The scale for the X-axis is selected in such a manner that all the items, including others, are accommodated between the two Y-axes. The scales for the Y-axes are so selected that the total number of items on the left side and 100% on the right side occupy the same height.
5. Draw bars representing the contributions of each item.
6. Plot points for cumulative contributions at the end of each item. A simple way to do this is to draw the bars for the second and each
subsequent item at their normal place on the X-axis as well as at a level where the previous bar ends. This bar at the higher level is
drawn in dotted lines. Drawing the second bar is not normally recommended in the texts.
7. Connect the points. If additional bars as suggested in step 6 are drawn this becomes simple. All one needs to do is - connect the
diagonals of the bars to the origin.
8. The chart is now ready for interpretation. The slope of the chart suddenly changes at some point; this point separates the 'vital few' from the 'useful many', like the A, B and C class items in materials management.
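The plotting steps above can be condensed into a few lines; the sketch below is a minimal matplotlib rendering with made-up rejection counts, using the two Y-axes described in step 4 (bars for individual contributions on the left, the cumulative percent line on the right).

```python
# Minimal Pareto diagram following the steps above (illustrative defect data).
import matplotlib.pyplot as plt

items = ["Scratch", "Dent", "Crack", "Stain", "Others"]   # descending order
counts = [120, 80, 40, 25, 15]
total = sum(counts)
cumulative = [100.0 * sum(counts[:i + 1]) / total for i in range(len(counts))]

fig, ax_num = plt.subplots()
ax_num.bar(items, counts)                  # left Y-axis: absolute numbers
ax_num.set_ylabel("Number of rejections")
ax_num.set_ylim(0, total)                  # total on left lines up with 100% on right

ax_pct = ax_num.twinx()                    # right Y-axis: percent contribution
ax_pct.plot(items, cumulative, marker="o", color="red")
ax_pct.set_ylabel("Cumulative percent")
ax_pct.set_ylim(0, 100)

plt.title("Pareto Diagram")
plt.show()
```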

2) Cause & Effect Diagram
A Cause-and-Effect Diagram is a tool that shows the systematic relationship between a result, symptom or effect and its possible causes. It is an effective tool for systematically generating ideas about the causes of problems and for presenting these in a structured form. This tool was devised by Dr. Kaoru Ishikawa and, as mentioned earlier, is also known as the Ishikawa Diagram.
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

366 www.ijergs.org



Procedure
The steps in the procedure to prepare a cause-and-effect diagram are:
1. Agree on the definition of the 'Effect' for which causes are to be found. Place the effect in the dark box at the right. Draw the spine or the backbone as a dark line leading to the box for the effect.
2. Determine the main groups or categories of causes. Place them in boxes and connect them through large bones to the backbone.
3. Brainstorm to find possible causes and subsidiary causes under each of the main groups. Make sure that the route from the cause to the effect is correctly depicted. The path must start from a root cause and end in the effect.
4. After completing all the main groups, brainstorm for more causes that may have escaped earlier.
5. Once the diagram is complete, discuss the relative importance of the causes. Short-list the important root causes.

3) Histogram

Histograms or Frequency Distribution Diagrams are bar charts showing the distribution pattern of observations grouped in convenient class intervals and arranged in order of magnitude. Histograms are useful in studying patterns of distribution and in drawing conclusions about the process based on the pattern.
The procedure to prepare a Histogram consists of the following steps (a small plotting sketch is given after the list):
1. Collect data (preferably 50 or more observations of an item).
2. Arrange all values in ascending order.
3. Divide the entire range of values into a convenient number of groups, each representing an equal class interval. It is customary to have the number of groups equal to or less than the square root of the number of observations. However, one should not be too rigid about this; the reason for this cautionary note will be obvious when we see some examples.
4. Note the number of observations, or frequency, in each group.
5. Draw the X-axis and Y-axis and decide appropriate scales for the groups on the X-axis and the number of observations or the frequency on the Y-axis.
6. Draw bars representing the frequency for each of the groups.
7. Provide a suitable title to the Histogram.
8. Study the pattern of distribution and draw conclusions.
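A minimal rendering of these steps, with randomly generated observations and the square-root rule from step 3, might look as follows (data and labels are illustrative only).

```python
# Minimal histogram following the steps above, using the square-root rule
# for the number of class intervals (data is randomly generated).
import math
import numpy as np
import matplotlib.pyplot as plt

obs = np.random.default_rng(1).normal(50.0, 5.0, 60)   # 60 observations
groups = int(math.sqrt(len(obs)))                      # <= sqrt(n), here 7

plt.hist(obs, bins=groups, edgecolor="black")
plt.xlabel("Measured value (class intervals)")
plt.ylabel("Frequency")
plt.title("Histogram of 60 observations")
plt.show()
```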



normal histogram



Bi modal



high platue

International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

368 www.ijergs.org

alternate peaks and vales

cliff patern
4) Control Charts

Variability is inherent in all manufacturing processes. These variations may be due to two causes:
i. Random / chance causes (un-preventable).
ii. Assignable causes (preventable).

Control charts were developed by Dr. Walter A. Shewhart during the 1920s while he was with Bell Telephone Laboratories. These charts separate out the assignable causes. A control chart makes possible the diagnosis and correction of many production troubles and brings substantial improvements in the quality of the products and reduction of spoilage and rework. It tells us when to leave a process alone as well as when to take action to correct trouble.

BASIC CONCEPTS:

a. Data is of two types:
Variable - measured and expressed quantitatively
Attribute - qualitative
b. Mean and Range:
X-bar - the mean, i.e. the average of a sub-group
R - the range, i.e. the difference between the maximum and minimum in a sub-group
c. Control Charts for Variables:
Charts depicting the variations in X-bar and R with time are known as X-bar and R charts. X-bar and R charts are used for variable data when the sample size of the subgroup is 2-5. When the subgroup size is larger, s charts are used instead of R charts, where s is the standard deviation of the subgroup.
d. Control Charts for Attributes:


The control charts for attributes are the p-chart, np-chart, c-chart and u-chart. Control charts for defectives are the p and np charts: np charts are used when the sample size is constant, and p charts are used when the sample size is variable. In the case where the number of defects is the data available for plotting, c and u charts are used: if the sample size is constant, c charts are used, and u charts are used for variable sample sizes.
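As a brief numeric illustration of the X-bar and R charts described above, the sketch below computes their control limits for subgroups of size 5 using the standard published Shewhart constants (A2 = 0.577, D3 = 0, D4 = 2.114 for n = 5); the data is randomly generated for illustration.

```python
# Sketch: control limits for X-bar and R charts with subgroups of size n = 5.
import numpy as np

data = np.random.default_rng(2).normal(10.0, 0.2, size=(20, 5))  # 20 subgroups

xbar = data.mean(axis=1)                    # subgroup means (X-bar values)
r = data.max(axis=1) - data.min(axis=1)     # subgroup ranges (R values)
xbarbar, rbar = xbar.mean(), r.mean()       # grand mean and mean range

A2, D3, D4 = 0.577, 0.0, 2.114              # Shewhart constants for n = 5
print(f"X-bar chart: CL={xbarbar:.3f}  UCL={xbarbar + A2 * rbar:.3f}  "
      f"LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:     CL={rbar:.3f}  UCL={D4 * rbar:.3f}  LCL={D3 * rbar:.3f}")
```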
5) Scatter Diagram

When solving a problem or analysing a situation, one needs to know the relationship between two variables. A relationship may or may not exist between two variables; if a relationship exists, it may be positive or negative, strong or weak, and simple or complex. A tool to study the relationship between two variables is the Scatter Diagram. It consists of plotting a series of points representing several observations on a graph in which one variable is on the X-axis and the other variable is on the Y-axis. If more than one set of values is identical, requiring more points at the same spot, a small circle is drawn around the original dot to indicate the second point with the same values. The way the points lie scattered in the quadrant gives a good indication of the relationship between the two variables.

6) Graphs
Graphs of various types are used for pictorial representation of data. Pictorial representation enables the user or viewer to quickly grasp the meaning of the data. Different graphical representations of data are chosen depending on the purpose of the analysis and the preference of the audience. The different types of graphs used are as given below:

Sr. No.   Type of graph   Purpose
1         Bar graph       To compare sizes of data
2         Line graph      To represent changes in data
3         Gantt chart     To plan and schedule
4         Radar chart     To represent changes in data (before and after)
5         Band graph      Same as above

7) Check Sheets
As measurement and collection of data form the basis for any analysis, this activity needs to be planned in such a way that the information collected is both relevant and comprehensive.
Check sheets are tools for collecting data. They are designed specifically for the type of data to be collected, and they aid in the systematic collection of data. Some examples of check sheets are daily maintenance check sheets, attendance records, production log books, etc.
Data collected using check sheets needs to be meaningfully classified. Such classification helps in gaining a preliminary understanding of the relevance and dispersion of the data, so that further analysis can be planned to obtain a meaningful output. Meaningful classification of data is called stratification. Stratification may be by group, location, type, origin, symptom, etc.

7QC TOOLS THROUGH PDCA-CYCLE
In the successful application of quality tools, an implemented quality management system is an advantage. The quality management principles are a starting point for a company's management striving for continuous efficiency improvement over a long period of time and for customer satisfaction. A quality management system is based on the integrity of all production and support resources of a certain company. It enables a faultless process flow in meeting related contracts, standards and market quality requirements. Implementation of a quality management system is always a part of a company's development process, involving identification and/or analysis of its processes. Continuous improvement, as a fifth principle of QMS (ISO 9001:2000), could not be realized without quality tools, which are presented through four groups of activities of Deming's quality cycle or PDCA-cycle. The PDCA-cycle is an integral part of process management and is designed to be used as a dynamic model, because one cycle represents one complete step of improvement. The PDCA-cycle is used to coordinate continuous improvement efforts. It emphasizes and demonstrates that improvement programs must start with careful planning, must result in effective action, and must move on again to careful planning in a continuous cycle; the Deming quality cycle is never-ending. It is a strategy used to achieve breakthrough improvements in safety, quality, morale, delivery cost, and other critical business objectives.
The completion of one cycle continues with the beginning of the next. A PDCA-cycle consists of four consecutive steps or phases, as follows:
Plan - analysis of what needs to be improved by taking into consideration areas that hold opportunities for change, and decision on what should be changed.
Do - implementation of the changes that were decided on in the Plan step.
Check - control and measurement of processes and products in accordance with the changes made in previous steps and in accordance with policy, goals and requirements on products; report on results.
Act - adoption of or reaction to the changes, or running the PDCA-cycle through again, keeping improvement on-going.
Seven basic quality tools (7QC tools) in correlation with PDCA-cycle steps:

PDCA step(s)    Activity
Plan            Problem identification
Do              Implement solutions
Plan, Check     Process analysis
Plan, Act       Solution development
Check           Result evaluation

The 7QC tools applied across these steps are the flow chart, cause-and-effect diagram, check sheet, Pareto diagram, histogram, scatter plot and control chart.




CONCLUSION
- Statistical QC is chiefly concerned with making sure that several procedures and working arrangements are in place to provide for effective and efficient statistical processes, and to minimize the risk of errors or weaknesses in procedures, systems or source material.
- The seven QC tools are most helpful in troubleshooting issues related to quality.
- All processes are affected by multiple factors, and therefore statistical QC tools can be applied to any process.
- The continuous use of these tools upgrades the personnel characteristics of the people involved. It enhances their ability to think, generate ideas, solve problems and do proper planning.


Moving Object Detection and Tracking for Video Surveillance
Ms Jyoti J. Jadhav1
1E&TC Department, Dr. D.Y. Patil College of Engineering, Pune University, Ambi-Pune
E-mail- Jyotijadhav48@gmail.com, Contact no- 9096219620

Abstract: Moving object detection and tracking has been widely used in diverse disciplines such as intelligent transportation systems, airport security systems, video surveillance applications, and so on. This paper presents moving object detection and tracking using reference background subtraction. In this method, a static camera is used for the video; the first frame of the video is directly considered as the reference background frame, and this frame is subtracted from the current frame to detect the moving object against a set threshold T. If the pixel difference is greater than the set threshold T, the pixel is determined to belong to the moving object; otherwise it is treated as a background pixel. However, such a fixed threshold is suitable only for ideal conditions and is not suitable for complex environments with lighting changes, so in this paper we use a dynamic optimization threshold method to obtain a more complete moving object. This method can effectively eliminate the impact of lighting changes.

Keywords: Moving object Detection, Static camera, Moving Object Tracking, Reference Background, video surveillance.
INTRODUCTION
Automatic visual detection of objects is a crucial task for a large range of home, business, and industrial applications. Video cameras are among the most commonly used sensors in a large number of applications, ranging from surveillance to smart rooms for video conferencing. Moving target detection means detecting moving objects from the background image in a continuous video sequence, and moving target tracking means finding the various locations of the moving object in the video. There is thus a need to develop algorithms for tasks such as moving object detection.
Currently used methods in moving object detection are mainly the frame subtraction method, the background subtraction method and the optical flow method [1, 2]. The frame subtraction method [1] uses the difference between two consecutive frames to determine the presence of moving objects. Its calculation is simple and easy to develop, and it has strong adaptability for a variety of dynamic environments, but it is mostly difficult to obtain a complete outline of the moving object, so the detection of the moving object is not accurate. The optical flow method [4] calculates the image optical flow field and does clustering processing according to the optical flow distribution features of the image. This method gives complete movement information and detects the moving object from the background better, but the large quantity of calculation, sensitivity to noise and poor anti-noise performance make it unsuitable for real-time demanding occasions.
The background subtraction method [7] uses the difference between the current image and a background image to detect moving objects, with a simple algorithm, and it can provide the most complete information about the object in the case where the background is already known [8]. This method is effective in enhancing the effect of moving object detection. In this paper, we use the background subtraction method for moving object detection with a single static camera. A camera is basically needed for moving object detection; a typical setup is given below.

Fig.1 Typical setup for moving object detection in video





2. OVERVIEW OF THE SYSTEM

In the proposed system, the main aim is to build a robust moving object detection algorithm that can detect and track objects in video.

Fig.2 Overview of the system

1. The first step is to take input video from a static camera. For processing the video files, the video is converted into frames and the frames into images.
2. The next step is to take the first frame as the background frame and the next as the current frame, and then apply the subtraction operation: the background frame is subtracted from the current frame.
3. A threshold operation is then performed and the foreground object is detected.
4. After the object is detected, the last step is to track the object in the video.

3. BACKGROUND SUBTRACTION METHOD
The background subtraction method is a common method of motion detection. It is a technique that uses the difference between the current image and a background image to detect the motion region [6], and it is generally able to provide data including object information. The background image is subtracted from the current frame; if the pixel difference is greater than the set threshold value T, the pixel is determined to belong to the moving object, otherwise it is treated as a background pixel. By using the dynamic threshold method, we can dynamically change the threshold value according to the lighting changes of the two images obtained. This method can effectively suppress the impact of lighting changes. Here we consider the first frame as the background frame directly, and that frame is subtracted from the current frame to detect the moving object.




Fig.3 The flow chart of moving object Detection
Figure 3 shows the flow chart for moving object detection using a reference background; here reference background means that the background is fixed.
4. MOVING OBJECT DETECTION

4.1 Moving Object Extraction
After the background image B(x, y) is obtained, subtract the background image B(x, y) from the current frame Fk(x, y). If the pixel difference is greater than the set threshold value T, the pixel is determined to belong to the moving object; otherwise, it is treated as a background pixel [1]. The moving object can be detected after applying the threshold operation [2]. Its expression is given below:

Dk(x, y) = 1 if |Fk(x, y) - B(x, y)| > T, and Dk(x, y) = 0 otherwise,

where Dk(x, y) is the binary image of the differential results and T is the gray-scale threshold, dynamic and selected according to the environmental conditions; its size determines the accuracy of object identification.
As long as T is a fixed value, the algorithm suits only ideal conditions and is not suitable for complex environments with lighting changes. Therefore, we refer to the dynamic threshold method, in which the threshold value is changed dynamically according to the lighting changes of the two images obtained. On this basis, a dynamic threshold increment ΔT is added to the object detection algorithm. Its mathematical expression is given below:

ΔT = A x [1 / (M x N)] x ΣΣ |Fk(x, y) - B(x, y)|,

and detection then uses T + ΔT as the threshold, where A is the inhibitory coefficient, set according to the requirements of practical applications, with a reference value of 2 [1], and M x N is the size of each image to deal with [2]; the product M x N indicates the number of pixels in the detection region. ΔT reflects the overall changes in the environment: for small changes in image illumination, the dynamic threshold ΔT takes a very small value and, under the premise of enough pixels in the detection region, ΔT will tend to 0; if the image illumination changes significantly, the dynamic threshold ΔT will increase significantly. This method can effectively eliminate the impact of lighting changes.
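A compact sketch of this detection rule is given below, under the assumption that the final threshold is the fixed base threshold plus the dynamic term, with the inhibitory coefficient set to the reference value 2; the base threshold t0 here is a hypothetical value, not one from the paper.

```python
# Sketch of background subtraction with a dynamic threshold (NumPy).
import numpy as np

def detect_moving_object(frame, background, t0=25.0, a=2.0):
    """Return the binary mask D_k: 1 where a moving-object pixel is detected.

    frame, background: grayscale images of the same M x N shape.
    t0: hypothetical fixed base threshold; a: inhibitory coefficient A (~2).
    """
    diff = np.abs(frame.astype(float) - background.astype(float))
    delta_t = a * diff.mean()        # dynamic term: scaled mean |Fk - B| over M x N
    return (diff > (t0 + delta_t)).astype(np.uint8)
```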


5. OBJECT TRACKING METHOD

Moving target tracking means finding the various locations of the moving object in the video sequences.
Tracking information about the moving objects is represented using a vector state notation by

Xt = [ Xt,n | n = 1, ..., No ]    (4)

where No is the number of moving objects at time step t, and

Xt,n = [ r, R ]t,n    (5)

The nth component contains the object centroid (r) and the square bounding box of the object (R), respectively.
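One simple way to hold the state of equations (4) and (5) in code is a per-object record of centroid and bounding box; the sketch below is only an illustrative data structure, and the field layouts (such as (x, y, w, h) for the box) are assumptions.

```python
# Minimal representation of the tracking state: at time step t, X_t collects one
# component per tracked object, each holding the centroid r and bounding box R.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackComponent:                         # X_{t,n} = [r, R]_{t,n}
    centroid: Tuple[float, float]             # r: object centroid (x, y)
    bbox: Tuple[int, int, int, int]           # R: bounding box (x, y, w, h)

StateXt = List[TrackComponent]                # X_t = [X_{t,1}, ..., X_{t,No}]

# Example: two objects tracked at the current time step.
x_t: StateXt = [
    TrackComponent(centroid=(120.5, 88.0), bbox=(100, 70, 40, 36)),
    TrackComponent(centroid=(300.0, 150.5), bbox=(285, 130, 30, 41)),
]
```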


6. EXPERIMENTAL RESULTS

The following figures show results for moving object detection using reference background subtraction. A static camera was used to capture the video images. Fig. 4 shows the reference background frame. For object detection, we subtract the reference background frame from a current frame containing an object, giving the subtracted frame, i.e. the difference between the background image and the current image.


Fig.4 Reference Background Frame



Fig.5 Current frame with some object

Fig.6 Reference background subtracted frame


Fig.7 Frame with object detected



Fig.8 Color frame with object detected

Fig.9 Moving object tracking

7. CONCLUSION

In this paper, a real-time and accurate method for moving object detection and tracking is proposed, based on reference background subtraction with a dynamic threshold method to obtain a more complete moving object. This method can effectively eliminate the impact of lighting changes. The algorithm is very fast and uncomplicated, able to detect moving objects well, and it has broad applicability. The method is very reliable and is mostly used in video surveillance applications.
ACKNOWLEDGEMENTS
This work is supported in part by the Electronics & Telecommunication Department of Dr. D.Y. Patil College of Engineering, Ambi-Pune. The author would like to thank the anonymous reviewers and the editor for their constructive comments.

REFERENCES:
[1] Lijing Zhang, Yingli Liang, "Motion human detection based on background subtraction," Second International Workshop on Education Technology and Computer Science, IEEE, 2010.
[2] Tao Jianguo, Yu Changhong, "Real-Time Detection and Tracking of Moving Object," Intelligent Information Technology Application, IITA '08, Second International Symposium on, Volume 2, 20-22 Dec. 2008, pp. 860-863.
[3] Carlos R. del-Blanco, Fernando Jaureguizar, and Narciso Garcia, "An Efficient Multiple Object Detection and Tracking Framework for Automatic Counting and Video Surveillance Applications," IEEE Transactions on Consumer Electronics, Vol. 58, No. 3, August 2012.
[4] K. Kinoshita, M. Enokidani, M. Izumida and K. Murakami, "Tracking of a Moving Object Using One-Dimensional Optical Flow with a Rotating Observer," Control, Automation, Robotics and Vision, ICARCV '06, 9th International Conference on, 5-8 Dec. 2006, pp. 1-6.
[5] Niu Lianqiang and Nan Jiang, "A moving objects detection algorithm based on improved background subtraction," Intelligent Systems Design and Applications, ISDA '08, Eighth International Conference on, Volume 3, 26-28 Nov. 2008, pp. 604-607.
[6] M. Mignotte and J. Konrad, "Statistical Background Subtraction Using Spatial Cues," Circuits and Systems for Video Technology, IEEE Transactions on, Volume 17, Issue 12, Dec. 2007, pp. 1758-1763.
[7] Zhen Tang and Zhenjiang Miao, "Fast Background Subtraction and Shadow Elimination Using Improved Gaussian Mixture Model," Haptic, Audio and Visual Environments and Games, IEEE International Workshop on, 12-14 Oct. 2007, pp. 38-41.
[8] Wang Weiqiang, Yang Jie and Gao Wen, "Modeling Background and Segmenting Moving Objects from Compressed Video," Circuits and Systems for Video Technology, IEEE Transactions on, Volume 18, Issue 5, May 2008, pp. 670-681.
[9] M. Dimitrijevic, "Human body pose detection using Bayesian spatio-temporal templates," 2007 International Conference on Intelligent and Advanced Systems, 2008, pp. 764-769.
[10] Du-Ming Tsai and Shia-Chih Lai, "Independent Component Analysis Based Background Subtraction for Indoor Surveillance," Image Processing, IEEE Transactions on, Volume 18, Issue 1, Jan. 2009, pp. 158-16
[11] N. Amamoto and A. Fujii, "Detecting obstructions and tracking moving objects by image processing technique," Electronics and Communications in Japan, Part 3, vol. 82, no. 11, pp. 28-37, 1999.
[12] N. Ohta, "A statistical approach to background suppression for surveillance systems," in Proceedings of IEEE Int'l Conference on Computer Vision, 2001, pp. 481-486.

Speech Compression for Better Audibility Using Wavelet Transformation with
Adaptive Kalman Filtering
P. Sunitha1, Satya Prasad Chitneedi2
1Assoc. Professor, Department of ECE, Pragathi Engineering College, Andhra Pradesh, India
2Research Scholar (M.Tech), VLSI System Design, Department of ECE, Pragathi Engineering College, Andhra Pradesh, India
E-mail- satyaprasadchitneedi@gmail.com

Abstract: This paper deals with speech compression based on discrete wavelet transforms and adaptive Kalman filtering. English words were used for this experiment. The Kalman filter with wavelet coding could successfully compress and reconstruct words with perfect audibility using waveform coding. In general, wavelet coding alone gives good accuracy for audibility; here, the proposed adaptive Kalman filter with wavelet coding gives better audibility than wavelet coding alone.
In mobile communication systems, service providers are continuously met with the challenge of accommodating more users within a limited allocated bandwidth. For this reason, manufacturers and service providers are continuously in search of low bit-rate speech coders that deliver toll-quality speech.
The results obtained from wavelet coding were compared with those from the adaptive Kalman filter with wavelet coding. From the results we saw that the performance of wavelet coding with the adaptive Kalman filter was better than that of the wavelet transform alone.

Keywords: Wavelet Transform coding (DWT), Adaptive Kalman filtering, Signal to Noise Ratio (SNR), Peak Signal to Noise Ratio (PSNR), Normalized Root Mean Square Error (NRMSE), Percentage of zero coefficients (PZEROS), Compression Score (CS).
INTRODUCTION
Speech is a very basic way for humans to convey information to one another. With a bandwidth of only 4 kHz, speech can convey
information with the emotion of a human voice. People want to be able to hear someone's voice from anywhere in the world, as if the
person was in the same room. As a result a greater emphasis is being placed on the design of new and efficient speech coders for voice
communication and transmission; today applications of speech coding and compression have become very numerous. Many
applications involve the real time coding of speech signals, for use in mobile satellite communications, cellular telephony, and audio
for videophones or video teleconferencing systems. Other applications include the storage of speech for speech synthesis and
playback, or for the transmission of voice at a later time. Some examples include voice mail systems, voice memo wristwatches, voice
logging recorders and interactive PC software.
Traditionally speech coders can be classified into two categories: waveform coders and analysis/synthesis vocoders (from voice
coders). Waveform coders attempt to copy the actual shape of the signal produced by the microphone and its associated analogue
circuits [1]. A popular waveform coding technique is pulse code modulation (PCM), which is used in telephony today. Vocoders use
an entirely different approach to speech coding, known as parameter coding, or analysis/synthesis coding where no attempt is made at
reproducing the exact speech waveform at the receiver, only a signal perceptually equivalent to it. These systems provide much lower
data rates by using a functional model of the human speaking mechanism at the receiver. One of the most popular techniques for
analysis/synthesis coding of speech is called Linear Predictive Coding (LPC).
Some higher quality vocoders include RELP (Residual Excited Linear Prediction) and CELP (Code Excited Linear Prediction) [2].
Very simply wavelets are mathematical functions of finite duration with an average value of zero that are useful in representing data
or other functions. Any signal can be represented by a set of scaled and translated versions of a basic function called the mother
wavelet. This set of wavelet functions forms the wavelet coefficients at different scales and positions and results from taking the
wavelet transform of the original signal. The coefficients represent the signal in the wavelet domain and all data operations can be
performed using just the corresponding wavelet coefficients [3].
Whispered speech is playing a more and more important role in the widespread use of mobile phones for private communication than
ever. Speaking loudly to a mobile phone in public places is considered a nuisance to others and conversations are often overheard.
Since noisy signals are not directly available, here we take the original signal and add noise signals such as babble, car and street noise. Different methods such as Wiener filtering, MMSE, spectral subtraction and wavelets are used to filter the signal from noise. These methods have been used earlier, but the output after filtering is not accurate. So in this paper we propose a Kalman filter method which improves the signal to noise ratio (SNR) of the original speech compared to the above methods.
This paper is organized as follows: Section 2 covers the discrete wavelet transform. Section 3 covers speech enhancement and the Kalman filtering method, Section 4 discusses performance measurements of wavelets, and Section 5 shows results. Finally, Section 6 gives the conclusion.


SPEECH COMPRESSION USING THE DISCRETE WAVELET TRANSFORM
Speech compression using the discrete wavelet transform (DWT) is carried out in the steps below.

Choice of Appropriate Wavelet

The choice of the mother wavelet plays a very important role in designing a high quality speech coder. Choosing the appropriate wavelet maximizes the SNR and minimizes the relative error. Here we selected the db20 wavelet for better results.
Wavelets with more vanishing moments provide better reconstruction quality, as they introduce less distortion into the processed
speech and concentrate more signal energy in a few neighboring coefficients. However the computational complexity of the DWT
increases with the number of vanishing moments and hence for real time applications it is not practical to use wavelets with an
arbitrarily high number of vanishing moments [4].
Decomposition Level

Wavelets work by decomposing a signal into different frequency bands, and this task is carried out by choosing the wavelet function and computing the discrete wavelet transform (DWT) [5]. Choosing a decomposition level for the DWT usually depends on the type of signal being analyzed.

Truncation of Coefficients

The coefficients obtained after applying the DWT on a frame concentrate energy in a few neighbors. Here we truncate all coefficients with low energy and retain the few coefficients holding high energy values. Two different approaches are available for calculating thresholds.

Global Thresholding

The aim of global thresholding is to retain the largest absolute-value coefficients. In this case we can manually set a global threshold; the coefficient values below this threshold are set to zero to achieve compression.

Level dependent Thresholding

This approach consists of applying visually determined level-dependent thresholds to each decomposition level in the wavelet transform. The value of the threshold applied depends on the desired compression. The task is to obtain high compression with an SNR acceptable for reconstructing and detecting the signal. Of the two approaches, global thresholding achieves a higher SNR than level-dependent thresholding.

Encoding

Signal compression is achieved by first truncating small-valued coefficients and then efficiently encoding them.
One way of representing the high-magnitude coefficients is to store the coefficients along with their respective positions in the wavelet transform vector [5]. For a speech signal of frame size F, taking the DWT generates a frame of size T, slightly larger than F. If only the largest L coefficients are retained, then the compression ratio C is given by C = F/(2L).
Another approach to compression is to encode consecutive zero-valued coefficients [6] with two bytes: one byte to indicate a sequence of zeros in the wavelet transform vector and the second byte representing the number of consecutive zeros.
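To make the truncation-and-encoding step concrete, the sketch below (our own illustration, not the authors' code) applies a global threshold to the DWT coefficients of a frame using the PyWavelets library; the function name, the db20/level-2 choice and the threshold value are illustrative assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def dwt_compress(frame, wavelet="db20", level=2, threshold=4.0):
    # Decompose the frame, zero out the small coefficients (global
    # thresholding), then reconstruct and report the percentage of zeros.
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    kept = [np.where(np.abs(c) < threshold, 0.0, c) for c in coeffs]
    reconstructed = pywt.waverec(kept, wavelet)[: len(frame)]
    pzeros = 100.0 * sum(int((c == 0).sum()) for c in kept) / sum(c.size for c in kept)
    return reconstructed, pzeros
```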

SPEECH ENHANCEMENT

Modeling noisy speech and filtering

If the clean speech is represented as x(n) and the noise signal as v(n), then the noise-corrupted speech y(n), which is the only
observable signal in practice, is expressed as
y(n) = x(n) + v(n)    (1)
In the Wiener filtering method, the filtering depends on the adaptation of the transfer function from sample to sample based on the speech signal statistics (mean and variance). It is implemented in the time domain to accommodate the varying nature of the speech signal. The basic principle of the Wiener filter is to obtain an estimate of the clean signal from that corrupted by additive noise. This estimate is obtained by minimizing the Mean Square Error (MSE) between the desired signal s(n) and the estimated signal ŝ(n). The transfer function in the frequency domain is given below:
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

381 www.ijergs.org

H(ω) = P_s(ω) / (P_s(ω) + P_v(ω))

where P_s(ω) and P_v(ω) are the power spectral densities of the clean and the noise signals, respectively.
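For illustration only, the gain above can be applied per frame in the frequency domain as in the following sketch; the spectral-subtraction estimate of P_s(ω) and all names are our own assumptions rather than the method of this paper:

```python
import numpy as np

def wiener_filter_frame(noisy_frame, noise_psd):
    # Estimate Ps(w) as max(|Y(w)|^2 - Pv(w), 0), then apply
    # H(w) = Ps(w) / (Ps(w) + Pv(w)) to the frame's spectrum.
    Y = np.fft.rfft(noisy_frame)
    clean_psd = np.maximum(np.abs(Y) ** 2 - noise_psd, 0.0)
    H = clean_psd / np.maximum(clean_psd + noise_psd, 1e-12)
    return np.fft.irfft(H * Y, n=len(noisy_frame))
```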
An improved method based on the minimum mean square error short-time spectral amplitude (MMSE-STSA) estimator has been proposed to cancel background noise in whispered speech. Using the acoustic character of whispered speech, the algorithm can track the change of non-stationary background noise effectively. Compared with the original MMSE-STSA algorithm and the method in the selectable mode vocoder (SMV), the improved algorithm can further suppress the residual noise at low signal-to-noise ratio (SNR) and avoid excessive suppression. Spectral subtraction based speech enhancement methods, on the other hand, are known to be effective for the suppression of additive stationary, broadband noise, but tonal noises such as car horn sounds are found to cause serious degradation of the output speech quality. Wavelet de-noising is a nonlinear de-noising method based on the wavelet decomposition. Compared with traditional low pass filters, the wavelet de-noising method not only realizes the function of a low pass filter but also maintains the features of the signal. Among the different methods of wavelet de-noising, the wavelet threshold de-noising method is applied widely and can meet real-time needs.

Kalman Filtering Method

The Kalman filter is an unbiased, time-domain, linear minimum mean squared error (MMSE) estimator, where the enhanced
speech is recursively estimated on a sample-by-sample basis. Hence, the Kalman filter can be viewed as a joint estimator for both the
magnitude and phase spectrum of speech, under non-stationary assumptions [7]. This is in contrast to the short-time Fourier transform
(STFT)-based enhancement methods, such as spectral subtraction, Wiener filtering, and MMSE estimation [8], where the noisy phase
spectrum is combined with the estimated clean magnitude spectrum to produce the enhanced speech frame. However, it has been
reported that for spectral SNRs greater than approximately 8 dB, the use of the unprocessed noisy phase spectrum does not lead to perceptible distortion [8]. The Kalman filter was used by Stephen So, Kamil K. Wojcicki and Kuldip K. Paliwal for speech enhancement in their paper "Single-channel speech enhancement using Kalman filtering in the modulation domain", 2010 [9].
In the scalar Kalman filter that is used for speech enhancement, v(n) is a zero-mean, white Gaussian noise that is uncorrelated with x(n). A p-th order linear predictor is used to model the speech signal:

x(n) = Σ_{k=1}^{p} a_k x(n−k) + w(n)    (2)

where {a_k, k = 1, 2, ..., p} are the linear prediction coefficients and w(n) is the white Gaussian excitation with zero mean and a variance of σ_w². Rewriting Eq. (1) and (2) using a state vector representation:

x(n) = A x(n−1) + d w(n)    (3)

y(n) = c^T x(n) + v(n)    (4)

where x(n) = [x(n), x(n−1), ..., x(n−p+1)]^T is the hidden state vector, and d = [1, 0, ..., 0]^T and c = [1, 0, ..., 0]^T are the measurement vectors for the excitation noise and observation, respectively. The linear prediction state transition matrix A is given by the companion matrix of the predictor coefficients:

A = [ a_1  a_2  ...  a_{p−1}  a_p
       1    0   ...    0       0
       0    1   ...    0       0
       ...                  ...
       0    0   ...    1       0 ]    (5)


When provided with the current sample of corrupted speech y(n), the Kalman filter calculates x̂(n|n), which is an unbiased, linear MMSE estimate of the state vector x(n), by using the following recursive equations:

P(n|n−1) = A P(n−1|n−1) A^T + σ_w² d d^T

K(n) = P(n|n−1) c [σ_v² + c^T P(n|n−1) c]^(−1)

x̂(n|n−1) = A x̂(n−1|n−1)

P(n|n) = [I − K(n) c^T] P(n|n−1)


x̂(n|n) = x̂(n|n−1) + K(n) [y(n) − c^T x̂(n|n−1)]

The current estimated sample is then given by x̂(n) = c^T x̂(n|n), which extracts the first component of the estimated state vector. During the operation of the Kalman filter, the noise-corrupted speech y(n) is windowed into non-overlapped, short (e.g. 20 ms) frames, and the linear prediction coefficients and the excitation variance σ_w² are estimated for each frame.
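A minimal per-frame rendering of these recursions is sketched below (our own illustration; the companion-matrix form of A from Eq. (5) is assumed, and the predictor coefficients a, excitation variance var_w and noise variance var_v are assumed to be estimated elsewhere):

```python
import numpy as np

def kalman_enhance(y, a, var_w, var_v):
    # Scalar Kalman filter per Eqs. (3)-(4); a holds the p LPC coefficients.
    y = np.asarray(y, dtype=float)
    p = len(a)
    A = np.zeros((p, p))
    A[0, :] = a                      # companion-form transition matrix, Eq. (5)
    A[1:, :-1] = np.eye(p - 1)
    d = np.zeros((p, 1)); d[0] = 1.0
    c = d.copy()
    x = np.zeros((p, 1))
    P = np.eye(p)
    out = np.empty_like(y)
    for n, yn in enumerate(y):
        x = A @ x                                    # predicted state x(n|n-1)
        P = A @ P @ A.T + var_w * (d @ d.T)          # P(n|n-1)
        K = P @ c / (var_v + float(c.T @ P @ c))     # Kalman gain K(n)
        x = x + K * (yn - float(c.T @ x))            # corrected state x(n|n)
        P = (np.eye(p) - K @ c.T) @ P                # P(n|n)
        out[n] = float(c.T @ x)                      # estimated clean sample
    return out
```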

WAVELETS PERFORMANCE MEASURES

A number of quantitative parameters can be used to evaluate the performance of the wavelet based speech coder, in terms of both
reconstructed signal quality after decoding and compression scores. The following parameters are compared:

- Signal to Noise Ratio (SNR),
- Peak Signal to Noise Ratio (PSNR),
- Normalized Root Mean Square Error (NRMSE),
- Percentage of zero coefficients (PZEROS)
- Compression Score (CS)

The results obtained for the above quantities are calculated using the following formulas

Signal to Noise Ratio (SNR)

This value gives the quality of the reconstructed signal; the higher the value, the better:

SNR = 10 log10( σ_x² / σ_e² )

where σ_x² is the mean square of the speech signal and σ_e² is the mean square difference between the original and reconstructed signals.

Peak Signal to Noise Ratio (PSNR)

PSNR = 10 log10( N X² / ||x − r||² )

where N is the length of the reconstructed signal, X is the maximum absolute square value of the signal x, and ||x − r||² is the energy of the difference between the original and reconstructed signals.

Normalized Root Mean Square Error (NRMSE)

NRMSE = sqrt( Σ (x(n) − r(n))² / Σ (x(n) − μ_x(n))² )

where x(n) is the speech signal, r(n) is the reconstructed signal, and μ_x(n) is the mean of the speech signal.

Percentage of zero coefficients (PZEROS)

It is given by the relation:

PZEROS = 100 × (number of zeros of the current decomposition) / (number of coefficients).

Compression Score (CS)

It is the ratio of the length of the original signal to that of the compressed signal:

CS = Length(x(n)) / Length(cWC)

where cWC is the compressed wavelet transform vector.
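For reference, the measures above reduce to a few lines of code; the following sketch is our own illustration (function names assumed), with x the original signal and r the reconstructed signal:

```python
import numpy as np

def coder_measures(x, r):
    # SNR, PSNR and NRMSE exactly as defined above; x and r have equal length.
    e = x - r
    snr = 10.0 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))
    psnr = 10.0 * np.log10(len(r) * np.max(np.abs(x)) ** 2 / np.sum(e ** 2))
    nrmse = np.sqrt(np.sum(e ** 2) / np.sum((x - np.mean(x)) ** 2))
    return snr, psnr, nrmse

def pzeros(coeffs):
    # Percentage of zero coefficients over all decomposition levels.
    return 100.0 * sum(int((c == 0).sum()) for c in coeffs) / sum(c.size for c in coeffs)
```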
Effects of Threshold

International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

383 www.ijergs.org

In this experiment, we study the effects of varying the threshold value on the speech signals in terms of SNR and compression score. For db20 at level 2, the threshold value was slowly increased, and the corresponding values of the SNR and compression score were recorded in Tables 1 and 2:

Table 1. Male

Threshold Value   SNR    Compression Score
2                 4.81   45.88
4                 4.83   45.29
6                 4.82   45.22
8                 4.89   45.16

Table 2. Female

Threshold Value   SNR    Compression Score
2                 3.19   37.01
4                 3.20   36.86
6                 3.23   37.04
8                 3.19   37.51

RESULT
As shown in Tables 1 and 2, speech files in spoken English were recorded by a male and a female speaker, and the effects of varying the threshold value on the speech signals in terms of SNR and compression score were observed at different levels.

Many factors affect the performance of a wavelet-based speech coder, mainly what compression ratio can be achieved at a suitable SNR value with a low value of NRMSE. To improve the compression ratio of a wavelet-based coder, we have to consider that it is highly speaker dependent and varies with the speaker's age and gender: a low speaking speed causes a high compression ratio with a high SNR value. Increasing the scale value in the wavelet-based speech coder gives higher compression ratios.
From Table 3, the Kalman filter with wavelet coding has a better peak signal to noise ratio (PSNR) than the wavelet transform alone.

Figure 1: Output Waveform

















Table 3. Noisy Speech Model with Kalman Filtering and Wavelet Coding

Wavelet   SNR    NRMSE   PZEROS   Compression Score   PSNR with Wavelet   PSNR with Kalman
Haar      4.85   0.75    75       45.27               12.96               14.33
Sym2      6.00   0.70    74.99    50.48               13.53               14.71
Sym5      6.06   0.70    74.98    49.90               13.56               14.51
Coif2     6.06   0.70    74.98    49.49               13.56               14.51
Db20      6.13   0.70    74.94    50.45               13.60               14.46


CONCLUSION

A simple Kalman filter algorithm for one-dimensional signals (such as speech) based on wavelet transform coding was developed. It compacts as much of the signal energy into as few coefficients as possible. These coefficients are preserved and the other coefficients are discarded with little loss in signal quality.
As previously mentioned, the purpose of this approach is to reconstruct an output speech signal by making use of the accurate estimating ability of the Kalman filter.
The performance of the wavelet coder was tested on male and female speech signals. The results illustrate that the performance of wavelet coding with the adaptive Kalman filter is better than that of the wavelet transform alone. Thus the resulting compression is more accurate than with the wavelet transform technique only.

REFERENCES:
[1] J. N. Holmes, Speech Synthesis and Recognition, Chapman & Hall, London, 1988.
[2] A. Gersho, "Speech Coding," Digital Speech Processing, A. N. Ince, ed., Kluwer Academic Publishers, Boston, 1992, pp. 73-100.
[3] Hatem Elaydi, Mustafa I. Jaber and Mohammed B. Tanboura, "Speech Compression using Wavelets," Electrical & Computer Engineering Department, Islamic University of Gaza, Gaza, Palestine.
[4] V. Viswanathan, W. Anderson, J. Rowlands, M. Ali and A. Tewfik, "Real-Time Implementation of a Wavelet-Based Audio Coder on the TI TMS320C31 DSP Chip," 5th International Conference on Signal Processing Applications & Technology (ICSPAT), Dallas, TX, Oct. 1994.
[5] E. B. Fgee, W. J. Phillips and W. Robertson, "Comparing Audio Compression using Wavelets with other Audio Compression Schemes," IEEE Canadian Conference on Electrical and Computer Engineering, IEEE, Edmonton, Canada, 1999, pp. 698-701.
[6] W. Kinsner and A. Langi, "Speech and Image Signal Compression with Wavelets," IEEE Wescanex Conference Proceedings, IEEE, New York, NY, 1993, pp. 368-375.
[7] C. J. Li, "Non-Gaussian, non-stationary, and nonlinear signal processing methods with applications to speech processing and channel estimation," Ph.D. dissertation, Aalborg University, Denmark, Feb. 2006.
[8] P. Loizou, Speech Enhancement: Theory and Practice, 1st ed., CRC Press LLC, 2007.
[9] Stephen So, Kamil K. Wojcicki and Kuldip K. Paliwal, "Single-channel speech enhancement using Kalman filtering in the modulation domain," 2010, Signal Processing Laboratory, Griffith School of Engineering, Griffith University, Brisbane, QLD, Australia, 4111.


A Study of Page Replacement Algorithms
Anvita Saxena¹
¹ Research Scholar, M.Tech (CS), Mewar University, Rajasthan
E-mail: anvita21saxena@rediffmail.com
Abstract-- A virtual memory system requires efficient page replacement algorithms to decide which pages to evict from memory in case of a page fault. Many algorithms have been proposed for page replacement. Each algorithm decides which page frame a page is placed in and tries to minimize the page fault rate while incurring minimum overhead. As newer memory access patterns were explored, research mainly focused on formulating newer approaches to page replacement which could adapt to changing workloads. This paper attempts to summarize the major page replacement algorithms. We look at traditional algorithms such as Optimal replacement, LRU and FIFO, and also study recent approaches such as Aging, ARC and CAR.

Index Terms- Page Replacement, Optimal Replacement, LRU, FIFO, ARC, CAR, Aging.

INTRODUCTION
The full potential of multiprogramming systems can be realized by interleaving the execution of more programs. Hence we use a two-level memory hierarchy consisting of a faster but costlier main memory and a slower but cheaper secondary memory.
With virtual memory, the combined size of program code, data and stack may exceed the amount of main memory available in the system. This is made possible by using secondary memory in addition to main memory [1]. Pages are brought into main memory only when the executing process demands them; this is known as demand paging.
A page fault typically occurs when a process references a page that is not marked present in main memory and needs to be brought in from secondary memory. In such a case an existing page needs to be discarded. The selection of such a page is performed by page replacement algorithms, which try to minimize the page fault rate at the least overhead.
This paper outlines the major advanced page replacement algorithms. We start with basic algorithms such as optimal page replacement, LRU and FIFO, and move on to the more advanced ARC, CAR and Aging algorithms.


PAGE REPLACEMENT ALGORITHMS
A. Optimal Algorithm
The Optimal page replacement algorithm is easy to describe: when memory is full, always evict the page that will remain unreferenced for the longest time. This scheme, of course, is possible to implement only in a second, identical run, by recording page usage on the first run. In general the operating system does not know which pages will be used, especially in applications receiving external input; the content and the exact time of the input may greatly change the order and timing in which the pages are accessed. Nevertheless it gives us a reference point for comparing practical page replacement algorithms. This algorithm is often called OPT or MIN.
B. Least Recently Used (LRU)
The LRU policy is based on the principle of locality which states that program and data references within a process tend to cluster.
The Least Recently Used replacement policy selects that page for replacement which has not been referenced for the longest time. For
a long time, LRU was considered to be the best online policy. The problem with this approach is the difficulty of implementation. One approach would be to tag each page with the time of its last reference; this would have to be done at each memory reference, both instruction and data. The LRU policy does nearly as well as an optimal policy, but it is difficult to implement and imposes significant overhead [3].
The result on scan data is as follows.

Algorithm Ref count Page count Page faults Hit count Hit ratio
LRU 16175 7150 10471 5704 63.20%



Scan data: page fault ratio using LRU
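For illustration, LRU can be simulated with an ordered dictionary that keeps pages in order of last reference; this is our own minimal sketch, not the code used to produce the trace results above:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()          # LRU at the front, MRU at the end

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)    # hit: mark as most recently used
            return True
        if len(self.pages) == self.frames:
            self.pages.popitem(last=False)  # evict the least recently used page
        self.pages[page] = None
        return False                        # page fault
```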

C. First In First Out (FIFO)
The simple First-In, First-Out (FIFO) algorithm is also applicable to page replacement. All pages in main memory are kept in a list where the newest page is at the head and the oldest at the tail. When a page needs to be evicted, the page at the tail (the oldest) is swapped out and the new page is inserted at the head of the list. Another implementation uses a ring (usually referred to as a clock): every time a page has to be replaced, the page the pointer points at is swapped out and the new page is swapped in at the same place. After this, the pointer moves to the next page. The FIFO algorithm's performance is rather bad [2].
The result on scan data is as follows:

Algorithm Ref count Page count Page faults Hit count Hit ratio
FIFO 16175 7150 11539 4636 51.37%


Scan data: page fault ratio using FIFO


D. Adaptive Replacement Cache (ARC)
The Adaptive Replacement Cache (ARC) is an adaptive page replacement algorithm developed at the IBM Almaden Research Center
[4]. The algorithm keeps a track of both frequently used and recently used pages, along with some history data regarding eviction for
both. ARC maintains two LRU lists: L1 and L2. The list L1 contains all the pages that have been accessed exactly once recently, while
the list L2 contains the pages that have been accessed at least twice recently. Thus L1 can be thought of as capturing short-term utility
(recency) and L2 can be thought of as capturing long term utility (frequency). Each of these lists is split into top cache entries and
bottom ghost entries. That is, L1 is split into T1 and B1, and L2 is split into T2 and B2. The entries in T1 union T2 constitute the
cache, while B1 and B2 are ghost lists. These ghost lists keep a track of recently evicted cache entries and help in adapting the
behavior of the algorithm. In addition, the ghost lists contain only the meta-data and not the actual pages. The cache directory is thus
organized into four LRU lists:
1. T1, for recent cache entries
2. T2, for frequent entries, referenced at least twice
3. B1, ghost entries recently evicted from the T1 cache, but are still tracked.
4. B2, similar ghost entries, but evicted from T2
If the cache size is c, then |T1| + |T2| = c. Suppose |T1| = p,

then |T2| = c - p. The ARC algorithm continually adapts the value of parameter p depending on whether the current workload favors
recency or frequency. If recency is more prominent in the current workload, p increases; while if frequency is more prominent, p
decreases (c - p increases).
Also, the size of the cache directory, |L1| + |L2| = 2c.
For a fixed p, the algorithm for replacement is as follows:
1. If |T1| > p, replace the LRU page in T1
2. If |T1| < p, replace the LRU page in T2
3. If |T1| = p and the missed page is in B1, replace the LRU page in T2
4. If |T1| = p and the missed page is in B2, replace the LRU page in T1
The adaptation of the value of p is based on the following idea: If there is a hit in B1 then the data stored from the point of view of
recency has been useful and more space should be allotted to store the least recently used one time data. Thus, we should increase the
size of T1 for which the value of p should increase. If there is a hit in B2 then the data stored from the point of view of frequency was
more relevant and more space should be allotted to T2. Thus, the value of p should decrease. The amount by which p should deviate is
given by the relative sizes of B1 and B2.
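The case analysis above condenses into a short simulation. The sketch below is our own simplified rendering of ARC's request handling; the integer-step adaptation of p approximates the fractional rule of [4], and all names are assumptions:

```python
from collections import OrderedDict

class ARC:
    """Simplified ARC simulation; T1/T2 form the cache, B1/B2 the ghost lists."""
    def __init__(self, c):
        self.c, self.p = c, 0
        self.T1, self.T2 = OrderedDict(), OrderedDict()  # LRU at front, MRU at end
        self.B1, self.B2 = OrderedDict(), OrderedDict()

    def _replace(self, page):
        # Evict the LRU page of T1 or T2 into the matching ghost list, steered by p.
        if self.T1 and (len(self.T1) > self.p or
                        (page in self.B2 and len(self.T1) == self.p)):
            old, _ = self.T1.popitem(last=False); self.B1[old] = None
        else:
            old, _ = self.T2.popitem(last=False); self.B2[old] = None

    def request(self, page):
        if page in self.T1 or page in self.T2:           # cache hit: promote to T2
            self.T1.pop(page, None); self.T2.pop(page, None)
            self.T2[page] = None
            return True
        if page in self.B1:                              # ghost hit favouring recency
            self.p = min(self.c, self.p + max(1, len(self.B2) // len(self.B1)))
            self._replace(page); self.B1.pop(page); self.T2[page] = None
            return False
        if page in self.B2:                              # ghost hit favouring frequency
            self.p = max(0, self.p - max(1, len(self.B1) // len(self.B2)))
            self._replace(page); self.B2.pop(page); self.T2[page] = None
            return False
        l1 = len(self.T1) + len(self.B1)                 # complete miss
        total = l1 + len(self.T2) + len(self.B2)
        if l1 == self.c:
            if len(self.T1) < self.c:
                self.B1.popitem(last=False); self._replace(page)
            else:
                self.T1.popitem(last=False)              # drop LRU of T1 outright
        elif total >= self.c:
            if total == 2 * self.c:
                self.B2.popitem(last=False)
            self._replace(page)
        self.T1[page] = None
        return False
```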

E. CLOCK with Adaptive Replacement (CAR)
CAR attempts to merge the adaptive policy of ARC with the implementation efficiency of CLOCK [5]. The algorithm maintains four doubly linked lists T1, T2, B1 and B2. T1 and T2 are CLOCKs while B1 and B2 are simple LRU lists. The concept behind these lists is the same as that for ARC. In addition, the pages in the cache, i.e. those in T1 and T2, have a reference bit that can be set or reset.
The precise definitions of the four lists are as follows:
1. T1^0 and B1 contain all the pages that have been referenced exactly once since their most recent eviction from T1 ∪ T2 ∪ B1 ∪ B2, or that were never referenced before since their inception (T1^0 denotes the pages in T1 whose reference bit is 0).
2. T1^1, T2 and B2 contain all the pages that have been referenced more than once since their most recent eviction from T1 ∪ T2 ∪ B1 ∪ B2.
The two important constraints on the sizes of T1, T2, B1 and B2 are:
1. 0 ≤ |T1| + |B1| ≤ c. By definition, T1 ∪ B1 captures recency. The sizes of the sets of recently accessed and frequently accessed pages keep changing. This constraint prevents pages which are accessed only once from taking up the entire cache directory of size 2c, since an increasing size of T1 ∪ B1 indicates that the recently referenced pages are not being referenced again, which in turn means the stored recency data is not helpful; only the frequently used pages are re-referenced or new pages are being referenced.
2. 0 ≤ |T2| + |B2| ≤ 2c. If only a fixed set of pages is being accessed frequently, there are no new references, and the cache directory holds information regarding frequency only.

F. Aging
The aging algorithm is somewhat tricky: it uses a bit field of w bits for each page in order to track its access profile. Every time a page is read, the first (i.e. most significant) bit of the page's bit field is set. Every n instructions, all pages' bit fields are right-shifted by one bit. The next page to replace is the one with the lowest (numerical) value of its bit field. If several pages have the same value, an arbitrary page is chosen. The aging algorithm works very well in many cases, and sometimes even better than LRU, because it looks beyond the last access. It is furthermore rather easy to implement, because there are no expensive actions to perform when reading a page. However, finding the page with the lowest bit field value usually takes some time; thus it might be necessary to predetermine the next page to be swapped out in the background [6].
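As an illustration, one aging step and the victim selection might look like the sketch below (our own; the counter width w, the data structures and the names are assumptions):

```python
def aging_tick(counters, referenced, w=8):
    # Right-shift every page's w-bit counter, then set the most significant
    # bit of each page referenced during the last interval.
    top = 1 << (w - 1)
    for page in counters:
        counters[page] >>= 1
        if page in referenced:
            counters[page] |= top
    referenced.clear()

def aging_victim(counters):
    # The page with the lowest counter value is the replacement candidate.
    return min(counters, key=counters.get)
```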


ANALYSIS
Offline performance of the algorithms is measured as page fault count and hit ratio.
The hit ratio (hr) is calculated as

hr = 100 − mr

and the miss ratio (mr) as

mr = 100 × ((#pf − #distinct) / (#refs − #distinct))

where #pf is the number of page faults, #distinct is the number of distinct pages used in the trace, and #refs is the number of references in the trace.
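In code, the two ratios are a direct transcription of the formulas above (our own illustration, names assumed):

```python
def hit_ratio(page_faults, distinct, refs):
    # mr = 100 * ((#pf - #distinct) / (#refs - #distinct)); hr = 100 - mr.
    mr = 100.0 * (page_faults - distinct) / (refs - distinct)
    return 100.0 - mr
```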



CONCLUSION
The evolution of replacement algorithms shows that analysis and proof of better performance have moved from mathematical analysis to testing against real-world program traces. This trend shows how difficult it is to mathematically model the memory behavior of programs. An important factor is also the large amount and easy availability of important programs. The other clear trend is the realization of the need for workload adaptation. The simple traces used here support the conclusions of the authors. CAR and ARC seem the most promising algorithms and offer significant improvement over basic CLOCK. Page replacement plays only a small part in the overall performance of applications, but studies have shown that the benefits are real. It certainly seems like a worthwhile idea to further evaluate implementations of both CAR and ARC in a real operating system.

REFERENCES:
[1] A. S. Sumant and P. M. Chawan, "Virtual Memory Management Techniques in 2.6 Linux kernel and challenges," IACSIT International Journal of Engineering and Technology, pp. 157-160, 2010.
[2] Heikki Paajanen, "Page replacement in operating system memory management," Master's thesis, University of Jyväskylä, 2007.
[3] Amit S. Chavan, Kartik R. Nayak, Keval D. Vora, Manish D. Purohit and Pramila M. Chawan, "A Comparison of Page Replacement Algorithms," IACSIT, vol. 3, no. 2, April 2011.
[4] N. Megiddo and D. S. Modha, "ARC: A Self-Tuning, Low Overhead Replacement Cache," IEEE Transactions on Computers, pp. 58-65, 2004.
[5] S. Bansal and D. Modha, "CAR: Clock with Adaptive Replacement," FAST '04: Proceedings of the 3rd USENIX Conference on File and Storage Technologies, pp. 187-200, 2004.
[6] Mohd Zeeshan Farooqui, Mohd Shoaib and Mohammad Zunnun Khan, "A Comprehensive Survey of Page Replacement Algorithms," IJARCET, vol. 3, issue 1, January 2014.
















Color Image Segmentation with Different Image Segmentation Techniques
Rupali B. Nirgude¹, Shweta Jain¹
¹ Pune University, Gyanba Sopanrao Moze College of Engineering, Balewadi, Pune, India
E-mail: rupali.nirgude@gmail.com

Abstract - This paper deals with different image segmentation techniques to enhance the quality of color images. The technique follows the principle of clustering and region merging algorithms. The system is a combination of various stages: histogram analysis with the hill climbing technique, auto clustering using K-means clustering, a consistency test of regions, and automatic image segmentation using the dynamic region merging algorithm. The different techniques of image segmentation include thresholding, clustering, region merging, region growing, color segmentation, motion segmentation and automatic image segmentation. This paper presents a method for efficient segmentation which is a combination of different algorithms. First the given image is converted into a histogram, a graphical representation of the input image. The peaks in the histogram are detected using the hill climbing algorithm; this gives the rough number of clusters for the further steps. The clusters are formed using an efficient K-means clustering algorithm. Regions having homogeneous or similar characteristics are then combined using the nearest neighbor graph and the dynamic region merging algorithm. This segmentation technique is useful in the field of image processing as well as in advanced medical applications.
Keywords - DP, NNG, K-means, SPRT, RAG, hill climbing technique, DRM.
INTRODUCTION
Image quality is an important issue since the use of images for practical purposes is increasing day by day. Image segmentation collects the useful pieces of the image [2] and uses them according to the application. There are different methods to segment the data; an efficient combination of methods is used here for better segmentation results. The resulting segmented image is useful for a variety of applications. The image segmentation operation follows certain properties or attributes like intensity of colour, edge pattern, colour hue, edges, texture, etc. [1]








Fig 1. Original image with segmented image
As shown in the figure, the segmented image is the output of the system, an improved image. The system uses an automatic image segmentation technique; the best example of automatic image segmentation is the use of dynamic region merging. The basic goal of these image enhancement techniques is to improve the images so that they will be better suited as input to image analysis. The system first converts the given input image into the corresponding histogram; secondly, an auto clustering operation is used for the detection of the peaks. The detected peaks give the number of clusters to be formed as an input for the actual clustering. The image then gets





converted into clusters by using the hill climbing algorithm. The homogeneous features are captured with K-means clustering. The actual merging is performed with the help of the region adjacency graph, the nearest neighbour graph and dynamic region merging. This total system gives an efficient output, the segmented image, which is useful from the engineering field to the medical field.
2. LITERATURE SURVEY
The literature suggests various methods for image segmentation; this paper suggests a combination of various methods which is beneficial from the efficiency point of view. Following are some methods of image segmentation:
1. Thresholding: This is one of the most useful and easy to use methods. This method separates the given input data into different subparts according to its features: one subpart with positive characteristics and another with negative characteristics. As shown in the following diagram, if we consider color as a feature, this method divides the input image into black and white partitions. [6] This operation is shown diagrammatically as follows:











Fig 2. Input image and threshold effect on input image
2. Clustering:
Clustering is the grouping of similar types of data. The clusters of colors are formed with the help of various clustering techniques such as log based clustering, fuzzy clustering, and K-means (KM) [7] clustering. Of these, this paper uses K-means clustering. The input to the clustering algorithm is K, the number of clusters, and all the data points are randomly assigned to the clusters. The procedure is repeated, continuously computing the distance between the centroids and the data points. K-means clustering is a very well-known method to group the similar elements of a given image.













Fig 3. Conversion of original image into K-means segmentation
3. Automatic image segmentation
This is the most advanced method of image segmentation. The dynamic region merging algorithm [9] and the watershed algorithm [8] are famous examples of automatic image segmentation. In this process, the closest regions are merged together to form the output segmented image. The regions are represented by labels, and these labels are transferred from the initial to the final labeling; regions get merged if largely homogeneous characteristics are found, and this procedure continues up to a stopping criterion.






Fig 4. Segmentation of original image in region merging style











3. OVERVIEW OF SYSTEM

The overall pipeline of the system is: original image → histogram peak detection (hill climbing technique) → K-means clustering (image into clusters) → consistency test (SPRT) → region merging (NNG & DRM algorithm) → segmented image.

1. Hill climbing Technique
This algorithm is used at the initial stage of our system. It has the unique property of detecting the peaks of a given histogram. The algorithm [3][4] is as follows:
- Obtain the histogram of the given color image.
- Start from an initial point of the color histogram and move uphill towards a peak.
- If the numbers of pixels of the neighboring bins differ, the algorithm moves uphill towards the larger one.
- If the neighboring bins have the same number of pixels, the algorithm follows the next neighboring bins, and the process continues.
- At the last stage the histogram gives the number of peaks, which gives the number of clusters as input for the cluster formation. The hill climbing process is shown diagrammatically below:













Fig. 4 Hill climbing process: (a) input image; (b) hill-climbing process; (c) histogram showing 3 peaks; (d) output segmented image.
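A minimal sketch of the peak-detection idea is given below (our own illustration; for simplicity it treats a plateau as a peak instead of following equal neighbours as the steps above describe):

```python
def histogram_peaks(hist):
    # Climb uphill from every bin; the set of distinct local maxima reached
    # gives the number of clusters K for the next stage.
    peaks = set()
    for start in range(len(hist)):
        i = start
        while True:
            neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(hist)]
            best = max(neighbors, key=lambda j: hist[j])
            if hist[best] > hist[i]:
                i = best            # keep moving uphill
            else:
                peaks.add(i)        # local maximum reached
                break
    return sorted(peaks)
```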
2. K means clustering: K-means clustering is an algorithm to divide and merge objects, based on some features, into K groups. The grouping is based on the squared distances between the data points of the image and their nearest cluster centroid, and the process is iterated until the final iteration [7].
The algorithm for K-means is as follows:
- Take the number of clusters K as input.
- Compute the centroids.
- Calculate the distance of the objects to the centroids.
- Assign each object to the centroid at minimum distance.
- Continue up to the stopping criterion. The color clusters are formed at the output stage.






Fig 5. K-means clustering
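A compact K-means sketch over colour pixels could look like the following (our own illustration; the iteration count and random initialization are assumptions, with K supplied by the peak-detection stage):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    # pixels: (N, 3) array of colour values; k: number of clusters.
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Distance of every pixel to every centroid, then nearest assignment.
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids
```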
3. Sequential probability ratio test: The consistency of neighbouring regions is checked using the SPRT test [5]. This test identifies similar characteristics according to various attributes like intensity, edge, etc. At the initial stage, consider two hypotheses to check whether the regions are similar or not.
Result = valid: if the neighboring regions are the same in the desired features, this is called the valid hypothesis.


Result = not valid: if the neighboring regions are different, or have very contradictory features, this is called the invalid hypothesis.







Fig.6 Consistency Test
The SPRT algorithm works as follows:
- Consider a sequence of S regions.
- Form (A, B) as the decision boundaries.
- The sequence of successive log-likelihood ratios (δ) is calculated.
- If this ratio goes out of the range (B, A), the test stops; otherwise the test is carried on.
The algorithm for the consistency test is as below.
Inputs: A = log((1 − β)/α), B = log(β/(1 − α)), where α, β are the probabilities of decision error.
- The distributions of the visual cues are given by P0(x|θ0) and P1(x|θ1), calculated as

P0(x|θ0) = Z⁻¹ exp( −(I_b − I_(a+b))^T S_I^(−1) (I_b − I_(a+b)) )

P1(x|θ1) = Z⁻¹ exp( −(I_b − I_a)^T S_I^(−1) (I_b − I_a) )

where Z is a normalizing constant, I_a and I_b are the visual cues of the two regions, I_(a+b) is that of their union, and S_I is the covariance of the cues.
- Choose k pixels of the neighboring regions.
- Calculate the log-likelihood ratio δ = log(P0(x|θ0)/P1(x|θ1)).
- Update δ = δ + log(P0(x|θ0)/P1(x|θ1)) over the chosen samples.
- If δ ≥ A, the regions are consistent.
- If δ ≤ B, the regions are not consistent.
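The sequential decision itself is compact; the sketch below is our own generic rendering of Wald's test, where each element of ratios is a per-sample log-likelihood ratio log(P0(x|θ0)/P1(x|θ1)):

```python
import math

def sprt(ratios, alpha, beta):
    # Accumulate log-likelihood ratios until one of Wald's bounds is crossed.
    A = math.log((1 - beta) / alpha)    # accept the "consistent" hypothesis
    B = math.log(beta / (1 - alpha))    # accept the "inconsistent" hypothesis
    delta = 0.0
    for r in ratios:
        delta += r
        if delta >= A:
            return "consistent"
        if delta <= B:
            return "inconsistent"
    return "undecided"                  # ran out of samples before a decision
```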
4. Nearest neighbor graph
This algorithm is used to speed up the actual merging. The nearest neighbor graph structure is as shown below:




Fig 7. NNG process



As shown in the above digraph, two regions can be merged directly if they are found similar in the consistency test; there is no need to scan the whole image. Thus the speed of the process is greatly increased.
5. Dynamic region merging algorithm
The dynamic region merging algorithm [1][9] is optimal in the sense that it neither over-merges nor under-merges. It gives an optimum solution as it follows the principle of dynamic programming. The algorithm divides the merging task into subproblems, with each region assigned a label, and moves from the initial labels to the final labels by finding the minimum edge weight. If the algorithm finds the minimum weight, the regions are merged, up to a stopping criterion. The dynamic region merging algorithm gives automatic image segmentation.








Fig. 8 Dynamic region merging process as a shortest path in a layered graph. (Upper row) The label transitions of a graph node. (Lower row) The corresponding image regions of each label layer. Starting from layer 0, the highlighted region (in red) obtains a new label from its closest neighbor (in red). If the region is merged with its neighbor, they will be assigned the same label. The shortest path is shown as the group of directed edges (in blue).
4. SOFTWARE DEVELOPMENT
Interactive software was developed for reliable monitoring and management of the segmentation process. The system software is made using MATLAB 10. We first implement the hill climbing technique and K-means clustering on the plain color image, and then apply the consistency test using SPRT, the dynamic region merging algorithm and the nearest neighbor graph on the color image. This operation is entirely in software. In the proposed DRM method, there are five parameters that control the consistency condition. While implementing the system, four of these parameters are fixed: α, β, λ1 and λ2. Here (α, β) represent the probabilities of accepting an inconsistent model as consistent and of rejecting a consistent model as inconsistent, and m is used to decide the amount of data selected for the random test. If we set λ2 = 1, then only λ1 is the user input, which can be varied.
RESULT
The following images show the output results:




Input image




Result image:

ACKNOWLEDGMENT
I would like to thank all the staff members of E&TC Department at Genba Sopanrao Moze College of Engineering, Baner, Pune for
their valuable guidance and support.
Also I would like to thank Prof. Shweta Jain and Prof. Bina Chauhan from the E&TC Department at Genba Sopanrao Moze College of Engineering, Baner, Pune for their valuable guidance and support.
CONCLUSION
Thus in this paper we studied different image segmentation techniques at different stages. Algorithms like the hill climbing algorithm and the K-means algorithm are used for auto clustering. The region consistency is checked by the sequential probability ratio test. The

nearest neighbor graph and dynamic region merging algorithm combination gives efficient and enhanced output image. Thus total
system makes use of variety of algorithms to get segmented image.

REFERENCES:
[1] Bo Peng, Lei Zhang and David Zhang, "Automatic Image Segmentation by Dynamic Region Merging," IEEE Transactions on Image Processing, vol. 20, no. 12, December 2011.
[2] D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Englewood Cliffs, NJ: Prentice-Hall, 2002.
[3] D. Comaniciu and P. Meer, "Mean Shift: A Robust Approach Toward Feature Space Analysis," IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(5), pp. 1-18, May 2002.
[4] E. J. Pauwels and G. Frederix, "Finding Salient Regions in Images: Non-parametric Clustering for Image Segmentation and Grouping," Journal of Computer Vision and Understanding, 75(1,2), pp. 73-85, 1999.
[5] A. Wald, Sequential Analysis, 3rd ed., Hoboken, NJ: Wiley, 1947.
[6] National Programme on Technology Enhanced Learning, http://nptel.iitm.ac.in/courses/106105032/38.
[7] S. Thilagamani and N. Shanthi, "A Survey on Image Segmentation through Clustering," International Journal of Research and Reviews in Information Sciences, Vol. 1, No.
[8] R. Bellman, Dynamic Programming, Princeton, NJ: Princeton Univ. Press, 1957.
[9] L. Vincent and P. Soille, "Watersheds in digital spaces: An efficient algorithm based on immersion simulations," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 6, pp. 583-598, Jun. 1991.













Secured Communication for Missile Navigation
Kulkarni Laxmi G¹, Dawande Nitin A¹
¹ P.G. Scholar, Department of Electronics and Telecommunication, Dr. D.Y. Patil College of Engg, Ambi
E-mail: kulkarnilaxmi.g@gmail.com

Abstract - This work is proposed in order to improve the security of military networks. Here the missile navigates to the position required by the user. The user sends the co-ordinates through a PC based server at the base station. For security purposes, encryption is done with an RC4 algorithm implementation. The system, which uses Human-Computer Interaction and Visualization technology, provides several encryption algorithms and key generators.
Keywords - missile navigation, RC4 algorithm, VNC, PN sequence, USB, encryption
INTRODUCTION
In today's world, enemy warfare is an important factor of any nation's security. National security mainly depends on the army (ground), navy (sea) and air force (air). An important and vital role is played by the army's artillery, such as Scud missiles, Bofors guns, etc.
As the name suggests, we are building a secure navigation system for a missile using RC4-based encryption. This is done with the use of an encryption key, which specifies how the message is to be encoded. An authorized party is able to decode the cipher text using a decryption algorithm, which usually requires a secret decryption key that adversaries do not have access to.
There are various types of encryption, such as AES, DES and the RC4 algorithm. Encryption has long been used by militaries and governments to facilitate secret communication. An encryption scheme based on chaos and the AES algorithm is given in [1], where the design and realization of an encryption system is based on an ARM (S3C6410) implementation of the algorithm, which can encrypt and decrypt the information in many kinds of memorizers, such as U-Disk, SD card and mobile HDD. The system, which uses Human-Computer Interaction and Visualization technology, provides several encryption algorithms and key generators. In that paper, the authors designed and implemented an encryption system to encrypt the stored data based on ARM (S3C6410). PN sequences with good properties are generated from a chaotic map, and the system provides two kinds of encryption algorithm: one is a stream cipher with an XOR operation, the other is a hybrid algorithm of AES and chaos. In order to improve the security of the private information in the memorizer, an encryption algorithm which inherits the advantages of chaotic encryption, stream ciphers and the AES algorithm is proposed in that paper.
The chaotic selective encryption of compressed video (CSECV) exploits the characteristics of the compressed video [2]. Encryption is needed to protect multimedia data. Compared with text encryption, multimedia encryption has some unique characteristics, such as the large size, high throughput and real-time processing. An efficient, secure and lightweight encryption algorithm is desirable to protect the compressed video. A video clip is generally compressed in a transform domain with some type of entropy coding. To protect a compressed video, encryption techniques can be applied to the original data, such as block swapping, or the data can be transformed using DCT or wavelet coefficients, entropy-coded bit streams, or format headers. The encryption has three separate layers that can be selected according to the security needs of the application and the processing capability of the client computer. The chaotic pseudo-random sequence generator used to generate the key-sequence to randomize the important fields in the compressed video stream has its parameters encrypted by an asymmetric cipher and placed into the stream. The resulting stream is still a valid video stream. CSECV has significant advantages over existing algorithms for security, decryption speed, implementation flexibility and error preservation.
The paper presents the design and implementation of a software application for the provision of secure real time communication
services between workstations, based on the AES prototype cryptographic algorithm and an advanced secret key management system
[3]. The application has been designed based on the requirements of a military unit, so as to allow groups of authenticated users to

communicate and read the transmitted messages. This application can be used as the basis for the design of an integrated
communication system for a military organization. The present design confines its operation within the limits of a local area network,
but the possibilities are open for operation in extended networks or the internet.
The Advanced Encryption Standard (AES) is the most secure symmetric encryption technique that has gained worldwide acceptance. "FPGA implementations of Advanced Encryption Standard: a survey" presents the AES based on the Rijndael algorithm, which is an efficient cryptographic technique that includes the generation of ciphers for encryption and inverse ciphers for decryption [4]. Higher security and speed of encryption/decryption are ensured by operations like SubBytes (S-box)/Inv. SubBytes (Inv. S-box), MixColumns/Inv. MixColumns and key scheduling. Extensive research has been conducted into the development of the S-box/Inv. S-box and MixColumns/Inv. MixColumns on dedicated ASICs and FPGAs to speed up the AES algorithm and to reduce circuit area. This is an attempt to survey, in detail, the work conducted in the aforesaid fields. The prime focus is on FPGA implementations of optimized novel hardware architectures and algorithms.
Fault attacks are powerful and efficient cryptanalysis techniques to find the secret key of the Advanced Encryption Standard (AES)
algorithm [5]. The paper shows that these attacks are based on injecting faults into the structure of the AES to obtain the confidential
information. To protect the AES implementation against these attacks, a number of counter measures have been proposed. In this
paper, a fault detection scheme for the Advanced Encryption Standard is proposed. They present its detailed implementation in each
transformation of the AES. The simulation results show that the fault coverage achieves 99.999% for the proposed scheme. Moreover,
the proposed fault detection scheme has been implemented on Xilinx Virtex-5 FPGA. Its area overhead and frequency degradation
have been compared and it is shown that the proposed scheme achieves a good performance in terms of area and frequency.
2. PROPOSED WORK

2.1 Block Diagram

In this project, I am trying to make the navigation of a missile secure using RC4-based encryption. The main application of the project is to navigate the missile to the position required by the user.
The user sends the co-ordinates through a PC based server at the base station. The co-ordinates consist of two parts: first the circular (angular) co-ordinates and then the linear co-ordinates. The base station PC sends these co-ordinates through a pen drive to the field station.
After receiving the co-ordinates, the field station compares them with the on-board DC motor position. It drives the DC motors of the buggy's tyres until the current co-ordinates and the received co-ordinates match, after which the buggy follows the linear co-ordinates sent by the user. In this way the missile can be navigated to the destination.
After the connection has been made, the user first has to enter the password. Then the user can enter the co-ordinates for the missile navigation. After entering the X and Y co-ordinates, the user can send the codes to the missile unit.

Figure 2.1 Block diagram of secured communication for missile navigation


Liquid Crystal Display:
LCD is used in a project to visualize the output of the application. 16x2 LCD is used which indicates 16 columns and 2 rows.
So, we can write 16 characters in each line. So, total 32 characters we can display on 16x2 LCD.

An LCD can also be used in a project to check the output of different modules interfaced with the microcontroller. Thus the LCD plays a vital role in the project: to see the output and to debug the system module-wise in case of system failure, in order to rectify the problem.
Pen drive interface:
The pen drive is one of the most commonly used devices nowadays. This device is used to store data via USB interfaced devices like computers, laptops or other USB hub devices.
The VNC1 is a device which is used for mapping the files on the pen drive. It provides basic DOS-like commands and access to all the file functions like copy, paste, store, delete, cut, etc.
With the help of the VNC1, we can perform all these basic file functions without using a computer; we can control all of them through the VNC.

2.2 Encryption method used

Encryption is the process of encoding messages (or information) in such a way that third parties cannot read them, but authorized parties can. Encryption doesn't prevent hacking, but it prevents the hacker from reading the data that is encrypted. In an encryption scheme, the message or information (referred to as plaintext) is encrypted using an encryption algorithm, turning it into unreadable cipher text. This is usually done with the use of an encryption key, which specifies how the message is to be encoded. Any adversary that can see the cipher text should not be able to determine anything about the original message.

2.2.1 RC4 Algorithm

In this algorithm the key stream is completely independent of the plaintext used. There is an 8 x 8 S-box (S0 ... S255), where each of the entries is a permutation of the numbers 0 to 255, and the permutation is a function of the variable-length key. Two counters, i and j, both initialized to 0, are used in the algorithm.

Fig 2.2.1 RC4 Algorithm
Algorithm Features:
- Uses a variable length key from 1 to 256 bytes to initialize a 256-byte state table. The state table is used for subsequent
generation of pseudo-random bytes and then to generate a pseudo-random stream which is XORed with the plaintext to give
the cipher text. Each element in the state table is swapped at least once.
- The key is often limited to 40 bits, because of export restrictions but it is sometimes used as a 128 bit key. It has the
capability of using keys between 1 and 2048 bits. RC4 is used in many commercial software packages such as Lotus Notes
and Oracle Secure SQL.
- The algorithm works in two phases, key setup and ciphering. Key setup is the first and most difficult phase of this encryption
algorithm. During a N-bit key setup (N being your key length), the encryption key is used to generate an encrypting variable
using two arrays, state and key, and N-number of mixing operations. These mixing operations consist of swapping bytes,
modulo operations, and other formulas. A modulo operation is the process of yielding a remainder from division. For
example, 11/4 is 2 remainder 3; therefore eleven mod four would be equal to three.
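The two phases map directly onto code. The following sketch is a standard textbook RC4 (KSA followed by PRGA), given for illustration rather than as the exact implementation flashed to the hardware in this project:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S under the variable-length key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because the keystream is simply XORed with the data, running the same function with the same key over the cipher text recovers the plaintext.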

3. EXPERIMENTAL RESULTS


First the pen drive is detected by the system; after detection of the drive, the angle and position are entered with the help of the user interface, as shown below.

Fig. 6.1 Directions entered onto the pen drive using the user interface
When we connect the pen drive to the controller, the missile navigates as per the data entered on the pen drive, and the display is shown on the LCD.


Fig.6.2 Display on the LCD
5. CONCLUSIONS
The goal of this paper is to form a secured communication channel for missile navigation. In military applications, security of data is the most important factor. Here I have tried to illustrate secured communication with the help of an encryption method and the VNC, which is useful for interfacing the pen drive, with which the missile can be navigated as per the instructed directions. This is done by entering the position and angle of the missile and giving directions (forward/reverse, left/right) through the user interface.
The algorithm used for the encryption is simple and easy. There are various types of encryption algorithms which are useful in many applications. Of these, the RC4 algorithm is the easiest to implement, but it is also comparatively easy to crack.

ACKNOWLEDGEMENTS
I would like to thank all the staff members of the E&TC Department, Dr. D. Y. Patil College of Engineering, Ambi, for their support.


















WSN for Agricultural Monitoring & Development
Piyusha D. Patil, Prof. N. A. Dawande
E-mail- piyushapatil23@gmail.com

Abstract - Taking into account the rapidly increasing population of India, it is becoming difficult to fulfill the basic needs of mankind. One solution to this issue is to increase agricultural productivity in terms of both quantity and quality. Unfortunately, farmers are affected by unhealthy climates for their crops, which degrades agricultural produce in both quantity and quality. A system that helps the farmer monitor climatic conditions on a regular basis would let him analyze the data and take preventive actions accordingly. In this article we implement a system to monitor the environmental conditions and to control them as far as possible. The parameters to be monitored include temperature, light, humidity, soil moisture, motion detection, etc. While data is being collected, the system itself takes action to maintain a healthy climate for the crops. Whatever action is taken by the system is immediately reported to the farmer via SMS. In case the farmer does not need the climate to be maintained, he can refuse the automatic action taken by the system through SMS. In this way the farmer has control over the farm's climate at all times and from anywhere. To implement this design we use a PIC 18F4520 microcontroller, a sensor block, a CC2500 RF module and a SIM900D GSM module. Here we implement wireless sensor nodes designed using the RF module. These nodes collect information about the farm's environmental conditions. On the receiver side, the RF receiver receives the data and transfers it to the operator's computer, where it is stored. If the temperature rises above a certain level that would be harmful to the crop, the microcontroller switches the fan ON until the temperature is restored. Likewise, if the soil moisture falls below the required level, the controller switches the motor ON for the required amount of time. Every action taken by the system is reported to the farmer through SMS using the GSM module. In this way the farmer has control over the farm's conditions.
Keywords WSN, Radio Frequency Module, PIC Microcontroller, Environmental Parameters, GSM Module, Automatic Preventive
Actions, Control Through SMS
INTRODUCTION
Here we are going to implement a system to help farmers monitor environmental conditions. This system can also maintain the farm's climate so that crops grow in a healthy environment. In this way the design will help farmers increase the quantity of agricultural produce, and by default the quality will also be maintained.
We are all aware of the increasing population and the degradation of agricultural products due to a polluted environment. These two issues badly affect the fulfillment of the basic needs of, especially, the lower-class population. One solution to minimize this issue is to concentrate on the development of the agricultural sector by using different techniques.
We know that a wireless sensor network has several advantages: it minimizes complexity, and wireless systems are easy to handle, cost efficient, low power, easy to install and small in size, so nowadays the technology has become more popular and is being used over a wide range. Due to the above-mentioned benefits, WSNs are used effectively in the military, healthcare, domestic and agricultural sectors.
A WSN is made up of a number of wireless nodes connected to a central operator. These networks can range from a simple star network to complex multi-hop wireless mesh networks; the type of network can be decided as per our requirement. Here the range of a radio frequency node is up to 30 meters, so a number of nodes are used to collect data from the whole area to be monitored. The data collected includes information regarding temperature, moisture, humidity, obstacle detection, soil moisture, etc. This data is sensed by the different sensors that cover different areas of the farm. The collected data is sent to the central PC via the RF transceiver; on the other end it is received by the RF transceiver and passed to the PC, where it is stored. The collection and storing of data is done on a regular basis. The collected data is analyzed by the microcontroller to check whether it is within the safe limit or not. If it is safe then no action is taken; if it is unsafe then a preventive action is taken by the microcontroller. Whatever action is taken is reported to the farmer via SMS, sent using the GSM module. In this way, by using the concept of WSN for monitoring & development, we not only monitor the environmental conditions but also maintain them as far as possible. So this system will help the farmer improve the productivity of the farm.

2.LITERATURE REVIEW
In the last couple of years many researchers have focused on agricultural development with the help of wireless sensor networks. As we know, the agricultural sector plays a most important role in the Indian economy as well as in the common man's day-to-day life; in India 70% of the population is engaged in agriculture. Traditional methods of developing agricultural land have several drawbacks and are mostly time-consuming processes. But nowadays technology has developed tremendously, which can be helpful in obtaining better results from the agricultural sector.
This can be achieved by providing healthy environmental conditions for the agricultural land, so it is necessary to monitor the climatic conditions of the land. In ref. [1], Herman Sahota, Ratnesh Kumar and Ahmed Kamal implemented a WSN for agriculture using a MAC protocol for multiple power modes as well as for synchronization between nodes. In ref. [2], Xin Yue, Haifeng Ma and Yantao Wang used ZigBee technology to monitor the climatic conditions of a coal mine. In ref. [3], Joobin Gharibshah, Seyed Morsal Ghavami, Mohammadreza Beheshtifar and Reza Farshi used a neural network for monitoring & sensing drought conditions in Iran. In ref. [4], Sahota H, Kumar R, Kamal A and Huang J designed energy-efficient nodes where data is collected periodically.
From the above overview we came to know that using a WSN we can monitor the environment of a greenhouse. The node size must be as small as possible so that the nodes can serve many particular applications; also, the sensor nodes have limited power, processing and computing resources. The decision-making unit is used to process the necessary actions for the sensors to sense the environment.
The devices are mostly based on an event-driven model to work efficiently within the constrained memory. Wireless sensor networks consist of tiny devices that usually have several resource constraints in terms of energy, processing power and memory [2]. Miniaturization and continuous advancements in wireless technology have made the development of sensor networks to monitor various aspects of the environment increasingly possible. The concept of wireless sensor networks is based on a simple equation:
Sensing + CPU + Radio frequency nodes = Thousands of potential applications
As soon as people understand the capabilities of a wireless sensor network, hundreds of applications come to mind. It is a very good combination of modern technologies to emerge in recent years. An effective wireless sensor network requires a combination of sensors, radios and CPUs, with a proper understanding of both the capabilities and limitations of each of the underlying hardware components, as well as a correct understanding of modern networking technologies and distributed systems theory. Whether a wireless sensor is powered by a battery or an energy-scavenging module, the prime concern is power efficiency.
3.OVERVIEW OF THE SYSTEM
In our system the following main equipment plays an important role in the design:
- One master PC terminal
- Three slave terminals
- RF module CC2500

The basic idea is to design a number of nodes to cover different parts of the farm. We place three slaves in such a way that they are always within range of the PC master. On the master we use PHP software, in which we maintain all the information regarding the farm conditions along with the node number.

















Figure[1] Transmitter architecture

The above architecture is a single unit for a single area. Likewise, we design different modules for different areas. Each of these modules is provided with a unique identity number or code. As we can see in the above block diagram, we have used different sensors to monitor the environmental conditions, and we have also provided a relay that operates the fan & motor depending on the requirement, i.e., on the data collected from the sensors.







Figure[2] Receiver side
4.BLOCK DIAGRAM
As shown in the above block diagram, i.e. fig.[1], the slaves are transmitter nodes to which the different sensors are connected and which collect their data. This collected data is analyzed by the PIC microcontroller and is also sent to the master PC, where all the data is maintained. By analyzing this data, the PIC decides which preventive actions have to be taken to maintain the required climate for the particular plant.
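A minimal C sketch of this decision step might look as follows; the threshold values and the relay/GSM helper functions are illustrative assumptions, not the actual firmware:

/* Hypothetical thresholds and helpers -- assumptions for illustration. */
#define TEMP_MAX_C     38   /* assumed upper safe temperature for the crop */
#define SOIL_MOIST_MIN 30   /* assumed minimum soil moisture, in percent */

extern void relay_fan(int on);               /* drives the fan via the relay */
extern void relay_motor(int on);             /* drives the water pump motor  */
extern void gsm_send_sms(const char *msg);   /* reports the action via GSM   */

/* One control cycle: compare the sensed values against the safe limits
 * and take the preventive action, informing the farmer by SMS. */
void control_cycle(int temp_c, int soil_moist_pct)
{
    if (temp_c > TEMP_MAX_C) {
        relay_fan(1);
        gsm_send_sms("Fan ON: temperature above safe limit");
    } else {
        relay_fan(0);
    }

    if (soil_moist_pct < SOIL_MOIST_MIN) {
        relay_motor(1);
        gsm_send_sms("Motor ON: soil moisture below required level");
    } else {
        relay_motor(0);
    }
}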
CC2500 RF MODULE
It is a radio transceiver that provides RF communication at 2.4 GHz. It transmits and receives data at a 9600 baud rate. Being half duplex, it provides communication in both directions, but in only one direction at a time. It supports the following features:
- Supports multiple baud rates (9600)
- Works on the ISM band (2.4 GHz)
- Designed to be as easy to use as cables
- No external antenna required
- Plug and play device
- Works on a 5 V DC supply
[Block labels from Fig. 1 and Fig. 2: temperature, CO2, humidity, water-level, motion and light sensors; PIC 18F4520 microcontroller; CC2500 RF transmitter and receiver modules; ULN2003 driver for the DC motor and cooling fan; master PC via RS-232.]
MASTER PC
In this project the master PC maintains the collected data. This is done using PHP software. The monitored data is displayed on the PC screen from time to time, which provides information to the operator and helps the operator maintain the climatic conditions. The host terminal PC is connected via the RF transceiver module and RS-232 communication. The RF transceiver of a wireless sensor network can pass signals through walls and can be deployed where a wired network is difficult to establish & maintain. The advancement of wireless technology makes it possible to establish a network by placing the communicating nodes at the required places and switching on their transmitters. An RF transceiver can cover an area of up to 30 m, so by using a number of nodes the whole area can be covered.
COLLISION AVOIDANCE PROTOCOL
As we know, a slave sends a request to the master, and the master gives a response to the slave's request. But it may happen that several slaves send requests at the same time, in which case a collision may occur during communication. To avoid such critical situations we use a master-request and slave-response protocol. The master sends a request frame in which the slave ID is provided. This request is forwarded to all slaves; the request frame is received and stored by every slave. If the slave ID in the frame matches a slave's own ID, that slave sends a response to the master in the form of collected parameters like temperature, humidity, etc. If the ID sent by the master does not match its own ID, the request from the master is ignored.
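A minimal C sketch of the slave-side ID check described above is given below; the frame layout, the sensor helpers and the ID value are hypothetical assumptions:

#include <stdint.h>

#define MY_SLAVE_ID 0x02u   /* unique ID assigned to this node (assumed) */

/* Hypothetical frame layouts for the master request and slave response. */
typedef struct { uint8_t slave_id; } request_t;
typedef struct { uint8_t slave_id, temp, humidity, soil; } response_t;

extern uint8_t read_temp(void);
extern uint8_t read_humidity(void);
extern uint8_t read_soil_moisture(void);

/* Called for every request frame received over the CC2500 link.
 * Returns 1 and fills *resp when the frame is addressed to this node;
 * returns 0 (stay silent, so no collision) for any other slave ID. */
int handle_request(const request_t *req, response_t *resp)
{
    if (req->slave_id != MY_SLAVE_ID)
        return 0;
    resp->slave_id = MY_SLAVE_ID;
    resp->temp     = read_temp();
    resp->humidity = read_humidity();
    resp->soil     = read_soil_moisture();
    return 1;
}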
5.HARDWARE DESIGN
The hardware components used to implement this system are summarized as follows:
1. PIC 18F4520 (LPC 2138)
2. Radio frequency transreceiver CC2500
3. Temperature Sensor [LM35D]
4. Light Sensor[LDR]
5. Humidity SHT75
6. Motion sensor
7. Level Sensor
8. 2.4 GHz SMA antenna
9. RS232
10. Relay ULN 2003
6.ACKNOWLEDGMENT
We would like to express our sincere thanks to our guide Prof. N. A. Dawande for his valuable guidance. We would like to thank our M.E. coordinator Prof. R. Sathyanarayan for his support & co-operation throughout the seminar work. We thank our Head of the Department, Prof. M. M. Mukhedkar, for his complete support, references and valuable suggestions. We are grateful to all teaching and non-teaching staff of the E&TC Engineering department of Dr. D. Y. Patil College of Engineering, Ambi, Pune, for their help.

7.CONCLUSION
The aim of this project is to monitor the environmental conditions of a farm or greenhouse, to sense the water availability from the water resource, and to provide all the information to the central PC, as well as to control or maintain the environmental conditions by taking immediate preventive action. In this way, by using a WSN, we can monitor & maintain the environmental conditions of a farm or greenhouse efficiently.

REFERENCES:
[1] Herman Sahota, Ratnesh Kumar and Ahmed Kamal, "A wireless sensor network for precision agriculture and its performance", Wirel. Commun. Mob. Comput. 2011; 11:1628-1645, published online 2011 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/wcm.1229
[2] Xin Yue, Haifeng Ma, Yantao Wang (Comput. & Inf. Eng. Coll., Heilongjiang Inst. of Sci. & Technol., Harbin, China), "Design of coal mine gas monitoring system based on zig-bee", 2011 International Conference on Future Computer Science & Education.
[3] Joobin Gharibshah, Seyed Morsal Ghavami, Mohammadreza Beheshtifar, and Reza Farshi, "Nationwide Prediction of Drought Conditions in Iran Based on Remote Sensing Data", IEEE Transactions on Computers, Vol. 63, No. 1, January 2014.
[4] Sahota H, Kumar R, Kamal A, Huang J, "An energy-efficient wireless sensor network for precision agriculture", in Proceedings IEEE Symposium on Computers and Communications, IEEE Computer Society: Riccione, Italy, June 2010; 347-350. [Online]. Available: http://doi.ieeecomputersociety.org/10.1109/ISCC.2010.5546508
[5] Sahota H, Kumar R, Kamal A, "Performance modeling and simulation studies of MAC protocols in sensor network performance", in Proceedings International Conference on Wireless Communications and Mobile Computing, ACM: Istanbul, Turkey, July 2011.
[6] Zamalloa M. Z., Seada K, Krishnamachari B, Helmy A, "Efficient geographic routing over lossy links in wireless sensor networks", ACM Transactions on Sensor Networks, June 2008; 4: 12:1-12:33. [Online]. Available: http://doi.acm.org/10.1145/1362542.1362543
[7] Lee S, Choi J, Na J, Kim C-k, "Analysis of dynamic low power listening schemes in wireless sensor networks", Communications Letters, January 2009; 43-45. [Online]. Available: http://portal.acm.org/citation.cfm?id=1650422.1650437
[8] Bianchi G, "Performance analysis of the IEEE 802.11 distributed coordination function", IEEE Journal on Selected Areas in Communications 2000; 18: 535-547.
[9] Rusli M, Harris R, Punchihewa A, "Markov chain-based analytical model of opportunistic routing protocol for wireless sensor networks", in TENCON 2010 - 2010 IEEE Region 10 Conference, November 2010; 257-262.
[10] A. H. Weerts, J. Schellekens, and F. S. Weiland, "Real-Time Geospatial Data Handling and Forecasting: Examples from Delft-FEWS Forecasting Platform/System", IEEE J. Selected Topics in Applied Earth Observations and Remote Sensing, vol. 3, no. 3, pp. 386-394, Sept. 2010.
[11] A. Diouf and E. F. Lambini, "Monitoring Land-Cover Changes in Semi-Arid Regions: Remote Sensing Data and Field Observations in the Ferlo, Senegal", J. Arid Environments, vol. 48, pp. 129-148, 2001.
[12] A. J. Peters, E. A. Walter-Shea, L. Ji, A. Viña, M. Hayes, and M. D. Svoboda, "Drought Monitoring with NDVI-Based Standardized Vegetation Index", Photogrammetric Eng. and Remote Sensing, vol. 68, pp. 71-75, 2002.
[13] C. Gouvia, R. M. Trigo, and C. C. DaCamara, "Drought and Vegetation Stress Monitoring in Portugal Using Satellite Data", Natural Hazards and Earth System Sciences, vol. 9, pp. 185-195, 2009.
[14] J. D. Bolten, W. T. Crow, X. Zhan, T. J. Jackson, and C. A. Reynolds, "Evaluating the Utility of Remotely Sensed Soil Moisture Retrievals for Operational Agricultural Drought Monitoring", IEEE J. Selected Topics in Applied Earth Observations and Remote Sensing, vol. 3, no. 1, pp. 57-66, Mar. 2010.
[15] C. M. Rulinda, A. Dilo, W. Bijker, and A. Steina, "Characterising and Quantifying Vegetative Drought in East Africa Using Fuzzy Modelling and NDVI Data", J. Arid Environments, vol. 78, pp. 169-178, 2012.

Application of Grey Based Design of Experiment Technique in Optimization of
Charpy Impact Testing
Md. Shadab¹, Rahul Davis²
¹Scholar, Department of Mechanical Engineering, SHIATS University
²Assistant Professor, Department of Mechanical Engineering, SHIATS University

Abstract - The mechanical properties of different materials are determined by conducting various designed experimental runs, which should correspond to the actual working and operating conditions. In this phenomenon the type of applied load(s), its duration and the working conditions play a vital role. Engineering materials are always subjected to external loadings, so it is of great significance if the effect of these loadings can be quantified. In the current research work an attempt was made to optimize the process parameters, with the help of surface treatments, in order to maximize the impact toughness and minimize the hardness of EN 31 steel. For this purpose the grey-based design of experiment method was used, and the results were validated graphically and analytically. The obtained results show that the height of the hammer affected the impact toughness significantly; on the other hand, the thermal treatment was the most influential factor affecting the material's hardness.
Keywords Impact Value, ANOVA, Heat Treatment, Cryogenic Treatment.
INTRODUCTION
As part of a government project during World War II, the United States planned continuous block construction of all-welded cargo vessels (DWT 11000, Liberty ships). Construction started with the outbreak of the Pacific war in 1942, and 2708 Liberty ships were constructed from 1939 to 1945. By April 1, 1946, 1031 ships damaged due to brittle fracture had been reported, and more than 200 Liberty ships sank or were damaged beyond repair. These events mark the start of the discipline of fracture mechanics [1]. The Schenectady was one of those ships; it broke in two with a loud sound while moored at the wharf. AASHTO introduced a fracture control plan [2] in the aftermath of the Silver Bridge collapse in 1967 due to brittle fracture. All these investigations concluded that the fractures were due to a lack of understanding of the ductile-to-brittle transition [1,3]. The accidents were caused by the initiation and growth of brittle cracks, owing to the lack of fracture toughness of the welded joints, and they amounted to some of the most extensive and large-scale experiments of the century. The accidents showed the importance of fracture toughness, which marked the birth of fracture mechanics. Recently, many industries and researchers have shown interest in cryogenic treatment (CT). Cryogenic treatment is an extension of conventional heat treatment (CHT) which converts retained austenite to martensite [4]. Lipson (1967) studied the effect of cryogenic treatment on grain size and suggested that cryogenic treatment reduces grain size by 1-4%; this refinement of the grain structure increases the toughness of the specimens. Cryogenically treated materials show enhanced mechanical properties. CT brings about thermal instability of the martensite by supersaturating it with carbon, which further leads to the migration of carbon atoms and atoms of alloying elements to the nearby lattice defects, where they segregate [5]. Cryogenic treatment improves not only the toughness but also the microstructure of the material, and it decreases residual stresses. The use of cryogenic treatment to enhance the properties of tool materials has recently received broad acceptance from researchers and industries. The research publications of the past two decades show an increasing interest in the use of cryogenic treatment on various cutting tool materials, die materials and bearing materials to exploit the positive effects of such a simple and cost-effective technique. Improvements in hardness, fatigue resistance, toughness, and wear resistance of cryogenically treated materials have been reported invariably in every scientific publication.
HEAT TREATMENT SEQUENCE FOR MAXIMIZING MARTENSITE TRANSFORMATIONS
The complete treatment process of the steels consists of austenitizing, annealing, cryo-treatment or deep cryogenic treatment (DCT), and tempering. To achieve a better microstructure of the steel with the most preferred properties, most researchers recommend executing DCT after completion of austenitizing and before tempering in the conventional heat-treatment cycle, as shown in Fig. 1. The complete process sequentially consists of the steps austenitizing, annealing, cryogenic treatment and tempering.
Conventional heat treatment consists of annealing and tempering, while deep cryogenic treatment adds a further low-temperature treatment cycle to the conventional heat treatment process. Arockia Jaswin et al. [6] determined that the cooling rates for EN 52 and 21-4N valve steels are 1 °C/min and 1.5 °C/min respectively. A. Joseph Vimal et al. [7] state that cryogenic treatment refers to cooling EN31 steel to a sub-zero temperature of 90 K in 3 hours, soaking at that temperature for 24 hours, and allowing it to attain room temperature in another 6 hours.
The various heat treatment cycles are indicated in Fig. 1 below:

Raw material (EN 31) → Annealing → Tempering → Cryogenic Treatment → Low tempering / Medium tempering / High tempering

Fig. 1: Thermal Treatments
GREY RELATIONAL ANALYSIS
Grey relational analysis, proposed by Deng in 1989, is widely used for measuring the degree of relationship between sequences by means of the grey relational grade. Grey relational analysis is applied by several researchers to optimize control parameters having multiple responses through the grey relational grade. The use of grey relational analysis to optimize operations with multiple performance characteristics includes the following steps:
Identify the performance characteristics and impact parameters to be evaluated.
Determine the number of levels for the process parameters.
Select the appropriate orthogonal array and assign the parameters to the orthogonal array.
Perform the grey relational generation and calculate the grey relational coefficients.
Analyze the experimental results using the grey relational grade.
A. Data Pre-Processing:

In grey relational analysis, data pre-processing is the first step, performed to normalize the random grey data with different measurement units and to transform them into dimensionless parameters. Thus, data pre-processing converts the original sequences to a set of comparable sequences. As the original sequence data has a larger-the-better quality characteristic, the original data is pre-processed as larger-the-better:

$$x_i^{*}(k) = \frac{x_i^{(0)}(k) - \min x_i^{(0)}(k)}{\max x_i^{(0)}(k) - \min x_i^{(0)}(k)} \qquad (1)$$

where $x_i^{*}(k)$ is the comparable sequence, and $\min x_i^{(0)}(k)$ and $\max x_i^{(0)}(k)$ are the minimum and maximum values respectively of the original sequence $x_i^{(0)}(k)$.
B. Grey Relational Grade
The next step is the calculation of the deviation sequence $\Delta_{0i}(k)$ from the reference sequence $x_0^{*}(k)$ of the pre-processed data and the comparability sequence $x_i^{*}(k)$. The grey relational coefficient is calculated from the deviation sequence using the following relation:

$$\xi_i(k) = \frac{\Delta_{\min} + \zeta \Delta_{\max}}{\Delta_{0i}(k) + \zeta \Delta_{\max}} \qquad (2)$$

where $\Delta_{0i}(k)$ is the deviation sequence of the reference sequence $x_0^{*}(k)$ and the comparability sequence $x_i^{*}(k)$:

$$\Delta_{0i}(k) = \left| x_0^{*}(k) - x_i^{*}(k) \right| \qquad (3)$$

$$\Delta_{\max} = \max_{i,k} \Delta_{0i}(k), \qquad \Delta_{\min} = \min_{i,k} \Delta_{0i}(k) \qquad (4)$$

$\zeta$ is the distinguishing coefficient; its value is chosen to be 0.5.
The grey relational grade indicates the degree of influence between the comparability sequence and the reference sequence. If a particular comparability sequence has more influence on the reference sequence than the others, the grey relational grade for that comparability and reference sequence will exceed the other grey relational grades. Hence, the grey relational grade is an accurate measurement of the absolute difference in data between sequences and can be applied to estimate the correlation between sequences.
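As an illustration of equations (1)-(4), the following C sketch (the array size and the single-response simplification are our assumptions) normalizes one larger-the-better sequence and computes its grey relational coefficients; with a single response the coefficient equals the grade:

#include <stdio.h>

#define N    27    /* number of experimental runs, as in Table 4 */
#define ZETA 0.5   /* distinguishing coefficient, as chosen above */

/* Grey relational grade of a single larger-the-better response.
 * Assumes the sequence is not constant (hi > lo). */
void grey_grade(const double y[N], double grade[N])
{
    double lo = y[0], hi = y[0];
    for (int i = 1; i < N; i++) {            /* min and max of the sequence */
        if (y[i] < lo) lo = y[i];
        if (y[i] > hi) hi = y[i];
    }
    for (int i = 0; i < N; i++) {
        double x     = (y[i] - lo) / (hi - lo);  /* eq. (1): normalization */
        double delta = 1.0 - x;                  /* eq. (3): deviation from the ideal x0 = 1 */
        /* eq. (2), with delta_min = 0 and delta_max = 1 after normalization */
        grade[i] = ZETA / (delta + ZETA);
    }
}

Applied to the impact values in Table 4, this reproduces the grades in Table 7; e.g., run 1 (normalized value 0) gives 0.5/(1 + 0.5) = 0.3333 and run 27 gives 1.0000.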
EXPERIMENTAL DETAILS AND RESULTS
Design of Experiment (DOE)
It is a method based on statistics [8] and other disciplines for arriving at a well-organized and efficient planning of experiments, with a view to obtaining valid conclusions from the analysis of experimental data [9]. The design of experiment (DOE) is done in such a way as to find the parameters that will improve the performance characteristics to an acceptable or optimum value. It is also kept in mind that the design should enable us to find a less expensive alternative design, material, or method which provides equal performance. Depending on the situation, experiments are carried out and different strategies are implemented.
The experiment carried out is based on the principle of Orthogonal Arrays (OAs). This principle [10] states that factors can be evaluated independently of one another; the effect of one factor does not disturb the estimation of the effect of another factor. DOE is a balanced experiment: an equal number of samples under the various treatment conditions.

The control parameters were selected for the planned research work for multiple performance characteristics, at three different levels and with three different factors, and are shown in Table 1 below:
Table 1: Different Factors and their Levels for Annealing EN 31

Factors | Level 1 | Level 2 | Level 3
Notch angle (A) | 30° | 45° | 60°
Thermal Treatment (B) | Cooling followed by Tempering (CT) | Cooling followed by Cryogenic Treatment & Tempering (CCTT) | Cooling followed by Tempering & Cryogenic Treatment (CTCT)
Height of the Hammer (C), mm | 1370 | 1570 | 1755

In this paper the effect of thermal treatments was studied along with three impact-test parameters to maximize the impact toughness of EN31 steel. The aim of the experiment is to find the optimum impact value by combining all the parameters, i.e., notch angle, thermal treatment, and height of the hammer, at their different levels.
The material chosen in this work was given various thermal treatments: specimens were subjected to conventional heat treatment and deep cryogenic treatment separately.
Table 2: Different Heat Treatments Employed to EN 31 Steel

Sr. No. | Nomenclature | Thermal Treatment
1 | ACTLTT | Annealing (810 °C for 1 hr) followed by Cryogenic Treatment & Low Temperature Tempering (250 °C for 1 hr)
2 | ACTMTT | Annealing (810 °C for 1 hr) followed by Cryogenic Treatment & Medium Temperature Tempering (400 °C for 1 hr)
3 | ACTHTT | Annealing (810 °C for 1 hr) followed by Cryogenic Treatment & High Temperature Tempering (550 °C for 1 hr)
Chemical composition of EN31 steel
The chemical composition test of EN 31 steel was performed in the Metal Testing Laboratory, Indian Railways, Bareilly, India. The details of the composition are shown below.
Table 3: Chemical Composition of EN 31 Steel

Sl. No | Composition | Percentage
1 | C% | 1.10
2 | Mn% | 0.46
3 | Si% | 0.22
4 | Cr% | 1.08
5 | S% | 0.023
6 | P% | 0.026
Design of experiment is an effective tool to design and conduct experiments with minimum resources. An orthogonal array is a statistical method of defining parameters that converts test areas into factors and levels. Test design using an orthogonal array creates an efficient and concise test suite with fewer test cases, without compromising test coverage. In this paper, an L27 standard orthogonal array design matrix was used to set the control parameters and evaluate the process performance. Table 4 shows the design matrix used in this work.
Charpy Impact Test
The Charpy impact test, also known as the Charpy V-notch test, is a standardized high-strain-rate test which determines the amount of energy absorbed by a material during fracture. This absorbed energy is a measure of a given material's notch toughness and acts as a tool to study the temperature-dependent ductile-brittle transition.
The Charpy impact test is practical for the assessment of brittle fracture of metals and is also used as an indicator to determine suitable service temperatures. The Charpy test sample has a size of (10 × 10 × 55) mm³ with a V-notch of 30°, 45°, or 60° and 2 mm depth, and is hit by a pendulum at the end opposite to the notch.


Fig: 2 Dimension of the Specimen Fig: 3 Charpy Impact Test machine
ANALYSIS OF RESULTS
Experiments were carried out using the L27 standard orthogonal array design matrix with three levels of the process parameters. Altogether, 27 specimens were tested with different thermal treatments. All specimens followed the pattern of annealing followed by cryogenic treatment & tempering.
For the deep cryogenic treatment, a sub-zero temperature of -196 °C was employed. The impact values reflect the combined effect of the test parameters according to the orthogonal array.
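For reference, the SNRA1 column of Table 4 is consistent with the larger-the-better signal-to-noise ratio (this is our inference from the tabulated values, not stated explicitly in the source):

$$\frac{S}{N} = -10 \log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right) \;\xrightarrow{\;n=1\;}\; 20 \log_{10}(y)$$

For example, the first run with y = 95 J gives 20 log10(95) ≈ 39.5545 dB, matching the tabulated value.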
Table 4: Results of Experimental Trials
Notch Angle (degree)   Thermal Treatment   Height of the Hammer (mm)   Impact Value (J)   SNRA1
30 Tempering 1370 95 39.5545
30 Tempering 1570 59 35.4170
30 Tempering 1755 13 22.2789
30 Cryogenic Treatment followed by Tempering 1370 92 39.2758
30 Cryogenic Treatment followed by Tempering 1570 56 34.9638
30 Cryogenic Treatment followed by Tempering 1755 14 22.9226
30 Tempering followed by Cryogenic Treatment 1370 94 39.4626
30 Tempering followed by Cryogenic Treatment 1570 59 35.4170
30 Tempering followed by Cryogenic Treatment 1755 14 22.9226
45 Tempering 1370 95 39.5545
45 Tempering 1570 52 34.3201
45 Tempering 1755 12 21.5836
45 Cryogenic Treatment followed by Tempering 1370 94 39.4626
45 Cryogenic Treatment followed by Tempering 1570 55 34.8073
45 Cryogenic Treatment followed by Tempering 1755 15 23.5218
45 Tempering followed by Cryogenic Treatment 1370 85 38.5884
45 Tempering followed by Cryogenic Treatment 1570 58 35.2686
45 Tempering followed by Cryogenic Treatment 1755 12 21.5836
60 Tempering 1370 88 38.8897
60 Tempering 1570 52 34.3201
60 Tempering 1755 15 23.5218
60 Cryogenic Treatment followed by Tempering 1370 85 38.5884
60 Cryogenic Treatment followed by Tempering 1570 60 35.5630
60 Cryogenic Treatment followed by Tempering 1755 12 21.5836
60 Tempering followed by Cryogenic Treatment 1370 80 38.0618
60 Tempering followed by Cryogenic Treatment 1570 61 35.7066
60 Tempering followed by Cryogenic Treatment 1755 8 18.0618

All experiments were performed on an impact testing machine with an energy range of 0-300 J, manufactured by Fuel Instruments and Engineers Private Ltd. The response variable measured was the impact value in Joules. Typically, higher impact values are desirable; thus the data sequences have the larger-the-better characteristic, and the larger-the-better methodology was used.
Using grey relational analysis, data pre-processing was performed to normalize the random grey data with different measurement units and convert them to dimensionless parameters. This converts the original sequences to a set of comparable sequences.
Table 5: Data Pre-Processing Result
Sr. No. Impact Value (J)
1 0.0000
2 0.4137
3 0.9425
4 0.0344
5 0.4482
6 0.9310
7 0.0114
8 0.4137
9 0.9310
10 0.0000
11 0.4942
12 0.9540
13 0.0114
14 0.4597
15 0.9195
16 0.1149
17 0.4252
18 0.9540
19 0.0804
20 0.4942
21 0.9195
22 0.1149
23 0.4022
24 0.9540
25 0.1724
26 0.3908
27 1.0000
Table 6: Deviation sequences
Sr. No. Impact value (J)
1 1.0000
2 0.5863
3 0.0575
4 0.9656
5 0.5518
6 0.0690
7 0.9886
8 0.5863
9 0.0690
10 1.0000
11 0.5058
12 0.0460
13 0.9886
14 0.5403
15 0.0805
16 0.8851
17 0.5748
18 0.0460
19 0.9196
20 0.5058
21 0.0805
22 0.8851
23 0.5978
24 0.0460
25 0.8276
26 0.6092
27 0.0000
Table 7: Calculation of Grey Relational Grade
Sr. No. A B C Grade
1 1 1 1 0.3333
2 1 1 2 0.4602
3 1 1 3 0.8968
4 1 2 1 0.3411
5 1 2 2 0.4753
6 1 2 3 0.8787
7 1 3 1 0.3358
8 1 3 2 0.4602
9 1 3 3 0.8787
10 2 1 1 0.3333
11 2 1 2 0.4971
12 2 1 3 0.9157
13 2 2 1 0.3358
14 2 2 2 0.4806
15 2 2 3 0.8613
16 2 3 1 0.3609
17 2 3 2 0.4652
18 2 3 3 0.9157
19 3 1 1 0.3522
20 3 1 2 0.4971
21 3 1 3 0.8613
22 3 2 1 0.3609
23 3 2 2 0.4554
24 3 2 3 0.9157
25 3 3 1 0.3766
26 3 3 2 0.4507
27 3 3 3 1.0000

Table No.-8 Response table for Grey Relational Grade for Factors
Levels A B C
1 0.5622 0.5718 0.3477
2 0.5739 0.5672 0.4713
3 0.5855 0.5826 0.9026
Table no.-9 Response table for Signal to Noise Ratios of Impact Values at different Levels of the Parameters
Level   Notch Angle (degree)   Thermal Treatment   Height of the Hammer (mm)

1 32.47 32.30 39.05
2 32.08 32.16 35.09
3 31.59 31.67 22.00
Delta 0.88 0.62 17.05
Rank 2 3 1


Table no.-10 ANOVA Table for main effect for Signal to Noise ratio
Source DF Adj SS Adj MS F-Value P-Value
Notch Angle (degree) 2 3.50 1.748 1.63 0.221
Thermal Treatment 2 1.93 0.966 0.90 0.422
Height of the Hammer (mm) 2 1433.26 716.631 667.50 0.000
Error 20 21.47 1.074
Total 26 1460.16
In Table no. 10, a factor with a P-value less than 0.05 is considered significant. So, the height of the hammer, with a P-value of 0.000, is the significant factor.
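The F-values in the ANOVA tables follow the usual ratio of factor mean square to error mean square; as a worked check on Table no. 10:

$$F = \frac{\text{Adj MS}_{\text{factor}}}{\text{Adj MS}_{\text{error}}} = \frac{716.631}{21.47/20} \approx 667.5$$

which matches the tabulated 667.50 for the height of the hammer.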

Table no.-11 Response table for Means for Impact Values at different Levels of the Parameters
Level Notch Angle (Degree) Thermal Treatment Height of the Hammer(mm)
1 55.11 53.67 89.78
2 53.11 53.44 56.89
3 51.22 52.33 12.78
Delta 3.89 1.33 77.00
Rank 2 3 1
Table no.-12 ANOVA Table for main effect for Means
Source DF Adj SS Adj MS F-Value P-Value
Notch Angle (degree) 2 68.1 34.0 2.36 0.120
Thermal Treatment 2 98.2 4.6 0.32 0.731
Height of the
Hammer (mm)
2 26869.4 13434.7 930.57 0.000
Error 20 288.7 14.4
Total 26 27235.4
In Table no. 12, a factor with a P-value less than 0.05 is considered significant. So, the height of the hammer, with a P-value of 0.000, is the significant factor.

Figure no.-4 Main Effects plot for means

Figure no.-5 Main Effects plot for SN ratios
According to Fig. 4 and Fig. 5
As per the observations of the above experimental trial runs, the following results can be drawn and discussed in terms of graphical analysis: both plots indicate that at the 1st level of notch angle (30°) the impact value obtained is maximum. Similarly, at the 1st level of thermal treatment (Cryogenic Treatment followed by Tempering) and at the 1st level of height of the hammer (1370 mm) respectively, the impact value obtained is highest.
Acknowledgment
My reverential thanks to our Vice Chancellor Prof. (Dr.) R. B. Lal, SHIATS, for providing me with an elite academic platform. I express my sincere gratitude to Er. Rahul Davis (Assistant Professor, Dept. of Mechanical Engg.) for his valuable guidance, painstaking effort and constant support during my work. I am deeply indebted to my father Md. Quasim, my mother Mrs. Zubaida Khatoon and my siblings for their constant prayer and support, inspirational encouragement and moral support, which enabled me to study and to perform my research work.


CONCLUSION
The present research work has successfully verified the application of grey relational analysis for multi-objective optimization of process parameters in impact testing of EN 31 steel. The conclusions drawn from this research paper are as follows:
1. The highest grey relational grade of 1.0000 was observed for experimental run 27, shown in table no. 7 of the average grey relational grade, which indicates that the optimal combination of control factors and their levels was a 60° notch angle, a hammer height of 1755 mm and a thermal treatment of tempering followed by cryogenic treatment.
2. This research work can also be utilized for further studies in future.

REFERENCES:
Kobayashi, Hideo, and Onoue, Hisahiro, "Brittle Fracture of Liberty Ships", March 1943.
AASHTO, "Guide Specification for Fracture Critical Non-Redundant Steel Bridge Members", Washington DC, American Association of State Highway and Transportation Officials, 1978.
Website: http://www.sozogaku.com/fkd/en/cfen/CB1011020.html
S. Harisha, Bensely, D. Mohan Lal, A. Rajadurai, Gyongyver B. Lenkey, "Microstructural study of cryogenically treated En 31 bearing steel", Journal of Materials Processing Technology 209, 2009.
V. Firouzdor, E. Nejati, F. Khomamizadeh, "Effect of Deep Cryogenic Treatment on Wear Resistance and Tool Life of M2 HSS Drill", Journal of Materials Processing Technology 206, 2008, 467-472.
M. Arockia Jaswin, D. Mohan Lal, "Effect of cryogenic treatment on the tensile behavior of EN 52 and 21-4N valve steels at room and elevated temperatures", Materials and Design, 2010.
A. Joseph Vimal, A. Bensely, D. Mohan Lal, K. Srinivasan, "Deep cryogenic treatment improves wear resistance of EN 31 steel".
Raghuraman S, Thirupathi K, Panneerselvam T, Santosh S, "Optimization of EDM Parameters Using Taguchi Method and Grey Relational Analysis for Mild Steel IS 2026", International Journal of Innovative Research in Science, Engineering and Technology, Vol. 2, Issue 7, 2013.
Rahul H. Naravade, U. N. Gujar, R. R. Kharde, "Optimization of Cryogenic Treatment on Wear Behaviour of D6 Tool Steel by using DOE/RSM", International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, Volume 2, Issue 2, December 2012.
P. J. Ross, "Taguchi Techniques for Quality Engineering", 2nd edition, Tata McGraw-Hill Publishing Company Ltd, New York, 2005.
A. Bensely, D. Senthilkumar, D. Mohan Lal, G. Nagarajan, A. Rajadurai, "Effect of Cryogenic Treatment on Tensile Behavior of Case Carburized Steel-815 M17".
A. D. Wale, Prof. V. D. Wakchaure, "Effect of Cryogenic Treatment on Mechanical Properties of Cold Work Tool Steels", International Journal of Modern Engineering Research, Vol. 3, Issue 1, pp. 149-154.
A. Molinari, M. Pellizzari, S. Gialanella, G. Straffelini, K. H. Stiasny, "Effect of Deep Cryogenic Treatment on the Mechanical Properties of Tool Steels", Journal of Materials Processing Technology, 118, 350-355.
Dhinakarraj C. K., Senthil Kumar N., Mangayarkarasi P., "Combined Grey Relational Analysis and Methods for Optimization of Process Parameters in Cylindrical Grinding".
Dong, Y. Lin, X. Xiao, "Deep Cryogenic Treatment of High-Speed Steel and its Mechanism", Heat Treatment of Metals, 3, 55-59.






















Design of Low Area and Low Power Modified 32-BIT Square Root Carry
Select Adder
Garima Singh¹
¹Scholar, School of Electronics, Center of Development of Advance Computing, Noida, India
E-mail- er.garimasngh@gmail.com

Abstract- In digital circuitry, a compact and fast adder is required to carry out computations in large chips. The Carry Select Adder (CSLA) is one of the fast adders used in many data-processing processors to perform fast arithmetic functions. Although the carry select adder is slower than the carry look-ahead adder, its area is smaller. From the structure of the CSLA, there is scope for reducing its area and power consumption. This work uses a simple and efficient gate-level modification to significantly reduce the area and power of the CSLA. Based on this modification, a 32-bit square-root CSLA (SQRT CSLA) architecture has been developed and compared with the 32-bit conventional SQRT CSLA architecture.
The modification is the use of Binary-to-Excess-1 Converter logic instead of the chain of full adders for the carry-equals-1 case. This logic has a smaller number of gates than the design without the Binary-to-Excess-1 Converter logic. The design was checked on ModelSim 6.4a and synthesized on Xilinx ISE Design Suite 14.3; the power was calculated with the Xilinx Power Estimator tool, and the area comparison was done in terms of LUTs. The proposed design has reduced area and power compared with the conventional SQRT CSLA, with only a slight increase in delay. This work evaluates the performance of the design in terms of area and power. The result analysis shows that the modified SQRT CSLA structure is quantitatively superior to the conventional SQRT CSLA in terms of area and power.

Keywords- SQRT CSLA, Modified CSLA, BEC-1, RCA, XILINX ISE Design Suite 14.3, Verilog, VLSI, ModelSim 6.4a, XILINX Power Estimator.
INTRODUCTION
In today's digital circuitry, an adder is required in the data path which consumes less area and power with comparable speed. The carry select adder has less area than the carry look-ahead adder but is slower; it requires more area and consumes more power than the ripple carry adder but offers good speed. Adders in circuits or systems occupy a large area and consume a lot of power, as large additions are performed in advanced processors and systems. The adder is one of the key hardware blocks in arithmetic and logic units (ALU) and digital signal processing (DSP) systems. The DSP applications where an adder plays an important role include convolution, digital filtering as in the Discrete Fourier Transform (DFT) and Fast Fourier Transform (FFT), digital communications and spectral analysis. The performance depends on the power consumed during the addition operation [2][3].
There is a need in the VLSI market for low-area and low-power adders, so a modified adder is needed. The SQRT CSLA is used in many digital systems; it independently generates multiple carries and then selects a carry to generate the sum. The CSLA is not area efficient because it uses multiple pairs of ripple carry adders (RCA) to generate partial sums and carries by considering carry inputs Cin = 0 and Cin = 1; the final sum and carry are then selected by multiplexers, as proposed by O. Bedrij [1].
The SQRT CSLA has been chosen for comparison with the proposed design as it has a more balanced delay, and requires lower power and area [5][6]. The goal of low area and power is achieved by using a Binary to Excess-1 Converter (BEC) instead of the RCA with Cin = 1 in the conventional CSLA. The advantage of the BEC-1 logic is its smaller number of logic gates compared to the n-bit full adder (FA) structure; due to the fewer logic gates used in the BEC, there is less power consumption.
Section II deals with the various types of adders and the delay and area calculation methodology of the carry select adder with the BEC-1. Section III describes the area analysis before and after the modification of the adder. Section IV deals with the results and the comparison of the 32-bit conventional square root carry select adder with the modified square root carry select adder. Section V describes the simulation and synthesis results of both architectures. Section VI is the conclusion.

CARRY SELECT ADDER
Internal architecture of a 4-bit carry select adder:

Figure 1: 4-bit carry select adder

In Fig 1 the following operations are performed:
- Two ripple carry adder chains are used in parallel to calculate the sum for carry 0 and carry 1.
- The previous carry selects the carry-in of the next stage, and thus the sum is calculated. Each stage depends on the previous carry, so the carry propagates serially, as sketched below.
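A behavioral C sketch of one such 4-bit slice (illustrative only, not RTL) shows the select mechanism:

#include <stdint.h>

/* Behavioral sketch of a 4-bit carry-select slice: both candidate
 * results are computed in parallel (as by the two RCA chains), and the
 * incoming carry then selects between them, as the mux does in Fig 1. */
uint8_t csla4(uint8_t a, uint8_t b, uint8_t cin, uint8_t *cout)
{
    uint8_t sum0 = (uint8_t)((a & 0xF) + (b & 0xF));      /* chain with Cin = 0 */
    uint8_t sum1 = (uint8_t)((a & 0xF) + (b & 0xF) + 1);  /* chain with Cin = 1 */
    uint8_t sel  = cin ? sum1 : sum0;                     /* mux driven by the carry */
    *cout = (uint8_t)((sel >> 4) & 1);                    /* bit 4 is the carry-out */
    return (uint8_t)(sel & 0xF);
}

In the square-root CSLA, slices of increasing width are chained so that each stage's mux is driven by the previous stage's carry-out.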
Delay and Area Evaluation Methodology of the Basic Adder Blocks
The AND, OR and Inverter (AOI) implementation of an XOR gate is shown in Fig 2. The gates between the dotted lines operate in parallel, and the numeric representation of each gate indicates the delay it contributes. The delay and area evaluation considers all gates to be made up of AND, OR, and Inverter, each having a delay of 1 unit and an area of 1 unit.
We then add up the number of gates in the longest path of a logic block that contributes to the maximum delay. The area evaluation is done by counting the total number of AOI gates required for each logic block. The CSLA adder blocks of 2:1 mux, half adder (HA), and FA are evaluated and listed in Table 1.


Figure 2: Delay and Area evaluation of an XOR gate







Table 1
Delay and Area count of the basic blocks of adder


BINARY TO EXCESS-1 CONVERSION TECHNIQUE
As stated above, the main idea of this work is to use a BEC-1 (Binary to Excess-1 Converter) instead of the RCA with Cin = 1, in order to reduce the area and power consumption of the conventional CSLA. To replace an n-bit RCA, an (n+1)-bit BEC-1 is required. The structure and the functional table of a 4-bit BEC-1 are shown in Fig 3 and Table 2 respectively.

Figure 3: 4-bit BEC-1 circuit




Table 2
Functional table of 4-bit BEC-1












Figure 4: 4-bit BEC-1 with 8:4 Mux
Fig 4 describes how the basic function of the CSLA is obtained by using the 4-bit BEC-1 together with the mux. One input of the 8:4 mux is (B3, B2, B1, B0) and the other input is the BEC-1 output. This produces the two possible partial results in parallel, and the mux is used to select either the BEC-1 output or the direct inputs according to the control signal Cin. The importance of the BEC-1 logic lies in the large silicon-area reduction when CSLAs with a large number of bits are designed. The Boolean expressions of the 4-bit BEC-1 are listed below (note the functional symbols: ~ NOT, & AND, ^ XOR):
X0 = ~B0
X1 = B0 ^ B1
X2 = B2 ^ (B0 & B1)
X3 = B3 ^ (B0 & B1 & B2)
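A quick way to sanity-check these expressions is the following small C program (illustrative only), which evaluates the BEC-1 equations for every 4-bit input and compares each result with ordinary addition of 1:

#include <stdio.h>

/* Evaluate the 4-bit BEC-1 Boolean equations on a 4-bit input b3..b0. */
unsigned bec1(unsigned b)
{
    unsigned b0 = b & 1, b1 = (b >> 1) & 1, b2 = (b >> 2) & 1, b3 = (b >> 3) & 1;
    unsigned x0 = !b0;                   /* X0 = ~B0 */
    unsigned x1 = b0 ^ b1;               /* X1 = B0 ^ B1 */
    unsigned x2 = b2 ^ (b0 & b1);        /* X2 = B2 ^ (B0 & B1) */
    unsigned x3 = b3 ^ (b0 & b1 & b2);   /* X3 = B3 ^ (B0 & B1 & B2) */
    return (x3 << 3) | (x2 << 2) | (x1 << 1) | x0;
}

int main(void)
{
    /* The BEC-1 output must equal (input + 1) modulo 16 for every input. */
    for (unsigned b = 0; b < 16; b++)
        printf("%2u -> %2u (expected %2u)\n", b, bec1(b), (b + 1) & 0xF);
    return 0;
}

Every line prints a matching pair, confirming that the BEC-1 implements the +1 function of the second RCA chain with far fewer gates.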

DELAY AND AREA EVALUATION OF CONVENTIONAL 32-BIT SQRT CSLA
The structure of the 32-bit conventional SQRT CSLA is shown in Fig 5. It has 4 groups of different-size RCAs and 9 stages. The delay and area evaluation of each group is shown in Fig 6, in which the numerals within [ ] show the delay values.

Figure 5: Structure of 32-bit conventional SQRT CSLA
Here the second and third stages contain group 2, with 57 logic gates each. The fourth stage contains group 3, the fifth and ninth stages contain group 4, and the sixth, seventh and eighth stages contain group 5. The area evaluation of each group is the same as in the 8-bit case, but the total area is different. The steps leading to the evaluation are as follows:
1) The group2 [see Fig 6(a)] has two sets of 2-bit RCA. Based on the delay values, the incoming time of selection input c1 [time (t) = 7] of the 6:3 mux is earlier than s3 [t = 8] and later than s2 [t = 6]. Thus, sum3 [t = 11] is the addition of s3 and the mux [t = 3], and sum2 [t = 10] follows similarly.


Figure 6: Delay and Area Evaluation of conventional 32-bit SQRT CSLA: (a) Group2,
(b) Group3, (c) Group4, and (d) Group5. F is a Full Adder
2) Except for group2, the incoming time of mux selection input is always greater than the arrival time of data outputs from the RCAs.
Thus, the delay of group3 to group5 is determined, respectively as follows:
{c6, sum [06: 04]} = c03 [t = 10] + mux
{c10, sum [10: 07]} = c06 [t = 13] + mux
{Cout, sum [15: 11]} = c10 [t = 16] + mux
3) The one set of 2-b RCA in group2 has 2 FA for Cin = 1 and the other set has 1 FA and 1 HA for Cin = 0. Based on the area count,
the total number of gate counts in group2 is determined as follows:
Gate count = 57(FA + HA + Mux)
FA = 39(3 * 13)
HA = 6(1* 6)
Mux = 12(3 * 4)
Similarly for group 4
Gate count= 117(FA+HA+MUX)
FA=91(7*13)
HA=6(1*6)
MUX=20(5*4)
4) Similarly, the approximate maximum delay and area of the other groups in the conventional SQRT CSLA are evaluated and listed in table 3.
Table 3
Delay and Area count of groups of conventional SQRT CSLA

The total gate count for the conventional 32-bit SQRT CSLA is 833.

PROPOSED DESIGN
Delay and Area Evaluation of Modified 32-bit SQRT CSLA
The structure of the proposed 32-bit SQRT CSLA, which uses a BEC-1 in place of the RCA with Cin = 1 in order to optimize area and power, is shown in Fig 7. We again divide the structure into four groups; the delay and area estimation of each group is shown in Fig 8.

Figure 7: Structure of 32-bit modified SQRT carry select adder
The steps leading to the evaluation are:
1) The group2 [see Fig 8(a)] has one 2-bit RCA which has 1 FA and 1 HA for Cin = 0. Instead of another 2-bit RCA with Cin = 1, a 3-bit BEC-1 is used which adds one to the output of the 2-bit RCA. Based on the delay values of Table 2, the incoming time of selection input c1 [time (t) = 7] of the 6:3 mux is earlier than s3 [t = 9] and c3 [t = 13] and later than s2 [t = 4]. Thus, sum3 and the final c3 (output from the mux) depend on s3 plus the mux and on the partial c3 (input to the mux) plus the mux, respectively, while sum2 depends on c1 and the mux.


Figure 8: Delay and Area Evaluation of modified 8-bit SQRT CSLA: (a) Group 2 (b) Group 3 (c) Group 4
(d) Group5. H is a half adder
2) For the remaining groups, the incoming time of the mux selection input is always greater than the incoming time of the data inputs from the BECs. Thus, the delay of the remaining groups depends on the incoming time of the mux selection input and the mux delay. The area count of group 2 is:
Gate count = 43(FA + HA + Mux + BEC)
FA = 13(1 * 13)
HA = 6(1* 6)
AND = 1
NOT = 1
XOR = 10(2 * 5)
Mux = 12(3 * 4)

Similarly for group 4
Gate count= 84(FA+HA+MUX+BEC)
FA=39(3*13)
HA=6(1*6)
MUX=20(5*4)
XOR=5(1*5)
AND=6(6*1)
OR=3(3*1)
NOT=4(4*1)
The total gate count is 674. We can see that the area is reduced by 833 - 674 = 159 gates. Similarly, the estimated maximum delay and area of the other groups of the modified SQRT CSLA are evaluated and listed in table 4.
Table 4
Delay and Area count of groups of modified SQRT CSLA

RESULTS AND COMPARISON
We have simulated our design using ModelSim-Altera 6.4a; coding was done in Verilog. The simulation results of the 4-bit, 8-bit and 32-bit adders are shown in Figure 9, Figure 10, and Figure 11 respectively. We have synthesized our designs using Xilinx ISE Suite 14.3 and obtained the power using the Xilinx Power Estimator; the results are shown in Table 5. For the 4-bit and 8-bit designs we used a Spartan 3E XC3S100E, and for the 32-bit design a Spartan 6 was used.


Figure 9: Simulation result of 4-bit modified SQRT CSLA


Figure 10: Simulation result of 8-bit modified SQRT CSLA



Figure 11: Simulation result of 32-bit modified SQRT CSLA

Table 5
Synthesis Result of Proposed Design

We have compared the modified results with the conventional design. The results, as shown in Table 6, report that our adder design is more compact than the conventional design, and its power is much less than that of the conventional 32-bit SQRT CSLA.
Table 6
Comparison of area of both the designs

Power results are shown in Fig 12 and Fig 13 for the conventional and modified designs respectively.

Figure 12: Power result of 32-bit Conventional SQRT CSLA

Figure 13: Power result of 32-bit Modified SQRT CSLA

CONCLUSIONS
The area and power are successfully reduced with the help of the BEC-1 technique. With ModelSim 6.4a, the outputs of the conventional and the modified designs have been checked; both give the same, correct results, confirming that the modified design works correctly. With ISE Design Suite 14.3 the design is synthesized, and area reports in terms of LUTs and slices are obtained. The modified 32-bit SQRT carry select adder uses 75 LUTs, compared with the 89 LUTs of the conventional SQRT carry select adder, and 36 slices compared with 51. The modified 32-bit SQRT carry select adder consumes 0.031 W, as compared to the 0.045 W consumed by the conventional 32-bit SQRT CSLA.

REFERENCES:

[1] O. J. Bedrij, "Carry-select adder," IRE Trans. Electron. Comput., pp. 340-344, 1962.
[2] J. M. Rabaey, Digital Integrated Circuits: A Design Perspective. Upper Saddle River, NJ: Prentice-Hall, 2001.
[3] N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, 2nd ed. Addison-Wesley, 1985-1993.
[4] V. G. Oklobdzija, "High-speed VLSI arithmetic units: adders and multipliers," in Design of High-Performance Microprocessor Circuits, A. Chandrakasan, Ed. IEEE Press, 2000.
[5] Y. Kim and L.-S. Kim, "A low power carry select adder with reduced area," in Proc. IEEE International Symposium on Circuits and Systems, vol. 4, pp. 218-221, May 2001.
[6] B. Ramkumar and H. M. Kittur, "Low power and area efficient carry select adder," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2012.
[7] S. Sakthikumaran, S. Salivahanan, V. S. Kanchana Bhaaskaran, V. Kavinilavu, B. Brindha and C. Vinoth, "A very fast and low power carry select adder circuit," IEEE, 2011.
[8] R. Priya and J. Senthil Kumar, "Implementation and comparison of effective area efficient architectures for CSLA," in Proc. IEEE International Conference on Emerging Trends in Computing, Communication and Nanotechnology (ICECCN 2013).
[9] L. Mugilvannan and S. Ramasamy, "Low-power and area-efficient carry select adder using modified BEC-1 converter," IEEE, 2013.







AXI Interconnect Between Four Master and Four Slave Interfaces
Mayank Rai Nigam(1), Mrs. Shivangi Bande(2)
(1) Scholar, IET DAVV Indore
(2) Associate Professor, IET DAVV Indore
E-mail: kayasthmayank19@gmail.com

Abstract: ARM (Advanced RISC Machine) has developed the AMBA (Advanced Microcontroller Bus Architecture) bus protocol, which is widely used by System-on-Chip (SoC) designers. Systems-on-Chip are one of the biggest challenges engineers have ever faced, combining microprocessors, memories, bus architectures, communication standards, protocols and interfaces. AMBA buses act as the high-performance system backbone bus, supporting the efficient connection of processors, on-chip memories and off-chip external memory interfaces. APB and AHB come under the AMBA standard. ARM has come up with its latest on-chip bus transfer protocol, called AMBA AXI; AXI stands for Advanced eXtensible Interface. From a technology perspective, AMBA AXI provides the means to perform low-latency, high-bandwidth on-chip communication between multiple masters and multiple slaves. Moving one stage further, from an implementation perspective, configurability and programmability are becoming vital to ensuring IP can be tuned for a given application or project requirement.
Keywords: VHDL, FPGA, digital design, protocol, AXI, Xilinx, channel, etc.
Introduction
The interconnect provides efficient connections between masters (e.g., ARM processors, Direct Memory Access (DMA) engines or Digital Signal Processors (DSPs)) and slaves (e.g., external memory interfaces, APB bridges and any internal memories).
The interconnect is a highly configurable RTL component which provides the entire infrastructure required to connect a number of AXI masters to a number of AXI slaves. This infrastructure is an integral part of an AXI-based system.
The architecture of the interconnect is highly modular, with each of the routers and the associated control logic partitioned on a per-channel basis. It decides which bus master is allowed to initiate data transfers, based on highest priority or fair access.
As AXI provides features such as out-of-order completion and interleaving, the interconnect is responsible for taking care of interleaving and out-of-order transactions. The block-level RTL code is automatically configured from a system description file specifying the number of masters and slaves and the width of the address bus; hence the interconnect is implemented according to the application requirements.
The AXI interconnect handles all five channels through which data transfer between master and slave takes place.

Example of AXI Interconnect



Features of the Interconnect
The AXI Configurable Interconnect (ACI) features are:
- It is compliant with the AMBA AXI Protocol v1.0 Specification.
- It multiplexes and demultiplexes data and control information between connected masters and slaves.
- It enforces the AXI ordering rules that govern the flow of data and control information on different channels.
- It has a multi-layer capability that allows multiple masters to access different slaves simultaneously.
- It supports out-of-order data.
You can configure the following parameters:
- the number of master and slave interfaces;
- the ID width of each slave interface;
- the read and write acceptance capability of each slave interface;
- the write issuing capability of each master interface;
- the write interleave capability of each master interface.
AIM OF THE PROJECT
The aim of the project is to design an AXI interconnect between four master and four slave interfaces.
OBJECTIVE
Design-related tasks
The design-related tasks performed in the project are:
- The architecture of the design was conceived by considering the specifications, and then a block diagram was prepared.
- The block diagram was divided into sub-modules which communicate with each other.
- Block diagrams of the 5 channels were made:
Write Address Channel
Read Address Channel
Write Data Channel
Read Data Channel
Write Response Channel
- The block diagram was analyzed a number of times for the correctness of the architecture.
- After the design phase, Verilog and VHDL coding of the low-level modules used in all the channels was done.
- These low-level modules were combined in a top module for each of the block diagrams of the 5 channels.
- All codes corresponding to these block diagrams were combined in a top-level module which constitutes the whole interconnect.
- The whole design was synthesized to check for errors.





SPECIFICATIONS

- Design the AMBA AXI INTERCONNECT for four ports, in which each port behaves as an AXI-based master interface or slave interface.
- 32-bit address bus and 64-bit data bus.
- Configurable port addresses (the slave size is configurable).
- One outstanding transaction.
- Support for all burst transaction types (WRAP, INCR, FIXED).
- Support for normal and locked operation.
- Supports 200 MHz on Virtex-5.
- The following priority is considered for the masters:
Master0 > Master1 > Master2 > Master3

BLOCK DIAGRAM AND DESCRIPTION

- The master generates and drives transactions onto the bus.
- A slave device accepts transactions from any master.
- The interconnect routes the AXI requests and responses between AXI masters and AXI slaves. Passive monitoring, checking and the collection of functional coverage specifically targeted at the AXI interconnect are the main functions around the interconnect.
The interconnect consists of 5 channels:
- Read address channel: this channel carries the transaction ID for the read operation, the address of the slave, and the burst length along with its size and type, together with the valid signal indicating that the control information is valid, and ready.
- Write address channel: this channel carries the transaction ID for the write operation, the address of the slave, and the burst length along with its size and type, together with the valid signal indicating that the control information is valid, and ready.
- Read data channel: this channel carries the transaction ID for the read data, the read data and the read response, along with the ready and valid signals.
- Write data channel: this channel carries the transaction ID for the write data and the write data with strobe information, along with the ready and valid signals.
- Write response channel: this channel carries the transaction ID for the write data and the write response, along with the ready and valid signals.
- The default slave is used when there is no fully decoded address map physically present. There can be addresses at which no slave responds to the transaction; the interconnect then effectively routes the access to a default slave, since the AXI protocol requires that all transactions complete even if there is an error.


ADDRESS CHANNEL
The address channel conveys address information along with control signals from the master to the slave. AXI supports separate address buses for write and read transfers, so that the throughput of the system is increased. Both (read and write) address channels carry the same set of signals.
The address channel includes the address bus, which is 32 bits wide; the burst length, which gives the exact number of data transfers in the burst; the size of the transfer, indicating the bytes in each beat; the burst type, which is WRAP, FIXED or INCR; and lock information, along with the valid and ready signals.
Block diagram of address channel




[Figure: one-master/one-slave address channel detail and the full four-master/four-slave address channel interconnect, showing per-slave address decoders (DCODR), switching control (SWCHG CNTRL) units with lock inputs L0-L3, registers and enable/AND/OR routing logic between master ports P0-P3 (masters M0-M3) and slaves S0-S3, with the Aready_Px and Ach_Px handshake signals]

The following points explain the detailed functioning of the address channel interconnect:
1. When a master sends a valid address and control signals, the slave decoder decodes that address and generates an output to indicate a request from the master to a slave.
2. The decoder has five output bits; each bit indicates a request to a particular slave: S0, S1, S2, S3 or the default slave.
3. Each decoder output is given to the corresponding switching control unit as a request.
4. S0 is given to the switching control unit for slave 0, S1 to the switching control unit for slave 1, and so on.
5. Thus each switching control unit receives four requests from the four masters. It gets a request from each master for NORMAL or LOCKED operation and, depending on priority, it grants that slave to the appropriate master.
- Path select enables the granted master's address channel; the other channels remain disabled.
- If the select signal is 1000 then master 0's address channel is selected.
- If the select signal is 0100 then master 1's address channel is selected.
- If the select signal is 0010 then master 2's address channel is selected.
- If the select signal is 0001 then master 3's address channel is selected.

6. The slave now accepts the valid address and control signals.
7. The slave sends a ready signal back to the granted master. This ready signal is given to the granted master by ANDing logic, in the same way as the address and control signals are routed towards the slave. Thus the master receives ready from the slave.

- If the path select signal is 1000 then rM0 will be set.
- If the path select signal is 0100 then rM1 will be set.
- If the path select signal is 0010 then rM2 will be set.
- If the path select signal is 0001 then rM3 will be set.
- rMx indicates the ready signal going to master x (x = 0, 1, 2, 3).
On the master side, the ready signal is received through OR logic. All the signals coming to the OR (Aready_S0, Aready_S1, Aready_S2, Aready_S3) come from the individual slaves. The design assures that, of all incoming signals to the OR logic, only one is set at a time.

8. After receiving the ready signal, the master de-asserts the valid signal of the address channel.
9. The switching control won't accept any further request for that slave until the completion of the transaction.
10. The switching control unit's outputs remain in the same state until the End signal is received, which indicates that the transaction is completed.
To understand the operation of the address channel, let us look at each block in the channel in detail.
DECODER:
- The decoder functions as an address decoder which generates control signals to enable or disable the channels to a particular slave from a particular master.
- The decoder can receive a valid request (the 32-bit ARADDR/AWADDR) for a read or write operation from any of the four masters.
- The decoder decodes the address by comparing it against the memory maps of the slaves and generates control signals to enable the master request to the appropriate slave.
If the start address is 00000000 hex and the end address is 00000fff hex, the control signal enables the channel for slave S0.
If the start address is 00001000 hex and the end address is 00001fff hex, the control signal enables the channel for slave S1.
If the start address is 00002000 hex and the end address is 00002fff hex, the control signal enables the channel for slave S2.
If the start address is 00003000 hex and the end address is 00003fff hex, the control signal enables the channel for slave S3.
If M0 wants to send valid address and control information to slave 0, master 0 generates an address which lies between the start and end addresses of slave 0. The 5-bit output of the decoder then drives the slave 0 bit active high (1), while the bits for slave 1, slave 2, slave 3 and the default slave stay low (0), as shown in the sketch below.
The active-high signal for slave 0 is connected to the switching control unit of slave 0.
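
A minimal Verilog sketch of this one-hot address decode (module and signal names are ours; the memory map is the one listed above):

// One-hot slave decoder for the 4-slave memory map described above.
// sel = {DEF, S3, S2, S1, S0}; exactly one bit is high while avalid is asserted.
module slave_decoder (
    input  wire        avalid,  // ARVALID / AWVALID from the requesting master
    input  wire [31:0] addr,    // 32-bit ARADDR / AWADDR
    output reg  [4:0]  sel      // request lines to the five switching control units
);
    always @(*) begin
        sel = 5'b00000;
        if (avalid) begin
            if      (addr <= 32'h0000_0fff)                          sel = 5'b00001; // slave 0
            else if (addr >= 32'h0000_1000 && addr <= 32'h0000_1fff) sel = 5'b00010; // slave 1
            else if (addr >= 32'h0000_2000 && addr <= 32'h0000_2fff) sel = 5'b00100; // slave 2
            else if (addr >= 32'h0000_3000 && addr <= 32'h0000_3fff) sel = 5'b01000; // slave 3
            else                                                     sel = 5'b10000; // default slave
        end
    end
endmodule

One such decoder sits on each master port; its sel bits fan out as requests to the switching control units of the corresponding slaves.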
Switching control
Description of the switching control:
- The switching control accepts requests from all four masters for normal or locked operation. The M0/L0 bits are the normal and locked operation request from master 0; similarly, the M1/L1 bits are from master 1, and so on.
- Mx is an active-high bit indicating a request for the slave from master x; it is an output of the slave decoder.
- The Lx bit indicates normal operation if it is 0, else locked operation.
- Other inputs to the switching control unit are the Busy, Lock and End signals. Busy and Lock show the status of the slave, i.e., whether it is being accessed by another master.
- The End signal brings the switching control unit to the idle state at the end of a transaction.
- The master select outputs are used to select the channels going from master to slave, i.e., the address and write data channels.
- The slave select outputs are used to select the channels going from slave to master, i.e., the read data and write response channels.
READ DATA CHANNEL
The read data channel conveys both the read data and the read response information from the slave back to the master. The read data channel includes the data bus, which can be 8, 16, 32, 64, 128, 256, 512, or 1024 bits wide, and the read response indicating the completion status of the read transaction.
Block diagram of Read data channel

[Figure: read data channel interconnect detail, showing per-port (PORT-0 to PORT-3) enable/AND/OR routing logic, registers, Rready distribution and R_END generation between masters M0-M3 and slaves S0-S3, selected by the slave select outputs]


In a read operation the slave sends valid read data; this data is routed by the switching control unit's output. The following points explain the detailed functioning of the read data channel interconnect:

1. The process starts when the master sends an address and control information on the read address channel along with the valid signal.
2. This address is decoded first to find which slave is to be accessed. A signal is then given to the switching control logic of that particular slave, which generates the appropriate enable signal to select the particular master's path to the slave.
In the above case, if the select signal for slave 0 is generated by the arbiter, this select signal selects the particular master that reads the data from slave 0.
- If the select signal is 1000 then data is given to master 0.
- If the select signal is 0100 then data is given to master 1.
- If the select signal is 0010 then data is given to master 2.
- If the select signal is 0001 then data is given to master 3.
- In the above case, if the select signal is 1000 then the read data to master 0 is selected from slave 0.
- From this stage the read data channel comes into the picture. Data from all four slaves (Rdata_S0, Rdata_S1, Rdata_S2, Rdata_S3) may be available at the master ENABLE block; this enable block selects only the slave which is to be connected to the particular master.

3. The enable module blocks data from unintentionally passing from a slave to the master. As long as the master has not given any request, no slave is selected and the content of the data bus is zero.
4. When the master asserts the Ready_M0 signal on the bus, the data from slave 0 is accepted by the master. This Ready_M0 signal is first given to the AND block, which asserts only the signal going to slave 0. At slave 0, Ready_M1, Ready_M2 and Ready_M3 are also connected, but as the project supports only one outstanding transaction, only one READY signal is high at a time.
5. The slave internally calculates the next address from the address specified by the master on the address channel. The data at that address location is put by the slave onto the data bus, along with the valid signal to indicate that valid data is present.
6. The master accepts data when it asserts the Ready_M0 signal high.

7. This process proceeds until the final transfer in the burst takes place. At the final transfer the slave asserts the RLAST signal to show that the last data item is being transferred.

R_END signal generator block

When the RLAST signal appears on the line from the slave, it is combined with the RVALID signal from the slave and the RREADY signal from the same master to generate the R_END signal. This signal is given to the switching machine, which then resets all previous outputs set by the switching control logic block, as the whole read data burst has been transferred from the slave to the master.
In the above case, if slave 0 is transferring data to master 0, the path select will be 1000 to enable slave 0's data path.
On the RLAST signal from slave 0, signal 0 goes active high; it is combined with RVALID, and the result is ANDed with the RREADY signal of master 0.
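
A minimal Verilog sketch of this end-of-burst detector (the module name is ours; the behaviour follows the description above, i.e., R_END fires when the last beat is handed over):

// R_END generator: pulses when RLAST, RVALID and RREADY of the selected
// slave/master pair are high in the same cycle, i.e., on the last beat of
// the read burst. It resets the switching control unit for that slave.
module r_end_gen (
    input  wire rlast,   // last-beat flag from the selected slave
    input  wire rvalid,  // read-data valid from the selected slave
    input  wire rready,  // read ready from the granted master
    output wire r_end
);
    assign r_end = rlast & rvalid & rready;
endmodule

The W_END generator described later for the write response channel works the same way, ANDing BVALID with BREADY.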
WRITE DATA CHANNEL
The write data channel conveys write data information from the master to the slave. The write data channel includes the data bus, which can be 8, 16, 32, 64, 128, 256, 512, or 1024 bits wide, and the strobe indicating the valid byte lanes.
- During a write burst, the master can assert the WVALID signal only when it drives valid write data. WVALID must remain asserted until the slave accepts the write data and asserts the WREADY signal.
- The write strobe signal, WSTRB, enables data transfer on the write data bus. Each write strobe signal corresponds to one byte of the write data bus. When asserted, a write strobe indicates that the corresponding byte lane of the data bus contains valid information to be updated in memory. There is one write strobe for each eight bits of the write data bus.
A master must ensure that the write strobes are asserted only for byte lanes that can contain valid data, as determined by the control information for the transaction.
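
To make the byte-lane rule concrete, the sketch below shows a slave-side word being updated under WSTRB control for the 64-bit data bus of this design (8 strobe bits for 8 byte lanes; module and signal names are ours):

// Byte-lane write using WSTRB: each strobe bit guards one byte of the
// 64-bit write data bus. Only strobed byte lanes are updated.
module wstrb_write (
    input  wire        clk,
    input  wire        wvalid,    // master drives valid write data
    input  wire        wready,    // slave accepts the write data
    input  wire [63:0] wdata,
    input  wire [7:0]  wstrb,     // one strobe per eight bits of wdata
    output reg  [63:0] mem_word   // the memory word being written
);
    integer i;
    always @(posedge clk) begin
        if (wvalid && wready)
            for (i = 0; i < 8; i = i + 1)
                if (wstrb[i])
                    mem_word[8*i +: 8] <= wdata[8*i +: 8];
    end
endmodule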
Block diagram of Write data channel



[Figure: write data channel interconnect detail, showing per-port enable/AND/OR routing of the Wdata_Px and Wready_Px signals, selected by the master select outputs, between masters M0-M3 and slaves S0-S3]


The following points explain the detailed functioning of the write data channel interconnect:
- On the write data channel the master writes data to the slave.
- The process starts when the master sends an address and control information on the write address channel along with the valid signal.
- This address is decoded to find the slave number; a signal is then given to the switching control logic of that particular slave, which generates the appropriate signal to enable the particular master's path to the slave. The 4-bit select signal is generated by the SWITCHING CONTROL BLOCK; according to this select signal, a particular master and slave are selected in order to write the data.
In the above case, if the select signal for slave 0 is generated by the arbiter, this select signal selects the particular master that writes the data to slave 0.
- If the select signal is 1000 then master 0's data is selected.
- If the select signal is 0100 then master 1's data is selected.
- If the select signal is 0010 then master 2's data is selected.
- If the select signal is 0001 then master 3's data is selected.
In the above case, if the select signal is 1000 then the write data of master 0 is selected and moves to slave 0.
- From this stage the write data channel comes into the picture. Data from all four masters (Wdata_m0, Wdata_m1, Wdata_m2, Wdata_m3) is available at the slave ENABLE block; this enable block selects which master is going to write to the particular slave.
- At the same time, slave 0, slave 1, slave 2 or slave 3 sends the WREADY signal to the master through the AND block.
- This AND block also uses the select signal, which determines to which master the WREADY signal is sent. If the select signal is 1000 then master 0 is selected to receive the WREADY signal.
- As soon as master 0 gets the WREADY signal, the master sends the next data to the slave.
- At the end of the transfer, the WLAST signal is sent by the master, indicating the end of the transaction.

WRITE RESPONSE CHANNEL
The write response channel provides a way for the slave to respond to write transactions. All write transactions use completion signaling. The completion signal occurs once for each burst, not for each individual data transfer within the burst. The response channel is mainly used to indicate the status of the write transaction. In a write data transfer all data comes from the master side and the slave does not acknowledge anything on the write data channel itself; hence the response channel is paired with the write data channel for acknowledgement from the slave side.
The most important signal of the write response channel is BRESP. This signal is 2 bits wide and indicates a status such as OKAY, EXOKAY, SLVERR or DECERR.



The response channel is used for acknowledgement. The slave can assert signals on this channel to indicate the status of a transfer. The design of this channel is the same as that of the read data channel (as both channels carry information from slave to master); only the signals are different.
In this operation the slave sends the response signal, which is routed by the switching control unit's output. The following points explain the detailed functioning of the response channel interconnect:
1. When the master sends an address and control information for a write transfer, the response is sent by the slave after all data has been transferred, i.e., after the WLAST signal from the master side.
2. When the address is sent, the decoder in the address channel selects the slave and sends a request to the switching control of that particular slave. The outputs generated by the switching control are held until the response channel gives the W_END signal.
If master 0 wants to access slave 0, the select signal for slave 0 is generated by the arbiter. This select signal selects the particular slave for the response of the write transaction.
- For select signal 1000: the response is given to master 0.
- For select signal 0100: the response is given to master 1.
- For select signal 0010: the response is given to master 2.
- For select signal 0001: the response is given to master 3.
In the above case, if the select signal is 1000 then the response to master 0 is selected from slave 0.

3. The response channel signals from all four slaves (Bresp_S0, Bresp_S1, Bresp_S2, Bresp_S3) may be available at the master ENABLE block; this enable block selects only the slave which is to be connected to the particular master.
4. The enable module blocks the responses of the other channels from unintentionally passing from a slave to the master. If the master has not given any request, no slave path towards the master is selected and the output of this block is zero.
5. When the master asserts the BReady_M0 signal on the bus, the response signal from slave 0 is accepted by master 0. This BReady_M0 signal is first given to the AND block, which asserts only the signal going to slave 0. At slave 0,
BReady_M1, BReady_M2 and BReady_M3 are also connected, but as the project supports only one outstanding transaction, only one BREADY signal will be high at a time.
6. BVALID must remain asserted until the master accepts the write response and asserts BREADY. The default value of BREADY can be HIGH, but only if the master can always accept write responses in a single cycle.
W_END signal generator block
The BVALID and BREADY signals are used to generate the W_END signal. This signal is given to the switching machine, which then resets all previous outputs set by the switching control logic block, as the whole write data burst has been transferred from the master to the slave.
In the above case, if slave 0 is transferring the response to master 0, the path select will be 1000 to enable slave 0's response path.
The BVALID signal from slave 0, i.e., signal 0, goes active high and is ANDed with the BREADY signal of master 0.
DEFAULT SLAVE

Besides the five channels, another important block of the AXI interconnect is the default slave. When the interconnect cannot successfully decode a slave access (i.e., when no slave is present at the physical location specified by the master), it effectively routes the access to a default slave, and the default slave returns the DECERR response.
The figure shows the waveform for a write data transaction: the master sends the address first, the READY signal indicates that the master can send write data, and after receiving the last data from the master the slave gives a response to indicate the status of the transfer.
The AXI protocol responses are (see the encodings below):
OKAY
EXOKAY
SLVERR
DECERR
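
In RTL these are the standard 2-bit encodings that the AXI specification assigns to BRESP/RRESP (the localparam names are ours):

// Standard AXI response encodings carried on the 2-bit BRESP/RRESP buses.
localparam [1:0] RESP_OKAY   = 2'b00;  // normal access success
localparam [1:0] RESP_EXOKAY = 2'b01;  // exclusive access success
localparam [1:0] RESP_SLVERR = 2'b10;  // slave error
localparam [1:0] RESP_DECERR = 2'b11;  // decode error, returned by the default slave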
A decode error is typically generated by an interconnect component to indicate that there is no slave at the transaction address. The AXI protocol requires that all data transfers in a transaction be completed, even if an error condition occurs. Once the master places an address, it keeps waiting until the address is accepted, so someone has to accept this invalid address and complete the burst corresponding to it.
Therefore, any component giving a DECERR response must meet all requirements to complete the data transfer and generate the appropriate signals along with the DECERR response signal.
This is where the DEFAULT SLAVE comes into the picture. The default slave will accept such invalid addresses and will complete the transactions corresponding to them by responding with a special error response called DECERR, which means decoding error. This error tells the master that no device has the address for which the transaction was requested.
So the default slave has two sections: one of these sections handles write transactions and the other handles read transactions.
Default slave write section:
The decoder in the write address channel interconnect enables the default slave and routes the invalid addresses, along with the control information attached to them, to the default slave's write section.


Block Diagram of Default Slave for write transaction
Also, the write data corresponding to these invalid transactions is accepted by the default slave; as soon as the LAST data arrives, the default slave places a write response corresponding to this transaction on the write response channel. It also gives the ready and BID signals to fulfil the protocol requirement.
In this way, as specified in the AXI specification, even the invalid transaction is completed by the default slave.
The block diagram of the default slave's write section and its functioning are explained in the following points:
- The decoder in the write address channel interconnect enables the default slave. After the enable signal, the default slave asserts the AWREADY signal.
- AWID, AWLEN, AWSIZE and AWBURST are taken into account when AWVALID is high. AWID is used to generate the BID signal.
- After accepting all control information, the default slave writes the location with WDATA.
- As soon as the WLAST signal is received, the default slave enables the write completion channel generator block, and this block generates the appropriate error signal on the BRESP bus.
Default slave read section:

Block Diagram of Default Slave for read transaction

This section works in the same way as the write section. The decoder in the read address channel interconnect enables the default slave and routes the invalid addresses, along with the control information attached to them, to the default slave's read section.

Now the default slave gives data to the master by reading the information from a register. Along with the LAST data, the default slave places a response corresponding to this transaction on the channel.
In this way, as specified in the AXI specification, even the invalid transaction is completed by the default slave.
The following points explain the working of the read section of the default slave:
- The decoder in the read address channel interconnect enables the default slave. After the enable signal, the default slave asserts the ARREADY signal.
- ARID, ARLEN, ARSIZE and ARBURST are taken into account when ARVALID is high. ARID is used to generate the RID signal.
- After accepting all control information, the default slave reads the location and sends the data on RDATA.
- It calculates the total burst size by considering the ARLEN, ARSIZE and ARBURST signals. This value is decremented.
- As soon as the burst size value goes to zero (i.e., the End_t signal is generated), the default slave asserts the RLAST signal along with the error signal on the RRESP bus.

SIMULATION RESULTS

Simulation result for the decoder

Simulation result for the switching control

Simulation result for the write response channel

Simulation result for the enable

Simulation result of mux select for the read data channel

Simulation result for mux select

Simulation result of mux select for the write response channel


Conclusion
- Functional verification was achieved successfully.
- The interconnect works at 100 MHz with Virtex-E as the target device (synthesized with Xilinx ISE 8.2i).
- All errors and warnings were removed from the design coding, except one warning: "signal is assigned but never used".
REFERENCES:

1. AMBA AXI Protocol v1.0 Specification, ARM Limited.
2. PrimeCell AXI Configurable Interconnect (PL300) Technical Reference Manual, ARM Limited.
3. AMBA Design Kit Technical Reference Manual, ARM Limited.
4. J. Bhasker, A VHDL Primer.
5. Stephen Brown and Zvonko Vranesic, Fundamentals of Digital Logic with VHDL Design, McGraw-Hill, 2000, ISBN 0-07-116168-6.
6. Peter J. Ashenden, The Designer's Guide to VHDL (2nd Edition), Morgan Kaufmann, ISBN 1-55860-674-2.
7. Douglas L. Perry, VHDL (3rd Edition), McGraw-Hill, ISBN 0-07-049436-3.







A New Pan-Sharpening Method Using Joint Sparse FI Image Fusion
Algorithm
Ashish Dhore(1), Dr. Veena C.S.(2)
(1) Research Scholar (M.Tech), Department of ECE, Technocrats Institute of Technology, Bhopal, India
(2) Associate Professor, Department of ECE, Technocrats Institute of Technology, Bhopal, India
E-mail: ashishanives@gmail.com

Abstract: Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. JSR indicates that different signals of the same scene, acquired by various sensors, form an ensemble: these signals share a common sparse component, and each individual signal owns an innovation sparse component. JSR offers lower computational complexity compared with SR. The SparseFI method does not assume any spectral composition model of the panchromatic image and, due to the super-resolution capability and robustness of sparse signal reconstruction algorithms, it gives higher spatial resolution and, in most cases, less spectral distortion compared with the conventional methods. A comparison among the proposed technique and existing processes such as intensity-hue-saturation (IHS) image fusion, the Brovey transform, principal component analysis and fast IHS image fusion has been carried out. The pan-sharpened high-resolution MS image produced by the proposed method is competitive with, or even superior to, the images fused by other well-known methods. In this paper, we propose a new pan-sharpening method named Joint Sparse Fusion of Images (JSparseFI). The pan-sharpened images are quantitatively evaluated for their spatial and spectral quality using a set of well-established measures in the field of remote sensing; the evaluation metrics are ERGAS, Q4 and SAM, which measure the spectral quality. To capture the image details more efficiently, we propose a generalized JSR in which the signal ensemble depends on two dictionaries.
Keywords: JSparseFI, compressed sensing, image fusion, multispectral (MS) image, panchromatic (PAN) image, remote sensing, sparse representation.
INTRODUCTION
Pan-sharpening is shorthand for panchromatic sharpening. It means using a panchromatic (single-band) image to sharpen a multispectral image. In this sense, to sharpen means to increase the spatial resolution of the multispectral image. A multispectral image contains a higher degree of spectral resolution than a panchromatic image, while often a panchromatic image will have a higher spatial resolution than a multispectral image. A pan-sharpened image represents a sensor fusion between the multispectral and panchromatic images which gives the best of both image types: high spectral resolution and high spatial resolution. This is the simple "why" of pan-sharpening. Pan-sharpening is defined as the process of synthesizing an MS image at a higher spatial resolution, equivalent to that of the PAN image. Pan-sharpening should enhance the spatial resolution of the MS image while preserving its spectral resolution. Pan-sharpening continues to receive attention over the years. Most of this paper is concerned with the "how" of pan-sharpening. First, a review of some fundamental concepts is in order.
A) Multispectral Data
A multispectral image is an image that contains more than one spectral band. It is formed by a sensor which is capable of separating light reflected from the earth into discrete spectral bands. A color image is a very simple example of a multispectral image that contains three bands. In this case, the bands correspond to the blue, green and red wavelength bands of the electromagnetic spectrum. The full electromagnetic spectrum covers all forms of radiation, from extremely short-wavelength gamma rays through long-wavelength radio waves. In remote sensing imagery, we are limited to radiation that is either reflected or emitted from the earth and that can also pass through the atmosphere to the sensor. The electromagnetic spectrum is the wavelength (or frequency) mapping of electromagnetic energy, as shown below.







Fig. 1 : Electromagnetic spectrum
Electro-optical sensors sense solar radiation that originates at the sun and is reflected from the earth in the visible to near-infrared (just to the right of red in the figure above) region. Thermal sensors sense solar radiation that is absorbed by the earth and emitted as longer-wavelength thermal radiation in the mid- to far-infrared regions. Radar sensors provide their own source of energy in the form of microwaves that are bounced off the earth back to the sensor. A conceptual diagram of a multispectral sensor is shown below.

Fig. 2: Simplified diagram of a multispectral scanner

In this diagram, the incoming radiation is separated into spectral bands using a prism. We have all seen how a prism is able to do this, and we have seen the earth's atmosphere act like a prism when we see rainbows. In practice, prisms are rarely used in modern sensors. Instead, a diffraction grating, which is a piece of material with many thin grooves carved into it, is used. The grooves cause the light to be reflected and transmitted in different directions depending on wavelength. You can see a rough example of a diffraction grating when you look at a CD and notice the multi-color effect of light reflecting off of it as you tilt it at different angles. After separating the light into different "bins" based on wavelength ranges, the multispectral sensor forms an image from each of the bins and then combines them into a single image for exploitation. Multispectral images are designed to take advantage of the different spectral properties of materials on the earth's surface. The most common example is the detection of healthy vegetation. Since healthy vegetation reflects much more near-infrared light than visible light, a sensor which combines visible and near-infrared bands can be used to detect healthy and less healthy vegetation. Typically this is done with one or more vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), defined as the difference of the near-infrared and red reflectance divided by the sum of these two values, i.e., NDVI = (NIR - R)/(NIR + R). Some typical spectral signatures of vegetation, soil and water are shown below.












Fig. 3: Reflectance spectra of some common materials. Red, Green and Blue regions of the spectrum are shown. Near-IR is
just to the right of the Red band. Ultraviolet is to the left of the Blue band.

These are only representative spectra. Each type of vegetation, water, soil and other surface type has a different reflectance spectrum, and outside of a laboratory these also depend on the sun's position in the sky and the satellite's position as well. When there are more bands covering more parts of the electromagnetic spectrum, more materials can be identified using more advanced algorithms, such as supervised and unsupervised classification, in addition to the simple but effective band ratio and normalization methods such as the NDVI. Remote View has several tools which take advantage of multispectral data, including the Image Calculator for computing the NDVI and other indices and a robust multispectral classification capability which includes both supervised and unsupervised classification. This paper, however, is focused on the pan-sharpening tools within Remote View.

B) Panchromatic Data
In contrast to the multispectral image, a panchromatic image contains only one wide band of reflectance data. The data is usually representative of a range of bands and wavelengths, such as visible or thermal infrared; that is, it combines many colors, so it is "pan" chromatic. A panchromatic image of the visible bands is more or less a combination of red, green and blue data into a single measure of reflectance. Modern multispectral scanners also generally include some radiation at slightly longer wavelengths than red light, called near-infrared radiation.
Panchromatic images can generally be collected with higher spatial resolution than multispectral images because the broad spectral range allows smaller detectors to be used while maintaining a high signal-to-noise ratio.
For example, 4-band multispectral data is available from QuickBird and GeoEye. For each of these, the panchromatic spatial resolution is about four times better than the multispectral one. Panchromatic imagery from QuickBird has a spatial resolution of about 0.6 meters; the same sensor collects the multispectral data at about 2.4 meters resolution. For GeoEye's Ikonos, the panchromatic and multispectral spatial resolutions are about 1.0 meter and 4.0 meters respectively. Both sensors can collect co-registered panchromatic and four-band (red, green, blue and near-infrared) multispectral images.
The developments in the field of sensing technologies mean that multisensor systems have become a reality in various fields such as remote sensing, medical imaging, machine vision and the military applications for which they were developed. The result of the use of these techniques is an increase in the amount of data available. Image fusion provides an effective way of reducing this increasing volume of information while at the same time extracting all the useful information from the source images. Multi-sensor data often presents complementary information, so image fusion provides an effective method to enable comparison and analysis of such data. The aim of image fusion, apart from reducing the amount of data, is to create new images that are more suitable for the purposes of human/machine perception and for further image-processing tasks such as segmentation, object detection or target recognition in applications such as remote sensing and medical imaging. For example, visible-band and infrared images may be fused to aid pilots landing aircraft in poor visibility.
A remote sensing platform uses a variety of sensors. Among the fundamental ones are the panchromatic (PAN) sensor and the multispectral (MS) sensor. The PAN sensor has a higher spatial resolution; in other words, each pixel in the PAN image covers a smaller area on the ground compared to the MS image from the same platform. On the other hand, the MS sensor has a higher spectral resolution, which means that each of its bands corresponds to a narrower range of electromagnetic wavelengths compared to the PAN sensor. There are several reasons behind not having a single sensor with both high spatial and high spectral resolution. One reason is the incoming radiation energy: as the PAN sensor covers a broader range of the spectrum, its detector elements can be smaller while receiving the same amount of radiation energy as the MS sensor. Other reasons include the limitation of on-board storage capabilities and communication bandwidth.
I. DIFFERENT METHODS TO PERFORM PAN-SHARPENING
A) IHS Image Fusion: IHS is one of the most widespread image fusion methods in remote sensing applications. The IHS transform is a technique where the RGB space is replaced by the IHS space of intensity (I), hue (H) and saturation (S). The fusion process that uses this IHS transform is done in the following three steps:
1) First, convert the RGB space into the IHS space (IHS transform).
2) Replace the value of the intensity I (= (R + G + B)/3) by the value of the PAN image.
3) Transform back into the original RGB space (inverse IHS transform).
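In compact form (a standard way of writing the substitution, equivalent to the three steps above for this linear IHS model; R', G', B' denote the fused bands):

I = \frac{R+G+B}{3}, \qquad \delta = \mathrm{PAN} - I,
(R', G', B') = (R + \delta,\ G + \delta,\ B + \delta).

This additive shortcut is what the fast IHS fusion mentioned in the abstract exploits: adding the same offset to each band is equivalent to replacing I by PAN and inverting the transform.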
B) PCA Method: The PCA technique is a decorrelation scheme used for various mapping and information extraction tasks on remote sensing image data. The procedure to merge the RGB and PAN images using the PCA fusion method is similar to that of the IHS method. The fusion process that uses PCA is done in the following three steps:
1) First, convert the RGB space into the first principal component (PC1), the second principal component (PC2) and the third principal component (PC3) by PCA.
2) Replace the first principal component (PC1) of the PCA space by the value of the PAN image.
3) Transform back into the original RGB space (inverse PCA).
C) Brovey Transform (BT): BT is a simple image fusion method that preserves the relative spectral contributions of each pixel but replaces its overall brightness with the high-resolution PAN image.
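In one common formulation (our notation, not from the paper; X_k is the k-th MS band and X'_k the fused band):

X'_k = X_k \cdot \frac{\mathrm{PAN}}{\frac{1}{3}(R + G + B)}, \qquad k \in \{R, G, B\}.

Each band is scaled by the ratio of the PAN intensity to the overall MS brightness, which preserves the relative spectral contributions while injecting the PAN brightness.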
II. SPARSEFI ALGORITHM FOR IMAGE FUSION
Pan-sharpening requires a low-resolution (LR) multispectral image Y with N channels and a high-resolution (HR) panchromatic image X_0, and aims at increasing the spatial resolution of Y while keeping its spectral information, i.e., generating an HR multispectral image X utilizing both Y and X_0 as inputs. The SparseFI algorithm reconstructs the HR multispectral image in an efficient way by ensuring both high spatial and spectral resolution with less spectral distortion. It consists of three main steps:
1) Dictionary learning
2) Sparse coefficients estimation
3) HR multispectral image reconstruction
A) Dictionary Learning
The HR pan image X_0 is low-pass filtered and downsampled by a factor F_DS (typically 4-10) such that its point spread function is similar to that of the original multispectral image. The resulting LR version of X_0 is called Y_0, and it is used together with the co-registration of the different channels that is required anyway. The LR pan image Y_0 and the LR multispectral image Y are tiled into small, partially overlapping patches Y_0 and Y_k, where k stands for the k-th channel and k = 1, ..., N. All the LR patches Y_0, with their pixel values arranged in column vectors, form the matrix D_l, called the LR dictionary. Likewise, the HR dictionary D_h is generated by tiling the HR pan image X_0 into patches X_0 of F_DS times the size of the LR pan image patches, such that each HR patch corresponds to an LR patch. These image patches are called the atoms of the dictionaries.
B) Sparse Coefficients Estimation
The sparse coefficients are estimated so that each LR multispectral patch is represented by as few atoms (LR PAN patches) of the LR dictionary as possible. The atoms in the dictionary are not orthogonal, so the representation can exhibit an infinite number of solutions; the sparsity constraint selects one. In this step, an attempt is made to represent each LR multispectral patch Y_k of a particular channel as a linear combination of LR PAN patches, referred to as the atoms of the dictionary, weighted by the coefficient vector.
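A typical way to write this estimation step (a sketch in standard basis-pursuit form; the tolerance \epsilon and the symbol \hat{\alpha}_k are our notation):

\hat{\alpha}_k = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad \|D_l \alpha - Y_k\|_2 \le \epsilon.

The \ell_1-norm promotes sparsity, so each LR multispectral patch Y_k is explained by as few LR PAN atoms as possible.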











Fig. 4: Flow chart of the SparseFI method
C) HR Multispectral Image Reconstruction
Since each of the HR image patches X_k is assumed to share the same sparse coefficients as the corresponding LR image patch Y_k in the coupled HR/LR dictionary pair, i.e., the coefficients of X_k in D_h are identical to the coefficients of Y_k in D_l, the final sharpened multispectral image patches X_k are reconstructed by

X_k = D_h \alpha_k.

The tiling and summation of all patches in all individual channels finally give the desired pan-sharpened image X.
III. PROPOSED METHOD
Recently, sparse signal representation of image patches was explored to solve the pan-sharpening problem for remote sensing images. Although the proposed sparse-reconstruction-based methods lead to motivating results, none of them has considered the fact that the information contained in different multispectral channels may be mutually correlated. In this paper, we extend the Sparse Fusion of Images (SparseFI, pronounced "sparsify") algorithm, proposed by the authors before, to a Jointly Sparse Fusion of Images (JSparseFI) algorithm by exploiting these possible signal structure correlations between different multispectral channels. This is done by making use of the distributed compressive sensing (DCS) theory, which restricts the solution of an underdetermined system by considering an ensemble of signals that are jointly sparse. The given SparseFI algorithm works as stated above. In this work we tried to improve the parameters which decide the sparsity of the image to be fused, with the main focus on improving the clarity of the image. Although a number of algorithms have been developed, this method has shown better performance than the others. The main aspects to tune are the downsampling factor and the patch size, together with a regularization parameter.


IV. SPARSE REPRESENTATION AND COMPRESSED SENSING
The development of image processing in the past several decades reveals that a reliable image model is very important. In fact, natural images tend to be sparse in some domain, which brings us to the sparse and redundant representation model of images. Compressed sensing mainly includes sparse representation, the measurement matrix and the reconstruction algorithm, where sparse representation is the theoretical basis of the compressed sensing theory. Sparse representation denotes that a few coefficients can describe the main information of the signal. Most actual signals are nonzero everywhere, but in a suitable transform basis (such as a wavelet basis) most coefficients have small values, while the few coefficients which bear most of the information of the signal have large values. The CS theory shows that the sparser the signal, the more accurate the reconstructed signal. So a suitable transform basis can guarantee the sparsity and independence of the coefficients, and guarantee the reconstruction precision of compressed sensing while reducing the number of compression measurements.
At present, the common transforms are the Fourier transform, the discrete cosine transform, the wavelet transform, etc. This paper proposes a novel compressed sensing image fusion algorithm based on joint sparse representation. In order to reduce the computational burden, this study first constructs the joint sparse matrix. On the basis of analyzing the relationship between reconstruction and fusion quality, the images are fused by the maximum-absolute-value fusion rule and reconstructed by the minimum total variation method.
Consider a family of signals {x_i, i = 1, 2, ..., g}, x_i in R^n. Specifically, in this paper each such signal is assumed to be a sqrt(n) x sqrt(n) image patch, obtained by lexicographically stacking the pixel values. Sparse representation theory supposes the existence of a matrix D in R^(n x T), n << T, each column of which corresponds to a possible image. These possible images are referred to as atomic images, and the matrix D as a dictionary of the atomic images. Thus, an image signal x can be represented as x = D \alpha. For overcomplete D (n << T), there are many possible \alpha satisfying x = D \alpha. Our aim is to find the \alpha with the fewest nonzero elements; this \alpha is called the sparse representation of x with dictionary D. Formally, it can be obtained by solving the following optimization problem:

\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{subject to} \quad x = D \alpha,

where \|\alpha\|_0 denotes the number of nonzero components in \alpha. In practice, because of various restrictions, we cannot get x directly; instead, only a small set of measurements y of x is observed. The observation y can be represented as

y = L x,

where L in R^(k x n) with k < n is interpreted as the encoding process of the CS theory, L being a CS measurement matrix. The CS theory ensures that, under sparsity regularization, the signal x can be correctly recovered from the observation y by

\min_{\alpha} \|\alpha\|_0 \quad \text{subject to} \quad y = L D \alpha.

In this paper, we propose a remote sensing image fusion method from the perspective of compressed sensing. The high-resolution PAN and low-resolution MS images are regarded as the measurements y. The matrix L is constructed from the model mapping the high-resolution MS images to the high-resolution PAN and low-resolution MS images. Thus, the sparse representation of the high-resolution MS images corresponding to dictionary D can be recovered from the measurements y according to the sparsity regularization, and the high-resolution MS images are constructed by x = D \alpha. In fact, when the coefficients are sufficiently sparse, this \ell_0 problem can be replaced with minimizing the \ell_1-norm.
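
Written out, this relaxation is the standard basis-pursuit form (the noise tolerance \epsilon is our notation, not from the paper):

\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad \|y - L D \alpha\|_2 \le \epsilon,

which is convex and can be solved efficiently, unlike the combinatorial \ell_0 problem.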
V. PROPOSED IMAGE FUSION SCHEME
A) IMAGE FORMATION MODEL
Remote sensing physics should be carefully considered while designing the pan-sharpening process. Let X_p^high and Y_p^low, p = 1, ..., P, represent the p-th band of the high-resolution and low-resolution MS images, respectively, where P denotes the number of bands of the MS images. The observed low-resolution MS images are modeled as decimated and noisy versions of the corresponding high-resolution MS images, as shown in Fig. 5.






Fig 5: Relationship between a single low-resolution MS band and its corresponding high-resolution version
In fact, the intensity of the low-resolution image is due to the integration of the light intensity that falls on a charge-coupled device sensor element of larger area compared to the desired high-resolution image, so the low-resolution intensity can be seen as a neighborhood-pixel average of the high-resolution intensities corrupted by additive noise. The relationship between X_p^high and Y_p^low is written as

Y_p^low = M X_p^high + V_p,

where M is the decimation matrix and V_p is the noise vector.
In fact, the PAN image usually covers a broad range of the wavelength spectrum, whereas one MS band covers only a narrow spectral range. Moreover, the wavelength range of the PAN modality usually overlaps, at least partly, with those of the MS bands. This overlapping characteristic motivates the assumption that the PAN image can be approximately written as a linear combination of the original MS images:

Y^PAN = \sum_p w_p X_p^high + V_p,

where w_p is the weight and V_p is the additive zero-mean Gaussian noise.














Fig. 6: Remote sensing image formation model.
However, we should note that the linear relationship between the PAN and the MS image is only approximated by the linear model because of the complexity of the physics, atmospheric dispersion, and so on. We consider a pan-sharpening case with four spectral bands: 1) red (R); 2) green (G); 3) blue (B); and 4) near-infrared (NIR), where the decimation factor from high to low spatial resolution is four. Let x = (x_{1,1}, ..., x_{1,16}, ..., x_{4,1}, ..., x_{4,16})^T represent the high spatial resolution MS image patch, and let Y^MS = (y_1, y_2, y_3, y_4)^T be the vector consisting of the pixels from the low-resolution MS images shown in Fig. 6. Then, we can write

Y^MS = M_1 x + v_1.
VI. COMPRESSIVE SENSING AND IMAGE FUSION
Compressive sensing enables a sparse or compressible signal to be reconstructed from a small number of non-adaptive linear
projections, thus significantly reducing the sampling and computation costs. CS has many promising applications in signal
acquisition, compression, and medical imaging. In this paper, we investigate its potential application in image fusion. A
real-valued, finite-length, one-dimensional discrete-time signal x can be viewed as an N × 1 column vector in R^N with elements
x[n], n = 1, 2, ..., N. If the signal is K-sparse, it can be written as

x = Ψs

where Ψ is the N × N basis matrix and s is the coefficient column vector of dimension N × 1. When the signal x in the basis Ψ has
only K << N nonzero coefficients, Ψ is called the sparse basis of the signal x. The CS theory indicates that if the transform
coefficients of the signal x (of length N) at an orthogonal basis Ψ are sparse (that is, only a small number of nonzero coefficients
are obtained), and these coefficients are projected onto a measurement basis Φ which is incoherent with the sparse basis Ψ, the
M × 1 dimensional measurement signal y = Φx = ΦΨs can be obtained. By this approach, compressed sampling of the signal x is
realized.
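As a small numerical illustration of this sampling step, with assumed sizes rather than the paper's data: a signal that is K-sparse in
an orthonormal DCT basis Ψ is projected onto a random measurement basis Φ to give the M × 1 measurement y.

    import numpy as np
    from scipy.fft import idct

    rng = np.random.default_rng(2)
    N, M, K = 128, 40, 5                     # illustrative dimensions (M << N)
    s = np.zeros(N)
    s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # K-sparse s
    x = idct(s, norm='ortho')                # x = Psi s (Psi: orthonormal DCT basis)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement basis,
                                                     # incoherent with Psi
    y = Phi @ x                              # M x 1 compressed measurement
    print(y.shape)                           # (40,)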
The advantage of the joint sparse theory is that the amount of data obtained via the projection measurement is much smaller than
with conventional sampling methods, breaking the bottleneck of the Shannon sampling theorem and making high-resolution
signal acquisition possible. The attraction of joint sparse theory is that it has important implications and practical significance for
applications in many fields of science and engineering, such as statistics, information theory, coding theory, and theoretical
computer science.
Compared with traditional fusion algorithms, the joint sparse FI based image fusion algorithm has shown significant superiority:
image fusion can be conducted without fully sampling the image with the joint SparseFI technique, the quality of image fusion
can be improved by increasing the number of measurements, and the algorithm can save storage space and reduce the
computational complexity.
ACKNOWLEDGMENT
The authors would like to thank Dr. S. C. Shivastava and Prof. Shivendra Singh for their support and for providing us the
opportunity to work on the concept of pan-sharpening.

CONCLUSION
This paper puts forward a fusion algorithm based on compressed sensing with a joint sparse representation. Compared with
traditional methods, the proposed CS based joint SparseFI image fusion algorithm can preserve the image feature information,
enhance the fused image's ability to represent spatial detail, and improve the information content of the fused image. The
experiments prove that the approach in this paper is better than the SparseFI algorithm, the wavelet transform, Laplacian pyramid
decomposition, etc. In this paper, a novel pan-sharpening method based on the CS technique is presented; based on the PAN and
MS image generation models, we reformulated the pan-sharpening problem accordingly.

REFERENCES:
[1] X. Zhu, X. Wang, and R. Bamler, "Compressive sensing for image fusion - with application to pan-sharpening," in Proc.
IGARSS Conf., 2011, pp. 2793-2796.
[2] J. Lee and C. Lee, "Fast and efficient panchromatic sharpening," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 1,
pp. 155-163, Jan. 2010.
[3] S. Mallat, A Wavelet Tour of Signal Processing, 3rd ed. Amsterdam, The Netherlands: Academic, 2009, pp. 664-665.
[4] Z. H. Li and H. Leung, "Fusion of multispectral and panchromatic images using a restoration-based method," IEEE Trans.
Geosci. Remote Sens., vol. 47, no. 5, pp. 1482-1491, May 2009.
[5] V. Buntilov and T. R. Bretschneider, "A content separation image fusion approach: Toward conformity between spectral and
spatial information," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3252-3263, Oct. 2007.
[6] B. Aiazzi, S. Baronti, and M. Selva, "Improving component substitution pan-sharpening through multivariate regression of
MS + Pan data," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3230-3239, Oct. 2007.
[7] L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, and L. M. Bruce, "Comparison of pansharpening algorithms:
Outcome of the 2006 GRS-S data-fusion contest," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3012-3021, Oct. 2007.
[8] T.-M. Tu, P. S. Huang, C.-L. Hung, and C.-P. Chang, "A fast intensity-hue-saturation fusion technique with spectral
adjustment for IKONOS imagery," IEEE Geosci. Remote Sens. Lett., vol. 1, no. 4, pp. 309-312, Oct. 2004.
[9] B. Aiazzi, L. Alparone, S. Baronti, and A. Garzelli, "Context-driven fusion of high spatial and spectral resolution images
based on oversampled multiresolution analysis," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 10, pp. 2300-2312, Oct. 2002.
[10] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev., vol. 43, no. 1, pp. 129-159,
2001.
[11] T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: The ARSIS concept and its
implementation," Photogramm. Eng. Remote Sens., vol. 66, no. 1, pp. 49-61, Jan. 2000.
[12] L. Wald, "Some terms of reference in data fusion," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1190-1193, May
1999.
[13] P. S. Chavez, S. C. Sides, and J. A. Anderson, "Comparison of three different methods to merge multiresolution and
multispectral data: Landsat TM and SPOT panchromatic," Photogramm. Eng. Remote Sens., vol. 57, no. 3, pp. 295-303, Mar.
1991.
A Survey & Assessment of Noise Removal Methods in Imaging
Nikhil Gupta¹, Rampal Kushwaha²
¹Scholar, L.N.C.T Jabalpur
²Assistant Professor, L.N.C.T Jabalpur
E-mail- gupta1nikhil2@gmail.com

Abstract - Image processing is employed in several fields such as computer vision, remote sensing, medical imaging, AI, etc. In
many of these applications, the existence of impulsive noise in the acquired images is one of the most common problems. This
noise is usually removed from an image using a median filter, as it preserves edges during noise removal. Images can also be
corrupted by shot noise, known as salt-and-pepper noise. This noise is characterized by spots on the image and is typically
associated with the acquired image due to errors in image sensors and data transmission. This paper attempts to undertake a study
of denoising methods. Different noise densities are removed using filters and wavelet-based methods. The Fourier transform
technique is localized in the frequency domain, whereas the wavelet transform technique is localized in both the frequency and
spatial domains; however, both of the above methods are not data-adaptive. In this paper we present a review of some important
work in the area of image denoising and identify which method is best for image denoising. Some popular approaches are
classified into different groups, after which we conclude on the best technique for image denoising.
Keywords - wavelet, denoising, image, wavelet transform, signal-to-noise ratio, filters, thresholding.

INTRODUCTION - Image processing is an important area in the information industry. An important problem is how to filter the
noise caused by the nature, system, and process of transfers. Image de-noising has been one of the most important and widely
studied problems in image processing and computer vision. The need for very good image quality is increasingly pressing with
the advent of new technologies in various areas such as multimedia systems, medical image analysis, aerospace, and video
systems. Indeed, the acquired image is often marred by noise, which can have multiple origins, such as thermal fluctuations,
quantization effects, and the properties of communication channels. Noise affects the perceptual quality of the image, decreasing
not only the appreciation of the image but also the performance of the task for which the image was intended. The challenge is to
design methods which can selectively smooth a degraded image without blurring edges, without losing important features, and
while producing reliable results. The goal of image de-noising is to estimate a clean version of a given noisy image, utilizing
prior knowledge of the statistics of natural images. The problem has been studied intensively, with significant progress made in
recent years. The challenge in evaluating the limits of de-noising is that constructing accurate models of natural image statistics is
a long-standing and still unsolved problem. This raises the question of whether the error rates of current de-noising algorithms
can be reduced much further. For the harder cases of very large patch sizes or very small noise levels, we only obtain a bound on
the best possible de-noising error. Furthermore, various research efforts have been dedicated to the learning of natural image
priors, and many works have studied the bounds of image de-noising. Some methods focused mainly on SNR arguments without
taking into account the strength of natural image priors.
II. LITERATURE SURVEY - In 2006, Krishnan Nallaperumal, Justin Varghese, S. Saudia, R. K. Selvakumar, K. Krishnaveni,
and S. S. Vinsley presented "Selective Switching Median Filter (SSMF) for the Removal of Salt & Pepper Impulse Noise". In this
paper, a new median-based filtering algorithm is presented for the removal of impulse noise from digital images. A thorough
analysis of the limitations of the top-ranking median filters, the Progressive Switching Median Filter (PSMF) and the Rank-order
based Adaptive Median Filter (RAMF), is made, and these limitations are overcome very effectively by the proposed filter, which
cleans the impulse corruptions of a digital image in two distinct phases of impulse detection and impulse correction. The
detection phase identifies the corrupted pixels into a flag image by a spatial rank-ordered approach, and the correction phase
modifies the corrupted pixels identified in the flag image with a more appropriate rank-ordered value by considering the
neighboring features.
In 2007, Krishnan Nallaperumal, Justin Varghese, S. Saudia, K. Krishnaveni, Santhosh P. Mathew, and P. Kumar presented "An
Efficient Switching Median Filter for Salt & Pepper Impulse Noise Reduction". Like other impulse detection algorithms, their
impulse filter is built on prior knowledge of natural images, i.e., a noise-free image should be locally smoothly varying and
separated by edges. The noise considered by this detection algorithm is only salt-and-pepper impulsive noise, which means:
1) only a portion of the image pixels are corrupted while the other pixels are noise-free, and
2) a noise pixel takes either a very large value as a positive impulse or a very small value as a negative impulse.
The features of the proposed switching median filter are described to show the efficient restoration of highly impulse-corrupted
images. The computationally efficient filter takes care to restore only the impulse-corrupted pixels with a more suitable median
from an appropriate neighborhood, keeping the signal content of the uncorrupted pixels. The filter identifies the noisy pixels by
testing them for corruption with a more appropriate noise detector, and replaces them with a more valid intensity that maintains
the image fidelity to a large extent. The proposed filter restores only the corrupted image signals of the digital image. It is an
improved impulse noise reduction filter that offers a suitable and recognizable restoration of images corrupted at noise levels up
to about 96%. Whereas most other median filters develop many impulse patches, making the restored image difficult to recognize
at higher noise levels, the proposed switching median filter yields recognizable, patch-free restoration with little degradation in
fidelity.
In 2007, Krishnan Nallaperumal, Justin Varghese, S. Saudia, K. Arulmozhi, K. Velu, and S. Annam presented "Salt & Pepper
Impulse Noise Removal Using Adaptive Switching Median Filter", in which an effective median filter for salt & pepper impulse
noise removal is presented. This computationally efficient filtering technique is implemented by a two-pass algorithm: in the first
pass, the corrupted pixels that are to be filtered are detected into a flag image using a variable-sized detection window approach;
in the second pass, using the detected flag image, the pixels to be changed are identified and corrected by a more appropriate
median.
In 2008, James C. Church, Yixin Chen, and Stephen V. Rice presented "A Spatial Median Filter for Noise Removal in Digital
Images". In this paper, six different image filtering algorithms are compared based on their ability to reconstruct noise-affected
images. The purpose of these algorithms is to remove noise from a signal, such as may occur through the transmission of an
image. A spatial median filter is introduced and compared with current image smoothing techniques. Experimental results
demonstrate that the proposed algorithm is comparable to these techniques, and a modification of the algorithm is introduced to
achieve more accurate reconstructions than other popular techniques. In the results, they find the best threshold T to use in the
MSMF and determine that the best threshold is 4 when using a 3x3 mask size. Using these as parameters, this filter was included
in a comparison of the Mean, Median, Component Median, Vector Median, and Spatial Median Filters. In this comparison of
noise removal filters, it was concluded that for images containing p = 0.15 noise composition, the MSMF performed best, and
that the Component Median Filter performed best over all noise compositions tested. This work was supported by the University
of Mississippi.
In the Proceedings of the Seventh World Congress on Intelligent Control and Automation, June 25-27, 2008, Chongqing, China,
the authors Youlian Zhu, Cheng Huang, and Zhihuo Xu presented work on an image de-noising algorithm based on the median
morphological filter, addressing image de-noising by morphological filters that cause useful information to be lost. The algorithm
performs the norm operation through erosion and dilation operations and improves and optimizes the structuring element units.
The experiments prove that this method overcomes the inherent inadequacy of the traditional morphological limit operation and
effectively removes the impulse noise of the image; especially in low signal-to-noise-ratio environments, the de-noising
performance has obvious advantages over the traditional morphological filter and median filter algorithms. Therefore, it has
broad application prospects in image processing.
At the 2008 International Symposium on Information Science and Engineering, Deng Xiuqin, Xiong Yong, and Peng Hong
developed a new kind of weighted median filtering algorithm for image processing. Aimed at the shortcomings of the traditional
median filtering algorithm, the paper proposes a new adaptive weighted median filtering (AWMF) algorithm. The new algorithm
first determines the noise points in the image through noise detection, then adjusts the size of the filtering window adaptively
according to the number of noise points in the window; the pixel points within the filtering window are sorted adaptively by
certain rules and a corresponding weight is given to each cluster of pixel points according to its similarity; finally, the detected
noise points are filtered by means of the weighted median filtering algorithm. The results of the simulation experiments indicate
that the new algorithm not only filters off noise effectively but also favorably preserves image details, with a filtering
performance better than the traditional median filtering algorithm. To verify that the algorithm can filter off impulse noise of
different densities and protect detailed image information, it was compared with filtering algorithms of the same kind on a Matlab
7.0 platform, using the standard 256 x 256, 8-bit test image Lena as the original image, adding impulse noise with intensities of
5%, 10%, 30%, 40%, and 60%, respectively, and adopting the standard median filtering algorithm with 3x3 and 5x5 windows,
the adaptive median filtering algorithm (AMF), and the algorithm of the paper for noise removal, to compare the advantages and
disadvantages in terms of noise filtering and detail protection. The paper thus raises a new weighted median filtering algorithm
for image processing in light of the strong and weak points of the traditional median filtering algorithm and the respective
advantages of large-window filtering, small-window filtering, and center-weighted median filtering. The new algorithm not only
modifies the size of the filtering window automatically according to the number of noise points detected, but also clusters the
pixel points within the filtering window according to their similarity, by calculating the similarity of the pixel points within the
filtering window and giving a corresponding weight W to the pixel points in each cluster. Because the algorithm gives larger
weight to the window center point and to points largely similar to the window center point, it can protect the image details better.
This approach of self-adjusting the size of the filtering window and giving a different weight to each pixel point eases the
contradiction between noise suppression and detail preservation to a large degree, which greatly enhances the noise filtering and
detail-preserving capabilities, so that it has better overall noise filtering performance than the standard median filter algorithm.
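As a concrete reference point for the adaptive and switching median filters surveyed above, the following is a minimal sketch of
the classic textbook adaptive median filter (the window grows until its median is not an impulse); it is a generic Python/NumPy
illustration, not any one surveyed author's algorithm, and the test image is made up.

    import numpy as np

    def adaptive_median(img, max_win=7):
        # Classic adaptive median filter for salt-and-pepper noise:
        # grow the window until its median is not an extreme value.
        pad = max_win // 2
        padded = np.pad(img, pad, mode='reflect')
        out = img.copy()
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                for w in range(3, max_win + 1, 2):
                    r = w // 2
                    win = padded[i + pad - r:i + pad + r + 1,
                                 j + pad - r:j + pad + r + 1]
                    med, lo, hi = np.median(win), win.min(), win.max()
                    if lo < med < hi:
                        # median is valid; keep the pixel unless it is an impulse
                        out[i, j] = img[i, j] if lo < img[i, j] < hi else med
                        break
                    out[i, j] = med  # window at its largest: fall back to median
        return out

    noisy = np.arange(64, dtype=float).reshape(8, 8)   # smooth toy "image"
    noisy[2, 3], noisy[5, 5] = 255.0, 0.0              # inject salt and pepper
    clean = adaptive_median(noisy)
    print(clean[2, 3], clean[5, 5])                    # 20.0 44.0 (local medians)

The design choice the surveyed papers all vary is exactly the two decisions visible here: how an impulse is detected, and how the
replacement value and window size are chosen.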
In 2009, Cheng Huang and Youlian Zhu enhanced the previous morphological filter and presented a new morphological filtering
algorithm for image noise reduction. The conventional morphological filter is unable to effectively preserve image details while
removing noise from an image. Their self-adaptive median morphological filter is implemented as follows. First, the extreme
value operation is replaced by the median operation in erosion and dilation. Then, the structuring element unit (SEU) is built
based on the zero square matrix. Finally, the peak signal to noise ratio (PSNR) is used as the estimation function to select the size
of the structuring element. Both the characteristics of the morphological operations and the SEU determine the image processing
effect. As the noise density increases, the conventional morphological filtering algorithm and the median filtering algorithm
quickly become unusable. However, the proposed morphological filtering algorithm still has a better effect in image noise
reduction, especially in low SNR situations. Thus, the proposed algorithm is clearly superior to the others.
In 2010, Jiafu Jiang and Jing Shen presented "An Effective Adaptive Median Filter Algorithm for Removing Salt & Pepper Noise
in Images". This paper proposes an adaptive median filter algorithm based on a modified PCNN model and makes the following
improvements and innovations:
(1) The simplified PCNN model is proved, by reductio ad absurdum, to fail to detect pepper noise;
(2) The above model is improved using the method of divide and rule;
(3) The size of the filtering window is adaptively determined according to the output of the modified PCNN model.
The PCNN model was originally proposed by Eckhorn, based on the synchronized pulse releases of the visual cortices of cats, but
the original PCNN model has some limitations for practical image processing. Based on it, the adaptive median filter algorithm is
achieved by detecting the pollution level, ascertaining the specific location of the noise, and determining the size of the median
filtering window adaptively. To verify the validity of this method, on a Matlab 7.0 platform, the image Lena is used as the
experimental material for the simulations. The image Lena, corrupted by different levels of salt and pepper noise, is filtered by
three different filtering methods, and the MAE (Mean Absolute Error) and PSNR (Peak Signal to Noise Ratio) are calculated and
compared.

Conclusion - The filtering technique is computationally fast and gives good results. Some aspects analyzed in this paper may be
useful for other denoising schemes, such as objective criteria for evaluating the noise suppression performance of different
significance measures. Filtering is only superficially related to wavelets; wavelet methods using modified thresholding form a
much more powerful technique, capable of finding the underlying factors or sources when the classic methods fail completely.

REFERENCES:

[1] Zhu Youlian, Huang Cheng, "An Improved Median Filtering Algorithm Combined with Average Filtering," Third
International Conference on Measuring Technology and Mechatronics Automation, 2011.

[2] Wang Chang-you, Yang Fu-ping, Gong Hui, "A new kind of adaptive weighted median filter algorithm," 2010 International
Conference on Computer Application and System Modeling (ICCASM 2010).

[3] Shuangteng Zhang, "Image De-noising Using FIR Filters Designed with Evolution Strategies," Intelligent Systems and
Applications (ISA), 2011 3rd International Workshop, 2011; Chenguang Yan and Yujing Liu, "Application of Modified Adaptive
Median Filter for Impulse Noise," International Conference on Intelligent Control and Information Processing, Dalian, China,
August 13-15, 2010.

[4] HongJun Li, ZhiMin Zhao, "Image Denoising Algorithm Based on Improved Filter in Contourlet Domain," World Congress
on Computer Science and Information Engineering, 2009.

[5] Liu Wei, "New Method for Image Denoising while Keeping Edge Information," 2009 IEEE.

[6] S. Balasubramanian, S. Kalishwaran, R. Muthuraj, D. Ebenezer, V. Jayaraj, "An Efficient Non-linear Cascade Filtering
Algorithm for Removal of High Density Salt and Pepper Noise in Image and Video Sequence," International Conference on
Control, Automation, Communication and Energy Conservation, June 2009.

[7] Tang Quan-hua, Ye Jun, Yan Zhou, "A New Image Denoising Method," International Conference on Intelligent Computation
Technology and Automation, 2008.

[8] Deng Xiuqin, Xiong Yong, Peng Hong, "A new kind of weighted median filtering algorithm used for image processing,"
International Symposium on Information Science and Engineering, 2008.

[9] I. Aizenberg, C. Butakoff and D. Paliy, "Impulsive noise removal using threshold Boolean filtering based on the impulse
detecting functions," IEEE Signal Proc. Letters, vol. 12, no. 1, pp. 63-66, 2005.

[10] S. M. Mahbubur Rahman, M. Omair Ahmad, M. N. S. Swamy, "Wavelet-domain Image De-noising Algorithm Using Series
Expansion of Coefficient P.D.F. in Terms of Hermite Polynomials," 2005.

[11] Li Dan, Wang Yan, Fang Ting, "Wavelet Image De-noising Algorithm Based on Local Adaptive Wiener Filtering,"
International Conference on Mechatronic Science, Electric Engineering and Computer, Jilin, China, August 19-22, 2011.

[12] Zuo-feng Zhou, Jian-zhong Cao, Hao Wang, Wei-hua Liu, "Image Denoising Algorithm via Doubly Bilateral Filtering,"
IEEE 2009.

[13] Ashek Ahmmed (Politecnico di Milano, Piazza L. Da Vinci), "Image De-noising using Gabor Filter Banks," Computers &
Informatics (ISCI), 2011 IEEE Symposium.

[14] Jing Liu, Fei Gao, Zuozhou Li, "A model of image de-noising based on partial differential equations," Multimedia
Technology (ICMT), International Conference, 2011.

[15] Baopu Li, Max Q.-H. Meng, and Huaicheng Yan, "Image De-noising by Curvature Strength Diffusion," Proceedings of the
2009 IEEE International Conference on Information and Automation, Zhuhai/Macau, China, June 22-25, 2009.

[16] Jiang Bo, Huang Wei, "Adaptive Threshold Median Filter for Multiple-Impulse Noise," Journal of Electronic Science and
Technology of China, 2007.

[17] Dong Fuguo, Fan Hui, Yuan, "A Novel Image Median Filtering Algorithm based on Incomplete Quick Sort Algorithm,"
International Journal of Digital Content Technology and its Applications, Volume 4, Number 6, September 2010.

[18] Wang Chang-you, Yang Fu-ping, Gong Hui, "A new kind of adaptive weighted median filter algorithm," 2010 International
Conference on Computer Application and System Modeling (ICCASM 2010).

[19] Gonzalez R. C., Woods R. E., Digital Image Processing, 3rd edition, Pearson Prentice Hall, 2009.

[20] Behrooz Ghandeharian, Hadi Sadoghi Yazdi and Faranak Homayouni, "Modified adaptive center weighted median filter for
suppressing impulsive noise in images," International Journal of Research and Reviews in Applied Sciences, Volume 1, Issue 3,
December 2009.

[21] T.-C. Lin, P.-T. Yu, "A new adaptive center weighted median filter for suppressing impulsive noise in images," Information
Sciences 177 (2007) 1073-1087.

[22] Krishnan Nallaperumal, Justin Varghese, S. Saudia et al., "Selective Switching Median Filter for the Removal of Salt &
Pepper Impulse Noise," in Proc. of IEEE WOCN 2006, Bangalore, India, April 2006.

[23] Krishnan Nallaperumal, Justin Varghese, S. Saudia, K. Krishnaveni, Santhosh P. Mathew, P. Kumar, "An Efficient Switching
Median Filter for Salt & Pepper Impulse Noise Reduction," 1st International Conference on Digital Information Management,
2006.

Effect of Altitude on the Efficiency of Solar Panel
Manoj Kumar Panjwani¹, Dr. Ghous Bukshsh Narejo¹
¹Department of Electronic Engineering, NEDUET, Pakistan
E-mail- manoj_panjwani@hotmail.com

Abstract- Our previous research work suggests that the efficiency of a solar panel is drastically affected by humidity changes. In
this research paper, we observe the effect on the power accession of a solar panel when it is kept at an altitude/height. According
to the experiments conducted, at the same time and at the same intensity of sunlight, a power accession of 7-12% was observed
due to the placement of the solar panel at a particular height of 90 feet/27.432 m above the datum/ground level.
Keywords- Solar energy, altitude/height factor, power accession, sea level, efficiency.
Introduction
If we talk about the energy received from the Sun, the Earth receives approximately 1413 W/m² at the top of the atmosphere, and
the actual value recorded on the ground is approximately 1050 W/m², as recorded by the Pacific Northwest Forest and Range
Experiment Station, Forest Service, U.S. Department of Agriculture, Portland, Oregon, USA, in 1972. As per the facts observed,
approximately 30% of the energy is lost in between; as per the statistical figures stated, the sunlight intensity at the top of the
Earth's atmosphere is about 30% more intense than that actually received on the land. The solar panels we use today thus work
with the 70% of the energy coming from the Sun that reaches the ground to fulfill our energy needs. [1-3]
When the solar panel is placed at an altitude of 27.432 meters/90 feet above ground level, it is observed that the gases and the
humidity, along with factors arising from the presence of population (the emission of different gases by the masses, the usage of
fossil fuels, and more), actually play a role in stopping or limiting a certain amount of the incident intensity from reaching the
solar panel, hence making the solar panel less effective.
As per our previous research work in this particular area, the effect of humidity was observed to cause a considerable deviation in
the power accession. It was measured with the help of a hygrometer, and the readings were shown to deviate from those at
ground level, as the amount of humidity reduced with height.
Our experiments were conducted on a clear sunny day with 30% humidity, with 3 solar panels at ground level; the readings noted
represent the normal readings observed at ground level, as is the practice these days with solar grounds/solar gardens and the
introduction of solar villages. At the same time, 3 solar panels were installed at about 27.432 meters above ground level; the
humidity observed up there was nearly 26%, with a temperature deviation of 1°C, and the readings were taken simultaneously
with those on the ground.
An interesting fact emerges: a power accession of about 7-12% was observed relative to the ground-level readings at the datum.
This may also simply reflect basic physics: the closer one moves toward a light-emitting source, the higher the observed
intensity.
Apart from the physics stated above, what was observed is that there is a drastic change in humidity, which also affects the power
accession, as seen in our previous research work. Beyond the effect of humidity, many factors become negligible at a height:
certain gases appear to have less effect on the intensity, as they offer less resistance in the form of reflection or refraction of the
light.
The usage of fossil fuels delivers CO2 gas into our environment, and deforestation in the modern era lowers the absorbance of
the above-stated gas.
The manufacturers of solar panels report in the specification sheet that the panel responds at 1000 W/m², 25°C. However, the
performance of the solar panel is strongly affected by various external factors, which may cause the panel to deviate from the
standard values prescribed by the manufacturers. [3]
This mainly depends on the solar radiation reaching the Earth and the values corresponding to the incident irradiation Pi, where
Pi is the incident irradiation in W/m², given by integrating the spectral irradiance over wavelength:

Pi = ∫ E(λ) dλ

where E(λ) is the spectral irradiance; I_sc is the short-circuit current and V_oc is the open-circuit voltage. The open-circuit
voltage and short-circuit current are readily affected by an increasing humidity range, which indirectly makes the system deviate
from the standard values provided by the manufacturer.
The spectral irradiance varies in its intensity at ground level because of various atmospheric parameters. As far as the
short-circuit current density, Jsc, is concerned, it is directly related to the spectral irradiance. [4]
As the panel is placed at altitude/height, many of these factors become negligible, most notably through the deviation in the
humidity factor; in addition, with height, the effect of certain gases on the panel reduces, which actually allows the panel to work
more effectively than before. [4]
When light consisting of energy/photons strikes the water layer/unwanted gases, which are in fact denser at ground level,
refraction occurs, which decreases the intensity of the light and appears to be the root cause of decreasing efficiency.
Additionally, there is a component of reflection at the site, through which the striking light is subjected to more losses. If the
solar panels are placed at a particular height, these factors have less effect on efficiency, as there is less humidity and a smaller
amount of gas, so that reflection and refraction have less effect on the utilization of the energy coming from the Sun. [5]
Experiment and Analysis:
Various experiments were conducted. The test bench included 6 solar panels specified as 50 W BP solar panels with Vmp =
17.3 V and Imp = 2.9 A, temperature coefficient of Isc = (0.065 ± 0.015) %/°C, temperature coefficient of Voc = -(80 ± 10)
mV/°C, and temperature coefficient of power = -(0.5 ± 0.05) %/°C; a hygrometer; a thermometer; 6 output loads as tungsten
filament bulbs (15, 20, 25 W); and 12 multimeters. The test bench comprised 3 solar panels installed on the ground and 3 solar
panels at an altitude of about 90 feet/27.432 meters. Results were calculated initially at the normal temperature in Karachi, which
was 34°C (307 K), with 30% humidity.
Temperature(K) Humidity (%) Voltage (DC) Current Amps(DC) Powers(watts)
307 30 16.32 2.41 39.331
307 30 17.3 2.34 40.482
307 30 16.45 2.51 42.789
Table 1: Humidity vs. voltage, current and power readings taken through the experimental set up as discussed on the ground level.
Simultaneously, for the panels kept at the altitude of about 27.432 meters, the following data was observed. Due to the placement
of the system at altitude, the temperature appeared to deviate a bit from the standard, because the presence of water particles and
other gases contributes to lowering the usual temperature. Humidity was observed to be 26% at the altitude.
Temperature(K) Humidity (%) Voltage (DC) Current Amps(DC) Powers(watts)
308 26 16.76 2.63 44.078
308 26 17.64 2.53 44.629
308 26 17.08 2.69 45.945
As can be observed, there appears a drastic change in the power accession when the solar panel is placed at the particular
altitude. To compare the respective power readings and observe the difference, the following formula was used:

Power Accession (%) = (Power(Altitude) - Power(Ground)) / Power(Ground) × 100

Solar Panel    Power(Ground), Watts    Power(Altitude), Watts    % Accession
1              39.331                  44.078                    12.06
2              40.482                  44.629                    10.24
3              42.789                  45.945                    7.37
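The accession figures in the table can be checked directly from the formula above; the following minimal Python sketch uses
only the tabulated values, nothing beyond them.

    # Percent power accession = (P_altitude - P_ground) / P_ground * 100,
    # applied to the three panel pairs tabulated above.
    ground = [39.331, 40.482, 42.789]
    altitude = [44.078, 44.629, 45.945]
    for n, (pg, pa) in enumerate(zip(ground, altitude), start=1):
        print(f"Panel {n}: {100 * (pa - pg) / pg:.2f}% accession")
    # -> 12.07%, 10.24%, 7.38% (the table's 12.06/10.24/7.37, up to rounding)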

Acknowledgment
I would like to thank Dr. Lachhman Das Dhomeja, Professor at the Institute of Information and Communication Technology,
University of Sindh; Indra Devi Sewani, PhD student at Sindh University, Jamshoro; and Radha Mohanlal, Lab Engineer at
IOBM, for being supportive and informative towards my goals and for their unconditional help, without which this research
submission would never have been possible.
Conclusion:
After the experiments conducted, it was clearly observed that a power accession of 7-12% is obtained when the solar panel is
installed at a particular altitude above the ground, which indeed can be identified as the most probable and easy solution for
utilizing fewer resources to obtain maximum output.
Future Directions and Suggestions:
As can be observed, by applying simple techniques the power accession can be improved by a considerable amount. The
agencies currently working in the concerned area with the same goals should place the panels well above the ground, so as to
make the best utilization of the power coming from the Sun and the best efforts to utilize the blessing of sunlight in Pakistan.
REFERENCES:
"Chapter 8 Measurement of sunshine duration" (PDF). CIMO Guide. World Meteorological Organization
Natural Forcing of the Climate System". Intergovernmental Panel on Climate Change. Retrieved 2007-9-29.Radiation Budget".
NASA Langley Research Center. 2006-10-17.
"Introduction to Solar Radiation". Newport Corporation. Archived from the original on Oct. 29, 2013.
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

464 www.ijergs.org

M. Chegaar, P. Mialhe , Effect of Atmospheric parameters on the silicon cells performance.
http://web.physics.ucsb.edu/~lgrace/chem123/troposphere.htm
Dill, Lawrence M. "Refraction and the spitting behavior of the archerfish (Toxotes chateaus)." Behavioral Ecology and Sociobiology
2.2 (1977): 169-184
Cash Flow of High Rise Residential Building
Dipti R. Shetye¹, Dr. S. S. Pimplikar²
¹Scholar, Civil Department, M.I.T Pune
²H.O.D, Civil Department, M.I.T Pune
E-mail- dipti.shetye@yahoo.in

ABSTRACT- Cash flow is very essential in every construction project, as it gives a detailed idea of how much money is being
spent on the project as cash outflow and how much is received back from the project as cash inflow. When we combine the
inflow and the outflow with the help of a graph, we can understand how much inflow and outflow there is in each month, so it is
easy to compare them and to know how much profit we are getting; we can also plot the S curve of month vs. cumulative cost,
from which we can see, month-wise, how the cost adds up to the last month of the project.
To explain all this in detail, I have taken as a case study in this paper the project on which I have worked: Pebbles, a 9-floored
building in Bavdhan. For this case study I have collected data such as the item-wise quantities in each activity, the basic labor
and material rates, and the total consumption of items for each activity; the total BOQ prepared from this shows the item-wise
quantity, rate, and amount.
I have used Microsoft Project software in this project: all the activities were put into the software as a work breakdown structure
(WBS), durations were given for the activities, the linking was done, the baseline was set, and tracking was done using the
tracking Gantt. This gives the planned and actual durations of the project, from which we can understand whether the project is
as per plan or lagging, and if it is lagging, by how much. Then the resource cost summary report was generated in MSP using
visual reports for the whole project duration; we can also do this for each month to understand how much is spent on each
resource in each month. Then the inflow and the outflow were generated, and then their combination, from which we can
understand the inflow and outflow in each month simultaneously, so that we can easily evaluate the project over its duration.
In this way the cash flow of any project is generated, which is very necessary for a clear understanding of how much money we
are spending and how much money we are getting back.
KEYWORDS- Cash flow, inflow, outflow, resource cost summary report, Microsoft Project software, consumption, project
duration.
INTRODUCTION-
The case study taken in this project is the 9-floored building Pebbles constructed in Bavdhan. I have collected data regarding it
from the site, such as the data required to work out the quantities of all items (from which the item quantities were worked out),
the basic labor and material rates, area details, etc. All the collected data was then entered into Microsoft Project software step by
step to generate the required output, that is, the cash flow of the project.
The steps followed are given below-
Entered the WBS.
Prepared the list of activities with their durations, start dates and finish dates; this gives the total duration of each activity.
Linking of the activities was done.
The baseline was set.
Tracking was done using the tracking Gantt option; it gives the planned and actual durations of the project.
The outflow was generated using visual reports.
The inflow was generated.
Combining the inflow and outflow gives the total cash flow of the project.
The S curve, that is month vs. cumulative cost, was generated.

Typical floor plan of Building-


Information about project site-
Name of Site: Pebbles
Location: Survey no.340/3348/1 near dsk ranvara
Bavdhan budruk, Pune.
Type of Project: High rise Residential Building.
Project Manager: Mr. Santosh Runwal
Site Engineer: Mr. Nitin Chougule.
Name of Contractor/Builder- Rainbow Housing
Landscape consultant: Designterra
Plumbing Consultant: Amit Infrastructure Consultant
Structural Consultant: Hansal Parekh & Associate
Electrical Consultant: Consolidated Consultants & Engrs. Pvt. Ltd
Project Architect: Abhikalpan Architects & Planners
Area Statement-
Slab Area: 22,457 Sqft; Saleable Area: 17,808 Sqft
Resource sheet in MSP- All required resources were inserted in the resource sheet in Microsoft Project software along with their
per-unit costs, which gives the amount of each resource.
Resource sheet in MSP
Activities and durations were entered in Microsoft Project software and the linking was done; all the project activities were
entered in the project as shown below (first and last part).
Activities and durations have been set as given below-

Tracking-
Then the baseline was set and tracking was done using the tracking Gantt for all activities, from which we can understand the
planned and actual duration of each activity, and whether an activity is on time, lagging, or ahead of time; the first and last parts
are shown below.


For example, in the excavation activity the planned finish was 11 June 2011 but it actually extended up to 23 June 2011; for the
fire fighting system the planned date was 19 March 2012 but it extended up to 4 April 2012.
Resource cost summary report-
Then the resource cost summary report was generated using visual reports, from which we can understand how much was spent
in each month on each resource. For example, the resource cost summary report for the month of March 2012 is shown below.

Resource cost summary report

Then the outflow was generated, which shows the expenditure in each month, as shown below.

Cash outflow report
[Figure: cash outflow bar chart, "Resource Cost Summary Report March 2012" - total cost in Rs. (axis 0 to 1,800,000) per
resource, covering items such as top terrace waterproofing (IPS/LMR, 75 mm thk.), 300x300 antiskid tile, staircase tread
(Tandoor), 600x600 vitrified tile skirting, 300x300 antiskid tile skirting, dado in CM (1:4) bed of 20 mm, door jambs (Jet Black
granite patti), MS window grills, MS ladder (450 mm wide), MS manhole cover (630 mm), painting (white wash for lift, OBD
with primer, wooden oil paint), lift (P+7 floors, 1.5 m x 1.6 m), letter boxes, video door phone, notice board at parking, fire
fighting system for (P+7), and development work.]
The above cash outflow report, generated from visual reports, gives the total expenditure of the project.
Cash outflow of the project: Rs. 39,619,286

Cash inflow of the project-
Cash inflow is received from the clients in specific percentages after completion of specific items of work; the payment schedule
is given below.

After completion of work - Percent of amount received
Excavation - 20
R.C.C - 50
Brickwork - 10
Plastering - 10
Flooring - 5
Finishing - 5

Payment schedule for inflow

Basically, amounts are received in particular months only, like-
After completion of Excavation, amount received: Rs. 17,965,600
After completion of R.C.C, amount received: Rs. 44,914,000
After completion of Brickwork, amount received: Rs. 8,982,800
After completion of Plastering, amount received: Rs. 8,982,800
After completion of Flooring, amount received: Rs. 4,491,400
After completion of Finishing, amount received: Rs. 4,491,400

As in some months there was no inflow, the amount received previously was utilized to maintain the inflow of the project and
keep a smooth flow, so we get a proper inflow for every month. Finally, we get inflow greater than outflow after deducting
overheads of 7% from the total inflow.
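A small sketch of the inflow arithmetic above: the stage percentages are applied to the total contract value and 7% overheads are
then deducted. The total contract value of Rs. 89,828,000 is inferred from the stage amounts (Rs. 17,965,600 is the 20%
excavation stage); everything else comes from the text.

    # Stage-wise inflow: a percentage of the total contract value is released
    # on completion of each stage; 7% overheads are then deducted.
    total_value = 89_828_000     # Rs., inferred: 17,965,600 is 20% of this
    schedule = {"Excavation": 20, "R.C.C": 50, "Brickwork": 10,
                "Plastering": 10, "Flooring": 5, "Finishing": 5}
    inflow = {stage: total_value * pct // 100 for stage, pct in schedule.items()}
    gross = sum(inflow.values())          # Rs. 89,828,000
    net = gross - gross * 7 // 100        # deduct 7% overheads -> Rs. 83,540,040
    for stage, amount in inflow.items():
        print(f"{stage}: Rs. {amount:,}")
    print(f"Gross inflow: Rs. {gross:,}; net after overheads: Rs. {net:,}")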

Cash inflow report of project
The cash inflow report is generated from visual reports.
Cash inflow of the project: Rs. 83,540,040
Cash flow of the project

Outflow of project: Rs. 39,619,286
Inflow of project: Rs. 89,828,000 - Rs. 6,287,960 (7% overheads) = Rs. 83,540,040

The cash flow report of the project shows the combination of the project inflow and the project outflow, with red showing the
project inflow and blue showing the project outflow. As we can see, the project inflow is greater than the project outflow, which
means this construction project is successful and gains the required profit.
S Curve of the project-
The S curve report shows the graph of time vs. cumulative cost and thereby the progress of the project; after feeding in all the
required data and costs, it is generated from the visual reports. Every month's outflow is added one by one, so we finally get the
running sum, that is, the cumulative cost. It shows the progress and flow of the total project.

S Curve of the project
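A minimal sketch of how a month-vs-cumulative-cost S curve is produced from monthly outflows, assuming Python with NumPy
and Matplotlib; the monthly figures here are placeholders for illustration, not the project's actual values.

    import numpy as np
    import matplotlib.pyplot as plt

    months = ["Jun-11", "Jul-11", "Aug-11", "Sep-11", "Oct-11", "Nov-11"]
    monthly_outflow = np.array([8e5, 2.1e6, 3.4e6, 3.9e6, 2.6e6, 1.2e6])  # Rs., placeholder
    cumulative = np.cumsum(monthly_outflow)   # running total: the S curve ordinate

    plt.plot(months, cumulative, marker="o")
    plt.xlabel("Month")
    plt.ylabel("Cumulative cost (Rs.)")
    plt.title("S curve: month vs. cumulative cost")
    plt.tight_layout()
    plt.show()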
Acknowledgement-
It is my pleasure to have prepared this paper under the valuable guidance of Prof. S. S. Pimplikar, H.O.D., Civil Department,
M.I.T. Pune, and I am also thankful to Prof. Baliram Ade for his help.


Conclusion-
From all the above we can conclude that:
1. Cash flow is the backbone of any construction project, and if we fail to manage it, the project can fail.
2. All items should be considered in the cash flow, such as material cost according to its quantity, charges, labor wages, fixed
cost, overhead expenses, and all direct and indirect cost expenses.
3. Poor cash flow hampers a construction project and results in delay of project completion, increase in costs, etc.
4. Special attention is required in the execution of high-rise buildings due to the increase in variables, which needs special study
and analysis at the different stages experienced in construction.
5. Working out the cash flow is essential because it gives the total inflow and outflow of the project, the combination of which
gives the cash flow of the project, from which it is easy to determine the profit of the project.
6. We also understand from this how the cash flow is generated with the help of Microsoft Project software.
7. With the help of this software it becomes easy to calculate the cash flow, as we give the input and it produces the output.
Factors Influencing Anxiety among Epilepsy Patients at Selected Hospital in Chennai
Dr. S. Sujithra¹
¹Lecturer, Saveetha College of Nursing, Saveetha University
E-mail- Sujithra.mathi@yahoo.com

Abstract: The World Health Organisation and its partners recognize that epilepsy is a major public health concern. Projects to
reduce the treatment gap and morbidity of people with epilepsy, train and educate health professionals, dispel stigma, identify
potential for prevention, and develop models integrating epilepsy control into local health systems are ongoing in many
countries. In a project carried out in China, the treatment gap was reduced by 13% and there was improved access to care for
people with epilepsy. The most common psychiatric conditions in epilepsy are depression, anxiety, and psychoses. Anxiety is
common in patients with epilepsy; out of 49 patients with epilepsy attending a tertiary epilepsy care center, 57% had high-level
anxiety. Anxiety in patients with epilepsy can be ictal, postictal, or interictal. Up to 50 or 60% of patients with chronic epilepsy
have various mood disorders including depression and anxiety. Whereas the relationship between epilepsy and depression has
received much attention, less is known about anxiety disorders. It is now recognized that anxiety can have a profound influence
on the quality of life of patients with epilepsy. The relationship between anxiety disorders and epilepsy is complex. It is necessary
to analyse the factors which influence anxiety among epilepsy patients. Using a non-experimental, descriptive research design
with a survey approach, data was collected from 60 epilepsy patients who were attending the OPD on a regular schedule. On data
analysis there was an association between the level of anxiety and the selected demographic variables; the chi-square test
revealed significance (p = 0.00) for employed patients.
Keywords: Assess, Factors influencing anxiety, Epilepsy.
Introduction:
Epilepsy accounts for 0.5% of the global burden of disease, a time-based measure that combines years of life lost due to
premature mortality and time lived in states of less than full health. The most common psychiatric conditions in epilepsy are
depression, anxiety, and psychoses.
Table below shows the Prevalence Rates of Psychiatric Disorders in Patients With Epilepsy and the General Population
(2007 data)
Psychiatric Disorder            Controls    Patients With Epilepsy
Major depressive disorder       10.7%       17.4%
Anxiety disorder                11.2%       22.8%
Mood/anxiety disorder           19.6%       34.2%
Suicidal ideation               13.3%       25.0%
Others                          20.7%       35.5%
Anxiety is an experience of fear or apprehension in response to anticipated internal or external danger, accompanied by some
or all of the following signs: muscle tension, restlessness, sympathetic hyperactivity, and cognitive signs and symptoms
(hypervigilance, confusion, decreased concentration, or fear of losing control).
Anxiety is common in patients with epilepsy; out of 49 patients with epilepsy attending a tertiary epilepsy care center, 57%
had high-level anxiety. Anxiety in patients with epilepsy can be ictal, postictal, or interictal.
GABA is the most important inhibitory transmitter in the central nervous system (CNS). Evidence suggests that the abnormal
functioning of GABA receptors could be of great importance in the pathophysiology of epilepsy and anxiety disorders.

Although, as shown above, studies looking into the association between anxiety and epilepsy have been performed, because of
the difficulty in separating the anxiety that accompanies a chronic disease from pathologic anxiety, studies investigating anxiety
in epilepsy have nonetheless been relatively few. Hence the researchers would like to assess the level of anxiety among patients
with epilepsy in the present scenario.
Up to 50 or 60% of patients with chronic epilepsy have various mood disorders including depression and anxiety. Whereas
the relationship between epilepsy and depression has received much attention, less is known about anxiety disorders. It is now
recognized that anxiety can have a profound influence on the quality of life of patients with epilepsy. The relationship between anxiety
disorders and epilepsy is complex. It is necessary to distinguish between different manifestations of anxiety disorder: ictal,
postictal, and interictal anxiety.
Despite the high prevalence of anxiety disorders in patients with epilepsy, there are no systematic treatment studies or evidence-based
guidelines for best treatment practice
Literature survey:
. WHO, the International League Against Epilepsy (ILAE) and the International Bureau for Epilepsy (IBE) are carrying
out a global campaign, Out of the Shadows to provide better information and raise awareness about epilepsy, and strengthen public
and private efforts to improve care and reduce the disorder's impact.
Swinkels WA, Kuyk J, de Graaf EH, van Dyck R, Spinhoven P A recent study looks for psychopathology using a
standardized diagnostic interview in inpatients with all types of epilepsy obtained similar results: The 1-year prevalence of anxiety
disorders was 25%, and that of mood disorders, 19% .
Goldstein et al (2010)found that patients with epilepsy with high seizure frequency had lower anxiety scores than did
patients with lower seizure frequency. The risk of anxiety is higher in focal (more frequent in temporal lobe) epilepsy than in
generalized epilepsy. In patients with temporal lobe epilepsy, Trimble et al reported that 19% of the patients were diagnosed with
anxiety and 11% were diagnosed with depression.
Torta and Keller (2010) estimated that fear due to anxiety occurs as an aura in as many as 15% of patients.
Goldstein and Harden (2009) concluded from several studies that anxiety is one of the most common ictalemotions.Ictal
anxiety symptoms manifest as fear or panic, sometimes with other characteristics of temporal discharges, such as depersonalization
and dj vu, as well as other psychological and autonomous phenomena.
The International Epilepsy Association studied anxiety in association with type of epilepsy and frequency of seizures. The highest rates of psychiatric comorbidities, including anxiety, are reported in patients with chronic, refractory seizure disorders.
Edeh and Toone (2008) found that patients with temporal lobe epilepsy scored higher for anxiety than did those with focal, non-temporal lobe epilepsy. Anxiety can also be seen in frontal lobe epilepsy.
According to MEDSCAPE India, anxiety in epileptic patients may occur as an ictal phenomenon, as normal interictal emotion, as part of an accompanying anxiety disorder, as part of an accompanying depressive disorder, or in association with nonepileptic, seizure-like events as part of an underlying primary anxiety disorder. Interictal anxiety has a great influence on the quality of life of patients, since most of them have a permanent fear of new discharges.
Torta and Keller (2012) have estimated that as many as 66% of patients with epilepsy report interictal anxiety.
Blum D, Reed M and Metz A, in their studies, have shown that the rate of mood disorder is higher in patients with epilepsy than in those with other chronic medical conditions such as diabetes and asthma.

Swinkels WA and Kuyk J noted that seizure frequency has been linked with severity of anxiety in some studies. This does not necessarily imply ictal fear, but rather that as the burden of epilepsy increases, so does the anxiety. Yet clinically, the degree of anxiety is dissociated from seizure frequency, in that it is the individual's perception of danger (e.g., of falling or dying) that is critical.

Baker GA, Jacoby A, Buck D and Brooks J found that age and gender have a relatively subtle effect: for example, first-onset epilepsy in late life may be linked with higher levels of anxiety.

Goldstein MA and Harden CL reported that the risk of anxiety disorders appears to be higher in focal (especially temporal lobe) than in generalized epilepsies, but anxiety disorders are also seen in patients with frontal lobe epilepsy as well as in primary or generalized seizures. Several groups have found a link with the left temporal lobe, but this is not entirely consistent in the literature.

AIM:
To identify the factors influencing anxiety among epilepsy patients.
OBJECTIVES:
1. To assess the factors influencing anxiety among epilepsy patients.
2. To associate the factors influencing anxiety with selected demographic variables.
ASSUMPTION:
The factors influencing anxiety among epilepsy patients remain unidentified.
OPERATIONAL DEFINITIONS:
ASSESS:
It refers to the estimation of anxiety among epilepsy patients.
FACTORS INFLUENCING ANXIETY:
It refers to the factors that influence anxiety, such as age, education in years, employment status, employment type, current economic status, seizure frequency, the number of antiepileptic drugs, family life/social life dissatisfaction, social support, the symptoms of anxiety and depression, and ADL dysfunction.
EPILEPSY PATIENTS:
Adult generalized epilepsy patients in the age group of 20-40 years who are attending the Nerve Centre, T. Nagar.
LIMITATIONS:
1. The study is limited to four weeks.
2. The study is limited to patients with generalized epilepsy.
PROJECTED OUTCOME:
1. The results of the study will provide information on the factors influencing anxiety among epilepsy patients.
2. The results will bring an awareness among paraprofessionals to provide vigilant care.
Problem Statement:
A study to assess the factors influencing anxiety among epilepsy patients at a selected hospital in Chennai.
Methodology:
RESEARCH DESIGN:
The design chosen for the study is a non-experimental, descriptive research design.
RESEARCH APPROACH: Survey approach

SETTING:
The study was conducted in the Nerve Center clinic, Chennai. About 40 patients with epilepsy (both new and old cases) attend the OPD per week.
SAMPLE
Adult epilepsy patients in the age group of 20-40 years who come to the neurological OPD of the Nerve Center.
POPULATION
It includes all the epilepsy patients in the age group of 20-40 years.
SAMPLE SIZE:
60 adult epilepsy patients.
SAMPLING TECHNIQUE:
Non-probability purposive sampling technique
SAMPLING SELECTION CRITERIA:
1. Inclusion criteria:
- Adult epilepsy patients who are diagnosed with generalized epilepsy.
- Adult generalized epilepsy patients who are attending the private clinic.
- Adult generalized epilepsy patients in the age group of 20-40 years.
- Adult generalized epilepsy patients who can understand Tamil and English.
- Adult generalized epilepsy patients who are willing to participate.
2. Exclusion criteria:
- Adult patients with idiopathic epilepsy.
- Patients who are not willing to participate.
Data collection procedure:
Permission was obtained from the neurological OPD, Chennai. After obtaining informed consent from the patients, the factors influencing anxiety among epileptic patients were identified and analysed.
Scoring Key:

The scores range from 20-80.
Little or none =1
Some of the time =2
A large part of the time =3
Most of the time = 4



SCORE INTERPRETATION
20-44 Normal Range
45-59 Mild to Moderate Anxiety Levels
60-74 Marked to Severe Anxiety Levels
75 and above Extreme Anxiety Levels
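To make the scoring concrete, the following minimal Python sketch (ours, not part of the study) computes a respondent's raw score and its interpretation band, assuming the 20-item instrument with 1-4-point item ratings described above:

    # Minimal sketch of the scoring key described above (assumed: 20 items,
    # each rated 1-4, giving a raw score of 20-80). Illustrative only.
    def interpret_anxiety_score(item_ratings):
        """item_ratings: list of 20 item scores, each in {1, 2, 3, 4}."""
        assert len(item_ratings) == 20 and all(1 <= r <= 4 for r in item_ratings)
        total = sum(item_ratings)          # raw score in the range 20-80
        if total <= 44:
            level = "Normal range"
        elif total <= 59:
            level = "Mild to moderate anxiety"
        elif total <= 74:
            level = "Marked to severe anxiety"
        else:
            level = "Extreme anxiety"
        return total, level

    # Example: a respondent answering "some of the time" (2) on every item.
    print(interpret_anxiety_score([2] * 20))   # (40, 'Normal range')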

Results and Discussion:
Table 1 shows the factors influencing anxiety among patients with epilepsy.

S. No.  Factors influencing anxiety  Mean    Standard Deviation
1.      Physical causes              80.2    62.5
2.      Psychological causes         5.92    13.32
3.      Financial causes             96.23   70.2
4.      Familial causes              6.2     16.5
5.      Sociological causes          80.45   70.6

The first objective of the study was to assess the factors influencing anxiety among epilepsy patients.
Among the factors influencing anxiety among epilepsy patients, the mean and standard deviation of physical causes were 80.2 and 62.5 respectively; of psychological causes, 5.92 and 13.32; of financial causes, 96.23 and 70.2; of familial causes, 6.2 and 16.5; and of sociological causes, 80.45 and 70.6. According to Goldstein and Harden (2010), epileptic events can produce symptoms indistinguishable from those of primary anxiety disorder. Symptoms of anxiety in epilepsy may result from, or be exacerbated by, psychological reactions, including responses to the unpredictability of seizures and restrictions of normal activities. This results in low self-esteem, stigmatization, and social rejection. Fear and anxiety are often associated with simple partial seizures.
Conclusion:
With regard to the association between the level of anxiety and the selected demographic variables, chi-square analysis revealed a significant association with employed patients (p = 0.00).
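For readers who wish to reproduce such an association test, the sketch below shows the method with scipy; the contingency counts used here are hypothetical, since only the resulting significance is reported above:

    # Illustrative chi-square test of association between employment status
    # and anxiety level. The 2x2 counts are hypothetical -- the study reports
    # only the resulting significance -- so this shows the method, not the data.
    from scipy.stats import chi2_contingency

    observed = [[18, 7],    # employed:   [anxious, not anxious]  (hypothetical)
                [12, 23]]   # unemployed: [anxious, not anxious]  (hypothetical)

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    # An association is declared significant when p < 0.05.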
Future scope:
Epilepsy has significant economic implications in terms of health-care needs, premature death and lost work productivity. An Indian study calculated that the total cost per epilepsy case was US$ 344 per year (or 88% of the average income per capita); the total cost for an estimated five million cases in India was equivalent to 0.5% of gross national product.
Although the social effects vary from country to country, the discrimination and social stigma that surround epilepsy worldwide are often more difficult to overcome than the seizures themselves. People with epilepsy can be targets of prejudice, and the stigma of the disorder can discourage people from seeking treatment for symptoms and becoming identified with the disorder. Hence, further studies should focus on the various aspects of epilepsy patients' wellbeing, which would indirectly improve the social wellbeing of the country too.



REFERENCES:
1. Perini G, Mendius R. Depression and anxiety in complex partial seizures. J Nerv Ment Dis 1984;172:287-90.
2. Altshuler LL, Devinsky O, Post RM, Theodore W. Depression, anxiety and temporal lobe epilepsy: laterality of focus and symptomatology. Arch Neurol 1990;47:284-8.
3. Vazquez B, Devinsky O, Luciano D, Alper K, Perrine K. Juvenile myoclonic epilepsy: clinical features and factors related to misdiagnosis. J Epilepsy 1993;6:233-8.
4. Cutting S, Lauchheimer A, Barr W, Devinsky O. Adult-onset idiopathic generalized epilepsy: clinical and behavioral features. Epilepsia 2001;42:1395-8.
5. Hermann BP, Seidenberg M, Bell B. Psychiatric comorbidity in chronic epilepsy: identification, consequences, and treatment of major depression. Epilepsia 2000;41(Suppl. 2):S31-41.
6. Kanner AM, Palac S. Depression in epilepsy: a common but often unrecognized comorbid malady. Epilepsy Behav 2000;1:37-51.
7. Kanner AM. Depression in epilepsy: prevalence, clinical semiology, pathogenic mechanisms, and treatment. Biol Psychiatry 2003;54:388-98.
8. Kanner AM, Barry JJ. The impact of mood disorders in neurological diseases: should neurologists be concerned? Epilepsy Behav 2003;4(Suppl. 3):S3-S13.
9. Gilliam F, Hecimovic H, Sheline Y. Psychiatric comorbidity, health, and function in epilepsy. Epilepsy Behav 2003;4(Suppl.):S26-30.
10. Marsh L, Rao V. Psychiatric complications in patients with epilepsy: a review. Epilepsy Res 2002;49:11-33.
11. Harden CL, Goldstein MA. Mood disorders in patients with epilepsy: epidemiology and management. CNS Drugs 2002;16:291-302.
12. Blum D, Reed M, Metz A. Prevalence of major affective disorders and manic/hypomanic symptoms in persons with epilepsy: a community survey. Presented at the American Academy of Neurology 54th Annual Meeting, Denver, CO, 13-20 April 2002.
13. Ettinger AB, Weisbrot DM, Nolan EE, et al. Symptoms of depression and anxiety in pediatric epilepsy patients. Epilepsia 1998;39:595-9.
14. Goldstein MA, Harden CL. Epilepsy and anxiety. Epilepsy Behav 2000;1:228-34.
15. Vazquez B, Devinsky O. Epilepsy and anxiety. Epilepsy Behav 2003;4(Suppl. 4):S20-5.
16. Currie S, Heathfield KW, Henson RA, Scott DF. Clinical course and prognosis of temporal lobe epilepsy: a survey of 666 patients. Brain 1971;94:173-90.
17. Swinkels WA, Kuyk J, de Graaf EH, van Dyck R, Spinhoven P. Prevalence of psychopathology in Dutch epilepsy inpatients: a comparative study. Epilepsy Behav 2001;2:441-7.
18. Jones JE, Hermann BP, Barry JJ, Gilliam F, Kanner AM, Meador KJ. Clinical assessment of Axis I psychiatric morbidity in chronic epilepsy: a multicenter investigation. J Neuropsychiatry Clin Neurosci 2005;17:172-9.
19. Choi-Kwon S, Chung C, Kim H, et al. Factors affecting the quality of life in patients with epilepsy in Seoul, South Korea. Acta Neurol Scand 2003;108:428-34.
20. Johnson EK, Jones JE, Seidenberg M, Hermann BP. The relative impact of anxiety, depression, and clinical seizure features on health-related quality of life in epilepsy. Epilepsia 2004;45:544-50.
21. Diagnostic and statistical manual of mental disorders, Fourth Edition (DSM-IV). Washington, DC: American Psychiatric Association; 1994.
22. Yates WR, Mitchell J, Rush AJ, et al. Clinical features of depressed outpatients with and without co-occurring general medical conditions in STAR*D. Gen Hosp Psychiatry 2004;26:421-9.
23. Pariente PD, Lepine JP, Lellouch J. Lifetime history of panic attacks and epilepsy: an association from a general population survey. J Clin Psychiatry 1991;52:88-9.
24. Piazzini A, Canevini MP, Maggiori G, Canger R. Depression and anxiety in patients with epilepsy. Epilepsy Behav 2001;2:481-9.
25. Issacs KL, Philbeck JW, Barr WB, Devinsky O, Alper K. Obsessive-compulsive symptoms in patients with temporal lobe epilepsy. Epilepsy Behav 2004;5:569-74.
26. Gaitatzis A, Carroll K, Majeed A, Sander JW. The epidemiology of the comorbidity of epilepsy in the general population. Epilepsia 2004;45:1613-22.
27. Noyes R. The relationship of hypochondriasis to anxiety disorders. Gen Hosp Psychiatry 1999;21:8-17.
28. Loring DW, Meador KJ, Lee GP. Determinants of quality of life in epilepsy. Epilepsy Behav 2004;5:976-80.
29. Mintzer S, Lopez F. Comorbidity of ictal fear and panic disorder. Epilepsy Behav 2002;3:330-7.
30. Kessler RC, McGonagle KA, Zhao S, et al. Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: results from the National Comorbidity Survey. Arch Gen Psychiatry 1994;51:8-19.















Application of Response Surface Method in Optimization of Impact Toughness of EN24 Steel
Rohit Pandey¹, Rahul Davis²
¹Research Scholar (M.Tech), Department of Mechanical Engineering, Shepherd School of Engineering and Technology
²Assistant Professor, Department of Mechanical Engineering, Shepherd School of Engineering and Technology
SHIATS, Allahabad, Uttar Pradesh, India
E-mail: rohit.pandey8118@gmail.com

Abstract: Impact testing methodology is finding applications in determining the impact strength of different materials. Toughness, one of the most important characteristics of structural steels, is assessed by the Charpy V-notch impact test. The objective of the research was to maximize the impact toughness by selecting various combinations of Charpy impact test parameters. In this paper, experiments were carried out to study the effect of thermal treatments (annealing, cryogenic treatment and tempering) on impact toughness. Cryogenic treatment (CT) is a supplementary process to the conventional heat treatment of steels, deep-freezing materials at cryogenic temperatures to enhance the mechanical and physical properties of the materials being treated. For this purpose, -196 °C was used as the deep cryogenic temperature. The effects of deep cryogenic temperature and cryogenic time (kept at cryogenic temperature for 36 hr) on the wear behavior of EN24 steel were studied. The findings showed that the cryogenic treatment decreases the retained austenite and hence improves the wear resistance and hardness. The process has various advantages, such as increase in hardness, increase in wear resistance, reduced residual stresses, fatigue resistance, increased dimensional stability, increased thermal conductivity and toughness, through transformation of retained austenite to martensite, the metallurgical aspects of eta-carbide formation, precipitation of ultra-fine carbides, and a homogeneous crystal structure. EN24 steel is generally used in the hardened and tempered condition to achieve an optimum combination of hardness and ductility.
In the present study, heat treatment of EN24 steel was performed, including annealing and tempering at high temperature. The specimens tempered at different temperatures (in the range 473-823 K) exhibited decreasing hardness with increase in tempering temperature. Response surface methodology was adopted in designing the experiments for three factors at three levels, giving 27 experimental runs.
Keywords: EN24 Steel, Impact Toughness, Thermal Treatment, Cryogenic Treatments, Hardness, Austenite, Martensite, Carbide Formation, Tempering, Wear Resistance.
INTRODUCTION
Engineering materials, mostly steels and their alloys, are heat treated to alter their mechanical and physical properties so as to meet engineering applications. Impact testing methodology is finding applications in determining the impact toughness of materials. The absorbed impact energy and the transition temperature defined at a given Charpy energy level are regarded as the common criteria for toughness assessment [1]. The Charpy impact testing process consists of striking the steel specimen with a hammer released from a given height, with a certain velocity, on the reverse side of the notch, so that the amount of energy required for the failure of the steel specimen can be determined [2].
Here, an EN24 steel specimen is used for the impact testing experiment, and the effect of different process parameters on the impact toughness of EN24 steel is determined in this paper [2].
During World War II, the US Army used many Liberty Ships, but a lot of them were damaged by brittle fracture. The term brittle fracture describes rapid propagation of cracks without any excessive plastic deformation, at a stress level below the yield stress of the material. The brittle fracture that occurred in the Liberty Ships was caused by the low notch toughness of the steel specimens at low temperature [3]. Steel experiences ductile fracture at high temperature and brittle fracture at low temperature; therefore steel shows a characteristic ductile-to-brittle transition. Brittle fracture usually occurs under conditions of low temperature and high loading rate, and cryogenic treatments are useful in these types of cases [3]. In recent research, scientists have shown a lot of confidence in and interest toward the deep cryogenic treatment of steels. According to the experimental results, martensitic transformation occurred after the deep cryogenic treatment [4]. Grain shape and size get refined and made uniform, defect elimination takes place, and the interatomic distance is reduced [5]. Cryogenic treatment is an extension of the conventional heat treatment process which converts austenite to martensite. The findings showed that the cryogenic treatment decreases the retained austenite and hence improves the wear resistance and hardness [6]. A thermal treatment of steel specimens of EN24 grade was done; heat treatment of the steel specimens consists of austenitizing (annealing) and tempering. Mechanical properties such as ductility, toughness, strength, hardness and tensile strength can easily be modified by heat treating the steel specimens to suit a particular design purpose [7]. Secondly, cryogenic treatment is done at low temperature in a jar known as a Cryocan, filled with liquefied nitrogen; it involves cooling the EN24 steel specimens to a very low temperature (-196 °C) for about 36 hours. Due to a more homogenized carbide distribution as well as the elimination of retained austenite, deep cryogenic treatment demonstrated more improvement in wear resistance and hardness compared with conventional heat treatment [8]. Cryogenic treatment improves hardness, the microstructure of the metal (retained austenite to martensite) and dimensional stability, and decreases residual stresses [17]. A comparative study on conventionally heat treated and cryogenically treated EN24 steel specimens has been presented, in which specimens were initially subjected to conventional heat treatment at an austenitizing temperature of 810 °C and then underwent deep cryogenic treatment at -195 °C for 24 hours [10].

RAW MATERIAL


CRYOGENIC TREATMENT TEMPERING AN ANNEALING CRYOGENIC TREATMENT TEMPERING



TEMPERING
MATERIAL AND METHODS
Design of Experiments is a methodology based on statistics and other disciplines for analyzing experimental data and obtaining valid conclusions with efficient and effective planning of experiments. Design of experiments is a series of tests in which purposeful changes are made to the input variables of a system or process and the effects on response variables are measured; it is applicable to both physical processes and computer simulation models. Experimental design is an effective tool for maximizing the amount of information gained from a study while minimizing the amount of data to be collected. An exact optimization can be determined by the Response Surface Method. Response Surface Methodology is based on experimental design, with the final goal of evaluating optimal functioning of industrial facilities using minimum experimental effort; the inputs are called factors or variables, and the outputs represent the response that the system generates under the causal action of the factors.
An Orthogonal Array is a statistical method of defining parameters that converts test areas into factors and levels. Test design using an orthogonal array creates an efficient and concise test suite with fewer test cases without compromising test coverage. The experiment carried out is based on the principle of the L27 Orthogonal Array (OA).

The control parameters considered for the proposed research work, for multiple performance characteristics at three different levels and for three different factors, are shown in Table 1 below:

Table no. 1: Different Factors and their Levels
Factors                        Level 1                              Level 2                                                      Level 3
Notch Angle (A)                30°                                  45°                                                          60°
Thermal Treatment (B)          Cooling followed by Tempering (CT)   Cooling followed by Cryogenic Treatment & Tempering (CCTT)   Cooling followed by Tempering & Cryogenic Treatment (CTCT)
Height of the Hammer (C), mm   1370                                 1570                                                         1755


In this paper the effect of thermal treatments is studied along with three impact test parameters to maximize the impact toughness of EN24 steel. The aim of the experiment is to find the optimum impact value by combining all three parameters: notch angle, thermal treatment and height of hammer. The chemical composition test of EN24 steel was performed in the Metal Testing Laboratory, Indian Railways, Bareilly (U.P.), India. The details of the composition are shown below:

Table no. 2: Chemical Composition of EN24 Steel
MATERIAL   CARBON %   SILICON %   NICKEL %   CHROMIUM %   MOLYBDENUM %
EN24       0.40       0.30        1.50       1.20         0.25

Experimental Details
Experiments were carried out using 27 specimens of EN24 steel, thermally treated in 3 stages, each stage containing 9 specimens. Specimens were heat treated in an electric furnace of size 150 mm × 150 mm × 300 mm with a resolution of 1 °C. The thermal treatment of the first 9 specimens followed the sequence: annealing followed by tempering. The next 9 specimens were thermally treated in the sequence: annealing followed by cryogenic treatment followed by tempering (LTT, MTT and HTT, where L.T.T. stands for Low Temperature Tempering, M.T.T. for Medium Temperature Tempering and H.T.T. for High Temperature Tempering). Cryogenic treatment is a process in which the EN24 steel specimens are kept at a very low temperature (-196 °C) in a bottle known as a Cryocan, which is filled with liquefied nitrogen. Finally, the remaining 9 specimens were thermally treated in the order: annealing followed by tempering followed by cryogenic treatment. Heat treatment can alter the mechanical properties of steel by changing the size and shape of the grains of which it is composed, or by changing its micro-constituents; it is applied to improve machinability, refine grain size, and increase resistance to wear and corrosion.
Annealing was carried out by heating the metal slowly to 810 °C. It is held at this temperature for sufficient time (about 1 hour) for all the material to transform into austenite, and then cooled slowly inside the furnace to room temperature. The resulting grain structure has coarse pearlite with ferrite or cementite; special electric furnaces are used in the annealing process. Tempering is the process of reheating the steel to predetermined temperatures lower than the transformation temperature to obtain different combinations of mechanical properties in steel. Tempering can also be defined as steady heating of martensitic steel at a temperature below the recrystallization phase, followed by a gradual cooling process. Tempering reduces residual stresses, increases ductility and toughness, and ensures dimensional stability. During tempering, martensite rejects carbon in the form of finely divided carbide phases; the end result of tempering is a fine dispersion of carbides in the α-iron matrix, which bears little structural resemblance to the original as-quenched martensite. Hence, the micro-stresses and hardness of all the samples are reduced after tempering.


Fig.1 Specimen filled in Cryocan for cryogenic treatment
Fig.2 Cryogenically treated specimens of EN24 Steel


Fig.3 Heat treatment of EN24 Steel in Electric furnace
Fig.4 Specimen after Cryogenic Treatment
Fig.5 EN24 Steel Specimens after Tempering
Fig.6 EN24 specimen taken out after Cryo-process
Charpy Impact Testing
The Charpy impact test continues to be used as an economical quality-control method to determine the notch sensitivity and impact toughness of engineering materials. The Charpy test measures the ability of a material to resist brittle fracture. The principle of the test differs from that of the Izod test in that the test piece is tested as a beam supported at each end; a notch is cut across the middle of one face, and the striker hits the opposite face directly behind the notch. It is widely applied in industry, since it is easy to prepare and conduct, and results can be obtained quickly and cheaply. The Charpy test samples have a size of 10 × 10 × 55 mm, with a 2 mm deep V-notch of 30°, 45° or 60°.
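For orientation, the quantity the test measures is the energy absorbed by the specimen, obtained from the pendulum's loss of potential energy between its drop height and its rise height after fracture. The sketch below illustrates this energy balance; the hammer mass and rise height are hypothetical, and only the drop heights (1370, 1570 and 1755 mm) come from this experiment:

    # Sketch of the energy balance behind the Charpy test described above:
    # absorbed energy = m * g * (drop height - rise height). The hammer mass
    # and post-fracture rise height used here are hypothetical; only the
    # drop heights come from the experiment.
    G = 9.81  # m/s^2

    def absorbed_energy(mass_kg, drop_height_m, rise_height_m):
        """Energy (J) absorbed by the specimen in a pendulum impact test."""
        return mass_kg * G * (drop_height_m - rise_height_m)

    for h_drop_mm in (1370, 1570, 1755):
        e = absorbed_energy(mass_kg=20.0,                    # hypothetical
                            drop_height_m=h_drop_mm / 1000,
                            rise_height_m=0.60)              # hypothetical
        print(f"drop {h_drop_mm} mm -> absorbed energy {e:.1f} J")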






Fig.7 Specimen for Charpy test


Fig.8 Charpy Impact Testing
Table no. 3: L27 Orthogonal Array for conducting Charpy Impact Testing

Sr. No.  Notch Angle (°)  Thermal Treatment                           Height of the Hammer (mm)  Impact Value (J)  SNRA1
1        30               Tempering                                   1370                       183               45.2490
2        30               Tempering                                   1570                       110               40.8279
3        30               Tempering                                   1755                       71                37.0252
4        30               Cryogenic Treatment followed by Tempering   1370                       178               45.0084
5        30               Cryogenic Treatment followed by Tempering   1570                       125               41.9382
6        30               Cryogenic Treatment followed by Tempering   1755                       69                36.7770
7        30               Tempering followed by Cryogenic Treatment   1370                       156               43.8625
8        30               Tempering followed by Cryogenic Treatment   1570                       100               40.0000
9        30               Tempering followed by Cryogenic Treatment   1755                       64                36.1236
10       45               Tempering                                   1370                       95                39.5545
11       45               Tempering                                   1570                       114               41.1381
12       45               Tempering                                   1755                       75                37.5012
13       45               Cryogenic Treatment followed by Tempering   1370                       154               43.7504
14       45               Cryogenic Treatment followed by Tempering   1570                       108               40.6685
15       45               Cryogenic Treatment followed by Tempering   1755                       79                37.9525
16       45               Tempering followed by Cryogenic Treatment   1370                       186               45.3903
17       45               Tempering followed by Cryogenic Treatment   1570                       106               40.5061
18       45               Tempering followed by Cryogenic Treatment   1755                       75                37.5012
19       60               Tempering                                   1370                       142               43.0458
20       60               Tempering                                   1570                       103               40.2567
21       60               Tempering                                   1755                       73                37.2665
22       60               Cryogenic Treatment followed by Tempering   1370                       162               44.1903
23       60               Cryogenic Treatment followed by Tempering   1570                       116               41.2892
24       60               Cryogenic Treatment followed by Tempering   1755                       76                37.6163
25       60               Tempering followed by Cryogenic Treatment   1370                       151               43.5795
26       60               Tempering followed by Cryogenic Treatment   1570                       55                34.8073
27       60               Tempering followed by Cryogenic Treatment   1755                       67                36.5215
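The SNRA1 column above can be reproduced with the larger-the-better Taguchi S/N ratio; the following minimal Python sketch (ours, not the authors') assumes one replicate per run, in which case the formula reduces to 20·log10(y):

    import math

    # Larger-the-better Taguchi S/N ratio, S/N = -10*log10(mean(1/y_i^2));
    # with a single replicate per run this reduces to 20*log10(y). A sketch
    # to reproduce the SNRA1 column of Table no. 3 from the impact values.
    def sn_larger_is_better(values):
        return -10 * math.log10(sum(1 / y**2 for y in values) / len(values))

    for impact in (183, 110, 71):          # first three runs of Table no. 3
        print(f"{impact} J -> S/N = {sn_larger_is_better([impact]):.4f} dB")
    # 183 J -> S/N = 45.2490 dB, matching the table.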
Experimental Results
Table no. 4: Response Table for Signal to Noise Ratios
Level   Notch Angle (°)   Thermal Treatment   Height of the Hammer (mm)
1       40.76             41.02               43.74
2       40.44             40.21               40.16
3       39.84             39.81               37.14
Delta   0.92              1.21                6.59
Rank    3                 2                   1

Table no. 5: Response Table for Means
Level   Notch Angle (°)   Thermal Treatment   Height of the Hammer (mm)
1       117.33            118.56              156.33
2       110.22            107.33              104.11
3       105.00            106.67              72.11
Delta   12.33             11.89               84.22
Rank    2                 3                   1
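As an illustration of how such response tables are built, the sketch below (our reconstruction, using the impact values of Table no. 3) averages the response at each factor level and takes Delta as the spread between level means; factors are then ranked by Delta. It is shown for the notch-angle column of Table no. 5:

    import pandas as pd

    # Average the response at each level of a factor, then Delta = max - min.
    # Data: the 27 impact values of Table no. 3, grouped by notch angle.
    runs = pd.DataFrame({
        "notch":  [30]*9 + [45]*9 + [60]*9,
        "impact": [183, 110, 71, 178, 125, 69, 156, 100, 64,
                   95, 114, 75, 154, 108, 79, 186, 106, 75,
                   142, 103, 73, 162, 116, 76, 151, 55, 67],
    })
    level_means = runs.groupby("notch")["impact"].mean()
    print(level_means)                     # 30: 117.33, 45: 110.22, 60: 105.00
    print("Delta =", round(level_means.max() - level_means.min(), 2))  # 12.33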
Table no. 6: Analysis of Variance (for Means)
Source DF Adj SS Adj MS F-Value P-Value
Notch Angle (degree) 2 689.9 344.9 0.87 0.435
Thermal Treatment 2 803.2 401.6 1.01 0.382
Height of the Hammer(mm) 2 32533.6 16266.8 40.96 0.000
Error 20 7942.7 397.1
Total 26 41969.4

Table no. 7: Analysis of Variance (for S/N Ratios)
Source DF Adj SS Adj MS F-Value p-Value
Notch Angle (degree) 2 3.890 1.945 0.75 0.484
Thermal Treatment 2 6.860 3.430 1.33 0.288
Height of the Hammer(mm) 2 196.134 98.067 37.91 0.000
Error 20 51.733 2.587
Total 26 258.617

According to Tables no. 4 and 5: these response tables show that the height of the hammer has the highest rank (1), so the height of the hammer is the main effective factor, affecting the impact values the most.
According to Tables no. 6 and 7: these ANOVA tables show that the p-value is below 0.05 only for the height of the hammer, so it is clear from the tables that the height of the hammer is the most significant and influential factor among the three.
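For reference, the main-effects ANOVA of Tables no. 6 and 7 can be reproduced along the following lines. This is a sketch using the statsmodels package rather than the authors' MINITAB session, shown here for the raw impact values; with 27 runs and three 3-level factors, each factor has 2 df and the error term 27 - 1 - 6 = 20 df, as in the tables:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # The 27 runs of Table no. 3: notch angle, treatment route, hammer height.
    notch  = [30]*9 + [45]*9 + [60]*9
    treat  = (["T"]*3 + ["CT-T"]*3 + ["T-CT"]*3) * 3
    height = [1370, 1570, 1755] * 9
    impact = [183, 110, 71, 178, 125, 69, 156, 100, 64,
              95, 114, 75, 154, 108, 79, 186, 106, 75,
              142, 103, 73, 162, 116, 76, 151, 55, 67]
    runs = pd.DataFrame(dict(notch=notch, treat=treat,
                             height=height, impact=impact))

    model = ols("impact ~ C(notch) + C(treat) + C(height)", data=runs).fit()
    print(sm.stats.anova_lm(model, typ=2))   # only height has p < 0.05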
Results and Discussions
Graphs Plotted on the basis of Experiments Performed:


Fig.9: Main effect plot for Means (Data Means)


According to Fig.9: at the first level of notch angle (30°), the first level of thermal treatment (cryogenic treatment followed by tempering) and the first level of height of hammer (1370 mm), the impact value was found to be maximum in the main effects plot for means (data means).

Fig.10: Main effect plot for SN ratios (Data Means)

According to Fig.10: at the first level of notch angle (30°), the first level of thermal treatment (cryogenic treatment followed by tempering) and the first level of height of hammer (1370 mm), the impact value was found to be maximum in the main effects plot for SN ratios (data means).




Fig.11: Contour plot of Impact value vs. Thermal treatment, Notch angle(degree)


Fig.12: Contour plot of Impact value vs Thermal treatment, Notch angle(degree)



Fig.13Contour plot of Impact Value vs. Thermal treatment, Notch angle (degree)



According to Fig.11: the contour plot is between thermal treatment and notch angle; the maximum region covered by the colour shows that the maximum impact values were found in the range of 150 to 160 J while the height of hammer is held at 1370 mm.

According to Fig.12: the contour plot is between thermal treatment and notch angle; the maximum region covered by the colour shows that the maximum impact values were found in the range of 105 to 110 J while the height of hammer is held at 1562.5 mm.

According to Fig.13: the contour plot is between thermal treatment and notch angle; the maximum region covered by the colour shows that the maximum impact values were found in the range of 71 to 76 J while the height of hammer is held at 1755 mm.


Fig.14: Surface plot of Impact value (J) vs. Thermal treatment, Notch angle (degree)
According to Fig.14: the surface plot of impact value (J) vs. thermal treatment and notch angle (degree) shows the 3D behaviour of the parameters.
Using the general steps involved in the Design of Experiments and the L27 Orthogonal Array (reference: Table 3), the obtained impact values have been analyzed analytically and graphically using the Response Surface Method with the application of MINITAB 17 software.
The combination of the optimum levels of the factors has been determined with the application of the S/N ratio for the impact values, yielding the parameter design, i.e. the combination of optimum levels of the factors. In the obtained parametric design, the optimum combination has the greatest influence on the impact toughness and the least variation from the design target.
Acknowledgment
I express my sincere gratitude to my advisor, Mr. Rahul Davis, Assistant Professor, Mechanical Engineering Department, SHIATS, Allahabad, for his valuable guidance, proper advice, painstaking effort and constant encouragement during the course of my work on this thesis.
I also feel very much obliged to Mr. James Peter, Head of the Department of Mechanical Engineering, for his encouragement and inspiration in the execution of the thesis work.
I am deeply indebted to my parents for their inspiration and ever-encouraging moral support, which enabled me to pursue my studies.
I am also very thankful to the entire faculty and staff members of the Mechanical Engineering Department for their direct and indirect help and cooperation.

Dated- 09/06/2014 Rohit Pandey

CONCLUSION

The present research work has successfully demonstrated the application of the Response Surface Method for the optimization of process parameters in impact testing of EN24 steel. The conclusions that can be drawn from the present work are as follows:

1. The height of the hammer obtained the highest rank (1) for the experimental process, as shown in response tables no. 4 and 5.
2. It is also observed through ANOVA that the height of the hammer is the most influential factor among the three process parameters investigated in the present work.
3. The order of importance of the controllable factors is: height of hammer, followed by notch angle and thermal treatment.


REFERENCES:
[1] Chen, M.-Y., Linkens, D.A. Computational Intelligence Methods and Applications, 2005 ICSC Congress, Print ISBN: 1-4244-0020-1, DOI: 10.1109/CIMA.2005.1662335.
[2] Narinder Pal Singh, V.K. Singla. "Experimental Study and Parametric Design of Impact Test Methodology", June 2009.
[3] Kobayashi, Hideo, Onoue, Hisahiro. "Brittle Fracture of Liberty Ships", March 1943.
[4] D.N. Collins. "Deep Cryogenic Treatment of Tool Steels: a Review", Heat Treatment of Metals, 1996.2, pp. 40-42.
[5] P.I. Patil, R.G. Tated. "Comparison of Effects of Cryogenic Treatment on Different Types of Steels: A Review", July 2014.
[6] R.H. Naravade, S.B. Belkar, R.R. Kharde. "Effects of Cryogenic Treatment, Hardening and Multiple Tempering on Wear Behavior of D6 Tool Steel", IJES, 01-15, 2013.
[7] Rajesh K. Khatirkar, Prashant Yadav, Sanjay G. Sapate. "Structural and Wear Characterization of Heat Treated En24 Steel", ISIJ International, March 2012.
[8] P. Sekhar Babu, P. Rajendran, Dr. K.N. Rao. "Cryogenic Treatment of M1, EN19 and H13 Tool Steels to Improve Wear Resistance", IE(I) Journal-MM, October 2005.
[9] Barron, R.F. "Cryogenic treatment of metals to improve wear resistance", Cryogenics 22, 1982, 409-414.
[10] A. Akhbarizadeh, A. Shafyei, M.A. Golozar. "Effects of cryogenic treatment on wear behavior of D6 tool steel", 2009.
[11] Cord Henrik Surberg, Paul Stratton, Klaus Lingenhole. "The effect of some heat treatment parameters on the dimensional stability of AISI D2", Cryogenics, 48 (2008).
[12] Kalpakjian, S. Manufacturing Processes for Engineering Materials, 1985.
[13] Debdulal Das, Rajdeep Sarkar, Apurba Kishore Dutta, Kalyan Kumar Ray. "Influence of sub-zero treatments on fracture toughness of AISI D2 Steels", 2010.
[14] Collins, D.N. "Cryogenic treatment of tool steels", Adv. Mater. Process, 1998.
[15] Jha, A.R. Cryogenic Technology and Applications, 2006.
[16] V. Firouzdor, E. Nejati, F. Khomamizadeh. "Effect of Deep Cryogenic Treatment on Wear Resistance and Tool Life of M2 HSS Drill", 2008.
[17] D. Das, A.K. Dutta, K.K. Ray. "Correlation of Microstructure with Wear Behavior of Deep Cryogenically Treated AISI D2 Steel", 2009.
[18] Reitz, W., Pendray, J. "Cryoprocessing of materials: A review of current status", Materials and Manufacturing Processes, 2001.
[19] Y. Sahin, M. Erdogan, M. Cerah. Wear, 265 (2008).
[20] D. Das, K.K. Ray, A.K. Dutta. "Influence of Temperature of Sub-Zero Treatments on the Wear Behaviour of Die Steel", 2009.












Effect of Humidity on the Efficiency of Solar Cell (photovoltaic)
Manoj Kumar Panjwani¹, Dr. Ghous Bukshsh Narejo¹
¹Department of Electronic Engineering, NEDUET, Pakistan
E-mail: manoj_panjwani@hotmail.com

Abstract: Of the energy coming from the Sun, approximately 30% is either reflected back or absorbed by clouds, oceans and land masses. In cities where the humidity is high, like Karachi, Mumbai, Malaga, Hamburg and Los Angeles, where the average humidity ranges from 40-78%, a thin layer of water vapor forms on the front of the solar cell directly facing the Sun. The solar energy which actually strikes the solar cell is then subject to absorption/reflection losses: approximately 15-30% of the energy is lost, in addition to the 30% atmospheric loss. Our experimental analysis found that humidity brings the utilization of solar energy down from approximately 70% to approximately 55-60%.
Keywords: Solar energy, humidity factor, absorption, effect, reflection, efficiency, approximation.
Introduction
Regarding the energy received from the Sun: the Earth receives approximately 1413 W/m² at the top of the atmosphere, while the actual insolation recorded at ground level is approximately 1050 W/m², as recorded by the Pacific Northwest Forest and Range Experiment Station, Forest Service, U.S. Department of Agriculture, Portland, Oregon, USA in 1972. As these facts show, approximately 30% of the energy is lost in between: the sunlight intensity at the top of the Earth's atmosphere is about 30% higher than that actually received on the ground. The solar panels we use today therefore work with roughly 70% of the energy coming from the Sun to fulfill our energy needs. [1-3]
Since about 70% of the Earth's surface is covered by water, the energy which strikes the Earth largely strikes water/oceans, which raises the overall humidity level. Humidity does not only reduce the energy actually received below the top-of-atmosphere value; it also affects device performance in several ways. [4-5]
The aspect we covered is the effect of humidity on solar panels, which causes drastic variation in the power generated, indirectly making the device work less efficiently than it would otherwise. In cities where the humidity level is above the average range of 30%, a thin layer of water actually forms on top of the solar panel, which decreases the efficiency.
When light consisting of energy/photons strikes the denser water layer, refraction occurs, decreasing the intensity of the light; this appears to be the root cause of the decrease in efficiency. Additionally, a reflection component also appears at the surface, so the striking light is subject to further losses; the experiments conducted showed approximately a 30% loss of the total energy, which is then not available for utilization by the solar panel. [6]
As far as the efficiency of the solar cell is concerned, efficiency is defined as the fraction of the incident light that can be converted into a usable form of electricity. Because the efficiency depends upon the maximum power point of the solar cell, the above humidity effect shifts the maximum power point, and that indirectly decreases the solar cell efficiency. [7-8]
Interesting facts provide surprising figures about the population in coastal areas around the globe. According to the National Oceanic and Atmospheric Administration, USA, about 52% of the population of the USA lives in coastal counties (Los Angeles, Texas, California, etc.). [9]
Among the top world users of solar energy, Germany (9785 MW), Spain (3386 MW), Japan (2633 MW) and the USA (1650 MW) dominate, and the coastal humidity in their cities ranges as follows: Hamburg (Germany) 50-70%, Malaga (Spain) 65-80%, Tokyo (Japan) 45-65% and Los Angeles (USA) 70-95%. [10]

The usable output of the solar panel is readily affected by humidity, and the output values change as the humidity changes.
Experiment and Analysis:
Various experiments were conducted. The test bench included a 50 W BP solar panel with specifications Vmp = 17.3 V and Imp = 2.9 A, temperature coefficient of Isc = (0.065 ± 0.015) %/°C, temperature coefficient of Voc = -(80 ± 10) mV/°C and temperature coefficient of power = -(0.5 ± 0.05) %/°C, together with a 1000 W tungsten halogen bulb, 2 humidifiers, a hygrometer, a thermometer, tungsten filament bulbs (15, 20, 25 W) as the output load, and 2 multimeters.
Readings were first taken at the normal temperature in Karachi, which was 32 °C (305 K), and a humidity of 25%. The humidifiers were used to increase the humidity level of the area where the solar panel was connected to the load; the panel was subjected to a constant intensity using the tungsten halogen bulb, with the distance kept at 2 feet. The readings were noted and the humidity was carefully measured with the hygrometer.
The results showed a drastic change in the readings as the humidity was gradually increased. The readings are given in the chart below.
Temperature (K)   Humidity (%)   Voltage (V DC)   Current (A DC)   Power (W)
305               25             17.10            2.78             47.538
305               30             16.72            2.63             43.973
305               35             16.53            2.42             40.002
305               40             16.45            2.30             37.605
305               45             16.41            2.14             35.117
305               50             16.33            2.04             33.313
305               55             16.32            1.88             30.681
Table 1: Humidity vs. voltage, current and power readings taken through the experimental set-up as discussed.
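To quantify the downward trend that the following graphs display, a simple least-squares line can be fitted to the Table 1 readings; the sketch below is our illustration, not part of the original analysis:

    import numpy as np

    # A quick least-squares fit of output power against relative humidity
    # using the readings of Table 1, to quantify the downward trend.
    humidity = np.array([25, 30, 35, 40, 45, 50, 55])          # %
    power    = np.array([47.538, 43.973, 40.002, 37.605,
                         35.117, 33.313, 30.681])              # W

    slope, intercept = np.polyfit(humidity, power, 1)
    print(f"power ~ {slope:.3f} W per %RH + {intercept:.2f} W")
    # Roughly 0.55 W of output is lost for every 1% rise in humidity here.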
Below are the graphs showing the relation of humidity to voltage, current and power.


Fig1. Graph between Humidity and Voltage. Humidity appears as X axis and Voltage appears at Y axis

Fig2. Graph between Humidity and Current. Humidity appears as X axis and Current appears at Y axis


Fig3. Graph between Humidity and Power. Humidity appears as X axis and Power appears at Y axis

The results obtained so far clearly show that the humidity level does affect the working of the solar panel and can drag down the panel's efficiency when it is installed in cities where the normal humidity level is higher.

Percent reduction in power = (P(without humidity) - P(with humidity)) / P(without humidity) × 100

1st (humidity increased by 5%):  ((47.538 - 43.973)/47.538) × 100 = 7.499% approx.
2nd (humidity increased by 10%): ((47.538 - 40.002)/47.538) × 100 = 15.85% approx.
3rd (humidity increased by 15%): ((47.538 - 37.605)/47.538) × 100 = 20.89% approx.
4th (humidity increased by 20%): ((47.538 - 35.117)/47.538) × 100 = 26% approx.
5th (humidity increased by 25%): ((47.538 - 33.313)/47.538) × 100 = 29.92% approx.
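The same calculation, expressed as a small Python helper (illustrative only):

    # Direct implementation of the percent-reduction formula used above.
    def percent_power_reduction(p_ref_watts, p_humid_watts):
        """Reduction in output power relative to the 25% humidity baseline."""
        return (p_ref_watts - p_humid_watts) / p_ref_watts * 100

    baseline = 47.538  # W at 25% humidity (Table 1)
    for p in (43.973, 40.002, 37.605, 35.117, 33.313):
        print(f"{percent_power_reduction(baseline, p):.2f}%")
    # Prints 7.50, 15.85, 20.89, 26.13, 29.92 -- matching the values above.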

Acknowledgment
I would like to thank Dr. Lachhman Das Dhomeja, Professor at the Institute of Information and Communication Technology, University of Sindh; Indra Devi Sewani, PhD student at Sindh University, Jamshoro; and Radha Mohanlal, Lab Engineer at IOBM, for being
supportive and informative in my goals, and for their unconditional help, without which this research submission would never have been possible.

CONCLUSION
The experiments conducted show that humidity drastically affects the performance of the solar panel and decreases the power produced by up to 15-30% in environments where the humidity level remains high.

Future Prospects and Suggestions:
Having observed such a drastic change with humidity level, solar panels, especially in Pakistan, should be designed in such a way that humidity has less effect on their power ratings.

REFERENCES:
"Chapter 8 Measurement of sunshine duration" (PDF). CIMO Guide. World Meteorological Organization
Natural Forcing of the Climate System". Intergovernmental Panel on Climate Change. Retrieved 2007-9-29.Radiation Budget".
NASA Langley Research Center. 2006-10-17.
"Introduction to Solar Radiation". Newport Corporation. Archived from the original on Oct. 29, 2013.
"CIA - The world fact book". Central Intelligence Agency.
Somerville, Richard. "Historical Overview of Climate Change Science" (PDF). Intergovernmental Panel on Climate Change.
Dill, Lawrence M. "Refraction and the spitting behavior of the archerfish (Toxotes chateaus)." Behavioral Ecology and Sociobiology
2.2 (1977): 169-184.
"Photovoltaic Cell Conversion Efficiency". U.S. Department of Energy.
Mustafa, G. ; Khan F, An efficient method of Solar Panel Energy Measurement System, Dhaka 2009
nancyschuelke.hubpages, Top-5-Users-of-Solar-Energy-In-the-World.
18997-population-coastal-areas-infographic.html, .live science.
M.C. Alonso-Garcia, J.M.Ruiz. Experimental study of mismatch and shading effects in the I-V characteristic of a photovoltaic
module. Solar Energy Materials & Solar cells, 2006,
Ravi Prakash Tiwari, Rajesh M, K. Sudhakar Energy and energy analysis of solar photovoltaic system, 2012 Bhopal
Ralph, E.L., Linder, E.B. Advanced solar panel designs, Washington, DC 2006.
Zeller, P., Libati, H.M.Utilization of solar energy for electrical power supply in rural African areas, Nairobi 2009
Design and proper sizing of solar energy schemes for electricity production in Malaysia, 2003



DGS Technique for Parameter Enhancement of MSA
SunilKumar Vats¹, Hitanshu Saluja²
¹,²Electronics & Communication Department, Maharshi Dayanand University
¹vatss90@gmail.com, +91-9999321192; ²hitanshuu@gmail.com, +91-9050272255
¹Student, ²Asstt. Prof., School of PG Engineering (A Unit of Ganga Technical Campus)
Bahadurgarh-Badli Road, Village Soldha, Bahadurgarh - 124507 (Hry.), INDIA

ABSTRACT
In this paper, the parameters of a micro strip patch antenna (MSA) fed by the coaxial feed technique are enhanced using the DGS technique. The results of different DGS shapes are compared with those of the original MSA using HFSS (High Frequency Structure Simulator), a commercially available electromagnetic simulator based on the finite element method and adaptive meshing, to achieve the desired specification.

Keywords: HFSS, DGS, MSA and Co-axial feed

1. INTRODUCTION
A micro strip patch antenna consists of a radiating patch on one side of a dielectric substrate, with a ground plane on the other side, as shown in Figure 1. The patch is generally made of a conducting material such as copper or gold. The radiating patch and the feed lines are usually photo-etched on the dielectric substrate. Micro strip patch antennas radiate primarily because of the fringing fields between the patch edge and the ground plane. Micro strip antennas are characterized by a larger number of physical parameters than conventional microwave antennas. They can be designed with many geometrical shapes and dimensions, but rectangular and circular micro strip resonant patches have been used extensively in many applications. In this paper, the design of a probe-fed rectangular micro strip antenna for satellite applications is presented, expected to operate within the 2 GHz - 2.5 GHz frequency span. This antenna is designed on double-sided Fibre Reinforced (FR-4) epoxy, and its performance characteristics, which include return loss, VSWR and input impedance, are obtained from the simulation results; these parameters of the MSA are then enhanced using the DGS technique.










FIGURE 1: Micro strip patch antenna
1.1 Overview of DGS technique
DGS is an etched periodic or non-periodic cascaded configuration defect in the ground plane of a planar transmission line (e.g., micro strip, coplanar, and conductor-backed coplanar waveguide) which disturbs the shield current distribution in the ground plane because of the defect. This disturbance changes the characteristics of the transmission line, such as its line capacitance and inductance: any defect etched in the ground plane of the micro strip can give rise to increased effective capacitance and inductance. Many DGS shapes are available, but in this paper we use the dumbbell shape shown in Figure 2.











FIGURE 2: Dumbbell shape DGS



1.2 ANTENNA DESIGN
Figures 3(a) and 3(b) show the geometry of the proposed coaxial-fed micro strip patch antenna with single-band operation for WLAN application. The antenna is excited by a coaxial feed line designed for a 50 ohm characteristic impedance and is printed on a substrate with a thickness of 1.6 mm, relative permittivity 4.4 and loss tangent 0.0009. The dimensions of the proposed antenna are given below:












Table 1 Dimensions of the Patch Antenna Design for 2.25 GHz frequency
Variable                      Value
Width of the patch (Wp)       40.57 mm
Length of the patch (Lp)      31.43 mm
Height of the patch (Hp)      1.6 mm
Width of the ground (Wg)      50.32 mm
Length of the ground (Lg)     41.19 mm
Feed inner core radius (r1)   0.3 mm
Feed outer core radius (r2)   0.68 mm
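For context, the patch dimensions in Table 1 are consistent with the standard transmission-line-model design equations for a rectangular patch found in common antenna texts; the sketch below is our illustration rather than part of the paper, reproducing Wp and Lp for f = 2.25 GHz, relative permittivity 4.4 and substrate height 1.6 mm:

    import math

    # Standard rectangular-patch design equations (transmission-line model),
    # reproducing the Table 1 dimensions for f = 2.25 GHz, er = 4.4, h = 1.6 mm.
    c, f, er, h = 3e8, 2.25e9, 4.4, 1.6e-3

    W = c / (2 * f) * math.sqrt(2 / (er + 1))                   # patch width
    e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
    dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / \
         ((e_eff - 0.258) * (W / h + 0.8))                      # fringing extension
    L = c / (2 * f * math.sqrt(e_eff)) - 2 * dL                 # patch length

    print(f"W = {W*1e3:.2f} mm, L = {L*1e3:.2f} mm")            # ~40.57 mm, ~31.44 mm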

The proposed antenna is fed by coaxial cable with a characteristic impedance of 50 ohm. So, the outer conductor (from bottom of
ground to top of ground) is made of substrate material and inner conductor (from bottom of ground to top of patch) is made of PEC
material. The feed point for the proposed antenna is found to be (30.9, 16.66) where the best impedance matching of 46.48 ohm has
been achieved which is very much close to 50 ohm. This has been done by applying parametric sweep for locating the feed point in
the full range of x-axis in the window of transient solver. Proper impedance matching always yields the best desired result.
The locations and dimensions of the DGS shapes for the proposed antenna design are given in the table below.












Table 2 Dimensions of the MSA Antenna Resonating at 2.25 GHz with the DGS technique
Variable                                            Value
Width of the patch (Wp)                             40.57 mm
Length of the patch (Lp)                            31.43 mm
Height of the patch (Hp)                            1.6 mm
Width of the ground (Wg)                            50.32 mm
Length of the ground (Lg)                           41.19 mm
Feed inner core radius (r1)                         0.3 mm
Feed outer core radius (r2)                         0.68 mm
Dimensions of dumbbell-shaped slot on the ground    0.2 mm, 3 mm and 3 mm, 7 mm

















FIGURE 3(a) Designed Structure of MSA on HFSS resonating at 2.25 GHz















FIGURE 3(b) Simulated Design of MSA with Dumbbell shape DGS


2. SIMULATIONS AND RESULTS
The simulation results of the designed antenna for various parameters like return loss, impedance, directivity and VSWR are obtained using HFSS.















Figure 4 Simulated Return Loss of MSA Resonating at 2.25 GHz

The designed antenna shows a good return loss of approximately -24 dB, which is an excellent result. The antenna resonates at 2.25 GHz, which is applicable for Wireless Local Area Network (WLAN standards: 2.2-2.483 GHz for IEEE 802.11 b/g) applications, and gives a bandwidth of approximately 90 MHz; the bandwidth is calculated by subtracting the lower frequency from the upper frequency at -10 dB. The proposed antenna design gives a good impedance of approximately 45 ohms, which shows that the antenna is well matched and the power loss is minimal. The result of the designed antenna is given below.















Figure 5 Simulated impedance of MSA Resonating at 2.25 GHz


The designed antenna shows a VSWR of 1.1421 at the 2.25 GHz frequency.

Figure 6 Simulated VSWR of MSA at 2.25 GHz

The results of the above designed antenna are summarized in the following table.








Table 3 Summary of Parameter Values of Designed Antenna at 2.25 GHz frequency
Parameters            Values
Operating frequency   2.25 GHz
Return loss           -23.5 dB
Impedance             45 ohm
VSWR                  1.1421
Bandwidth             90 MHz

2.1 Simulation Result for dumbbell shape
The antenna with the dumbbell DGS gives a return loss of -28.09 dB and a bandwidth of 90 MHz at 2.25 GHz. The simulated results of the proposed antenna are given below.


Figure 7 Simulated Return Loss of MSA Resonating at 2.25 GHz with dumbbell DGS

The designed antenna gives an impedance of around 47 ohms, which is close to 50 ohms and therefore acceptable. The simulated impedance result is given below.

Figure 8 Simulated impedance of MSA Resonating at 2.25 GHz with dumbbell DGS


The simulated antenna gives a VSWR of 1.08 at 2.25 GHz. The simulation result for VSWR at the 2.25 GHz resonant frequency is given below.

Figure 9 Simulated VSWR of MSA Resonating at 2.25 GHz with dumbbell DGS

The results of the above designed antenna are summarized in the following table








Table 4 Summary of Parameter Values of Designed Antenna with dumbbell DGS at 2.25 GHz frequency
Parameters            Values
Operating frequency   2.25 GHz
Return loss           -28.09 dB
Impedance             46.85 ohm
VSWR                  1.08
Bandwidth             90 MHz



3. CONCLUSION
In this paper, we presented the design of a rectangular patch antenna covering the 2 GHz - 2.5 GHz frequency spectrum. It has been shown that this rectangular patch antenna design produces a bandwidth of approximately 4% with a stable radiation pattern within the frequency range. The designed antenna exhibits good impedance matching, approximately 50 Ω at the centre frequency. The parameters of the antenna are enhanced by the DGS technique used above; a comparison table between the plain MSA and the antenna with DGS is shown below, which indicates that DGS improves the overall efficiency of the MSA.
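The quoted ~4% figure is simply the 90 MHz simulated bandwidth expressed as a fraction of the 2.25 GHz centre frequency, as the short illustrative check below confirms:

```python
# Fractional bandwidth check for the figures quoted above.
bw_hz, f0_hz = 90e6, 2.25e9
print(f"fractional bandwidth = {100 * bw_hz / f0_hz:.1f} %")  # -> 4.0 %
```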

















Table 5 Comparison of Parameter Values of different Antennas at 2.25 GHz frequency

Parameters            MSA without DGS   MSA with Dumbbell DGS
Operating frequency   2.25 GHz          2.25 GHz
Return loss (dB)      -23.5             -28.09
Impedance (Ω)         45                46.85
VSWR                  1.1421            1.08
Bandwidth (MHz)       90                90

The table above shows that the antenna is acceptable for various applications. The return loss and VSWR are lowest with the dumbbell-shaped DGS technique, and the line impedance is higher than that of the other antenna, which is very good for communication.

ACKNOWLEDGEMENT
The author would like to thank Mr. Hitanshu Saluja, Asst. Prof. at School of PG Engineering (A Unit of Ganga Technical Campus) (hitanshuu@gmail.com) for his support in this work.





Design of a High Speed FIR Filter on FPGA by Using DA-OBC Algorithm
Vijay Kumar Ch1, Leelakrishna Muthyala1, Chitra E2
1Research Scholar, VLSI, SRM University, Tamilnadu, India
2Assistant Professor, ECE, SRM University, Tamilnadu, India
E-mail- leelakrishna424@gmail.com
Abstract: The main objective of this project is to implement an FIR filter on an FPGA using the Distributed Arithmetic-Offset Binary Coding (DA-OBC) reduction technique. Digital filtering algorithms are most commonly implemented using general-purpose digital signal processing chips for audio applications, or special-purpose digital filtering chips and application-specific integrated circuits (ASICs) for higher rates. This paper describes an approach to the implementation of digital filter algorithms based on field programmable gate arrays (FPGAs). Implementing a hardware design in FPGAs is a formidable task, and there is more than one way to implement a digital FIR filter. Based on the design specification, careful choice of the implementation method and tools can save a lot of time and work. MATLAB is an excellent tool for designing filters, and toolboxes are available to generate VHDL descriptions of filters, which dramatically reduce the time required to generate a solution; the time saved can be spent evaluating different implementation alternatives. Computation algorithms are required that exploit the FPGA architecture to make the design efficient in terms of speed and/or area. By using this algorithm, the memory size can be reduced and the speed of operation increased. In this project, the FIR filter is simulated for an FPGA device in VHDL using ModelSim, with a complementary simulation in MATLAB.
Key Words: DA-OBC algorithm
1. INTRODUCTION
The most common approaches to the implementation of digital filtering algorithms are general-purpose digital signal processing chips for audio applications, or special-purpose digital filtering chips and application-specific integrated circuits (ASICs) for higher rates [1]. This project describes an approach to the implementation of digital filter algorithms on field programmable gate arrays (FPGAs). Digital filters are basic units in many digital signal processing systems, and finite-impulse-response (FIR) filters are basic processing elements in applications such as video and audio signal processing. There are at present two kinds of realization methods for an FIR filter: hardware implementation using chips such as digital signal processors (DSPs), Application-Specific Integrated Circuits (ASICs) and field programmable gate arrays (FPGAs), and software implementation using a high-level language such as C/C++ or MATLAB. Implementing a hardware design in FPGAs is a formidable task [2]. There is more than one way to implement the digital FIR filter; based on the design specification, careful choice of the implementation method and tools can save a lot of time and work. MATLAB is an excellent tool for designing filters, with toolboxes available to generate VHDL descriptions of the filters, which dramatically reduce the time required to generate a solution [3], leaving time to evaluate different implementation alternatives. Proper choice of the computation algorithms can help exploit the FPGA architecture to make the design efficient in terms of speed and/or area. The design of the multiplication and accumulation (MAC) operation is the core of FIR filter implementation. MAC methods can generally be classified into two categories. One is the direct-multiply structure, which is expensive in hardware because of logic complexity and area usage. The other is Distributed Arithmetic (DA) [1], which converts the MAC calculation into a series of look-up-table accesses and summations. This solution is especially suited to LUT-based FPGA architectures and can greatly improve the speed of operation. Distributed arithmetic is commonly used for signal processing algorithms where computing the inner product of two vectors comprises most of the computational workload. This computing profile describes a large portion of signal processing algorithms, so the potential usage of distributed arithmetic is tremendous.
The inner product is commonly computed using multipliers and adders. When computed sequentially, the multiplication of two B-bit numbers requires from B/2 to B additions and is time intensive. Alternatively, the multiplication can be computed in parallel using B/2 to B adders, but this is area intensive. Whether a K-tap filter is computed serially or in parallel, it requires at least B/2 additions per multiplication plus the (K - 1) additions for summing the products together. In the best-case scenario, K(B + 2)/2 - 1 additions are needed for a K-tap filter using multipliers and adders. A competitive alternative to using a multiplier is distributed arithmetic [4]. It compresses the computation of a K-tap filter from K multiplications and K - 1 additions into a memory table and generates a result in B bit-times using B - 1 additions. DA significantly reduces the number of additions needed for filtering [1]. This
reduction is particularly noticeable for filters with high bit precision, and is a result of storing the precomputed partial sums of the filter coefficients in the memory table. Compared with other alternatives, distributed arithmetic requires fewer arithmetic computing resources and no multipliers. This makes distributed arithmetic favorable for computing environments with limited computational resources, especially multipliers [3], such as older and low-end, low-cost field-programmable gate arrays (FPGAs). By using distributed arithmetic, these types of devices can be used for low-latency, area-constrained, high-order filters; implementing such a filter using a multiplier-based approach would be difficult.
Digital Filters
Digital filters are used extensively in all areas of the electronics industry because they can attain much better signal-to-noise ratios than analog filters: where an analog filter adds more noise to the signal at each intermediate stage, a digital filter performs noiseless mathematical operations at each intermediate step of the transform [5]. Digital filters have emerged as a strong option for removing noise, shaping spectra, and minimizing inter-symbol interference in communication architectures. They have become popular because their precise reproducibility allows design engineers to achieve performance levels that are difficult to obtain with analog filters. FIR and IIR filters are the two common filter forms. A drawback of IIR filters is that closed-form IIR designs are primarily limited to low pass, band pass, and high pass filters, etc. Furthermore, these designs generally disregard the phase response of the filter. For example, with a relatively simple computational procedure we may obtain excellent amplitude response characteristics with an elliptic low pass filter while the phase response will be very nonlinear. In designing filters and other signal-processing systems that pass some portion of the frequency band undistorted, it is desirable to have approximately constant frequency-response magnitude and zero phase in that band [2]. For causal systems, zero phase is not attainable, and consequently some phase distortion must be allowed. The effect of linear phase with integer slope is a simple time shift, whereas a nonlinear phase can have a major effect on the shape of a signal even when the frequency-response magnitude is constant. Thus, in many situations it is particularly desirable to design systems with exactly or approximately linear phase.
Compared to IIR filters, FIR filters can have precisely linear phase. On the other hand, closed-form design equations do not exist for FIR filters. While the window method can be applied in a straightforward manner, some iteration may be necessary to meet a prescribed specification. The window method and most algorithmic methods afford the possibility of approximating more arbitrary frequency-response characteristics with little more difficulty than is encountered in the design of low pass filters [4]. The design problem for FIR filters also appears much more tractable than the IIR design problem because there is an optimality theorem for FIR filters that is meaningful in a wide range of practical situations. The magnitude and phase plots provide an estimate of how the filter will perform; however, to determine the true response, the filter must be simulated in a system model using either calculated or recorded input data. The creation and analysis of representative data can be a complex task. Most filter algorithms require multiplication and addition in real time; the unit carrying out this function is called a MAC (multiply-accumulate), and the better the MAC, the better the performance that can be obtained. Once a correct filter response has been determined and a coefficient table has been generated, the second step is to design the hardware architecture. The hardware designer must trade off area, performance, quantization, architecture, and response.
Design and implementation of digital FIR filter
MATLAB combines a high-level mathematical language with an extensive set of pre-defined functions to assist in the creation and analysis of filter data. Toolboxes are available for designing the filter response and generating coefficient tables, each with varying levels of sophistication. Graphical filter design tools provide selections for specifying pass band, filter order, and design methods, as well as plots of the response of the filter to various standard forms of inputs [5]. The FDA tool from The MathWorks can generate a behavioral model and coefficient tables. Once a correct filter response has been determined and a hardware architecture has been defined, the implementation can be carried out. Three choices of technology exist for the implementation of filter algorithms: programmable DSP chips, ASICs and FPGAs. At the heart of the filter algorithm is the multiply-accumulate operation. Programmable DSP chips typically have only one MAC unit that can perform one MAC in less than a clock cycle. DSP processors are flexible, but they might not be fast enough, because a general-purpose DSP processor has an architecture that constantly requires instructions to be fetched, decoded and executed. ASICs can have multiple dedicated MACs
that perform DSP functions in parallel, but they have a high cost for low-volume production, and the inability to make design modifications after production makes them less attractive. FPGAs have been praised for their ability to implement filters since the introduction of DSP-savvy architectures, on which filters can be efficiently realized using dedicated DSP resources. More than 500 dedicated multiply-accumulate blocks are now available, making these devices exceptionally well suited for high-performance, high-order filtering applications that benefit from a parallel, non-resource-shared hardware architecture. In this particular project, an FPGA has been chosen as the implementation target. To program the FPGA, a hardware description language is needed; VHDL synthesis offers an easy way to target a model towards different implementations.
Architecture of FIR Filter by Using DA-OBC coding unit

FIR Filter Using shift register
One of the most fundamental elements of a DSP system is an FIR filter.
Impulse Response - the set of FIR coefficients, which represents the filter's response across all frequencies.
Tap - a coefficient/delay pair. The number of FIR taps is an indication of the amount of memory required to implement the filter.
Due to the intensive use of FIR filters in video and communication systems [3], high performance in speed, area and power consumption is demanded. Basically, digital filters are used to modify the characteristics of signals in the time and frequency domains and have been recognized as primary digital signal processing operations. They are typically implemented as multiply-and-accumulate (MAC) algorithms with the use of special DSP devices; the MAC is implemented with N multiplications and (N - 1) additions per sample to compute each result, as sketched below.

FIR Filter Using shift register
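As a reference point for the DA approach discussed next, here is a minimal software sketch of the shift-register MAC structure just described (illustrative Python with hypothetical names; the paper's implementation is in VHDL):

```python
# Direct-form FIR: shift register plus N multiplications and N-1 additions
# per output sample (a software sketch of the structure described above).
from collections import deque

def fir_direct(samples, coeffs):
    taps = deque([0] * len(coeffs), maxlen=len(coeffs))  # shift register
    out = []
    for x in samples:
        taps.appendleft(x)                               # shift in new sample
        out.append(sum(c * t for c, t in zip(coeffs, taps)))  # MAC
    return out

print(fir_direct([1, 2, 3, 4], [0.5, 0.5]))  # 2-tap moving-average example
```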
FIR Filter Using Distributed Arithmetic
Distributed Arithmetic (DA) is a different approach to implementing digital filters [2]. The basic idea is to replace all multiplications and additions with a table and a shift-accumulator. DA relies on the fact that the filter coefficients c[n] are known, so multiplying c[n]x[n] becomes multiplication by a constant. DA can be used to compute a sum of products (SOP); many DSP algorithms, such as convolution and correlation, are formulated in SOP fashion, as illustrated by the sketch below.
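A minimal bit-serial DA sketch follows (illustrative Python with hypothetical names; it assumes unsigned B-bit inputs for brevity, whereas the DA-OBC unit in this paper additionally applies offset binary coding to halve the LUT):

```python
# Bit-serial Distributed Arithmetic for a K-tap inner product: one LUT access
# and one shift-add per input bit plane instead of K multiplications.

K, B = 4, 8                     # taps and input bit width (assumed values)
coeffs = [3, -1, 4, 2]          # example filter coefficients

# LUT entry for bit pattern p = sum of coefficients whose input bit is 1.
lut = [sum(c for k, c in enumerate(coeffs) if (p >> k) & 1)
       for p in range(1 << K)]

def da_inner_product(samples):
    acc = 0
    for b in range(B):                          # one pass per bit plane
        pattern = 0
        for k in range(K):                      # gather bit b of each sample
            pattern |= ((samples[k] >> b) & 1) << k
        acc += lut[pattern] << b                # shift-accumulate partial sum
    return acc

x = [10, 20, 30, 40]
assert da_inner_product(x) == sum(c * s for c, s in zip(coeffs, x))
```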


Fig 4.7 DA Block Diagram

Flow Chart Diagram
Multiplication in FPGA
Multiplication is basically a shift-and-add operation. There are, however, many variations on how to do it, and some are more suitable for FPGA use than others. Bit-parallel multipliers are of two main types: array and tree multipliers. Because of speed and power considerations, the selection here is a tree multiplier structure. Among the many tree structures is the Wallace tree, an implementation of an adder tree designed for minimum propagation delay. Rather than completely adding the partial products in pairs as a ripple adder tree does, the Wallace tree sums up all the bits of the same weight in a merged tree. Usually full adders are used, so that three equally weighted bits are combined to produce two bits: one (the carry) with weight n+1 and the other (the sum) with weight n. Each layer of the tree therefore reduces the number of vectors by a factor of 3:2, and the tree has as many layers as necessary to reduce the number of vectors to two (a carry and a sum). Structurally, the Wallace tree is a tree of carry-save adders. A carry-save adder consists of full adders like the more familiar ripple adder, but the carry output from each bit is brought out to form a second result vector rather than being wired to the next most significant bit; the carry vector is 'saved' to be combined with the sum later. A Wallace tree multiplier is one that uses a Wallace tree to combine the partial products from a field of 1 x n multipliers (made of AND gates), and combining the Wallace tree with a fast final adder can offer a significant advantage. Although the Wallace tree is optimal in speed, it has complicated routing, which makes it impractical to implement since the cells in the tree have different loads and must be individually optimized. Modified fast parallel multipliers therefore combine the Wallace tree with Booth algorithms; the overturned-stairs adder tree is one such modification of the Wallace tree, which has the same speed as the Wallace tree and is sufficient for most DSP and communication applications. A small sketch of the 3:2 reduction idea follows.
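The 3:2 reduction can be sketched in a few lines (illustrative Python over non-negative integers; real Wallace trees operate on bit vectors in hardware):

```python
# Wallace-style reduction: repeatedly compress three operand vectors into a
# (sum, carry) pair until only two remain, then do one carry-propagate add.

def csa(a: int, b: int, c: int):
    return a ^ b ^ c, ((a & b) | (b & c) | (a & c)) << 1

def wallace_sum(operands):
    ops = list(operands)
    while len(ops) > 2:                 # each layer reduces 3 vectors to 2
        s, cy = csa(ops.pop(), ops.pop(), ops.pop())
        ops += [s, cy]
    return sum(ops)                     # final fast adder stage

# Partial products of 13 * 11 from rows of 1 x n AND gates.
pps = [13 * ((11 >> i) & 1) << i for i in range(4)]
assert wallace_sum(pps) == 13 * 11
```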

BOOTH MULTIPLIER
A power-efficient twin-precision modified-Booth multiplier is presented. For full-precision operation (16-bit inputs), the twin-precision modified-Booth multiplier shows an insignificant increase in critical-path delay (0.4%, or about 32 ps) compared to a conventional multiplier. To cancel the power overhead of the extra logic and achieve an overall power reduction, the implemented 16-bit twin-precision multiplier needs to run in dual 8-bit mode for more than 4.4% of its computations. Recent developments in embedded systems indicate an increased interest in reconfigurable functional units that can dynamically adapt the data path to varying computational needs. A system may need to switch between, for example, one application for speech encoding that requires functional units operating at 8-bit precision and another application that is based on 16-bit functional units to perform audio decoding. The twin-precision multiplier can switch between N-bit and N/2-bit precision multiplications without significant performance or area overhead. Previous work introduced a twin-precision technique for radix-2 tree multipliers, but when higher performance is needed, multipliers with a radix higher than two may be an option. This paper therefore explores the possibility of implementing the twin-precision concept on modified-Booth multipliers.
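To make the recoding concrete, the sketch below multiplies two signed integers by radix-4 (modified) Booth recoding of the multiplier; it is an illustrative software model only, not the twin-precision hardware design discussed above:

```python
# Radix-4 modified-Booth multiplication: scan the multiplier in overlapping
# 3-bit windows, recode each window to a digit in {-2,-1,0,1,2}, and sum the
# recoded partial products a * d * 4**i.

def booth_radix4_multiply(a: int, b: int, bits: int = 16) -> int:
    recode = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
              0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    b_ext = (b & ((1 << bits) - 1)) << 1    # two's complement, implicit 0 LSB
    digits = [recode[(b_ext >> i) & 0b111] for i in range(0, bits, 2)]
    return sum(d * a * (4 ** i) for i, d in enumerate(digits))

assert booth_radix4_multiply(123, -45) == 123 * -45
assert booth_radix4_multiply(-7, 9) == -63
```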
CARRY-SAVE-ADDER
The carry-save adder (CSA) is the most often used type of operation for implementing fast arithmetic in register-transfer-level design in industry. This work relates the properties of arithmetic computations to several optimizing transformations using CSAs that consistently derive better-quality results than manual implementations. In particular, two important concepts are introduced, operation-duplication and operation-split, which are the main driving techniques of the algorithm for achieving extensive utilization of CSAs. Experimental results from a set of typical arithmetic computations found in industry designs indicate that automating CSA optimization with this algorithm produces designs with significantly faster timing and less area.
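The 3:2 property that these transformations exploit is isolated in the short sketch below (illustrative Python): three operands are reduced to a sum word and a carry word with no carry propagation between bit positions.

```python
# Carry-save addition: a + b + c == s + carry, computed without any carry
# chain (each bit position is an independent full adder).

def carry_save_add(a: int, b: int, c: int):
    s = a ^ b ^ c                                   # full-adder sum bits
    carry = ((a & b) | (b & c) | (a & c)) << 1      # carries, weight n+1
    return s, carry

s, cy = carry_save_add(13, 7, 9)
assert s + cy == 13 + 7 + 9
```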

Circuit Design of 8-tap FIR Filter in MATLAB

MATLAB Results:


8-tap FIR Filter input signal

8-tap FIR Filter output signal
For this low-pass FIR filter, the output signal is the same as the input signal: there is no added noise at the output and no loss of data. The X-axis is the time period in microseconds and the Y-axis is amplitude; the bandwidth is 10 kHz.
5.2. ModelSim results:
Simulation result of FIR filter
The FIR filter was simulated with the DA-OBC algorithm for an 8th-order LUT. Taking the input as 10, it is multiplied with each bit value and accumulated simultaneously for all 8 ordered inputs, giving outputs of 90, 280 and 630.


5.3. Xilinx simulated results



The RTL schematic report shows the Booth multiplier with carry-save adder in the DA-OBC coding unit working for the FIR filter. The simulated results above illustrate how the Booth multiplier and carry-save adder operate the FIR filter in the DA-OBC coding unit: the Booth multiplier multiplies the operands and the products are automatically accumulated by the carry-save adder.
ANALYSIS REPORT OF MULTIPLIER AND ADDERS BY DA-OBC
NORMAL MULTIPLIER
Power: total estimated power consumption 42 mW
Timing: total 7.165 ns (6.364 ns logic, 0.801 ns route; 88.8% logic, 11.2% route)
Memory: total usage 214620 kilobytes
Power leakage: 0.081

BOOTH MULTIPLIER
Power: total estimated power consumption 27 mW
Timing: total 7.165 ns (6.364 ns logic, 0.801 ns route; 88.8% logic, 11.2% route)
Memory: total usage 194064 kilobytes
Power leakage: 0.027
CONCLUSION
This project targets a high-speed FIR filter. To increase speed, the DA-OBC algorithm is the best method among those considered: multipliers and adders are used for the filtering operation, and the enhanced system is a DA-OBC coding unit using a Booth multiplier and carry-save adder in place of the normal multiplier and adder. The power consumption and memory usage are much lower for the Booth multiplier with carry-save adder compared with the normal multiplier and adder, and the power leakage is also lower; on the basis of all these factors, the speed of the FIR filter is increased.

REFERENCES:
[1] Xiumin Wang, "Implementation of FIR Filter on FPGA Using DA-OBC Algorithm", IEEE, 2010.
[2] Shunwen Xiao, Yajun Chen, "The design of FIR filter based on improved DA algorithm and its FPGA implementation", IEEE International Conference on Computer and Automation Engineering (ICCAE '10), 2010, Vol. 2, pp. 589-591.
[3] Pramod Kumar Meher, "New Approach to Look-up-Table Design and Memory-Based Realization of FIR Digital Filter", IEEE.
[4] Ankit Jairath, "Design & implementation of FPGA based digital filters", Department of Electronics & Communication, Gyan Ganga Institute of Technology and Sciences, Jabalpur (M.P.), IJARCE, Issue 7, September 2012.






Stabilization of Ammonium Nitrate for Phase Modification (II) by Co-crystallization with Copper (II) Nitrate (Trihydrate)
Manish Kumar Bharti1, Sonia Chalia2
1Assistant Professor, Department of Aerospace Engineering, Amity School of Engineering and Technology, Amity University Gurgaon, Manesar, Haryana, India
E-mail- mkbharti@ggn.amity.edu

Abstract - The present study has been aimed to investigate the stabilization effects imposed by the addition of Copper (II) Nitrate (Trihydrate) (Cu(NO3)2.3H2O) on the phase modification (II) of Ammonium Nitrate (AN) and eventually on its thermal decomposition behavior. Cu(NO3)2.3H2O was co-crystallized with AN in weight percentages of 3%, 6% and 10% for the preparation of three samples of Phase Stabilized Ammonium Nitrate (PSAN). The thermal decomposition behaviors of untreated AN and the prepared samples of PSAN were assessed and compared using Differential Scanning Calorimetry (DSC) to observe the effectiveness of Cu(NO3)2.3H2O as a potential stabilizer. The present study indicated that Cu(NO3)2.3H2O, in low weight percentages, was able to provide a significant delay in the onset temperature range of the near-room-temperature phase modification (III) of AN occurring at around 32 °C - 34 °C. Also, increasing the weight percentage of Cu(NO3)2.3H2O in the composition resulted in the complete stabilization of phase modification (II) occurring at around 85 °C - 87 °C.
Keywords - Ammonium Nitrate, Copper (II) Nitrate (Trihydrate), Phase Stabilized Ammonium Nitrate, Co-crystallization, Differential Scanning Calorimetry, Eco-friendly Solid Oxidizer.
INTRODUCTION
The unmatched and extensive use of Ammonium Perchlorate (AP) as an oxidizer in Composite Solid Propellant (CSP) rockets is the outcome of its superior ballistic performance and the thorough knowledge available about it. In spite of these unparalleled advantages, the use of AP in CSP formulations has some shortcomings as well. During the firing of the large boosters of launch vehicles such as the Space Shuttle (containing around 503 tons of propellant), each Solid Rocket Booster (SRB) produces on average 100 tons of Hydrogen Chloride (HCl) gas, which is released to the atmosphere. The resulting residue of the combustion of 503 tons of propellant in the initial 120 seconds of flight is 21% HCl and 30.2% Alumina (Al2O3) [1].
The aforementioned problem, i.e. the emission of HCl gas associated with the use of AP as an oxidizer for solid propellant rockets, poses a crucial environmental concern. The emitted HCl gas lowers the vapor pressure of water vapor in the air, and the HCl co-condenses with the water vapor to form an aerosol by serving as a nucleation site for water vapors [2]. The aerosol thus formed creates a visible smoke signature which may eradicate the element of stealth and surprise in combat. To overcome these problems, it is imperative to replace AP with a green or eco-friendly propellant that is free from any kind of toxic combustion-product emission. It is also vital that the propellant composition be nearly or completely smoke free during combustion to minimize the detectability of the trajectory of the missile and/or the launching site during combat. One such favorable green oxidizer is Ammonium Nitrate (AN), as it delivers HCl-free as well as almost smoke-free combustion. But in spite of being a benign eco-friendly solid oxidizer, AN sees very restricted use in CSP formulations. The key reason behind this limited use of AN-based propellants is its two inherent and unfavorable characteristics, i.e. extensive hygroscopicity and five phase modifications occurring over a wide temperature range, i.e. -200 °C to 125 °C [3]. The transition temperature ranges of the various phase modifications of untreated AN are illustrated in Table 1.





Table 1: Phase Modifications and associated temperature range
Phase Modification      Temperature Range
Phase Modification V    -200 °C to -18 °C
Phase Modification IV   -18 °C to 32 °C
Phase Modification III  32 °C to 84 °C
Phase Modification II   84 °C to 125 °C
Phase Modification I    Above 125 °C

These phase modifications result in severe structural changes in the crystal lattice of AN, leading to substantial volumetric and density variations [4], [5]. These volumetric changes decrease the adherence between the crystallites, so the structural strength of the product decreases significantly during thermal cycles [6]. The occurrence of such phase modifications near the propellant processing and storage temperatures thus proves to be a stumbling block for the application of AN as a solid oxidizer in the formulation of CSP grains. The structural instability of AN leads to the formation of cracks in the propellant grains, which eventually results in unpredicted ballistic performance and/or disastrous failure of the mission. In order to achieve stable and predictable combustion of AN-based propellants, it is imperative to stabilize AN for one or more of its phase modifications, preferably those occurring around near-room and storage temperatures.
Various attempts have been made to stabilize these phase modifications by the addition of different inorganic as well as organic compounds [7], [8], [9]. Alkali metal nitrates, oxides and diamine complexes of various metals have been studied extensively to evaluate their abilities to deliver any stabilization effects [10], [11].
For this study, Cu(NO3)2.3H2O was selected for investigation as a potential stabilizer because of the presence of O2 and NO2 molecules in its thermal decomposition products. The thermal decomposition reaction of Cu(NO3)2.3H2O is represented below as equation (1):

2[Cu(NO3)2.3H2O] (s) → 2CuO (s) + 4NO2 (g) + O2 (g) + 6H2O (g)    (1)

The presence of free oxygen molecules in the thermal decomposition products of Cu(NO3)2.3H2O is advantageous when it is used as a stabilizer for the modification of an oxidizer. Also, the weak nature of the N-O bond makes NO2 a good oxidizer, as at elevated temperatures (around 150 °C) NO2 decomposes with the release of free oxygen through an endothermic process (ΔH = 114 kJ/mol) [12]. Such expedient properties make Cu(NO3)2.3H2O a viable candidate for a potential stabilizing agent.
In the present study, Cu(NO3)2.3H2O was co-crystallized with pure or untreated AN in varying weight percentages, and Differential Scanning Calorimetry (DSC) was carried out to investigate the modifications achieved in the onset temperature range and/or the complete stabilization of any of the phase modifications of AN, preferably those occurring at and above near-room temperatures.
EXPERIMENTAL WORK
CO-CRYSTALLIZATION
The stabilizing compound has to be introduced into the crystallographic structure of AN to accomplish effective stabilization. Out of the available techniques for co-crystallization, the evaporation technique was preferred owing to its feasibility and lower complexity under normal laboratory conditions. Since the selection of the solvent is greatly influenced by its volatility, Methanol (CH3OH) was preferred as the solvent for the co-crystallization.
Required quantities of AN and Cu(NO3)2.3H2O were mixed with methanol in a 20 ml beaker. The quantity of methanol was kept to a minimum, just enough to dissolve the chemicals completely. Since AN is less soluble in methanol under normal laboratory conditions, a few drops of distilled water were also added to ease and quicken the dissolution of both chemicals. The solution was mildly heated and continuously stirred over a hot plate equipped with a magnetic stirrer till the complete dissolution of both chemicals was achieved.
After dissolution, the heating rate was lowered and the stirring was done manually and occasionally till all of the solvent had evaporated. After saturation, a film was observed on the surface of the solution, and the solution was allowed to cool down to room temperature, resulting in its solidification. Wet, needle-like co-crystals of PSAN were obtained, to which a small quantity of acetone was added for re-crystallization and for the removal of any moisture present with the co-crystals. The co-crystals thus yielded were filtered, dried under vacuum and stored in air-tight vials.
The relevant product specifications for the chemicals used and the co-crystallization yield data for the three prepared samples are listed in Tables 2 and 3, respectively.
Table 2: Specifications of chemicals used
Chemical                           Make    Molar mass (g/mol)   Melting/Boiling Point (°C)   Density (g/ml)
Ammonium Nitrate                   Merck   80.052               169.6                        1.725
Copper (II) Nitrate (Trihydrate)   Merck   241.60               114.5                        2.32
Methanol                           Merck   32.04                65                           0.7918
Acetone                            Merck   58.08                57                           0.791

Table 3: Co-crystallization yield data for the samples
Ammonium Nitrate   Copper (II) Nitrate (Trihydrate)   Total Batch Weight (g)   Solvent       Yield (g)
97 wt%, 1.455 g    3 wt%, 0.045 g                     1.5                      CH3OH + H2O   1.32
94 wt%, 1.41 g     6 wt%, 0.09 g                      1.5                      CH3OH + H2O   1.23
90 wt%, 1.35 g     10 wt%, 0.15 g                     1.5                      CH3OH + H2O   1.28
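The component masses in Table 3 follow directly from the weight percentages applied to the 1.5 g batch; the short check below (illustrative only) reproduces them:

```python
# Reproducing the Table 3 batch quantities from the weight percentages.
def batch_masses(total_g: float, stabilizer_pct: float):
    stab = total_g * stabilizer_pct / 100.0      # Cu(NO3)2.3H2O mass
    return total_g - stab, stab                  # (AN mass, stabilizer mass)

for pct in (3, 6, 10):
    an, cu = batch_masses(1.5, pct)
    print(f"{100 - pct}% AN = {an:.3f} g, {pct}% stabilizer = {cu:.3f} g")
```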

DSC Analysis
After co-crystallization, the prepared samples were put to DSC analysis to assess the net effect of the addition of Cu(NO3)2.3H2O on the thermal decomposition behavior of AN, thereby providing information on its stabilizing potential. A NETZSCH Simultaneous Thermal Analyzer (STA 409/PG) was used to carry out the DSC analysis; the thermal analysis was done in an ultrapure nitrogen atmosphere purged into the furnace at a rate of 60 ml/min. A sample mass of 1.5 mg was taken in an alumina crucible for each run. Repeated runs of the prepared samples of PSAN as well as untreated AN were carried out at a heating rate of 10 °C/min and were compared.
RESULTS AND DISCUSSIONS
Thermal Decomposition of Untreated AN
The thermal decomposition behavior of untreated AN is shown in fig. 1. The DSC thermogram of untreated AN revealed five endothermic peaks when the sample was heated from 25 °C to 350 °C. The first three endothermic peaks, with onset temperatures of 32.2 °C, 87.4 °C and 125.5 °C respectively, were observed due to three phase modifications of untreated AN. The fourth endothermic peak, with an onset temperature of around 167.7 °C, represents the absorption of heat for the melting of AN, and the fifth endothermic peak shows the heat absorption for the final and complete decomposition of AN.
Thermal Decomposition of Co-crystallized AN
The use of Cu(NO3)2.3H2O as a stabilizer proved to be effective in delaying the onset of the phase modification (III) by a range of 19.62 °C - 20.16 °C, depending on its weight percentage in the co-crystals. A maximum delay of 20.16 °C in the onset temperature of phase modification (III) was observed when Cu(NO3)2.3H2O was incorporated at 10% by weight.
The addition of 3% of Cu(NO3)2.3H2O delayed the onset temperature of the first endothermic peak but failed to provide any effect on the rest of the decomposition behavior of AN, as shown in fig. 2. But when the weight percentage of Cu(NO3)2.3H2O was raised to 6%, it showed a successful stabilizing effect on the phase modification (II) along with a delay in the onset of phase modification (III), as can be seen in fig. 3. A similar effect was observed on incorporating 10% of Cu(NO3)2.3H2O by weight, as represented in fig. 4. No
significant change was observed in the melting point temperature range of AN irrespective of the weight percentage of the stabilizing agent in the co-crystals, although all three weight percentages of Cu(NO3)2.3H2O provided an appreciable reduction of around 54 °C - 58 °C in the decomposition temperature range of untreated AN.



Fig. 1: DSC Thermogram of the thermal decomposition behavior of untreated AN
Fig. 2: DSC Thermogram of 97% AN co-crystallized with 3% of Copper (II) Nitrate (Trihydrate)
Fig. 3: DSC Thermogram of 94% AN co-crystallized with 6% of Copper (II) Nitrate (Trihydrate)
Fig. 4: DSC Thermogram of 90% AN co-crystallized with 10% of Copper (II) Nitrate (Trihydrate)
CONCLUSION
The present experimental study indicated that Cu(NO3)2.3H2O, in a weight percentage as low as 6%, can be utilized to completely stabilize AN for the phase modification (II). The addition of Cu(NO3)2.3H2O also increased the onset temperature of phase modification (III) by a significant range of around 20 °C. The stabilizing agent under investigation significantly lowered the decomposition temperature range of the PSAN, which may lead to enhanced burning rates of AN-based propellant grains due to the reduced amount of heat required for the combustion of the solid propellant.

REFERENCES:
[1] Fahey, D. W., Scientific assessment of Ozone depletion, 2006 Update, Global Ozone Research and Monitoring Project Report
No. 50, World Meteorological Organization, Geneva, Switzerland, pp. 572, 2007.

[2] Korting, P. A. O. G., Zee, F. W. M. and Meulenbrugge, J. J., Combustion characteristics of low flame temperature Chlorine-
free composite solid propellants, AIAA, Vol. 6, Issue 3, pp. 1803- 1811, 1987.
[3] Kim, J. H., Preparation of Phase Stabilized Ammonium Nitrate (PSAN) by salting out process, Journal of Chemical
Engineering, Japan, Vol. 30, Issue 2, pp. 336- 338, 1997.
[4] Hendricks, S. B., Posnjak, E. and Kracek, F. C., Molecular rotation in the solid state; the variation of the crystal structure of
Ammonium Nitrate with temperature, Journal of American Chemical Society, Vol. 54, Issue 7, pp. 2766- 2786, 1932.
[5] Brown, R. N. and McLaren, A. C., On the mechanism of the thermal transformations in solid Ammonium Nitrate, Proceeding
of the Royal Society, Vol. 266, pp. 329- 343, 1962.
[6] Herrmann, M. J. and Engel, W., Phase transitions and lattice dynamics of Ammonium Nitrate, Propellants, Explosives and
Pyrotechnics, Vol. 22, Issue 3, pp. 143- 147, 1997.
[7] Falck-Muss, R., Newman, D. J. and Atkin, S., Stabilized Ammonium Nitrate, US Patent No. 3649173, 1972.
[8] Mishra, I. B., Phase Stabilization of Ammonium Nitrate with Potassium Fluoride, US Patent No. 4552736 A, 1986.
[9] Campbell, A. N. and Campbell, A. J. R., The effect of a foreign substance on the transition: AN (IV- III), Canadian Journal of
Research, Vol. 24 (b), Issue 4, pp. 93- 108, 1946.
[10] Eisenreich, N., Deimling, A. and Engel, W., Phase transitions of Ammonium Nitrate doped with alkali nitrates studied with fast
X- Ray diffraction, Journal of Thermal Analysis and Calorimetry, Vol. 38, Issue 4, pp. 843- 853, 1992.
[11] Engel, W. and Heinisch, H., Process for producing Phase Stabilized Ammonium Nitrate, US Patent No. 6508995, 2003.
[12] Rosser, W. A. and Wise, H., Thermal decomposition of Nitrogen Dioxide, Journal of Chemical Physics, Vol. 24, pp. 493-494,
1956















E-Waste Management Practices: Specific Focus on Indore & Jabalpur
Dr. Devendra S Verma1, Shekhar Agrawal1
1Department of Mechanical Engineering, Institute of Engineering & Technology, DAVV - Indore
E-mail- devl_ver@yahoo.co.in
E-mail- devl_ver@yahoo.co.in

Abstract- E-wastes are old electronic and electrical products, or communication and computer products, which have reached the end of their life. These e-wastes fall into different categories. Developed countries have been sending e-waste to developing countries, and it has now become a matter of concern for the latter. Changing technology is also a reason for the increase in e-waste generation. In this paper, a study of e-waste management practices was carried out through a survey in two Indian cities, Indore and Jabalpur, and results were obtained regarding awareness among people about e-waste, methods of e-waste management, and suggestions for e-waste management. Based on this survey, it is recommended that the role of government be increased for controlling the informal method of e-waste management, promoting the formal method, increasing awareness among people about the hazardous effects of e-waste, and ensuring its proper disposal. The responsibility of manufacturers should also increase, through buy-back schemes for e-waste.
Keywords - e-waste, hazardous, formal and informal, suggestions, developing, responsibility
INTRODUCTION
India is a developing country with the world's second largest population after China. Its present GDP growth rate is 4.7 percent, and it achieved 8 percent growth during the eleventh five-year plan, from 2007 to 2012 [1]. With this growth, the needs and lifestyle of Indian people change continuously. Due to a huge revolution in technology there is advancement in every sector, and the electronics and communication market is booming in India. A large number of people are therefore replacing their old electronics and communication products with new ones, and this large volume of old electronic and communication products is generating a large quantum of e-waste in India.
Electronic waste (e-waste) comprises old electronic/electrical items which are not fit to deliver good service for their intended use or have reached their end of life. This may include items such as computers, servers, mainframes, monitors, CDs, printers, scanners, copiers, calculators, fax machines, battery cells, cellular phones, transceivers, TVs, medical apparatus and electronic components, besides white goods such as refrigerators and air-conditioners. E-waste contains valuable materials such as copper, silver, gold and platinum which could be processed for their recovery [2].

A. Categorisation of e-waste:
E-waste is categorised into the different categories shown in the table below.
Table 1. WEEE categories according to the EU directive on WEEE (EU, 2002a) [3]
S No   Category                                                                                          Label
1      Large household appliances                                                                        Large HH
2      Small household appliances                                                                        Small HH
3      IT and telecommunications equipment                                                               ICT
4      Consumer equipment                                                                                CE
5      Lighting equipment                                                                                Lighting
6      Electrical and electronic tools (with the exception of large-scale stationary industrial tools)   E & E tools
7      Toys, leisure and sports equipment                                                                Toys
8      Medical devices (with the exception of all implanted and infected products)                       Medical equipment
9      Monitoring and control instruments                                                                M&C
10     Automatic dispensers                                                                              Dispenser
Source: Rolf Widmera et al. (2005)
Out of the ten categories listed in the table above, categories 1-4 contribute almost 95% of the WEEE generated. These categories include the following products, which lead to e-waste generation [4]:
- Large Household Appliances - Washing machines, Dryers, Refrigerators, Air conditioners, etc.

- Small Household Appliances - Vacuum cleaners, Coffee machines, Irons, Toasters, etc.
- Office, Information & Communication Equipment - PCs, Laptops, Mobiles, Telephones, Fax machines, Copiers, Printers, etc.
- Entertainment & Consumer Electronics - Televisions, VCR/DVD/CD players, Hi-Fi sets, Radios, etc.
- Lighting Equipment - Fluorescent tubes, sodium lamps, etc. (except bulbs and halogen bulbs)
- Electric and Electronic Tools - Drills, Electric saws, Sewing machines, Lawn mowers, etc. (except large stationary tools/machines)
- Toys, Leisure, Sports and Recreational Equipment - Electric train sets, coin slot machines, treadmills, etc.

B. Process of e-waste management:
According to the Basel Action Network (BAN), which works for the prevention of the globalisation of toxic chemicals, 50 to 80 per cent of the e-waste collected in the US is exported to India, China, Pakistan, Taiwan and a number of African countries. This is possible because of the availability of cheaper labour in these countries, and because the export of e-waste from the US is legal [5]. In India, e-waste management is carried out mostly by the informal sector, whose e-waste management practices are very dangerous both for the environment and for human beings [6].
METHODOLOGY
To understand the pattern of growth and disposal of e-waste in India, a survey was conducted in two cities of Madhya Pradesh state, Indore and Jabalpur. The survey was done in two steps:
1. Collection of secondary data
2. Primary data collection & analysis
A. Collection of secondary data:
In this step, the status of electronic and communication products in Indore and Jabalpur was estimated by measuring the percentage of households possessing a television, computer/laptop or mobile phone, which are the main constituents leading to e-waste generation. The percentages of households possessing these items in Indore and Jabalpur are shown below.
Table 2: Percentage of households possessing different products
District   Total Households   Television (%)   Computer/Laptop (%)   Mobile phone (%)
Indore     6,15,334           75.5             18.1                  62.1
Jabalpur   5,15,029           55.4             11.4                  43.7
Source: Census of India 2011

From the above table it can be concluded that, in both cities, the majority of households possess a television, followed by a mobile phone and a computer/laptop. The e-waste generated from these households can be expected to follow the same pattern.
B. Primary data collection:
Sample size:
For this survey, a sample size of ten sellers in each of the following business groups was identified, covering different market locations, sizes of firm, and types of firm (retailer, dealer or branded showroom owner). The data collected are shown below.
- Group A: Computer & peripheral sellers
- Group B: Electrical & electronic goods sellers
- Group C: Mobile & accessories sellers
In addition, some scrap dealers/vendors of e-waste were also interviewed and findings obtained.
Mode of communication:
Respondents of all groups were interviewed and the relevant information was obtained.


DATA COLLECTION, ANALYSIS AND RESULTS
The survey mainly covered four points:
1. Awareness about e-waste and its management among sellers
2. Use of items received under exchange from purchasers
3. Suggestions/recommendations regarding e-waste management
4. Results from scrap dealers or vendors

A. Awareness about e-waste and its management among sellers
The data regarding awareness among sellers about e-waste and its management were obtained by asking them questions about the hazardous effects of e-waste, safe disposal of e-waste, and knowledge of recyclers. Based on their answers, the sellers in all three groups were categorised as having deep knowledge, shallow knowledge or no knowledge. The findings are summarised in the table and the result is shown in the graph.
Table 3: Awareness & knowledge among businesses on e-waste management
Level of knowledge about e-waste   Group A respondents   Group B respondents   Group C respondents
Deep knowledge                     3                     2                     2
Shallow knowledge                  6                     6                     7
No knowledge                       1                     2                     1


Fig 1: Result showing percentage-wise distribution of different product sellers according to their knowledge about e-waste
Results on the level of knowledge about e-waste among the first three categories of product sellers:
It was observed from the graph that about 30% of computer and peripheral sellers, about 20% of electrical and electronic goods sellers and about 20% of mobile and accessories sellers have deep knowledge of e-waste, meaning they know how e-waste is generated, how it passes from one customer to another, and how it gets disposed of and recycled.
It was also found that about 60% of computer and peripheral sellers, about 60% of electrical and electronic goods sellers and about 70% of mobile and accessories sellers have shallow knowledge of e-waste: they have a brief understanding of e-waste generation but lack knowledge of the environmentally sound disposal of end-of-life IT/communication and electronic products, knowing only the process they themselves use to manage e-waste coming to them through exchanges.

Further, it was observed that about 10% of computer and peripheral sellers, about 20% of electrical and electronic goods sellers and about 10% of mobile and accessories sellers have no knowledge of e-waste.
B. Use of items received under exchange by the first three groups
Sellers take a customer's old product in exchange for a new product. They then manage these old products, i.e. e-waste, in different ways, such as selling to the second-hand market or a mechanic, or selling to a scrap dealer/vendor.

Group A: Computer and peripheral sellers:

The findings obtained from computer and peripheral sellers are summarised in the table below and the result is shown in the graph.

Table 4: Computer & peripheral showroom owners/retailers using different methods to manage e-waste
S No.   Method / Facility (showroom owner/retailer code)   1  2  3  4  5  6  7  8  9  10
1.      Sell to second-hand market/mechanic                Y  Y  Y  Y  N  Y  N  N  N  Y
2.      Sell to scrap dealer/vendor                        Y  N  N  Y  N  Y  N  Y  N  Y
3.      Whether exchange facility available                Y  Y  Y  Y  N  Y  N  Y  N  Y
4.      Company support for recycling                      N  N  N  N  N  N  N  N  N  N
Y - Yes, N - No
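The percentages plotted in Fig 2 follow from tallying the Y/N grid of Table 4; the short script below (illustrative only) reproduces the split:

```python
# Tallying Table 4: which sellers use which disposal routes.
second_hand = "YYYYNYNNNY"      # row 1: sell to second-hand market/mechanic
scrap       = "YNNYNYNYNY"      # row 2: sell to scrap dealer/vendor
exchange    = "YYYYNYNYNY"      # row 3: exchange facility available

n = len(exchange)
both       = sum(s == "Y" and d == "Y" for s, d in zip(second_hand, scrap))
only_2nd   = sum(s == "Y" and d == "N" for s, d in zip(second_hand, scrap))
only_scrap = sum(s == "N" and d == "Y" for s, d in zip(second_hand, scrap))
no_exch    = exchange.count("N")
print(f"both: {100*both//n}%, second-hand only: {100*only_2nd//n}%, "
      f"scrap only: {100*only_scrap//n}%, no exchange: {100*no_exch//n}%")
# -> both: 40%, second-hand only: 20%, scrap only: 10%, no exchange: 30%
```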


Fig 2: Result showing percentage-wise distribution of computer and peripheral showroom owners/retailers according to method of e-waste management
Results for computer & peripheral related e-waste:
From the study it was found that about 40% of computer and peripheral sellers sell the old items exchanged from customers both in the second-hand market and to scrap dealers.
About 20% sell the old exchanged items to second-hand product customers or mechanics.
About 10% sell the old exchanged items to a scrap dealer or vendor only.
About 30% of computer and peripheral sellers, who sell computers and laptops of branded companies only, do not offer an exchange facility.
Many computer manufacturers mention an e-waste programme and collection facility on their websites, but at the ground level no such information reaches the showroom owners and retailers.
Group B- Electrical & Electronic goods Seller
The finding obtained from Electrical and Electronics seller is summarised in table and result is shown in graph.





Table 5: Electrical & electronic showroom owners/retailers using different methods to manage e-waste
S No.   Method / Facility (showroom owner/retailer code)   1  2  3  4  5  6  7  8  9  10
1.      Sell to second-hand market/mechanic                Y  N  Y  N  N  N  N  N  N  N
2.      Sell to scrap dealer/vendor                        N  N  N  Y  N  Y  Y  Y  Y  Y
3.      Whether exchange facility available                Y  N  Y  Y  N  Y  Y  Y  Y  Y
4.      Company support for recycling                      N  N  N  N  N  N  N  N  N  N
Y - Yes, N - No

Fig 3: Result showing percentage-wise distribution of electrical & electronics showroom owners/retailers according to method of e-waste management
Results from electronic goods sellers:
As found from the survey (and consistent with Table 5), about 20% of electronic goods sellers sell the old electronic items exchanged from customers to the second-hand market or to mechanics.
About 60% sell the old electronic items exchanged from customers to a scrap dealer or vendor.
About 20% do not provide an exchange facility.
Most electronic goods manufacturers do not provide any support to retailers and showroom owners for the management of exchanged items.
Group C- Mobile & Accessories Seller
The findings obtained from mobile and accessories sellers are summarised in the table below and the result is shown in the graph.
Table 6: Mobile company/private showroom owners/retailers using different methods to manage e-waste
S No.   Method / Facility (showroom owner/retailer code)   1  2  3  4  5  6  7  8  9  10
1.      Sell to second-hand market/mechanic                Y  N  N  N  N  N  N  N  N  N
2.      Sell to scrap dealer/vendor                        N  Y  Y  Y  Y  Y  Y  Y  Y  Y
3.      Whether exchange facility available                Y  Y  Y  Y  Y  Y  Y  Y  Y  Y
4.      Company support for recycling                      N  N  N  N  N  N  N  N  N  N
Y - Yes, N - No


Fig 4: Result showing percentage-wise distribution of mobile and accessories showroom owners/retailers according to method of e-waste management
Results from mobile company/private showroom owners:
About 10% of mobile phone sellers sell the old mobile phones exchanged from customers in the second-hand market or to mechanics.
About 90% sell the old mobile phones exchanged from customers to a scrap dealer or vendor.
C. Suggestions given by different product sellers:
During the survey, different product sellers provided suggestions and recommendations based on their knowledge of e-waste. These suggestions are compiled in the table below.
Table 7: Data on suggestions given by different product sellers
S. No.   Suggestion                                                                                                    Group A   Group B   Group C
1.       Increase in govt responsibility by providing training to scrap dealers and increasing e-waste collection/recycling centres   4   2   1
2.       Increase in awareness among people about e-waste and its ill effects                                          1         -         5
3.       Company should establish buy-back channel for old used products                                               1         3         -
4.       Not willing to give any suggestion                                                                            4         5         4




Fig 5: Result showing percentage-wise distribution of different product sellers according to the suggestions given by them
Results on suggestions given by different types of sellers:
It was observed from the graph that about 40% of computer and peripheral sellers, about 20% of electrical and electronic goods sellers and about 10% of mobile and accessories sellers suggest increasing the government's responsibility by establishing more collection centres and recycling facilities and by promoting door-to-door collection.
It was also found that about 10% of computer and peripheral sellers and about 50% of mobile and accessories sellers suggest increasing awareness about e-waste among people by organizing awareness camps in schools, colleges, market places, offices, and other public places. Awareness could also be increased by highlighting the ill effects of e-waste on human beings and the environment.
Further, it was observed that about 10% of computer and peripheral sellers and about 30% of electrical and electronic sellers suggest establishing a buy-back facility by the manufacturer, i.e. a process of taking back old and obsolete products from customers and retailers.
It was also observed from the graph that about 40% of computer and peripheral sellers, about 50% of electrical and electronic goods sellers and about 40% of mobile and accessories sellers were not willing to give any suggestion, as most of them are company employees and are not authorised to give suggestions which could adversely affect their company.
D. Results from scrap dealers or vendors:
As it is observed from the survey that majority of the sellers of all the three categories are managing their e-waste by giving it to
scrap dealer or vendor. Therefore some scrap dealer or vendor are approached during the survey and try to find out what these scrap
dealer do with the e-waste collected from households, shops and offices. The finding obtained could be described as a process which
includes following steps:
Step 1: Sourcing by informal recyclers - In this step e-waste is collected by informal scrap dealers from households and businesses. Households sell e-waste to the second-hand market or to showroom owners/retailers through exchange schemes; sometimes scrap collectors collect e-waste directly from households. Informal scrap collectors also collect e-waste from government organizations and business firms by participating in auctions, by directly approaching the offices, or through exchange schemes.
Step 2: Aggregation - After collecting the e-waste, the scrap dealers check the material received and divide it into three parts: material that can be resold in the second-hand market; items that can be repaired or refurbished and resold; and material to be sent for recycling.
Step 3: Segregation and dismantling - Those parts of the e-waste which cannot be resold in their original form are dismantled, either by the scrap collector himself or by a dismantler to whom they are sold. Only electrical and electronic products, mainly fridges, TVs and washing machines, are dismantled at the local level. Most computers and mobiles cannot be dismantled locally and are taken by e-waste collectors from Delhi, where dismantling is done by experts of the informal sector. After dismantling, the components are checked again to see whether any part can be reused; reusable components are sold at a higher price than non-reusable ones.

Step 4: Recycling - After segregation and dismantling, the parts of the waste electronic products that cannot be resold are recycled. At each stage the workers are experts in their specific job, and most of the recycling work is performed by the informal sector. Although a government-authorised recycler, the only e-waste recycler in Madhya Pradesh and registered under the Hazardous Waste (Management, Handling and Transboundary Movement) Rules, 2008, is available in Indore, most scrap dealers/vendors do not give their recyclable e-waste to it. Instead they stockpile it or give it to e-waste collectors from Delhi, who pay a handsome amount for it. Because of this practice, the formal recycler cannot obtain enough e-waste to run its recycling plant regularly.
ACKNOWLEDGEMENT
The author wishes to thank all the respondents met during the survey for their support in carrying out this work.
CONCLUSION & RECOMMENDATIONS
From the above analysis of the data it is concluded that responsibility should be assigned at various levels. The government should develop a model through which informal recyclers can be involved in environment-friendly recycling of e-waste, and should permit only those recyclers who recycle in an environment-friendly manner to take part in e-waste auctions. The government should also make plans to increase awareness among people of the hazardous effects of e-waste and encourage them to give their e-waste to collectors who recycle it in an environment-friendly manner. Companies should likewise spread awareness among end users about the hazardous effects of e-waste and provide, along with the product, proper information about its disposal after use. Companies should also promote buy-back facilities for proper disposal of e-waste by end users; for this, collection and drop-off centres should be opened for customers in addition to the already available dealer locations.










Dynamic Behavior of 2-D Flexible Porous Vertical Structure Exposed to Waves and Current: A Numerical Simulation
Sonia Chalia
Assistant Professor, Amity University, Haryana, India
E-mail: Schalia@ggn.amity.edu
Abstract: The main objective is to determine the environmental effects on a flexible porous vertical structure, and its stability, when exposed to different wave and current parameters. A numerical model of a 2-D net is built and simulated to analyze the response of the structure and the tension in the mooring lines under these conditions. The system comprises a flexible net attached to floaters at the top, a weight suspended at the bottom, and highly tensioned moorings with fixed floatation. The analysis is performed with the OrcaFlex software, a 3-D non-linear time-domain finite element program capable of dealing with arbitrarily large deflections of the flexible structure from its initial configuration. The input forcing in the model consists of regular waves, with or without a steady current. The results analyzed are the effects of different waves, wave height and wave period, the influence of floater movement on the structural forces in the net, the effect of current on mooring line tension, and how the bottom weight affects the mooring line tension.
Keywords: Flexible porous vertical structure, Net, Mooring tension force, Numerical model, OrcaFlex, Waves and current

INTRODUCTION
A flexible porous structure can be used as a breakwater to reduce the intensity of wave action in inshore waters, and also as a fishing net in a fish cage system. Such a structure is mainly composed of a supple net, mooring lines and floaters. A flexible structure undergoes large deformation under external and internal forces compared with a rigid structure in the same environmental conditions. For example, the shape of a net in a current readily changes with the flow speed, and the netting twine stretches under the various loads experienced during fishing; because the net material is highly non-linear, a small environmental change can produce a large deformation. To make the structure more reliable and stable, it is important to be able to predict its behavior in different wave situations.
Until recently, the study of the interaction between water waves and nets has been addressed mainly in the coastal engineering literature, with respect to the wave-trapping qualities of porous structures. Wave interaction with a flexible porous breakwater in a two-layer fluid has been studied by P. Suresh Kumar and T. Sahoo. Wave reflection and transmission by vertical porous barriers have been studied analytically and experimentally (Chwang and Chan, 1998; Garcia et al., 2004; Twu and Lin, 1991). An investigation of water waves on flexible and porous breakwaters has been carried out by Keh-Han Wang and Xugui Ren. Numerical methods are currently being developed to study the dynamic behavior of flexible net structures in waves and current (Lader et al., 2003). Furthermore, Wu et al. (1998) theoretically investigated the damping effect of a horizontally submerged perforated plate; with their research as a background, Williams et al. (2000) studied a freely floating cylinder with partly permeable walls. The fish net as a breakwater structure was investigated by Chan and Lee (2001). With wave exposure becoming more and more critical, understanding the interaction between waves and structure is important: it is necessary to understand both the dynamic forces acting on the net structure and the tension in the mooring system.
Much research remains before the dynamic behavior of flexible porous structures is satisfactorily understood. It is therefore important to study the principal behavior of these structures in general and, in particular, to develop numerical models that can predict their behavior under different load conditions. This research presents a simple numerical model of a flexible porous structure, used to study the behavior of a net panel exposed to waves and current. The motivation for studying only a single net sheet is to gain a principal understanding of the complex dynamic and hydroelastic properties of flexible netting structures.
MODEL DESCRIPTION

ASSUMPTIONS
The model is composed of many meshes, and each mesh can be regarded as a construction of four mesh bars connected to each other at their ends, as shown in Fig. 1. To build up the numerical model, the following assumptions are used:
(1) There is only tension in the axis direction of a mesh bar and the tension is constant across the cross-section of the mesh bar.
(2) The relative displacements of all points on the cross-section of the mesh bar are equal.
(3) The cross-sectional area of the mesh bar remains constant during deformation.
(4) The netting twine is completely flexible and easily bent without resistance.

Fig. 1 Numerical model of 2-D Flexible porous vertical structures in OrcaFlex
MATERIAL USED
In making a net for a specific purpose, many considerations must be taken into account, such as the forces acting on the net, their distribution around the net, the materials the net and mooring lines are made from, and the way in which these are used. The main forces on any net structure arise from winds, waves and currents, and from the interaction of the structure and its mooring system with the resulting movements. The rope and net industry has seen many changes over the years. Initially only steel and natural fibres were on the market as raw materials. In the 1940s and 1950s, high-tenacity polymeric fibres such as polyamides and polyester were developed, with many advantages over traditional materials. This opened the way to new low-weight constructions with rot-resistant materials. Because of their chemical composition, polyester and polyamides have intrinsic advantages for use in the marine environment. Water hardly affects their properties and cold-water shrinkage is virtually zero, so they

can be regarded as very stable materials. Apart from this insensitivity to water, the chemical composition of polyester and polyamide gives good resistance to UV weathering and wet abrasion. The material used in this model is Nylon 210D/96.
OVERVIEW
The model described here is a simple model of a single net sheet, consisting mainly of a netting system, floaters (buoys), a weight system to provide tension in the net, and a mooring system connecting the structure to the sea bed and to fixed floaters. The structure, being flexible and porous, is held in place on the sea bed by mooring lines. The net is modeled as a series of lumped point masses interconnected by massless springs. Lumped point masses are set at each knot and at the centre of each mesh bar. Each knot point mass is assumed to be spherical, so its fluid force coefficient is constant with the direction of motion. Because a mesh bar is cylindrical, however, the fluid forces acting on its point masses differ with direction; it is therefore assumed that the lumped points at each mesh bar have the fluid dynamic characteristics of cylindrical elements, with fluid force coefficients that vary with the direction of the relative fluid velocity.
What matters in this model is the behavior of the net as a complete entity. The net mesh needs to be modeled in sufficient refinement to show the distribution of loading; this means an equivalent mesh can be generated that has the same resultant loads but does not reproduce each individual knot and line, essentially the same as defining the mesh refinement on a surface for an FE model. These nets are suspended below floating buoys and the whole structure is then moored using further lines. A minimal sketch of this lumped-mass idealization is given below.
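To make the lumped-mass idea concrete, the following Python sketch implements the idealization described above for one vertical twine of the net: point masses at the knots joined by massless, tension-only springs, a cylinder-type quadratic drag load, and the bottom weight lumped into the last node. This is not the OrcaFlex formulation; the stiffness, masses, drag coefficient and time step are illustrative assumptions.

import math

N = 11                    # lumped masses along one vertical twine (3 m deep net)
L0 = 3.0 / (N - 1)        # unstretched segment length, m
EA = 5.0e4                # axial stiffness of a twine segment, N (assumed)
CD, DIA = 1.2, 0.0085     # drag coefficient (assumed) and twine diameter, m
RHO = 1025.0              # sea water density, kg/m^3
mass = [0.02] * N         # knot masses, kg (assumed)
mass[-1] = 10.0           # the 10 kg bottom weight lumped into the last node

def simulate(u_current=0.5, dt=2e-4, t_end=2.0):
    """Semi-implicit Euler integration of the mass-spring twine.
    Node 0 is held fixed at the floater (the 'fixed top point' case)."""
    x = [0.0] * N; z = [-i * L0 for i in range(N)]
    vx = [0.0] * N; vz = [0.0] * N
    for _ in range(int(t_end / dt)):
        fx = [0.0] * N; fz = [0.0] * N
        # tension-only spring force in each mesh-bar segment
        # (assumption 4 above: the twine is flexible and cannot push)
        for i in range(N - 1):
            dx, dz = x[i+1] - x[i], z[i+1] - z[i]
            l = max(math.hypot(dx, dz), 1e-9)
            T = max(EA * (l - L0) / L0, 0.0)
            fx[i] += T * dx / l;  fz[i] += T * dz / l
            fx[i+1] -= T * dx / l; fz[i+1] -= T * dz / l
        # quadratic drag on each node, treated as a short cylinder element
        for i in range(N):
            ur = u_current - vx[i]
            fx[i] += 0.5 * RHO * CD * DIA * L0 * ur * abs(ur)
        fz[-1] -= mass[-1] * 9.81   # bottom-weight load (knot weight/buoyancy neglected)
        for i in range(1, N):       # node 0 stays at the floater
            vx[i] += fx[i] / mass[i] * dt; vz[i] += fz[i] / mass[i] * dt
            x[i] += vx[i] * dt;            z[i] += vz[i] * dt
    # tension in the top segment ~ load handed to the floater/mooring
    top_T = EA * (math.hypot(x[1] - x[0], z[1] - z[0]) - L0) / L0
    return x, z, top_T

With u_current = 0.5 m/s the twine settles into the inclined, drag-loaded shape discussed above, and top_T gives the load transferred to the floater and mooring.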
BUOYS
Buoys can be classified into two categories.
3D buoys are simplified point elements with only three degrees of freedom: X, Y and Z. They do not rotate but remain aligned with the global axes, as shown in Figs. 2 and 3; they therefore have no rotational properties, and moments on the buoy are ignored. They should be used only where the structure needs to remain stationary, and are also known as a fixed floatation system.
Fig. 2: Numerical model of a 3D buoy. Fig. 3: 3D buoy.
6D buoys are objects having all six degrees of freedom: three translational (X, Y, Z) and three rotational (rotations 1, 2, 3). These buoys have both mass and moments of inertia, and forces and moments from many different effects can be modeled.

Lines attached to a 6D buoy can thus experience both moment effects and translations as the buoy rotates. Lines can be attached at an offset position on a buoy; this allows the direct study of line clashing, including the separation introduced by spaced attachment points.
Lumped-type 6D buoys (shown in Figs. 4 and 5) are used in the model, although this choice restricts the accuracy with which interactions with the water surface are represented: where a lumped buoy pierces the surface it is treated, for buoyancy purposes, as a simple vertical stick element with a length equal to the specified height of the buoy, so the buoyancy changes linearly with vertical position regardless of orientation.


Fig. 4: Numerical model of a 6D lumped buoy. Fig. 5: 6D buoy.
Buoys act as cushions to absorb the hydrodynamic impact forces that impinge on the structure and as boundary markers denoting the extent of the full system; the innermost buoys act as supporting floats. To avoid the environmental forces when a typhoon is coming, the floaters and the net are submerged below the water surface by manually opening the collar valves in the floaters and letting sea water flood into the floating tubes to gain extra weight. The total buoyant force of the innermost buoys must nevertheless be sufficient to overcome the total weight of the netting system, so that it does not sink to the sea floor and suffer abrasion damage.
MOORING SYSTEM
The main purpose of the mooring system is to hold the net at a specific location and prevent it from drifting away as environmental loadings act on it; the strength and durability of the mooring line material are therefore important factors. The materials most commonly used by the local fishing industry are nylon, PET (polyester) and PP (polypropylene). The specific gravity of nylon is 1.14 and that of PET is 1.38; both are heavier than water, and when installed in the field these materials tend to sink to the sea floor. The specific gravity of PP is about 0.91, so it may float to the surface if disconnected from the bottom anchors.
A mooring system failure can occur if the system encounters severe environmental forces, such as those during a strong typhoon. Once a cable breaks it may induce a domino effect: other mooring lines part in turn and the whole net system may wash away almost instantly. To reduce the impact forces on the mooring lines, distance buoys (fixed floatation) are installed to absorb these undesired forces. Three types of anchor are commonly used to fix the system to the sea floor: embedment anchors, pile anchors and deadweight anchors. Iron embedment anchors are suitable only on sandy or muddy bottoms, whereas pile and deadweight anchors can be used on rocky or sandy/muddy bottoms. Pile anchors must be inserted deeply into the substrate to gain

enough holding capacity, while deadweight anchors rely entirely on friction with the sea bottom to resist the horizontal tension forces in the mooring lines, and on the weight of the anchor to resist the vertical tension force. A worked check of this reasoning is sketched below.
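As a worked example of the deadweight-anchor reasoning above, the short check below tests whether an anchor of a given submerged weight holds a mooring line pull. The friction coefficient and the numbers are illustrative assumptions, not survey or design values.

import math

def deadweight_anchor_holds(T_h, T_v, W_submerged, mu=0.5):
    """Friction resists the horizontal line tension; submerged weight resists
    the vertical component (mu is an assumed sand/mud friction coefficient)."""
    not_lifted = W_submerged > T_v
    not_sliding = mu * (W_submerged - T_v) >= T_h
    return not_lifted and not_sliding

# e.g. a 1.2 kN peak line tension arriving at 30 degrees from horizontal:
T = 1.2e3
print(deadweight_anchor_holds(T * math.cos(math.radians(30)),
                              T * math.sin(math.radians(30)),
                              W_submerged=3.0e3))   # -> True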
SIMULATION
The net in the simulation is a 3 m wide and 3 m deep net sheet in a water depth of 10 m, divided into 10 equally sized elements. The net is oriented parallel to the y-axis and is subjected to regular waves and current running in the positive x-direction. The top point of the net is forced to follow the vertical displacement of the wave surface. The net behavior is simulated over multiple wave periods and is shown at several time instances in the simulations.
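The regular-wave forcing of the top point can be sketched as follows: the top-node heave is driven by the linear (Airy) surface elevation for the H and T values of Table 2. Deep-water dispersion is assumed here for simplicity, although the 10 m depth is only marginally deep for the longer waves.

import math

def airy_elevation(t, H=1.0, T=3.0):
    """Regular-wave surface elevation at a fixed horizontal position, m."""
    return 0.5 * H * math.cos(2.0 * math.pi * t / T)

def deep_water_wavelength(T, g=9.81):
    """Deep-water dispersion relation: L = g*T^2 / (2*pi)."""
    return g * T * T / (2.0 * math.pi)

# The top node of the net follows eta(t) for the wave case H = 1 m, T = 3 s:
for t in (0.0, 0.75, 1.5, 2.25, 3.0):
    print(f"t = {t:4.2f} s  heave = {airy_elevation(t):+.3f} m")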
RESULTS AND DISCUSSION
The net sheet used in the case studies is described in Table 1. Table 2 shows the parameters and parameter settings used in the different cases. In each case, one of the parameters was varied while the other parameters were kept constant.

Table 1: Net specification
Material                       Nylon 210D/96
Depth (m)                      3
Twine diameter (m)             0.0085
Mesh size (m)                  0.3
Elastic coefficient (kN/m²)    350900
Table 2: Parameter values for each of the six cases

Case  Parameter            Default values         Value settings
1     Wave geometry        v = 0                  H = 0.44, T = 2;  H = 1, T = 3;  H = 1.78, T = 4
2     Wave height          T = 4, v = 0.5         H = 0.44;  H = 1;  H = 1.78
3     Wave period          H = 1, v = 0.5         T = 3;  T = 6;  T = 9
4     Current              H = 1, T = 3           v = 0;  v = 0.3;  v = 0.6
5     Top point movement   H = 1, T = 3, v = 0    fixed;  follows wave in heave
6     Bottom weight        H = 1, T = 3, v = 0.5  weight = 10 kg;  20 kg;  30 kg

where v = current speed (m/s), H = wave height (m), T = wave period (s).
Case 1: Wave geometry
It can be observed that the wave with the longest period (T = 4 s) and largest height (H = 1.78 m) produces the largest forces in the net, as shown in Fig. 6.
Case 2: Wave height
With the wave period and current held constant, the wave with the greatest height (H = 1.78 m) produces the largest force on the mooring, as shown in Fig. 7.
Case 3: Wave period
From the results it can be concluded that the shortest wave (T = 3 s) produces the largest load on the mooring compared with the other waves, as shown in Fig. 8.
Case 4: Current
Three different levels of current in combination with waves (T = 3 s, H = 1 m) are applied to the net. The dynamic amplitude of the drag force is larger for the current cases than for the no-current case. This is due to a change in the angle between the top element and the horizontal plane caused by the current. As illustrated in Fig. 9, the current makes the angle between the top element and the horizontal plane smaller, and consequently the drag force (the horizontal component of the top element force) becomes larger (assuming constant element force). Thus, the presence of current results in higher drag forces and higher loads on the mooring lines.
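As a one-line check of this geometric argument: with a constant element tension \(T_e\), the horizontal (drag) component is \(F_x = T_e \cos\theta\), where \(\theta\) is the angle between the top element and the horizontal plane, so reducing \(\theta\) from 60° to 30° raises \(F_x\) from \(0.5\,T_e\) to about \(0.87\,T_e\).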



Fig. 6: Effect of different waves on mooring line tension (tension, kN, vs t/T)
Fig. 7: Effect of wave height on mooring line tension (tension, kN, vs t/T)
Fig. 8: Effect of wave period on mooring line tension (tension, kN, vs t/T)
Case 5: Top point movement
This case clearly illustrates the maximum structural force at the net/floater joint and at the net/bottom-weight joint. Fig. 10 shows the time history of the element structural force in the top and bottom elements when the structure is exposed to waves. The maximum element

structural force in these two elements indicates the structural force at the joint between the net and the floater, and at the joint between the net and the bottom weight. If the structural force at a joint exceeds the limit of the material used, the result is failure of the system. The parameters used are: wave 3 s / 1 m, no current, bottom weight 10 kg.
Two cases are analyzed here:
(i) a calm environment, with the floater fixed;
(ii) a harsh environment, with the floater moving with the waves.
When the floater moves with the waves, the dynamic amplitude of the force is approximately five times larger at the floater/net joint and at the net/bottom-weight point; the floater motion therefore contributes substantially to the forces and tensions in the net.

Fig. 9: Effect of current speed on mooring line tension (tension, kN, vs t/T)
Fig. 10: Element structural forces in the floater (left) and at the bottom weight (right) (structural force, kN, vs t/T; fixed vs follow-wave-in-heave)
For the net with the moving top point it can be observed that the force in the top element goes to zero, causing slack in the net when the movement of the floater is too large.

The large dynamic amplitude of the force in the bottom element is a direct consequence of this slack; the spikes in the curve
representing the minimum and maximum force in the bottom element coincide with the beginning and ending of the period when the
top element force is zero.
Situations where the net experiences slack should be avoided since this causes large forces which can result in net failure.
This implies the importance of modeling the behavior of the floater accurately in order to obtain good estimates for the structural
forces in the net.
Case 6: Bottom weight
The main function of the bottom weight on a net pen is to prevent deformation of the net when it is exposed to waves and current. For this purpose the weight should be as large as possible, but a large bottom weight also increases the loads on the net, as can be seen in Fig. 11: when the bottom weight increases from 10 to 30 kg, the dynamic amplitude increases approximately seven times. An increase in bottom weight thus has a larger impact on the forces at the joint between the net and the bottom weight than at the joint between the net and the floater.

Fig. 11: Effect of bottom weight on mooring line tension (tension, kN, vs t/T)
CONCLUSION
Based on the above simulation cases, several important features of the dynamic behavior of flexible porous structures exposed to waves and current have been identified. The wave with the longest period and largest height produces the largest forces in the net. The floater motion can cause slack in the net structure, and slack at the top of the net results in large dynamic forces at the bottom of the net. A higher current speed produces larger forces in the structure and hence larger mooring tension, while the dynamic amplitude of the wave-induced force on the mooring is smaller when the net is exposed to a current in either direction. A short wave causes a larger load on the structure. The motion of the floater is the main contributor to the forces in the net. Finally, an increase in the mass of the bottom weight leads to an increase in the dynamic force, especially at the bottom of the net.

REFERENCES:
[1] Robert A. Dalrymple, "Water Wave Mechanics for Engineers and Scientists"
[2] A. T. Chan and S. W. C. Lee, "Wave characteristics past a flexible fishnet"
[3] OrcaFlex Manual, version 9.4
[4] T. L. Yip, T. Sahoo and Allen T. Chwang, "Trapping of surface waves by porous and flexible structures"
[5] Pal F. Lader, Anna Olsen, Atle Jensen, Johan Kristian Sveen, Arne Fredheim and Birger Enerhaug, "Experimental investigation of the interaction between waves and net structures: damping mechanism"
[6] Pal F. Lader, Arne Fredheim and Egil Lien (SINTEF Fisheries and Aquaculture), "Dynamic Behavior of 3D Nets Exposed to Waves and Current"
[7] Chai-Cheng Huang, Hung-Jie Tang and Jin-Yuan Liu, "Dynamical analysis of net cage structures for marine aquaculture: numerical simulation and model testing"
[8] Yun-Peng Zhao, Yu-Cheng Li, Guo-Hai Dong, Fu-Kun Gui and Hao Wu, "An experimental and numerical study of hydrodynamic characteristics of submerged flexible plane nets in waves"
[9] P. Suresh Kumar and T. Sahoo, "Wave Interaction with a Flexible Porous Breakwater in a Two-Layer Fluid"
[10] Keh-Han Wang and Xugui Ren, "Water Waves on Flexible and Porous Breakwaters"
[11] R. J. Kennedy and J. Marsalek, "Flexible porous floating breakwaters"
[12] Hany Ahmed, "Wave Interaction with Vertical Slotted Walls as a Permeable Breakwater"














Enhanced Fault Ride-Through Technique for PMSG Wind Turbine Systems Using DC-Link-Based Rotor-Side Control
Anas Abdulqader Khalaf; Prof. P. D. Bharadwaj, Assistant Professor
E-mail: anas_abed1988@yahoo.com

Abstract: Wind power is a clean technology for electric power generation: it consumes no oil, but its output depends on nature, especially on wind speed and direction, so the electric power it feeds into the network is unsteady. The permanent magnet synchronous generator (PMSG) is one of the machine types used for wind turbines, and its output fluctuates with the atmosphere, particularly the wind speed; grid integration of such wind power therefore relies on power-electronic converters, and variable-speed operation brings both advantages and disadvantages. In a PMSG-based WECS the loading on the converters changes with the variable network conditions. In the technique considered here, the generator-side converter is controlled to reduce the oscillations arising from the drive train, while the grid-side converter is controlled according to the grid-code requirements; this reduces, to some extent, the disturbances in the network and the effects of faults that occur on the network side. The proposed strategy keeps the DC-link voltage within limits during faults by means of a DC chopper, and is compared with the alternative scheme without the chopper.

Keywords: PMSGs, WECS, ESS, STATCOM, LSC.

COMPARISON OF THE PMSG-BASED WIND TURBINE DESIGN: LITERATURE-SURVEY CONTROL SCHEME VERSUS PROPOSED CONTROL SCHEME
A PMSG-based WECS is simulated and analyzed when subjected to system faults. The PMSG-based wind power unit is connected to the utility grid via a step-up transformer and a transmission line. The PMSG-based WECS was implemented in MATLAB/SIMULINK, where the three different converter models described above are used separately for the purpose of comparison.










Design of the Unified Power Control for the MW-Class PMSG-Based WECS in MATLAB
Fig. 1: PMSG-based WECS with DC chopper. Fig. 2: PMSG-based WECS without DC chopper.
The differences between the proposed control scheme and the literature-survey control scheme, for the rotor-side converter (RSC) and the grid-side converter (GSC), are shown in the block diagrams of Figs. 3-6.
Fig. 3: RSC in the proposed control scheme. Fig. 4: RSC in the literature-survey control scheme.
Fig. 5: GSC in the proposed control scheme. Fig. 6: GSC in the literature-survey control scheme.

SIMULATION RESULTS: PROPOSED SCHEME VERSUS LITERATURE-SURVEY SCHEME
1) Operation with unsymmetrical grid faults
A) Literature-survey scheme without DC chopper and proposed technique with chopper, AG fault:
Fig. 7: Rotor speed, EM torque and stator A-phase current for the literature-survey control scheme for an AG fault. Fig. 8: Rotor speed, EM torque and stator A-phase current for the proposed system with chopper for an unsymmetrical AG fault.
Fig. 9: DC-link voltage for the literature-survey control scheme without DC chopper for an AG fault. Fig. 10: DC-link voltage for the proposed system with chopper for an unsymmetrical AG fault.



Fig. 11: Stator voltage and current for the literature-survey control scheme without DC chopper for an AG fault. Fig. 12: Grid voltage and current for the proposed system with chopper for an unsymmetrical AG fault.
B) Literature-survey scheme without DC chopper and proposed technique with chopper, ABG fault:
Fig. 14: Rotor speed, EM torque and stator A-phase current for the literature-survey scheme without DC chopper for an ABG fault. Fig. 15: Rotor speed, EM torque and stator A-phase current for the proposed system with chopper for an unsymmetrical ABG fault.







Fig. 16: DC-link voltage for the literature-survey control scheme for an ABG fault. Fig. 17: DC-link voltage across the capacitor for the proposed system with DC chopper for an unsymmetrical ABG fault.
Fig. 18: Stator voltage and current for the literature-survey control scheme for an ABG fault. Fig. 19: Grid voltage and current for the proposed system with chopper for an unsymmetrical ABG fault.
2) Operation with symmetrical grid faults: literature-survey scheme without DC chopper and proposed technique with DC chopper
Fig. 20: Rotor speed, EM torque and stator A-phase current for the proposed system with chopper. Fig. 21: Rotor speed, EM torque and stator A-phase current for the literature-survey scheme without DC chopper for an ABCG fault.
It can be observed that the speed and torque oscillations are high without the chopper compared with the chopper case, and that the stator current drops to nearly zero.


Fig. 22: DC-link voltage for the proposed system with chopper for a symmetrical fault. Fig. 23: DC-link voltage for the literature-survey scheme without DC chopper for an ABCG fault.
Without the chopper, the DC-link voltage at the capacitor collapses from 1000 V to zero, whereas with the chopper it decreases only from 1000 V to about 400 V; the voltage can therefore be maintained much better with the chopper circuit. It can also be verified that the voltage and current at the generator terminals decrease from 1 pu to zero without the chopper, but are maintained at about 0.1 pu with the chopper.
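The chopper action described here can be illustrated with a minimal control sketch: a hysteresis comparator switches a braking resistor across the DC link whenever the voltage rises above a threshold. The thresholds, resistor and capacitance below are illustrative assumptions, not values from the paper's model.

# Minimal sketch of a hysteresis-controlled DC-link braking chopper.
V_ON, V_OFF = 1100.0, 1050.0   # enable/disable thresholds, V (assumed)
R_BRAKE = 2.0                  # braking resistor, ohm (assumed)
C_DC = 0.05                    # DC-link capacitance, F (assumed)

def dc_link_step(v_dc, p_gen, p_grid, chopper_on, dt=1e-4):
    """One step of the DC-link energy balance: C*v*dv/dt = P_gen - P_grid - P_brake."""
    if v_dc >= V_ON:
        chopper_on = True      # surplus energy: dump it in the resistor
    elif v_dc <= V_OFF:
        chopper_on = False
    p_brake = v_dc * v_dc / R_BRAKE if chopper_on else 0.0
    v_dc += (p_gen - p_grid - p_brake) / (C_DC * max(v_dc, 1.0)) * dt
    return v_dc, chopper_on

# During a grid fault p_grid collapses while p_gen persists; without the
# chopper v_dc runs away, with it the voltage is clamped near V_ON.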



Fig. 24: Grid voltage and current for the proposed system with DC chopper for a symmetrical fault. Fig. 25: Stator voltage and current for the literature-survey control scheme for an ABCG fault.
ACKNOWLEDGMENT
I am very grateful to my institute, Bharati Vidyapeeth Deemed University College of Engineering, Pune, and to my guide Prof. P. D. Bharadwaj, Assistant Professor, as well as the other faculty and associates of the Electrical Engineering Department, who directly or indirectly helped me with this work. This work was carried out by a research scholar of the Department of Electrical Engineering, Bharati Vidyapeeth Deemed University College of Engineering, Pune.
CONCLUSION
The variability of the wind is the main factor that limits the power capability and balance of a WECS, so such systems need control techniques that meet the required power quality. The PMSG-based WECS strategy studied here controls the generator-side converter to damp the oscillations coming from the drive train, which would otherwise aggravate disturbances, while the grid-side converter regulates the power delivered to the network; control on the network side adds damping and gives a faster and more accurate response when a fault occurs in the network. The DC chopper is a necessary element of the strategy: it limits the DC-link voltage changes and the resulting distortions when a fault occurs. For practical engineering designs the scheme should be simple to implement and responsive to any variation on the generator side; since the DC-link voltage is prone to fluctuation, the chopper-based strategy is a useful option to bring into operation when a fault occurs.
















Numerical Simulation for Heat Transfer Enhancement in a Triangular Ribbed Channel with Winglet Vortex Generators
Amit Garg (M.Tech. Scholar) and Sunil Dhingra (Assistant Professor), Department of Mechanical Engineering, U.I.E.T, Kurukshetra, India
E-mail: amit.garg337@gmail.com

Abstract: Counter-rotating longitudinal vortices produced by winglets in a channel are known to enhance heat transfer. In the present investigation, numerical simulations are performed to study the forced-convection heat transfer and friction-loss behavior in a triangular-ribbed channel with longitudinal winglet vortex generators (WVGs), for turbulent airflow under constant heat flux. The ribs placed on the opposite channel walls to create a reverse flow have an isosceles-triangle cross-section and are arranged in a staggered array. Two pairs of WVGs with attack angles (α) of 60°, 45° and 30° are mounted at the test-duct entrance to create a longitudinal vortex flow through the test channel. The work is carried out for a rectangular duct of aspect ratio AR = 10 and height H = 30 mm, with a single rib height e/H = 0.13 and rib pitch P/H = 1.33. The flow rate, expressed as a Reynolds number based on the inlet hydraulic diameter of the channel, ranges from 5200 to 22,000. Solving the simulation consists of modeling and meshing the basic geometry using the ANSYS-CFD package; the boundary conditions are then set in Fluent based on the experimental data of the reference paper, and the results are examined in CFD-Post. This work presents a numerical study of the mean Nusselt number, friction factor and thermal enhancement characteristics. The simulation results show a significant effect of the rib turbulators and the WVGs on the heat transfer rate and friction loss relative to the smooth-wall channel. The Nusselt number and friction factor when both the ribs and the WVGs are used are considerably higher than when the ribs or the WVGs are used alone.
Key words: CFD, Heat transfer, Friction factor, rib, Turbulent flow model, winglet, longitudinal vortex generator, swirl flow
INTRODUCTION
Heat transfer enhancement is the process of modifying a heat transfer surface to increase the heat transfer coefficient. Heat exchangers in processing plants, air-conditioning plants, and petrochemical, biomedical and food-processing plants serve to heat and cool different types of fluids. The performance of these heat exchangers can be improved by adding protrusion-type vortex generators such as fins, ribs, wings and winglets on the gas side of the core. When longitudinal vortex generators are placed near a heat transfer surface, they increase the heat transfer by transporting fluid from the wall into the free stream and vice versa. The effectiveness of a vortex generator in enhancing heat transfer depends on the vortex strength generated per unit area of the generator. A winglet pair set at an angle of attack is particularly effective, since the longitudinal vortices it generates persist for hundreds of wing chords downstream and continue to enhance heat transfer.
In past decades, many researchers have investigated the effect of fins, ribs, wings and winglets on the heat transfer and friction factor of a smooth channel, in both experimental and numerical studies. Han et al. [1] studied experimentally the heat transfer in a square channel with ribs on two walls for nine different rib configurations with P/e = 10 and e/H = 0.0625. They found that angled ribs and V-ribs yield higher heat transfer enhancement than continuous ribs, and that the heat transfer rate and friction factor were highest for the 60° orientation among the angled ribs. For heating either only one of the ribbed walls, both of them, or all four channel walls, Han et al. [2] also reported that the former two conditions resulted in an increase in heat transfer with respect to the latter. Using real-time laser holographic interferometry to measure the local as well as the average heat transfer coefficient, Liou and Hwang [3],[4] investigated experimentally the performance of square, triangular and semi-circular ribs and found that square ribs give the best heat transfer performance; this is contrary to the experimental result of Ahn [5], which indicated that the triangular rib performs better than the square one.

Tanda [6] examined the effect of transverse ribs, angled ribs, discrete ribs, angled discrete ribs, V-shaped ribs, V-shaped broken ribs and parallel broken ribs on heat transfer and friction, and reported that 90° transverse ribs provided the lowest thermal performance, while 60° parallel broken ribs or 60° V-shaped broken ribs yielded a higher heat transfer augmentation than 45° parallel broken ribs or 45° V-shaped broken ribs. Thianpong et al. [7] investigated the thermal behavior of isosceles triangular ribs attached to the two opposite channel walls with AR = 10 and suggested that the optimum thermal performance of the staggered ribs could be at about e/H = 0.1 and P/H = 1.0.
In general, swirl/vortex flow generators are used to augment heat transfer in several engineering applications, such as heat exchangers, vortex combustors and drying processes. The best methods of generating a decaying swirl/vortex flow use winglets, classified as delta, triangular and rectangular types [8-15]. These winglets are designed to create longitudinal vortices that increase turbulence levels, improving heat transfer performance with a minimal pressure-loss penalty. Heat transfer enhancement by winglet-type vortex generators mounted at the leading edge of a flat plate was found to give about a 50-60% improvement in average heat transfer over the surface of the plate [9-14].
Nomenclature
A      convection heat transfer area of channel, m²
AR     aspect ratio of channel (W/H)
Cp     specific heat capacity of air, J/(kg K)
D      hydraulic diameter, m (2HW/(H+W))
e      rib height, m
f      friction factor
H      channel height, m
h      average heat transfer coefficient, W/(m² K)
k      thermal conductivity of air, W/(m K)
L      length of tested channel, m
Nu     Nusselt number (hD/k)
P      pitch (axial length of rib cycle), m
ΔP     pressure drop, Pa
Pr     Prandtl number
Re     Reynolds number (ρUD/μ)
Q      heat transfer, W
Ts     average temperature of heated wall, K
To     average temperature at outlet, K
Ti     inlet temperature, K
t      thickness of rib, m
U      mean velocity, m/s
V      volumetric flow rate, m³/s
W      width of channel, m
WVGs   winglet type vortex generators

Greek letters
α      attack angle of WVGs, degree
ρ      density of air, kg/m³
η      thermal enhancement factor
μ      fluid dynamic viscosity, kg s⁻¹ m⁻¹

Subscripts
o      smooth channel
conv   convection
i      inlet
o      outlet
pp     pumping power
s      channel surface

Biswas et al. [16] carried out a numerical and experimental study of the flow structure and heat transfer effects of longitudinal vortices in fully developed channel flow. They defined a performance quality factor indicating the heat transfer enhancement for a given pressure-loss penalty and, based on its value, concluded that the performance of the winglet is best for an attack angle of 15°. Sohankar and Davidson [17] attempted unsteady three-dimensional Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) of heat and fluid flow in a plate-fin heat exchanger with thick rectangular winglet-type vortex generators at Reynolds and Prandtl numbers of 2000 and 0.71, respectively. Pongjet [18] reported experimental results indicating that the use of ribs together with WVGs causes a moderate pressure-drop increase, f/f0 = 2.2-5.5, especially for the in-line rib array and the larger attack angles, and also provides considerable heat transfer augmentation, Nu/Nu0 = 2.2-2.6, depending on the attack angle and Reynolds number. The combined staggered ribs and WVGs with a lower angle of attack should be applied instead of the ribs or WVGs alone to obtain higher heat transfer and a performance gain of about 40-65%, leading to a more compact heat exchanger.
In the present study, the effect of the swirling flow generated by the winglet vortex generators on the heat transfer and pressure-drop characteristics in a staggered-rib channel is examined by CFD analysis in ANSYS Fluent 14.0, and the Nusselt number and friction factor are compared with a previous correlation [18]. The Nusselt number, friction factor and thermal performance factor (η) are examined under uniform wall heat flux with air as the working fluid.
NUMERICAL SIMULATION
2.1 Physical Model. With reference to Fig. 2.1, the geometrical details of the flow simulation are as follows: the channel configuration is characterized by the channel height H and the axial length of a cycle (pitch) P, whose values are 30 mm and 40 mm respectively. Each ribbed wall is 300 mm wide and 440 mm long (L). The ribs are 4 mm high (e) and 20 mm thick (t). Each WVG sheet is 60 mm long and 20 mm high, as sketched in Fig. 2.2, and is placed at the lower-plate entrance with an attack angle (α) of 60°, 45° or 30° to the axial direction. In this work, the combination of two phenomena, (1) the re-circulating/reverse flow induced by the ribs and (2) the vortex flow created by the WVGs, is expected to be effective in the vicinity of the tested channel wall, where the thermal resistance is high.


Fig 2.1: Test section with WVGs:
(a) In-line (b) staggered rib
Fig 2.2: Configuration of WVGs pairs


2.2 Numerical Method. The numerical simulations were carried out with the ANSYS-14.0 CFD software package (Fluent), which uses the finite-volume method to solve the governing equations. The geometry was created for air flowing in an electrically heated copper channel, and the mesh was generated in the ANSYS model with tetrahedral elements (Fig. 2.3). In this study the Reynolds number varies from 5200 to 22,000.






Fig 2.3: ANSYS volume-meshing

For turbulent, steady and incompressible air flow with constant properties .We follow the three-dimensional equations of continuity,
momentum and energy, in the fluid region.

These equations are below:
Continuity equation:
. 0 ) .( = V +
c
c
v

t
. (1)
Momentum equation:
. ) .( ) .(
) (
g p
t
t vv
v
+ V + V = V +
c
c

.. (2)
Energy equation:
)). ( .( )) ( .(
) (
.
v t v

eff eff
T k p E
t
E
+ V V = + V +
c
c
. (3)
Reynolds stress to the mean velocity gradients as shown below:
. ) (
3
2
) (
' '
ij
k
k
t
i
j
j
i
t j i
x
u
k
x
u
x
u
u u o
c
c
+
c
c
+
c
c
= . ... (4)
An appropriate turbulence model is used to compute the turbulent viscosity term
t
. The turbulent viscosity is given as

c

2
k
C
t
= . . (5)
The velocity-pressure linkage was solved by the SIMPLE algorithm. To validate the accuracy of the numerical solutions, a grid-independence test was performed for the physical model, as sketched below.
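A grid-independence test of the kind mentioned here can be sketched as follows; nu_for_mesh is a hypothetical stand-in for one converged CFD run at a given cell count, and the 1% tolerance is an assumed acceptance criterion.

def grid_independent_mesh(nu_for_mesh, cell_counts, tol=0.01):
    """Return the first mesh size whose average Nu changes by less than tol
    relative to the previous (coarser) mesh, or None if none qualifies."""
    previous = None
    for cells in sorted(cell_counts):
        nu = nu_for_mesh(cells)          # one converged solution per mesh
        if previous is not None and abs(nu - previous) / abs(previous) < tol:
            return cells
        previous = nu
    return None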

Table 1.1: Properties of air at 25 °C
Property                        Value
Density, ρ                      1.225 kg/m³
Specific heat capacity, Cp      1006 J/(kg K)
Thermal conductivity, k         0.0242 W/(m K)
Viscosity, μ                    1.7894 × 10⁻⁵ kg/(m s)

Table 1.2: Nodes and elements in the geometry
Turbulator                       Nodes     Elements
In-line rib without WVGs         594336    551250
Staggered rib without WVGs       654736    607500
WVGs 60° with staggered rib      270181    1406439
WVGs 45° with staggered rib      268145    1495412
WVGs 30° with staggered rib      271617    1495482
Table 1.2 shows that the WVGs with staggered ribs have the maximum number of elements compared with the other turbulators.
In addition, a convergence criterion of 10⁻⁶ was used for the energy equation and 10⁻³ for mass conservation and the other calculated parameters. The air inlet temperature was specified as 300 K, and three assumptions were made in the model: (1) uniform heat flux along the heated wall; (2) an adiabatic wall in the inlet calming section; (3) steady, incompressible flow. In Fluent, a velocity boundary condition was applied at the inlet and a pressure condition at the outlet.
2.3 Data Reduction
Three important parameters are considered: the friction factor, the Nusselt number and the thermal performance factor, which determine the friction loss, the heat transfer rate and the effectiveness of heat transfer enhancement in the rectangular channel, respectively.

The Reynolds number based on the channel hydraulic diameter D is
\[ Re = \frac{\rho U D}{\mu} \]

The average heat transfer coefficient is evaluated from the computed temperatures and heat input. With heat added uniformly to the fluid (Q) and the temperature difference between the heated wall and the fluid (\(T_s\), \(T_b\)),
\[ h = \frac{Q}{A\,(T_s - T_b)}, \qquad T_b = \frac{T_o + T_i}{2} \]
where A is the convective heat transfer area of the heated channel wall. The average Nusselt number is then
\[ Nu = \frac{hD}{k} \]

The friction factor f is obtained from the pressure drop \( \Delta P \) across the channel length L:
\[ f = \frac{2\,\Delta P}{(L/D)\,\rho U^2} \]
RESULTS AND DISCUSSION
3.1 Validation of Setup. The CFD results for the smooth channel have been validated against the experimental data, as shown in Figs. 3.1 and 3.2. The results deviate by at most about 9% for the heat transfer (Nu) and 7% for the friction factor (f). At low Reynolds numbers the deviation between the experimental and CFD results is small, and it grows slightly as the Reynolds number increases.

Fig 3.1: Nusselt Vs Reynolds number


Fig 3.2: Friction factor Vs Reynolds number

3.2 Heat Transfer. The effect of the rib geometry and the WVGs on the heat transfer rate is presented in terms of the Nusselt number in Fig. 3.3. The rib turbulators in conjunction with the WVGs provide considerable heat transfer enhancement in comparison with the smooth channel, and the Nusselt number for the combined turbulators increases with the Reynolds number. This is because the ribs interrupt the development of the boundary layer of the fluid flow and create a reverse/recirculating flow behind each rib, while the WVG pairs generate longitudinal vortex flows that help wash the reverse flow trapped behind the ribs into the core flow. Using the ribs together with the WVGs gives a heat transfer rate about 40% higher than that of the ribs alone. The Nusselt number for the combined staggered ribs and WVGs is about 90%, 85% and 80% above that of the smooth channel for the WVGs with α = 30°, 45° and 60°, respectively.

The Nusselt number ratio, Nu/Nu0, defined as the ratio of the augmented Nusselt number to the Nusselt number of the smooth channel, is plotted against Reynolds number in Fig. 3.4. The ratio tends to decrease slightly as the Reynolds number rises for the combined turbulators. It is interesting to note that at higher Reynolds numbers the Nu/Nu0 values of the in-line and staggered combined turbulators are nearly the same.

Fig 3.3: Variation of Nu with Re for using ribs & WVGs.


Fig. 3.4: Nu/Nu0 vs Re for ribs & WVGs.

3.3 Friction Factor. The variation of the pressure drop is shown in Fig. 3.5 in terms of the friction factor versus Reynolds number. It is apparent that the combined ribs and WVGs lead to a considerable increase in friction factor over the ribs alone or the smooth channel. As expected, the friction factor obtained with the combined ribs and WVGs is significantly higher than with the ribs alone, especially for the larger attack angles and the in-line array. The increase in friction factor of the combined ribs and WVGs is in the range of 2.3-5.8 times that of the smooth channel, depending on the attack angle, the array and the Reynolds number. The friction factor of the combined ribs and WVGs is higher than that of the ribs alone by around 25-125%.
Fig. 3.6 presents the variation of the friction factor ratio, f/f0, with Reynolds number. The ratio tends to increase with the Reynolds number for all cases.

Fig. 3.5: Variation of f with Re for ribs & WVGs (smooth channel; staggered rib; in-line; staggered rib with WVG 60°, 45°, 30°)


Fig. 3.6: f/f0 vs Re for ribs & WVGs.

3.4 Thermal Performance Factor. The variation of the thermal enhancement factor (η) with Reynolds number for all turbulators is depicted in Fig. 3.7. For all cases, the data obtained from the measured Nusselt number and friction factor values are compared at equal pumping power. It is visible in the figure that the enhancement factors (η) for the combined turbulators are generally above unity and much higher than those for a single turbulator, indicating that using ribs in conjunction with the WVGs is advantageous over a single turbulator. The enhancement factor tends to decrease with rising Reynolds number for all turbulators. The 60° WVGs yield the lowest enhancement factor among all the WVGs because of the high flow blockage; larger attack angles of the WVGs should therefore be avoided. The enhancement factor (η) of the combined staggered ribs and 30° WVGs is found to be the best among all turbulators used, reaching about 1.67 at the lowest Reynolds number.
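The enhancement-factor comparison at equal pumping power is easy to reproduce from the reported ratios. A minimal Python sketch, assuming the usual definition η = (Nu/Nu0)/(f/f0)^(1/3) (the numerical values below are illustrative, not taken from the paper's data):

```python
# Thermal enhancement factor at equal pumping power.
# Illustrative ratios for one turbulator at one Reynolds number.
Nu_ratio = 95.0 / 52.0    # Nu / Nu0 (augmented / smooth channel)
f_ratio = 0.095 / 0.028   # f / f0

# eta > 1 means the turbulator outperforms the smooth channel
# when compared at the same pumping power.
eta = Nu_ratio / f_ratio ** (1.0 / 3.0)
print(f"eta = {eta:.3f}")
```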

Fig. 3.7: Variation of thermal enhancement factor with Reynolds number for various turbulators (staggered rib; in-line; staggered rib with WVG 60°, 45°, 30°)

3.5 Velocity and Pressure Contour Plots. In Fig. 3.8, the velocity contour along the z-y plane shows the velocity profile along the WVGs. It shows that the insertion of the ribs interrupts the development of the boundary layer of the fluid flow and creates a reverse/recirculating flow behind the rib, while the WVG pairs generate longitudinal vortex flows that help wash the reverse flow trapped behind the ribs into the core flow. The velocity increases due to the WVGs, and turbulence is created in the channel.

In Fig. 3.9, the pressure contour shows that the pressure drops along the channel length. Due to the WVGs, the pressure drop also increases because of the turbulence induced in the fluid along the WVGs in the staggered-rib rectangular channel.

Fig 3.8: Velocity contour along z-y plane


Fig 3.9: Pressure contour along z-y plane

Fig.3.10: Temperature contour along z-x plane


Fig. 3.11(a) & Fig. 3.11(b) shows the velocity contour, and we can find all the values at any point. As we see around the wall and
WVGs velocity becomes zero. With the insertion WVGs inside the high aspect ratio channel velocity will be decreased at all Reynolds
number compared to that of simple staggered. Fig8. (c), (d) shows the temperature contour, when velocity increases then temperature
will be decreases along the flow regime inside channel. Insertion of WVGs causes swirling flow around channel that causes
temperature increase in flow. Fig8.(e) (f) shows the pressure contour along the mid plane of rectangular channel. Pressure will
increase due to increase of Reynolds number along the length of channel and due to insertion of WVGs pressure will increase due to
back flow generated by the wings of vortex pair.
Fig 3.11 Velocity, Temperature and pressure contour at Re No. 5200, 14300, 19100 and 22000 for staggered rib and combined
turbulator drawn at Fig. 3.11 (a),(b),(c),(d),(e) and (f)


Fig 3.11(a): Velocity contour for staggered rib    Fig 3.11(b): Velocity contour for combined rib & WVGs


Fig 3.11(c): Temperature contour for staggered rib    Fig 3.11(d): Temperature contour for combined rib & WVGs


Fig 3.11(e): Pressure contour for staggered rib    Fig 3.11(f): Pressure contour for combined rib & WVGs
At low Reynolds numbers the difference between the staggered rib and the combined turbulator is small, but at higher Reynolds numbers there is a large difference, due to the recirculating flow generated by the wings of the WVGs.
CONCLUSION
The effect of combined triangular staggered ribs and WVG turbulators in a high-aspect-ratio channel in the turbulent regime, for Reynolds numbers from 5200 to 22,000, on the heat transfer (Nu), friction factor (f) and thermal performance factor (η) has been investigated numerically with the ANSYS-14 software. The following conclusions are drawn:

It is clearly seen that as the Reynolds number increases, the heat transfer coefficient also increases. With the combined staggered rib and WVGs at attack angles of 30°, 45° and 60°, the heat transfer rate increases by 90%, 85% and 80%, respectively, over the smooth channel.
The in-line and staggered channels give lower friction factor values than the combined rib and WVGs. The friction factor ratios for the combined rib and WVGs with attack angles of 30°, 45° and 60° are in the ranges 2.27-3.63, 2.51-3.9 and 2.75-4.3, respectively.
It has been observed that the thermal performance factor tends to decrease with increasing WVG attack angle and with increasing Reynolds number. The maximum thermal performance for the combined turbulators with WVGs of 30°, 45° and 60° is found to be 1.674, 1.513 and 1.589, respectively.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge the financial support by the Kurukshetra University Research Fund.

REFERENCES:
Han JC, Zhang YM, Lee CP. Augmented heat transfer in square channels with parallel, crossed and V-shaped angled ribs. ASME J Heat Transfer 1991;113:590-6.
Han JC, Zhang YM, Lee CP. Influence of surface heat flux ratio on heat transfer augmentation in square channels with parallel, crossed, and V-shaped angled ribs. ASME J Turbomach 1992;114:872-80.
Liou TM, Hwang JJ. Turbulent heat transfer augmentation and friction in periodic fully developed channel flows. ASME J Heat Transfer 1992;114:56-64.
Liou TM, Hwang JJ. Effect of ridge shapes on turbulent heat transfer and friction in a rectangular channel. Int J Heat Mass Transfer 1993;36:931-40.
Ahn SW. The effects of roughness types on friction factors and heat transfer in roughened rectangular duct. Int Commun Heat Mass Transfer 2001;28:933-42.
Tanda G. Heat transfer in rectangular channels with transverse and V-shaped broken ribs. Int J Heat Mass Transfer 2004;47:229-43.
Thianpong C, Chompookham T, Skullong S, Promvonge P. Thermal characterization of turbulent flow in a channel with isosceles triangular ribs. Int Commun Heat Mass Transfer 2009;36(7):712-7.
Biswas G, Mitra NK, Fiebig M. Heat transfer enhancement in fin-tube heat exchangers by winglet type vortex generators. Int J Heat Mass Transfer 1994;37:283-91.
Gentry MC, Jacobi AM. Heat transfer enhancement by delta-wing vortex generators on a flat plate: vortex interactions with the boundary layer. Exp Therm Fluid Sci 1997;14:231-42.
Biswas G, Torii K, Fujii D, Nishino K. Numerical and experimental determination of flow structure and heat transfer effects of longitudinal vortices in a channel flow. Int J Heat Mass Transfer 1996;39:3441-51.
Chen Y, Fiebig M, Mitra NK. Heat transfer enhancement of finned oval tubes with staggered punched longitudinal vortex generators. Int J Heat Mass Transfer 2000;43:417-35.
Torii K, Kwak KM, Nishino K. Heat transfer enhancement accompanying pressure-loss reduction with winglet-type vortex generators for fin-tube heat exchangers. Int J Heat Mass Transfer 2002;45:3795-801.
Gentry MC, Jacobi AM. Heat transfer enhancement by delta-wing-generated tip vortices in flat-plate and developing channel flows. ASME J Heat Transfer 2002;124:1158-68.
Kwak KM, Torii K, Nishino K. Simultaneous heat transfer enhancement and pressure loss reduction for finned-tube bundles with the first or two transverse rows of built-in winglets. Exp Therm Fluid Sci 2005;29:625-32.
Allison CB, Dally BB. Effect of a delta-winglet vortex pair on the performance of a tube-fin heat exchanger. Int J Heat Mass Transfer 2007;50:5065-72.
Sohankar A, Davidson L. Numerical study of heat and fluid flow in a plate-fin heat exchanger with vortex generators. Turbulence Heat and Mass Transfer 2003;4:1155-62.
Promvonge P, Thianpong C, Chompookham T, Kwankaomeng S. Enhanced heat transfer in a triangular ribbed channel with longitudinal vortex generators. Energy Conversion and Management 2010;51:1242-9.




CFD Analysis for Heat Transfer Enhancement inside a Circular Tube Using
Twisted Tapes with Different Length Ratios
Sunny Gulia¹, Dr. Anuradha Parinam¹
¹Department of Mechanical Engineering, University Institute of Engineering & Technology, Kurukshetra, India

ABSTRACT: Computational Fluid Dynamics (CFD) is a useful tool for solving and analyzing problems that involve fluid flows. Here a twisted tape is inserted in a tube with a view to generating swirl flow; with this helical tape the pressure drops and the heat transfer rate increases due to the swirl flow. The flow rate in the tube is considered in a range of Reynolds number from 4000 to 20000. The numerical simulation treats helical-tape swirl generation with tapes of different length ratios (LR) of 0.29, 0.43, 0.57 and 1. The solution process consists of modeling and meshing the basic geometry of the twisted tape using the ICEM-CFD package. The boundary conditions are then set, based on the experimental data of the reference papers, before simulation in Fluent. Finally, the results are examined in CFD-Post. This work presents a numerical study of the mean Nusselt number, friction factor and enhancement characteristics in a round tube with helical tape inserts under a uniform wall heat flux of 1000 W/m². The full-length twisted tape is inserted into the tested tube at a single twist ratio of y/w = 4, while short-length tapes mounted at the entry test section are used at several tape length ratios (LR = l_s/l_f) of 0.29, 0.43, 0.57 and 1.
Key words: CFD, heat transfer, friction factor, twisted tape with length ratios, swirl flow, thermal performance.
1. Introduction
Heat transfer enhancement is the process of modifying a heat transfer surface to increase the heat transfer coefficient between the surface and a fluid. A majority of the heat exchangers used in thermal power plants, chemical processing plants, air conditioning equipment, refrigerators, and petrochemical, biomedical and food processing plants serve to heat and cool different types of fluids. Both the mass and the overall dimensions of the heat exchangers employed are continuously increasing with the unit power and the volume of production. In the past decades, many researchers have investigated the effect of twisted-tape geometry on the heat transfer and friction factor values in a circular tube in both experimental and numerical studies. For the experimental work, Saha and Dutta [1] determined the influence of Prandtl number [water, 2.5<Pr<5.18] on the friction factor and heat transfer rate in a circular tube with short-length, full-length and smoothly varying pitch tapes. Tariq et al. [2] investigated the heat transfer coefficient in an internally threaded tube, finding it approximately 20 per cent higher than that in a smooth tube using air [1300<Re<10000]. Saha and Bhunia [3] found that the heat transfer characteristics depended on the twist ratio, Re and Pr, and that uniform pitch performed better than gradually decreasing pitch with twisted-tape inserts (twist ratio 2.5<y<10) using Servotherm medium oil (45<Re<840). Ray and Date [4] numerically investigated the local Nusselt number, which peaks at cross-sections where the tape aligns with the diagonal of the duct, using a full-length twisted tape with width equal to the side of the duct [water, 100<Re<3000, Pr<5]. Lokanath and Misal [5] found a larger overall heat transfer coefficient in water-to-water mode than in oil-to-water mode [water, 3<Pr<6.5, lube oil (Pr=418)] with twisted tape. Saha et al. [6] conducted experiments in a circular tube with regularly spaced twisted tape: (1) pinching the twisted tape gave better thermo-hydraulic performance than connecting with a thin rod, and (2) reducing the tape width gave poor results, while phase angles larger than zero were not effective. Al-Fahed and Chakroun [7] performed experiments in a single shell-and-tube heat

exchanger using twisted tapes with twist ratios of 3.6, 5.4 and 7.1 and a microfin tube; a low twist ratio results in a low pressure drop, and a tight fit further increases the heat transfer.

Nomenclature

E      total energy
f      friction factor = ΔP / ((L/D)(ρU²/2))
C_p    specific heat of fluid, J kg⁻¹ K⁻¹
D      inside diameter of test tube, m
h      heat transfer coefficient, W m⁻² K⁻¹
k      thermal conductivity of fluid, W m⁻¹ K⁻¹
L      length of test section, m
LR     tape-length ratio = l_s / l_f
l_f    full-length twisted tape
l_s    short-length twisted tape
Nu     Nusselt number = hD/k
ΔP     pressure drop across test tube, Pa
P      pressure of flow in stationary tube, Pa
Pr     Prandtl number = μC_p/k
Re     Reynolds number = ρvD/μ
t      thickness of test tube, m
T      temperature, K
v      velocity of fluid, m/s
w      tape width, m
y      pitch length of twisted tape (180° rotation), m
y/w    twist ratio
CR     clearance ratio = C/D

Greek symbols

ρ      fluid density, kg m⁻³
μ_t    eddy viscosity
δ      tape thickness, m
ε      turbulent dissipation rate
μ      fluid dynamic viscosity, kg s⁻¹ m⁻¹
η      thermal performance factor
Oil was used as the fluid, and microfins were not used for laminar flow. Ray and Date [8] performed a numerical study in a square duct and found that higher Prandtl numbers and lower twist ratios can give good performance. Kumar et al. [9] performed a numerical study in a square ribbed duct with twisted tape: for high-Prandtl fluids the rib spacing and twist ratio should be higher, while for low-Prandtl fluids the rib spacing should be higher and the twist ratio lower. Zhang et al. [10] performed a numerical study in a circular tube; the simulation results verify the theory of core-flow heat transfer enhancement, which leads to the separation of the velocity boundary layer and the temperature boundary layer and thus greatly enhances the heat transfer while the flow resistance is not increased very much. Rahimi et al. [11] carried out experimental and CFD studies; maximum increases of 31% and 22% were observed in the calculated Nusselt number and the performance of the jagged insert compared with those obtained for the classic one, with air circulated in the modified twisted tapes. Eiamsa-ard et al. [12] performed a numerical study in a circular tube with turbulent flow and found heat transfer rates for the tube with twisted tape inserts of y/w=2.5 and CR=0.0, 0.1, 0.2 and 0.3 that were, respectively, 73.6%, 46.6%, 17.5% and 20.1% higher than for the plain tube. Yadav et al. [13] performed a numerical study in a circular tube with turbulent flow and found the heat transfer coefficient and the pressure drop to be 9-47% and 31-144% higher than those in the plain tube. Smith Eiamsa-ard [14] conducted experiments indicating that short-length twisted tapes of LR=0.29, 0.43 and 0.57 give heat transfer and friction factor values lower than the full-length tape by around 14%, 9.5%, 6.7% and 21%, 15.3%, 10.5%, respectively. The full-length tape was inserted into the tested tube at a single twist ratio of y/w=4.


In the present study, the effect of the swirling flow generated by the twisted tape insert on the heat transfer and pressure drop characteristics in a circular tube is investigated by CFD analysis in the ANSYS Fluent 14.0 software. The Nusselt number and friction factor of the twisted tape are compared with a previous correlation [14]. The Nusselt number, friction factor and thermal performance factor (η) are examined under a uniform wall heat flux of 1000 W/m² using air as the testing fluid.


Fig.1 Diagram of a twisted tape insert inside a tube

2. Numerical Simulation
2.1 Physical Model. The numerical simulations were carried out using the ANSYS-14.0 CFD software package (Fluent), which uses the finite-volume method to solve the governing equations. The geometry was created for air flowing in an electrically heated copper circular tube of 47.5 mm diameter (D) and 1250 mm length. The mesh was created in ICEM with tetrahedral elements (Fig. 2). In this study the Reynolds number varies between 4000 and 20000.
2.2 Numerical Method. For turbulent, steady and incompressible air flow with constant properties, the three-dimensional equations of continuity, momentum and energy are solved in the fluid region.


Fig.2 ANSYS-ICEM tetrahedral volume meshing



Fig 2.1 ANSYS-ICEM volume meshing in z-y plane

These equations are given below.
Continuity equation:
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = 0 \qquad (1)$$
Momentum equation:
$$\frac{\partial (\rho \vec{v})}{\partial t} + \nabla \cdot (\rho \vec{v}\vec{v}) = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + \rho \vec{g} \qquad (2)$$
Energy equation:
$$\frac{\partial (\rho E)}{\partial t} + \nabla \cdot \big( \vec{v}(\rho E + p) \big) = \nabla \cdot \big( k_{\mathrm{eff}} \nabla T + \bar{\bar{\tau}}_{\mathrm{eff}} \cdot \vec{v} \big) \qquad (3)$$
The Reynolds stresses are related to the mean velocity gradients as shown below:
$$-\rho \overline{u_i' u_j'} = \mu_t \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) - \frac{2}{3} \left( \rho k + \mu_t \frac{\partial u_k}{\partial x_k} \right) \delta_{ij} \qquad (4)$$
An appropriate turbulence model is used to compute the turbulent viscosity term μ_t, which is given as
$$\mu_t = \rho C_\mu \frac{k^2}{\varepsilon} \qquad (5)$$
Table 1.1: Properties of air at 25 °C
Property | Value
Density, ρ | 1.225 kg/m³
Specific heat capacity, C_p | 1006 J/kg K
Thermal conductivity, k | 0.0242 W/m K
Viscosity, μ | 1.7894 × 10⁻⁵ kg/m s

The velocity-pressure coupling was solved by the SIMPLE algorithm. To validate the accuracy of the numerical solutions, a grid-independence test was performed for the physical model. The tetrahedral grid is highly concentrated near the wall.


Table 1.2: Nodes and elements in the geometry
Length Ratio | Nodes | Elements
0.29 | 191132 | 728439
0.43 | 279990 | 1094526
0.57 | 329233 | 1223398
1 | 507466 | 1831876

Table 1.2 shows that the full-length twisted tape (LR = 1) has the maximum number of nodes and elements in comparison with the other, short-length twisted tapes. In addition, a convergence criterion of 10⁻⁶ was used for energy and 10⁻³ for the mass conservation of the calculated parameters. The air inlet temperature was specified as 298 K, and three assumptions were made in the model: (1) the heat flux was uniform along the length of the circular tube; (2) the wall of the inlet calming section was adiabatic; (3) the flow was steady and incompressible. In Fluent, a velocity condition was applied at the inlet section and a pressure condition at the outlet section.
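As a quick check of the operating conditions, the inlet velocity for a target Reynolds number follows directly from Re = ρvD/μ with the air properties of Table 1.1 and the 47.5 mm tube diameter; a small Python sketch:

```python
# Inlet air velocity for a target Reynolds number, Re = rho*v*D/mu,
# using the properties of Table 1.1 and the 47.5 mm tube diameter.
rho = 1.225      # kg/m^3
mu = 1.7894e-5   # kg/(m s)
D = 0.0475       # m

for Re in (4000, 8000, 12000, 16000, 20000):
    v = Re * mu / (rho * D)
    print(f"Re = {Re:6d} -> inlet velocity = {v:.3f} m/s")
```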

3. Data Reduction. Three important parameters were considered: the friction factor, the Nusselt number and the thermal performance factor, which characterize the friction loss, the heat transfer rate and the effectiveness of heat transfer enhancement in the circular tube, respectively. The friction factor (f) is obtained from the pressure drop ΔP across the length of the circular tube (L) using the following equation:
$$f = \frac{\Delta P}{(L/D)\,(\rho U^2/2)} \qquad (6)$$
The Nusselt number is defined as
$$Nu = \frac{hD}{k} \qquad (7)$$
The average Nusselt number can be obtained from
$$Nu_{\mathrm{avg}} = \frac{1}{L}\int_0^L Nu(x)\,\mathrm{d}x \qquad (8)$$


The Nusselt number and the Reynolds number were based on the average of the circular tube wall temperature and the outlet temperature; the pressure drop across the test section and the air flow velocity were measured for the heat transfer of the heated tube with the different twisted tape inserts. The average Nusselt numbers and friction factors were obtained with all fluid properties evaluated at the overall bulk mean temperature.

The thermal performance factor is given by:
$$\eta = \frac{Nu/Nu_0}{(f/f_0)^{1/3}} \qquad (9)$$
where Nu₀ and f₀ are the Nusselt number and friction factor for the plain tube, and Nu and f are those for the tube with twisted tapes.

3. Results and Discussion

3.1 Validation of setup. The CFD numerical results for the plain tube (P.T.) without a twisted-tape insert have been validated against the experimental data, as shown in Figures 3.1 and 3.2. The results agree within ±8% deviation for heat transfer (Nu) and ±4% for the friction factor (f). At low Reynolds numbers the deviation between the experimental and CFD results is small, but as the Reynolds number increases the deviation becomes slightly higher.

3.2 Heat Transfer. The effect of the different length ratio twisted tapes on the heat transfer rate is presented in Figure 3.3. The results for the tube fitted with the different LRs have been compared with those for a plain tube under similar operating conditions for y/w = 4, over all the Reynolds numbers used; the enhancement arises from the induction of strong reverse flows and the disruption of the boundary layers. It is clearly seen that as the Reynolds number increases, the heat transfer coefficient also increases. With the twisted tapes of LR = 0.29, 0.43, 0.57 and 1, the heat transfer rate increases by 15%, 18.8%, 22.6% and 31%, respectively, over the plain tube.
A comparison of the CFD results with the experimental data is shown in Figures 3.1 and 3.2:

Fig 3.1: Nusselt number vs Reynolds number    Fig 3.2: Friction factor vs Reynolds number



Fig 3.3: Heat transfer coefficient vs Reynolds number

3.3 Friction Factor. The variation of the pressure drop (Eq. 6) is presented in terms of the friction factor in Figures 3.4 and 3.5, which show the pressure drop and the friction factor versus the Reynolds number for the different LR twisted tapes in the circular tube. It is seen that the friction factor decreases with an increase in Reynolds number. It was found that the pressure drop for the twisted tape inserts was 34.85%, 45%, 58% and 92% higher than for the plain tube for LR = 0.29, 0.43, 0.57 and 1, respectively. The short-length twisted tapes of LR = 0.29, 0.43 and 0.57 give friction factor values lower than the full-length tape by around 34.7%, 30.7% and 17%, respectively.

Fig 3.4: Pressure drop vs Reynolds number    Fig 3.5: Friction factor vs Reynolds number

Fig 3.6: Thermal performance vs Reynolds number

3.4 Thermal Performance Factor. From Figure 3.6, it is observed that the thermal performance factor tends to decrease with an increasing twist parameter and with increasing Reynolds number for the different LR twisted tapes in the circular tube. The maximum thermal performance for LR = 0.29, 0.43, 0.57 and 1 is found to be 1.23, 1.3, 1.32 and 1.37, respectively.
3.5 Velocity Streamline and Contour Plots. Streamlines through the circular tube with twisted tape inserts are shown in Figure 4. The insertion of the tape induces a swirling flow, and the turbulence also increases in the circular tube. The twisted tapes generate two types of flow: (1) a swirling flow and (2) an axial or straight flow near the tube wall.

Fig.4 Velocity streamline along the twisted tape (Re-4800)


Fig.5 Velocity contour along the z-y plane (Re-4800)
In Fig. 5 the velocity contour along the z-y plane shows the velocity profile along the twisted tapes. The velocity becomes zero at the twisted-tape surface and at the wall surface due to drag forces: fluid molecules coming into contact with the stationary wall molecules are brought to rest, which appears as the blue color at the wall and tape surfaces. The velocity increases due to the twisted tape, and turbulence is created in the circular tube.


Fig.6 Pressure contour along the z-y plane (Re-4800)
In Fig. 6 the pressure contour shows that the pressure drop increases with increasing Reynolds number. Due to the inserted tape the pressure also drops because of the turbulence induced in the fluid along the twisted tapes in the circular tube.




Fig.7 Temperature contour along the z-y plane (Re-4800)
Figure 8: Velocity, temperature and pressure contours, drawn in Fig. 8(a)-(f), at Re = 8000, 12500, 16000 and 20000, are shown below:


8(a): Velocity contour in z-y plane, LR-1    8(b): Velocity contour in x-y plane, Z = 1200 mm, LR-1


8(c): Temperature contour, LR-1    8(d): Temperature contour, LR-1

8(e): Pressure contour in plain tube at x-y plane, Z = 250 mm    8(f): Pressure contour (LR-1) at x-y plane, Z = 250 mm
Fig. 8(a), (b) show the velocity contour, from which the values at any point can be read. At Re = 8000 the velocity at the wall and around the twisted tape is 0 and 2.04 m/s, respectively; at Re = 20000 it is 0 and 6.814 m/s. Fig. 8(c), (d) show the temperature contour: as the velocity increases, the wall temperature decreases. At Re = 8000 the wall and twisted-tape temperatures are 390.5 K and 325 K; at Re = 20000 they are 325 K and 317 K. In Fig. 8(f), at Z = 1200 mm, the pressure increases from the twisted tape toward the wall surface and decreases along the length of the circular tube. At Re = 20000 the pressure at the wall and along the twisted tape is 58 and 48 Pa, respectively.





4. Conclusion
The effects of twisted tape inserts at several tape-length ratios (LR = 0.29, 0.43, 0.57 and 1) on the heat transfer (Nu), friction factor (f) and thermal performance factor (η) have been investigated numerically with the ANSYS-14 software. The following conclusions are drawn:
1. It is clearly seen that as the Reynolds number increases, the heat transfer coefficient also increases. With the twisted tapes of LR = 0.29, 0.43, 0.57 and 1, the heat transfer rate increases by 15%, 18.8%, 22.6% and 31%, respectively, over the plain tube.
2. The pressure drop for the twisted tape inserts is 34.85%, 45%, 58% and 92% higher than for the plain tube for LR = 0.29, 0.43, 0.57 and 1. The short-length twisted tapes of LR = 0.29, 0.43 and 0.57 give friction factor values lower than the full-length tape by around 34.7%, 30.7% and 17%, respectively.
3. It has been observed that the thermal performance factor tends to decrease with an increasing twist parameter and with increasing Reynolds number for the different LR twisted tapes in the circular tube. The maximum thermal performance for LR = 0.29, 0.43, 0.57 and 1 is found to be 1.23, 1.3, 1.32 and 1.37, respectively.
Acknowledgements
The authors gratefully acknowledge the financial support by the Kurukshetra University Research Fund.

REFERENCES:
[1]. Saha, S. K. and Dutta, A. (2001), Thermo-hydraulic study of laminar swirl flow through a circular tube fitted with twisted tapes, Trans. ASME, J. Heat Transfer, Vol. 123, pp. 417-421.
[2]. Tariq, A., Kant, K. and Panigrahi, P. K. (2000), Heat transfer enhancement using an internally threaded tube, in Proceedings of 4th ISHMT-ASME Heat and Mass Transfer Conference, India, pp. 277-281.
[3]. Saha, S. K. and Bhunia, K. (2000), Heat transfer and pressure drop characteristics of varying pitch twisted-tape-generated laminar smooth swirl flow, in Proceedings of 4th ISHMT-ASME Heat and Mass Transfer Conference, India, pp. 423-428.
[4]. Ray, S. and Date, A. W. (2003), Friction and heat transfer characteristics of flow through square duct with twisted tape insert, Int. J. Heat and Mass Transfer, Vol. 46, pp. 889-902.
[5]. Lokanath, M. S. and Misal, R. D. (2002), An experimental study on the performance of plate heat exchanger and an augmented shell and tube heat exchanger for different types of fluids for marine applications, in Proceedings of 5th ISHMT-ASME Heat and Mass Transfer Conference, India, pp. 863-868.
[6]. Saha, S. K., Dutta, A. and Dhal, S. K. (2001), Friction and heat transfer characteristics of laminar swirl flow through a circular tube fitted with regularly spaced twisted-tape elements, Int. J. Heat and Mass Transfer, Vol. 44, pp. 4211-4223.
[7]. Al-Fahed, S., Chamra, L. M. and Chakroun, W. (1999), Pressure drop and heat transfer comparison for both micro-fin tube and twisted-tape inserts in laminar flow, Exp. Thermal and Fluid Sci., Vol. 18, pp. 323-333.
[8]. Ray, S. and Date, A. W. (2001), Laminar flow and heat transfer through square duct with twisted tape insert, International Journal of Heat and Fluid Flow, Vol. 22, pp. 460-472.
[9]. Kumar, P. M. and Kumar, K. (2012), Enhancement of heat transfer of laminar flow in a square ribbed duct with twisted tape, International Journal of Engineering Science and Technology, Vol. 4, pp. 3450-3456.
[10]. Zhang, X., Liu, Z. and Liu, W. (2012), Numerical studies on heat transfer and flow characteristics for laminar flow in a tube with multiple regularly spaced twisted tapes, International Journal of Thermal Sciences, Vol. 58, pp. 157-167.
[11]. Rahimi, M., Shabanian, S. R. and Alsairafi, A. A. (2009), Experimental and CFD studies on heat transfer and friction factor characteristics of a tube equipped with modified twisted tape inserts, Chemical Engineering and Processing, Vol. 48, pp. 762-770.
[12]. Eiamsa-ard, S., Wongcharee, K. and Sripattanapipat, S. (2009), 3-D numerical simulation of swirling flow and convective heat transfer in a circular tube induced by means of loose-fit twisted tapes, International Communications in Heat and Mass Transfer, Vol. 36, pp. 947-955.
[13]. Yadav, R. J. and Padalkar, A. S. (2012), CFD analysis for heat transfer enhancement inside a circular tube with half-length upstream and half-length downstream twisted tape, Journal of Thermodynamics, Vol. 1, pp. 1-12.
[14]. Eiamsa-ard, S., Thianpong, C., Eiamsa-ard, P. and Promvonge, P. (2009), Convective heat transfer in a circular tube with short-length twisted tape insert, International Communications in Heat and Mass Transfer, Vol. 36, pp. 365-371.





















Numerical Differential Protection of Power Transformer using Walsh
Hadamard Transform and Block Pulse Function Based Algorithm
Kumari Rashmi¹, Dr. Ramesh Kumar²
¹Research Scholar (M.Tech), Department of Electrical Engineering, NIT Patna, India
²Professor, Department of Electrical Engineering, NIT Patna, India

ABSTRACT: In this paper, the application of the Walsh Hadamard transform and the block pulse function to the numerical protection of power transformers is described. Numerical relay algorithms are developed to extract the fundamental, second and fifth harmonic components, which are then used for harmonic-restraint differential protection of power transformers. In comparison with the Walsh Hadamard transform, the block pulse function based method is computationally simpler and flexible to use with any sampling frequency. Graphs are plotted and compared for the Walsh Hadamard transform and block pulse function based methods for the inrush, over-excitation and internal fault conditions. The simulated results indicate that the block pulse function algorithm can provide fast and reliable trip decisions.
Keywords: Walsh Hadamard transform, block pulse function, power transformer protection, numerical differential relay.
I. INTRODUCTION
For the protection of power transformers, the differential relay is commonly used [2]. It is based on a comparison of the fundamental, second and fifth harmonic components of the post-fault current. A differential protection scheme with harmonic restraint is the usual way of protecting a power transformer against internal faults while restraining the tripping operation during non-fault conditions, such as magnetizing inrush currents and over-excitation currents [2].
Several algorithms have been proposed for the numerical protection of power transformers. Here the results of the Walsh Hadamard transform and block pulse function based algorithms are compared for the numerical differential protection of a power transformer.
II. Walsh Hadamard Transform
The algorithm for extracting the fundamental frequency components from the complex post-fault relaying signals is based on the Walsh-Hadamard Transform (WHT). The Walsh coefficients are obtained by applying the Walsh-Hadamard transformation to the incoming data samples [3]. A fast algorithm, known as the Fast Walsh-Hadamard Transform (FWHT), is available to compute the Walsh coefficients; it reduces the computation to N log₂ N additions and subtractions [2].
The Walsh coefficients are calculated as shown below:
W_w0 = 1/16 (x0+x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15)
W_w1 = 1/16 (x0+x1+x2+x3+x4+x5+x6+x7-x8-x9-x10-x11-x12-x13-x14-x15)
W_w2 = 1/16 (x0+x1+x2+x3-x4-x5-x6-x7-x8-x9-x10-x11+x12+x13+x14+x15)
W_w3 = 1/16 (x0+x1+x2+x3-x4-x5-x6-x7+x8+x9+x10+x11-x12-x13-x14-x15)
W_w4 = 1/16 (x0+x1-x2-x3-x4-x5+x6+x7+x8+x9-x10-x11-x12-x13+x14+x15)
W_w5 = 1/16 (x0+x1-x2-x3-x4-x5+x6+x7-x8-x9+x10+x11+x12+x13-x14-x15)
W_w6 = 1/16 (x0+x1-x2-x3+x4+x5-x6-x7-x8-x9+x10+x11-x12-x13+x14+x15)
W_w7 = 1/16 (x0+x1-x2-x3+x4+x5-x6-x7+x8+x9-x10-x11+x12+x13-x14-x15)
W_w8 = 1/16 (x0-x1-x2+x3+x4-x5-x6+x7+x8-x9-x10+x11+x12-x13-x14+x15)
W_w9 = 1/16 (x0-x1-x2+x3+x4-x5-x6+x7-x8+x9+x10-x11-x12+x13+x14-x15)
W_w10 = 1/16 (x0-x1-x2+x3-x4+x5+x6-x7-x8+x9+x10-x11+x12-x13-x14+x15)
W_w11 = 1/16 (x0-x1-x2+x3-x4+x5+x6-x7+x8-x9-x10+x11-x12+x13+x14-x15)
W_w12 = 1/16 (x0-x1+x2-x3-x4+x5-x6+x7+x8-x9+x10-x11-x12+x13-x14+x15)
W_w13 = 1/16 (x0-x1+x2-x3-x4+x5-x6+x7-x8+x9-x10+x11+x12-x13+x14-x15)
W_w14 = 1/16 (x0-x1+x2-x3+x4-x5+x6-x7-x8+x9-x10+x11-x12+x13-x14+x15)
W_w15 = 1/16 (x0-x1+x2-x3+x4-x5+x6-x7+x8-x9+x10-x11+x12-x13+x14-x15)
The fundamental Fourier coefficients are calculated as
F1 = 0.9 W_w1 - 0.373 W_w5 - 0.074 W_w9 - 0.0179 W_w13
F2 = 0.9 W_w2 + 0.373 W_w6 - 0.074 W_w10 + 0.179 W_w14
the second harmonic components as
F3 = 0.9 W_w3 - 0.373 W_w11
F4 = 0.9 W_w4 + 0.373 W_w12
and the fifth harmonic components as
F9 = 0.180 W_w1 + 0.435 W_w5 + 0.65 W_w9 - 0.269 W_w13
F10 = 0.180 W_w2 - 0.435 W_w6 + 0.65 W_w10 + 0.269 W_w14
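The sixteen coefficients above are simply the rows of a sequency-ordered 16×16 Hadamard matrix applied to the sample window. A minimal NumPy sketch of this computation (the sequency reordering via the bit-reversed Gray code is a standard construction; the test signal and variable names are ours, for illustration):

```python
import numpy as np

N = 16
# Sylvester construction of the natural-ordered 16x16 Hadamard matrix.
H = np.array([[1]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])

def natural_index(w, bits=4):
    """Natural (Hadamard) row index for sequency (Walsh) index w:
    bit-reversal of the Gray code of w."""
    g = w ^ (w >> 1)                               # binary -> Gray code
    return int(np.binary_repr(g, bits)[::-1], 2)   # bit reversal

W_matrix = H[[natural_index(w) for w in range(N)]]

# One cycle of a test current: fundamental plus some second harmonic.
n = np.arange(N)
x = np.sin(2 * np.pi * n / N) + 0.3 * np.sin(4 * np.pi * n / N)

W = W_matrix @ x / 16.0   # Walsh coefficients W_w0 .. W_w15

# Fundamental Fourier coefficients from the Walsh coefficients:
F1 = 0.9 * W[1] - 0.373 * W[5] - 0.074 * W[9] - 0.0179 * W[13]
F2 = 0.9 * W[2] + 0.373 * W[6] - 0.074 * W[10] + 0.179 * W[14]
print("fundamental amplitude ~", np.hypot(F1, F2))
```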
III. Block Pulse Function
The BPF is a set of rectangular pulses of unit magnitude, which come one after the other as a block of pulses [4]. The algorithm is computationally simple and flexible to use with any sampling frequency. The fundamental frequency component is extracted using this algorithm, and the operating condition of the relay is decided according to the value of this frequency component. The current samples are acquired over a full-cycle data window at a sampling rate of 12 samples per cycle. Computations based on this algorithm require less memory space [2]. Taking the fundamental period as 1, the current i(t), given as a time function, can be expressed in terms of Fourier coefficients as
$$i(t) = A_0 + \sqrt{2}A_1\sin(2\pi t) + \sqrt{2}B_1\cos(2\pi t) + \sqrt{2}A_2\sin(4\pi t) + \sqrt{2}B_2\cos(4\pi t) + \dots + \sqrt{2}A_5\sin(10\pi t) + \sqrt{2}B_5\cos(10\pi t)$$
In terms of the BPF coefficients a_n, the fundamental components are
$$A_1 = 0.0302\,(a_1 + a_6 - a_7 - a_{12}) + 0.0824\,(a_2 + a_5 - a_8 - a_{11}) + 0.1125\,(a_3 + a_4 - a_9 - a_{10})$$
$$B_1 = 0.1125\,(a_1 - a_6 - a_7 + a_{12}) + 0.0824\,(a_2 - a_5 - a_8 + a_{11}) + 0.0302\,(a_3 - a_4 - a_9 + a_{10})$$
the second harmonic components
$$A_2 = 0.05626\,(a_1 + a_3 - a_4 - a_6 + a_7 + a_9 - a_{10} - a_{12}) + 0.1125\,(a_2 - a_5 + a_8 - a_{11})$$
$$B_2 = 0.09746\,(a_1 - a_3 - a_4 + a_6 + a_7 - a_9 - a_{10} + a_{12})$$
and the fifth harmonic components
$$A_5 = 0.084\,(a_1 + a_6 - a_7 - a_{12}) - 0.06149\,(a_2 + a_5 - a_8 - a_{11}) + 0.0225\,(a_3 + a_4 - a_9 - a_{10})$$
$$B_5 = 0.0225\,(a_1 - a_6 - a_7 + a_{12}) - 0.06149\,(a_2 - a_5 - a_8 + a_{11}) + 0.084\,(a_3 - a_4 - a_9 + a_{10})$$
IV. APPLICATION TO THE DIFFERENTIAL PROTECTION OF TRANSFORMERS
Here the trip decision is based on the relative amplitude of the fundamental component compared with the second and fifth harmonic components in the differential current. Two indices are used to obtain the relative amplitudes:
$$K_2 = \sqrt{A_2^2 + B_2^2} \,/\, \sqrt{A_1^2 + B_1^2}$$
$$K_5 = \sqrt{A_5^2 + B_5^2} \,/\, \sqrt{A_1^2 + B_1^2}$$
The predefined value for restraining the relay action is 0.15 for K2 and 0.05 for K5.
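The restraint logic then reduces to two ratio tests against these fixed settings; a minimal sketch (function and variable names are ours):

```python
import math

def relay_decision(A1, B1, A2, B2, A5, B5, k2_set=0.15, k5_set=0.05):
    """Harmonic-restraint differential logic: trip only when both the
    second- and fifth-harmonic indices are below their settings."""
    fund = math.hypot(A1, B1)
    K2 = math.hypot(A2, B2) / fund
    K5 = math.hypot(A5, B5) / fund
    if K2 >= k2_set or K5 >= k5_set:
        return "restrain"   # inrush or over-excitation
    return "trip"           # internal fault
```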
Testing of the schemes:
A 132 kV/11 kV three-phase wye-wye transformer system has been simulated in the present work. Table 1 gives the values of the transformer parameters used in the simulation, and Table 2 gives the transmission line parameters.

Table 1: Transformer parameters
Transformer nominal power and frequency | 10 MVA, 50 Hz
Transformer winding parameters | R = 0.002 pu, L = 0.08 pu
Transformer core loss resistance | 500 pu


Table 2: Transmission line parameters
Length | 300 km
Frequency used for RLC specification | 50 Hz
Positive and zero sequence resistances (ohms/km) | 0.01273 and 0.3864
Positive and zero sequence inductances (H/km) | 0.9337e-3 and 4.1264e-3
Positive and zero sequence capacitances (F/km) | 12.74e-9 and 7.751e-9

V. RESULTS
The plots below give the values for phase A; similar results have been obtained for the other phases as well.
Inrush condition
Result from FWHT




Result from BPF


Over-excitation condition
Result from FWHT



Result from BPF


Internal fault condition
Result from FWHT



Result from BPF



VI. CONCLUSION
The simulation results from the MATLAB SimPowerSystems model reveal that the differential current is high in the internal fault, inrush and over-excitation conditions.
Fault conditions can be distinguished from non-fault conditions within one cycle in both algorithms. In non-fault conditions either K2 or K5 is above its respective threshold value, restraining the trip action of the protective relay. In the internal fault condition, none of the indices is above its threshold value and the tripping action takes place. The block pulse function requires fewer samples per cycle than the Walsh Hadamard transform: it gives satisfactory results at a sampling rate of 12 samples per cycle, whereas the Walsh Hadamard transform requires 16 samples per cycle.


Fig: Matlab simulation diagram of Transformer

REFERENCES:
[1] Rahman, M. A. and Dash, P. K. (1982), Fast algorithm for digital protection of power transformers, IEE Proc., Vol. 129.
[2] Ram, B. and Vishwakarma, D. N. (2011), Power System Protection and Switchgear, 2nd edition, McGraw Hill, India.
[3] Jeyasurya, B. and Rahman, M. A. (1985), Application of Walsh functions for microprocessor-based transformer protection, IEEE, Vol. EMC-27, No. 4.
[4] Kolla, S. R. (1989), Application of block pulse functions for digital protection of power transformers, IEEE, Vol. 31, No. 2.
[5] Stankovic, Radomir S. and Falkowski, Bogdan J. (2003), The Haar wavelet transform: its status and achievements, Computers and Electrical Engineering, Vol. 29, pp. 25-44.
[6] Hamouda, Abdelrahman H., Al-Anzi, Fadel Q., Gad, Hussain K. and Gastli, Adel (2013), Numerical differential protection algorithm for power transformers, IEEE, pp. 17-20.







Multimodal Biometrics Information Fusion for Efficient Recognition using
Weighted Method
Shalini Verma¹, Dr. R. K. Singh²
¹M.Tech Scholar, KNIT Sultanpur, Uttar Pradesh
²Professor, Dept. of Electronics Engg., KNIT Sultanpur, Uttar Pradesh
E-mail: ¹shalinive1990@gmail.com, ²singhrabinder57@gmail.com

Abstract — Biometrics do not provide unique identification: the matching process is probabilistic and is liable to measurable error. A mistaken verification or identification, where the wrong person is matched against an enrolled user, is termed a False Acceptance, and the rate at which these occur is the False Acceptance Rate (FAR). Conversely, an error where a legitimate user fails to be recognised is termed a False Rejection, and the corresponding rate is the False Rejection Rate (FRR). These errors depend not only on the technology but also on the application and the environment of use.

Keywords — Multimodal biometrics, Fingerprint, Face, Iris, Fusion, Weighted Method
INTRODUCTION
The term biometrics refers to life measurement, but it is typically associated with the use of unique physiological or behavioural traits to recognise a single person. One of the applications most people associate with biometrics is security; however, biometric identification or verification eventually has a much broader relevance as the computer interface becomes more natural. Since fraud increases day by day, there is a requirement for highly secure identification systems. In recent years, biometric authentication has seen considerable enhancement in reliability and accuracy, with some traits offering good performance. One way to overcome the problems of unimodal biometrics (UB) is the use of a multi-biometric system (MBS). Driven by lower equipment costs, a multi-biometric system uses multiple sensors for information acquisition [2]; the problems can be addressed by installing multiple sensors that capture different biometric traits. Such systems are known as multimodal biometric systems. An MBS is more reliable due to the presence of multiple pieces of evidence, and such systems are also able to meet the stringent performance requirements imposed by different applications. This paper proposes an efficient multimodal biometric system which can be used to reduce the limitations of unimodal biometric systems. The next section presents how to reduce the limitations of UBS using a multimodal biometric system. Finally, the individual characteristics are fused at the matching score level using the weighted sum method.

1. Limitations of unimodal biometrics: The limitations of unimodal biometrics are as follows:

Non-universality: If every individual is able to present the biometric trait for recognition, the characteristic is said to be universal. Non-universality leads to failure-to-enrol errors in a biometric system.
Intra-class variations: The biometric data acquired during verification will not be identical to the data used for generating the template during enrolment for the same trait. This is known as intra-class variation. Large intra-class variation increases the false reject rate (FRR) of a biometric system.
Inter-class similarities: Inter-class similarity refers to the overlap of the feature spaces corresponding to multiple individuals. Large inter-class similarity increases the false accept rate (FAR) of a biometric system [3].
Susceptibility: Behavioural traits like signature and voice are more susceptible to spoof attacks than physiological characteristics.

2. Multimodal Biometric Systems: Multimodal biometric systems utilize more than one physiological or behavioural characteristic for enrolment, verification or identification, to improve the accuracy of recognition. The reason for combining different traits is to improve the recognition rate. The aim of multi-biometrics is to remove or reduce one or more of the following:
False accept rate (FAR)

False reject rate (FRR)
Failure to enrol rate (FTE)

Multimodal biometric systems take input from multiple or single sensors measuring two or more different modalities of biometric characteristics. For example, a system with face and fingerprint recognition would be considered multimodal even if the "OR" rule were applied, allowing users to be verified using either of the modalities [4].

2.1. Multi-algorithmic biometric systems: Multi-algorithmic biometric systems take a single sample from a single sensor and process that sample with two or more different algorithms [5].

2.2. Multi-instance biometric systems: Multi-instance biometric systems use one sensor (or possibly several) to capture samples of two or more different instances of the same biometric trait, for example capturing images from multiple fingers.

2.3. Multi-sensorial biometric systems: Multi-sensorial biometric systems sample the same instance of a biometric trait with two or more distinctly different sensors [11]. Processing of the multiple samples can be done with one algorithm or a combination of algorithms. For example, a face recognition application could use both a visible-light camera and an infrared camera coupled with a specific frequency [12].

3. Fusion in a Multimodal Biometric System (MBS):
A technique that combines the classification results from each biometric channel is called biometric fusion, and this fusion must be designed. Multimodal biometric fusion combines measurements from different biometric traits to enhance their strengths. Fusion at the matching score, rank and decision levels has been extensively studied in the literature. The different levels of fusion are: sensor level, feature level, matching score level and decision level [1].

Figure 1: Multimodal system using three levels of fusion (taken from Ross & Jain, 2003)
From the architecture of an MBS system:
1. Fusion at sensor level
2. Fusion at feature level
3. Fusion at matching score level
4. Fusion at decision level
Fusion at the matching score level: [1] and our work deal with fusion at the matching score level. Each system (fingerprint, face, iris) provides a matching score indicating the proximity of a feature vector to a template vector. These scores are normalized and then combined using same-weight and different-weight techniques, which are described in the later sections.



4. Algorithm for Designing the MBS System:

Step 1: Generate the scores for fingerprint, face and iris.
Step 2: Normalize the scores. The maximum scores are:
FP_max = 37.4775
Face_max = 3.7372e+017
Iris_max = 1.6199
Step 3: Generate the vector data; after normalizing, each triplet looks like
X = (X_Fingerprint, X_Face, X_Iris)
Step 4: Fuse using the weighted method: same weight and different weights.
Step 5: Plot the ROC curve.
5. Matchers used to generate the respective scores:
5.1 Fingerprint: A few matchers to generate fingerprint scores were available on the internet. One such matcher was a MATLAB implementation by Chonbuk National University [6]. This matcher preprocesses a fingerprint image to enhance it by Short-Time Fourier Transform analysis [7]. Then three sets of invariant moment features, as a kind of texture feature, are extracted from three different sizes of Region of Interest (ROI) areas based on the reference point of the enhanced fingerprint image. Each set of invariant moments contains five different moments. Fingerprint verification is performed via the Euclidean distance between the corresponding features of the test fingerprint image and the template fingerprint image in the database.
5.2 Face: A few matchers to generate face distances using the standard PCA-based 'eigenface' method were available [8,15]. We used one such matcher implemented in MATLAB to generate the 'distance' scores.
5.3 Iris: A matcher for iris recognition was available as MATLAB code. The system takes an eye image as input and outputs a binary biometric template. The scores are calculated as the Hamming distance between templates.

6. Experiment: Datasets

6.1 Preparing the individual datasets: For each trait we had a dataset of 50 users with 5 samples per user. We followed the authors' approach [1] to generate the genuine and imposter scores. Genuine scores: for every user, we form all sample-pair combinations (excluding the pairing of a sample with itself), giving 5C2 = 10 combinations per user; hence we have 10 genuine scores per user and 50 × 10 = 500 genuine scores per trait in total.

Imposter scores: for every user, we picked a random sample and generated the respective score against every sample of every other user. Hence we obtained 49 × 5 = 245 scores per random sample, and the total number of imposter scores obtained per trait was 50 × 245 = 12250.
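The score counts follow directly from the combinatorics; a quick check in Python:

```python
from math import comb

users, samples = 50, 5
genuine_per_user = comb(samples, 2)           # 5C2 = 10 sample pairs
genuine_total = users * genuine_per_user      # 50 * 10 = 500
imposter_per_probe = (users - 1) * samples    # 49 * 5 = 245
imposter_total = users * imposter_per_probe   # 50 * 245 = 12250
print(genuine_total, imposter_total)          # 500 12250
```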

6.2 Combining the datasets: Data pertaining to all three modalities were not available for a single set of users. The mutual independence of the biometric indicators (traits) allows the biometric data of one user to be assigned to another. Each user from a given trait was randomly paired (in triplets) with one user from each of the other traits.

Normalizing scores: Suppose that for each trait the maximum distance obtained is Max; the minimum distance is 0. Max maps to a similarity score of 0, and a distance of 0 maps to a score of 100 [10]. Hence the normalized score for a distance score was obtained from the following equation:
normalized score = 100 × (Max − obtained distance) / Max [13,14].

7. Experiment and Results:

Combining the three modalities: we used three different approaches to combine (fuse) the scores of the three modalities. The results for each approach are listed in the corresponding sections below.


7.1 Weighted Method: For each score vector X = (X_Fingerprint, X_Face, X_Iris), the weighted sum was calculated as
$$X_{\mathrm{weightedSum}} = \sum_{i} \mathrm{weight}_i \, X_i, \quad i \in \{\mathrm{Face}, \mathrm{Fingerprint}, \mathrm{Iris}\}$$
where
$$\sum_{i} \mathrm{weight}_i = 1.$$

The weights were calculated using two different approaches.
1. The first approach was the same as the authors' approach: all the weights were equal, i.e. 1/3. Since the dataset was itself random, we paired user(Face,i), user(Fingerprint,i) and user(Iris,i).

Pairing of samples: For every triplet formed, say user_i, we followed the authors' approach and paired the k-th score of face with the k-th score of fingerprint and the k-th score of iris. Hence we had 500 triplets of genuine scores and 12250 triplets of imposter scores. This pairing was done after normalizing the scores to matching scores in the same domain [0, 100].
The values of Max obtained for each trait were:
1. Fingerprint_Max = 37.4775
2. Face_Max = 3.7372e+017
3. Iris_Max = 1.6199
After normalizing, each triplet looks like X = (X_Fingerprint, X_Face, X_Iris). There were 500 such genuine vectors and 12250 such imposter vectors.
2. In the second approach, we calculated the approximate area under the TAR (True Accept Rate) vs TRR (True Reject Rate) graph for each trait. The weights assigned were
$$\mathrm{weight}_i = \mathrm{Area}_i \Big/ \sum_{j} \mathrm{Area}_j, \quad i, j \in \{\mathrm{Face}, \mathrm{Fingerprint}, \mathrm{Iris}\}$$
The ROC curves follow from the tabulation of the FAR and FRR values for fingerprint alone, face alone, iris alone, the weighted sum with equal weights and the weighted sum with different weights, for threshold values ranging from 1 to 99 over the score range 0 to 100; the values corresponding to each threshold are given in Tables 1 and 2.
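The second weighting scheme is a one-liner once the per-trait areas are known; a small Python sketch (the area values are illustrative, not the experiment's):

```python
def area_weights(areas):
    """weight_i = Area_i / sum_j Area_j over the three traits."""
    total = sum(areas.values())
    return {trait: a / total for trait, a in areas.items()}

weights = area_weights({"face": 0.93, "fingerprint": 0.88, "iris": 0.71})
# e.g. {'face': 0.369, 'fingerprint': 0.349, 'iris': 0.282}
```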

































Table 1: FAR and FRR values in percentage for individual traits
Threshold  FAR_Face  FRR_Face  FAR_FP  FRR_FP  FAR_Iris  FRR_Iris
1 99.9918 0 99.9918 0 99.9918 0
4 99.9836 0 99.9836 0 99.9755 0
8 99.9755 0 99.8857 0 99.6897 0
11 99.9673 0 99.8204 0 97.5510 0
12 99.9673 0 99.7795 0 96.0489 0
17 99.9591 0 99.2163 0 78.6612 0
19 99.9591 0 99.0938 0 75.0857 0
20 99.9510 0 98.9551 0 71.8938 0
25 99.8693 0 97.9183 0 60.0326 0
30 99.7714 0 96.1632 0 53.6408 0
35 99.5673 0 92.8163 0 51.4857 0
40 99.3714 0 87.5346 0 51.4285 0
43 99.2326 0 84.8489 0.2 51.4285 0
45 98.8408 0 80.7428 0.2 51.4285 0
47 98.5795 0 77.7959 0.4 49.2816 1.8
50 98.0326 0 73.3877 0.6 49.2816 1.8
56 96.8489 0 64.2040 2.4 49.2816 1.8
60 95.6163 0 56.5469 4.6 49.2816 1.8
65 93.1346 0 44.2448 7.6 49.2489 1.8
71 89.7142 0 28.0979 15.6 48.3918 2.8
75 86.6448 0 18.0081 23.4 42.9306 7.6
80 80.5387 0 8.4571 36.4 27.1428 25.2
85 72.4979 0 1.9591 53.6 12.8816 49.4
86 70.2775 0.2 1.2653 57.8 10.4897 55.2
88 65.5673 0.6 0.4326 67 6.6204 68.4
90 59.6163 0.8 0.0734 77.6 3.6979 77
92 52.4571 0.8 0.0081 88.4 1.5755 88.2
94 43.2163 1.2 0 95.2 0.4653 94.8
97 19.0040 4 0 100 0.0244 99.2
99 1.8530 11.4 0 100 0 99.8


Threshold  FAR_EqualWt  FRR_EqualWt  FAR_DiffWt  FRR_DiffWt
1 100 0 100 0
4 100 0 100 0
8 100 0 100 0
11 100 0 99.9918 0
12 99.9918 0 99.9918 0
17 99.9826 0 99.9826 0
19 99.9755 0 99.9836 0
20 99.9755 0 99.9755 0
25 99.9591 0 99.9591 0
30 99.8285 0 99.8530 0
35 99.4938 0 99.5673 0
40 98.4326 0 98.7020 0
43 97.6897 0 98.1387 0
45 95.8693 0 96.8571 0
47 94.2367 0 95.5428 0
50 90.3428 0 92.6530 0
56 76.6612 0 80.9387 0
60 64.6775 0 69.4612 0
65 51.2000 0 54.3102 0
71 40.3183 0 41.3632 0
75 30.9877 0.4 31.8775 0
80 16.3836 4.6 17.2897 3.4
85 4.4816 18.8 4.9224 17
86 3.1020 24.8 3.5346 22.6
88 1.1918 39.6 1.3877 36
90 0.2367 61.2 0.2938 55.6
92 0.0163 82 0.0244 79
94 0 94.8 0 92.6
97 0 99.8 0 99.8
99 0 100 0 100

Table 2: FAR and FRR values in percentage for the weighted methods


Figure 2: ROC curves for the individual traits and the same-weight and different-weight fusion techniques


Figure 2 presents the results of Min-Max normalization for a spectrum of fusion methods. The weighted-sum fusion with equal and with different weights yields the best performance over the range of FARs; the fusion techniques are compared at FARs of 1% and 0.1%. At 1% FAR, the sum-of-probabilities fusion performs best, but the equal-weight and different-weight fusion results do not hold at a FAR of 0.1%. The simple sum rule performs well across the range of normalization techniques. These results of the MBS system demonstrate the utility of multimodal biometric systems for achieving more reliable matching performance, and they also show that the fusion method chosen has a significant impact on the resulting performance. In operational biometric systems, the selection of tolerable error rates is driven by the application requirements, and in both unimodal and multimodal biometric systems implementers must trade off security against usability.
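For readers who wish to reproduce such a tabulation, the following minimal Python sketch computes a weighted-sum fusion and FAR/FRR values over a sweep of thresholds; the score distributions, weights and array sizes are hypothetical stand-ins, not the data behind Tables 1 and 2.

```python
import numpy as np

def fuse_scores(face, fp, iris, weights=(1/3, 1/3, 1/3)):
    """Weighted-sum fusion of normalized (0-100) match scores."""
    w_face, w_fp, w_iris = weights
    return w_face * face + w_fp * fp + w_iris * iris

def far_frr(genuine, impostor, threshold):
    """FAR: impostor scores accepted; FRR: genuine scores rejected (percent)."""
    far = 100.0 * np.mean(impostor >= threshold)
    frr = 100.0 * np.mean(genuine < threshold)
    return far, frr

# Hypothetical per-trait scores for genuine and impostor attempts.
rng = np.random.default_rng(0)
gen = fuse_scores(rng.normal(85, 6, 500), rng.normal(80, 8, 500),
                  rng.normal(82, 7, 500)).clip(0, 100)
imp = fuse_scores(rng.normal(40, 12, 5000), rng.normal(35, 10, 5000),
                  rng.normal(45, 15, 5000)).clip(0, 100)

# Tabulate FAR/FRR at a few thresholds from the 1..99 sweep, as in Tables 1 and 2.
for t in (25, 50, 75, 90):
    far, frr = far_frr(gen, imp, t)
    print(f"threshold={t}: FAR={far:.2f}%  FRR={frr:.2f}%")
```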
8. ACKNOWLEDGMENT:
We are thankful to our guide Dr. R. K. Singh, professor at Kamla Nehru Institute of Technology with almost 30 years of teaching experience, for providing valuable guidance and technical support for this research.
9. CONCLUSION:
As the FAR-versus-FRR graph in Figure 2 shows, the iris curve deviates considerably; the weighted methods help to nullify this anomaly. The results show that these methods also minimize the FRR for a given FAR. Using the weighted method, the multi-biometric system thus improves on unimodal biometrics in terms of reliability, security, accuracy and usability.

REFERENCES:
[1] Information fusion in biometrics, Arun Ross and Anil Jain

[2] A. Ross & A. K. Jain, Information Fusion in Biometrics, Pattern Recognition Letters, 24 (13), pp. 2115-2125, 2003.

[3] K. Sasidhar, Vijaya L. Kakulapati, Kolikipogu Ramakrishna & K. Kailasa Rao, Multimodal Biometrics System Study to Improve Accuracy and Performance, International Journal of Computer Science & Engineering Survey (IJCSES), Vol. 1, No. 2, November 2010.

[4] A. Ross, A. K. Jain & J. A. Riesman, Hybrid fingerprint matcher, Pattern Recognition, 36, pp. 1661-1673, 2003.
[5] W. Yunhong, T. Tan & A. K. Jain, Combining Face and Iris Biometrics for Identity Verification, Proceedings of Fourth
International Conference on AVBPA, pp. 805-813, 2003
[6] S. C. Dass, K. Nandakumar & A. K. Jain, A principal approach to score level fusion in Multimodal Biometrics System,
Proceedings of ABVPA, 2005
[7] J. Kittler, M. Hatef, R. P. W. Duin & J. Matas, On combining classifiers, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3), pp. 226-239, 1998.
[8] G. Feng, K. Dong, D. Hu & D. Zhang, When Faces Are Combined with Palmprints: A Novel Biometric Fusion Strategy, ICBA,
pp. 701-707, 2004
[9] I. Craw, D. Tock & A. Bennett, Finding Face Features, Proceedings Second European Conference Computer Vision, pp. 92-96,
1992.
[10] C. Lin & Kuo-Chin Fan, Triangle-based approach to the detection of human face, Pattern Recognition, 34, pp. 1271-1284,
2001.
[11] Juwei Lu, K. N. Plataniotis & A. N. Venetsanopoulos, Face Recognition Using Kernel Direct Discriminant Analysis
Algorithms, IEEE Transactions on Neural Networks, 14 (1), pp. 117-126, 2003.
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

588 www.ijergs.org

[12] D. S. Bolme, J. R. Beveridge, M. L. Teixeira & B. A. Draper, The CSU Face Identification Evaluation System: Its Purpose,
Features and Structure, Proceedings 3rd International Conference on Computer Vision Systems, 2003.
[13] N.K.Ratha, K.Karu, S.Chen & A.K.Jain, A Real-time Matching System for Large Fingerprint Database, IEEE Transactions on
Pattern Analysis and Machine Intelligence, 18 (8), pp. 799-813, 1996.
[14] M. Ammar, T. Fukumura & Y. Yoshida, A new effective approach for off-line verification of signature by using pressure features, 8th International Conference on Pattern Recognition, pp. 566-569, 1986.
[15] A.K.Jain, L.Hong & R.M.Bolle, On-line Fingerprint Verification, IEEE Transactions on Pattern Analysis and Machine
Intelligence, 19(4), pp. 302-313, 1997.


Microstrip Transmission Line Sensor for Rice Quality Detection: An Overview
Dinesh Kumar Singh¹, Prateek Kumar², Naved Zafar Rizvi³
¹,² Scholar (PG), School of ICT, Gautam Buddha University, India
³ Faculty Associate, School of ICT, Gautam Buddha University, India
E-mail: prateekkumar203@gmail.com

Abstract: This paper presents a comparative analysis of different types of microstrip transmission line sensors for rice quality detection. The cylindrical slot antenna and the microstrip-line-based structures are discussed. The paper focuses on the advantages of the microstrip coupled-line sensor over other microwave sensors; a substantial gain in efficiency has been achieved by the coupled-line filter approach. The analysis is based on various parameters such as characteristic impedance, microstrip width, dielectric constant of the substrate, modes of the reflection coefficient, insertion loss, radiation, moisture content in the rice grain, and the applied frequency. By applying the different principles and methods, and finally measuring the reflection coefficient with a Vector Network Analyzer, a measurement of the broken rice percentage is obtained.
Keywords: Coupled Line Filter, Microstrip, Moisture Content (m.c.), Slot Antenna, Vector Network Analyzer (VNA).
1. INTRODUCTION
Microstrip is a transmission medium printed on a circuit board over a ground plane; it is a printed-circuit version of a transmission wire and is widely used as a planar transmission medium in many applications [1]. The microstrip transmission line sensor uses this basic array structure in an application for rice quality detection. Rice quality detection involves two properties: the moisture content and the broken rice percentage. The initial concept of a microstrip-based sensor appeared around 1970 and has given rise to many sensor designs. Commercial sensors followed for fresh meat processing [4], for the ripeness of palm fruits for oil [5], for the moisture content in green tea leaf [6], for the moisture content in rice [3], and for the broken rice percentage [2]. Rice is currently the staple food of more than half of the world's population [2], but the milling process leaves a large amount of rice either unused or wasted. Another consideration in rice characterization is that most customers want the best quality rice in the form of long grains.
The quality of rice can be determined from the moisture content, shape, chalkiness, whiteness and number of broken rice grains at low cost. One of the most important criteria for determining rice quality is the head rice yield. Tan et al. [6] discussed the appearance quality of rice, which represents a major problem of rice production in many rice-producing areas of the world, and which is especially significant in the case of hybrid rice production. Currently, there is a strong emphasis on increasing total world rice production by improving the quality of hybrid rice. The most serious problems lie in the eating quality, cooking quality and processing quality and, to some extent, in the milling quality. According to available knowledge, the cooking and eating quality are mostly determined by the amylose content, gelatinization temperature and gel consistency of the grain endosperm. The appearance of the grain determines the quality of rice to a large extent; the parameters for visual inspection are the grain length, grain width, width-to-length ratio and translucency of the endosperm. Quality is an important factor at the front and back ends of rice production: if quality milled rice is expected at the end, ensuring quality paddy at the beginning of the process is a must. According to the International Rice Research Institute (IRRI), measurement of quality provides data that can be used for decision making, optimization and the development of processes and technologies, as well as for evaluating the properties, function, quality and reliability of the same. Several
interrelated features determine the quality of paddy, including the moisture content, purity, varietal purity, cracked grains, immature grains, damaged grains and discolored/fermented grains. These characteristics are governed not only by the weather conditions during production, crop production practices, soil conditions and harvesting, but also by the post-harvest practices. Moisture content (MC) influences all aspects of paddy and rice quality, making it essential that rice be milled at the proper MC to obtain the highest head rice yield. According to IRRI, paddy is at its optimum milling potential at an MC of 14% on a wet-weight basis. Grain with higher moisture content is too soft to withstand hulling pressure, which results in breakage and possible pulverization of the grain; grain that is too dry is brittle and also suffers greater breakage. The MC and drying temperature are also critical, because they determine whether small fissures and/or full cracks occur in the grain structure. Mixing paddy varieties can cause problems during milling, resulting in reduced capacity, excessive breakage, lower milled rice recovery and reduced head rice. Grains of different sizes and shapes make it difficult to adjust equipment such as hullers, whiteners and polishers; this results in low initial husking efficiency, a higher percentage of re-circulated paddy, non-uniform whitening and a lower quantity of milled rice. Grain size and shape, or the length-width ratio, differ among paddy varieties. Long, slender grains typically suffer greater breakage than short, bold grains and therefore give a lower milled rice recovery; the dimensions to some degree dictate the type of milling equipment to be employed. Exposing mature paddy to fluctuating temperature and moisture conditions can lead to the development of fissures and cracks in individual kernels. Cracks in the kernel are the most important factor contributing to rice breakage during milling [7]; they also reduce milled rice recovery and head rice yields. The amount of immature paddy grains in a sample greatly impacts the head rice yield and quality: immature kernels are very slender and chalky, which results in excessive production of bran, broken grains and brewers' rice. Grain should be harvested at about 20% to 24% moisture, or about 30 days after flowering. If the harvest is too late, grains are lost through shattering, or dry out and crack during threshing, which causes grain breakage during milling. Milled rice is classified into groups based on the
percentage of amylose:
Waxy: 1% to 2%
Very low amylose: 2% to 9%
Low amylose: 10% to 20%
Intermediate amylose: 20% to 25%
High amylose: 25% to 33%.
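As a small illustration of this grouping, the sketch below maps an amylose percentage to its class; the function name and the handling of the overlapping boundary values are our own assumptions, since the source only lists the ranges.

```python
def amylose_class(amylose_pct: float) -> str:
    """Classify milled rice by amylose content (percent), per the ranges above."""
    # Boundary handling at the overlapping endpoints (e.g. 20%) is assumed here.
    if amylose_pct <= 2:
        return "waxy"
    if amylose_pct <= 9:
        return "very low amylose"
    if amylose_pct <= 20:
        return "low amylose"
    if amylose_pct <= 25:
        return "intermediate amylose"
    return "high amylose"

print(amylose_class(14.0))  # -> "low amylose"
```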
This paper gives an overview of the different microstrip transmission line approaches used to describe rice characteristics. One of the most common characterizations is the moisture content characterization, which has an average threshold value of 12% [3], below which the rice grain is considered to be of dried quality. This moisture content can be measured with the help of the coupled microstrip line sensor. Coupled-line sensors are among the most advanced current sensing devices for rice moisture content.
2. DEVELOPMENT-RELATED STUDIES OF MICROSTRIP TRANSMISSION LINE SENSORS

Microstrip Model

Microstrips are a printed-circuit transmission medium, widely used in industrial electronics for PCB design. As shown in Figure 1, a microstrip transmission line consists of a substrate with dielectric constant εr and a conductor of thickness t and width w. One of the most important properties in microstrip transmission line design is the dielectric constant, which is inversely proportional to the radiated power; that is, to transfer maximum power to the load, the radiated power should be as small as possible. Microstrip transmission lines should therefore be designed with utmost care. The

basic structure of microstrip transmission line is fabricated on PCB or its fabrication process is similar to the Printed
Circuit Board (PCB).

Fig. 1: Microstrip transmission line [1]
Parallel Edge Coupled Microstrip Line model

The parallel edge-coupled line model consists of an array of microstrip transmission lines arranged to provide matching between the input impedance and the output impedance. The current approach in rice characterization uses resonator-structure-based filters to detect the moisture content and the broken rice percentage. A single band-pass filter gives poor filter performance, with gradual pass-band to stop-band transitions; this can be overcome by cascading these building blocks, which ultimately results in high-performance filters [9]. Figure 2 shows the edge-coupled filter. The filter structure consists of more than one pair of microstrip transmission lines for matching the input impedance to the output impedance.

Fig. 2: Edge-coupled microstrip transmission line filter [10]
Edge-coupled microstrip transmission lines exhibit even- and odd-mode characteristic impedances, according to which the different pairs have different widths, separations and lengths, shown in the figure as s1, w1, l1, s2, w2, l2, s3, w3, l3, etc. Three kinds of impedance affect the structure of the transmission medium: (i) the characteristic impedance Z0, (ii) the even-mode impedance Z0e, and (iii) the odd-mode impedance Z0o [8, 9]. The even- and odd-mode impedances of the coupled microstrip line are determined from the characteristic admittances J(i,i+1) of the J-inverters:

(Z0e)(i,i+1) = Z0 [1 + Z0 J(i,i+1) + (Z0 J(i,i+1))^2]    (1)

(Z0o)(i,i+1) = Z0 [1 - Z0 J(i,i+1) + (Z0 J(i,i+1))^2]    (2)

In both equations, i ranges from 0 to n. From these even- and odd-mode impedances, the width, spacing and physical length of the transmission lines can be calculated. Since our aim here is to present the microwave methods used in rice characteristic detection, further details of coupled microstrip transmission line filter design are given in [8] and [9].
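A minimal numerical sketch of equations (1) and (2), assuming Z0 = 50 Ω and hypothetical J-inverter admittance values; the filter synthesis that actually produces the J(i,i+1) values is covered in [8] and [9], not here.

```python
Z0 = 50.0  # characteristic impedance in ohms (assumed)

def even_odd_impedances(J, Z0=Z0):
    """Even/odd-mode impedances of one coupled-line section, eqs. (1) and (2)."""
    x = Z0 * J           # dimensionless product Z0 * J(i,i+1)
    z0e = Z0 * (1 + x + x**2)
    z0o = Z0 * (1 - x + x**2)
    return z0e, z0o

# Hypothetical J-inverter admittances (siemens) for a few coupled sections.
for J in (0.004, 0.006, 0.008):
    z0e, z0o = even_odd_impedances(J)
    print(f"J={J}: Z0e={z0e:.2f} ohm, Z0o={z0o:.2f} ohm")
```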
Microstrip Ring Resonator Sensor

According to Semouchkina et al. [11], microstrip ring resonators are widely used in many microwave devices, particularly in filters, mixers, oscillators and couplers. The interest of researchers and communication industry engineers in these structures has recently increased due to the application of ferroelectric thin-film substrates and high-temperature superconducting microstrip lines in ring resonator fabrication [12, 13]. The efficiency of a microwave structure design usually depends on its size, weight and quality factor; current microstrip ring resonator designs are small in size, light in weight and of high quality because of the superconductivity of the microstrips. Due to the sensitivity of the substrate to changes in DC electric fields, they are also easily tunable. To successfully integrate new microstrip ring components into communication systems, it is very important to have a clear understanding of the resonance processes in ring resonators, and to model their responses adequately. The ring resonator approach is limited, however: it cannot be used for arbitrary microstrip geometries or for a large dielectric constant of the substrate, and it is not appropriate for high frequencies. The geometrical view of the microstrip ring resonator is shown in Figure 3, in which the feed port, coupling port and resonator are tagged.

Fig. 3: Microstrip ring resonator [14]


3. Comparative Study of Microwave Approaches for Rice Characterization
In this section we study and compare three well-known microwave approaches to rice quality detection.
Cylindrical Slot Antenna Sensor


You et al. [15] have discussed a cylindrical slot antenna approach to detecting the quality of milled rice. They used this approach to determine quality on the basis of the percentage of moisture content present in the rice grain and the percentage of broken rice. The methodology is to measure the reflection coefficient of one or two slot antenna sensors on an infinite ground plane over the frequency range from DC to 6 GHz with a Vector Network Analyzer (VNA). Calibration equations were then generated to relate the reflection coefficient to the moisture content and the broken rice percentage. Figure 4 shows this approach for single-slot and coupled-slot antennas.


Fig. 4: (a) Single-slot sensor; (b) coupled-slot cylindrical antenna sensor [15]

Initially, the wave radiated from the slot antenna spreads through the mixture sample of rice and air. A low-frequency, long-wavelength signal is required to reduce the sensitivity of the wave to the air gaps between the rice grains; a shorter wavelength (higher frequency) is more sensitive to the air gaps in the rice sample, because broken rice has higher packing density and smaller air gaps than unbroken rice grain. Hence different moisture qualities depend on the air gaps in the rice sample and show different reflection coefficients on the Vector Network Analyzer. In the methodology, a frequency range from 1 GHz to 13.5 GHz was used; the rice samples were placed in an acrylic holder with a height of 30 mm and dried for 6 hours at 130 °C. The moisture content is then calculated from the weights of the rice grain before and after drying, m_Before_Dry and m_After_Dry [16]:

m.c. = (m_Before_Dry - m_After_Dry) / m_Before_Dry × 100%    (3)

Five varieties of rice were measured in the DC to 6 GHz frequency range [15]; the moisture content was measured between 12% and 16%.
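Equation (3) reduces to a one-line wet-basis moisture calculation, as in the sketch below; the sample weights used here are hypothetical.

```python
def moisture_content(m_before_dry: float, m_after_dry: float) -> float:
    """Wet-basis moisture content in percent, per equation (3)."""
    return (m_before_dry - m_after_dry) / m_before_dry * 100.0

# Hypothetical sample weights (grams) before and after oven drying.
print(moisture_content(25.0, 21.5))  # -> 14.0 (% moisture content)
```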
Wide Ring Sensor

Mun et al. [2] have discussed that a conventional microstrip ring with loose coupling exhibits a high insertion loss of about 10 dB at the resonant frequency (Chang and Hsieh [16]). When a signal is transmitted through a microstrip ring with high insertion loss, the transmitted signal becomes very weak, and low-cost detection circuitry usually cannot detect such signals. Figure 6 shows the microstrip wide-ring structure with SMA connectors for providing the feed through coaxial transmission lines, with ring width Wr, feed line width Wf, feed line length l, inner ring radius Ri and outer ring radius Ro.



Fig. 6: Wide-ring sensor [2]

Hence, the new microstrip ring sensor designed by the researchers has low insertion loss and a high reflection coefficient. It is designed to operate within a frequency range from 1 GHz to 3 GHz and exhibits low insertion loss for determining the percentage of broken rice grain. Both sensors operate at a relatively low frequency, within 1 GHz to 3 GHz, which reduces the cost compared with other devices that operate at higher frequencies. The microstrip ring is designed with a wide ring in order to provide a relatively large contact area with the rice grains. Also, the 50 Ω feed lines are directly coupled with the ring to realize low insertion loss at the resonant frequency. Moreover, calibration equations for both sensors at selected frequencies are developed based on the relationship between the percentage of broken rice (BR) and the corresponding measured magnitude and phase of the transmission coefficient. The minimum insertion loss for the wide-ring sensor is close to 0.67 dB (|T| equal to 0.93), while the minimum insertion loss for the coupled-line sensor is close to 1.81 dB (|T| equal to 0.81). The experimental results for the wide-ring microstrip sensor show that the magnitude |T| of the transmission coefficient increases gradually as the BR percentage increases within the frequency range from 1.80 GHz to 2.28 GHz but reverses from 2.38 GHz to 2.52 GHz, while the phase decreases with increasing BR percentage in the frequency range of 1.80 GHz to 2.60 GHz. The measurement frequencies were selected at the largest rate of change of |T| and phase with respect to frequency within the sensitive frequency range. The sensitive frequency range for the wide-ring sensor is within 1.80 GHz to 2.28 GHz and 2.38 GHz to 2.52 GHz, and for the coupled-line sensor within 1.80 GHz to 2.10 GHz and 2.20 GHz to 2.42 GHz. The calibration equations relate the percentage of broken rice to the magnitude and phase of the transmission coefficient for the wide-ring sensor at different frequencies; they are shown in Table 1.
Table 1. Calibration equations for broken rice calculation using wide ring sensor [2]


The principle of the wide-ring sensor is that, in free space, the resonant frequency of the microstrip ring depends mainly on the effective permittivity εeff and the mean circumference of the conductor ring. The resonant frequency of the ring sensor can be approximated using the mean ring radius r, the mode number n and the speed of light c:

f_r = n c / (2 π r √εeff)    (4)

The εeff is affected by parameters such as the dielectric constant of the substrate εr,sub, the ring width Wr, the substrate thickness h, and the dielectric constant εr' of the material that covers the ring sensor:

εeff = (εr,sub + εr')/2 + ((εr,sub - εr')/2) (1 + 12 h/Wr)^(-0.5)    (5)

When the ring sensor is covered only by air, εr' is equal to 1, whereas if the ring sensor is overlaid with rice grain, it produces a resonant frequency shift and a broadening of the resonance curve (a change in the transmission coefficient) compared to free space. The εr' relates to the material's capability of storing energy in the electric field (Nelson and Trabelsi [17]).
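A small numerical sketch of equations (4) and (5); the substrate and ring parameters below (εr,sub = 4.4, h = 1.6 mm, Wr = 5 mm, r = 12 mm) are hypothetical FR-4-like values chosen only to land in the sensor's 1-3 GHz band, not the dimensions used in [2].

```python
import math

def eps_eff(eps_sub, eps_cover, h, w):
    """Effective permittivity of the covered ring, equation (5)."""
    return (eps_sub + eps_cover) / 2 + \
           (eps_sub - eps_cover) / 2 * (1 + 12 * h / w) ** -0.5

def ring_resonant_freq(n, r, eps_sub, eps_cover, h, w):
    """Ring resonator resonant frequency, equation (4)."""
    c = 3e8  # speed of light, m/s
    return n * c / (2 * math.pi * r *
                    math.sqrt(eps_eff(eps_sub, eps_cover, h, w)))

# Resonance shift when air (eps_cover = 1.0) is replaced by rice (~3.0, assumed).
for eps_cover in (1.0, 3.0):
    f = ring_resonant_freq(n=1, r=12e-3, eps_sub=4.4, eps_cover=eps_cover,
                           h=1.6e-3, w=5e-3)
    print(f"eps_cover={eps_cover}: f_r = {f/1e9:.2f} GHz")
```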
Coupled Line Sensor
The microstrip coupled line has been widely used in filter applications. It can be employed as a band-pass filter with low insertion loss and can easily be designed for any desired centre frequency. Besides this, it offers attractive physical dimensions: small size, light weight and ease of fabrication. Although the microstrip coupled line exhibits many advantages, it had not previously been applied to the determination of the percentage of broken rice [2, 3]. The measurement parameter for the study of these sensors is the reflection/transmission method.
The method uses different orders of coupled transmission line design. Yeow et al. [3] have given a comparative design of 2nd-order and 4th-order microstrip coupled transmission line band-pass filters. As discussed, the sensitivity of the coupled-line filter is greater than that of the wide-ring microstrip filter. The 2nd-order coupled-line filter has lower sensitivity than the 4th-order filter: the 2nd-order filter has fewer coupled microstrip transmission lines and hence provides less area for rice grain sensing, whereas the 4th-order filter provides a larger sensing area. The 2nd- and 4th-order filter structure designs are shown in Figure 7, following [2, 3].

Fig. 7: (a) 2nd-order BPF; (b) 4th-order BPF, with width W, length L and coupling gap S [3]

The measurement technique for rice moisture content uses an E5071C Network Analyzer over the frequency range 1.5 GHz to 3 GHz. Before the measurements, two-port calibration of both ends of the coaxial cable was done using calibration kits. Finally, to measure the quality, the milled rice powder sample is placed in an acrylic holder on the sensor to a height of 15 mm. The experimental setup for both the wide-ring and coupled microstrip sensors is shown in Figure 8.

Fig. 8: Experimental setup for the wide-ring and coupled microstrip line resonators [2]

The experiments show that the 4th-order filters have higher sensitivity than the 2nd-order filters. The following conclusions were drawn in [3]:
- Higher-order filters have a higher density of coupled lines than lower-order filters.
- The reflection coefficient Γ and the transmission coefficient T depend on the order of the filter design.
- Different substrate properties are used for PCB-based microstrip filter design; each substrate shows different transmission and reflection coefficient properties.
- Rice powder should be used in place of rice grain for measuring the moisture content.

We have thus studied the different microwave sensor design approaches for rice quality detection; their comparison is given in Table 2.
Table 2. Microwave sensor model comparison

Parameter              | Cylindrical slot antenna model         | Wide-ring microstrip model                   | Coupled microstrip line
Reflection coefficient | Measured for the coupled-slot antenna  | Based on the ring reflection from rice grain | Highest among the three
Frequency range        | 1 GHz to 13.5 GHz                      | 1 GHz to 3 GHz                               | 1.5 GHz to 3 GHz
Sensitivity            | Lowest                                 | Higher than the slot antenna                 | Highest among the three
Cost                   | High                                   | Low                                          | Low
Broken rice (%)        | 0-100                                  | 0-20, 0-40, 0-100                            | 0-20, 0-40, 0-100
Average error (%)      | Unknown                                | 2.32, 3.79, 8.97                             | 9.57, 9.59, 9.88
Complexity             | Simple                                 | Simple                                       | Simple
Speed                  | Fast                                   | Fast                                         | Fast
The above table gives an overview of the different microwave sensor approaches for rice quality detection and indicates that the microstrip transmission line approach is the better choice for designing rice quality detection sensors.
4. Conclusion
The emerging technologies for measuring food quality are making human work easier and faster. Rice quality measurement is a pressing problem in the current industry, because conventional methods are slow and difficult to implement. The slot antenna pair was probably the first microwave broken-rice detection technique, and it works for both moisture content and broken rice measurement; its drawback is its frequency range, as it operates at higher frequencies, which makes it less suitable for rice detection. The other approach is the microstrip approach, comprising the wide-ring sensor and the coupled-line transmission approach. Both work well, but the coupled-line approach is more sensitive than the wide-ring microstrip. These microstrip line approaches use an average frequency of 1-3 GHz, which makes them less expensive. Hence we conclude that the microstrip line approach should be used to build further sensors. In future, a 5th-order edge-coupled transmission line could be designed for finding the moisture content using a rice powder sample, as it should result in an increased reflection coefficient.
Acknowledgment
We would like to thank our guide Mr. Navaid Zafar Rizvi, School of ICT, Gautam Buddha University, who guided us to complete this work, and everyone else who supported us in achieving this target.

REFERENCES:
[1] Mukesh Kumar, Rohini Saxena and Arman Alam Ansari, A Survey on Theoretical Analysis of Different Planar Transmission Lines, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 4, April 2013.
[2] Hou Kit Mun, Kok Yeow You and Mohamad Ngasri Dimon, Broken rice detection based on microwave measurement technique using microstrip wide-ring sensor and microstrip coupled-line sensor, Australian Journal of Crop Science, 2013.
[3] You Kok Yeow, Siti Nurul Ain Jalani, Zulkifly Abbas and You Li Ling, Application of Bandpass Filter as a Sensor for Rice Characterisation, International Conference on Computer Application and Industrial Electronics (ICCAIE), 2010.
[4] M. Kent, The use of strip-line configuration in microwave moisture measurement, Journal of Microwave Power and Energy, vol. 7, pp. 185-193.
[5] K. Khalid and Z. Abbas, A microstrip sensor for determination of harvesting time for oil palm fruits (Tenera: Elaeis Guineensis), Journal of Microwave Power and Electromagnetic Energy, vol. 27, pp. 3-10, 1992.
[6] Y. F. Tan, Y. Z. Xing, J. X. Li, S. B. Yu, C. G. Xu and Qifa Zhang, Genetic bases of appearance quality of rice grains in Shanyou 63, an elite rice hybrid, Springer-Verlag, 2000.
[7] Susan Reidy, Striving for paddy, milled rice quality, 2012. [Online]. Available: http://www.world-grain.com.
[8] Reinhold Ludwig and Gene Bogdanov, RF Circuit Design: Theory and Applications, 2nd edition, Dorling Kindersley (India) Pvt. Ltd., 2011.
[9] Pawan Shakdwipee, Kirti Vyas Design Edge-Coupled Stripline Band Pass Filter at 39 GHz International Journal of Emerging Technology and Advanced
Engineering, 2013.
[10] Yuan-Wei Yu, Jian Zhu, Yi Shi and Li-Li Jiang, A 12-16 GHz microelectromechanical system-switchable bandpass filter, Journal of Micromechanics and Microengineering, 2012.
[11] Elena Semouchkina, Wenwu Cao, Raj Mittra and Wenhua Yu, Analysis of Resonance Processes in Microstrip Ring Resonators by the FDTD Method, John Wiley & Sons, Inc., 2001.

[12] L. Giauffret, J. M. Laheurte and A. Papiernik, Study of various shapes of the coupling slot in CPW-fed microstrip antennas, IEEE Trans. Antennas Propagation, 1997.
[13] C.Y. Huang and K.L. Wong, Coplanar waveguide-fed circularly polarized microstrip antenna, IEEE Trans Antennas Propagation, 2000.
[14] S. N. Mathad, R. N. Jadhav and Vijaya Puri, Microwave studies by perturbation of Ag thick film microstrip ring resonator using superstrate of bismuth strontium manganites, Microelectronics International, Vol. 30, 2013.
[15] K. Y. You, J. Salleh, Z. Abbas and L. L. You, Cylindrical Slot Antennas for Monitoring the Quality of Milled Rice, PIERS Proceedings, Suzhou, China, September 12-16, 2011.
[16] Chang K, Hsieh LH Microwave ring circuits and related structures, 2nd edn. John Wiley & Sons, Hoboken, New Jersey, 2004.
[17] Nelson SO, Trabelsi S Permittivity measurements and agricultural applications In: Kupfer K (ed) Electromagnetic aquametry. Springer, Berlin, 2005.
[18] Mudrik Alaydrus, Designing Microstrip Bandpass Filter at 3.2 GHz, International Journal on Electrical Engineering and Informatics, 2010.
[19] Min Zhang, Yimin Zhao, Wei Zhang The simulation of microstrip Band Pass Filters based on ADS Antennas, Propagation & EM Theory (ISAPE), 2012.
[20] Chi-Feng Chen, Ting-Yi Huang, and Ruey-Beei Wu, Design of Microstrip Bandpass Filters with Multiorder Spurious-Mode Suppression IEEE transactions on
microwave theory and techniques, 2005.
[21]Ansoft Designer. Vers. 2.2. (Internet-Adress: http://www.ansoft.com). Computer software. www.elektronikschule.de
[22] M.A. Othman, M. Sinnappa, M.N. Hussain, M.Z.A. Abd. Aziz, M.M. Ismail Development of 5.8 GHz Microstrip Parallel Coupled Line Bandpass Filter for
Wireless Communication System International Journal of Engineering and Technology (IJET), 2013.
[23] Pawan Shakdwipee, Kirti Vyas Design Edge-Coupled Stripline Band Pass Filter at 39 GHz International Journal of Emerging Technology and Advanced
Engineering, 2013.
[24] R. Levy, S. B. Cohn, A History of Microwave Filter Research, Design, and Development, Microwave Theory and Techniques, IEEE Transactions, 1984.
[25] A. R Othman, I.M. Ibrahim, M. F. M. Selamat, M. S. A. S. Samingan, A. A. A. Aziz, H. C. Halim, 5.75 GHz microstrip bandpass filter for ISM band, Applied
Electromagnetics, APACE Asia-Pacific Conference ,2007.
[26] S. Seghier, N. Benahmed, F. T. Bendimerad, N. Benabdallah, Design of parallel coupled microstrip bandpass filter for FM Wireless applications, Sciences of
Electronics, Technologies of Information and Telecommunications (SETIT), 2012.
[27] John T. Taylor and Qiuting Huang, CRC Handbook of Electrical Filters, CRC Press, 1997.
[28]Man&Tel Co., Ltd., MW-2000 Microwave Communication Trainer, Man&Tel Co., Ltd., 2005.
[29] David M. Pozar, Microwave Engineering, John Wiley and Sons, Inc., Third edition, 2005.


A Survey of Power Supply Techniques for Silicon Photo-Multiplier Biasing
R. Shukla¹, P. Rakshe², S. Lokhandwala¹, S. Dugad¹, P. Khandekar², C. Garde², S. Gupta¹
¹ Tata Institute of Fundamental Research, Mumbai
² Vishwakarma Institute of Information Technology, Pune
pankaj.rakshe@gmail.com

Abstract: In the past few years the Silicon Photo-multiplier (SiPM) has emerged as a new detector in various applications due to its promising characteristics for low-light detection. However, the SiPM has certain limitations, among which the temperature dependence of its gain is an important parameter that limits its application to temperature-controlled or indoor environments. A change in temperature changes the breakdown voltage and hence the gain; the biasing voltage therefore needs to be trimmed according to temperature to maintain a constant gain. This paper explores different power supply techniques that have been used for the SiPM. Many commercial power supplies are compared, and the different approaches tried by various research groups for maintaining stable gain are discussed.
Keywords: Silicon Photomultiplier, SiPM, Power Supply, Temperature Compensation, Bias Correction of SiPM, Gain Stabilization, Automatic Gain Correction.
I. INTRODUCTION
Avalanche Photo-diodes (APDs) have been widely used in photon detection for a very long time. In the past few years, the Silicon Photo-Multiplier (SiPM) has emerged as a new detector in applications where very low-density photon detection is required, such as high energy physics experiments, medical imaging, etc. The promising characteristics and performance of the SiPM have proved it a solid-state alternative to the traditional Photo Multiplier Tube (PMT). The SiPM offers features such as compact size, single photon resolution, high gain (~10^6), fast response (~100 ps), high photon detection efficiency (~60%), insensitivity to magnetic fields and a relatively small operating voltage (~100 V) compared with the PMT [1-3]. The SiPM also has certain limitations, such as dark counts (~0.1 MHz/mm^2), optical cross-talk and after-pulsing, along with a high temperature dependence of gain (3%-10% /°C) compared to the PMT. These limitations are being rapidly improved in newer versions of the SiPM. However, for a strong signal pulse with a high number of photons, the functionality of the device is not much affected by high dark counts [4].
The gain of the SiPM is a function of the electric field applied when biasing the SiPM, which in turn depends on the operating conditions such as biasing voltage, operating temperature, etc. Precise control and stability of the biasing voltage are necessary for setting the desired gain. The SiPM also has a temperature coefficient for the breakdown voltage; a typical temperature coefficient of the breakdown voltage for a Hamamatsu SiPM is 50 mV/°C [5].
For a fixed SiPM biasing voltage, when the temperature rises the gain decreases, due to the positive temperature coefficient of the breakdown voltage. For a typical overvoltage, a small change in SiPM temperature causes the gain to vary by several percent at a fixed biasing voltage. In summary, the temperature effects on the SiPM, along with precise control of the biasing voltage, are the important parameters for proper operation of the SiPM in stable-gain mode.
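The required correction follows directly from this coefficient: keeping the overvoltage constant means tracking the breakdown voltage with temperature. A minimal sketch, assuming the 50 mV/°C coefficient quoted above and hypothetical values for the breakdown voltage and overvoltage:

```python
def corrected_bias(temp_c, v_bd_ref=65.0, t_ref_c=25.0,
                   dvbd_dt=0.050, v_over=2.5):
    """Bias voltage that keeps the overvoltage (and hence the gain) constant.

    v_bd_ref: breakdown voltage at t_ref_c (hypothetical, volts)
    dvbd_dt:  breakdown temperature coefficient, 50 mV/degC per [5]
    v_over:   desired constant overvoltage (hypothetical, volts)
    """
    v_bd = v_bd_ref + dvbd_dt * (temp_c - t_ref_c)
    return v_bd + v_over

for t in (15.0, 25.0, 35.0):
    print(f"{t:4.1f} degC -> bias {corrected_bias(t):.3f} V")
```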
Different power supply techniques and approaches are discussed in this paper, which is organized as follows: Section II discusses commercially available power supplies that can be used with the SiPM. Section III presents some custom power supplies made for SiPM operation, testing and characterization. Section IV presents the different approaches taken by various research groups to achieve stable-gain operation of the SiPM under varying operating conditions. Section V concludes the paper and discusses the best possible approach for a SiPM power supply implementation.
II. COMMERCIALLY AVAILABLE OPTIONS FOR SIPM
Out of the many commercially available power supplies, Keithley's Model 6487 [6] is one of the best options and is widely used for biasing SiPMs. It is a single-channel voltage source with a built-in pico-ammeter. The output voltage can be set from 0.2 mV to 505 V with step sizes of 0.2 mV in the 10 V range, 1 mV in the 50 V range and 10 mV in the 505 V range. The maximum ripple voltages in these ranges are 50 μV p-p in the 10 V range, 150 μV p-p in the 50 V range and 1.5 mV p-p in the 505 V range. The pico-ammeter is capable of measuring currents from a minimum of 0.01 pA to a maximum of 20 mA in different ranges. It has automated voltage sweeps for I-V characterization, which makes it ideal for testing SiPMs. The interfacing options (IEEE-488 and RS-232) also make this power supply easy to use in automated test and measurement systems such as LabVIEW.

Agilent Technologies has a power supply (Model 6614C) [7] which is similar to Keithley's Model 6487, but with less precise specifications. Its single-channel output voltage can be set from 0 V to 100 V with a maximum current capacity of 0.5 mA. The programming step size for the output voltage is 25 mV, with a maximum ripple voltage of 5 mV p-p. It also has a current meter which can measure current with an accuracy of 2.5 μA, along with IEEE-488 and RS-232 interfacing options. The Tektronix Model PWS4721 [8] is capable of supplying output voltages between 0 V and 72 V with a maximum current capacity of 1.2 A; it has a maximum output ripple voltage of 3 mV p-p, and the output voltage can be set with a step size of 1 mV. Typically, commercially available power supplies capable of giving output voltages greater than 60 V have less precise specifications than [6], since they are used in general-purpose applications which do not require very precise control of the output voltage.
III. CUSTOM POWER SUPPLIES MADE FOR SIPM OPERATION AND TESTING
CAEN has a complete SiPM development kit [9] which can be used for testing and characterization of SiPMs, with provision for data logging, as shown in Fig. 1. The parameters of the SiPM under test, such as gain, photon number resolving power and dark count rate at different photo-electron thresholds, can be found with the help of this kit.

Fig. 1 Block diagram of CAEN Silicon Photo-multiplier development kit

The module SP5600 used in the development kit [9] is a 2-channel Power Supply and Amplification Unit (PSAU) [10] which can mount two SiPMs. The PSAU supplies the bias voltage for the SiPMs and also incorporates a feedback circuit that stabilizes the gain of the SiPM against temperature variations. The PSAU also has an amplifier with variable gain up to 50 dB, one leading-edge discriminator per channel and a coincidence circuit for event trigger logic, with a USB provision for parameter configuration via software. Fig. 2 shows the block diagram of the SP5600 PSAU. The PSAU is capable of providing a biasing voltage up to 120 V with a resolution of 1.8 mV, and temperature feedback with a resolution of 0.1 °C. This unit is meant for indoor use only.

Fig. 2 Block Diagram of CAEN 5600 Power Supply and Amplification Unit (PSAU)

AiT Instruments also has a complete SiPM testing and data-logging system which uses a SiPM base for SiPM mounting, a SiPMIM16 interface module [11] and an MDU20-GI16 integrator and interface control module [12]. This setup uses the AiT Instruments HV80 as the power supply for the SiPM [13]. The HV80 can supply a 10 V to 80 V adjustable output voltage controlled by a 0 to 2.5 V control voltage, and can supply 4 mA output current with adjustable overcurrent shutdown and output voltage and current monitoring features. But this unit too is meant to be used only in an indoor, temperature-controlled environment.
Other than the CAEN and AiT Instruments SiPM power supplies, there are some custom power supplies that are meant to be used with a particular SiPM only. The Excelitas Lynx SiPM module Lynx-A-33-050-T1-A [14] uses the Excelitas C30742 series SiPM and consists of a stable power supply for SiPM operation, a thermoelectric cooler for temperature control and a low-noise amplifier. This unit takes +5 V as its supply voltage and directly gives the amplified output pulse in the range of 0-5 V.

Another example is the SensL MicroM-EVB and MicroB-EVB boards, which are to be used with the SensL L-series, M-series and B-series SiPMs. These boards have an inbuilt power supply for biasing the SiPM and a pre-amplifier for amplification of the SiPM output pulse. The power supply module in the MicroM-EVB board gives a fixed output voltage of 29.5 V, and that in the MicroB-EVB board an output voltage of 27 V, for the corresponding series of SiPM.
IV. OTHER APPROACHES FOR SIPM BIASING
The power supplies and techniques for SiPM biasing discussed in the previous sections are commercially available, and the majority of them are meant to be used in a temperature-controlled environment, since the SiPM gain has a significant temperature dependence (3%-10% /°C). These gain variations are critical when the SiPM is operated at large gain (~10^6). For stabilizing the gain of the SiPM against temperature variations, the temperature can be kept constant or controlled while operating in a closed environment. But when the SiPM is to be used in uncontrolled environmental conditions, such as outdoor applications or in space, temperature control is not an option. The other approach to this problem is to tune the biasing voltage or the SiPM dark current with the changing conditions.
SiPM Dark Current Control
In [15], it is shown that the relation between the dark current of the SiPM and temperature can be approximated by an exponential function, similar to the behaviour of a thermistor. Therefore, the gain of the SiPM can be stabilized by using a thermistor to compensate for changes due to temperature variations, by indirectly controlling the biasing voltage that appears across the SiPM. But this approach is limited to small temperature ranges, and drifting of the amplifier gain is possible for temperature variations outside the range. A temperature variation of -5% /°C, in the range of -18 °C to -8 °C, was minimized to a value of 0.3% /°C.
As the dark current of the SiPM is a function of the bias voltage and vice versa, the gain can be stabilized by controlling the current that flows through the SiPM. This approach is explored in [16] by designing a voltage-controlled current sink using NTC thermistors that can control the current flowing through the SiPM. But this approach is limited to low light intensities and near-room-temperature applications where the photon rate is smaller than the dark count rate. Controlling the dark current through the SiPM is a difficult task and makes the system more complex; the gain variation, however, is reduced to 6% in the temperature range of 5.1 °C to 33.3 °C.
Bias Voltage Control
Another approach is to control the biasing voltage directly [17], so that the effects of temperature variations can be compensated; this is relatively less complex than controlling the dark current. In [17], a temperature-to-voltage converter module is used to control the bias voltage of the SiPM; Fig. 4 shows the block diagram of the scheme. This scheme adjusts the bias return potential according to the temperature variations to control the apparent bias voltage across the SiPM, so that the applied over-voltage remains constant, achieving a constant-gain condition. Over a 3 °C temperature change, a gain variation of about 33% without compensation is reduced to 1% with the passive compensation circuit.
A variation of the scheme in [17] is proposed by Licciulli et al. in [18], which uses a blind SiPM as a temperature sensor for correcting the biasing voltage of the other, light-sensitive SiPMs, as shown in Fig. 3. In [18], a SiPM with no incident light (a blind SiPM) is used as a temperature sensor, and its gain is monitored by measuring the amplitude of the output dark pulses. This temperature detector is included in a negative feedback loop which modifies the bias voltage automatically so that the amplitude of the dark pulses remains constant, which in turn keeps the gain constant. The advantage of this scheme is that it corrects the biasing voltage irrespective of knowledge of the SiPM parameters; the only requirement is that the parameters of the blind SiPM and the light-sensitive SiPMs should match. Its biggest disadvantage is that it is very complex to implement. A dark pulse amplitude variation of 10% was reduced to 2% in the 20 °C to 30 °C temperature range.
The approaches discussed in [15-18] use control of the bias voltage or dark current via operational-amplifier-based circuits to correct the bias voltage from temperature sensor feedback, but this increases the complexity of the biasing circuit and is limited to temperature variations around room temperature. A. Gil et al. [19] proposed a slightly different approach that controls the external power supply. A high-resolution DAC is used for controlling the high-voltage supply output which provides the bias voltage. A high-resolution ADC is used for feedback, and all control of the ADC and DAC is done by a micro-controller, which also reads the temperature from a temperature sensor. Gain variations of 27% are reduced to 0.5% in the temperature range of 20 °C to 30 °C.
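The micro-controller loop of [19] can be sketched as follows; the hardware-access functions (read_temperature, adc_read, dac_write), the reference values and the proportional trim are hypothetical placeholders, since the actual interfaces and control law are not given here.

```python
import time

# Hypothetical hardware-access stubs; the real calls depend on the board in [19].
def read_temperature():
    return 27.3            # degC from the temperature sensor (placeholder value)

def adc_read():
    return 67.55           # measured bias voltage, volts, via the high-res ADC

def dac_write(volts):
    print(f"DAC set to {volts:.3f} V")  # drive the high-voltage supply via the DAC

DVBD_DT = 0.050            # breakdown-voltage temperature coefficient, V/degC [5]
T_REF, V_REF = 25.0, 67.5  # assumed reference temperature and bias for desired gain

for _ in range(3):         # the real controller would loop indefinitely
    t = read_temperature()
    target = V_REF + DVBD_DT * (t - T_REF)  # bias keeping the overvoltage constant
    error = target - adc_read()             # ADC feedback against the target
    dac_write(target + 0.5 * error)         # simple proportional trim (assumed)
    time.sleep(0.1)
```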



Fig. 3 A blind SiPM as a temperature sensor for correcting biasing voltage

In [20], the relation between gain, temperature and bias voltage is represented in the form of an equation, and an external feedback system is implemented using LabVIEW. This system takes the temperature input from a resistance thermometer attached to the SiPM assembly and the gain input from the SiPM data acquisition system. The corrected biasing voltage is calculated from these two inputs to the LabVIEW system, and the power module is controlled accordingly to give the corrected biasing voltage. A maximum gain deviation of 1% is achieved in the temperature range of 0 °C to 40 °C with this approach.
An extension of the system in [20] is presented by the same authors in [21], where a self-calibrating gain stabilization method is used for multiple SiPMs. The parameters for stable-gain operation are found for one or a few SiPMs from a group, and these parameters are applied to all detectors in the group. When any SiPM does not have the same parameters, its gain is different; the system then recalculates the parameters for only those SiPMs whose gain does not match the gain of the calibrated SiPMs, so that the gain of all detectors is constant. The calibration process is triggered each time the temperature changes. The value of the gain does not differ by more than 1% from the set value [21]. This method is useful in systems with a large number of detectors, where it can reduce the calibration time significantly. Another advantage is that one need not know the parameters of all detectors; the parameters of only one or a few detectors from a group of identical detectors suffice for operating such large systems.
Temperature Control
Controlling the biasing voltage according to temperature variations works for keeping the gain constant, but the noise level of the SiPM also increases with increasing temperature, and drifting of the amplification gain of the SiPM pulse amplifier is also possible. A low signal-to-noise ratio (SNR) results in poor energy resolution and errors in event localization in scintillator-based systems [22]; cooling the SiPM, by contrast, increases the SNR and the energy resolution. In [22], a recirculating cooled clear optical liquid is inserted between two optical windows of the SiPM, providing cooling and light conduction in one module. This method is efficient for increasing the SNR of the SiPM output, and at the same time it is useful for keeping the gain constant by controlling the temperature. More research is ongoing on cooling the SiPM structure itself.
The use of a Peltier cooler for maintaining the SiPM temperature is demonstrated in [23]; temperature control to within approximately 10 °C can be achieved. A complete multichannel SiPM power supply is designed in [23] which can serve 18 SiPM channels simultaneously. This supply can regulate the output voltage from 0 V to 100 V and can supply 100 μA of current per channel. The output voltage is settable with a resolution of 25 mV and a stability of 5 mV. This type of supply is useful for multi-channel systems where multiple SiPM detectors are required to work simultaneously.
V. CONCLUSION AND DISCUSSION
The power supply approaches discussed in Sections II and III are available commercially, but their main disadvantage is that each unit can be used with only a single SiPM channel and the cost of the instrument is very high. A custom power supply unit [23] with multiple-channel capability, at relatively lower cost than the commercially available options, is the way ahead. Control of the bias voltage (or dark current) of the SiPM with respect to temperature can be implemented in such a custom power supply unit with the help of a temperature sensor and a controlling device such as a micro-controller or an FPGA. A complete feedback system then becomes possible for controlling the biasing voltage of the SiPM under varying temperature conditions. This approach will help establish the SiPM as a more accurate sensor, suitable for use in different environmental conditions.

REFERENCES
[1] P. Buzhan et al., Silicon photomultiplier and its possible applications, Nucl. Instrum. Meth. A, vol. 504, pp. 48-52, 2003.
[2] P. Finocchiaro et al., Features of Silicon Photo Multipliers: Precision Measurements of Noise, Cross-Talk, Afterpulsing,
Detection Efficiency, IEEE Trans. Nucl. Sci., vol. 53, No. 3, pp. 1033-1041, 2009.

[3] R. Shukla et al., A Micron Resolution Optical Scanner for Characterization of Silicon Detectors, Rev. Sci. Instru., vol. 85,
023301, 2014.
[4] P. Bohn et al.,Radiation Damage Studies of Silicon Photomultipliers, Nucl. Instrum. Meth. Phys. Res. Sect. A, vol. 598, pp.
722-736, 2009.
[5] Hamamatsu MPPC Multi Pixel Photon Counter, January 2014 [Online]. Available: http://sales.hamamatsu.com/assets/pdfs/catsandguides/mppc_kapd0002e03.pdf
[6] Keithley 6487 Picoammeter/Voltage Source [Online]. Available:
http://www.keithley.in/products/dcac/voltagesource/application/?mn=6487
[7] Agilent Technology 6614C 50W Power supply [Online]. Available: http://www.home.agilent.com/en/pd-838353-pn-6614C/50-
watt-system-power-supply-100v-05a?nid=-35716.384315.00&cc=IN&lc=eng
[8] Tektronix PWS4721 Power Supply [Online]. Available:
http://in.tek.com/sites/tek.com/files/media/media/resources/PWS4000_Series_Programmable_DC_Power_Supplies_Datasheet_3
GW-25253-5_1.pdf
[9] CAEN SP5600 SiPM Development Kit [Online]. Available:
http://www.caen.it/jsp/Template2/CaenProd.jsp?parent=61&idmod=719
[10] CAEN DS2626 Power Supply and Amplification Unit [Online]. Available:
http://www.caen.it/csite/CaenProd.jsp?showLicence=false&parent=61&idmod=719
[11] AiT Instruments SiPMIM16 SiPM Interface Module [Online]. Available: http://www.ait-
instruments.com/SiPMIM16_p/sipmim16.htm
[12] AiT Instruments MDU20-GI16 Integrator and Interface Control Module [Online] Available: http://www.ait-
instruments.com/MDU20_GI16_p/mdu20gi16.htm
[13] AiT Instruments HV80 Precision Regulated Power Supply [Online] Available: http://www.ait-
instruments.com/HV80_p/hv80.htm
[14] Excelitas Lynx SiPM module Lynx-A-33-050-T1-A [Online] Available:
http://www.excelitas.com/Downloads/DTS_SiPM_Module.pdf
[15] H. Miyomoto et al., SiPM Development and Application for Astroparticle Physics Experiments, Proceedings of the 31st ICRC, 2009.
[16] Z. Li et al., A Gain Control and Stabilization Technique for Silicon Photomultipliers in Low-Light-Level Applications around
Room Temperature, Nucl. Instrum. Meth. Phys. Res. Sect. A, vol. 695, pp. 222-225, 2012.
[17] R. Bencardino and J. Eberhardt, Development of a Fast-Neutron Detector with Silicon Photomultiplier Readout, IEEE Trans.
Nucl. Sci., vol. 56, No. 3, 2009.
[18] Francesco Licciulli, Ivano Indiveri, and Cristoforo Marzocca, A Novel Technique for the Stabilization of SiPM Gain Against
Temperature Variations, IEEE Trans. Nucl. Sci., vol. 60, No. 2, 2013.
[19] A. Gil, J. Rodriguez, V. Alvarez, J. Diaz, J. J. Gómez-Cadenas and D. Lorca, Programmable Power Supply System for SiPM Bias, IEEE Nucl. Sci. Symposium Records, NP2.S-87, 2011.
[20] P. Dorosz, M. Baszczyk, S. Glab, W. Kucewicz, L. Mik and M. Sapor, Silicon Photomultipliers Gain Stabilization by Bias Correction for Compensation of the Temperature Fluctuations, Nucl. Instrum. Meth. Phys. Res. A, vol. 718, pp. 202-204, 2013.
[21] P. Dorosz, M. Baszczyk, S. Glab, W. Kucewicz, L. Mik and M. Sapor, Self-Calibrating Gain Stabilization Method for Applications Using Silicon Photomultipliers, IEEE, 2013.
[22] A. Stolin, S. Majewski, R. Raylman, Novel Method of Temperature Stabilization for SiPM-Based Detectors, IEEE Trans. Nucl.
Sci., vol. 60, no. 5, 2013.
[23] J. Anderson, J. Freeman, S. Los, J. Whitemore, Upgrade of the CMS Hadron Outer Calorimeter with SIPMs, Elsevier Physics
Procedia, Vol. 37, pp. 72-78, 2012.




Counter Ion Effects in AOT Systems and New Fluorocarbon-Based Micro Emulsion Gels
Narjes Nakhostin Maher, Mohammad Ali Adelian
Maher_narges@yahoo.com

Abstract: Micro emulsions have important applications in various industries, including enhanced oil recovery, reactions, separations, drug delivery, cosmetics and foods. We investigated two different kinds of water-in-oil micro emulsion systems: AOT (bis(2-ethylhexyl) sulfosuccinate) micro emulsions with various counter ions, and perfluorocarbon-based micro emulsion gels with triblock copolymers. In the AOT systems, we investigated the viscosity and inter-droplet interactions in Ca(AOT)2, Mg(AOT)2 and KAOT micro emulsions, and compared our results with the commonly studied NaAOT/water/decane system. We attribute the differences in behavior to the different hydration characteristics of the counter ions, and we believe that the results are consistent with a previously proposed charge fluctuation model. Perfluorocarbons (PFCs) are of interest in a variety of biomedical applications as oxygen carriers. We have used the triblock copolymer Pluronic F127 to modify the rheology of PFC-based micro emulsions; we have been able to form thermoreversible PFOB (perfluorooctyl bromide)-based gels, and have investigated the phase stability, rheology, microstructure, interactions and gelation mechanism using scattering, rheometry and microscopy. Finally, we attempted to use these data to understand the relationship between rheology and structure in soft attractive colloids.
Keywords: AOT (bis(2-ethylhexyl) sulfosuccinate), perfluorocarbon (PFC), perfluorooctyl bromide (PFOB), nuclear magnetic resonance (NMR), poly(ethylene oxide) (PEO), sodium (Na), calcium (Ca)
INTRODUCTION
This work explores two different types of water-in-oil micro emulsion systems. The first system involves a charged surfactant, AOT (bis(2-ethylhexyl) sulfosuccinate), which has commonly been used as a model system for the study of water-in-oil micro emulsions. We have investigated the effect of counter-ion type on the solution interactions and viscosity, and have used our results to test a previously proposed charge fluctuation model describing inter-droplet interactions in this system. The second micro emulsion system we investigated uses a perfluorocarbon (PFC) as the oil. While other groups have reported stable PFC-based micro emulsions, these systems have all been low-viscosity liquids. We wished to create stable, elastic gels containing PFCs, and used triblock copolymers to modify the rheology of the PFC micro emulsion and form thermoreversible gels. We have attempted to use the results on this system to gain an understanding of the relationship between rheology and structure in soft attractive colloids.
1.1 Micro emulsions:
The term micro emulsion was first used to describe the transparent system obtained upon adding an alcohol to a coarse macro emulsion stabilized by an ionic surfactant [2]. One of the best recent descriptions of micro emulsions is given by Attwood [3]: a micro emulsion is a system of water, oil, and amphiphilic compounds (surfactant and co-surfactant) which is a transparent, single, optically isotropic, and thermodynamically stable liquid. The great potential for practical applications of micro emulsions has stimulated a great deal of research in the field, especially for applications in enhanced oil recovery in the 1970s. Schulman and coworkers were the first to investigate these transparent liquids [1, 4-9]. The microstructure, size, shape, rheology and dynamics of micro emulsions have been characterized by various techniques such as scattering, viscometry, rheometry, X-ray diffraction, ultracentrifugation, cryo-electron microscopy, electrical birefringence and nuclear magnetic resonance (NMR) [10]. One
of the most significant developments in the field was a theoretical statistical-mechanical description of micro emulsion systems, and
the demonstration that micro emulsions are thermodynamically stable phases because of their ultralow interfacial tension and highly
flexible interfacial layer [11-18]. By contrast, emulsion systems are only kinetically stable and often phase separate after a short time.
The other main differences between micro emulsions and emulsions are the size and the shape of dispersed phase. Micro emulsion
droplets are nano scale, typically 10-200 nm, much smaller than emulsion particles (1-20 μm) and also smaller than the wavelength of
visible light, so that the micro emulsions systems are transparent. The microstructure of micro emulsions can evolve from droplet-like
to bicontinuous structures, whereas emulsions consist of large coarse spherical droplets [19]. Due to these unique properties and

characteristics, micro emulsions have been used in various industries. Research on micro emulsion-based flooding techniques in enhanced oil recovery began in the 1970s; however, the potential of their use was overestimated because of the high expense of surfactants and the low oil prices at the time [20-25]. Cheaper production of surfactants was needed to make this technique affordable [26-29]. Micro
emulsions can solubilize both hydrophilic and hydrophobic reactants at high concentration, so they have been used as a novel medium
for chemical synthesis as micro reactors or nano reactors, distinct from reactions in a bulk solvent [10].
The reaction parameters and chemical reactivity can be determined by the microstructure of micro emulsion, the properties of solvent,
surfactant and cosurfactant [30-35]. Micro emulsion reaction systems have been used for spectroscopic analysis, preparations of
mesoporous structure materials [36-38], synthesis of polymeric particles [37, 39-41], synthesis of ultrafine metal, metal oxide, and
semiconductor particles [42-47], and even used in supercritical fluids [48-50] and enzyme-catalyzed reactions [51-53]. Due to their
thermodynamic stability, bioavailability and topical penetration of poorly soluble drugs enhanced by the amphiphiles, micro
emulsions have gained an important role as drug delivery vehicles [19, 54-56] and in cosmetics [57-59]. This application has inspired
research on the use of novel highly efficient and nontoxic surfactants and cosurfactants. The transparent nature and ability to solubilize
large amounts of volatile organic compounds, like alcohol in fragrance formulations, make micro emulsions an important precursor in
cosmetic formulations, where they are sometimes referred to as micro emulsion gels [60-62]. Some foods contain micro emulsions
naturally, and the preparation of foods nearly always requires the incorporation of lipids which exist as micro emulsions in foods.
Micro emulsions can also be used as liquid membranes for separation due to their significantly large interfacial area and fast
spontaneous separation, extracting organic substances, metals, or proteins from dilute streams [63-69]. The ultralow interfacial
tensions and the high solubilization power of both hydrophilic and hydrophobic substances make micro emulsion an excellent medium
in textile detergency [70-73].
In the above application processes, the rheological properties and structure are important factors. These impact the stability, reactivity,
bioavailability, penetration, separation efficiency, fine particle quality, and so on. Viscosity is a macroscopically observable
parameter, very important in oil recovery, drug delivery, reaction, cosmetics, and separations. The rheological properties, shape and
size of the micro emulsion structure are basically determined by the surfactant and solvent. The selection of surfactant and solvent is therefore very important, and factors affecting rheology and structure, such as the chain length of the solvent and the ion size and charge of the surfactant, have attracted enormous interest from researchers.
1.2 Per fluorocarbons and Fluorinated Amphiphiles
1.2.1 Per fluorocarbons (PFCs)
When the hydrogen atoms in hydrocarbons are replaced by fluorine completely, the products are called per fluorocarbons (PFCs), or
simply fluorocarbons [74-76]. Hydrogenated amphiphiles can also be fluorinated fully or partially to form per fluorinated amphiphiles
or partially fluorinated amphiphiles [74]. Due to its strong electronegativity, fluorine shows an unusually high potential of ionization
and very low polarizability. Because the C-F bond is among the most stable single covalent bonds and the fluorine atom's radius is much larger than that of hydrogen, most fluorocarbons are very stable and inert thermally, chemically and biologically [75, 76]. They also have a larger volume, a larger density and a much stiffer chain than their hydrogenated counterparts [74-79]. Because of the low
polarizability of fluorine, both the van der Waals interactions between fluorinated chains and the cohesive energy densities in liquid
fluorocarbons are very low, resulting in many valuable properties, such as high fluidity, low surface tension, low boiling point, low
refractive indexes, low dielectric constant, high gas solubility, excellent spreading property, high vapor pressure, and high
compressibility [75]. The high density, anti-friction properties, and magnetic susceptibility values close to that of the water in PFCs
also are useful in biomedical applications [75]. Additionally, the per fluorinated chain offers larger surface area to enhance the
hydrophobicity so that the chain is both hydrophobic and lipophobic. Fluorocarbons are even immiscible with their hydrogenated
counterparts because of their different chain conformations. This phenomenon has yet to be explained fully [75, 78].
1.2.2 Fluorinated Amphiphiles: Fluorinated amphiphiles can be classified into four types according to their functional groups on
the backbone: anionic, cationic, amphoteric, and nonionic [74].
Because of strong hydrophobic interactions and low van der Waals interactions from the fluorinated chain, fluorinated amphiphiles
tend to self-assemble in water and collect at interfaces, showing strong surface activity. They have much lower critical micellar

concentrations (cmc) than their hydrogenated counterparts [74, 75]. An increase of the chain length will decrease the cmc, and
branching of the backbone will increase the cmc [74]. Per fluorinated amphiphiles also have smaller cmc than their partially
fluorinated counterparts [74, 75].
1.2.3 Applications of Fluorocarbons and Fluorinated amphiphiles
Because of their unique properties, fluorocarbons and fluorinated amphiphiles have a lot of applications in both biomedical research
and industrial research. In biomedical research, typical applications involve oxygen transport, because of the exceptional oxygen
solubility and biocompatibility displayed by PFCs [75, 76, 78]. It is reported that fluorocarbon-based systems can act as liquid
ventilation, temporary blood substitutes, and injectable oxygen carriers during surgery [74-76, 78]. Fluorocarbons can dissolve a
large amount of gases, much more than hydrocarbons and water, displaying gas solubilities up to 25% higher than water [75, 76, 78].
The oxygen in fluorinated oil is not bound chemically to the fluorinated chain, so it may be easily transported to tissues. The
fluorocarbon brings no risks of infection to tissues and body because there is no metabolite-related toxicity. Thus, fluorinated blood
substitutes are very important in cases of blood shortage, rare blood type groups, on-site rescue, and so on [74-76, 78]. After
Creutzfeldt-Jakob disease, also called mad cow syndrome, was identified, fluorinated micro emulsions became more popular and more competitive than blood substitutes derived from bovine hemoglobin [78]. Fluorinated gels and micro emulsions also have
strong potentials for use in pulmonary drug delivery, controlled drug delivery, and ointments in pharmacy and ophthalmology to
maintain gas exchange and acid-base status [74-76, 78]. They also work very well in retinal repair, replacement of the vitreous liquid,
and treatment of articular disorders such as osteoarthritis and rheumatoid arthritis [75, 78].
1.3 Polymer Adsorption and Triblock Copolymers
Polymer adsorption is a very effective tool for controlling and adjusting the phase behavior and rheological properties of colloidal suspensions. Triblock copolymers, which consist of two end blocks and one midblock, are a significant class of macromolecules with such applications. Intuitively, the formation of bridges, in which the two ends of a polymer adsorb onto two different surfaces, will induce inter particle attraction, while the formation of loops or brushes, in which both ends adsorb onto a single surface, will induce inter particle repulsion (Figure 1-1).

Figure 1-1. The triblock copolymers form loops and bridges on micro emulsion droplets. Dark double circles indicate surfactant layer between two immiscible liquids.

These interactions lead to unusual phase behavior and rheological properties of emulsion systems containing triblock copolymers [80-
88]. To understand the structure, dynamics, phase behavior and rheological properties of adsorbed layers of polymer or surfactant
molecules in colloidal systems, numerous techniques have been used, such as scattering, magnetic resonance, spectroscopic,
hydrodynamic and rheological techniques [82, 84-95]. For example, poly(ethylene oxide)/polyisoprene/poly(ethylene oxide) triblocks
(PEO-PI-PEO) were investigated in micro emulsion systems of AOT/water/decane [80, 81] and AOT/water/isooctane [89-95],
forming highly associated solutions [80, 81, 89-95]. The phase behavior of AOT/water/decane is unusual, with a gas-liquid transition driven by the entropic gain from the conversion of loops to bridges. The viscoelastic moduli depend on concentration or volume fraction, conforming to theories of reversible networks or flowerlike micelle solutions [80]. SANS results for these systems showed that the equilibrium spacing of the droplets is independent of molecular weight and of the number of polymers per droplet. The deviation of I(q) at high q from the Gaussian-coil power-law asymptote suggested chain swelling due to excluded-volume effects in the polymer layer [81].
1.4 Universal Models for Phase Transitions and Rheology
Several authors have attempted to derive universal models to connect inter particle interactions to phase behavior and rheology in
colloidal systems. In any colloidal dispersion system, the forces between components, usually expressed in terms of inter particle
potential, play a significant role in the structure, phase stability and rheology of the system. Typical inter particle potentials have a
repulsive component and an attractive component of depth εmin/kT, as depicted in Figure 1-2.

Figure 1-2. Typical pair potentials for colloids, showing hard sphere, soft sphere, attractive hard sphere, and attractive soft sphere.
The dependence of solution rheology on the inter particle potential for dilute to moderately concentrated dispersions has been revealed
by experiments, non-equilibrium theories, and simulations [89, 91, 93, 97-86]. Equilibrium phase transitions also are determined by
the nature of the potentials. The transition from a disordered liquid to a crystalline solid at high concentrations is a typical example [90-91]. The liquid-gel transition between a liquid and a disordered viscoelastic solid, which is actively debated and studied, is suggested to occur through one of two mechanisms, attractive aggregation or formation of a glass. These two mechanisms can also be unified into jamming transitions, a more general description [54]. Attractive aggregation in systems with inter particle attractions creates a fractal network of colloids in which the mass M within a radius r is given by M ~ r^d. Here d, a fractal dimension, can be measured using scattering techniques [73, 76]. The rheology of such systems can be described using percolation theory [65] and characterized by the particle volume fraction φp, with a critical value at gelation, φpc. The equilibrium modulus G0 and the low-shear viscosity η0 are then given by power laws in the distance from the gel point, of the form G0 ~ (φp - φpc)^t and η0 ~ (φpc - φp)^(-s). Near the critical gel point, the storage and loss moduli G′ and G″ have a power-law dependence on frequency [109, 110], with G′ ~ G″ ~ ω^n [94, 92]. G0 can be scaled by considering the energy stored in inter particle bonds [85].

A disordered viscoelastic glass can be formed in both attractive and purely repulsive colloidal systems as first observed by Pusey and
van Megen [103, 113]. For monodisperse hard sphere systems the liquid-glass transition occurs at φ = 0.56-0.60 [113, 114]. Above the glass transition, G′ starts to dominate over G″ and becomes independent of frequency [80]. Colloidal glasses can also be formed in hard particles with short-range attractions, such as colloids subject to depletion forces [88] and some polymeric micelles [97]. Jamming transitions are found to occur in a wide variety of attractive colloid systems and are able to unify the phenomena of gelation, aggregation, and the glass transition [70]. In these systems, the viscosity diverges as a critical volume fraction φc is approached, and G′ develops a low-frequency plateau [76].
COUNTERION EFFECTS IN AOT SYSTEMS
The simplest micro emulsion systems are composed of a surfactant, water and oil. Aerosol OT, which is sodium bis(2 ethylhexyl)
sulfosuccinate and simply called AOT, is a model surfactant that can form nanometer size reverse micelles and micro emulsion water
droplets in many oils (Fig. 2-1). For clarity we will use NaAOT to refer to the surfactant with a sodium counterion. NaAOT has been
extensively studied and has important applications in drug delivery, enhanced oil recovery, cosmetics, detergency, and so on. It has
been found that the type of counterion, solvent, solvent content, droplet volume fraction and temperature all have important effects on
the droplet size, shape, structure and properties of AOT-based micro emulsion systems.

Figure 2-1. Chemical structure of NaAOT
Materials and Methods:
KAOT, Mg(AOT)2 and Ca(AOT)2 were prepared from NaAOT purum (Sigma-Aldrich), using previously described methods [10, 11, 19]. Micro emulsions were formulated by mixing dried and recrystallized surfactant with water and decane at fixed volume fraction φ, calculated from the specific volumes, and then diluting with decane and filtering through 0.22 μm Millipore membrane syringe filters into the Ubbelohde capillary viscometers. The values of X cited are the stoichiometric ones, neglecting the small amount of water of hydration (ΔX < 0.5 for KAOT and Ca(AOT)2, ΔX < 1 for Mg(AOT)2) that normally cannot be removed from the surfactant easily. Measurements of the viscosity were performed in capillary viscometers at a fixed temperature T = 30 °C, maintained to within ±0.1 °C with a Neslab R211 constant-temperature water bath. Three repeat runs of each sample were performed, with the standard deviation between runs in the range of 0.02-0.08% for all samples. Capillary viscometers of size 0C, 1C, and 1 were used, corresponding to capillary radii ~ 1.0 mm. This is much larger than the size of the AOT micro emulsion droplets, which have radii in the range 2.0-5.0 nm [5], and thus we do not expect any edge effects from the capillary walls. The phase stability was studied in an oven at different temperatures. Some of the solution samples were filtered into tubes for droplet size measurement by dynamic light scattering at 30 °C using an argon laser (wavelength λ = 514.5 nm).
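As a worked illustration of the data reduction behind such measurements: for a dilute microemulsion whose density is close to that of the solvent, the relative viscosity follows directly from the ratio of capillary flow times. The sketch below is a minimal example under that assumption, with made-up flow times, and is not the authors' data-reduction code.

def relative_viscosity(t_solution, t_solvent):
    # eta_r = t / t0 when solution and solvent densities are nearly equal
    return t_solution / t_solvent

def reduced_viscosity(eta_r, phi):
    # eta_sp / phi = (eta_r - 1) / phi; extrapolating to phi -> 0 gives
    # the intrinsic viscosity [eta]
    return (eta_r - 1.0) / phi

# hypothetical Ubbelohde flow times (s) and droplet volume fraction
t0, t, phi = 120.0, 138.0, 0.05
eta_r = relative_viscosity(t, t0)
print(eta_r, reduced_viscosity(eta_r, phi))   # 1.15, 3.0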
Results and Discussion: Ca(AOT)2/water/n-decane system. The relative viscosity of Ca(AOT)2/water/n-decane systems initially increases with increasing X, reaches a maximum at X = 15, and then decreases with further increases of the water amount or X (Figure 2-2). Similar to NaAOT, the position of the maximum does not depend on φ, but its magnitude increases with φ.


Figure 2-2. Relative viscosity ηr versus X, the molar ratio of water to Ca(AOT)2, at fixed φ for the Ca(AOT)2/water/n-decane system. Lines are guides for the eye.

Reduced-viscosity data for the Ca(AOT)2/water/n-decane system at fixed X (Figure 2-3) allow the intrinsic viscosity [η] to be determined from the intercept.

Figure 2-3. Reduced viscosity ηsp/φ versus φ for the Ca(AOT)2 system.

Table 2-1. Intrinsic viscosity of Ca(AOT)2/water/n-decane, KAOT/water/n-decane, and Mg(AOT)2/water/n-decane microemulsions as a function of X, the molar ratio of water to surfactant.

The corresponding kH values at each X are very high and reach a maximum of nearly 80 at X = 15 (Figure 2-4). Figure 2-5 shows ηr versus φ for the Ca(AOT)2/water/n-decane system at fixed X, along with quadratic fits to the data. Figure 2-5 includes data at very low φ (0.005-0.1) that are not shown in Figures 2-2 and 2-3; these data allow us to obtain more accurate values of the fit parameters. In all cases, the data fit a quadratic form very well.

Figure 2-4. Huggins coefficient kH versus X, the molar ratio of water to surfactant. Lines are guides for the eye.



Figure 2-5. Relative viscosity ηr versus φ for the Ca(AOT)2 system. For clarity, data at different X are shown on separate graphs for values below (left) and above (right) the viscosity maximum. Lines are fits to ηr = 1 + 2.5φ + (6.0 + 1.9/τ)φ². Symbols and lines are as follows: X = 5, filled diamond and dot-dashed line; X = 10, filled circle and dashed line; X = 12.5, filled triangle and solid line; X = 15, filled square and dotted line; X = 17.5, open square and solid line; X = 20, open circle and dashed line; and X = 22, open triangle and dotted line.

Figure 2-6. Stickiness parameter 1/τ for Ca(AOT)2/water/n-decane microemulsions versus X, the molar ratio of water to surfactant. The line is a guide for the eye.

Figure 2-6 shows the values of 1/τ that can be derived from the data, along with uncertainties based on the goodness of fit. The relative uncertainty in the 1/τ values is about 25%. The values of 1/τ are very high and reach a maximum at X = 12.5 (Figure 2-6). Again, similar to the NaAOT system, the droplet interactions appear to mirror the viscosity maximum, with a maximum attraction at a value of X near the viscosity maximum.
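The stickiness parameter can be extracted by fitting the dilution data to the quadratic form quoted in the Figure 2-5 caption. The short sketch below shows one way to do this with scipy, using invented data points and assuming the reconstructed fit form ηr = 1 + 2.5φ + (6.0 + 1.9/τ)φ².

import numpy as np
from scipy.optimize import curve_fit

def eta_r_model(phi, inv_tau):
    # adhesive-hard-sphere quadratic form from the Figure 2-5 caption
    return 1.0 + 2.5 * phi + (6.0 + 1.9 * inv_tau) * phi ** 2

phi = np.array([0.005, 0.01, 0.02, 0.05, 0.10])       # droplet volume fractions
eta_r = np.array([1.014, 1.029, 1.066, 1.23, 1.66])   # hypothetical eta_r data

popt, pcov = curve_fit(eta_r_model, phi, eta_r, p0=[10.0])
print(f"1/tau = {popt[0]:.1f} +/- {np.sqrt(pcov[0, 0]):.1f}")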

One interesting feature is the high value of kH or 1/τ for Ca(AOT)2 microemulsions, suggesting strong attractive interactions between droplets. Values of kH for the NaAOT system are in the range 1.0-10.0, which would roughly correspond to 1/τ values in the range 0.1-30.0, assuming that the droplets can be described as adhesive hard spheres [5]. The high values here may be a consequence of approximating the interdroplet potential by a simple adhesive hard sphere model; if this is not an adequate description of the potential, the data must be interpreted in terms of qualitative trends only. Bergenholtz et al. [4] found that a square-well model could not provide quantitative agreement between values for the interdroplet attraction derived from SANS and viscometry. However, these high values may also indicate stronger interactions in the Ca(AOT)2 system than in the NaAOT system. This may be related to the divalent counterion: when the Ca2+ counterion is released and surfactant exchange occurs between droplets, the resulting pair of oppositely charged droplets will each have a higher net charge than in the Na+ case, resulting in a stronger electrostatic attraction.
Mg(AOT)2/water/n-decane system. From Figures 2-11 and 2-12, the relative viscosity and reduced viscosity of the Mg(AOT)2 system change quickly with water content. The relative viscosity increases sharply with water content below X ≈ 5. The intrinsic viscosity of the Mg(AOT)2 system increases slightly, from below 3 to above 3, with increasing water content. Here only the monophasic systems at X = 0, 2.5 and 5 were diluted for the investigation of the Huggins coefficient and stickiness parameter (Figure 2-13).


Figure 2-11. Relative viscosity ηr versus X, the molar ratio of water to Mg(AOT)2. Lines are guides for the eye.

Figure 2-12. Reduced viscosity versus φ, the volume fraction of droplets in the total solution, for the Mg(AOT)2 system. Lines are guides for the eye.
2.4.4 Effects of Water Content

Water content plays an important role in the phase stability and microstructure. For divalent counterions, there is a tendency to form cylindrical aggregates when the water content increases [12]. We observe similar behavior in the Ca(AOT)2 and Mg(AOT)2 systems: the intrinsic viscosities increase with water content (Table 2-1 and Figure 2-4). Spherical droplets are present if the hydration radius Rh of the counterion is < 3.0 Å, and cylindrical droplets are present if Rh > 3.0 Å, because Rh affects the interaction between the counterion and the hydrated SO3− group, as some authors have stated [12]. The Rh values of Na+, K+, Mg2+ and Ca2+ are 1.6, 1.1, 3.1 and 2.7 Å respectively [11]. As Table 2-1 shows, the Mg(AOT)2, Ca(AOT)2 and KAOT systems all show a structural transition when they switch from the waterless binary systems to ternary systems with water, since the hydration radius of the counterion increases with the addition of water. As Figure 2-13 shows, the Mg(AOT)2 system containing water has a stronger interaction than the waterless system. However, the KAOT system shows the reverse behavior, and its intrinsic viscosity also decreases with the addition of water. Shape fluctuations may also contribute to the interaction. The results of the DLS experiments provide some explanations. At constant volume fraction, the surfactant content decreases slowly and the water content increases sharply, and the droplet size increases with water content except in the KAOT systems, as shown in Figures 2-14 to 2-16. The swelling of the droplets increases the possibility of penetration of solvent or overlapping of surfactant tails, leading to a stronger interaction and a higher viscosity. But when the swelling grows to some degree, the droplets merge into larger droplets. This merging decreases the number of droplets and their interaction surface area, leading to a decrease of the viscosity. In low volume fraction KAOT systems, the droplet size does not change much with water content, showing that K+ has very low hydration capacity. Large droplets can be stabilized only in high volume fraction KAOT systems. This suggests that bicontinuous structures exist in high water content systems, especially in the KAOT and Mg(AOT)2 systems.
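The hydrodynamic radii in Figures 2-14 to 2-16 were obtained by dynamic light scattering, which yields a droplet diffusion coefficient; a minimal sketch of the standard Stokes-Einstein conversion is given below. The diffusion coefficient and the decane viscosity used here are assumed illustrative values, not numbers from this work.

import math

K_B = 1.380649e-23                      # Boltzmann constant, J/K

def hydrodynamic_radius(D, T, eta):
    # Stokes-Einstein: R_h = k_B T / (6 pi eta D), SI units throughout
    return K_B * T / (6.0 * math.pi * eta * D)

D = 6.0e-11                             # hypothetical diffusion coefficient, m^2/s
T = 303.15                              # 30 degrees C, in K
eta = 0.85e-3                           # assumed viscosity of n-decane, Pa.s
print(f"R_h = {hydrodynamic_radius(D, T, eta) * 1e9:.1f} nm")   # ~4.4 nm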



Figure 2-14. Average hydrodynamic radius of Ca(AOT)2/water/decane microemulsion droplets. Lines are guides for the eye.


Figure 2-15. Average hydrodynamic radius of KAOT/water/decane microemulsion droplets. Lines are guides for the eye.


Figure 2-16. Average hydrodynamic radius of Mg(AOT)2/water/decane microemulsion droplets. Lines are guides for the eye.


These results suggest that the hydration capability of the counterion plays an important role in the droplet size and the viscosity
behavior. In the series we have examined, Mg2+ has the strongest hydration capability and K+ has the weakest hydration capability.
So water content has more obvious effects on the droplet sizes of Ca(AOT)2 and Mg(AOT)2 systems.


Effects of Temperature

The three kinds of systems in this work have different sensitivities to temperature. The KAOT and Ca(AOT)2 systems are more sensitive than Mg(AOT)2. Compared with the insensitivity of M(AOT)2/water/cyclohexane systems to temperature and the lower sensitivity of NaAOT/water/cyclohexane [10], this suggests that the sensitivity may partially arise from the long chain of n-decane, which can penetrate into the tails of AOT.

Effects of Ion Hydration and Mobility
We have compared the viscosity behavior of NaAOT, KAOT, Ca(AOT)2, and Mg(AOT)2 with each other, discussed the effects of ion charge, hydrodynamic ion radius, water content and volume fraction on the viscosity, and tried to use the charge fluctuation model to explain the viscosity anomalies. The charge fluctuation model suggests a possible origin of the viscosity anomaly. At present, there are two existing mechanisms to describe charge fluctuations in micro emulsions [2] (Figure 2-17). One mechanism is hopping, in which surfactant ions hop from one droplet to another. In the other, ions are transported by fusion and fission. The asymmetric shape of droplets, or deviation from spherical shape, may also affect the viscosity, as indicated by the intrinsic viscosity.


Figure 2-17. [A] Hopping mechanism: ions hop in the direction indicated by the arrows. [B] Ion transport by fusion and fission: n cations in the droplets, m cations involved in the transfer process. (Source: Ref [2])

Acknowledgment
I want to thank my family, especially my father and my mother, for supporting me during my M.Tech. studies, and all of my friends who helped me during this work. I also thank my college, Bharati Vidyapeeth Deemed University College of Engineering, for supporting me during my M.Tech. in Chemical Engineering.

CONCLUSION
The bio-oil can be considered an emulsion system, since it contains water and organic compounds, so it contains many hydrous micro domains and anhydrous micro domains. Most of the chemicals listed in Table 1 are unstable; their functional groups are very reactive and can cause various polymerization reactions in both kinds of micro domains. In future work, while investigating the removal of acids and chars, we propose to study the reaction mechanisms present in the bio-oil, with attention to cationic, anionic and radical polymerization and cross-linking reactions, and then to find cost-effective polymerization inhibitors.


REFERENCES:
[1] J. H. Schulman; W. Stoeckenius; L. M. Prince. J. Phys. Chem., 1959, 63, (10), p1677.
[2] T. P. Hoar; J. H. Schulman. Nature, 1943, 152, p102.
[3] D. Attwood, Microemulsions. In Colloidal Drug Delivery Systems, Kreuter, J., Ed. Marcel Dekker, New York: 1994.
[4] J. E. Bowcott; J. H. Schulman. Zeitschrift Fur Elektrochemie, 1955, 59, (4), p283.
[5] C. E. Cooke; J. H. Schulman In The effect of different hydrocarbons or the formation of microemulsions, Surface Chemistry,
Stockholm, 1964; Ekwall, P.; Groth, K.; Runnstrom-Reio, V., Eds. Academic Press, New York: Stockholm, 1964; pp 231.
[6] J. H. Schulman; J. A. Friend. J. Colloid Sci., 1949, 4, (5), p497.
[7] J. H. Schulman; D. P. Riley. J. Colloid Sci., 1948, 3, (4), p383.
[8] D. F. Sears; J. H. Schulman. J. Phys. Chem., 1964, 68, (12), p3529.
[9] Zlochowe.Ia; J. H. Schulman. J. Colloid Interface Sci., 1967, 24, (1), p115.
[10] P. Kumar; K. L. Mitta, Handbook of Microemulsion Science and Technology. 1999, Marcel Dekker, New York.
[11] G. Gillberg; H. Lehtinen; S. Friberg. J. Colloid Interface Sci., 1970, 33, (1), p40.
[12] R. Muller; E. Gerard; P. Dugand; P. Rempp; Y. Gnanou. Macromolecules, 1991, 24, (6), p1321.
[13] H. Saito; K. Shinoda. J. Colloid Interface Sci., 1967, 24, (1), p10.
[14] H. Saito; K. Shinoda. J. Colloid Interface Sci., 1970, 32, (4), p647.
[15] K. Shinoda. J. Colloid Interface Sci., 1967, 24, (1), p4.
[16] K. Shinoda. J. Colloid Interface Sci., 1970, 34, (2), p278.
[17] K. Shinoda; T. Ogawa. J. Colloid Interface Sci., 1967, 24, (1), p56.
[18] E. Sjoblom; S. Friberg. J. Colloid Interface Sci., 1978, 67, (1), p16.
[19] M. Kreilgaard. Adv. Drug Deliv. Rev., 2002, 54, pS77.
[20] J. L. Cayias; R. S. Schechter; W. H. Wade. J. Colloid Interface Sci., 1977, 59, (1), p31.
[21] M. Chiang; D. O. Shah. Abstr. Pap. Am. Chem. Soc., 1980, 179, (MAR), p147.
[22] M. Y. Chiang; K. S. Chan; D. O. Shah. J. Can. Pet. Technol., 1978, 17, (4), p61.
[23] R. N. Healy; R. L. Reed. SPE J., 1974, 14, (5), p491.
[24] R. N. Healy; R. L. Reed. SPE J., 1977, 17, (2), p129.
[25] M. J. Schwuger; K. Stickdorn; R. Schomacker. Chem. Rev., 1995, 95, (4), p849.
[26] M. Baviere; P. Glenat; V. Plazanet; J. Labrid. SPEReserv. Eng., 1995, 10, (3), p187.
[27] J. D. Desai; I. M. Banat. Microbiol. Mol. Biol. Rev., 1997, 61, (1), p47.
[28] L. L. Schramm; D. B. Fisher; S. Schurch; A. Cameron. Colloid Surf. A-Physicochem. Eng. Asp, 1995, 94, (2-3), p145.
[29] E. C. Donaldson; G. V. Chilingarian; T. F. Yen, Microbial Enhanced Oil Recovery. 1989, Elsevier, New York: p 9.
[30] C. A. Bunton; F. Nome; F. H. Quina; L. S. Romsted. Accounts Chem. Res., 1991, 24, (12), p357.
[31] A. Ceglie; K. P. Das; B. Lindman. J. Colloid Interface Sci., 1987, 115, (1), p115.
[32] S. J. Chen; D. F. Evans; B. W. Ninham; D. J. Mitchell; F. D. Blum; S. Pickup. J. Phys. Chem., 1986, 90, (5), p842.
[33] M. Fanun; M. Leser; A. Aserin; N. Garti. Colloid Surf. A-Physicochem. Eng. Asp., 2001, 194, (1-3), p175.
[34] F. M. Menger; A. R. Elrington. J. Am. Chem. Soc., 1991, 113, (25), p9621.
[35] V. K. Vanag; I. R. Epstein. Phys. Rev. Lett., 2001, 8722, (22).
[36] P. Y. Feng; X. H. Bu; G. D. Stucky; D. J. Pine. J. Am. Chem. Soc, 2000, 122, (5), p994.
[37] W. Meier. Curr. Opin. Colloid Interface Sci., 1999, 4, (1), p6.
[38] X. Zhang; F. Zhang; K. Y. Chan. Mater. Lett, 2004, 58, (22-23), p2872.
[39] M. Antonietti; R. Basten; S. Lohmann. Macromol. Chem. Phys., 1995, 196, (2), p441.
[40] W. Ming; F. N. Jones; S. K. Fu. Macromol. Chem. Phys., 1998, 199, (6), p1075.
[41] M. Antonietti; W. Bremser; D. Muschenborn; C. Rosenauer; B. Schupp; M. Schmidt. Macromolecules, 1991, 24, (25),
p6636.
[42] P. Y. Chow; J. Ding; X. Z. Wang; C. H. Chew; L. M. Gan. Phys. Status Solidi A- Appl. Res., 2000, 180, (2), p547.
[43] J. H. Clint; I. R. Collins; J. A. Williams; B. H. Robinson; T. F. Towey; P. Cajean; A. Khanlodhi. Faraday Discuss., 1993,
p219.

[44] S. Eriksson; U. Nylen; S. Rojas; M. Boutonnet. Appl. Catal. A-Gen., 2004, 265, (2), p207.
[45] T. Hanaoka; H. Hayashi; T. Tago; M. Kishida; K. Wakabayashi. J. Colloid Interface Sci., 2001, 235, (2), p235.
[46] T. Masui; K. Fujiwara; Y. M. Peng; T. Sakata; K. Machida; H. Mori; G. Adachi. J. Alloy. Compd., 1998, 269, (1-2), p116.
[47] K. Zhang; C. H. Chew; S. Kawi; J. Wang; L. M. Gan. Catal. Lett., 2000, 64, (2-4), p179.
[48] N. Kometani; Y. Toyoda; K. Asami; Y. Yonezawa. Chem. Lett., 2000, (6), p682.
[49] H. Ohde; J. M. Rodriguez; X. R. Ye; C. M. Wai. Chem. Commun., 2000, (23), p2353.
[50] H. Ohde; C. M. Wai; H. Kim; J. Kim; M. Ohde. J. Am. Chem. Soc, 2002, 124, (17), p4540.
[51] Y. L. Khmelnitsky; R. Hilhorst; C. Veeger. Eur. J. Biochem., 1988, 176, (2), p265.
[52] A. Na; C. Eriksson; S. G. Eriksson; E. Osterberg; K. Holmberg. J. Am. Oil Chem. Soc., 1990, 67, (11), p766.
[53] H. Stamatis; A. Xenakis; M. Provelegiou; F. N. Kolisis. Biotechnol. Bioeng., 1993, 42, (1), p103.
[54] M. J. Lawrence; G. D. Rees. Adv. DrugDeliv. Rev., 2000, 45, (1), p89.
[55] J. M. Sarciaux; L. Acar; P. A. Sado. Int. J. Pharm., 1995, 120, (2), p127.
[56] T. F. Vandamme. Prog. Retin. Eye Res., 2002, 21, (1), p15.
[57] S. Magdassi. Colloid Surf. A-Physicochem. Eng. Asp., 1997, 123, p671.
[58] B. K. Paul; S. P. Moulik. Curr. Sci, 2001, 80, (8), p990.
[59] T. F. Tadros. Intl. J. of Cosmetic Sci., 1992, 14, (3), p93.
[60] F. Dreher; P. Walde; P. Walther; E. Wehrli. J. Control. Release, 1997, 45, (2), p131.
[61] G. J. T. Tiddy. Phys. Rep.-Rev. Sec. Phys. Lett., 1980, 57, (1), p2.
[62] H. Wennerstrom; B. Lindman. Phys. Rep.-Rev. Sec. Phys. Lett., 1979, 52, (1), p1.
[63] N. N. Li, Separating hydrocarbons with liquid membranes. US Pat. 3,410,794, 1968.
[64] K. Naoe; T. Kai; M. Kawagoe; M. Imai. Biochem. Eng. J., 1999, 3, (1), p79.
[65] M. Saidi; H. Khalaf. Hydrometallurgy, 2004, 74, (1-2), p85.
[66] V. E. Serga; L. D. Kulikova; B. A. Purin. Sep. Sci. Technol., 1999, 35, (2), p299.
[67] C. Tondre; A. Xenakis. Faraday Discuss., 1984, p115.
[68] S. W. Tsai; C. L. Wen; J. L. Chen; C. S. Wu. J. Membr. Sci, 1995, 100, (2), p87.
[69] J. M. Wiencek; S. Qutubuddin. Sep. Sci. Technol., 1992, 27, (10), p1211.
[70] N. Azemar; I. Carrera; C. Solans. J. Dispersion Sci. Technol., 1993, 14, (6), p645.
[71] R. L. Blum; M. H. Robbins; L. M. Hearn; S. L. Nelson, Microemulsion dilutable cleaner. US Pat. 5,854,187, 1998.
[72] C. Solans; J. G. Dominguez; S. E. Friberg. J. Dispersion Sci. Technol., 1985, 6, (5), p523.
[73] C. Toncumpou; E. J. Acosta; L. B. Quencer; A. F. Joseph; J. F. Scamehorn; D. A. Sabatini; S. Chavadej; N. Yanumet. J.
SurfactantsDeterg., 2003, 6, (3), p191.
[74] E. Kissa, Fluorinated surfactants and repellents. 2001, Marcel Dekker, New York: Vol. 97.
[75] M. P. Krafft. Adv. Drug Deliv. Rev., 2001, 47, (2-3), p209.
[76] J. G. Riess. Colloid Surf. A-Physicochem. Eng. Asp., 1994, 84, (1), p33.
[77] C. Ceschin; J. Roques; M. C. Maletmartino; A. Lattes. J. Chem. Tech. & Biotech. a- Chem. Tech, 1985, 35, (2), p73.
[78] P. LoNostro; S. M. Choi; C. Y. Ku; S. H. Chen. J. Phys. Chem. B, 1999, 103, (25), p5347.
[79] P. Mukerjee. Colloid Surf. A-Physicochem. Eng. Asp., 1994, 84, (1), p1.
[80] U. Batra; W. B. Russel; M. Pitsikalis; S. Sioula; J. W. Mays; J. S. Huang. Macromolecules, 1997, 30, (20), p6120.
[81] S. R. Bhatia; W. B. Russel; J. Lal. J. Appl. Crystallogr., 2000, 33, (1), p614.
[82] G. J. Fleer; M. A. C. Stuart; J. M. H. M. Scheutjens; T. Cosgrove; B. Vincent, Polymers at Interfaces. 1993, Chapman & Hall:
London, New York.
[83] S. A. Hagan; S. S. Davis; L. Illum; M. C. Davies; M. C. Garnett; D. C. Taylor; M. P. Irving; T. F. Tadros. Langmuir, 1995,
11, (5), p1482.
[84] W. Liang; T. F. Tadros; P. F. Luckham. J. Colloid Interface Sci., 1992, 153, (1), p131.
[85] S. T. Milner; T. A. Witten. Macromolecules, 1992, 25, (20), p5495.
[86] M. A. C. Stuart; T. Cosgrove; B. Vincent. Adv. Colloid Interface Sci., 1986, 24, (23), p143.
[87] C. Washington; S. M. King. Langmuir, 1997, 13, (17), p4545.

C. Washington; S. M. King; R. K. Heenan. J. Phys. Chem., 1996, 100, (18), p7603.
H. F. Eicke; M. Gauthier; R. Hilfiker; R. Struis; G. Xu. J. Phys. Chem., 1992, 96, p5175.
[88] H. F. Eicke; C. Quellet; G. Xu. Colloids & Surfaces, 1989, 36, (1), p97.
[89] G. Fleischer; F. Stieber; U. Hofmeier; H. F. Eicke. Langmuir, 1994, 10, (6), p1780.
[90] R. Hilfiker; H. F. Eicke; C. Steeb; U. Hofmeier. J. Phys. Chem., 1991, 95, (3), p1478.
[91] M. Odenwald; H. F. Eicke; W. Meier. Macromolecules, 1995, 28, (14), p5069.
[92] C. Quellet; H. F. Eicke; G. Xu; Y. Hauger. Macromolecules, 1990, 23, (13), p3347.
[93] R. Struis; H. F. Eicke. J. Phys. Chem., 1991, 95, (15), p5989.
[94] S. Fusco; A. Borzacchiello; P. A. Netti. J. Bioact. Compat. Polym., 2006, 21, (2), p149.
[95] J. Bergenholtz. Curr. Opin. Colloid Interface Sci., 2001, 6, (5-6), p484.
[96] J. Bergenholtz; N. Willenbacher; N. J. Wagner; B. Morrison; D. van den Ende; J. Mellema. J. Colloid Interface Sci., 1998,
202, (2), p430.
[97] J. F. Brady. J. Chem. Phys., 1993, 99, (1), p567.



















Design of a Focused Crawler Based on Dynamic Computation of Topic Specific
Weight Table
Meenu1, Priyanka Singla1, Rakesh Batra1

1Dept. of Computer Science & Engineering, YMCA Institute of Engineering and Technology, Faridabad, India

E-mail: mahi.batra11@gmail.com
Abstract - Focused crawlers aim to select relevant web pages from the internet, pages relevant to some predefined topics. Previous focused crawlers have the problem of not keeping track of user interests and goals: the topic weight table is calculated only once, statically, and so is less sensitive to potential changes in the environment. To address this problem we design a focused crawler based on dynamic computation of topic keywords and their weights. The weight table is constructed according to user queries. To check the similarity of a web page with respect to the topic keywords, a cosine similarity function is used, and the priority of extracted links is calculated.

Keywords - Crawler; Focused Crawler; Topic Weight Table; Link Score; Search Engine; page score; user.

1. INTRODUCTION
A web crawler is a continuously running program which downloads pages periodically from the World Wide Web. It is also known as a web spider or a wanderer. The downloaded pages are indexed and stored in a database; later, these pages are used by the search engine to find information related to a search query. A recent study estimates that the size of the visible web has passed billions of documents and is still increasing. Due to the enormous growth of and change in the web, it becomes difficult for a search engine to keep its index fresh. Even a popular search engine like Google crawls only 40% of the whole web [1]. To avoid this problem, we need a crawler which crawls a specific and relevant subset of the World Wide Web, and which works efficiently and effectively with respect to limited resources and time [7]. A focused crawler, also known as a topical web crawler, downloads only pages relevant to a defined set of topics. It was first introduced by Chakrabarti et al. [2]. A focused crawler predicts the relevancy of a page at two places: (a) before downloading and (b) after downloading. Before downloading, it predicts the relevancy of a page from the anchor text of links; this approach was given by Pinkerton [3][8] and is known as link-based analysis. After downloading, it judges relevancy from the content of the page, known as content-based analysis. Relevant pages are stored in a database and the URLs they contain are added to the URL queue. However, most focused crawlers use a local search algorithm such as best-first search or breadth-first search to determine the order in which target URLs are visited [4]. Focused crawlers are useful for applications such as distributed processing of the web, and are also used in personal search engines, web databases and commercial intelligence.

In this paper a focused crawler has been designed based on dynamic computation of topic keywords and their weights. It constructs the topic weight table according to user queries, thus allowing the final collection to address the user's information needs. The outline of this paper is as follows: Section 2 provides a brief discussion of existing crawlers and issues related to them. Section 3 describes our proposed work. In Section 4 results are obtained and compared with existing crawlers, with some experimental results. In Section 5 conclusions and suggestions for future directions are presented.

2. RELATED WORK
Focused crawling relies heavily on the topical locality phenomenon [5]. Topics offer a good mechanism for evaluating the relevancy of a page. A topic is a set of keywords with their associated weights. The topic vector can be written as given in equation (1).

Topic = {(k1, w1), (k2, w2), ..., (kn, wn)}    (1)

Here k1, k2, ..., kn are keywords and w1, w2, ..., wn are the weights associated with these keywords. Topics may be obtained from different sources, such as asking the user to specify them; but users are unwilling to specify topics because of the additional effort and time required. Anshika Pal et al. [9] proposed a method for topic-specific weight table construction: the topic name is given to the Google web search engine and the first few results are retrieved; then the term frequency and document frequency of words are calculated and each word is assigned the weight wi = tf * df. The weights are then normalized using equation (2).


Wi+1 = Wi / Wmax    (2)

where Wmax is the maximum weight assigned to any keyword and Wi+1 is the new weight assigned to each keyword. After construction of the topic-specific weight table, page relevancy is calculated based on content and link analysis as proposed in [5]. A critical look at the available literature indicates the following limitations:

1. The present crawlers rely on keyword weights which are computed once, statically.
2. Results are less relevant to user interests and goals.
3. They are not sensitive to potential alterations in the environment.

In order to make search results more relevant to user interests, a mechanism has been proposed which dynamically computes the topic-specific weight table. This dynamic computation leads to a higher relevancy of pages according to the user.

3. PROPOSED ARCHITECTURE
In order to get results that match user interest and are more sensitive to potential changes in the environment, Section 3.1 describes the dynamic construction of the topic-specific weight table.

3.1 Topic Weight Table Construction:

Fig. 1 shows the process of topic-specific weight table construction.



Fig. 1: Topic Specific Weight Table Construction

The user query log is a place where all the queries fired by users are stored. In the preprocessing stage, tokens are extracted from the queries and normalized. After that the topic-specific weight table is constructed. The weight table is reconstructed after a fixed interval of time by applying the same procedure. The algorithm for weight table construction is given below.

3.1.1 Weight Table Construction Algorithm
User queries are the most important source for learning user interest. To judge the relevancy of pages according to user interest, the crawler constructs the weight table with the help of user queries. The proposed algorithm is given below.

Step 1: The crawler accesses the user queries from the user query logs.
Step 2: Tokenize the queries collected by the crawler. /* Tokenization is the task of chopping a query into pieces called tokens. It throws away certain characters such as punctuation. */
Step 3: Drop common terms such as stop words. /* The process of dropping common words is known as stop listing. These words have little value for determining user interest. Commonly used stop words include for, it, the, a, can, do, did, will, shall, etc. */
Step 4: Do linguistic processing such as lemmatization/stemming, producing a list of normalized tokens. /* The main aim of both stemming and lemmatization is to reduce inflectional forms, and sometimes derivationally related forms, of a word to a common base form. */
Step 5: Sort the terms alphabetically either in ascending or in descending order.

Step 6: Multiple occurrences of the same term are merged. This step also records some statistics, such as the query frequency, which is the number of queries containing each term.
Step 7: Calculate the weight of each term as given in equation (3).

Wi(new) = α · qf + Wi(old)    (3)

Here, α is a constant whose value lies in the range 0 < α ≤ 0.5, qf is the query frequency of each term, and Wi(old) is the weight of the term in the previous weight table (taken as 0 if the term did not occur previously); Wi(new) is the current weight of the term.
Step 8: Terms whose weight is greater than or equal to a threshold value are taken as keywords for judging the relevancy of a page.

Example: Suppose we have the following sample queries taken from the query log.
Q1: Kejriwal new manifesto for lok sabha polls 2014.
Q2: Manifesto of BJP for lok sabha polls 2014.
Q3: lok sabha polls 2014 dates

Applying the procedure step by step to the queries above, the result up to step 6 is as follows.
Terms Query Frequency
BJP 1
date 1
kejriwal 1
lok sabha 3
manifesto 2
new 1
poll 3
2014 3
Let us suppose α = 0.5 and Wi(old) of all terms is 0. By applying the formula given in equation (3), the following results are obtained.

Terms Weight
BJP 0.5
date 0.5
kejriwal 0.5
lok sabha 1.5
manifesto 1.0
new 0.5
poll 1.5
2014 1.5
Let us suppose the threshold value for topic keyword weight is 1.0. The final topic-specific weight table is then as given in Table I.
Table I: Weight Table
No Keyword Weight
1 lok sabha 1.5
2 poll 1.5
3 manifesto 1.0
4 2014 1.5
This weight table is used by the relevancy calculator, as shown in Fig. 2.
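A minimal sketch of the weight-table algorithm is given below; it is illustrative only, not the authors' implementation. The stop list and normalizer are toy stand-ins for a real stop list and stemmer, and single words are used as terms, so a phrase such as lok sabha splits into two tokens, unlike Table I.

from collections import Counter

STOP_WORDS = {"for", "it", "the", "a", "of", "can", "do", "did", "will", "shall"}

def normalize(token):
    # crude stand-in for stemming/lemmatization: lowercase, strip plural 's'
    token = token.lower().strip(".,?!")
    return token[:-1] if token.endswith("s") and len(token) > 3 else token

def build_weight_table(queries, old_table=None, alpha=0.5, threshold=1.0):
    old_table = old_table or {}
    qf = Counter()
    for q in queries:
        # each query contributes at most one count per term (query frequency)
        qf.update({normalize(t) for t in q.split()} - STOP_WORDS)
    # equation (3): Wi(new) = alpha * qf + Wi(old); keep terms above threshold
    table = {t: alpha * f + old_table.get(t, 0.0) for t, f in qf.items()}
    return {t: w for t, w in table.items() if w >= threshold}

queries = [
    "Kejriwal new manifesto for lok sabha polls 2014",
    "Manifesto of BJP for lok sabha polls 2014",
    "lok sabha polls 2014 dates",
]
print(build_weight_table(queries))   # poll, 2014, lok, sabha at 1.5; manifesto at 1.0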























3.2 Crawling Process
The process of crawling after topic-specific weight table construction is shown in Fig. 2. A detailed description of each component is given below.
3.2.1 Seed URLs Generation
Here, seed URLs are generated using the meta-search site www.threesearch.com. We enter the topic keywords and it shows the results of three popular search engines: Google, Yahoo and MSN. We take as seeds the URLs which are common to all three search engines. Initially these URLs are given to the URL frontier.
3.2.2 URL Frontier
This is a data structure that contains all URLs that remain to be downloaded. Here, a priority queue is used instead of a simple queue.
3.2.3 Web Page Downloader
This module builds the connection to the internet, downloads the page corresponding to a given URL using the appropriate network protocol, and stores it temporarily in the document buffer. After that it gives the signal "something to test" to the content seen test.
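A minimal sketch of how the URL frontier and the downloader could fit together is given below; it is illustrative only, with heapq standing in for the priority queue and urllib for the protocol handling, and with politeness rules and error handling omitted.

import heapq
import urllib.request

class URLFrontier:
    # priority queue of URLs; heapq pops the smallest key, so scores
    # are negated to fetch the highest-priority URL first
    def __init__(self):
        self._heap = []

    def push(self, url, score):
        heapq.heappush(self._heap, (-score, url))

    def pop(self):
        return heapq.heappop(self._heap)[1]

    def __bool__(self):
        return bool(self._heap)

def download(url, timeout=10.0):
    # fetch a page into the document buffer (here, just returned bytes)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

frontier = URLFrontier()
frontier.push("http://example.com/", score=1.0)   # hypothetical seed URL
while frontier:
    page = download(frontier.pop())
    break   # a real crawler would hand the page to the content seen test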
















Fig. 2: Crawling Process



3.2.4 Content Seen Test
Many documents are available on the web under multiple different URLs. This causes any crawler to download the same document multiple times. To prevent downloading a document more than once, the web crawler performs a content seen test. Using the content seen test it is possible to suppress link extraction from mirrored pages, which may result in a significant reduction in the number of pages that need to be downloaded. The content seen test would be expensive if we matched complete documents, so in order to save space and time we maintain a data structure called the document fingerprint set, which stores a 64-bit checksum of the content of each downloaded document. If the document has been downloaded before, the page is rejected and the next document is downloaded; otherwise the page is stored in the repository and a signal is given to the HTML parser and link extractor.
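A minimal sketch of such a fingerprint set is given below; blake2b truncated to 8 bytes stands in for whichever 64-bit checksum is actually used, and an in-memory set stands in for the disk-backed store, for brevity.

import hashlib

class DocumentFingerprints:
    # content seen test: a set of 64-bit fingerprints of downloaded documents
    def __init__(self):
        self._seen = set()

    def seen_before(self, content):
        # True if an identical document has already been downloaded
        fp = hashlib.blake2b(content, digest_size=8).digest()   # 64-bit checksum
        if fp in self._seen:
            return True
        self._seen.add(fp)
        return False

doc_fps = DocumentFingerprints()
print(doc_fps.seen_before(b"<html>page</html>"))   # False: first sighting
print(doc_fps.seen_before(b"<html>page</html>"))   # True: duplicate content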
3.2.5 HTML Parser & Link Extractor
Once a page has been downloaded we need to parse its content to extract the information that will guide the future path of the crawler. These parsers are used to extract hyperlink URLs, anchor tags and other related information from the web page. After that the page, its extracted links and other information are stored in a buffer and the signal "something to calculate" is given to the relevancy calculator.
3.2.6 Relevancy Calculator
This module calculates the relevancy of a page with respect to the topic keywords in the table using equation (4). Here, cosine similarity is used to calculate the relevancy of a page:

Relevancy(t, p) = Σi CWi(t) · CWi(p) / ( √(Σi Wi(t)²) · √(Σi Wi(p)²) )    (4)

where CWi(t) and CWi(p) are the weights of the i-th common keyword in the weight table t and the web page p respectively, and Wi(t) and Wi(p) are the weights of the keywords in the weight table t and the web page p respectively. If the relevancy score of the page is greater than a threshold value, then the link score of each of its extracted links is calculated using equation (5).

LinkScore(k) = α + β + γ + δ    (5)

where LinkScore(k) is the score of link k; α = URLScore(k) is the relevancy between the topic keywords and the href information of k; β = AnchorScore(k) is the relevancy between the topic keywords and the anchor text of k; γ = ParentScore(k) is the relevancy score of the page from which the link was extracted; and δ = SurroundingScore(k) is the relevancy between the text surrounding the link and the topic keywords. Links whose score is greater than a threshold are considered relevant. Relevant URLs and their scores are stored in the relevant URL buffer and a signal is given to the URL seen test process.
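A minimal sketch of equations (4) and (5) in code follows; the page term weights and the four link sub-scores are passed in directly, and all values shown are hypothetical.

import math

def cosine_relevancy(table, page):
    # equation (4): cosine similarity over common keywords
    common = set(table) & set(page)
    dot = sum(table[k] * page[k] for k in common)
    norm_t = math.sqrt(sum(w * w for w in table.values()))
    norm_p = math.sqrt(sum(w * w for w in page.values()))
    return dot / (norm_t * norm_p) if norm_t and norm_p else 0.0

def link_score(url_score, anchor_score, parent_score, surrounding_score):
    # equation (5): LinkScore(k) = alpha + beta + gamma + delta
    return url_score + anchor_score + parent_score + surrounding_score

table = {"lok sabha": 1.5, "poll": 1.5, "manifesto": 1.0, "2014": 1.5}
page = {"poll": 2.0, "2014": 1.0, "result": 0.5}     # hypothetical page weights
rel = cosine_relevancy(table, page)
print(rel, link_score(0.2, 0.4, rel, 0.1))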
3.2.7 URL Seen Test
In the course of extracting links, the crawler may encounter duplicate URLs. To avoid downloading a document more than once, a URL seen test must be performed on extracted links before adding them to the URL frontier. To perform the URL seen test, we store all URLs seen by the crawler in canonical form in a table called the URL set. To save space and time, we do not store a textual representation of each URL in the URL set, but rather a fixed-size checksum stored on disk. To increase efficiency, we keep an in-memory cache of the most popular URLs.
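A minimal sketch of the URL seen test follows; the canonicalization rules and the 64-bit checksum are illustrative choices, and a plain in-memory set stands in for the disk-backed URL set plus its cache of popular URLs.

import hashlib
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    # lowercase scheme and host, drop the fragment and a default :80 port
    parts = urlsplit(url)
    netloc = parts.netloc.lower().removesuffix(":80")
    return urlunsplit((parts.scheme.lower(), netloc, parts.path or "/", parts.query, ""))

class URLSet:
    def __init__(self):
        self._checksums = set()

    def seen(self, url):
        # fixed-size 64-bit checksum of the canonical URL
        fp = hashlib.blake2b(canonicalize(url).encode(), digest_size=8).digest()
        if fp in self._checksums:
            return True
        self._checksums.add(fp)
        return False

urls = URLSet()
print(urls.seen("HTTP://Example.com:80/index#top"))   # False: first sighting
print(urls.seen("http://example.com/index"))          # True: same canonical URL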
4. EXPERIMENTAL RESULTS
Generally, the harvest ratio is used to measure the performance of a focused crawler. The harvest ratio, also known as the precision metric of a crawler, is defined as the percentage of crawled pages that are relevant to the specific topic:

Harvest Ratio = (# of relevant pages) / (# of downloaded pages)    (6)


The harvest ratio of the present crawler has been calculated using the formula given in equation (6) and compared with the basic crawler and with the focused crawler based on a static weight table. Table II shows the harvest ratio of the three crawlers at different numbers of crawled pages.


Table II: Harvest ratio at different numbers of crawled pages

No. of crawled pages | Basic crawler | Focused crawler, static weight table | Focused crawler, dynamic weight table
 500  | 0.30 | 0.80 | 1.00
1000  | 0.25 | 0.78 | 0.95
1500  | 0.27 | 0.79 | 0.92
2000  | 0.20 | 0.77 | 0.90
2500  | 0.18 | 0.74 | 0.88
3000  | 0.19 | 0.75 | 0.91
3500  | 0.15 | 0.73 | 0.87
4000  | 0.15 | 0.70 | 0.87


The crawl results are shown on a two-dimensional graph where the x-axis is the number of crawled pages and the y-axis is the calculated harvest ratio, as shown in Fig. 3.

Fig. 3: Experimental Result
5. CONCLUSION AND FUTURE WORK
The focused crawler is the main component of a special-purpose search engine. A focused crawler selectively seeks out and downloads web pages that are relevant to the search topic. Our approach is based on dynamic computation of the focused crawler's topic-specific weight table. Here, the weight table is built according to user queries; thus it gives results which are more relevant to the user. This approach does not consider the context of the topic keywords. In our future work, we will try to consider the context of keywords and also do

code optimization, because crawler efficiency depends not only on retrieving the maximum number of relevant pages but also on finishing the operation as soon as possible.

REFERENCES:

[1] S. Lawrence and L. Giles, Accessibility and distribution of information on the web, Nature, vol. 400, pp. 107-109, 1999.

[2] S. Chakrabarti, M. Van Den Berg, B. Dom, Focused Crawling: A New Approach to Topic-Specific Web Resource Discovery, Proc. of 8th International WWW Conference, Toronto, Canada, May 1999.

[3] B. Pinkerton, Finding what people want: Experiences with the WebCrawler, in Proceedings of the First International World-Wide Web Conference, Geneva, Switzerland, May 1994.

[4] Ah Chung Tsoi, Daniele Forsali, Marco Gori, Markus Hagenbuchner and Franco Scarselli, A Simple Focused Crawler, WWW 2003, ACM, 2003.

[5] X. Chen and X. Zhang, HAWK: A Focused Crawler with Content and Link Analysis, Proc. IEEE International Conf. on e-Business Engineering, 2008.

[6] Li Wei-jiang, Ru Hua-suo, Zhao Tie-jun, Zang Wen-mao, A New Algorithm of Topical Crawler, Second International Workshop on Computer Science and Engineering, 2009.

[7] M. Kumar and R. Vig, Design of CORE: context ontology rule enhanced focused web crawler, International Conference on Advances in Computing, Communication and Control (ICAC3'09), pp. 494-497, 2009.

[8] S. Brin and L. Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, Computer Networks and ISDN Systems, 30(1-7), 1998.

[9] Anshika Pal, Deepak Singh Tomar, S.C. Shrivastava, Effective Focused Crawler Based on Content and Link Structure Analysis, International Journal of Computer Science and Information Security, vol. 2, no. 1, June 2009.

[10] Debashis Hati, Amritesh Kumar, An Approach for Identifying URLs Based on Division Score and Link Score in Focused Crawler, International Journal of Computer Applications, vol. 2, no. 3, May 2010.

[11] Jaytrilok Choudhary and Devshri Roy, A Priority Based Focused Web Crawler, International Journal of Computer Engineering and Technology, vol. 4, issue 4, July-August 2013.

[12] Mohsen Jamali, Hassan Sayyadi, Babak Bagheri Hariri and Hassan Abolhassani, A Method for Focused Crawling Using Combination of Link Structure and Content Similarity.






Use of Phase Change Materials in Construction of Buildings: A Review
Pawan R. Ingole 1, Tushar R. Mohod 2, Sagar S. Gaddamwar 2
1 Asst. Professor, Dept. of Mechanical Engineering, J.D.I.E.T. Yavatmal, S.G.B.A. University, India
E-mail: pawaningole5@gmail.com
Abstract Phase-change material (PCM) is a substance with a high heat of fusion which, by melting and solidifying at a certain temperature, is capable of storing and releasing large amounts of energy. PCMs are regarded as a possible solution for reducing the energy consumption of buildings. For raising the building inertia and stabilizing the indoor climate, PCMs are particularly useful because of their ability to store and release heat within a certain temperature range. In this paper, recent developments in the use of different types of PCMs with concrete, their incorporation, and the influence of PCMs on the properties of concrete at different stages are reviewed.

Keywords Eutectic, Immersions, Impregnation, Paraffins, Phase change materials.
INTRODUCTION
Phase change materials exhibit the thermodynamic property of storing a large amount of latent heat during their phase change. A PCM solidifies when the ambient temperature drops, giving off its latent heat of fusion. Compared to conventional materials, PCMs can store a high amount of latent heat, giving more heat storage capacity per unit volume. PCMs are implemented in gypsum wall boards, plasters and textured finishes for thermal energy storage applications. By chemical composition, PCMs fall into three basic sub-categories, namely (i) organic compounds, (ii) inorganic compounds and (iii) inorganic eutectics or eutectic mixtures. A PCM should have a high latent heat of fusion and a good heat transfer rate; the choice mainly depends upon the desired comfort temperature and the ambient temperature. Supercooling influences the performance of PCMs. The main aims of this paper are to review the incorporation of PCMs in construction work and to provide information on their characteristics.

1. CLASSIFICATION OF PHASE CHANGE MATERIALS

1.1 EUTECTICS
Eutectic mixtures or eutectics are the mixtures having low melting point of multiple solids and its volumetric storage density is
slightly higher than that of organic compounds. The eutectic binary systems showed melting points between 18 and 51 C and freezing
points between 16 and 51 C, with a heat of fusion between 120 and 160kJ/kg. The organic eutectic capric mauric acid is the most
suited for passive solar storage since it has a melting point of 18 0C, a freezing point of 17 C and a heat of fusion of 120kJ/kg.

1.2 ORGANIC PHASE CHANGE MATERIALS
These are generally stable compounds and free from super cooling, corrosion, having great latent heat of fusion. Commercial paraffin
waxes are inexpensive and have a reasonable thermal storage density of 120kJ/kg up to 210kJ/kg. Paraffins are chemically inert and
available in a wide range of melting temperatures from approximately 200C up to about 700C, of most interest in this group are the
fatty acids or palmitoleic acids. It is free from super cooling, volumetric change and has high latent heat of fusion.
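As a worked example of the storage densities quoted above (the latent heat, melting point, specific heat and temperature span are assumed values inside the stated ranges, not figures from a specific product):

```python
# Heat stored by 1 kg of paraffin (latent heat 180 kJ/kg, melting at
# 55 deg C, assumed values) heated from 25 deg C to 65 deg C.
def pcm_heat_stored(mass_kg, cp_solid, cp_liquid, latent_heat,
                    t_start, t_melt, t_end):
    """Sensible heat below/above the melting point plus latent heat (kJ).
    cp in kJ/(kg K), latent_heat in kJ/kg, temperatures in deg C."""
    sensible = mass_kg * (cp_solid * (t_melt - t_start)
                          + cp_liquid * (t_end - t_melt))
    return sensible + mass_kg * latent_heat

# cp ~ 2.1 kJ/(kg K) for paraffin in both phases (assumed value).
print(pcm_heat_stored(1.0, 2.1, 2.1, 180.0, 25.0, 55.0, 65.0))  # 264.0 kJ
```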

1.3 INORGANIC PHASE CHANGE MATERIALS
PCMs exhibit properties of good thermal conductivity, affordability and non-flammability. However, most of them are corrosive to
most metals, undergo super cooling and undergo phase decomposition. Highly crystalline polymer for example high density
polyethylene (HDPE) is advantageous if it is rendered stable by cross linking when 98% of the heat of fusion is used by transition.
Most of them occur at higher unfavorable temperatures ranging from 30 C to 600C.


Table 1: List of Main Phase Change Materials

Organic               Paraffins            Inorganic
Polyglycol E400       Paraffin C14         H2O
Polyglycol E600       Paraffin C15-C16     LiClO3·3H2O
Polyglycol E6000      Paraffin C16-C18     Mn(NO3)2·6H2O
Dodecanol             Paraffin C13-C24     LiNO3·3H2O
Tetradodocanol        Paraffin C16-C28     Zn(NO3)2·6H2O
Biphenyl              Paraffin C18         Na2CO3·10H2O
HDPE                  Paraffin C20-C33     CaBr2·6H2O
Propianide            Paraffin C23-C50     Na2S2O3·5H2O
Dimethyl-sulfoxide    -                    Ba(OH)2·8H2O
Capric acid           -                    Mg(NO3)2·6H2O
Capricinic acid       -                    (NH4)Al(SO4)·6H2O
Laurinic acid         -                    MgCl2·6H2O
Miristic acid         -                    NaNO3
Lakisol               -                    KNO3
Palmitic acid         -                    KOH
Stearic acid          -                    MgCl2
2. PCMS INCORPORATIONS IN CONCRETE

Figure 1. Heating and cooling function of a concrete wall incorporated with PCM (solar radiation from the sun strikes the wall, which absorbs heat and later releases it).

2.1 IMPREGNATION
Impregnation consists of three basic steps. The first step is the evacuation of air and water from the porous or lightweight aggregates with the help of a vacuum pump. The second step is the soaking of the porous aggregates in the liquid PCM under vacuum. Lastly, in the third step, the pre-soaked PCM porous aggregate, functioning as a carrier for the PCM, is mixed into the concrete.

2.2 IMMERSIONS
Soaking of porous concrete products in a melted PCM (producing so-called immersion PCM-concrete) is called the immersion technique, which was first introduced by Hawes. It consists of immersing porous concrete products in a container already filled with the liquid PCM. The effectiveness of the immersion process mainly depends on the absorption capacity of the concrete, the temperature, and the type of PCM being employed.

2.3 DIRECT MIXING
PCM must be first encapsulated within a chemically and physically stable shell before directly mixing it with concrete. Encapsulation
can be done by interfacial polymerization, emulsion polymerization, in situ polymerization as well as spray drying. For direct mixing,
the shell hardness of the PCM microcapsules should be sustainable and indestructible to avoid any damage during the concrete
mixing.
3. APPLICATIONS OF PCMS
3.1 BUILDING APPLICATIONS
PCMs can be incorporated to improve the performance of technical installations such as hot water heat stores, pipe insulation, cool thermal energy storage and latent heat thermal storage systems. In addition, the improvement of double facades with PCMs has been achieved for better control of the cavity temperature.

3.2 PCM ENHANCED CONCRETE
PCM-enhanced concrete (thermo-concrete) is another possibility for applying PCMs in building construction. To produce low-cost storage materials with structural and thermostatic properties, thermo-concrete combines an appropriate PCM with a concrete matrix or open-cell cements.

3.3 THERMAL ENERGY STORAGE AND COOLING POWER POTENTIAL
For efficient energy storage in connection with the two-phase heat transfer fluid water/steam, a high-temperature PCM is a key component. The power density demands of specific applications can be met by nitrate salts used for high-temperature PCM

storage. A design of radially finned tubes developed by DLR is applied in a 700 kWh PCM thermal energy storage demonstration module.
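A back-of-envelope check of the scale of such a module, assuming all of the energy is stored as latent heat in sodium nitrate (the heat of fusion of ~172 kJ/kg is an assumed property value, not a figure from the cited work):

```python
# Scale check of a 700 kWh latent heat store in nitrate salt.
E_KJ = 700.0 * 3600.0   # 700 kWh in kJ (1 kWh = 3600 kJ)
H_FUSION = 172.0        # kJ/kg for NaNO3 (assumed property value)

mass_tonnes = E_KJ / H_FUSION / 1000.0
print(f"~{mass_tonnes:.1f} t of nitrate salt")  # ~14.7 t
```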

4. RECENT WORK IN THE FIELD OF PCMs
M. Ravikumar et al. (2012) analyzed the heat transmission across three roof structures viz., bare RCC roof, RCC roof with weathering
coarse and RCC roof with PCM above RCC. It was concluded that the thermal inertia of roof is very high, moderate and lowest with
PCM, WC and RCC roofs respectively. Thus the roof structure below the PCM layer is not affected much by the external climatic
variation.
Gang Li et al. (2012) reviewed the recent development of available cold storage materials for subzero applications. Absorption and
adsorption storages are mainly discussed for their working pairs, heat transfer enhancement and system performance improvement
aspects.
Lin Qiu et al. (2012) analyzed the actual melting and solidification of PCM, set up a PCM heat transfer model that considers liquid-phase natural convection, and exploited CFD software to carry out numerical simulations.
Mario A Medina, et al. (2013) presented results of the potential thermal enhancements in building walls derived from using phase
change materials. For the frame walls, the PCM encapsulated within reflective foil sheets yielded the highest reductions of 52.4%
(peak) and 35.6% for a PCM concentration of about 15%, producing more stable wall temperatures.
Amarendra Uttam, et al. (2013) presented the application of phase-change energy storage in air conditioning applications. It was
concluded from the results that during day time temperature of air coming to the room is decreased by 2-4 K. The effectiveness of
system is highly dependent on local climate.
Guohui Feng et al. (2013) showed that, compared to a normal fresh air system, the phase change solar energy fresh air thermal storage system offers a significant improvement in energy saving and indoor comfort level and will play an important role in sustainable energy development.
Tung-Chai Ling et al. (2013) found that PCM-concrete has some useful characteristics such as better latent heat storage and thermal performance; the inclusion of PCM in concrete yields a significant improvement in the thermal performance of the concrete.
Jessica Giro-Paloma et al. (2013) suggested that the use of microencapsulated PCM has many advantages, as microcapsules can carry phase change materials as a core, allowing the preparation of slurries. It was concluded that stiffness depended on the assay temperature and particle size, with an important decrease in elastic properties at 80 °C.
Doerte Laing, et al. (2013) analyzed that a high-temperature PCM is a key component for storing latent heat. The 700 kWh PCM
storage has been tested successfully in a combined storage system for DSG. The operation of this PCM storage module for
evaporating water in constant and sliding pressure mode was succeeded.
Camila Barreneche et al. (2013), showed that the Phase change materials can be presented as materials with high thermal energy
storage capacity due to the latent heat stored/released during phase change, reducing the energy demand of buildings when
incorporated to construction materials. The benefit from extending the PCM addition up to 15 wt% is better for gypsum samples than
for Ordinary Portland cement matrixes.
Servando Álvarez et al. (2013) showed that increasing the contact area between the PCM and air by a factor of approximately 3.6 increases the convective heat transfer coefficient significantly, allowing the stored cold to be used.
M.R. Anisur et al. (2013) emphasized the opportunities for energy savings and greenhouse-gas emission reduction offered by the implementation of PCM in TES systems. It was concluded that about 3% of the total CO2 emissions by fuel projected in 2020 could be avoided with PCM applications in buildings for heating and cooling.
Jisoo Jeon et al. (2013) highlighted that the proper design of TES systems using a PCM requires quantitative information and knowledge about the heat transfer. They reviewed the development of available latent heat thermal energy storage technologies and discussed PCM application methods for residential buildings using radiant floor heating systems.

5. CONCLUSION
PCMs work as a storehouse of thermal energy and deliver it as and when required, owing to their high latent heat of fusion, and thus aid energy saving. Wide application of PCMs in buildings does not yet seem optimal, but improvements in the thermal heat storage (THS) of PCMs are making them implementable on a wide range. Among all PCMs, organic compounds, particularly paraffin waxes, are the most suitable for latent heat storage (LHS) due to their compatibility with the human comfort temperature range of about 26 °C and their affordability.

REFERENCES:
Journal Papers and Conferences
[1] Francis Agyenim, Neil Hewitt, Philip Eames, Mervyn Smyth, A review of materials, heat transfer and phase change problem formulation for latent heat thermal energy storage systems (LHTESS), Renewable and Sustainable Energy Reviews, vol. 14, pp. 615-628, 2010.
[2] Mohammed M. Farid, Amar M. Khudhair, Siddique Ali K. Razack, Said Al-Hallaj, A review on phase change energy storage: materials and applications, Energy Conversion and Management, vol. 45, pp. 1597-1615, 2004.

[3] M Ravikumar, PSS Srinivasan, Analysis of heat transfer across building roof with phase change material, Journal of
Computational Information Systems, vol.4, pp.1497-1505, 2012.
[4] Piia Lamberg, Reijo Lehtiniemi, Anna-Maria Henell, Numerical and experimental investigation of melting and freezing processes in phase change material storage, International Journal of Thermal Sciences, vol. 43, pp. 277-287, 2004.
[5] Y. Tian, C.Y. Zhao, Numerical investigations of heat transfer in phase change materials using non-thermal equilibrium model, 11th UK National Heat Transfer Conference, London, UK, 6-8 September, 2009.
[6] Dariusz Heim, Joe A. Clarke, Numerical modelling and thermal simulation of PCM-gypsum composites with ESP-r, Energy and Buildings, vol. 36, pp. 795-805, 2004.
[7] Lin Qiu, Min Yan, Zhi Tan, Numerical Simulation and Analysis of PCM on Phase Change Process Consider Natural Convection Influence, The 2nd International Conference on Computer Application and System Modeling, 2012.
[8] Mario A Medina, Kyoung Ok Lee, Xing Jin, Xiaoqin Sun, On the use of phase change materials in building walls for heat transfer
control and enhanced thermal performance, APEC Conference on Low-carbon Towns and Physical Energy Storage, Changsha, China,
25-26 May, 2013.
[9] Amarendra Uttam, J. Sarkar, Performance Analysis of Phase Change Material based Air-Conditioning System, 2nd International Conference on Emerging Trends in Engineering & Technology, College of Engineering, Teerthanker Mahaveer University, 12-13 April, 2013.
[10] Ruben Baetens, Bjørn Petter Jelle, Arild Gustavsen, Phase change materials for building applications: A state-of-the-art review, Energy and Buildings, vol. 42, pp. 1361-1368, 2010.
[11] Guohui Feng, Lei Zhao, Yingchao Fei, Huang Kailiang, Shui Yu, Research on the phase change solar energy fresh air thermal
storage system, APEC Conference on Low-carbon Towns and Physical Energy Storage, Changsha, China, May 25-26, 2013.
[12] Dominic Groulx and Wilson Ogoh, Solid-liquid phase change simulation applied to a cylindrical latent heat energy storage
system, COMSOL Conference, Boston, 2009.
[13] Tung-Chai Ling, Chi-Sun Poon, Use of phase change materials for thermal energy storage in concrete: An overview, Construction and Building Materials, vol. 46, pp. 55-62, 2013.
[14] M. Marinkovic, R. Nikolic, J. Savovic, S. Gadžurić, I. Zsigrai, Thermochromic complex compounds in phase change materials: Possible application in an agricultural greenhouse, Solar Energy Materials and Solar Cells, vol. 51, pp. 401-411, 1998.
[15] Shankar Krishnan, Suresh V. Garimella, and Sukhvinder S. Kang, A novel hybrid heat sink using phase change materials for
transient thermal management of electronics, IEEE Transactions on Components and Packaging Technologies, vol. 28, pp.281-289,
June 2005.
[16] Murat M. Kenisarin, High-temperature phase change materials for thermal energy storage, Renewable and Sustainable Energy Reviews, vol. 14, pp. 955-970, 2010.
[17] T. Siegrist, P. Jost, H. Volker, M. Woda, P. Merkelbach, C. Schlockermann and M. Wuttig, Disorder-induced localization in crystalline phase-change materials, published online, DOI: 10.1038/nmat2934, 9 January 2011.
[18] I. Krupa, G. Mikova, A.S. Luyt, Phase change materials based on low-density polyethylene/paraffin wax blends, European Polymer Journal, vol. 43, pp. 4695-4705, 2007.
[19] Dale P. Bentz, Randy Turpin, Potential applications of phase change materials in concrete technology, Cement & Concrete Composites, vol. 29, pp. 527-532, 2007.
[20] Xiaoming Fang, Zhengguo Zhang, Zhonghua Chen, Study on preparation of montmorillonite-based composite phase change materials and their applications in thermal storage building materials, Energy Conversion and Management, vol. 49, pp. 718-723, 2008.
[21] Jessica Giro-Paloma, Gerard Oncins, Camila Barreneche, Mònica Martínez, A. Inés Fernández, Luisa F. Cabeza, Physico-chemical and mechanical properties of microencapsulated phase change material, Applied Energy, vol. 109, pp. 441-448, 2013.
[22] Doerte Laing, Thomas Bauer, Nils Breidenbach, Bernd Hachmann, Maike Johnson, Development of high temperature phase-change-material storages, Applied Energy, vol. 109, pp. 497-504, 2013.
[23] Camila Barreneche, M. Elena Navarro, A. Inés Fernández, Luisa F. Cabeza, Improvement of the thermal inertia of building materials incorporating PCM: Evaluation in the macro-scale, Applied Energy, vol. 109, pp. 428-432, 2013.
[24] Servando Álvarez, Luisa F. Cabeza, Alvaro Ruiz-Pardo, Albert Castell, José Antonio Tenorio, Building integration of PCM for natural cooling of buildings, Applied Energy, vol. 109, pp. 514-522, 2013.
[25] M.R. Anisur, M.H. Mahfuz, M.A. Kibria, R. Saidur, I.H.S.C. Metselaar, T.M.I. Mahlia, Curbing global warming with phase change materials for energy storage, Renewable and Sustainable Energy Reviews, vol. 18, pp. 23-30, 2013.
[26] Jisoo Jeon, Jung-Hun Lee, Jungki Seo, Su-Gwang Jeong, Sumin Kim, Application of PCM thermal energy storage system to reduce building energy consumption, J. Therm. Anal. Calorim., DOI 10.1007/s10973-012-2291-9, pp. 279-288, Budapest, Hungary, 15 February 2012.
[27] Dong Zhang, Zongjin Li, Jianmin Zhou, Keru Wu, Development of thermal energy storage concrete, Cement and Concrete Research, vol. 34, pp. 927-934, 2004.
[28] J. Kosny, D.W. Yarbrough, K.E. Wilkes, D. Leuthold, A.M. SyEd, PCM enhanced cellulose insulation in lightweight natural
fibers. http://intraweb.stockton.edu/, 2005 (retrieved 13.10.08).

[29] D.J. Morrison, S.I. Abdel-Khalik, Effects of phase change energy storage on the performance of air-based and liquid-based solar heating systems, Solar Energy, vol. 20, pp. 57-67, 1978.
[30] H. Mehling, L.F. Cabeza, S. Hippeli, S. Hiebler, PCM module to improve hot water heat stores with stratification, http://www.fskab.com/, 2008 (retrieved 13.10.08).
[31] B. Zalba, J.M. Marín, L.F. Cabeza, H. Mehling, Free-cooling of buildings with phase changing materials, International Journal of Refrigeration, vol. 27, pp. 839-849, 2004.
[32] B. Zalba, B. Sánchez-Valverde, J.M. Marin, An experimental study of thermal energy storage with phase change materials by design of experiments, Journal of Applied Statistics, vol. 32, no. 4, pp. 321-332, 2005.
[33] Hawlader M.N.A., Uddin M.S., Khin M.M., Microencapsulated PCM thermal-energy storage system, Applied Energy, vol. 74, pp. 195-202, 2003.





















Modeling the deformation of Earth Dam during an Earthquake
Mohammad Reza Kahoorzadeh 1, Masoud Dehghani 2
1 Department of Civil Engineering, Pardis of Gheshm, Iran
2 Assistant Professor, Department of Civil Engineering, University of Hormozgan, Iran
(1 m_kahoorzadeh@yahoo.com, 2 dehghani@hormozgan.ac.ir)

Abstract Embankments, or earth dams, are important structures with considerable application in the vast area of geotechnical engineering. The control of different geotechnical phenomena, such as sliding, overturning and settlement of clayey dams, has high research priority, since it provides a strong controlling tool for engineers and managers. In this article, the dynamic or seismic deformation of embankments is investigated by finite element modeling in the ANSYS software. The Young's modulus ratio of the models was defined as the crest elasticity modulus relative to the soft-soil elasticity modulus of the foundation. It is shown that a higher value of the elasticity ratio yields a lower dynamic horizontal displacement and a higher dynamic vertical displacement. Comparing the results with the literature indicated good agreement in terms of dynamic displacements. Finally, dams with better seismic performance are identified.
Keywords earth dam, seismic behavior, hydrodynamic pressures, ANSYS.
INTRODUCTION

Despite the progress made in recent years in understanding the behavior of embankments erected on soft clay ground, the optimal design of such embankments remains difficult and complex (Hird et al., 1995) [1]. The failure of earth structures such as natural slopes or earth embankments and dams has resulted in heavy loss of life and property in communities worldwide, where the understanding of the mechanism of slope failure and its analysis have generally been insufficient to prevent the accidents which occurred. At present, this problem remains incompletely resolved (Espinoza et al., 1994) [2]. Slope stability analyses have received a great deal of study by various researchers, and a wide variety of analytical procedures have been developed over the years.
The reconnaissance reports of several recent earthquakes document numerous cases of significant damage to bridge
foundations and abutments from liquefaction-induced ground failures. Additional documentation on the damage to
highways, bridges, and embankments from liquefaction of loose, saturated, cohesionless soils clearly points out the need
to develop improved criteria to identify the damage potential of both new and existing highway structures. The experience
of the Niigata earthquake developed an awareness of the following types of damage and behavior due to liquefaction.

1- Settlement, tilting, and toppling of bridge foundation elements due to a reduction in ground bearing capacity.
2- Tilting, rotation or collapse of retaining walls, abutments and quay walls as a result of increased earth pressure
and reduction in soil shear strength.
3- Failure of earth structures, such as embankments, due to decreases in the strengths of sandy soil materials.
Dynamic analysis of concrete dams has been considered in the last decade. Gazetas et al. (1992) investigated the seismic analysis and design of rockfill dams. Theoretical methods for estimating the dynamic response and predicting the performance of modern rockfill dams subjected to strong earthquake shaking are reviewed. The focus is on methods accounting for nonlinear material behavior, three-dimensional canyon geometry, and asynchronous base excitation. It is shown that both strong nonlinearities and lack of coherence in the seismic excitation tend to reduce the magnitude of the deleterious 'whip-lash' effect computed for tall dams built in rigid narrow canyons. Particular emphasis is accorded to concrete-faced rockfill dams, and a case study involving an actually designed dam in a narrow canyon points to potential problems and suggests desirable modifications [3].
Franklin (1983) surveyed the seismic stability of embankment structures. He discussed the guidelines and criteria used by the U.S. Corps of Engineers for seismic analysis and design of dams, and the procedures used by the Waterways Experiment Station to evaluate the seismic stability of earth and rock-fill dams [4].
Noorzad et al. (2010) investigated the seismic displacement analysis of embankment dams with a reinforced cohesive shell. They note that suitable materials for use as the shell of embankment dams are clean coarse-grained soils or natural rockfill; at some sites these materials may not be available at an economic distance from the dam axis. The use of in-situ cohesive soils reinforced with geotextiles as the shell is suggested in their study for such cases. The dynamic behavior of the reinforced embankment dam is evaluated through fully coupled nonlinear effective-stress dynamic analysis. A practical pore generation model has been employed to incorporate pore pressure build-up during cyclic loading.
practical pore generation model has been employed to incorporate pore pressure build up during cyclic loading.
Parametric analyses have been performed to study the effect of reinforcements on the seismic behavior of the reinforced

dam. Results showed that reinforcements placed within the embankment reduce horizontal and vertical displacements of
the dam as well as crest settlements. Maximum shear strains within the embankment also decreased as a result of
reinforcing. Furthermore, it was observed that reinforcements cause amplification in maximum horizontal crest
acceleration [5].
Okamura et al. (2013) investigated the seismic stability of embankments subjected to pre-deformation due to foundation consolidation. It has been reported that the major cause of earthquake damage to embankments on level ground surfaces is liquefaction of the foundation soil. A few case histories, however, suggest that river levees resting on non-
liquefiable foundation soil have been severely damaged if the foundation soil is highly compressible, such as thick soft
clay and peat deposits. A large number of such river levees were severely damaged by the 2011 off the Pacific coast of
Tohoku earthquake. A detailed inspection of the dissected damaged levees revealed that the base of the levees subsided in
a bowl shape due to foundation consolidation. The liquefaction of a saturated zone, formed at the embankment base, is
considered the prime cause of the damage. The deformation of the levees, due to the foundation consolidation which may
have resulted in a reduction in stress and the degradation of soil density, is surmised to have contributed as an underlying
mechanism. In this study, a series of centrifuge tests is conducted to experimentally verify the effects of the thickness of
the saturated zone in embankments and of the foundation consolidation on the seismic damage to embankments. It is
found that the thickness of the saturated zone in embankments and the drainage boundary conditions of the zone have a
significant effect on the deformation of the embankments during shaking. For an embankment on a soft clay deposit,
horizontal tensile strain as high as 6% was observed at the zone above the embankment base and horizontal stress was
approximately half that of the embankment on stiff foundation soil. Crest settlement and the deformation of the
embankment during shaking were larger for the embankment subjected to deformation due to foundation consolidation
[6].

INTRODUCTION OF FORCES AND MODELING BY SOFTWARE

The momentum equations for two-dimensional flow in a vertical plane (Fig. 1), integrated over a control volume, are written as (Demirel, 2012) [7]:

$$\frac{\partial}{\partial t}\int_{CV} u \, d\Omega + \int_{CS} u \, \vec{V}\cdot d\vec{A} = -\frac{1}{\rho}\int_{CV} \frac{\partial p}{\partial x}\, d\Omega + \nu \int_{CS} \nabla u \cdot d\vec{A} + \int_{CV} a_x \, d\Omega \qquad (1)$$

$$\frac{\partial}{\partial t}\int_{CV} w \, d\Omega + \int_{CS} w \, \vec{V}\cdot d\vec{A} = -\frac{1}{\rho}\int_{CV} \frac{\partial p}{\partial z}\, d\Omega + \nu \int_{CS} \nabla w \cdot d\vec{A} - \int_{CV} g \, d\Omega \qquad (2)$$

where x and z are the coordinate axes in the horizontal and vertical directions respectively, $a_x$ is the horizontal ground acceleration, u and w are the velocity components, $\vec{V}$ is the velocity vector relative to the moving ground, p is the pressure, t is the time, g is the gravitational acceleration, $\nu$ is the kinematic viscosity, $\rho$ is the fluid density, $\nabla$ is the del operator, CV indicates the control volume, CS indicates the control surface and $d\vec{A}$ is the area element normal to the control surface pointing out of the control volume. The horizontal ground acceleration is included to represent earthquake excitations (Demirel, 2012) [7].















Figure 1. Definition sketch of the dam-reservoir system subjected to earthquake



Chopra presented an analytical expression for the variation of hydrodynamic pressures on a vertical dam face during arbitrary ground motion (Chopra, 1967) [8]:

$$p(0, z, t) = \frac{4\rho c}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{2n-1}\cos(\lambda_n z)\int_{0}^{t} a_x(\tau)\, J_0\{\lambda_n c\,(t-\tau)\}\, d\tau \qquad (3)$$

where $J_0$ is the Bessel function of the first kind of order zero, $\lambda_n = (2n-1)\pi/2h$ with h the reservoir depth, and c is the velocity of sound in water. The determination of the hydrodynamic response of a vertical dam face to a prescribed earthquake motion involves the numerical evaluation of Eq. (3).
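A hedged sketch of such a numerical evaluation follows (not the authors' code). The water properties, reservoir depth, series truncation and the illustrative ground motion are assumptions consistent with Chopra's solution for a rigid vertical face.

```python
# Numerical evaluation of Eq. (3) by truncating the series and
# integrating the convolution with the trapezoidal rule.
import numpy as np
from scipy.special import j0

RHO = 1000.0    # water density, kg/m^3 (assumed)
C = 1440.0      # speed of sound in water, m/s (assumed)
H = 100.0       # reservoir depth, m (assumed)

def hydrodynamic_pressure(z, t_grid, a_x, n_terms=20):
    """Pressure time history at height z above the base (series truncated)."""
    dt = t_grid[1] - t_grid[0]
    p = np.zeros_like(t_grid)
    for n in range(1, n_terms + 1):
        lam = (2 * n - 1) * np.pi / (2 * H)
        coef = (4 * RHO * C / np.pi) * (-1) ** (n - 1) / (2 * n - 1) * np.cos(lam * z)
        for i in range(len(t_grid)):
            # Convolution integral of Eq. (3) up to time t_grid[i].
            kern = j0(lam * C * (t_grid[i] - t_grid[: i + 1]))
            p[i] += coef * np.trapz(a_x[: i + 1] * kern, dx=dt)
    return p

t = np.linspace(0.0, 2.0, 400)
a = 0.5 * 9.81 * np.sin(2 * np.pi * 2.0 * t)   # 0.5 g, 2 Hz ground motion (assumed)
print(hydrodynamic_pressure(0.0, t, a)[-1])     # pressure at the base, Pa
```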

NUMERICAL ANALYSIS

In this article, the dynamic or seismic deformation of embankments is investigated by finite element modeling in the ANSYS software. Like other analysis systems, this analytical system is composed of the dam and its surrounding soil. Initially, the Mohr-Coulomb failure criterion and drained behavior were considered for all materials. The material properties adopted in this study are presented in Table 1; this work is done in the Engineering Data part of the software.


TABLE 1: MATERIAL PROPERTIES

Layer of dam        ρ (kg/m³)   E (Pa)   ν      G (Pa)
Foundation          800         3E+07    0.25   1.200E+07
Saturated layer     900         2E+07    0.45   6.866E+06
Saturated layer 1   1900        4E+06    0.30   1.538E+06
Saturated layer 2   1900        6E+06    0.30   2.307E+06
Saturated layer 3   1900        6E+06    0.48   2.027E+06
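The tabulated shear moduli can be cross-checked against the isotropic elasticity relation G = E/(2(1+ν)); a minimal sketch of the check:

```python
# Consistency check of Table 1 using G = E / (2 * (1 + nu)).
def shear_modulus(E, nu):
    return E / (2.0 * (1.0 + nu))

LAYERS = {                      # (E in Pa, Poisson's ratio) from Table 1
    "Foundation":        (3e7, 0.25),
    "Saturated layer":   (2e7, 0.45),
    "Saturated layer 1": (4e6, 0.30),
    "Saturated layer 2": (6e6, 0.30),
    "Saturated layer 3": (6e6, 0.48),
}
for name, (E, nu) in LAYERS.items():
    print(f"{name}: G = {shear_modulus(E, nu):.3e} Pa")
```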

The first step in the analysis of the dam system and its interaction with the soil and the surrounding fluid is creating the geometry. For this purpose, the existing tools in the ANSYS software were used. Fig. 2 shows the geometry of the dam and foundation.

Figure 2. Geometry of the dam and foundation


The modeled dams have the dimensions shown in Fig. 3.

Figure 3. The geometrical dimensions of the dam

After the geometry of the model is created, an appropriate mesh should be generated. The typical adopted mesh is shown in Fig. 4.

Figure 4. Mesh used for the model

Figures 5 to 7 show the horizontal displacements of the dams. The purpose of studying the three different dams is to compare their horizontal displacements and dynamic settlements under seismic loads.

Figure 5. Minimum and maximum of dynamic horizontal displacement of Dam 1
Figure 6. Minimum and maximum of dynamic horizontal displacement of Dam 2

Figure 7. Minimum and maximum of dynamic horizontal displacement of Dam 3
Also, Figures 8 to 10 show the minimum and maximum of dynamic vertical displacement (settlement) during the earthquake.


Figure 8. Minimum and maximum of dynamic vertical displacement of Dam 1
Figure 9. Minimum and maximum of dynamic vertical displacement of Dam 2



Figure 10. Minimum and maximum of dynamic vertical displacement of Dam 3
Considering the above figures, the Young's modulus ratio of the models was defined as the crest elasticity modulus relative to the soft-soil elasticity modulus of the foundation. It is indicated that a higher value of the elasticity ratio yields a lower dynamic horizontal displacement and a higher dynamic vertical displacement. Comparing the results with the literature indicated good agreement in terms of dynamic displacements. Finally, the dams with better seismic performance are identified.
CONCLUSION
In this study, the dynamic or seismic behavior of dams was studied. Three types of dam, with different geotechnical characteristics in their different layers, were subjected to an earthquake with a maximum acceleration of 0.5g. The results indicate that the values of seismic displacement (both horizontal and vertical) are a function of the ratio of the modulus of elasticity of the dam crest to that of its foundation.
Also, as can be seen, an increase in the Poisson's ratio of the models, according to the results obtained in this study, leads to a reduction in the dynamic settlement of points.
REFERENCES
[1] Hird, Pyrah, Russell, and Cinicioglu (1995). Modeling the Effect of Vertical Drains in Two Dimensional F.E.A. of
Embankments on Soft Ground, Canadian Geotechnical Journal; 32, pp.795-807.
[2] Espinoza, Bourdeau and Muhunthan B ( 1994) .Unified Formulation for Analysis of Slopes with General Slip Surface.
Journal of Geotechnical Engineering, ASCE, 120.pp. 1185-1189.
[3] Gazetas, G. and Dakoulas, P., Seismic analysis and design of rockfill dams: state of the art, Soil Dynamics and Earthquake Engineering, Vol. 11, pp. 27-61, 1992.
[4] Franklin A.G., Seismic stability of embankment structures, Structural Safety, Volume 1, Issue 2, Pages 141-154, 1982-1983.
[5] Reza Noorzad, Mehdi Omidvar, Seismic displacement analysis of embankment dams with reinforced cohesive shell, Soil Dynamics and Earthquake Engineering, Volume 30, Issue 11, Pages 1149-1157, November 2010.
[6] Mitsu Okamura, Shuji Tamamura, Rikuto Yamamoto, Seismic stability of embankment subjected to pre-deformation due to foundation consolidation, Soils and Foundations, Volume 53, Issue 1, Pages 11-22, February 2013.
[7] Demirel E., Hydrodynamic Analysis of Earthquake Excited Dam-Reservoirs with Sloping Face, Proceedings of the World Congress on Engineering and Computer Science 2012, Vol. II, WCECS 2012, San Francisco, USA, October 24-26, 2012.
[8] Chopra A.K., Hydrodynamic pressures on dams during earthquakes, J. Eng. Mech. Div., ASCE, vol. 93, pp. 205-223, 1967.

Use of Smart Wireless Node in Vehicle Networking
Shripad S. Kulkarni 1, Prof. Ramesh Y. Mali 1
1 University of Pune, India
E-mail: shri3925@gmail.com

Abstract This paper proposes the use of smart wireless nodes in the vehicle network for communication between the different Electronic Control Units (ECUs) in the vehicle, more specifically for the Body Control Module (BCM) in a bus platform. In a typical bus platform the main constraint is the wiring harness, as it involves many critical issues such as weight, complex design and more. To overcome this issue, smart wireless nodes in the vehicle can play an important role, as they significantly reduce the wiring harness. We use a PIC 18FXX microcontroller and Zigbee devices with the IEEE 802.15.4 standard for communication between the different wireless network modules in the vehicle.

Keywords Wireless Node, In vehicle Networking (IVN), Electronic Control Unit (ECU), Body Control Module (BCM), PIC
Microcontroller, Zigbee
INTRODUCTION
The electrical circuits and their electronic control units are essential for good vehicle performance and for communication between subsystems. At the beginning of the 1980s, the engineers of the automobile manufacturers assessed the existing field bus systems for their use in vehicles; since requirements are continuously changing, a great deal of research activity and innovation is involved in the automotive segment. Intra-vehicle and vehicle-to-vehicle communication, vehicle-to-road-infrastructure communication, and communication between different parts of the vehicle, such as the trailer and dispatchers, are getting connected and able to gather and distribute data, which can be used to enable better operations. Broadly, communication is possible in three ways: by physical hard-wired point-to-point connection, by an inter-ECU communication protocol, or by a wireless communication medium. To date, both physical point-to-point hard-wired connections and proprietary hard-wired serial communication protocols have been used, but even though wireless sensor networks have the potential to be used in many vehicle applications, they are not being actively used or focused on for further research. The manufacturers came to the conclusion that none of the protocols completely fulfilled their requirements, which marked the beginning of the development of new field bus protocols for use in vehicles. With the increased number of electronic control units and the system's complexity, it is impossible to implement this exchange of information through point-to-point links, because it would require a disproportionate length of cable, an increase in cost and production time, reliability problems, and other drawbacks. To overcome this scenario of using more than one protocol in a vehicle, to reduce the wiring harness and for better scalability, a wireless sensor network (WSN) can play an important role. Here, Zigbee transceiver modules based on the IEEE 802.15.4 protocol are used to build the wireless sensor network. Each node will acquire and internally store data periodically. Starting times as well as time intervals can be freely programmed over the network. As soon as a proper network is detected in its proximity, the node will automatically transfer data; optionally, sensor data can be delivered on demand. In its idle state the node remains in power-down mode in order to minimize power consumption. These multiplexed network modules are installed in the vehicle to provide an important reduction of the wiring, which means a reduction in costs, fewer breakdown risks, and easier scalability; maintenance tasks can also be enhanced. In this paper, non-safety-critical functions are considered for implementation as a first step toward the use of smart wireless nodes in in-vehicle networking, since the in-vehicle network architecture can be partitioned into different domains, mainly safety-critical and non-safety-critical functions. Safety-critical functions are introduced into the system to prevent or stop an accident or a critical situation; if such a function malfunctions, there may be a chance of an accident. Non-safety-critical functions do not affect the main system if they fail due to some reason, but if these functions are present in the system, then the

enhanced overall system. From bus platform view it includes user oriented features in vehicle like park light, buzzer,
Internal Lights, front and rear side of lamp etc. after successful proto building later on can able to move towards complete
wireless in-vehicle networking architecture.
OBJECTIVE AND SCOPE
The scope and objective of this paper are to integrate wireless sensor nodes in the vehicle and to offer a different solution to the existing wiring harness design. The technology chosen for the wireless network is Zigbee; after successful implementation of the concept on a prototype, the communication protocol can easily be upgraded. Another important point is weight reduction. In a vehicle there are different parts of the wiring harness, such as the front panel, dashboard, engine, body control, chassis and tail wiring harnesses, and the total weight is more than 130 kg in trucks and buses. The total length amounts to more than 8 km of copper wire and the cost is huge, so even a 10 to 30% reduction in the wiring harness will create a significant difference in terms of both cost and weight.
The main objectives are:
To provide an alternate solution for the current system
To give an overview of future technological requirements
Other Expected Key Outcomes
Reduction in wiring harness complexity
Reduction in total weight
Cost Reduction
Easy Diagnosis , Monitoring
Higher Scalability

PROJECT DESCRIPTION
The number of sensors in the vehicle has increased significantly over the past few years, mainly due to various safety and convenience applications. Currently, the sensors and the microprocessor in a car communicate over a serial data bus and are connected with physical wires. The most significant problem of the current wired architecture is scalability, resulting in the emerging need to develop an in-vehicle wireless sensor network that provides a flexible, open architecture able to incorporate the hundreds of sensors that will be installed in future vehicles. Wireless sensor networks have recently come into importance because they have the potential to revolutionize many segments, such as environmental monitoring, transportation, and the healthcare industries. Because of the advantages of the wireless sensor network, such as low power consumption, wireless distribution, and flexibility without cable restrictions, the usage of WSNs in the automobile field is expected to grow in the coming years, and it will drastically reduce overall wiring harness cost as well as weight.
As shown in Figure 3, the prototype has three modules, called nodes, which will be placed in the vehicle at appropriate locations such that the source of the signal, or the output that needs to be driven from the node, is very close by; this helps to reduce the wiring harness. The first node will be near the front, in the driver cabin, where all front combi-switch inputs are easily accessible. Inputs from the combi switch, such as the turning-light and parking-light inputs, are given to node one, which transmits wireless data to both the second and third nodes. The second node will be placed in the middle of the vehicle, since the side blinker lamps are covered by this module, and engine-related sensor inputs are also given to it; when data is received from the first node, it turns on the side blinker lights. The third module will be at the rear of the vehicle, with the rear loads of the vehicle connected to it. Sensor inputs such as air pressure and engine oil pressure are given to this node, which transmits the data to the first node; the first node receives the data from the respective node and displays it on the LCD module. Each sensor node contains a computational module (a programmable unit) which provides computation ability, storage, and bidirectional communication with other nodes in the system.

The two main advantages are that nodes can be re-tasked in the field and can easily communicate with the rest of the network. Nowadays the need to collect and act on real-time data has increased drastically. However, collecting data using typical wired sensor networks has always been expensive, considering installation and maintenance costs. Although past wireless measurement solutions have been elusive, the use of wireless sensor networks (WSNs) is spreading fast. WSN is a term used to describe an emerging class of embedded communication products that provide redundant, fault-tolerant wireless connections between sensors, actuators and controllers. WSNs are typically formed by groups of several sensor nodes whose individual constitution is based on combining sensors, radios and CPUs into an effective, robust, secure and flexible network with low power consumption and advanced communication and computation capabilities. Applications include industry, atmosphere monitoring, and defense, among others. Besides instrumentation concepts, WSNs involve aspects of wireless communications, network architectures, and protocols. A wireless sensor network is composed of autonomous distributed sensors that cooperate to monitor physical conditions; in a vehicle these conditions can be tire pressure, cargo temperature, trailer door status, presence detection and others. Furthermore, with this technology available in vehicles, many other applications can be implemented for the truck; a device installed in trucks, trailers and tippers collects information from the vehicle. One of the main advantages of using Zigbee for this application is that it supports mesh topologies, which makes a very flexible network possible: the network can be reconfigured to skip broken nodes, and the shortest path to a certain destination can be chosen. The Volvo group presented a concept use of WSN on a trailer in which the wireless nodes consist of side-marker lights and sensors that create an electronic fence around the trailer and can detect if an unauthorized person is trying to access the truck's cargo, steal its fuel or anything else from the vehicle. The network is composed of the lamps and sensors together with a Zigbee coordinator, which has the intelligence to process the messages from the lamps and identify an alarm situation.
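The routing flexibility described above can be illustrated with a toy model (the topology and hop-count metric are assumptions for the sketch, not the Zigbee stack's actual routing algorithm): breadth-first search finds the shortest hop path and can route around a broken node.

```python
# Toy mesh routing: BFS over an invented 4-node topology.
from collections import deque

LINKS = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3}}  # assumed mesh

def shortest_path(src, dst, broken=frozenset()):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]] - seen - set(broken):
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # destination unreachable

print(shortest_path(1, 4))               # a two-hop path, e.g. [1, 2, 4]
print(shortest_path(1, 4, broken={2}))   # [1, 3, 4]: routed around node 2
```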


Module 1

Module 2


Module 3
The major blocks of the proposed system are given below; they describe the components and modules used in the system. The major blocks are:
- ZigBee transceiver (2.4 GHz)
- Microcontroller
- Software


ZIGBEE TRANSCEIVER
ZigBee is a wireless communication protocol standard based on IEEE 802.15.4. Zigbee is a low-cost, low-power, wireless mesh networking standard. The low cost allows the technology to be widely deployed in wireless control and monitoring applications, and the low power usage allows longer life with smaller batteries. The different networking topologies provide high reliability, more extensive range and a very flexible network. ZigBee nodes can go from sleep to active mode in 30 ms or less, so the latency can be low and devices can be responsive, particularly compared to Bluetooth wake-up delays, which are typically around three seconds. Because ZigBee nodes can sleep most of the time, average power consumption can be low, resulting in long battery life.

MICROCONTROLLER
The microcontroller used in the proposed system is the general-purpose PIC18F46K22 controller with a serial UART (Universal Asynchronous Receiver and Transmitter). The UART is connected to the ZigBee transceiver module for serial communication. The vehicle chassis unique number and the module node ID are saved in the NVM (Non-Volatile Memory) of the controller during final programming; this is required to identify and authenticate the appropriate node and vehicle platform.
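A hedged, host-side illustration of this authentication step follows (the frame layout, chassis number and node IDs are invented for the sketch; the real check runs in the PIC firmware):

```python
# A node acts on a frame only if both identifiers match its NVM copy.
CHASSIS_NVM = "MAT123456"   # chassis unique number stored in NVM (assumed)
NODE_ID_NVM = 2             # this module's node ID stored in NVM (assumed)

def accept_frame(frame: dict) -> bool:
    """Authenticate a received frame before acting on its payload."""
    return (frame.get("chassis") == CHASSIS_NVM
            and frame.get("dst") == NODE_ID_NVM)

rx = {"chassis": "MAT123456", "dst": 2, "src": 1, "cmd": "TURN_LEFT_ON"}
if accept_frame(rx):
    print("node 2: drive side-blinker lamps")  # e.g. the mid-vehicle module
```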
SOFTWARE
MPLAB Integrated Development Environment (IDE) is a free, integrated toolset for the development of embedded applications employing Microchip's PIC 8-bit, 16-bit and 32-bit microcontrollers. The MPLAB IDE tool is easy to use and includes software components for fast application development and debugging. PICPgm is PC software to program PIC microcontrollers using external programmer hardware connected to the PC. It allows the user to:
flash (program) a HEX file into a PIC microcontroller
read the content of a PIC microcontroller and save it to a HEX file
erase a PIC microcontroller
check if a PIC microcontroller is empty, i.e. not programmed (blank check)

CONCLUSION
Based on the study and the documented experience with the prototype model, it is observed that today the introduction of new functionality in a vehicle is limited by expensive installation, harnessing and communication protocols, which could be enhanced by the introduction of wireless sensor networks into the current system. The ultimate goal of in-vehicle networking research is to enable novel applications that change the

way of interact or communication in vehicle. The challenge is at the same time to transform the capabilities of sensor networks to be
useful services for the vehicle application.The architecture is able to support flexible, application-specific communications protocols
without sacrificing efficiency. This architecture has been validated through the development of three hardware node and a software
system. For these networks, lifetime is the main evaluation criterion. A second class of applications is that of highly dynamic sense-
and control networks with higher data rates, and highly mobile nodes. Instead of passively monitoring a relatively static environment,
these networks attempt to control the environment in real time. We have evaluated or architecture with respect to both application
classes.

FUTURE SCOPE
The proposed work is mainly focused on receiving data from the remote wireless nodes and finding an alternative solution to the conventional wiring harness. There are different possibilities for extending the research work, listed as under:
In our work only three control nodes are provided; several control nodes with mesh networking could be deployed to cover maximum functionality.
We have used 8 MHz microcontrollers; in future, lower-power microcontrollers can be used for the wireless sensors.
A GUI with a data-log facility on a PC can serve the purpose of diagnostics and ease fault finding.
More expertise is required for the packaging and installation of the wireless modules.

ACKNOWLEDGMENT
I am thankful to my guide and P.G. coordinator for constant encouragement and guidance. I am also thankful to the Principal of the institute and the Head of the E&TC Engineering Department for their valuable support. I take this opportunity to express my deep sense of gratitude towards all those who have helped us in various ways in preparing this work. Last but not least, I am thankful to my parents, who encouraged and inspired me with their blessings.

REFERENCES:
[1] Reconfigurable Computing in Next-Generation Automotive Networks ShankerShreejith, Suhaib A. Fahmy, and Martin
Lukasiewycz IEEE EMBEDDED SYSTEMS LETTERS, VOL. 5, NO. 1, MARCH 2013

[2]System Architecture for Wireless Sensor Networks by Jason Lester Hill Spring 2003 pages 4-5, 11-15, 20-23

[3] G. Leen and D. Heffernan, Vehicles without wires, Computing and Control Engineering Journal, vol. 12, no. 5, pp. 205-211, October 2001

[4] Application of Wireless Sensor Networks to Automobiles, Jorge Tavares, Fernando J. Velez, João M. Ferro, MEASUREMENT SCIENCE REVIEW, Volume 8, Section 3, No. 3, 2008, pages 65-66

[5] Feasibility of In-car Wireless Sensor Networks: A Statistical Evaluation H.-M. Tsai,W. Viriyasitavat, O. K. Tonguz, C. Saraydar,
T. Talty, and A. Macdonald Carnegie Mellon University, ECE Dept, Pittsburgh, PA 15213-3890, USA General Motors Corporation,
Warren, MI 48092-2709, USA pages 1-3

[6] Wireless Sensor Networks in a Vehicle Environment, Master of Science Thesis, RAFAEL BASSO, UNIVERSITY OF GOTHENBURG, Göteborg, Sweden, December 2009, pages 3-4, 9-11, 27-28, 60-63

[7] S. Chakraborty, M. Lukasiewycz, C. Buckl, S. Fahmy, N. Chang, S. Park, Y. Kim, P. Leteinturier, and H. Adlkofer, Embedded systems and software challenges in electric vehicles, in Proc. Design Autom. Test Eur. (DATE) Conf., 2012, pp. 424-429.

[8] N. Navet, Y. Song, F. S. Lion, and C. Wilwert, Trends in automotive communication systems, Proc. IEEE, vol. 93, no. 6, pp. 1204-1223, Jun. 2005.


[9] J. Luo and J.P. Hubaux, A Survey of Inter-Vehicle Communication School of Computer and Communication Sciences, EPFL,
Lausanne, Switzerland, Tech. Rep. IC/2004/24, 2004.

[10] P. Caballero-Gil, Mobile Ad-Hoc Networks: Applications. New York, NY, USA: InTech, 2011, ch. 4, Security Issues in
Vehicular AdHoc Networks

[11] Robert Faludi, Building Wireless Sensor Networks: a practical guide to the Zigbee Networking Protocol, O'Reilly Publication, December 2010: First Edition.

[12] K. S. J. Pister, J. M. Kahn, and B. E. Boser. Smart dust: Wireless networks of millimeter-scale sensor nodes, 1999.

[13] Jason Hill et al., System Architecture Directions for Networked Sensors. http://www.jlhlabs.com/jhill_cs/papers/tos.pdf

[14] http://electronicdesign.com/automotive/wireless-technologies-simplify-wiring-harness

[15] http://theinstitute.ieee.org/benefits/standards/fewer-wires-lighter-cars

[16] http://www.zigbee.org/About/FAQ.aspx

[17] http://wireless.arcada.fi/MOBWI/material/PAN_5_4.html

















Blind Aid: A Self-Learning Braille System for Visually Impaired
Shahbaz Ali Khidri, Shakir Hussain Memon, Aamir Jameel
Department of Electrical Engineering, Sukkur Institute of Business Administration, Sukkur, Sindh, Pakistan
shahbazkhidri@outlook.com, shakir.hussain@iba-suk.edu.pk, aamir.jameel@iba-suk.edu.pk
Abstract - Braille is vital to all visually impaired individuals, and it is the only system through which visually impaired children can learn to read and write; yet the rate of Braille literacy among visually impaired people in developing countries, including Pakistan, is alarmingly low. Today in developing countries less than 3% of visually impaired children are learning to read Braille in school. This continues despite the fact that studies have shown that 80% of all employed visually impaired people read and write Braille fluently. Thus, Braille literacy is the key to employment and full participation in society. This research paper presents the
design of a low-cost, low-power, portable, self-learning, and user-friendly Braille system. The designed system serves as a Braille writing and reading tutor, so visually impaired people can enhance their Braille writing and reading skills without the assistance of a Braille teacher. The designed system takes input through a Braille keyboard and produces speech output, and it also has the capability to read documents. It is believed that by implementing the designed Braille system in schools and homes, the Braille literacy rate can be increased and visually impaired people can be employed and can fully participate in society.
Keywords Braille system, visually impaired, Braille literacy, employment, Braille keyboard, Braille writing and reading tutor,
speech output.
I. INTRODUCTION
According to the World Health Organization (WHO), 285 million people are estimated to be visually impaired worldwide, among whom 90% live in developing countries [1]. Despite the fact that education plays a crucial role in everyone's life and there is a significant relationship between Braille literacy and academic success, higher income, and employment, the rate of Braille literacy in developing countries is alarmingly low [2]. Today in developing countries less than 3% of visually impaired children are learning to read Braille in school [3].
Braille is the only system through which children with profound or total loss of sight can learn to read and write. It is traditionally written with embossed paper. Louis Braille, a French 12-year-old who was himself blind, developed this code for the French alphabet as an improvement on night writing, a tactile military code developed by Charles Barbier in response to Napoleon's demand for a means for soldiers to communicate silently at night and without light. The work of Louis Braille later changed the world of reading and writing forever [4].
The basic grid of a Braille alphabet character consists of six raised dots, positioned like the figure six on a die, in two parallel vertical lines of three dots each. From the six raised dots that make up the basic grid, 64 different signs can be created. At first, Braille was a one-to-one transliteration of French orthography, but soon various abbreviations, contractions, and even logograms were developed, creating a system much more like shorthand. There are three levels of Braille encoding, named Grade 1, Grade 2, and Grade 3. Grade 1 is a letter-by-letter transcription used for basic literacy and consists of the standard English alphabet and punctuation marks. Grade 2 consists of the standard English alphabet, punctuation marks and contractions; contractions are used to utilize the Braille page space efficiently, and Grade 2 is used for books, public-place signs, menus, and other Braille materials. Grade 3 is used in personal letters, diaries, notes, and literature; it is a kind of shorthand, with entire words shortened to a few letters. Figure 1 shows the Braille character sets.


Figure 1 Braille Character Sets
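To make the encoding concrete, here is a small sketch of a Grade 1 lookup table (only a few letters shown; the dot-to-letter pairs follow standard Braille, with dots numbered 1-3 down the left column and 4-6 down the right):

```python
# Excerpt of a Grade 1 dot-pattern lookup; six dots give 2**6 = 64 signs.
BRAILLE_GRADE1 = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode(dots):
    """Map a set of raised dots to its Grade 1 character."""
    return BRAILLE_GRADE1.get(frozenset(dots), "?")

print(decode({1, 4}))                      # c
print(2 ** 6, "possible six-dot signs")    # 64
```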
Despite the fact that Braille literacy is the key to employment and full participation in society, educating visually impaired children is not given priority in developing countries, including Pakistan. Most of the schools for the visually impaired in Pakistan are in a very poor state, lacking basic facilities [5]. The number of educational institutes for visually impaired people and of trained Braille teachers in developing countries is negligible. Most of the devices available in the market for visually impaired people are either complex to operate or costly. The majority of people in developing countries live on less than $1.25 per day, so it is almost impossible for parents in developing countries to educate their visually impaired children [6].
Thus, in order to cope with these barriers, we have designed a low-cost, low-power, portable, self-learning, and user-friendly Braille system. The designed system is capable of enhancing the Braille writing and reading skills of a visually impaired individual without the need for a teacher. The designed system works on text-to-speech technology and is capable of reading documents. By implementing the designed system in schools and homes, the rate of Braille literacy can be increased, visually impaired people can be employed and can fully participate in society, and money, time, and human resources can be saved in an efficient way.
This research paper is organized into five sections. Section II is a literature review of the existing techniques and devices for visually impaired people. Section III describes the system implementation methodology, our contribution, and the techniques we have used in designing the Braille system. Section IV presents the results obtained from the research work. Section V concludes the paper with important suggestions and the factual findings of the research.
II. LITERATURE REVIEW
There are different devices available in the market for visually impaired people to help them in educational activities and to bridge the
communication gap between visually impaired people and people with sight. The popular devices for visually impaired people are
Speech Assisted Learning (SAL) which costs around $4,600, Book Sense Reader which costs around $499, Eye-Pal Reader which
costs around $1,995, Eye-Pal ROL which costs around $2,195, Electronic Braille pad [7], Automated electronic pen [8], Automatic
visual to tactile translation [9], Interactive 3D Sound Hyper stories for Blind Children [10], A PC-based Braille library system for the
sightless [11], FPGA Based Braille to Text and Speech for Blind Persons [12]. There are also some web browsers specially designed
for visually impaired people to help them in internet surfing. The popular web browsers for visually impaired people are Audio-haptic
internet browser and associated tools for blinds and visually impaired computer users [13], The Auditory browser for blind and
visually impaired users [14].
According to the statistics provided by the World Health Organization (WHO), about 90% of the world's visually impaired live in developing countries, and the majority of these people live on less than $1.25 per day, so they cannot afford the devices available in the market for visually impaired individuals. Most of the devices available in the market are either Braille writing tutors or Braille scanners; a low-cost Braille system that can teach Braille writing and reading skills without the need for a Braille teacher is not available in the market for visually impaired individuals in developing countries. Blind Aid: A Self-Learning Braille System for Visually Impaired is the only Braille system which is a low-cost, low-power, portable, self-learning, and user-friendly Braille writing and reading tutor with the capability of reading documents, and it works on text-to-speech technology, the assistive technology for visually impaired individuals.

III. BRAILLE SYSTEM IMPLEMENTATION METHODOLOGY
The designed Braille system is based on text-to-speech technology which is the assistive technology for visually impaired people. The
block diagram of the designed system is shown in Figure 2.

Figure 2 Block Diagram of Blind Aid
The block diagram of Blind Aid in Figure 2 shows the different steps of the designed Braille system. The input can be provided either by typing through the provided Braille keyboard or by inserting text files. The designed Braille keyboard supports all levels of Braille encoding, so beginners as well as advanced users can use it for typing. The characters or words entered via the Braille keyboard and/or the inserted text files are then processed by the computer. The designed Braille system converts the Braille character sets into the standard English alphabet, numbers, and punctuation marks. The final output appears as text on the computer screen and in speech form through text-to-speech synthesis. The designed Braille system also has the capability to read documents, so inserted paragraphs or complete texts are processed by the system and speech output is produced.
The designed Braille system is an intelligent system: it predicts the entered characters or words, automatically decides whether the given input is a character or a word, and then produces the speech output accordingly. This feature distinguishes it from the other available text-to-speech Braille systems, which only produce the speech output in character form.
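To make the character-versus-word behaviour concrete, the sketch below shows one way such a decision and spoken output could look. The paper's implementation was built in Microsoft Visual Studio; this sketch instead uses the third-party Python library pyttsx3 purely as a stand-in text-to-speech engine, and the function name and default values are illustrative assumptions, not the authors' code.

import pyttsx3  # assumed stand-in TTS engine, not the authors' toolkit

def speak_input(decoded, rate=150, volume=0.8):
    # Speak a decoded Braille entry; a single character is announced
    # as a letter, anything longer is spoken as a whole word.
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)      # adjustable speech rate
    engine.setProperty("volume", volume)  # adjustable volume level
    if len(decoded) == 1:
        engine.say("letter " + decoded)
    else:
        engine.say(decoded)
    engine.runAndWait()

speak_input("b")        # announces the character
speak_input("braille")  # speaks the whole word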

IV. RESULTS AND DISCUSSIONS
The simulation of the designed Braille system was carried out in Microsoft Visual Studio, an integrated development environment (IDE) from Microsoft used to develop computer programs for the Microsoft Windows family of operating systems, as well as websites, web applications and web services. A sample text typed through the Braille keyboard in speaking mode is shown in Figure 3.

Figure 3 Speaking Mode
The designed Braille system has the capability to pause and resume the speech. When the speech output is stopped, the system goes into idle mode. The volume of the designed Braille system can be adjusted to the desired level. The speech rate can also be changed, and the visually impaired individual can increase or decrease it to the desired level. The designed Braille system also provides the facility to change the voice gender, so the speech output can be produced in either a female or a male voice. The output of the designed Braille system in pause and idle mode is shown in Figure 4 and Figure 5 respectively.

Figure 4 Pause Mode

Figure 5 Idle Mode
V. CONCLUSIONS AND FUTURE RECOMMENDATIONS
The presented solution is a low-cost, low-power, portable, self-learning, and user-friendly Braille system. The designed Braille system costs around $30. The presented solution is a comprehensive system for Braille writing and reading and is based on text-to-speech technology. The designed Braille system supports all levels of Braille encoding, so beginners as well as advanced users can use it for typing. Blind Aid is a self-learning system, so by implementing this Braille system in schools and homes, time, money, and human resources can be saved. It is believed that by implementing this system in developing countries, the rate of Braille literacy can be increased and visually impaired people can be employed and can fully participate in society.
In future, the designed Braille system can be extended to update the visually impaired individual about the date, time, temperature, etc. The designed system can also be used as a personal assistant and can schedule meetings and other important events for the visually impaired individual. In future, Blind Aid can be integrated with different sensors to monitor several health parameters of the visually impaired individual and update him/her about his/her current health status.
ACKNOWLEDGMENT
Our foremost thanks go to our research supervisor, Engineer Mir Muhammad Lodro, who showered us with ideas and guidance throughout, until the last second. This work would not have been possible without his help and inspiration.
We would like to thank our Head of Department, Professor Dr. Madad Ali Shah, for his vital encouragement and support.
Last but not least, we would like to express our appreciation to our beloved parents for the unconditional love and support that carried us through the toughest days of our lives.

REFERENCES:
[1] World Health Organization, "Fact Sheet: Visual impairment and blindness", October 2013, Web, July 01, 2014, http://www.who.int/mediacentre/factsheets/fs282/en/.
[2] Johnson, L., "The Braille Literacy Crisis for Children", Journal of Visual Impairment & Blindness, v90, n3, p276-78, ISSN: 0145-482X, May-June 1996.
[3] Spungin, S. J., "Braille and Beyond: Braille Literacy in a Larger Context", Journal of Visual Impairment & Blindness, v90, n3, p271-74, ISSN: 0145-482X, May-June 1996.
[4] Jimenez, Javier, Jesus Olea, Jesus Torres, Inmaculada Alonso, Dirk Harder, and Konstanze Fischer, "Biography of Louis Braille and Invention of the Braille Alphabet", Survey of Ophthalmology, v54, n1, p142-49, January-February 2009.
[5] Kazmi, Hasan S., Ashfaq A. Shah, Abdul A. Awan, Jaffar Khan, and Noman Siddiqui, "Status of Children in Blind Schools in the Northern Areas of Pakistan", J. Ayub Med. Coll. Abbottabad, v19, n4, p37-39, 2007.
[6] Mail, Dominic M., Gavin Yamey, Adam Visconti, April Harding, and Joanne Yoong, "Where Do Poor Women in Developing Countries Give Birth? A Multi-Country Analysis of Demographic and Health Survey Data", PLOS ONE, February 28, 2011.
[7] Supriya, S. and Senthilkumar, A., "Electronic Braille Pad", INCACEC 2009, International Conference on Control, Automation, Communication, and Energy Conservation, p1-5, June 2009.
[8] Joshi, A. V. K., Madhan, T. P., and Mohan, S. R., "Automated Electronic Pen Aiding Visually Impaired in Reading, Visualizing, and Understanding Textual Contents", IEEE ICEIT 2011, IEEE International Conference on Electro/Information Technology, p1-6, May 2011.
[9] Way, T. P. and Barner, K. E., "Automatic Visual to Tactile Translation", IEEE Transactions on Rehabilitation Engineering, v5, n1, p81-94, March 1997.
[10] Lumbreras, M. and Sanchez, J., "Interactive 3D Sound Hyperstories for Blind Children", CHI 1999 Proceedings, p318-25, May 1999.
[11] Basu, A., Dutta, P., Roy, S., and Banerjee, S., "A PC-based Braille Library System for the Sightless", IEEE Transactions on Rehabilitation Engineering, v6, n1, p60-65, March 1998.
[12] Rajarapollu, P., Stavan, K., Dhananjay, L., and Amarsinh, K., "FPGA Based Braille to Text & Speech for Blind Persons", International Journal of Scientific & Engineering Research, v4, n4, p348-53, ISSN: 2229-5518, April 2013.
[13] Roth, P., Lori S. P., Andre A., and Thierry P., "Audio-Haptic Internet Browser and Associated Tools for Blind and Visually Impaired Computer Users", Workshop on Friendly Exchanging Through the Net, March 2000.
[14] Roth, P., Lori S. P., Andre A., and Thierry P., "Auditory Browser for Blind and Visually Impaired Users", ACM SIGCHI 1999, Conference on Human Factors in Computing Systems, Pittsburgh, PA, USA, p218-19, May 1999.
CFD Simulation of Heat Transfer Enhancement by Plain and Curved Winglet
Type Vortex Generators with Punched Holes
Russi Kamboj*, Prof. Sunil Dhingra (Asst. Prof.), Prof. Gurjeet Singh (Asst. Prof.)
Department of Mechanical Engineering, University Institute of Engineering & Technology, Kurukshetra, INDIA
Department of Mechanical Engineering, Punjab Engineering College, Chandigarh, INDIA
Rusikamboj@gmail.com, M.No- +919992236623

ABSTRACT: CFD simulations were carried out to investigate the performance of plain and curved trapezoidal winglet type vortex generators (VGs). The effects of the shape of the VGs on heat transfer enhancement were evaluated using the dimensionless numbers j/j0, f/f0 and η = (j/j0)/(f/f0). The results showed that curved winglet type VGs give better heat transfer enhancement than plain winglet types in both the laminar and turbulent flow regions. The punched holes considerably reduce the friction factor, and the flow resistance is also lower for the curved winglet types than for the corresponding plain winglet VGs. The best results for heat transfer enhancement are obtained at high Reynolds numbers (Re > 10000) by using VGs. Solving the simulation consisted of modeling and meshing the basic geometry of a rectangular channel with VGs using the package ANSYS ICEM CFD 14.0; the boundary conditions were then set according to the experimental data available from the literature, and finally the results were examined in CFD-Post. This work presents a numerical study of the mean Nusselt number, friction factor and heat enhancement characteristics in a rectangular channel having a pair of winglet type VGs under a uniform heat flux of 416.67 W/m². The results indicate the advantages of using curved winglet VGs with punched holes for heat transfer enhancement.

Keywords: Heat transfer enhancement, Vortex generators, Winglet types, Punched holes, Rectangular channel, Numerical investigation, CFD, Flow simulation

INTRODUCTION
Computational Fluid Dynamics (CFD) is a useful tool for solving and analyzing problems that involve fluid flow and heat transfer to the fluid. As a kind of passive heat-transfer enhancing device, vortex generators (VGs) have been widely investigated to improve the convective heat transfer coefficient (usually on the air side) of plate-fin or finned-tube heat exchangers. The basic principle of VGs is to induce secondary flow, particularly longitudinal vortices (LVs), which disturb the thermal boundary layer developed along the wall and ensure proper mixing of the air throughout the channel by means of large-scale turbulence [1]. Among the various types of VGs, wings and winglets have attracted extensive attention, since these VGs can easily be punched or mounted on the channel walls or fins and can effectively generate longitudinal vortices for high enhancement of convective heat transfer. However, the heat transfer enhancement (HTE) by LVs is usually accompanied by an increase in flow resistance. Experimental research by Fiebig et al. [2] showed that the average heat transfer in laminar channel flow was enhanced by more than 50% by delta and rectangular wings and winglets, with a corresponding increase of the drag coefficient of up to 45%. A further experiment with double rows of delta winglets in transitional channel flow by Tiggelbeck et al. [3] showed that the ratio of HTE to drag increase was larger for higher Reynolds numbers. Fiebig [4] also pointed out that winglets are more effective than wings, but that the winglet form is of minor importance. Recently, Tian et al. [5] performed three-dimensional simulations on a wavy fin-and-tube heat exchanger with punched delta winglets in staggered and in-line arrangements, and their results showed that each delta winglet generates a downstream main vortex and a corner vortex. For Re = 3000, compared with the wavy fin, the Colburn j-factor and friction f-factor of the wavy fin with delta winglets in staggered and in-line arrays are increased by 13.1%, 7.0% and 15.4%, 10.5%, respectively. Chu et al. [6] numerically investigated a three-row fin-and-oval-tube heat exchanger with delta winglets for Re = 500-2500. They reported that, compared with the baseline case without LVGs, the average Nu with LVGs was increased by 13.6-32.9%
NOMENCLATURE
Ac    cross-sectional area of air channel (m²)
Ai    heat transfer area of each small element on copper plate (m²)
Ap    heat transfer area of copper plate in tested channel (m²)
b     width of vortex generator (mm)
CRWP  curved rectangular winglet pair
CTWP  curved trapezoidal winglet pair
RWP   rectangular winglet pair
TWPH  trapezoidal winglet pair with holes
CTWPH curved trapezoidal winglet pair with holes
HTE   heat transfer enhancement
Cp    specific heat (J/kg·°C)
D     hydraulic diameter of the air channel (m)
f     Darcy friction factor
f0    Darcy friction factor of smooth channel (i.e. without VG)
h     height of VG trailing edge (mm)
hc    convective heat transfer coefficient (W/m²·°C)
j     Colburn factor
j0    Colburn factor of smooth channel (i.e. without VG)
l     length of vortex generator (mm)
L     length of tested channel along air flow direction (m)
LVG   longitudinal vortex generator
Nu    Nusselt number
p     pressure (Pa)
P     electric power (W)
Pr    Prandtl number
Q     heat transfer rate (W)
Re    Reynolds number
S1    front edge pitch of a pair of vortex generators (m)
S2    distance of vortex generator pair downstream
T     temperature (K)
U     velocity (m/s)
VG    vortex generator
k     thermal conductivity (W/m·K)

Greek letters
α     inclination angle of VG (°)
β     attack angle (°)
ρ     density (kg/m³)
ΔP    pressure drop (Pa)
μ     dynamic viscosity (Pa·s)
η     thermal enhancement factor

Subscripts
a     air
c     cross section or convective
e     effective
E     expanded
i     number of thermocouple or element
in    inlet
m     average
out   outlet
w     wall

and the corresponding pressure drop was increased by 29.2-40.6%, respectively. The above experimental and numerical results show that the pressure drop penalty is comparable with the heat transfer enhancement caused by the LVGs. Under some conditions, the increase in pressure drop can be even 2-4 times higher than the heat transfer enhancement by LVGs [7-9], which weakens the advantages of LVGs. Chen et al. [10] pointed out that the form drag of the LVGs is predominant for the pressure drop, and that the LVs themselves contribute additional pressure drop to the flow. As to the latter part, i.e. the drag generated from the friction between the LVs and the wall surface, Wang et al. [11] reported that the transverse expansion of the LV accounts for the major part. After the flow separation at the edge of the VG, there exists a low-speed recirculation zone at the back of the LVG, which dissipates kinetic energy. This is the main source of the form drag, and it increases with increasing attack angle against the flow. Therefore, efforts to diminish the low-speed recirculation zone are important for decreasing the form drag and hence the overall pressure drop caused by the LVG. Min et al. [12] developed a modified rectangular LVG obtained by cutting off the four corners of a rectangular wing. Their experimental results for this LVG mounted in a rectangular channel suggested that the modified rectangular wing pairs (MRWPs) have better flow and heat transfer characteristics than those of the rectangular wing pair (RWP). Xie and Ye [14] presented an injection flow method to reduce the form drag of an airship, i.e. an injection channel is made from the leading edge to the trailing edge of the airship, and part of the outer flow is conducted through the injection channel. The simulation results showed that the drag coefficient of the airship is reduced sharply: if the radius of the injection channel is 1/15 of the maximum thickness of the airship, the drag coefficient is reduced to 32.7% of the original value. Tang et al. [15] numerically investigated the fluid flow and heat transfer by a trapezoidal tab with and without clearance using the Realizable k-ε model. Their results showed that the overall performance of the trapezoidal tab with clearance is higher than that without clearance and that the form drag is reduced. Habchi et al. [16] numerically investigated the performance of a trapezoidal wing with an excavation at the bottom. The results showed that the excavation really reduces the flow resistance, but on the other hand the cavity also reduces the contact surface between the heated wall and the vortex generator and thus reduces the conduction heat flux through the vortex generator; as a result, convective heat transfer between the vortex generator and the surrounding fluid is decreased. Therefore, the size of the cavity should be optimized to maximize the effect of heat transfer enhancement and flow resistance reduction. A numerical study by Biswas and Chattopadhyay [17] on a delta wing with a punched hole in the base wall showed that the heat transfer enhancement and friction factor (f·Re) at the exit are both relatively lower than those of the case without any punched hole. Wu and Tao [18] also carried out a numerical study on the thermal-hydraulic performance of rectangular winglets with punched holes at the channel wall and found that the case with punched holes has a slightly higher average Nu number (about 1.1%) and a slightly lower average friction factor (about 1.2%) compared with the case without punched holes. Both of the above papers dealt with punched holes located just in front of the folding line (baseline) of the wing or winglet VG, whereas the low-speed recirculation zone is just behind the VG, where the heat transfer and flow drag are only slightly influenced by the front holes. To address heat transfer enhancement in the recirculation zone as well as flow drag reduction, the present paper attempts to punch holes within the plain winglets as well as the recently developed curved winglets, and experiments were performed to examine the effect of these kinds of VGs on air-side heat transfer enhancement and flow resistance in channel flow. Then, the average convective heat transfer coefficient was measured and the dimensionless numbers j/j0, f/f0 and the thermohydraulic performance factor (j/j0)/(f/f0) were used for performance evaluation. The effect of the size and position of the holes on the performance of these VGs was then evaluated. In our previous work the simulation of heat transfer enhancement was carried out using RWP & CRWP without punched holes.

Fig: 1.1 Trapezoidal Winglet with hole (TWH)

Fig: 1.2 Curved Trapezoidal Winglet with hole (CTWH)

2. NUMERICAL SIMULATION
2.1 Physical Model. The numerical simulations were carried out using FLUENT V6 software, which uses the finite-volume method to solve the governing equations. The geometry was created in the CATIA design tools for air flowing through an electrically heated rectangular channel with a copper plate of 1000 mm × 300 mm at the bottom; the dimensions of the channel are 1000 mm × 240 mm × 40 mm. The mesh was created in ICEM CFD 14.0 with tetrahedral elements (Fig. 2). In this study the Reynolds number varies from 750 to 21000.
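As a quick cross-check of the quoted Reynolds number range, the hydraulic diameter of the 240 mm × 40 mm cross-section and the resulting Re can be computed as below. The air properties used are assumed values at roughly 25 °C, not figures taken from the paper, so the printed numbers are indicative only.

# Hydraulic diameter D_h = 4A/P of the rectangular test channel and
# Reynolds number Re = rho*U*D/mu for the inlet velocities used later.
w, h = 0.240, 0.040                # channel width and height (m)
D = 4 * (w * h) / (2 * (w + h))    # hydraulic diameter, ~0.0686 m
rho, mu = 1.225, 1.85e-5           # assumed air density (kg/m^3), viscosity (Pa*s)
for U in (0.165, 0.662, 3.31):     # velocities quoted in Figs. 4.1-4.12
    print("U = %.3f m/s  ->  Re = %.0f" % (U, rho * U * D / mu))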
2.2 Numerical Method. For turbulent, steady and incompressible air flow with constant properties, we solve the three-dimensional equations of continuity, momentum and energy in the fluid region.


Fig. 2.1 Meshing in ICEM CFD 14.0


Fig. 2.2 Meshing of CTWPH

These equations are given below:

Continuity equation:
∂ρ/∂t + ∇·(ρV) = 0   (1)

Momentum equation:
∂(ρ u_i u_j)/∂x_i = −∂p/∂x_j + ∂/∂x_i (μ ∂u_j/∂x_i)   (2)

Energy equation:
∂(ρ u_i T)/∂x_i = ∂/∂x_i ((k/C_p) ∂T/∂x_i)   (3)

Table 1.1 Properties of air at 25 °C
Property | Value
Specific heat capacity, Cp | 1006 J/kg·K
Density, ρ | 1.225 kg/m³
Thermal conductivity, k | 0.0242 W/m·K

Velocity and pressure linkage was solved by the SIMPLE algorithm. To validate the accuracy of the numerical solutions, a grid independence test was performed for the physical model. The tetrahedral grid is highly concentrated near the wall regions and also near the VGs.
Table 1.2 Nodes and elements in the geometry
Geometry type | Nodes | Elements
Smooth | 196412 | 884899
TWPH | 199461 | 994569
CTWPH | 203820 | 997590
Table 1.2 shows that CTWPH has the maximum number of nodes and elements in comparison with the smooth channel and TWPH. In addition, a convergence criterion of 10^-7 was used for energy and 10^-3 for the mass conservation of the calculated parameters. The air inlet temperature was specified as 293 K, and three assumptions were made in the model: (1) the heat flux is uniform along the length of the rectangular channel; (2) the walls of the channel are perfectly insulated; (3) the flow is steady and incompressible. In FLUENT, the inlet was set as velocity-inlet and the outlet as pressure-outlet.
2.3 Data Reduction. Three important parameters were considered: the friction factor, the Nusselt number and the thermal performance factor, which determine the friction loss, the heat transfer rate and the effectiveness of heat transfer enhancement in the rectangular channel, respectively.
The friction factor (f) is obtained from the pressure drop ΔP across the length of the rectangular channel (L) using the following equation:

f = 2·ΔP·D / (L·U²·ρ)   (4)
The Nusselt number is defined as the ratio of convective to conductive heat transfer:

Nu = h·L / k   (5)
The Nusselt number and the Reynolds number were based on the average of the channel wall temperature and the outlet temperature; the pressure drop across the test section and the air flow velocity were measured for heat transfer from the heated wall with the different kinds of VGs. The average Nusselt numbers and friction factors were obtained, and all fluid properties were evaluated at the overall bulk mean temperature.
The thermal performance factor is given by:

η = (j/j0) / (f/f0)   (6)

where j0 and f0 are the Colburn factor and friction factor for the smooth channel, and j and f are those for the channel with VGs.
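Relations (4)-(6) reduce directly to a few lines of code; the sketch below evaluates them for placeholder inputs (the numerical values in the example are illustrative, not measurements from this study).

# Data reduction following Eqs. (4)-(6) above.
def friction_factor(dp, D, L, U, rho):
    # Darcy friction factor, Eq. (4): f = 2*dp*D / (L*U^2*rho)
    return 2.0 * dp * D / (L * U**2 * rho)

def nusselt(h_c, L, k):
    # Nusselt number, Eq. (5): ratio of convective to conductive transfer
    return h_c * L / k

def thermal_performance(j, j0, f, f0):
    # Thermal enhancement factor, Eq. (6): eta = (j/j0)/(f/f0)
    return (j / j0) / (f / f0)

# Placeholder inputs: dp (Pa), D and L (m), U (m/s), rho (kg/m^3).
f = friction_factor(dp=12.0, D=0.0686, L=1.0, U=3.31, rho=1.225)
Nu = nusselt(h_c=25.0, L=1.0, k=0.0242)
eta = thermal_performance(j=1.3, j0=1.0, f=f, f0=0.1)
print(f, Nu, eta)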
3. RESULTS AND DISCUSSION
3.1 Validation of setup. The CFD results for the smooth channel without any VG have been validated against the experimental data, as shown in Figures 3.1 and 3.2. The results agree within 8% deviation for heat transfer (Nu) and 3% for the friction factor (f). At low Reynolds numbers the deviation between the experimental and CFD results is small, but at higher Reynolds numbers the deviation becomes slightly larger.

Fig: 3.1 Nusselt number vs. Reynolds number (experimental vs. numerical results)
3.2 Heat Transfer. The effect of the TWPH and CTWPH type VGs on the heat transfer rate is presented in Figure 3.3, which shows the dimensionless number (j/j0) for the channel with TWPH and CTWPH versus the Reynolds number. The enhancement holds over all the Reynolds numbers used, owing to the induction of strong reverse flows and the disruption of boundary layers. It is clearly seen that as the Reynolds number increases, the heat transfer coefficient, and hence the Colburn factor, also increases. The channels with TWPH and CTWPH increase the heat transfer rate by on average 53% and 50%, respectively, over the smooth channel.


Fig: 3.2 Friction factor vs. Reynolds number

Fig: 3.3 (j/j0) vs. Reynolds number
3.3 Friction Factor. The variation of the pressure drop, expressed in terms of the friction factor (Eq. 4), is presented in Figure 3.4, which shows the friction factor versus the Reynolds number for TWPH and CTWPH in the rectangular channel. It is seen that the friction factor decreases with an increase in the Reynolds number. It was found that the pressure drop penalty for the TWPH and CTWPH was on average 51% and 45%, respectively. The curved winglet pair has lower friction than the corresponding plain winglet pair, and the punched holes help in reducing the flow resistance.
3.4 Thermal Performance Factor. From Figure 3.5, it has been observed that the thermal performance factor is higher for CTWPH than for TWPH. It was also observed that the thermal enhancement factor increases as the Reynolds number increases. The maximum enhancement factor was observed at the upper Reynolds number limit considered in the present study, i.e. Re = 21000.





Fig: 3.4 (f/f0) vs. Reynolds number

Fig: 3.5 (j/j0)/(f/f0) vs. Reynolds number
Figures 4.1 to 4.3 show the temperature distribution for TWPH in the different flow zones, i.e. the laminar, transitional and turbulent zones, and Figures 4.4 to 4.6 show the corresponding pressure distributions. Figures 4.7 to 4.9 show the temperature distribution for CTWPH in the different flow zones, and Figures 4.10 to 4.12 show the pressure distributions for CTWPH.

Fig: 4.1 Temperature distribution at velocity 0.165 m/s

Fig: 4.2 Temperature distribution at velocity 0.662 m/s

Fig: 4.3 Temperature distribution at velocity 3.31 m/s

Fig: 4.4 Pressure distribution at velocity 0.165 m/s

Fig: 4.5 Pressure distribution at velocity 0.662 m/s

Fig: 4.6 Pressure distribution at velocity 3.31 m/s

Fig: 4.7 Temperature distribution at velocity 0.165 m/s

Fig: 4.8 Temperature distribution at velocity 0.662 m/s

Fig: 4.9 Temperature distribution at velocity 3.31 m/s

Fig: 4.10 Pressure distribution at velocity 0.165 m/s

Fig: 4.11 Pressure distribution at velocity 0.332 m/s

Fig: 4.12 Pressure distribution at velocity 3.31 m/s
4. CONCLUSION
The effects on the heat transfer (Nu), or Colburn factor (j), the friction factor (f) and the thermal performance factor (η) of fitting winglet type VGs (TWPH & CTWPH) in a rectangular channel have been investigated numerically using the ANSYS-14 software. The conclusions are as follows:

1. It is clearly seen that as the Reynolds number increases, the heat transfer coefficient also increases. The TWPH in the rectangular channel increases heat transfer by on average 53%, and the CTWPH by on average 50%, over the smooth channel.

2. The pressure drop for the TWPH configuration is 51% more than for the smooth channel, and for the CTWPH configuration it is 45% more than for the smooth channel.

3. The punched holes in the winglet pairs improve the overall heat transfer performance by reducing the flow resistance.

4. It has been observed that the thermal enhancement factor is lower at low values of the Reynolds number and increases at high values of the Reynolds number.

5. Overall, it is concluded that the use of winglet type VGs enhances the heat transfer, and that curved winglet pairs with punched holes are more effective for heat enhancement than plain winglet type VGs with or without holes.

REFERENCES:
[1] S. Ferrouillat, P. Tochon, C. Garnier, H. Peerhossaini, Intensification of heat transfer and mixing in multifunctional heat exchangers by artificially generated streamwise vorticity, Appl. Therm. Eng. 26 (16) (2006) 1820-1829.
[2] M. Fiebig, P. Kallweit, N.K. Mitra, St. Tiggelbeck, Heat transfer enhancement and drag by longitudinal vortex generators in channel flow, Exp. Therm. Fluid Sci. 4 (1) (1991) 103-114.
[3] St. Tiggelbeck, N.K. Mitra, M. Fiebig, Experimental investigations of heat transfer enhancement and flow losses in a channel with double rows of longitudinal vortex generators, Int. J. Heat Mass Transf. 36 (9) (1993) 2327-2337.
[4] M. Fiebig, Review: embedded vortices in internal flow: heat transfer and pressure loss enhancement, Int. J. Heat Fluid Flow 16 (5) (1995) 376-388.
[5] L.T. Tian, Y.L. He, Y.B. Tao, W.Q. Tao, A comparative study on the air-side performance of wavy fin-and-tube heat exchanger with punched delta winglets in staggered and in-line arrangements, Int. J. Therm. Sci. 48 (9) (2009) 1765-1776.
[6] P. Chu, Y.L. He, Y.G. Lei, L.T. Tian, R. Li, Three-dimensional numerical study on fin-and-oval-tube heat exchanger with longitudinal vortex generators, Appl. Therm. Eng. 29 (5-6) (2009) 859-876.
[7] A. Joardar, A.M. Jacobi, Heat transfer enhancement by winglet-type vortex generator arrays in compact plain-fin-and-tube heat exchangers, Int. J. Refrig. 31 (2008) 87-97.
[8] C. Liu, J.T. Teng, J.C. Chu, Y.L. Chiu, S.Y. Huang, S.P. Jin, T.T. Dang, R. Greif, H.H. Pan, Experimental investigations on liquid flow and heat transfer in rectangular microchannel with longitudinal vortex generators, Int. J. Heat Mass Transf. 54 (2011) 3069-3080.
[9] P. Promvonge, C. Khanoknaiyakarn, S. Kwankaomeng, C. Thianpong, Thermal behavior in solar air heater channel fitted with combined rib and delta winglet, Int. Commun. Heat Mass Transf. 38 (2011) 749-756.
[10] Y. Chen, M. Fiebig, N.K. Mitra, Heat transfer enhancement of a finned oval tube with punched longitudinal vortex generators in line, Int. J. Heat Mass Transf. 41 (1998) 4151-4166.
[11] J.S. Wang, J.J. Tang, J.F. Zhang, Mechanism of heat transfer enhancement of semi-ellipse vortex generator, Chin. J. Mech. Eng. 42 (5) (2006) 160-164 (in Chinese).
[12] C.H. Min, C.Y. Qi, X.F. Kong, J.F. Dong, Experimental study of rectangular channel with modified rectangular longitudinal vortex generators, Int. J. Heat Mass Transf. 53 (15-16) (2010) 3023-3029.
[13] G.B. Zhou, Q.L. Ye, Experimental investigations of thermal and flow characteristics of curved trapezoidal-winglet type vortex generators, Appl. Therm. Eng. 37 (2012) 241-248.
[14] F. Xie, Z.Y. Ye, The simulation of the airship flow field with injection channel for the drag reduction, Eng. Mech. 27 (2) (2010) 222-227 (in Chinese).
[15] X.Y. Tang, D.S. Zhu, H. Chen, Vortical flow and heat transfer characteristics in rectangular channel with trapezoidal tab, J. Chem. Ind. Eng. 63 (1) (2012) 71-83 (in Chinese).
[16] C. Habchi, S. Russeil, D. Bougeard, J.L. Harion, T. Lemenand, D.D. Valle, H. Peerhossaini, Enhancing heat transfer in vortex generator-type multifunctional heat exchangers, Appl. Therm. Eng. 38 (2012) 14-25.
[17] G. Biswas, H. Chattopadhyay, Heat transfer in a channel flow with built-in wing-type vortex generator, Int. J. Heat Mass Transf. 35 (1992) 803-814.
[18] J.M. Wu, W.Q. Tao, Numerical study on laminar convection heat transfer in a rectangular channel with longitudinal vortex generator. Part A: verification of field synergy principle, Int. J. Heat Mass Transf. 51 (2008) 1179-1191.
[19] K.C. Chen, Experimental Technique in Fluid Mechanics, China Machine Press, Beijing, 1983, pp. 126-129 (in Chinese).
[20] Y.Z. Cao, Experimental Heat Transfer, first ed., National Defense Industry Press, Beijing, 1998, pp. 120-125 (in Chinese).
[21] I.Ya. Umarov, A.A. Fattakhov, A.G. Umarov, V.S. Trukhov, I.A. Tursunbaev, Yu.B. Sokolova, Yu.Kh. Gaziev, Heat loss in a cavity-type solar collector, Appl. Sol. Energy 19 (3) (1983) 35-38.
[22] C.X. Yin, Calculation method of radiation loss through openings, Petro-Chem. Equip. Technol. 9 (2) (1988) 27-28 (in Chinese).
[23] P. Wibulswas, Laminar flow heat transfer in non-circular ducts (Ph.D. thesis), London University, London, 1966, in: S. Kakac, R.K. Shah, W. Aung (Eds.), Handbook for Single-phase Convective Heat Transfer, Wiley Interscience, New York, 1987, p. 3.51.
[24] J. Ma, Y.P. Huang, J. Huang, Y.L. Wang, Q.W. Wang, Experimental investigations on single-phase heat transfer enhancement with longitudinal vortices in narrow rectangular channel, Nucl. Eng. Des. 240 (1) (2010) 92-102.
[25] J.P. Holman, Heat Transfer, tenth ed., McGraw-Hill, New York, 2010, pp. 279-293.
[26] R.K. Shah, Fully developed laminar flow forced convection in channels, in: S. Kakac, R.K. Shah, A.E. Bergles (Eds.), Low Reynolds Number Flow Heat Exchanger, Hemisphere, New York, 1983, pp. 75-108.
[27] Z.Y. Guo, W.Q. Tao, R.K. Shah, The field synergy (coordination) principle and its applications in enhancing single phase convective heat transfer, Int. J. Heat Mass Transf. 48 (9) (2005) 1797-1807.
[28] G. Biswas, K. Torii, D. Fujii, K. Nishino, Numerical and experimental determination of flow structure and heat transfer effects of longitudinal vortices in a channel flow, Int. J. Heat Mass Transf. 39 (1996) 3441-3451.
[29] K. Torii, K.M. Kwak, K. Nishino, Heat transfer enhancement accompanying pressure-loss reduction with winglet-type vortex generators for fin-tube heat exchangers, Int. J. Heat Mass Transf. 45 (2002) 3795-3801.

Carbon Nanotubes: A Review on Synthesis, Properties and
Applications
Kalpna Varshney*,Assistant Professor,
Mob: 9911191445, email: kalpna.fet@mriu.edu.in


Abstract: Carbon Nanotubes (CNTs) are allotropes of carbon with a nanostructure that can have a length-to-diameter ratio greater than 1,000,000. These cylindrical carbon molecules have novel properties that make them potentially useful in many applications in nanotechnology. Formally derived from the graphene sheet, they exhibit unusual mechanical properties such as high toughness and high elastic moduli. As regards their electronic structure, they exhibit semiconducting as well as metallic behavior and thus cover the full range of properties important for technology. Nanotubes are categorized as single-walled nanotubes and multi-walled nanotubes. Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition, the silane solution method and the flame synthesis method. The properties and characteristics of CNTs are still being researched heavily, and scientists have barely begun to tap the potential of these structures. Without doubt, carbon nanotubes represent a material that offers great potential, bringing with it the possibility of breakthroughs in a new generation of devices, electric equipment and bio fields. Overall, recent studies regarding CNTs have shown a very promising glimpse of what lies ahead in the future of CNTs in nanotechnology, optics, electronics, and other fields of materials science.
Keywords: Carbon Nanotubes, Nanohorns, Nanobuds, electrical properties of CNT, mechanical properties of CNT, applications of CNT
INTRODUCTION
Nanotube: In 1985, a confluence of events led to an unexpected and unplanned experiment with a new kind of microscope, resulting in the discovery of a new molecule made purely of carbon, the very element chemists felt there was nothing more to learn about. Buckyballs, sixty carbon atoms arranged in a soccer-ball shape, had been discovered, and the chemical world, not to mention the physical and material worlds, would never be the same.
A carbon nanotube is a tube-shaped material, made of carbon, having a diameter measuring on the nanometer scale. The graphite layer appears somewhat like rolled-up chicken wire, with a continuous unbroken hexagonal mesh and carbon atoms at the apexes of the hexagons; this sheet is known as graphene. Carbon nanotubes have many structures, differing in length, thickness, type of helicity and number of layers. Although they are formed from essentially the same graphite sheet, their electrical characteristics differ depending on these variations, acting either as metals or as semiconductors. Elemental carbon in the sp² hybridization can form a variety of amazing structures [1]. Apart from the well-known graphite, carbon can build closed and open cages with a honeycomb atomic arrangement. The first such structure to be discovered was the C60 molecule by Kroto et al. in 1985 [2]. Although various carbon cages were studied, it was only in 1991 that Iijima observed tubular carbon structures for the first time [3]. The nanotubes consisted of up to several tens of graphitic shells (so-called multi-walled carbon nanotubes, MWNT) with adjacent shell separation of 0.34 nm, diameters of ~1 nm and a high length/diameter ratio. As a group, carbon nanotubes typically have diameters ranging from <1 nm up to 50 nm. Their lengths are typically several microns, but recent advancements have made the nanotubes much longer, measured in centimeters. A graphene sheet can be rolled in more than one way, producing different types of carbon nanotubes [5], and thus carbon nanotubes can be categorized by their structures:
1.1 SINGLE-WALL NANOTUBES (SWNT)

Most single-walled nanotubes (SWNT) have a diameter of close to 1 nanometer, with a tube length that can be many millions of times longer. The structure of a SWNT can be conceptualized by wrapping a one-atom-thick layer of graphite, called graphene, into a seamless cylinder. The way the graphene sheet is wrapped is represented by a pair of indices (n, m) called the chiral vector. The integers n and m denote the number of unit vectors along two directions in the honeycomb crystal lattice of graphene. If m = 0, the nanotubes are called "zigzag", named for the pattern of hexagons as one moves around the circumference of the tube. If n = m, the nanotubes are called "armchair", which describes one of the two conformers of cyclohexane, a ring of six carbon atoms. Otherwise, they are called "chiral", with the m value lying between the zigzag and armchair structures. The word chiral means handedness, and it indicates that the tubes may twist in either direction [4], [5].

Figure 1: Single walled Carbon Nanotube
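The (n, m) classification above, together with the standard geometric relation d = a·sqrt(n² + nm + m²)/π for the tube diameter (with a ≈ 0.246 nm the graphene lattice constant), fits in a few lines of Python; this is a generic illustration, not code from the cited works.

import math

A_CC = 0.142  # carbon-carbon bond length in graphene (nm), assumed

def classify(n, m):
    # Classify a nanotube by its chiral vector (n, m).
    if m == 0:
        return "zigzag"
    if n == m:
        return "armchair"
    return "chiral"

def diameter_nm(n, m):
    # d = a * sqrt(n^2 + n*m + m^2) / pi, with a = sqrt(3) * a_CC
    a = math.sqrt(3) * A_CC
    return a * math.sqrt(n * n + n * m + m * m) / math.pi

print(classify(10, 10), round(diameter_nm(10, 10), 2))  # armchair 1.36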
1.2. MWNTS- MULTIPLE WALLED CARBON NANOTUBES

There are two models which can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g. a single-walled nanotube (SWNT) within a larger single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.3 Å (330 pm). The special place of double-walled carbon nanotubes (DWNT) must be emphasized here because their morphology and properties are similar to SWNT but their resistance to chemicals is significantly improved. This is especially important when functionalization is required (that is, grafting of chemical functions onto the surface of the nanotubes) to add new properties to the CNT. In the case of SWNT, covalent functionalization breaks some C=C double bonds, leaving "holes" in the structure of the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNT, only the outer wall is modified. DWNT synthesis on the gram scale was first proposed in 2003 by the CCVD technique, from the selective reduction of oxide solutions in methane and hydrogen [5].

Figure-2: Double-wall Nanotubes (DWNT) Multiwalled Nanotubes
Table 1 - Comparison between SWNT and MWNT [6]
S.No. | SWNT | MWNT
1 | Single layer of graphene | Multiple layers of graphene
2 | Catalyst is required for synthesis | Can be produced without catalyst
3 | Bulk synthesis is difficult as it requires control of atmospheric conditions | Bulk synthesis is easy
4 | Purity is poor | Purity is high
5 | A chance of defects is higher during functionalization | A chance of defects is lower, but once they occur they are difficult to repair
6 | Less accumulation in the body | More accumulation in the body
7 | It can be easily twisted and is more pliable | It cannot be easily twisted

1.3. NANOTORUS:
A nanotorus is theoretically described as a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than previously expected for certain specific radii. Properties such as the magnetic moment and thermal stability vary widely depending on the radius of the torus and the radius of the tube. Nanotorus particles are promising in nanophotonics applications [7].

Figure 3: A complete Nanotorus Structure
1.4. NANO-BUDS
Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, nanobuds have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties [8].

Figure-4: Nano buds
1.5 NANOHORNS
They were first reported by Harris et al. and Iijima et al. [3]. Single-walled carbon nanohorns (SWCNHs) are horn-shaped single-walled tubules with a conical tip [9]. The primary advantage of SWCNHs is that no catalyst is required for synthesis, so high-purity materials can be produced. Their high surface area and excellent electronic properties have led to promising results for their use as electrode materials for energy storage [10]. Currently, SWCNHs are being widely studied for various applications, such as gas storage, adsorption, catalyst support, drug delivery systems, magnetic resonance analysis, electrochemistry, biosensing applications, photovoltaics and photoelectrochemical cells, photodynamic therapy, fuel cells, and so on [11].

Figure 5: Carbon Nanohorns
1.6 FUNCTIONALIZATION OF CARBON NANOTUBES
The carbon atoms in nanotubes are good at forming covalent bonds with many other types of atoms, for several reasons:
Carbon atoms have a natural capacity to form covalent bonds with many other elements because of a property called electronegativity, a measure of how strongly an atom holds onto the electrons orbiting it. The electronegativity of carbon (2.5) is about in the middle of the range of electronegativities of the elements, which runs from potassium (0.8) to fluorine (4.0). Because carbon's electronegativity lies in the middle of the range, it can form stable covalent bonds with a large number of elements.
- All the carbon atoms in nanotubes are on the surface of the nanotube and therefore accessible to other atoms.
- The carbon atoms in nanotubes are bonded to only three other atoms, so they have the capability to bond to a fourth atom.
These factors make it relatively easy to covalently bond a variety of atoms or molecules to nanotubes, which changes the chemical properties of the nanotube (this method is called functionalization). Taking this further, if the molecules attached to the carbon nanotubes also attach to carbon fibers, the functionalized carbon nanotubes can bond to the fibers in a composite, producing a stronger material [12], [13].
2.0 METHODS OF PRODUCTION OF CNTS:
2.1. PLASMA BASED SYNTHESIS METHODS:
a. Arc Discharge Method

The arc-evaporation method, which produces the best quality nanotubes, involves passing a current of about 50 amperes between two graphite electrodes in an atmosphere of helium. This causes the graphite to vaporize, some of it condensing on the walls of the reaction vessel and some of it on the cathode. It is the deposit on the cathode which contains the carbon nanotubes. Single-walled nanotubes are produced when Co and Ni or some other metal is added to the anode. It has been known since the 1950s, if not earlier, that carbon nanotubes can also be made by passing a carbon-containing gas, such as a hydrocarbon, over a catalyst. The catalyst consists of nano-sized particles of metal, usually Fe, Co or Ni. These particles catalyze the breakdown of the gaseous molecules into carbon, and a tube then begins to grow with a metal particle at the tip [14], [15]. In 1991, Iijima reported the preparation of a new type of finite carbon structure consisting of needle-like tubes [3]. The tubes were produced using an arc discharge evaporation method similar to that used for fullerene synthesis. The carbon needles, ranging from 4 to 30 nm in diameter and up to 1 mm in length, were grown on the negative end of the carbon electrode used for the direct current (dc) arc-discharge evaporation of carbon in an argon-filled vessel (100 Torr). The perfection of carbon nanotubes produced catalytically in this way has generally been poorer than that of those made by arc-evaporation, but great improvements in the technique have been made in recent years. The big advantage of catalytic synthesis over arc-evaporation is that it can be scaled up for volume production. The third important method for making carbon nanotubes involves using a powerful laser to vaporize a metal-graphite target; this can be used to produce single-walled tubes with high yield [16]. Ebbesen and Ajayan in 1992 reported large-scale synthesis of MWNT by a variant of the standard arc discharge technique [17]. It was shown in 1996 that single-walled nanotubes can also be produced catalytically.

Figure 6: (a) Schematic representation of arc discharge apparatus. (b) Experimental arc discharge set-up in liquid N2.

b. Laser Ablation Method:
The first large-scale (gram quantities) production of SWNTs was achieved in 1996 by Smalley's group at Rice University [17], [18]. A pulsed or continuous laser is used to vaporize a composite target of 1.2 at.% cobalt/nickel and 98.8 at.% graphite that is placed in a 1200 °C quartz tube furnace with an inert atmosphere of ~500 Torr of Ar or He. Nanometer-size metal catalyst particles are formed in the plume of vaporized graphite. The metal particles catalyze the growth of SWNTs in the plasma plume, but many by-products are formed at the same time. As the vaporized species cool, small carbon molecules and atoms quickly condense to form larger clusters, possibly including fullerenes. The catalysts also begin to condense, but more slowly at first, and attach to carbon clusters, preventing their closing into cage structures. Catalysts may even open cage structures when they attach to them. From these initial clusters, tubular molecules grow into single-wall carbon nanotubes until the catalyst particles become too large, or until conditions have cooled sufficiently that carbon can no longer diffuse through or over the surface of the catalyst particles. It is also possible that the particles become so coated with a carbon layer that they cannot absorb more, and the nanotube stops growing [18].

The SWNTs formed in this way are bundled together by van der Waals forces. The nanotubes and by-products are collected via condensation on a cold finger downstream from the target. In principle, arc discharge and laser ablation are similar methods, as both use a metal-impregnated graphite target (anode) to produce SWNTs, and both produce MWNT and fullerenes when pure graphite is used instead. However, the length of MWNT produced through laser ablation is much shorter than that produced by the arc discharge method; therefore, this method does not seem adequate for the synthesis of MWNT. The diameter distribution of SWNTs made by this method is roughly between 1.0 and 1.6 nm. Because of the good quality of nanotubes produced by this method, scientists are trying to scale up laser ablation. However, the results are not yet as good as for the arc-discharge method, but they are still promising. Two new developments in this field are ultrafast pulses from a free-electron laser and the continuous-wave laser-powder method.


Figure 7: Schematic synthesis apparatus. (a) Classical laser ablation technique. (b) Ultrafast laser evaporation (FEL-free
electron laser).
2.2 Thermal Synthesis Process:
Arc discharge and laser ablation methods are fundamentally plasma-based syntheses. In thermal synthesis, however, only thermal energy is relied upon, and the hot zone of the reaction never goes beyond 1200 °C, including in the case of plasma-enhanced CVD. In almost all cases, carbon feedstock produces CNTs in the presence of active catalytic species such as Fe, Ni, and Co. Depending on the carbon feedstock, Mo and Ru are sometimes added as promoters to render the feedstock more active for the formation of CNTs. In fact, thermal synthesis is a more generic term covering various chemical vapor deposition methods; it includes chemical vapor deposition processes, carbon monoxide synthesis processes and flame synthesis.

2.2.1. Chemical Vapor Deposition (CVD)

While the arc discharge method is capable of producing large quantities of unpurified nanotubes, significant effort is being directed towards production processes that offer more controllable routes to nanotube synthesis. The class of processes that seems to offer the best chance of a controllable process for the selective production of nanotubes with predefined properties is chemical vapour deposition (CVD) [19]. In principle, chemical vapour deposition is the catalytic decomposition of hydrocarbon or carbon monoxide feedstock with the aid of supported transition metal catalysts.

It is carried out in a two-step process:
First, the catalyst is deposited on a substrate and nucleation of the catalyst is carried out via chemical etching or thermal annealing. Ammonia is used as the etchant; the metal catalysts used are Ni, Fe or Co.

Second, a carbon source is introduced in the gas phase into the reaction chamber, and the carbon molecules are broken down to the atomic level using an energy source such as a plasma or a heated coil. This carbon diffuses towards the substrate, which is coated with the catalyst, and nanotubes grow over this metal catalyst. The carbon sources used are methane, carbon monoxide or acetylene. The temperature used for the synthesis of nanotubes is in the 650-900 °C range. The typical yield is 30% [20, 21, 22].

Figure 8: Schematic demonstration of CVD method. (a) Horizontal furnace. (b) Vertical furnace. (c) Fluidized bed reactor.

Using the CVD method, several structural forms of carbon are formed, such as amorphous carbon layers on the surface of the catalyst, filaments of amorphous carbon, graphite layers covering metal particles, and SWNTs and MWNTs made from well-crystallized graphite layers. The general nanotube growth mechanism in the CVD process involves the dissociation of hydrocarbon molecules catalyzed by the transition metal and the saturation of carbon atoms in the metal nanoparticle. The precipitation of carbon from the metal particle leads to the formation of tubular carbon solids with an sp² structure. The characteristics of the carbon nanotubes produced by the CVD method depend on the working conditions, such as the temperature and operating pressure, the kind, volume and concentration of the hydrocarbon, the nature, size and pretreatment of the metallic catalyst, the nature of the support, and the reaction time [23].
2.2.2. Plasma Enhanced CVD (PECVD):

Plasma-enhanced chemical vapor deposition (PECVD) systems have been used to produce both SWNTs and MWNTs. PECVD is a general term encompassing several differing synthesis methods; in general, PECVD can be direct or remote. Direct PECVD systems can be used for the production of MWNT field emitter towers and some SWNTs. Remote PECVD can also be used to produce both MWNTs and SWNTs (Figure 6). For SWNT synthesis in a direct PECVD system, researchers heated the substrate to 550 to 850 °C, utilized a CH4/H2 gas mixture at 500 mT, and applied 900 W of plasma power as well as an externally applied magnetic field.

Figure 9: Plasma Enhanced CVD

The plasma-enhanced CVD method generates a glow discharge in a chamber or reaction furnace by a high-frequency voltage applied to both electrodes. A substrate is placed on the grounded electrode, and in order to form a uniform film, the reaction gas is supplied from the opposite plate. Catalytic metals such as Fe, Ni and Co are deposited on a Si, SiO2, or glass substrate using thermal CVD or sputtering. As such, PECVD and hot-wire CVD (HWCVD) are essentially a crossover between plasma-based growth and CVD synthesis. In contrast to arc discharge, laser ablation, and the solar furnace, the carbon for PECVD synthesis comes from feedstock gases such as CH4 and CO, so there is no need for a solid graphite source. The argon-assisted plasma is used to break down the feedstock gases into C2, CH, and other reactive carbon species (CxHy) to facilitate growth at low temperature and pressure.
2.2.3. Alcohol Catalytic CVD (ACCVD):
Alcohol catalytic CVD (ACCVD) enables low-cost, large-scale production of SWNTs. Evaporated methanol and ethanol are utilized over iron and cobalt catalytic metal particles supported on zeolite. CNTs are obtained at a relatively low minimum temperature of about 550 °C. It appears that hydroxyl radicals, which come from the alcohol reacting on the catalytic metal particles, remove carbon atoms with dangling bonds, which are obstacles to creating high-purity SWNTs. The diameter of the SWNTs produced is about 1 nm.

Figure 10: Alcohol catalytic CVD

2.3. The Hydrothermal Methods
The sonochemical/hydrothermal technique is another synthesis method which has been successful for the preparation of different carbonaceous nanoarchitectures such as nano-onions, nanorods, nanowires, nanobelts and MWNTs. This process has many advantages in comparison with other methods: i) the starting materials are easy to obtain and are stable at ambient temperature; ii) it is a low-temperature process (about 150–180 °C); iii) no hydrocarbon or carrier gas is necessary for the operation. MWNTs were produced by hydrothermal processing in which a mixture of polyethylene and water with a Ni catalyst was heated to 700–800 °C under 60–100 MPa pressure [24]. Both closed- and open-ended multiwall carbon nanotubes with wall thicknesses from several to more than 100 carbon layers were produced. An important feature of hydrothermal nanotubes is the small wall thickness and large inner core diameter, 20–800 nm. Graphitic carbon nanotubes were synthesized by the same research group using an ethylene glycol (C2H6O2) solution in the presence of a Ni catalyst at 730–800 °C under 60–100 MPa pressure [25, 26]. TEM analysis shows that these carbon nanotubes have long and wide internal channels and Ni inclusions in the tips. Typically, hydrothermal nanotubes have a wall thickness of 7–25 nm and an outer diameter of 50–150 nm. Thin-wall carbon tubes with internal diameters from 10–1000 nm have also been produced. During growth of a tube, the synthesis fluid, which is a supercritical mixture of CO, CO2, H2O, H2 and CH4, enters the tube. Manafi et al. [27] have prepared large quantities of carbon nanotubes using the sonochemical/hydrothermal method. A 5 mol/l NaOH aqueous solution of dichloromethane (CH2Cl2) and metallic Li was used as the starting material. The hydrothermal synthesis was conducted at 150–160 °C for 24 h. The nanotubes produced in this way were about 60 nm in diameter and 2–5 μm long. Uniformly distributed catalyst nanoparticles were observed by SEM analysis as a result of the ultrasonic pre-treatment of the starting solution. Multiwall carbon nanocells and multiwall carbon nanotubes have also been grown in hydrothermal fluids from amorphous carbon, at temperatures below 800 °C, in the absence of metal catalysts [26]. Carbon nanocells were formed by interconnecting multiwalls of graphitic carbon at 600 °C. The bulk made of connected hollow spherical cells appears macroscopically as disordered carbon. The nanocells have diameters smaller than 100 nm, with outer diameters ranging from 15 to 100 nm and internal cavities with diameters from 10 to 80 nm. The nanotubes observed in the sample have diameters in the range of tens of nanometers and lengths in the range of hundreds of nanometers [27].
3.0 PURIFICATION OF CNTs
As-produced nanotubes usually contain a large amount of impurities such as metal particles, amorphous carbon, and multishell carbon particles. There are different steps in the purification of nanotubes [28].
3.1 Air Oxidation:

As produced, carbon nanotubes have low purity; the average purity is about 5–10%, so purification is needed before the attachment of drugs onto CNTs. Purification of single-walled carbon nanotubes (SWCNTs) is based on the selective oxidation of carbonaceous impurities by heating at a constantly increasing temperature (i.e. dynamic oxidation) in air. Air oxidation is useful in reducing the amount of amorphous carbon and metal catalyst particles (Ni, Y). The optimal oxidation condition is found to be 673 K for 40 min. Dynamic oxidation allows an efficient removal of carbonaceous impurities without significant loss of nanotubes [29, 30].
3.2 Acid Refluxing

Refluxing the sample in strong acid is effective in reducing the amount of metal particles and amorphous carbon. Different acids have been used, including hydrochloric acid (HCl), nitric acid (HNO3) and sulphuric acid (H2SO4), but HCl was identified as the ideal refluxing acid [30, 31].

3.3 Surfactant aided sonication, filtration and annealing

After acid refluxing, the CNTs were purer, but the tubes were entangled together, trapping most of the impurities, such as carbon particles and catalyst particles, which were difficult to remove by filtration. So surfactant-aided sonication was carried out. Sodium dodecyl benzene sulphonate (SDBS)-aided sonication with ethanol (or methanol) as the organic solvent was preferred because it took the longest time for the CNTs to settle down, indicating that an even suspension state was achieved. The sample was then filtered with an ultrafiltration unit and annealed at 1273 K in N2 for 4 h. Annealing is effective in optimizing the CNT structures. It was proved that surfactant-aided sonication is effective in untangling CNTs, and thus in freeing the particulate impurities embedded in the entanglement. Nanotubes can also be purified by a multi-step purification method [32, 33, 34, 35].
4.0 PROPERTIES OF CNTs
4.1 Mechanical Properties
Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus, respectively. This strength results from the covalent sp2 bonds formed between the individual carbon atoms. Because of these C–C bonds, CNTs are expected to be extremely strong along their axes and to have a very large Young's modulus in their axial direction. The Young's modulus of a SWNT is estimated to be as high as 1 TPa to 1.8 TPa. The high elastic modulus makes CNTs suitable for application as probe tips in scanning probe microscopy. The modulus of a SWNT depends on the diameter and chirality; in the case of a MWNT, it correlates with the amount of disorder in the sidewalls. For MWNTs, experiments have indicated that only the outer graphitic shell can support stress when the tubes are dispersed in an epoxy matrix, and for single-wall nanotube bundles (also known as ropes) it has been demonstrated that shearing effects due to the weak inter-tube cohesion give significantly reduced moduli compared to individual tubes [36].

Figure 11: Tensile Strength and Elastic modulus of CNT

A single perfect nanotube is about 10 to 100 times stronger than steel per unit weight. The Young's modulus of the best nanotubes can be as high as 1000 GPa, approximately five times higher than that of steel. The tensile strength, or breaking strain, of nanotubes can be up to 63 GPa, around 50 times higher than that of steel. These properties, coupled with the lightness of carbon nanotubes, give them great potential in applications such as aerospace. It has even been suggested that nanotubes could be used in the "space elevator", an Earth-to-space cable first proposed by Arthur C. Clarke. The electronic properties of carbon nanotubes are also extraordinary. Especially notable is the fact that nanotubes can be metallic or semiconducting depending on their structure. Thus, some nanotubes have conductivities higher than that of copper, while others behave more like silicon. There is great interest in the possibility of constructing nanoscale electronic devices from nanotubes, and some progress is being made in this area. However, in order to construct a useful device we would need to arrange many thousands of nanotubes in a defined pattern, and we do not yet have the degree of control necessary to achieve this. There are several areas of technology where carbon nanotubes are already being used. These include flat-panel displays, scanning probe microscopes and sensing devices. The unique properties of carbon nanotubes will undoubtedly lead to many more applications [37, 38, 39].

Table 2: Comparison of mechanical properties of CNTs with other strong materials [40]

Material               Young's modulus (GPa)   Tensile strength (GPa)   Density (g/cm3)
Single-wall nanotube   1054                    150                      N/A
Multi-wall nanotube    1200                    150                      2.6
Steel                  208                     0.4                      7.8
Epoxy                  3.5                     0.005                    1.25
Wood                   16                      0.008                    0.6

4.2 Electrical Properties
Not only are carbon nanotubes extremely strong, but they also have very interesting electrical properties. A single graphite sheet is a semimetal, which means that it has properties intermediate between semiconductors (like the silicon in computer chips, where electrons have restricted motion) and metals (like the copper used in wires, where electrons can move freely). When a graphite sheet is rolled into a nanotube, not only do the carbon atoms have to line up around the circumference of the tube, but the quantum mechanical wave functions of the electrons must also match up; in quantum mechanics the electrons behave as waves, so the wave function must close on itself around the circumference. In theory, metallic nanotubes can carry an electrical current density of 4 × 10^9 A/cm2, which is more than 1,000 times greater than metals such as copper [41].
Individual nanotubes, like macroscopic structures, can be characterized by a set of electrical properties (resistance, capacitance and inductance) which arise from the intrinsic structure of the nanotube and its interaction with other objects. Electrical transport inside CNTs is affected by scattering by defects and by lattice vibrations that lead to resistance, similar to that in bulk materials. However, the 1D nature of the CNT and its strong covalent bonding drastically affect these processes. Scattering by small angles is not allowed in a 1D material; only forward and backward motion of the carriers is possible. Most importantly, the 1D nature of the CNT leads to a new type of quantized resistance related to its contacts with three-dimensional (3D) macroscopic objects such as the metal electrodes. For a metallic CNT, M = 2, so that RQ = h/4e2 = 6.45 kΩ. Of course, as well as this quantum resistance there are other forms of contact resistance, such as that attributable to the presence of Schottky barriers at metal–semiconducting nanotube interfaces, and parasitic resistance, which is simply due to bad contacts. At the other extreme, in long CNTs, or at high bias, many scattering collisions can take place and the so-called diffusive limit of transport that is typical of conventional conductors is reached. In this limit the carriers have a finite mobility; however, in CNTs this can be very high, as much as 1,000 times higher than in bulk silicon.
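As a quick numerical check of the quoted quantum resistance, a minimal sketch assuming only SciPy's standard physical constants:

import scipy.constants as const  # Planck constant h, elementary charge e

# Quantum resistance of a metallic CNT with M = 2 conducting channels,
# each carrying two spin states: R_Q = h / (4 e^2)
R_Q = const.h / (4 * const.e ** 2)
print(f"R_Q = {R_Q / 1e3:.2f} kOhm")  # ~6.45 kOhm, matching the value in the text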
The intrinsic electronic structure of a CNT also leads to a capacitance that is related to its density of states (that is, how its energy states are distributed in energy) and is independent of electrostatics. This quantum capacitance, CQ, is small, of the order of 10^-16 F μm^-1. In addition to CQ, a CNT incorporated in a structure has an electrostatic capacitance, CG, which arises from its coupling to surrounding conductors and as such depends on the device geometry and dielectric structure.
Finally, CNTs have inductance, which is a resistance to any change in the current flowing through them. Again, there is a quantum and a classical contribution. The classical self-inductance depends on the CNT diameter, the geometry of the structure and the magnetic permeability of the medium. The total inductance is the sum of the two values, so that the larger, kinetic inductance LK dominates (LK ≈ 16 nH μm^-1, LC ≈ 1 nH μm^-1). In response to an a.c. signal, a CNT behaves like a transmission line owing to its inductance [41, 42].

4.3 Thermal Properties:

All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators laterally to the tube axis. It is predicted that carbon nanotubes will be able to transmit up to 6000 W m^-1 K^-1 at room temperature; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W m^-1 K^-1. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air. Thermal expansion of CNTs is expected to be largely isotropic, in contrast to conventional graphite fibers, which are strongly anisotropic. This may be beneficial for carbon-carbon composites. It is expected that low-defect CNTs will have very low coefficients of thermal expansion [43, 44].
4.4 Chemical Properties:
The chemical reactivity of a CNT is enhanced compared with a graphene sheet as a direct result of the curvature of the CNT surface. This curvature causes mixing of the σ and π orbitals, which leads to hybridization between them. The degree of hybridization becomes larger as the diameter of a SWNT gets smaller. Hence, carbon nanotube reactivity is directly related to the π-orbital mismatch caused by increased curvature. Therefore, a distinction must be made between the sidewall and the end caps of a nanotube, and for the same reason, a smaller nanotube diameter results in increased reactivity. Covalent chemical modification of either the sidewalls or the end caps has been shown to be possible; for example, the solubility of CNTs in different solvents can be controlled this way. However, covalent attachment of molecular species to the fully sp2-bonded carbon atoms on the nanotube sidewalls proves to be difficult, so nanotubes can usually be considered chemically inert [45].
4.5 Optical Properties:

The optical properties of SWNTs are related to their quasi-one-dimensional nature. Theoretical studies have revealed that the optical activity of chiral nanotubes disappears if the nanotubes become larger; it is therefore expected that other physical properties are influenced by these parameters too. Use of this optical activity might result in optical devices in which CNTs play an important role [46].
5.0 APPLICATIONS OF CNTS
Various applications of CNTs are as follows:

1) Carrier for drug delivery: Carbon nanohorns (CNHs) are spherical aggregates of CNTs with an irregular horn-like shape. Research studies have proved CNTs and CNHs to be potential carriers for drug delivery systems [47].

2) Functionalized carbon nanotubes have been reported for the targeting of Amphotericin B to cells [48].

3) Cisplatin incorporated into oxidized SWNHs has shown slow release of cisplatin in an aqueous environment. The released cisplatin was effective in terminating the growth of human lung cancer cells, while the SWNHs alone did not show anticancer activity [49].

4) The anticancer drug polyphosphazene platinum, given with nanotubes, had enhanced permeability, distribution and retention in the brain due to the controlled lipophilicity of the nanotubes [50].

5) The antibiotic doxorubicin, given with nanotubes, is reported to show enhanced intracellular penetration. The gelatin-CNT mixture (hydrogel) has been used as a potential carrier system for biomedical applications [50].

7) A CNT-based carrier system can offer a successful oral alternative for the administration of erythropoietin (EPO), which has not been possible so far because of the denaturation of EPO by gastric environmental conditions and enzymes [50].

8) They can be used as lubricants or glidants in tablet manufacturing due to their nanosize and the sliding nature of graphite layers bound by van der Waals forces [50].

9) In Genetic Engineering:
In genetic engineering, CNTs and CNHs are used to manipulate genes and atoms in the development of bioimaging genomes, proteomics and tissue engineering. Unwound (single-stranded) DNA winds around a SWNT by connecting via its specific nucleotides, causing a change in the SWNT's electrostatic properties. This creates potential applications in diagnostics (polymerase chain reaction) and in therapeutics. Wrapping of carbon nanotubes by single-stranded DNA was found to be sequence-dependent, and hence can be used in DNA analysis. Owing to their unique cylindrical structure and properties, nanotubes are used as carriers for genes (gene therapy) to treat cancer and genetic disorders; their tubular nature has proved them a suitable vector in gene therapy. Nanotubes complexed with DNA were found to release the DNA before it was destroyed by the cell's defense system, boosting transfection significantly. Nanostructures have shown an antiviral effect against respiratory syncytial virus (RSV), a virus causing severe bronchitis and asthma. The treatment is generally done by combining nanoparticles and gene-silencing technologies: RNA fragments capable of inhibiting a protein needed for virus multiplication are encapsulated within nanotubes and administered in the form of nasal sprays or drops, and promising results have been noted in inhibiting further growth of the virus. Nanotubes are also reported to support the helical crystallisation of proteins and the growth of embryonic rat brain neurons. Streptavidin protein has been successfully immobilized on CNTs via 1-pyrenebutanoic acid succinimidyl ester. Nanotubes and nanohorns can adhere various antigens on their surface and hence act as a source of antigen in vaccines; by the use of nanotubes, the use of dead bacteria as an antigen source, which is sometimes dangerous, can be avoided [51].

10) Biomedical applications
Bianco et al. have prepared soluble CNTs and have covalently linked biologically active peptides to them. This was demonstrated for the viral protein VP1 of foot-and-mouth disease virus (FMDV), showing immunogenicity and eliciting an antibody response. In chemotherapy, drug-embedded nanotubes attack viral ulcers directly and kill viruses. No antibodies were produced against the CNT backbone alone, suggesting that the nanotubes do not possess intrinsic immunogenicity. The combination of all the described features of the vaccine system with the fact that the capacity of the anti-peptide antibodies to neutralize FMDV was enhanced indicates that CNTs can have a valuable role in the construction of novel and effective vaccines. In vitro studies [52] showed selective cancer cell killing obtained by hyperthermia, due to the thermal conductivity of CNTs internalized into those cells. The work developed regarding the use of CNTs as gene therapy vectors has shown that these engineered structures can effectively transport genes and drugs inside mammalian cells; the CNT-transported genetic material conserved the ability to express proteins.
Detection of cancer at early stages is a critical step in improving cancer treatment. Currently, detection and diagnosis of cancer usually depend on changes in cells and tissues that are detected by a doctor's physical touch or imaging expertise. The potential for nanostructures to enter and analyze single cells suggests they could meet this need [42].

11) Artificial implants
Normally the body shows a rejection reaction to implants, with post-administration pain, but miniature-sized nanotubes and nanohorns attach to other proteins and amino acids, avoiding rejection. They can also be used as implants in the form of artificial joints without a host rejection reaction. Moreover, due to their high tensile strength, carbon nanotubes filled with calcium and arranged/grouped in the structure of bone can act as a bone substitute [54].

12) Preservative

Carbon nanotubes and nanohorns are antioxidant in nature. Hence, they are used to preserve drug formulations prone to oxidation. Their antioxidant property is also used in anti-aging cosmetics and, together with zinc oxide, in sunscreen dermatologicals to prevent oxidation of important skin components [50].

13) Diagnostic tool

Protein-encapsulated or protein/enzyme-filled nanotubes, due to their fluorescence ability in the presence of specific biomolecules, have been tried as implantable biosensors. Even nanocapsules filled with magnetic materials or radioisotope enzymes can be used as biosensors. Nanosize robots and motors with nanotubes can be used in studying cells and biological systems [53].

14) As catalyst

Nanohorns offer a large surface area; hence, a catalyst at the molecular level can be incorporated into nanotubes in large amounts and simultaneously released at the required rate at a particular time. A reduction in the frequency and amount of catalyst addition can therefore be achieved by using CNTs and CNHs [53].

15) As Biosensors

CNTs act as sensing materials in pressure, flow, thermal, gas, optical, mass, position, stress, strain, chemical, and biological sensors. Some applications of carbon nanotube based sensors are given below.
CNT-incorporated sensors are expected to bring about revolutionary changes in various fields, especially in the biomedical industry sector. An example is glucose sensing, where regular self-tests of glucose by diabetic patients are required to measure and control their sugar levels. Another example is the monitoring of exposure to hazardous radiation, as in nuclear plants/reactors, or in chemical laboratories or industries. The main purpose in all these cases is to detect the exposure at different stages so that appropriate treatment may be administered. CNT-based nanosensors are highly suitable as implantable sensors. Implanted sensors can be used for monitoring pulse, temperature and blood glucose, and also for diagnosing diseases. One such example is the use of nanotubes to track glucose levels in the blood, which would allow diabetics to check their sugar levels without the need for taking samples by pricking their fingers [42].
6.0 LIMITATIONS OF CNTs
- Lack of solubility in most solvents compatible with the biological milieu (aqueous based).
- The difficulty of producing structurally and chemically reproducible batches of CNTs with identical characteristics.
- Difficulty in maintaining high quality and minimal impurities.

7.0 MARKET OF CNT
The market size is projected to increase from $6 million in 2004 to $1,070 million in 2014 [40].
8.0 CONCLUSION:

With the prospect of gene therapy, cancer treatments, and innovative new answers for life-threatening diseases on the horizon, the science of nanomedicine has become an ever-growing field with an incredible ability to bypass barriers. The properties and characteristics of CNTs are still being researched heavily, and scientists have barely begun to tap the potential of these structures. Single- and multiple-walled carbon nanotubes have already proven to serve as safer and more effective alternatives to previous drug delivery methods. Among the various methods shown in this review, the CVD method clearly emerges as the best one for large-scale production of MWNTs. However, the production of SWNTs is still on the gram scale, and helical carbon nanotubes are only obtained together with linear CNTs.

REFERENCES:
[1] V. N. Popov, Mat. Sci. Eng. R 43, 61, 2004.

[2] H. W. Kroto, J. R. Heath, S. C. O'Brien, R. F. Curl & R. E. Smalley, C60: Buckminsterfullerene, Nature 318, 162-163, 14 November 1985.

[3] S. Iijima, Nature (London) 354 56, 1991.

[4] Teri Wang Odom, Jin-Lin Huang, Philip Kim & Charles M. Lieber, Atomic structure and electronic properties of single-
walled carbon nanotubes, Nature 391, 62-64 ,1 January 1998.

[5] E.N.Ganesh. Single Walled and Multi Walled Carbon Nanotube Structure, Synthesis and Applications. International Journal
of Innovative Technology and Exploring Engineering (IJITEE) ISSN: 2278-3075,Volume-2, Issue-4, March 2013.

[6] Rajashree Hirlekar, Manohar Yamagar, Harshal Garse, Mohit Vij, Vilasrao Kadam. Carbon Nanotubes and Its Applications:
A Review. Asian Journal of Pharmaceutical and Clinical Research, Vol.2 Issue 4, October- December 2009.

[7] Qiang Shi, Zhongyuan Yu, Yumin Liu, Hui Gong, Haozhi Yin, Wen Zhang, Jiantao Liu, Yiwei Peng. Plasmonic properties of nano-torus: An FEM method. Optics Communications, Volume 285, Issues 21-22, pp. 4542-4548, 1 October 2012.

[8] Xiaojun Wu and Xiao Cheng Zeng Periodic Graphene Nanobuds. Nano Lett., December 11, 2008.


[9] Harris P J F, Tsang S C, Claridge J B and Green M L H. High resolution electron microscopy studies of a microporous carbon produced by arc evaporation. J. Chem. Soc. Faraday Trans. 90, 2799-2802, 1994.

[10] http://www.ee.nec.de/News/Releases/pr283-01.html (August, 2001).

[11] Shuyun Zhu and Guobao Xu. Single-walled carbon nanohorns and their applications. Nanoscale, 2, 2538-2549.2010.

[12] Sayes CM, Liang F, Hudson JL, Mendez J,Guo W, Beach JM et al. Functionalization density dependence of single-walled
carbon nanotubes cytotoxicity in vitro. Toxicol. Lett. 16: 135142, 2006.

[13] H. Kuzmany, A. Kukovecz, F. Simon, M. Holzweber, Ch. Kramberger, T. Pichler. Functionalization of carbon nanotubes. Synthetic Metals 141, 113-122, 2004.

[14] Brenner, D. Empirical potential for hydrocarbons for use in simulating the chemical vapor deposition of diamond films.
Physical Review B 42(15): 9458-9471, 1990.

[15] Calvert, P. Strength in disunity. Nature 357: 365-366, 1992.

[16] Che, G., B. Lakshmi, C. Martin and E. Fisher Chemical vapor deposition based on synthesis of carbon nanotubes and
nanofibers using a template method. Chemistry of Materials 10: 260-267, 1998.

[17] T. W. Ebbesen & P. M. Ajayan, Large-scale synthesis of carbon nanotubes, Nature 358, 220-222, 16 July 1992.

[18] Sinnott, S.B.; Andrews, R. Carbon nanotubes: synthesis, properties and applications. Critical Reviews in Solid State Mat. Sci. 26, 145-249, 2001.

[19] Smalley, R.E.; Dai, H.J.; Rinzler, A.G.; Nikolaev, P.; Thess, A.; Colbert, D.T. Single-wall nanotubes produced by metal-catalyzed disproportionation of carbon monoxide. Chem. Phys. Lett. 260, 471-475, 1996.

[20] Guo, T.; Nikolaev, P.; Rinzler, A.G.; Tomanek, D.; Colbert, D.T.; Smalley, R.E. Self-assembly of tubular fullerenes. J. Phys. Chem. 99, 10694-10697, 1995.

[21] Paradise, M.; Goswami, T. Carbon nanotubes - production and industrial application. Mat. Design, 28, 1477-1489, 2007.

[22] Teo, K.B.K.; Singh, Ch.; Chhowalla, M.; Milne, W.I. Catalytic synthesis of carbon nanotubes and nanofibers. In Encyclopedia of Nanoscience and Nanotechnology; Nalwa, H.S., Ed.; American Scientific Publishers: Valencia, CA, USA; Volume 1, pp. 665-668, 2003.

[23]http://www.pharmainfo.net.

[24] Gogotsi, Y.; Libera, J.A.; Yoshimura, M. Hydrothermal synthesis of multiwall carbon nanotubes. J. Mat. Res. 15, 2591-2594, 2000.

[25] Gogotsi, Y.; Naguib, N.; Libera, J. In situ chemical experiments in carbon nanotubes. Chem. Phys. Lett. 365, 354-360, 2002.

[26] Manafi, S.; Nadali, H.; Irani, H.R. Low temperature synthesis of multi-walled carbon nanotubes via a sonochemical/hydrothermal method. Mat. Lett. 62, 4175-4176, 2008.

[27] Calderon Moreno, J.M.; Swamy, S.S.; Fujino, T.; Yoshimura, M. Carbon nanocells and nanotubes grown in hydrothermal fluids. Chem. Phys. Lett. 329, 317-322, 2000.

[28] Hou PX, Bai S, Yang GH, Liu C, Cheng HM.Multi-step purification of carbon nanotubes.Carbon; 40: 81-85,2002.

[29] Dillon, A. C.; Jones, K. M.; Bekkedahl, T. A.; Kiang, C. H.; Bethune, D. S.; Heben, M. J. Nature, 376-377, 1997.


[30] Nikolay Dementev,a Sebastian Osswald,b Yury Gogotsib and Eric Borguet, Purification of carbon nanotubes by dynamic
oxidation in air, J. Mater. Chem., 19, 7904-7908, 2009.

[31] I. W. Chiang, B. E. Brinson, R. E. Smalley, J. L. Margrave, and R. H. Hauge. Purification and Characterization of Single-
Wall Carbon Nanotubes. J. Phys. Chem. B, 105, 1157-1161, 2001.

[32] Zimmerman, J. L.; Bradley, R. K.; Huffman, C. B.; Hauge, R. H.;Margrave, J. L.Chem. Mater. 12, 1361, 2000.

[33] Bandow, S.; Rao, A. M.; Williams, K. A.; Thess, A.; Smalley, R.E.; Eklund, P. C.J. Phys. Chem. B , 101, 8839. 1997.

[34] Duesberg, G. S.; Burghard, M.; Muster, J.; Philipp, J.; Roth, S.Chem. Commun. 435. 1998.


[35] Shelimov, K. B.; Esenaliev, R. O.; Rinzler, A. G.; Huffman, C.B.; Smalley, R. E.Chem. Phys. Lett. , 282, 429, 1998.

[36] Harris, P. Carbon nanotubes and related structures: new materials for the 21st century. Cambridge, Cambridge University
Press, 1999.

[37] M. Meo and M. Rossi, Prediction of Young's modulus of single wall carbon nanotubes by molecular-mechanics based finite element modeling, Composites Science and Technology, vol. 66, no. 11-12, pp. 1597-1605, 2006.

[38] M.-F. Yu, B. S. Files, S. Arepalli, and R. S. Ruoff, Tensile loading of ropes of single wall carbon nanotubes and their mechanical properties, Physical Review Letters, vol. 84, no. 24, pp. 5552-5555, 2000.

[39] Nardelli, M., J.-L. Fattebert, D. Orlikowski, C. Roland, Q. Zhao, et al. Mechanical properties, defects, and electronic
behavior of carbon nanotubes. Carbon 38: 1703-1711, 2000.

[40] http://www.dolcera.com/wiki/index.php?title=Carbon_Nanotubes_%28CNT%29.

[41] H. Dai, A. Javey, E. Pop, D. Mann, and Y. Lu, Electrical transport properties and field-effect transistors of carbon nanotubes, NANO: Brief Reports and Reviews, vol. 1, no. 1, pp. 1-4, 2006.

[42] Prabhakar R. Bandaru. Electrical Properties and Applications of Carbon Nanotube Structures, Journal of Nanoscience and
Nanotechnology Vol.7, 129, 2007.

[43] E. Pop, D. Mann, Q. Wang, K. Goodson, and H. Dai, Thermal conductance of an individual single-wall carbon nanotube above room temperature, Nano Letters, vol. 6, no. 1, pp. 96-100, 2006.

[44]Stahl, H., J. Appenzeller, R. Martel, P. Avouris and B. Lengeler Intertube coupling in ropes of single-wall carbon
nanotubes. Physical Review Letters 85(24): 5186-5189, 2000.

[45] Lordi, V. and N. Yao Molecular mechanics of binding in carbon-nanotube-polymer composites. Journal of Materials
Research 15(12): 2770-2779, 2000.

[46] H. Kataura, Y. Kumazawa, Y. Maniwa, I. Umezu, S. Suzuki, Y. Ohtsuka, and Y. Achiba, Optical properties of single-wall carbon nanotubes, Synthetic Metals, vol. 103, no. 1-3, pp. 2555-2558, 1999.

[47] Sebastien W, Giorgia W, Monica P, Cedric B, Jean-Paul K, Renato B. Targeted delivery of Amphotericin B to cells by using functionalized carbon nanotubes. Angewandte Chemie 117: 6516-6520, 2005.


[48] Barroug A, Glimcher M. Hydroxyapatite crystals as a local delivery system for cisplatin: adsorption and release of cisplatin
in vitro. J Orthop Res; 20: 274-280, 2002.

[49] Kumiko Ajima, Masako Yudasaka, Tatsuya Murakami, Alan Maign, Kiyotaka Shiba and Sumio Iijima. Carbon nanohorns as anticancer drug carriers. Mol. Pharm., 2(6), pp. 475-480, 2005.



[50] Pai P, Nair K, Jamade S, Shah R, Ekshinge V, Jadhav N. Pharmaceutical applications of carbon tubes and nanohorns. Current Pharma Research Journal; 1: 11-15, 2006.

[51] Pantarotto D, Partidos C, Hoebeke J, Brown F,Kramer E, Briand J. Immunization with peptide- functionalized carbon
Nanotubes enhances virus-specific neutralizing antibody responses. Chem Biol 10: 961-966, 2003.

[52] Kam NWS, O'Connell M, Wisdom JA, Dai HJ. Carbon nanotubes as multifunctional biological transporters and near-infrared agents for selective cancer cell destruction. Proc. Natl. Acad. Sci. U.S.A.; 102: 11600-11605, 2005.

[53] Kuznetsova A, Mawhinney D. Enhancement of adsorption inside of single-walled nanotubes: opening the entry ports. Chem
Phys Lett; 321: 292-296, 2000.
[54] Deng P, Xu Z, Li J. Simultaneous determination of ascorbic acid and rutin in pharmaceutical preparations with electrochemical method based on multi-walled carbon nanotubes-chitosan composite film modified electrode. Journal of Pharmaceutical and Biomedical Analysis; 76: 234-242, 2013.


















Review on Microstrip Patch Antennas using Metamaterials
Atul Kumar1, Nitin Kumar2, Dr. S.C. Gupta2
1 Scholar (PG), M-Tech Digital Communication, DIT Dehradun, India
2 Department of ECE, DIT Dehradun, India
Abstract: Microstrip patch antennas have many advantages, such as light weight, small size and low cost, but they also have some disadvantages, notably low gain and narrow bandwidth, which are two important parameters. This review paper describes how the performance of the patch antenna can be increased by using metamaterials, i.e., how the gain and bandwidth can be improved. We first provide an introduction to metamaterials and the microstrip patch antenna, then describe the parameters of the microstrip patch antenna that can be improved by using metamaterials, and finally discuss the future scope and applications of metamaterials.
Keywords- Metamaterials (MTM), SRR, LHM, Microstrip Patch Antennas (MSA)
I. INTRODUCTION:
II. METAMATERIALS:
A metamaterial is an artificial material which has negative values of ε and μ, whereas all natural materials found in nature have positive values of ε and μ. In 1967, Viktor Veselago provided the visionary speculation on the existence of MTM substances with simultaneously negative values of ε and μ [1]. These substances were termed LH (left-handed) to express the fact that they would allow the propagation of electromagnetic waves with the electric field, the magnetic field, and the phase constant vectors building a left-handed triad, compared with conventional materials where this triad is known to be right-handed [2]. A metamaterial is also known as a negative-refractive-index material or left-handed material (LHM); such a material also shows a reversal of the Doppler effect and a reversal of Snell's law. In an LHM a ray is refracted away from the normal, whereas in all natural materials the ray is refracted toward the normal; this produces a focus inside the material. There are mainly four types of metamaterial structure: the split-ring structure, the symmetrical-ring structure, the omega structure, and the S structure.
After Veselago's paper, it took more than 30 years until the first LH material was conceived and demonstrated experimentally. This LH material was not a natural substance, as expected by Veselago, but an artificial, effectively homogeneous structure (i.e., a MTM), which was proposed by Smith and colleagues at the University of California, San Diego (UCSD) [3]. This structure was inspired by the pioneering works of Pendry at Imperial College, London. Pendry introduced the plasmonic-type negative-ε/positive-μ and positive-ε/negative-μ structures shown in Fig. 1, which can be designed to have their plasmonic frequency in the microwave range. Both of these structures have an average cell size p much smaller than the guided wavelength λg (p ≪ λg) and are therefore effectively homogeneous structures, or MTMs.


The negative-ε/positive-μ MTM is the metal thin-wire (TW) structure shown in Fig. 1(a). If the excitation electric field E is parallel to the axis of the wires (E ∥ z), so as to induce a current along them and generate equivalent electric dipole moments, this MTM exhibits a plasmonic-type permittivity frequency function of the form [4, 5]
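The standard Drude-type form reported by Pendry [4, 5] and reproduced in the Caloz-Itoh text [1] reads (assuming this is the expression intended here):

\varepsilon_r(\omega) = 1 - \frac{\omega_{pe}^2}{\omega^2 + j\omega\zeta}

where \omega_{pe} is the electric plasma frequency and \zeta a damping factor accounting for metal losses, so that \varepsilon_r is negative below \omega_{pe} in the low-loss limit.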


The positive-ε/negative-μ MTM is the metal split-ring resonator (SRR) structure shown in Fig. 1(b). If the excitation magnetic field H is perpendicular to the plane of the rings (H ∥ y), so as to induce resonating currents in the loop and generate equivalent magnetic dipole moments, this MTM exhibits a plasmonic-type permeability frequency function of the form [6]
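The standard Lorentz-type form reported by Pendry [6] is (again assuming this is the expression intended here):

\mu_r(\omega) = 1 - \frac{F\omega^2}{\omega^2 - \omega_{0m}^2 + j\omega\zeta}

where F is a geometrical filling factor, \omega_{0m} the magnetic resonance frequency, and \zeta a damping factor; \mu_r is negative in a narrow band just above \omega_{0m}.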






Fig. 1: First negative-ε/positive-μ and positive-ε/negative-μ MTMs (p ≪ λg), constituted only by standard metals and dielectrics, proposed by Pendry. (a) Thin-wire (TW) structure exhibiting negative-ε/positive-μ if E ∥ z [5]. (b) Split-ring resonator (SRR) structure exhibiting positive-ε/negative-μ if H ∥ y [7].


In LHM ray refracted away from the normal but in all natural material ray refracted toward the normal That produce the focuss inside
the material as in fig.



(c) Refracted rays in a left-handed metamaterial
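A small numerical illustration of the reversed refraction, assuming NumPy; the index values are arbitrary examples, not taken from the paper:

import numpy as np

def refraction_angle_deg(theta_i_deg, n1=1.0, n2=1.5):
    """Snell's law n1*sin(theta_i) = n2*sin(theta_t); the sign of the
    refraction angle flips when n2 is negative (left-handed medium)."""
    theta_i = np.radians(theta_i_deg)
    return np.degrees(np.arcsin(n1 * np.sin(theta_i) / n2))

print(refraction_angle_deg(30, n2=+1.5))  # ~+19.5 deg: bent toward the normal
print(refraction_angle_deg(30, n2=-1.5))  # ~-19.5 deg: bent to the same side as the incident ray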

III. MICROSTRIP PATCH ANTENNAS:

Microstrip patch antennas are among the most widely used antennas today, offering many advantages particularly in the frequency range of 1 to 6 GHz. Deschamps first proposed the concept of the microstrip antenna (MSA) in 1953. Microstrip antennas are also known as microstrip patch antennas, or simply patch antennas. A microstrip antenna in its simplest form consists of a radiating patch on one side of a dielectric substrate and a ground plane on the other side. The radiating elements and the feedlines are usually photoetched on the dielectric substrate. The microstrip antenna radiates a relatively broad beam broadside to the plane of the substrate. Thus the microstrip antenna has a very low profile and can be fabricated using printed circuit (photolithographic) technology [7]. The radiating patch may be square, rectangular, thin strip (dipole), circular, elliptical, triangular or any other configuration. There are many configurations that can be used to feed microstrip antennas: microstrip line, coaxial probe, aperture coupling and proximity coupling. Patch antennas have the following advantages and disadvantages [8].




Advantages:
- Lightweight and small volume.
- Low fabrication cost; easier to integrate with other MICs on the same substrate.
- They allow both linear polarization and circular polarization.
- They can be made compact for use in personal mobile communication.
- They allow dual- and triple-frequency operations.

Disadvantages:
- Low bandwidth.
- Low gain.
- Low power handling capability.
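To make the design flow concrete, here is a minimal sketch of the standard transmission-line-model design equations for a rectangular patch, as found in Balanis [7]; the 2.45 GHz FR-4 example values are assumptions chosen purely for illustration:

import math

C = 3e8  # speed of light (m/s)

def rectangular_patch_dimensions(f0, er, h):
    """Transmission-line-model design equations (Balanis).
    f0: resonant frequency (Hz), er: substrate permittivity, h: substrate height (m)."""
    W = C / (2 * f0) * math.sqrt(2 / (er + 1))            # patch width
    e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / \
         ((e_eff - 0.258) * (W / h + 0.8))                # fringing-field length extension
    L = C / (2 * f0 * math.sqrt(e_eff)) - 2 * dL          # physical patch length
    return W, L

# Example: 2.45 GHz patch on FR-4 (er = 4.4, h = 1.6 mm) -- assumed values
W, L = rectangular_patch_dimensions(2.45e9, 4.4, 1.6e-3)
print(f"W = {W * 1000:.1f} mm, L = {L * 1000:.1f} mm")    # ~37.3 mm x ~28.8 mm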


IV. PARAMETERS OF PATCH ANTENNAS WHICH CAN BE IMPROVED BY USING MTM

By using a metamaterial as a substrate or cover, we can enhance the gain, and we can also increase the bandwidth and directivity of patch antennas [9].
MSAs have a narrow bandwidth, which is the major factor limiting their widespread application, so increasing the bandwidth of MSAs is an important research topic today. The bandwidth can be improved by increasing the substrate height and reducing the dielectric constant [10], and also by using an MTM as a cover [11][12].
The directivity can be increased by using a left-handed metamaterial: when a left-handed metamaterial is used as a slab, it acts like a lens, focusing the energy so that the radiated energy is concentrated.
When a negative-permeability metamaterial reflecting surface is applied to a microstrip patch antenna, the gain increases by about 6.91 dBi, because the SRR eliminates the substrate surface wave and the radiated energy is concentrated [13]. Thus the main problem in patch antennas, the substrate surface wave, can be removed by using SRRs [14].
The size of a microstrip patch antenna can also be reduced by using a metamaterial structure. With a mushroom-structured composite right/left-handed transmission line (CRLH-TL) metamaterial, a size reduction of 61.12% can be achieved [15].
In addition, a wide band can be obtained by reducing the ground plane of the antenna. A compact ultra-wideband (UWB) antenna can be designed using a metamaterial structure; such an antenna exhibits a wide bandwidth of 189%.
The bandwidth of a single patch antenna can also be raised by placing a number of metamaterial unit cells [16].

V. MODERN METAMATERIAL APPLICATION
The concept of metamaterials, which grew out of the works of V. G. Veselago and J. B. Pendry, has drastically changed our way of thinking about light-matter interactions and greatly enriched the fields of classical and quantum electrodynamics. Not only can negative-index metamaterials now be fabricated practically, but they can also be used to create super- and hyperlenses with subwavelength optical resolution. Likewise, it is now possible to fabricate metamaterials designed using the transformation optics approach and apply them in real invisibility cloaks.

The modern metamaterial applications include:
- superresolution imaging and optical sensing,
- the advancement of photonic circuitry with metatronics,
- all-optical and electro-optical dynamic control of light,
- electromagnetic cloaking, and
- light harvesting for improved solar-cell technology.
This special issue focuses on the advances along these research avenues and on the new photonic devices associated with them.

VI. FUTURE SCOPE OF METAMATERIAL

The dream of invisibility cloaks may become possible, and although the process will likely take another decade, steps have already been made in that direction. Research continues on developing metamaterials: a special kind of material that causes light to bend in unusual ways, following the contours of the material structure and coming back out the same way it went in.

VII. CONCLUSION

Microstrip antennas are one of the most innovative topics in antenna theory and design, and have applications in modern microwave systems. Microstrip patch antennas offer notable advantages, and research is ongoing to improve the gain and bandwidth of the patch antenna. Existing solutions lead to the problems of spurious radiation and high complexity. A newer approach offers a solution in the form of metamaterials. Metamaterials play an important role in antenna design due to their interesting and unusual properties. As this review [1]-[16] shows, metamaterials can be used for the performance enhancement of microstrip patch antennas. A metamaterial antenna is made by loading the metamaterial structure over the substrate. There are different kinds of metamaterial substrates, and changing the metamaterial substrate changes the parameters of the antenna. The gain of a patch antenna increases by 1.5 dB to 7 dB with the addition of metamaterial structures. Miniaturization is the primary function of metamaterials: all the works mentioned here show that the use of metamaterials results in about 50% reduction in the size of a patch antenna. Narrow bandwidth and low gain are the two main drawbacks of the microstrip patch antenna, and by using metamaterials we can overcome these problems.

REFERENCES

1. C. Caloz and T. Itoh, Electromagnetic Metamaterials: Transmission Line Theory and Microwave Applications. Piscataway, NJ: Wiley-IEEE, 2005.
2. V. Veselago, The electrodynamics of substances with simultaneously negative values of ε and μ, Soviet Physics Uspekhi, vol. 10, no. 4, pp. 509-514, Jan.-Feb. 1968.
3. D. R. Smith, W. J. Padilla, D. C. Vier, S. C. Nemat-Nasser, and S. Schultz, Composite medium with simultaneously negative permeability and permittivity, Phys. Rev. Lett., vol. 84, no. 18, pp. 4184-4187, May 2000.
4. J. B. Pendry, A. J. Holden, W. J. Stewart, and I. Youngs, Extremely low frequency plasmons in metallic mesostructures, Phys. Rev. Lett., vol. 76, no. 25, pp. 4773-4776, June 1996.
5. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, Low frequency plasmons in thin-wire structures, J. Phys. Condens. Matter, vol. 10, pp. 4785-4809, 1998.
6. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, Magnetism from conductors and enhanced nonlinear phenomena, IEEE Trans. Microw. Theory Tech., vol. 47, no. 11, pp. 2075-2084, Nov. 1999.
7. Antenna Theory - Analysis and Design (Constantine A. Balanis) (2nd Ed) [John Willey].
8. Yoonjae Lee and Yang Hao, Characterization of microstrip patch antennas on metamaterial substrates loaded with complementary split-ring resonators, Microwave Opt. Technol. Lett., vol. 50, pp. 2131-2135, 2008.
9. R. F. Harrington, Effect of antenna size on gain, bandwidth, and efficiency, J. Res. Nat. Bureau Stand., vol. 64D, pp. 1-12, 1960.
10. K. C. Gupta, Broadbanding techniques for microstrip patch antennas - A review, Scientific Report no. 98, 1988.
11. Surabhi Dwivedi, Vivekanand Mishra, Y. K. Posta, Design and comparative analysis of a metamaterial included slotted patch antenna with a metamaterial cover over patch, ISSN: 2277-3878, Volume-1, Issue-6, January 2013.
12. Mimi A. W. Nordin, Mohammad T. Islam, and Norbahiah Misran, Design of a compact ultra wideband metamaterial antenna based on the modified split ring resonator and capacitively loaded strips unit cell, PIER, Vol. 136, pp. 157-173, 2013.

13. Sarawuth Chaimool, Kwok L. Chung, Prayoot Akkaraekthalin, A 2.45-GHz WLAN high-gain antenna using a metamaterial reflecting surface.
14. W. Wang, B.-I. Wu, J. Pacheco, X. Chen, T. Grzegorczyk and J. A. Kong, A study of using metamaterials as antenna substrate to enhance gain, PIER 51, pp. 295-328, 2005.
15. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, Magnetism from conductors and enhanced nonlinear phenomena, IEEE Transactions on Microwave Theory and Techniques, Vol. 47, No. 11, November 1999.
16. Mimi A. W. Nordin, Mohammad T. Islam, and Norbahiah Misran, Design of a compact ultra wideband metamaterial antenna based on the modified split ring resonator and capacitively loaded strips unit cell, PIER, Vol. 136, pp. 157-173, 2013.





















Frequency Analysis of Healthy & Epileptic Seizure in EEG using Fast
Fourier Transform
Meenakshi, Dr. R.K Singh, Prof. A.K Singh
M.Tech Scholar, KNIT Sultanpur, Uttar Pradesh, menu.akshi@gmail.com, contact no. 7398322482

Abstract - Analysis of EEG signals reveals the frequency ranges characteristic of epileptic seizure, a neurological disorder which needs to be detected at an early stage so that patients' specific needs are known and they can be helped to live with the problem. About 0.6 to 0.8% of the Indian population is affected by seizures, the most common neurological disease after stroke, and about 30% of patients have not been able to gain any control over their seizures using current pharmacological treatment measures [1-2]. KNN and linear discriminant analysis have been used to detect the discrete emotions (surprise, happiness, fear, neutral and disgust) of humans through EEG signals. We measure the EEG signal frequency ranges relating to seizure, divide them into five different bands, δ, θ, α, β and γ, covering the total range, and extract the frequency distribution through the FFT of the EEG signals to compare the difference between seizure and healthy subjects. The resulting calculations are based on the selected frequency ranges.
Keywords - Epileptic seizure, EEG, Fast Fourier Transform, BCI, rhythms of the EEG signals.

INTRODUCTION
The EEG is the most used technique to capture brain signals due to its excellent temporal resolution, usability, noninvasiveness, and low set-up costs. The supreme commander of the human body is the brain: it is the central part of the nervous system, which governs the functions of a variety of organs in the body. The signals measured from the central nervous system give the relationship between physiological change and emotions. An EEG can show what state a person is in, whether asleep, anaesthetized, or awake, because the characteristic patterns of the electrical potentials differ for each of these states. The classification of EEG signals is applied in two most important areas: epilepsy and the brain-computer interface (BCI) [3]. A seizure is a transient abnormal behavior of neurons within one or several neural networks, which limits the patient's physical and mental activities. EEG plays an important role in the nervous electro-physiology field, for example in using spike waves to diagnose epilepsy and brain tumours early, in sleep analysis, and in monitoring the depth of anesthesia.
Although there exist various signal analysis methods used in EEG analysis applications [4-6], owing to the limitations of signal processing techniques, research on EEG with existing EEG instruments has not been thorough, nor has the extraction of feature information from EEG been satisfactory for clinical diagnosis. The virtual EEG instrument is based on virtual instrument technology. The emergence of PC-based virtual instrument technology enables us not only to make full use of the resources of computer software and hardware, but also to renew the functions and performance of the instrument in time. Because the EEG signal is a stochastic, complex, non-stationary signal, it is difficult to extract the feature rhythms in EEG signals effectively using only simple analysis methods in the time domain or frequency domain. Furthermore, there are various feature waveforms with different parameters contained in EEG signals, such as spike waves, slow waves, sharp waves, sine waves, spindles and K-complexes, which are related to different pathological changes, so it is very difficult to extract all feature information with a single signal analysis method. Based on the above considerations, for the different feature information in EEG signals and the function set of the EEG instrument, the concrete realization of several time-frequency analysis methods has been discussed and integrated into the virtual EEG instrument to extract feature information from EEG signals adaptively.



I. THE ALGORITHM REALIZATION FOR EXTRACTION OF BASIC RHYTHMS IN EEG SIGNALS
In clinical practice, to evaluate the basic rhythms of an EEG signal, doctors usually apply simple analysis methods in the time domain or frequency domain based on their own practice, which leaves many uncertainties. The function of automatically extracting the feature information of basic rhythms in EEG signals is integrated into the virtual EEG device; it is realized by the Gabor transform together with the definition of the EEG basic-rhythm frequency-band relative intensity ratio (BRIR) [9]. The Gabor transform of a signal x(t) is expressed as

G_x(t, f) = \int x(\tau)\, g^*(\tau - t)\, e^{-j 2\pi f \tau}\, d\tau    (1)

where * represents the complex conjugate and the window function g restricts the Fourier transform of the signal to the neighbourhood of time t. By discretizing time and frequency, the discrete Gabor transform can be defined as

G_x(mT, nF) = \int x(\tau)\, g^*(\tau - mT)\, e^{-j 2\pi n F \tau}\, d\tau    (2)

where F and T represent the sampling intervals of frequency and time. A short interval F corresponds to a large window width, while a short interval T can be obtained by a high overlap of adjacent windows. From the Gabor transform defined by Eq. (1), the spectrogram can be defined as

S_x(t, f) = |G_x(t, f)|^2    (3)

By employing a recursive algorithm, a sliding time window and the output energy as functions of time and frequency, the time-frequency representation of the signal can be obtained. To quantify this information, for the frequency band of each basic rhythm (i = δ, θ, α, β) in the EEG signal, the frequency-band power spectral density is defined as

P_i = \int_{f_{min}^{(i)}}^{f_{max}^{(i)}} S_x(t, f)\, df,    i = δ, θ, α, β    (4)

where (f_{min}^{(i)}, f_{max}^{(i)}) represent the lower and upper limits of frequency band i. Note that the division of frequency bands in EEG is not arbitrary: the bands correspond to different origins and functions of brain activity, and i = δ, θ, α, β denotes the different oscillation rhythms with different frequencies [10].
Frequency ranges according to the nature of the rhythms of the EEG signals:

Rhythm      Frequency range   Nature
Delta (δ)   0.5 to 4 Hz       Primarily associated with deep sleep and serious brain disorders.
Theta (θ)   4 to 8 Hz         Arises from emotional stress or disappointment, unconscious material, creative inspiration and deep meditation.
Alpha (α)   8 to 13 Hz        Present when the brain is in a relaxed state.
Beta (β)    13 to 30 Hz       Associated with active attention and mental activities.
Gamma (γ)   > 30 Hz           Associated with various cognitive and motor functions.
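To illustrate the band-power computation of Eq. (4), a minimal sketch assuming NumPy and SciPy; the Gaussian-window spectrogram stands in for the Gabor transform, and the window length, overlap, and toy signal are assumptions for illustration only:

import numpy as np
from scipy import signal

fs = 256                                    # sampling rate (Hz), as used in this work
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy 10 Hz "alpha" trace

# Gaussian window approximates the Gabor analysis window g(tau - t)
f, tt, Sxx = signal.spectrogram(x, fs=fs, window=('gaussian', 32),
                                nperseg=256, noverlap=224)

bands = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13),
         'beta': (13, 30), 'gamma': (30, 100)}

for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    P = Sxx[mask].sum(axis=0).mean()        # band power P_i, averaged over time
    print(f"{name:5s}: {P:.3f}")            # alpha should dominate for this toy signal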



II. FAST FOURIER TRANSFORM
The FFT is very important for many reasons, and its efficient computation has been a well-analyzed topic for decades. The algorithms developed for the efficient computation of the DFT are generally known as FFTs [7-8]. The DFT of a length-N sequence x(n) is

X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N},    k = 0, 1, ..., N-1

For each value of k, direct computation of X(k) requires N complex multiplications and N-1 complex additions.
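As a sanity check of the definition above, a short sketch assuming NumPy: the direct O(N^2) sum is compared against the O(N log N) FFT result:

import numpy as np

def dft_direct(x):
    """Direct evaluation of the DFT definition: N complex multiplies per output bin."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

x = np.random.randn(256)
assert np.allclose(dft_direct(x), np.fft.fft(x))  # the FFT computes the same X(k), only faster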
III. EEG DATA AND CHANNEL SELECTION
The EEG signals were recorded from 4 channels (C3, C4, P3 and P4) and filtered using a band-pass filter with a frequency range of 8 Hz to 30 Hz. The signal was analyzed using the Fast Fourier Transform. The database was generated with 20 subjects in the age group of 22 to 40 years using 64 channels with a sampling frequency of 256 Hz.





Figure 1. Channel placement



IV. PROPOSED METHOD
The main objective of this work is to compare the efficiency of classifying seizure and healthy subjects using the Fast Fourier Transform, based on frequency ranges such as 0.5 to 4 Hz (delta), 4 to 8 Hz (theta), 8 to 13 Hz (alpha), 13 to 30 Hz (beta) and > 30 Hz (gamma) [11]. We measure the EEG signal frequency ranges relating to seizure, divide them into the five frequency bands δ, θ, α, β and γ covering the total range, and extract each frequency band.
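A minimal sketch, assuming NumPy, of the band "elimination" step described above: the FFT bins outside a chosen rhythm band are zeroed and the band is transformed back (the random trace is a stand-in for real EEG data):

import numpy as np

def band_component(x, fs, lo, hi):
    """Keep only FFT bins with lo <= f < hi and return the time-domain band signal."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    X[(f < lo) | (f >= hi)] = 0              # zero everything outside the band
    return np.fft.irfft(X, n=len(x))

fs = 256
x = np.random.randn(fs * 10)                 # placeholder for a 10 s EEG trace
alpha = band_component(x, fs, 8, 13)         # alpha rhythm, per the table above
gamma = band_component(x, fs, 30, fs / 2)    # gamma: everything above 30 Hz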





EEG Hardware Brain Amp Amplifier
Kind of electrode Ag/Agcl
Hardware reference Electrodes,Neurofeedback
Sampling Rate 256 Hz
Hardware filter Hum notch filter
Software filter Band pass filter
Other Hardware None
Patient state during
reading
Relax sitting on chair

V. Flow chart of frequency analysis

The analysis proceeds through the following steps:
1. Start
2. EEG signal generation
3. Sampling of the signal (Fs = 256 Hz)
4. Generating the data set of the sampled signal (.m file)
5. Fast Fourier Transform
6. Eliminate the frequency ranges (δ, θ, α, β, γ)
7. Compare the frequency ranges between epileptic seizure and healthy subject
8. Result
9. Stop
VI. THE SYNTHESIS ANALYSIS METHOD FOR EXTRACTING FEATURE WAVEFORMS IN EPILEPTIC EEG
The EEG waves of epileptics consist of spike waves, slow waves, sharp waves, and their combinations, such as spike-and-slow-wave and sharp-and-slow-wave complexes. What the literature mostly mentions is the detection of spike waves and sharp waves [12-14]. However, while spike waves and sharp waves often emerge during the epileptic outbreak, in most cases patients are between seizures, when the spike and sharp waves reduce greatly or even do not emerge, although they carry significant clinical indications for epileptic diagnosis. In the EEG signals of such patients there will be more slow waves and some complex waves against a background of slow waves, and their emergence can indicate the underlying pathology for epileptic classification and focus localization. But the detection and analysis of slow waves has rarely been mentioned in the literature and product information of EEG instruments available to the authors, because the amplitude and frequency span of slow waves is wider and the difference between waveforms is great. At the same time, considering the multiformity of feature waveforms emerging in some pathologies, which cannot be extracted by using one or two signal analysis methods alone, multiple signal analysis methods are synthesized in the virtual EEG instrument to detect the feature waveforms in the multi-channel EEG signals automatically [15, 16].
VII. FEATURE EXTRACTION
The most important task here is to extract different features from the distributed frequency of the FFT, as shown in Fig. 3, which directly dictates the detection and classification precision. In this work the FFT is processed for two types of subject, epileptic seizure and healthy, whose frequency distributions are plotted in red and green, respectively.
[Flow chart: Start → EEG signal generation → sampling of the signal at Fs = 256 Hz → generating the data set of the sampled signal (.m file) → fast Fourier transform → eliminate frequency ranges (δ, θ, α, β, γ) → compare frequency ranges between epileptic seizure and healthy subject → result → Stop]
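A minimal sketch of the elimination step in this flow, under the assumption that eliminating a band means zeroing its FFT bins before the comparison (the band edges follow Section IV; the random signals stand in for the real data sets):

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 128)}

def eliminate_band(signal, fs, band):
    """Zero out the FFT bins of one frequency band; return freqs and |X|."""
    X = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    lo, hi = BANDS[band]
    X[(freqs >= lo) & (freqs < hi)] = 0.0
    return freqs, np.abs(X)

fs = 256
seizure = np.random.randn(4 * fs)     # placeholder for the seizure data set
healthy = np.random.randn(4 * fs)     # placeholder for the healthy data set
for band in BANDS:
    _, Xs = eliminate_band(seizure, fs, band)
    _, Xh = eliminate_band(healthy, fs, band)
    # compare the remaining spectral energy of the two subject classes
    print(band, Xs.sum(), Xh.sum())
```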

Seizure recording details for one patient:
Source: record eegmmidb/S001/S001R01.edf
Start: [16:15:00.000 12/08/2013]; value has 1 row (signal) and 1600 columns (samples/signal)
Duration: 0:10
Sampling frequency: 160 Hz
Sampling interval: 0.00625 sec
Row 1: Signal Fc2., Gain 1, Base 0, Units uV


Fig.2 FFT of epileptic and healthy subject
VIII. Frequency Elimination

Fig.3 for frequency (δ)

Fig.4 for frequency (θ)


Fig.5 for frequency (α)



Fig.6 for frequency (β)



Fig.7 Frequency more than (γ)


Fig.8 Frequency more than 30 Hz

Fig.9 Frequency more than 40 Hz


Fig.10 Frequency more than 50 Hz
IX. RESULT
The complexity of feature extraction and its implementation is another important criterion while developing a feature vector for multiclass epileptic seizure classification. It is found that the frequency of spikes in epileptic seizure is lower in comparison to the healthy subject. From the above analysis we can easily extract the features of the EEG signals for healthy and unhealthy subjects. The results of this study confirm our hypothesis that it is possible to prospectively predict an epileptic seizure in patients. This frequency analysis algorithm is multichannel, except for the initialization stage after the occurrence of the first seizure in each patient. In the following plots we can analyze the repetition of frequency between the healthy subject and the epileptic seizure.

Fig.11 Overlapping Frequency of Healthy & Epileptic Seizure    Fig.12 Overlapping Frequency of Healthy & Epileptic Seizure
CONCLUSION
Advantages of EEG signal extraction: feature extraction and classification are used for the investigation of the following clinical problems [18-20]: (i) monitoring alertness, coma, and brain death; (ii) locating areas of damage following head injury, tumour, and stroke; (iii) testing afferent pathways (by evoked potentials); (iv) monitoring cognitive engagement (alpha rhythm); (v) producing biofeedback situations; (vi) controlling anesthesia depth (servo anesthesia); (vii) investigating epilepsy and locating seizure origin; (viii) testing epilepsy drug effects; (ix) assisting in experimental cortical excision of epileptic focus; (x) monitoring brain development; (xi) testing drugs for convulsive effects; (xii) investigating sleep disorders and physiology; (xiii) investigating mental disorders; (xiv) providing a hybrid data recording system together with other imaging modalities.
REFERENCES
[1] P. Rajdev, M. Ward, J. L. Rickus, R. M. Worth and P. Irazoqui, "Real-time seizure prediction from local field potentials using an adaptive Wiener algorithm," Computers in Biology and Medicine, Elsevier, vol. 40, no. 1, pp. 97-108, 2010.
[2] L. D. Iasemidis, D. S. Shiau, W. Chaovalitwongse, J. C. Sackellares, P. M. Pardalos, J. C. Principe, P. R. Carney, A. Prasad, B. Veeramani and K. Tsakalis, "Adaptive Epileptic Seizure Prediction System," IEEE Trans. on Biomed. Eng., vol. 50, no. 5, pp. 616-627, 2003.
[3] Ji Zhong, Qin Shuren and Peng Liling, "Time-Frequency Analysis of EEG Sleep Spindles Based Upon Matching Pursuit," ISIST2002 Proceedings, 2nd International Symposium on Instrumentation Science and Technology, vol. 2, pp. 671-675.
[4] Zhong Ji and Shuren Qin, "Detection of EEG Basic Rhythm Feature by Using Band Relative Intensity Ratio (BRIR)," 28th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2003.
[5] Zhong Ji and Shuren Qin, "Detection of EEG Basic Rhythm Feature by Using Band Relative Intensity Ratio (BRIR)," 28th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2003), vol. VI, pp. VI-429-VI-432, April 2003.
[6] E. Dubois and A. Venetsanopoulos, "A new algorithm for the radix-3 FFT," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 26, pp. 222-225, 1978.
[7] R. Stasinski, "Radix-K FFTs using K-point convolutions," IEEE Transactions on Signal Processing, vol. 42, pp. 743-750, 1994.
[8] H. Adeli, S. Ghosh-Dastidar and N. Dadmehr, "A wavelet chaos methodology for analysis of EEGs and EEG subbands to detect seizure and epilepsy," IEEE Trans. Biomedical Eng., vol. 54, no. 2, pp. 205-211, 2007.
[9] Y. Suzuki, Toshio Sone and K. Kido, "A new FFT algorithm of radix 3, 6, and 12," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 34, pp. 380-383, 1986.
[10] "Automated Neural Network Detection of EEG Spikes," IEEE Engineering in Medicine and Biology, ISSN 0739-5175, 1995(3/4).
[11] Zhang Tong, Yang Fusheng and Tang Qingyu, "Automatic Detection and Classification of Epileptic Waves in EEG: A Hierarchical Multi-Method Integrated Approach," Chinese Journal of Biomedical Engineering, vol. 17, no. 1, 1998(3).
[12] K. Arakawa, D. H. Fender, H. Harashima, et al., "Separation of a nonstationary component from the EEG by a nonlinear digital filter," IEEE Trans. on BME, vol. 33, no. 7, pp. 1809-1812, 1986.
[13] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford: Oxford University Press, 1995.
[14] Jung Tzyy-Ping, Colin Humphries and Lee Te-Won, "Extended ICA Removes Artifacts for Electroencephalographic Recording," Advances in Neural Information Processing Systems, 1998(10), pp. 894-900.
[15] Aapo Hyvarinen and Erkki Oja, "Independent Component Analysis: Algorithm and Application," Neural Networks, 1999(4).
[16] Wu Xiaopei, Feng Huanqing, Zhou Heqin, et al., "Independent Component Analysis and Its Application for Preprocessing EEG," Beijing Biomedical Engineering, vol. 20, no. 1, 2001(3).
[17] L. Steven, B. Benjamin, D. Thorsten and M. Klaus-Robert, "Introduction to machine learning for brain imaging," NeuroImage, vol. 56, no. 2, pp. 387-399, 2011.
[18] H. Adeli, Z. Zhou and N. Dadmehr, "Analysis of EEG records in an epileptic patient using wavelet transform," J. Neurosci. Methods, vol. 123, no. 1, pp. 69-87, 2003.
[19] O. Rosso, S. Blanco and A. Rabinowicz, "Wavelet analysis of generalized tonic-clonic epileptic seizures," Signal Processing, vol. 83, no. 6, pp. 1275-1289, 2003.
[20] R. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David and C. Elger, "Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state," Phys. Rev. E, vol. 64, pp. 061907-1-061907-8, 2001.


Parental Controlled Social Network with Multiparty Access Control and
String Search
Anu P Salim¹, Reeba R¹

¹Department of Computer Science and Engineering, Sree Buddha College of Engineering, Alappuzha, Kerala, India
E-mail- anupsalim@gmail.com

Abstract- Online social networks, or simply social networks, are one of the important emerging services provided on the Internet. They are very popular and powerful tools for making and finding friends and for identifying other people who share similar interests. This paper introduces a new online social network with two new techniques: one improves the performance of information collection using string transformation, and the other enables the protection of shared data associated with multiple users in the OSN. A parental control is also provided to control the activities of kids in the social network.

Keywords- Social Networks, Multiparty Authorization, Social Search, String Transformation, Parental Control, Graph.
INTRODUCTION
Online social networks (OSNs) have become a new networking platform for connecting people through a variety of mutual
relationships. Social Network Services (SNS) such as Facebook, Friendster, MySpace and Orkut have established themselves as very
popular and powerful tools for making and finding friends and for identifying other people who share similar interests. The dynamics and evolution of social networks are a very interesting but at the same time very challenging area. This paper considers the formation and growth of one such structure.
A typical OSN provides each user with a virtual space containing profile information, a list of the user's friends, and web pages, such as the Timeline in Facebook, where users and friends can post contents and leave messages. A user profile usually includes information with respect to the user's personal details. In addition, users can not only upload content into their own or others' spaces but also tag other users who appear in the content. Each tag is an explicit reference that links to a user's space. For the protection of user data, current OSNs indirectly require users to be system and policy administrators for regulating their data, where users can restrict data sharing to a specific set of trusted users. OSNs often use user relationships and group membership to distinguish between trusted and untrusted users. Although OSNs currently provide simple access control mechanisms allowing users to govern access to information contained in their own spaces, users unfortunately have no control over data residing outside their spaces. To address this issue, preliminary protection mechanisms have been offered by existing OSNs.
The search behavior of Web users often reflects that of others who have similar interests or similar information profiles in social networks. Social search, or a social search engine, is a type of search method that tries to determine the relevance of search results by considering interactions or contributions of users. The premise is that by collecting and analyzing information from a user's explicit or implicit social network, the accuracy of search results can be improved. The most common social search scenario is that a user in the social networking site submits a query to the search engine associated with it. The search engine then computes an ordered list of the most relevant results using a ranking algorithm. The search engine collects information that lies in the neighborhood of the user and relates to the results in the list, and utilizes this information to reorder the list into a new list, which is presented to the user. Using string transformation for searching is a new technique in social search. String transformation is about generating one string from another string, such as "OSN" from "Online Social Network".
This paper introduces a new OSN with a multiparty authorization framework (MAF) to model and realize multiparty access control: an effective and flexible access control mechanism accommodating the special authorization requirements coming from multiple associated users for collaboratively managing shared data. It also provides string transformation for searching in the OSN.
Kids can also use this OSN because a parental control is provided for them. The challenge is to help children enjoy the benefits of going online while avoiding the risks. To solve this issue we put forward a browser which helps in avoiding inappropriate content reaching children and informs parents about the content children are surfing. The parent and child should be registered in the browser in order to access the features. When accessing the social networking site, the child is under verification. The search keywords entered by the child and the searched contents are mailed to the parent's email id provided during registration. The mail consists of the time and date of access, a screenshot of the accessed or searched contents, and the keywords provided during the search. Thus it helps parents to continuously verify the Internet content browsed by the child. It also helps parents to funnel children towards child-friendly options and remove the chance of accidental exposure to inappropriate content.

RELATED WORK
A. Access Control in OSN

Access control for OSNs is still a relatively new research area. Several access control models for OSNs have been introduced. Early
access control solutions for OSNs introduced trust-based access control inspired by the developments of trust and reputation
computation in OSNs. The D-FOAF system [16] is primarily a Friend of a Friend (FOAF) ontology-based distributed identity
management system for OSNs, where relationships are associated with a trust level, which indicates the level of friendship between
the users participating in a given relationship. Carminati et al. [15] introduced a conceptually-similar but more comprehensive trust-
based access control model. This model allows the specification of access rules for online resources, where authorized users are
denoted in terms of the relationship type, depth, and trust level between users in OSNs. They further presented a semi-decentralized
discretionary access control model and a related enforcement mechanism for controlled sharing of information in OSNs [7]. Fong et
al. [14] proposed an access control model that formalizes and generalizes the access control mechanism implemented in Facebook,
admitting arbitrary policy vocabularies that are based on theoretical graph properties. Gates [8] described relationship-based access
control as one of new security paradigms that addresses unique requirements of Web 2.0. Then, Fong [13] recently formulated this
paradigm called a Relationship-Based Access Control (ReBAC) model that bases authorization decisions on the relationships between
the resource owner and the resource accessor in an OSN. However, none of these existing work could model and analyze access
control requirements with respect to collaborative authorization management of shared data in OSNs.

B. Social Search Technique

There are many social search techniques. Most of the searching is based on the relationships between the nodes in the graph. The retrieved information is ranked on the basis of the relationship between the nodes; if the link between nodes is strong, the result is ranked first. This paper proposes a social search based on string transformation together with relationships. Some of the existing techniques are described in the following section.
Search Based on Relationship
This is the common technique for social search. The concept of a strong link [1] is introduced: if two nodes communicate regularly, a strong link is formed between them. Similarities between articles and keywords are measured and the search results are ranked based on them. It also combines keyword density and social relations into a value which is called the social ranking value, as sketched below.
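As an illustration, a minimal sketch of such a combined score (the linear weighting below is an assumption for illustration, not the exact formula of [1]):

```python
def social_ranking_value(keyword_density, relation_strength, alpha=0.5):
    """Combine content relevance and social-link strength into one value."""
    return alpha * keyword_density + (1 - alpha) * relation_strength

# Each candidate result carries (keyword density, relation strength)
results = {"post1": (0.8, 0.2), "post2": (0.5, 0.9)}
ranked = sorted(results, key=lambda r: social_ranking_value(*results[r]),
                reverse=True)
print(ranked)                          # ['post2', 'post1'] with alpha = 0.5
```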
Hybrid Social Search
The hybrid social search model harnesses the user's social relations to generate satisfying results [2]. Upon receiving a user's query, the search engine aims to return a ranked list of answerers who might give the correct answer to that query. The Topic Relevance Rank (TRR) algorithm is used to evaluate a user's professional score on the relevant topics, and the Social Relation Rank (SRR) algorithm is used to capture the social strength between users.
SMART Finder
The social search behavior of a user often reflects that of others who have similar interests or similar information profiles in the network. Therefore, we can locate users interested in certain topics or areas and then keep track of their preferences in terms of search results. SMART Finder [3] is an efficient search to pinpoint relevant and reliable information about these people. It searches for people whose social relationships are highly ranked according to specific topics, and it can also identify people who are highly associated with each other with regard to the search topic.
Agent-Based Mining
An agent-based framework was developed that mines the social network of a user to improve search results [4]. Agents in the system utilize the connections of a user in the social network to facilitate the search for items of interest. An agent observes user activity such as ratings and comments, and retrieves for the searcher those users who have commented on or been tagged by the user.
Search Based on Framework
The HTML framework, or template, is extracted from the social networks [5] and this information is used for searching. Similarity between the frameworks of users is the key for searching; such users have some relation, so it is used for ranking.



C. Parental Control

Recent software packages for parental control are Qustodio and Avira. Qustodio is a parental control designed for today's busy, web-savvy parents: no hardware, no complicated setup, just a simple web-based dashboard that gives the information. Whether the kids use the family computer, a personal laptop, a tablet, or a mobile phone, Qustodio is there to set limits, block questionable sites, and keep kids

safe. The parents get the details only if it is installed. Avira is a social network protection: the parent registers on the site and gets the kid's browsing details in a mail.

SOCIAL NETWORK STRUCTURE

The social network structure can be modeled as a graph G with individuals representing nodes and relationships among them representing edges. The label associated with each edge indicates the type of the relationship. Edge direction denotes that the initial node of an edge establishes the relationship and the terminal node of the edge accepts the relationship. The number and types of supported relationships rely on the specific OSN and its purposes.
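A minimal sketch of this labeled, directed graph model (class and relationship names are illustrative; the paper does not prescribe a particular implementation):

```python
from collections import defaultdict

class SocialGraph:
    """Directed graph: edge (u, v) means u initiated and v accepted."""
    def __init__(self):
        self.edges = defaultdict(dict)          # u -> {v: relationship label}

    def add_relationship(self, initiator, acceptor, label):
        self.edges[initiator][acceptor] = label

    def relationship(self, u, v):
        return self.edges[u].get(v)

g = SocialGraph()
g.add_relationship("A", "B", "friend")
g.add_relationship("A", "C", "relative")
print(g.relationship("A", "C"))                 # relative
```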
















Fig. 1. Social Network Design

A. Multiparty Authorization

To enable collaborative authorization management of data sharing in OSNs, it is essential for multiparty access control policies to be in place to regulate access over shared data, representing authorization requirements from multiple associated users. The friends, or neighboring nodes, of a particular user are categorized into three levels: a relative, which is the high-priority friend who can access all the information of the user (the user can select the relative nodes); a close friend, who can access some information, with the user deciding which information is accessible; and finally a friend, with the lowest priority, who can only access the basic information of the user. A flexible access control mechanism in a multi-user environment like OSNs is necessary to allow multiple controllers associated with the shared data item to specify access control policies. For a specific data item there is an owner, and controllers including the publisher, tagger and sharer of the data also desire to regulate access to the shared data. We define these controllers as follows:

Owner: Let d be a data item in the space m of a user u in the social network. The user u is called the owner of d. The owner can decide which level of friends can access the data item d. This also enables the owner to discover potential malicious activities in collaborative control; the detection of collusion behaviors in collaborative systems has been addressed by recent work.
Publisher: Let d be a data item published by a user u in someone else's space in the social network. The user u is called the publisher of d. The data item is published only after the authorization of both the owner and the publisher. If the owner provides access to relatives and the publisher provides access to close friends, then the intersection of relatives and close friends is taken and the content is accessible to these intersection nodes.
Tagger: Let d be a data item in the space of a user in the social network, and let T be the set of tagged users associated with d. A user u is called a tagger of d if u ∈ T. In this scenario, authorization requirements from both the owner and the tagger should be considered; otherwise, the tagger's privacy may be violated. A shared content can have multiple taggers.
Sharer: Let d be a data item shared by a user u from someone else's space to his/her space in the social network. The user u is called a sharer of d. In a typical content sharing pattern, the sharing starts with an originator (the owner or a publisher who uploads the content) publishing the content, and then a sharer views and shares the content.
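A minimal sketch of the intersection rule these definitions imply (the grant sets are illustrative; this is not the paper's actual access-control code):

```python
def accessible_users(controller_grants):
    """Intersect the user sets permitted by every controller of one item.

    controller_grants: list of sets, one per controller
    (owner, publisher, tagger, sharer) of the shared data item.
    """
    result = None
    for grant in controller_grants:
        result = grant if result is None else result & grant
    return result or set()

owner_grant = {"A", "B", "C", "D", "E", "F"}        # owner allows relatives
tagger_grant = {"A", "B", "C", "D", "E", "F", "G"}  # tagger allows relatives
sharer_grant = {"A", "B", "C", "D", "E", "F"}       # sharer allows relatives
print(accessible_users([owner_grant, tagger_grant, sharer_grant]))
```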





























Fig. 2. Multiparty Authorization

Let A be the owner or publisher of a data item such as a photograph, where B is a tagger and C is a sharer of the photograph. The photograph is accessible only to those in the intersection of the relative friends of A, B and C, provided A, B and C all grant access to their relatives. So the photograph is visible or accessible to A, B, C, D, E and F; that is, it is accessible to the nodes in the connected subgraph with the same weight.

B. String Search

There are two possible settings for string transformation: one is to generate strings within a dictionary, and the other is to do so without a dictionary. In the former, string transformation becomes approximate string search, which is the problem of identifying strings in a given dictionary that are similar to an input string. In approximate string search, it is usually assumed that the model (similarity or distance) is fixed and the objective is to efficiently find all the matching strings in the dictionary. Most existing methods attempt to find all the candidates within a fixed range and employ n-gram-based or trie-based algorithms. There are also methods for finding the top k candidates by using n-grams. Efficiency is the major focus of these methods and their similarity functions are predefined.
When a new node is created, that is, a new user registers in the social networking site, the name of the user (the string) and the corresponding possible strings that can be generated from the original string are entered into the dictionary. Whenever a query is entered, the search checks the dictionary and fetches the corresponding data item. The ranking of search results is based on the three levels: top-ranked data is at the relative level, then the close-friend level, with friends at the lowest level. For searching an unknown friend, the ranking is based on the number of mutual friends between these levels.
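A minimal sketch of this dictionary-based lookup (the variant-generation rules below are simple illustrative examples, not the paper's full string-transformation model):

```python
def variants(name):
    """Generate possible strings from an original name, e.g. 'OSN'."""
    parts = name.split()
    return {name, name.lower(), "".join(parts),
            "".join(p[0] for p in parts).upper()}

dictionary = {}                       # variant string -> original user name

def register(name):
    """At registration time, store the name and all its variants."""
    for v in variants(name):
        dictionary[v] = name

register("Online Social Network")
print(dictionary.get("OSN"))          # Online Social Network
```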

C. Parental Control

The parental control is a web application associated with the social networking site. The parent needs to register in this application along with the child's details. The application then sends a confirmation mail to the parent's email address with the parent username and password along with the child username and password. If a child wants to register in the social networking site, he/she can only use the username and password generated by the application for the first time. The parent can set the time a kid may spend on the social networking site. The search keywords provided by the child and a screenshot of the searched content are mailed to the parent's mail id provided during registration. Thus it helps parents to funnel children towards child-friendly options and remove the chance of accidental exposure to inappropriate content. The application is a one-time installation; registration in the application is based on the MAC address of the system.
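As an illustration only, a minimal sketch of composing such a notification mail with Python's standard library (the addresses, SMTP host, and screenshot path are placeholders; the paper does not describe the application at this level of detail):

```python
import smtplib
from email.message import EmailMessage

def notify_parent(parent_addr, keywords, screenshot_path, when):
    """Mail the parent the search keywords and a screenshot attachment."""
    msg = EmailMessage()
    msg["Subject"] = f"Child activity report - {when}"
    msg["From"] = "noreply@example.org"            # placeholder sender
    msg["To"] = parent_addr
    msg.set_content(f"Accessed at {when}\nSearch keywords: {keywords}")
    with open(screenshot_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image",
                           subtype="png", filename="screenshot.png")
    with smtplib.SMTP("localhost") as server:      # placeholder SMTP host
        server.send_message(msg)
```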





CONCLUSION

The concepts introduced in this paper, such as multiparty authorization, string search and parental control, improve the efficiency of a social network. Multiparty authorization provides better security. String search is a new concept in social networks and improves search effectiveness. The content surfed by a kid can be verified by a parent using the parental control in the online social network, and fake profiles can be eliminated using this parental control.

REFERENCES:
[1] Hsiao-Hsuan Lu, I-Hsien Ting and Shyue-Liang Wang, "A Novel Search Engine Based on Social Relationships in Online Social Networking Website," Advances in Social Networks Analysis and Mining (ASONAM), 2012 IEEE/ACM International Conference, 2012.
[2] Guo Liang, Que Xirong, Cui Yidong, Wang Wendong and Cheng Shiduan, "A hybrid social search model based on the user's online social networks," Cloud Computing and Intelligent Systems (CCIS), 2012 IEEE 2nd International Conference, doi: 10.1109/CCIS.2012.6664235, 2012.
[3] Park GunWoo, Lee SooJin and Lee SangHoon, "To Enhance Web Search Based on Topic Sensitive Social Relationship Ranking Algorithm in Social Networks," Web Intelligence and Intelligent Agent Technologies (WI-IAT '09), IEEE/WIC/ACM International Joint Conference, doi: 10.1109/WI-IAT.2009.322, 2009.
[4] Anil Gursel and Sandip Sen, "Improving Search in Social Networks by Agent Based Mining," 2008.
[5] Shen Yang, Liu Zi-tao, Luo Cheng and Li Ye, "Research on Social Network Based on Meta-search Engine," Web Information Systems and Applications Conference (WISA 2009), 2009.
[6] Wang, Z., Xu, G., Li, H., Zhang, M., "A Probabilistic Approach to String Transformation," IEEE Transactions on Knowledge and Data Engineering, 2013.
[7] Carminati, B., Ferrari, E., Perego, A.: Enforcing access control in web-based social networks. ACM Transactions on Information and System Security (TISSEC) 13(1), 1-38 (2009)
[8] Carrie, E.: Access Control Requirements for Web 2.0 Security and Privacy. In: Proc. of Workshop on Web 2.0 Security & Privacy (W2SP), Citeseer (2007)
[9] Choi, J., De Neve, W., Plataniotis, K., Ro, Y., Lee, S., Sohn, H., Yoo, H., Kim, C., et al.: Collaborative Face Recognition for Improved Face Annotation in Personal Photo Collections Shared on Online Social Networks. IEEE Transactions on Multimedia, 1-14 (2010)
[10] Elahi, N., Chowdhury, M., Noll, J.: Semantic Access Control in Web Based Communities. In: Proceedings of the Third International Multi-Conference on Computing in the Global Information Technology, pp. 131-136. IEEE, Los Alamitos (2008)
[11] Fang, L., LeFevre, K.: Privacy wizards for social networking sites. In: Proceedings of the 19th International Conference on World Wide Web, pp. 351-360. ACM, New York (2010)
[12] Fisler, K., Krishnamurthi, S., Meyerovich, L.A., Tschantz, M.C.: Verification and change-impact analysis of access-control policies. In: ICSE 2005: Proceedings of the 27th International Conference on Software Engineering, pp. 196-205. ACM, New York (2005)
[13] Fong, P.: Relationship-Based Access Control: Protection Model and Policy Language. In: Proceedings of the First ACM Conference on Data and Application Security and Privacy. ACM, New York (2011)
[14] Fong, P., Anwar, M., Zhao, Z.: A privacy preservation model for facebook-style social network systems. In: Backes, M., Ning, P. (eds.) ESORICS 2009, LNCS, vol. 5789, pp. 303-320. Springer, Heidelberg (2009)
[15] Carminati, B., Ferrari, E., Perego, A.: Rule-based access control for social networks. In: Meersman, R., Tari, Z., Herrero, P. (eds.) OTM 2006 Workshops, LNCS, vol. 4278, pp. 1734-1744. Springer, Heidelberg (2006)
[16] Kruk, S., Grzonkowski, S., Gzella, A., Woroniecki, T., Choi, H.: D-FOAF: Distributed identity management with access rights delegation. In: Mizoguchi, R., Shi, Z.-Z., Giunchiglia, F. (eds.) ASWC 2006, LNCS, vol. 4185, pp. 140-154. Springer, Heidelberg (2006)







Performance Analysis of Multi-Cylinder C.I. Engine by using Various
Alternate Fuels
N. BalajiGanesh¹, Dr. B Chandra Mohan Reddy²

¹Assistant Professor, Mechanical Department, Aditya College of Engineering, Madanapalle, Andhra Pradesh
²Assistant Professor, Mechanical Department, JNTUA Anantapur, Andhra Pradesh
E-mail- balajiganeshn@gmail.com

Abstract: With modernization and the increase in the number of automobiles worldwide, the consumption of diesel and gasoline has enormously increased. As petroleum is a non-renewable source of energy and petroleum reserves are scarce nowadays, there is a need to search for alternative fuels for automobiles. The intensive search for alternative fuels for compression ignition engines has focused attention on fuels which can be derived from biomass; in this regard, cashew nut oil and cottonseed oil are found to be potential fuels for C.I. engines. The properties of cashew nut oil and cottonseed oil are determined by using standard methods. Experiments are conducted with the engine fuelled with blends of cashew nut oil and cottonseed oil in diesel in various proportions such as 10%, 20%, 30% and 40% by volume, and the performance and emission characteristics of the C.I. engine are investigated at different load conditions.
Keywords: Alternate Fuels, Diesel engine, Cashew nut oil, Cottonseed oil
1. INTRODUCTION
In recent years, a lot of effort has been made all over the world to reduce the dependency on petroleum products for power generation and transportation. Vegetable oils and biomass-derived fuels have received much attention in the last few decades. These fuels have been found to be potential fuels for an agriculture-based country like India. Biomass is a source of fuel which is renewable, eco-friendly and largely available. Ethanol as a bio-fuel, derived from sugarcane, has been used in gasoline engines for many years. However, bio-fuels are, in general, 3-5 times more expensive than fossil fuels.
Vegetable oils have been found to be a potential alternative to diesel. They have properties comparable to diesel and can be used to run a compression ignition engine with minor modifications. The use of vegetable oils will also reduce net CO2 emissions. Altin Recep et al. studied the effect of vegetable oil fuels and their methyl esters injected in a diesel engine. They observed that vegetable oils lead to problems such as gum formation, poor flow and atomization, and high smoke and particulate emissions. Due to their complex structure and composition, gas-phase emissions are higher. In order to use these fuels in diesel engines, a high compression ratio and ignition assistance devices are required.
In the light of the above, it becomes essential to search for an alternative fuel which can replace petroleum products. The production of cashew nut shell liquid is very simple and its auto-ignition properties are almost the same as those of diesel fuels; hence it can be used in diesel engines with little or no engine modification. Based on these facts, cashew nut shell liquid can be used as a substitute for diesel fuel.
India is the fifth largest cotton producing country in the world today, the first four being the US, China, Russia and Brazil. Our country produces about 8% of the world's cotton. Cotton is a tropical plant.
Cottonseed oil is a vegetable oil extracted from the seed of cotton after the cotton lint has been removed; after being freed from the linters, the seeds are shelled, then crushed, pressed and treated with solvent to obtain crude cottonseed oil. Cottonseed oil is one of the most widely used oils, and it is relatively inexpensive and also readily available.

The objective of the present work is to find out the suitability of cashew nut oil, cottonseed oil and their blends with diesel; the aim is to replace diesel either partially in the form of a blend or as a total replacement. In this project, cashew nut oil and cottonseed oil-diesel blends are taken up for study on a 10 HP, multi-cylinder, four-stroke, water-cooled AMBASSADOR diesel engine; the performance for different blends is tested and performance curves are drawn.
2. EXPERIMENTAL INVESTIGATION
The experiments were conducted by considering various parameters. The tests were conducted for cashew nut oil, cottonseed oil and their blends at different proportions (10%, 20%, 30% and 40%) on the conventional engine, from no-load to maximum-load conditions. Readings such as the time taken to consume 20 cc of fuel, the speed of the engine, temperatures, etc., were noted. The observations were recorded in a tabular column and calculations were made using appropriate equations.
The experiments were conducted on a multi-cylinder Hindustan four-stroke diesel engine. The general specifications of the engine are given in Table 1. From the engine readings the performance parameters are computed and the graphs are plotted.
Hindustan engines for generating sets are fuel efficient, with lube oil consumption less than 1% of S.C.F., the lowest among comparable brands. They are equipped with heavy flywheels incorporating 4% governing on the fuel injection equipment; this completely avoids voltage fluctuations. In case of emergency, the unique overload stop feature safeguards the equipment by shutting down the engine automatically.
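For reference, a minimal sketch of the standard performance relations used with such readings (all measured values below are placeholders for one test point, not the actual test data):

```python
# Standard C.I. engine performance relations from one set of readings.
t_20cc = 25.0       # time to consume 20 cc of fuel (s) - placeholder
rho_fuel = 0.85     # fuel density (kg/l), cf. Table 2
CV = 42000.0        # calorific value (kJ/kg), cf. Table 2
BP = 5.0            # brake power (kW) from dynamometer load and speed

m_f = (20.0 / 1000.0) * rho_fuel / t_20cc    # fuel mass flow rate (kg/s)
bsfc = m_f * 3600.0 / BP                     # brake specific fuel consumption (kg/kWh)
bte = BP / (m_f * CV) * 100.0                # brake thermal efficiency (%)
print(f"BSFC = {bsfc:.3f} kg/kWh, BTE = {bte:.1f} %")
```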
Table-1. Engine specifications.

Item Specifications
Engine power 10 H.P
Cylinder bore 84 mm
Stroke length 110 mm
Arrangement of cylinder Vertical
Engine speed 1500 rpm
Compression ratio 15:1

Table 2: Properties of diesel, cashew nut oil and cottonseed oil

Property                  | Diesel | Cashew nut oil | Cottonseed oil
Calorific value (kJ/kg)   | 42000  | 37300          | 38000
Density at 30 °C (kg/l)   | 0.85   | 0.902          | 0.912
Viscosity at 40 °C        | 2.7    | 49.62          | 55.61
Flash point (°C)          | 52     | 167            | 207
Fire point (°C)           | 65     | 180            | 230
Cetane number             | 50     | 49             | 52

Table 3: Proportions of diesel, cashew nut oil and cottonseed oil blends

S.No | Blend       | Diesel (% vol) | Cashew nut oil (% vol) | Cottonseed oil (% vol)
1    | Diesel fuel | 100            | 0                      | 0
2    | B10         | 90             | 5                      | 5
3    | B20         | 80             | 10                     | 10
4    | B30         | 70             | 15                     | 15
5    | B40         | 60             | 20                     | 20


Fig 1: Test rig engine





COMPONENTS OF EXPERIMENTAL SETUP
Manometer

Fig 2: Manometer
Loading System





Fig 3: Dynamometer
Air box system

Fig 4: Load indicator


3. Results and Discussions

Graph No.1: Brake power Vs Specific fuel consumption

In the above graph, brake power is taken on the x-axis and BSFC on the y-axis. The BSFC of the blends has been compared with diesel fuel at various loads, as shown in the figure. It is observed that the BSFC is lowest for B20 over the entire load range.
Graph No 2: Brake power Vs Mechanical Efficiency


In the above graph, brake power is taken on the x-axis and mechanical efficiency on the y-axis. The mechanical efficiency of the blends has been compared with diesel fuel at various loads, as shown in the figure. It is observed that the mechanical efficiency for the B20 blend is higher over the entire load range.

[Graph 1 plot: B.P vs BSFC; curves for Diesel, B10, B20, B30, B40]

Graph No 3: Brake power Vs Volumetric efficiency


In the above graph, brake power is taken on the x-axis and volumetric efficiency on the y-axis. The volumetric efficiency of the blends has been compared with diesel fuel at various loads, as shown in the figure. It is observed that the volumetric efficiency for the B40 blend is higher over the entire load range.
Graph No 4: Brake power vs Brake thermal efficiency


In the above graph, brake power is taken on the x-axis and brake thermal efficiency on the y-axis. The brake thermal efficiency of the blends has been compared with diesel fuel at various loads, as shown in the figure. It is observed that the brake thermal efficiency for the B40 blend is higher for the first three loads; over the remaining load range, B20 is higher than the other blends.


[Graph 4 plot: B.P vs brake thermal efficiency; curves for Diesel, B10, B20, B30, B40]

Graph No 5: Brake Power v/s Indicated thermal efficiency


In the above graph, brake power is taken on the x-axis and indicated thermal efficiency on the y-axis. The indicated thermal efficiency of the blends has been compared with diesel fuel at various loads, as shown in the figure. It is observed that the indicated thermal efficiency for the B40 blend is higher for the first four loads; over the remaining load range, B30 is higher than the other blends.
Graph No 6: Load vs Exhaust gas Temperature

The variation of exhaust gas temperature with load at various load conditions is depicted in Fig. 6. It is observed that the exhaust gas temperature increases with load because more fuel is burnt to meet the power requirement. It can be seen that in the case of diesel fuel operation the exhaust gas temperature ranges from 85 °C at low load to 275 °C at full load. For B10 and B20, at full load the exhaust gas temperature marginally increases to 322 °C and 315 °C respectively. The exhaust gas temperature for B40 varies from 141 °C at low load to 353 °C at full load. The higher exhaust gas temperature in the case of the cashew nut oil and cottonseed oil blends compared to diesel is due to the higher heat release rate.

[Graph 5 plot: B.P vs indicated thermal efficiency; curves for Diesel, B10, B20, B30, B40]

[Graph 6 plot: Load (%) vs exhaust gas temperature (°C); curves for Diesel, B10, B20, B30, B40]
Graph No 7: Load vs Carbonmonoxide

From Fig. 7, the variation of carbon monoxide with load can be observed for all the cashew nut oil and cottonseed oil-diesel fuel blends. The results show that the CO emission of the blends is lower than that of diesel fuel. With increase in power output, the CO emission gradually reduces for the blends, and the difference from the diesel-fuel CO values reduces significantly.
Graph No 8: Load vs Hydrocarbons

The variation of hydrocarbons with load for the tested fuels is depicted in Fig. 8. From the results, it can be noticed that the hydrocarbon concentration of the cashew nut oil and cottonseed oil-diesel blends is less than that of diesel fuel. With increase in power output, the HC emission gradually increases for the blends.

[Graph 7 plot: Load (%) vs carbon monoxide (% vol); curves for Diesel, B10, B20, B30, B40]
[Graph 8 plot: Load (%) vs hydrocarbons (% vol); curves for Diesel, B10, B20, B30, B40]

Graph No 9: Load vs Carbon dioxide

As shown in Fig. 9, the variation of carbon dioxide emission with load can be observed for diesel fuel and the cashew nut oil and cottonseed oil-diesel blends. From the results, it is observed that the amount of CO2 produced while using the blends is lower than with diesel fuel at all loads except full load. This may be due to late burning of fuel leading to incomplete oxidation of CO.
Graph No 10: Load vs Oxygen

The variation of oxygen in the exhaust with load for the cashew nut oil and cottonseed oil-diesel blends is shown in Fig. 10. It is clear that the oxygen present in the exhaust gas decreases as the load increases. Due to improved combustion, the temperature in the combustion chamber can be expected to be higher; when a higher amount of oxygen is also present, this leads to the formation of a higher quantity of NOx in the blends.

[Graph 9 plot: Load (%) vs carbon dioxide (% vol); curves for Diesel, B10, B20, B30, B40]
[Graph 10 plot: Load (%) vs oxygen (% vol); curves for Diesel, B10, B20, B30, B40]

4. CONCLUSIONS:
A multi-cylinder four-stroke compression ignition engine was operated successfully using blends of cashew nut oil, cottonseed oil and diesel as fuel. The following conclusions are made based on the experimental results.
a. The specific fuel consumption for blend B20 is lower when compared to diesel and all other blends over the entire load range.
b. The efficiencies such as brake thermal efficiency, indicated thermal efficiency and mechanical efficiency for the 20% blend are higher than for diesel and the other blends over the entire load range.
c. The volumetric efficiency for diesel is higher than for all the blends over the entire load range.
d. The exhaust gas temperature of blend B30 is less than that of diesel, which indicates effective use of the input energy.
e. Carbon monoxide emission in the exhaust gas reduces as the output power increases, but its concentration increases as the cashew nut oil and cottonseed oil fraction in the diesel blend increases.
f. Hydrocarbon emission is found to be lower in concentration than for diesel at all load conditions; for B20 and B40 the hydrocarbon emission is slightly higher than for the other blends.
g. Carbon dioxide emission increases as the load increases, but its concentration is less when compared to diesel fuel operation.
h. Oxygen content in the exhaust gas reduces as the load is increased; if a high oxygen content is present in the exhaust, it leads to the formation of NOx.
So, it is preferred to use the B20 blend as the best alternative to diesel, for the following reasons:
1. The lowest specific fuel consumption reduces the expenditure on fuel.
2. More of the developed power is utilized than with the other blends.
3. The low exhaust gas temperature results in decreased environmental pollution.
4. As the volumetric efficiency is good, a sufficient amount of air is available to the fuel, so emissions due to incomplete combustion are lowered.

REFERENCES:
1. Ganeshan, V., Internal Combustion Engines, Tata McGraw-Hill Publishing, New Delhi, 2002.
2. Jagadish Lal, Theory of Mechanisms and Machines, Metropolitan Book Co. Pvt. Ltd, New Delhi, 2004.
3. Heywood, John B., Internal Combustion Engine Fundamentals, McGraw-Hill Book Company, New Delhi, 1988.
4. Asfar, K.R., Hamed, H., "Combustion of fuel blends," Energy Conversion and Management, Vol. 39, Issue 10, pp. 1081-1093, 1998.
5. Carraretto, C., Macor, A., Mirandola, A., Stoppato, A., and Tonon, S., "Biodiesel as alternative fuel: Experimental analysis and energetic evaluations," Energy, 29(12-15), 2195-2211, 2004.
6. Agarwal, D., Kumar, L., and Agarwal, A.K., "Performance evaluation of a vegetable oil fuelled compression ignition engine," Renewable Energy, 33(6), 1147-1156, 2008.
7. Kesse, D.G., "Global warming - facts, assessment, countermeasures," J. Pet. Sci. Eng., 26, pp. 157-68, 2000.
8. Sridhar, G., Paul, P.J., Mukunda, H.S., "Biomass derived producer gas as a reciprocating engine fuel - an experimental analysis," Biomass and Bioenergy, 21, pp. 61-67, 2001.
9. Vellguth, G., "Performance of vegetable oils and their monoesters as fuels for diesel engines," SAE paper No. 831358, 1998.
10. Rao, P.S. and Gopalakrishna, K.V., "Use of non-edible vegetable oils as diesel engine fuels," J. Inst. Engg. India, 70(4), 24-29, 1989.
11. Michel, S.G. and Robert, L.M., "Combustion of fat and vegetable oil derived fuels in diesel engines," Prog. Energy Combustion Sci., 24, 125-64, 1998.
12. Ertan Alptekin and Mustafa Canakci, "Determination of the density and the viscosities of biodiesel-diesel fuel blends," Renewable Energy, (33), 2623-2630, 2006.
13. Agarwal, A.K. and Das, L.M., "Biodiesel development and characterization for use as a fuel in compression ignition engines," Trans. ASME, 123, 440-447, 2001.
14. Altin, R., Cetinkaya, S. and Yucesu, H.S., "The potential of using vegetable oil fuels as fuel in compression ignition engines," Energy Conversion Mangt., 42, 529-538, 2001.
15. Barsic, N.J. and Humke, A.L., "Performance and emissions characteristics of a naturally aspirated diesel engine with vegetable oil fuels," SAE paper No. 810262, 1996.
16. IS: 1448, Methods of test for petroleum and its products: Determination of flash point and fire point by Abel's apparatus, Bureau of Indian Standards, New Delhi, p. 20, 1998.
17. Henham, A.W.E., "Experience with alternate fuels for small stationary diesel engines: fuels for automotive and industrial diesel engines," I. Mech. E., (46), 117-22, 1990.
18. Rao, G.L.N., Saravanan, S., Sampath, S., Rajagopal, K., "Emission characteristics of a direct injection diesel engine fuelled with biodiesel and its blends," in Proceedings of the International Conference on Resource Utilization and Intelligent Systems, India, Allied Publishers Private Limited, pp. 353-356, 2006.

An Implementation of Proficient Rectified Probabilistic Packet Marking for
Tracing Attackers
Naveen Kumar S. Koregol¹, Chaithanyaprabhu A. S.², Raghavendra K³, Sayyed Johar⁴

¹Scholar (M.Tech), Department of Computer Science and Engineering, BTLIT, Bangalore, Karnataka, India
²Senior System Engineer, INFOSYS LIMITED, Mysore, Karnataka, India
³Asst. Prof, Department of Computer Science and Engineering, PESITM, Shivamogga, Karnataka, India
⁴Asst. Prof, Department of Computer Science and Engineering, JNNCE, Shivamogga, Karnataka, India
E-mail- naveen08cs@gmail.com

Abstract -- The probabilistic packet marking (PPM) algorithm is a promising technique to determine the Internet map, or an attack graph, that the attack packets traversed during a distributed denial-of-service attack. However, the PPM algorithm is not perfect, as its termination condition is not well specified in the literature. More notably, without a proper termination condition, the attack graph constructed by the PPM algorithm could be wrong. In this work, we provide a precise termination condition for the PPM algorithm and name the new algorithm the Rectified PPM (RPPM) algorithm. The most significant merit of the RPPM algorithm is that when the algorithm terminates, it guarantees that the constructed attack graph is correct, with a specified level of confidence. We carry out simulations on the RPPM algorithm and show that it can guarantee the correctness of the constructed attack graph under different probabilities that a router marks the attack packets and different structures of the network graph. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm.
Keywords -- RPPM, PPM, attack graph, DDoS, TPN, DoS, node
INTRODUCTION
The denial-of-service (DoS) attack has been a serious problem in recent days. DoS [7] defense research has grown into one of the major streams in network security. An assortment of techniques such as the pushback message [4], ICMP traceback [5], and packet filtering methods are the outcomes of this lively field of research. The probabilistic packet marking (PPM) algorithm has attracted the most interest for adding the IP traceback capability. The most fascinating point of this IP traceback approach is that it permits routers to encode certain information on the attack packets based on a predetermined probability. On receiving an ample number of marked packets, the victim (or a data collection node) can reconstruct the set of paths that the attack packets traversed and, hence, the victim can obtain the location(s) of the attacker(s).
The goal of the probabilistic packet marking algorithm is to obtain a constructed graph that is the same as the attack graph, where an attack graph is the set of paths the attack packets traversed, and a constructed graph is the graph returned by the PPM algorithm. To accomplish this goal, the information on the edges of the attack graph is encoded into the attack packets with the support of the routers between the attackers and the victim site. On the whole, the probabilistic packet marking algorithm is made up of two distinct processes: the packet marking process, which is carried out on the router side, and the graph reconstruction process, which is carried out on the victim side.

The packet marking process is intended to randomly encode edge information on the packets reaching the routers. Then, by utilizing this information, the victim carries out the graph reconstruction process to build the attack graph.
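For illustration, a minimal sketch of the edge-sampling idea behind these two processes, assuming a fixed marking probability at every router (a simplified single-path simulation, not the RPPM implementation itself):

```python
import random

P_MARK = 0.04                          # assumed marking probability

def mark_along_path(path):
    """Simulate one packet traversing `path`; return its final edge mark."""
    mark = None                        # (edge start, edge end, distance)
    for router in path:
        if random.random() < P_MARK:
            mark = (router, None, 0)   # router starts a new edge mark
        elif mark is not None:
            start, end, dist = mark
            # the next router completes the edge; later ones add distance
            mark = (start, router, dist) if end is None else (start, end, dist + 1)
    return mark

# Victim side: collect marks until every edge of the path has been seen.
path = ["R1", "R2", "R3", "R4"]
edges = set()
while len(edges) < len(path) - 1:
    m = mark_along_path(path)
    if m and m[1] is not None:
        edges.add((m[0], m[1]))
print(sorted(edges))                   # the reconstructed attack-path edges
```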
EXISTING SYSTEM
In the existing system the PPM algorithm is not perfect, as its termination condition is not well determined.

The algorithm requires prior knowledge about the network topology.

In the packet marking algorithm the Termination Packet Number (TPN) computation is not well determined.

The existing system supports only the single-attacker environment.

Disadvantages of the Existing System:
Without an appropriate termination condition, the attack graph constructed by the PPM algorithm could be wrong.

The constructed path and the reconstructed path may differ.

It does not support multiple-attacker environments.
PROPOSED SYSTEM
To provide a termination condition for the probabilistic packet marking algorithm, which is lacking or not explicitly stated.

Through the new termination condition, the user of the new algorithm is free to decide the correctness of the constructed graph.

The constructed graph is guaranteed to attain the correctness assigned by the user, independent of the marking probability and the structure of the underlying network graph.

In this system we chose the Rectified Probabilistic Packet Marking algorithm to encode the packets in the routers so as to find the attacked packets.

To obtain a constructed graph that is identical to the attack graph, where an attack graph is the set of routes the attack packets traversed.

To build the graph that is returned by the PPM algorithm.


TECHNIQUE USED
Packets are probabilistically marked as they traverse routers in the Internet. More explicitly, a router marks the packet, with small probability, with either the router's IP address or the edges of the path that the packet traversed to arrive at the router.

Edge marking demands that the two nodes that make up an edge mark the path with their IP addresses along with the distance between them. This approach would necessitate more state information in each packet than naive node marking but would converge much more rapidly [1]. Three methods reduce the state information of these approaches into something more manageable.
The first method is to XOR each node making up an edge in the path with the other. Node a writes its IP address into the packet and sends it to b. On being detected at b (by detecting a 0 in the distance), b XORs its address with the address of a. This new data unit is called an edge id and reduces the required state for edge sampling by half. The next approach is to further take this edge id and break it into k smaller fragments. Then, a fragment is chosen arbitrarily and encoded along with the fragment offset, so that the correct corresponding fragment is selected from a downstream router for processing. When sufficient packets are obtained, the victim receives all edges and all fragments so that an attack path can be reconstructed (even in the presence of multiple attackers). The low probability of marking cuts down the associated overheads. Moreover, only a fixed space is needed in each packet. Because of the high number of combinations required to reconstruct a fragmented edge id, the reconstruction of such an attack graph is computationally intensive. Additionally, the approach results in a huge number of false positives. As an example, with only 25 attacking hosts in a DDoS [6] attack, the reconstruction method needs days to complete and results in thousands of false positives [3].
Traceback scheme: Instead of encoding the full IP address, the IP address is encoded into an 11-bit hash together with a 5-bit hop count, both stored in the 16-bit fragment ID field [5]. This is based on the observation that a 5-bit hop count (32 max hops) is enough for almost all Internet routes. Two different hashing functions are used so that the order of the routers in the markings can be determined. Next, if any given hop decides to mark, it first checks the distance field for a 0, which means that a previous router has already marked it. If this is the case, it produces an 11-bit hash of its own IP address and then XORs it with the previous hop. If it finds a non-zero hop count, it attaches its IP hash, sets the hop count to zero and forwards the packet on. If a router decides not to mark the packet, it merely increments the hop count in the overloaded fragment ID field.
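A minimal sketch of packing and updating such a 16-bit field (11-bit address hash plus 5-bit hop count); the hash function used here is an illustrative stand-in, not the scheme's actual hash:

```python
import zlib

def ip_hash11(ip):
    """Illustrative 11-bit hash of an IP address string."""
    return zlib.crc32(ip.encode()) & 0x7FF        # keep the low 11 bits

def pack(h11, hops):
    """11-bit hash in the high bits, 5-bit hop count in the low bits."""
    return (h11 << 5) | (hops & 0x1F)

def unpack(field):
    return field >> 5, field & 0x1F

field = pack(ip_hash11("10.0.0.1"), 0)   # a marking router: hash in, hops = 0
h, hops = unpack(field)
field = pack(h, hops + 1)                # a non-marking hop only increments
print(unpack(field))                     # (hash, 1)
```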

Figure 1: Implementation flow diagram [2]
[Figure 1 flow: Source → packet value (i) → packet marking method at routers Router(x) → decoding → termination packet number → path re-construction graph → comparison with the construction-of-path graph]

Benefits
It supports multiple-attacker environments.

The rectified packet marking algorithm yields the exact attack graph.

This method traces out the attacker's host id.

I. MODULES

1. Path Construction
2. Packet Marking Procedure
3. Router maintenance
4. Termination Packet Number (TPN) generation.
5. Re-Construction Path.

Module Description:
Path Construction
In this module, the path that the data packets will traverse is built. The path is dynamically altered in case of traffic or failure at a router, and is assigned based on the destination address.
This built path is later compared with the reconstructed path; the reconstruction procedure is carried out at the destination.

Figure 2: Path construction

Packet marking procedure

In this module, each packet is marked with random values [12]. The marking procedure, held at the router side, relies on the marking probability; depending upon the marking value chosen by the user, Pm is allocated. Values are selected at random locations and then checked against the Pm value [8]. These values are encoded and appended at the start or at the edge of the packets.
Using this window we give the source-id and destination-id as input to our system.

Here we have a choice to browse a text or Java file that has to be carried from the source to the destination.



Figure 3: Packet marking source (transition-router)
The path information from source to destination is displayed in this window.

A text box shows the source-id and destination-id that were entered as input, and information about the number of characters in the file that has to be transmitted to the destination.

The text box also provides information about the number of packets that have to be carried to the destination.

Router maintenance

In this module the router availability is checked, and depending on that availability the path is constructed. A centralized routing table is maintained, and the path is assigned depending on the source and destination [10]. Each router confirms the availability of the next router and then forwards to it. The routing table is altered dynamically.

The router maintenance window describes the information about the several routers present in the network under consideration.

As indicated in the window, we can obtain the connection status of the routers used in the network; for example, three routers denoted Router-101, Router-102 and Router-103 are connected. If any router were not connected, a "Not connected" message would appear in the status.

The Submit button is used to continue the execution after confirming the node status in the network.

The Close button is used to close the router maintenance window.


Figure 4: Router Maintenance

TPN Generation

In this module the encoded values in the packet are retrieved and checked against the rendered code [9]. The TPN is generated at the destination side [2]. The TPN verifies the total number of received packets, retrieves the attack graph, and produces the re-construction path. It then takes the determined values, decodes them, and matches them against the packet-marked value.

Figure 5: TPN generation
Path Re-Construction

In this module the pathway is re-built from the received packets and validated against the built path. The attack graph is acknowledged and then yields the re-constructed path [11]. The request for the constructed path is then forwarded and analyzed against the re-constructed path. Here we determine whether the packets were hacked or delivered correctly; if they were hacked, we obtain the hacker's host ID.
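
As a rough illustration of how the destination can peel the router hashes out of the collected markings, the sketch below assumes (as a simplification of the edge-encoded scheme above) that the marking at distance 0 carries the first router's hash directly and that the marking at each distance d >= 1 carries hash(router_d) XOR hash(router_{d+1}):

    def reconstruct_hashes(markings: dict) -> list:
        """Recover per-router 11-bit hashes from distance-indexed XOR markings.

        markings[0] = hash(R1); markings[d] = hash(R_d) XOR hash(R_{d+1})
        for d >= 1 (an illustrative simplification, not the exact scheme).
        """
        hashes = [markings[0]]
        for d in range(1, max(markings) + 1):
            hashes.append(markings[d] ^ hashes[-1])  # XOR peels off the next hash
        return hashes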

Figure 6: Re-construction path
(Figure blocks: find encoded values → decode the values → check the values; attack graph → check → recover the path.)


Given Input and Expected Output per Module
Path Construction
Given Input: Select the paths for the data to traverse.
Expected Output: The path is generated.
Packet Marking Procedure
Given Input: Select the values to be encoded.
Expected Output: The values are encoded and appended to the packets.
Router Maintenance
Given Input: Design the graphical user interface for router maintenance.
Expected Output: The router availability changes dynamically.
TPN Generation
Given Input: Retrieve the encoded values.
Expected Output: Obtain the exact values by decoding the number.
Path Re-Construction
Given Input: Retrieve the path from the attack graph.
Expected Output: Obtain the reconstructed path.

REFERENCES:
[1] Tsz-Yeung Wong, Man-Hon Wong, and Chi-Shing (John) Lui, "A Precise Termination Condition of the Probabilistic Packet Marking Algorithm", IEEE Transactions on Dependable and Secure Computing, Vol. 5, No. 1, January-March 2008.
[2] http://mindsetit.org/downloads/JAVA.doc
[3] CERT Advisory CA-2000-01: Denial-of-Service Developments, Computer Emergency Response Team, http://www.cert.org/advisories/CA-2000-01.html, 2006.
[4] J. Ioannidis and S. M. Bellovin, "Implementing Pushback: Router-Based Defense against DDoS Attacks", Proc. Network and Distributed System Security Symp., pp. 100-108, Feb. 2002.
[5] S. Bellovin, M. Leech, and T. Taylor, "ICMP Traceback Messages", Internet Draft draft-bellovin-itrace-04.txt, Feb. 2003.
[6] K. Park and H. Lee, "On the Effectiveness of Route-Based Packet Filtering for Distributed DoS Attack Prevention in Power-Law Internets", Proc. ACM SIGCOMM '01, pp. 15-26, 2001.
[7] P. Ferguson and D. Senie, RFC 2267: "Network Ingress Filtering: Defeating Denial of Service Attacks Which Employ IP Source Address Spoofing", The Internet Soc., Jan. 1998.
[8] http://www.cs.cuhk.hk/~cslui/PUBLICATION/tdsc2008.pdf
[9] http://seminarprojects.net/c/TERMINATION
[10] http://seminarprojects.net/c/algorithm-and-flowchart-for-railway-reservation-system
[11] http://en.wikipedia.org/wiki/IP_traceback
[12] M. Adler, "Trade-Offs in Probabilistic Packet Marking for IP Traceback", J. ACM, Vol. 52, pp. 217-244, Mar. 2005.




















SVM Based Spatial Data Mining for Traffic Risk Analysis
Roopesh Kumar¹, Diljeet Singh Chundawat², Prabhat Kumar Singh³
¹,²,³Lecturer, Dept. of Computer Science & Engineering, MIT, Mandsaur, M.P. (India)
roopesh.kumar@mitmandsaur.info
Abstract: Extracting knowledge from spatial data such as GIS data is important to reduce the data and extract information. GIS data also contains information about accidents at certain places and about road conditions; such data carries useful information for traffic risk analysis, but this information is not directly present in the dataset. Hence spatial data mining techniques are needed to extract knowledge from these databases. Previous work shows an impractical approach to multi-layer geo-data mining, in which information from various sources is combined based on the data relation and data mining is performed on that relation. The efficiency of risk factor evaluation requires

Keywords: GIS, SVM, Clustering, Geo-Data Mining, Ant Colony Optimization
INTRODUCTION
Automatic filtering of spatial relationships is required: on the whole, the outcome of a decision tree depends on initial data that may be incomplete, incorrect or non-relevant, and such data inevitably cannot deliver error-free results. The suggested model develops an SVM-based technique that first trains a support vector machine with risk patterns and then classifies the data based on the trained model. The result is therefore based not only on the relational model but also on complex kernel techniques, in contrast to other existing approaches that use non-intelligent decision tree heuristics.
LITERATURE REVIEW
Spatial data mining fulfills real needs of many geomatic applications. It allows taking advantage of the growing availability of geographically referenced data and their potential richness. This includes the spatial analysis of risk, such as epidemic risk or traffic accident risk in the road network. This work deals with the decision tree method for spatial data classification. The method differs from conventional decision trees by taking into account implicit spatial relationships in addition to other object attributes. Refs. [2, 3] aim at taking into account the spatial features of accidents and their interaction with the geographical environment, involving the new field of spatial data mining. In previous work the system implemented some spatial data mining methods such as generalization and characterization; work [3] presents an approach to spatial classification and its application to extend TOPASE.
Clustering in spatial data mining groups similar objects based on their distance, connectivity, or relative density in space. In the real world there exist many physical obstacles, such as rivers, lakes and highways, whose presence may affect the result of clustering substantially. This project studies the problem of clustering in the presence of obstacles and defines it as a COD (Clustering with Obstructed Distance) problem; as a solution, the system proposes a scalable clustering algorithm called COD-CLARANS [5, 6].
Spatial Clustering with Obstacles Constraints (SCOC) is a new topic in Spatial Data Mining (SDM). In [8] the author proposes an Improved Ant Colony Optimization (IACO) and a Hybrid Particle Swarm Optimization (HPSO) method for SCOC. The system first uses IACO to obtain the shortest obstructed distance, which is an effective method for arbitrarily shaped obstacles, and then develops a novel HPKSCOC, based on HPSO and K-Medoids, to cluster spatial data with obstacles; this can not only attain higher local convergence speed and stronger global optimum search, but also cope with the obstacle constraints.
Spatial clustering is an important research topic in Spatial Data Mining (SDM). Many methods have been proposed in the literature, but few take into account constraints that may be present in the data or constraints on the clustering, although these constraints have a significant influence on the results of clustering large spatial data. This project discusses the problem of spatial clustering with obstacle constraints and proposes a novel spatial clustering method based on Genetic Algorithms (GAs) and K-Medoids, called GKSCOC, which aims to cluster spatial data under obstacle constraints [9]. A spatial data mining method is also used to enrich the analytical function of customer intelligence. The system first proposes a spatial data classification method which can handle the uncertainty of customer data; on the basis of the spatial classification rules, it then proposes a method for detecting potential customers by map overlapping. A deep spatial analytical function is thus realized in the customer intelligence system which cannot be achieved by traditional data mining methods. With the coming of e-business, enterprises face harder competition than before, so they now focus their attention on customers instead of on production only. To win the competition, enterprises have to provide their customers more individualized and more efficient service; customer intelligence (CI) systems have appeared in recent years to meet this need. From the analytical function of the system, customer intelligence is a decision-analysis method which includes customer identification, customer selection, customer acquisition, customer improvement and customer maintenance [12]. The spatial co-location rule problem is different from the association rule problem, since there is no natural notion of transactions in spatial data sets, which are embedded in continuous geographic space. This project provides a transaction-free approach to mine co-location patterns by using the concept of proximity neighborhoods. A new interest measure, the participation index, is also proposed for spatial co-location patterns and is used as the measure of prevalence of a co-location.
Modeling spatial context (e.g., autocorrelation) is a key challenge in classification problems that arise in geospatial domains. In [13] Markov random fields (MRF) are presented as a popular model for incorporating spatial context into image segmentation and land-use classification problems. The spatial autoregression (SAR) model [14], which extends the classical regression model to incorporate spatial dependence, is popular for prediction and classification of spatial data in regional economics, natural resources, and ecological studies.
GENETIC AND ACO BASED SPATIAL DATA MINING MODEL

The proposed spatial data mining model uses ACO integrated with GA for risk pattern storage. The proposed ant colony based spatial data mining algorithm applies the emergent intelligent behavior of ant colonies to handle the huge search space encountered in the discovery of spatial knowledge. It applies an effective greedy heuristic combined with the trail intensity laid down by ants along a spatial path. The GA uses a searching population (set) to produce a new-generation population; it evolves progressively towards the optimum by applying a series of genetic operators such as selection, crossover and mutation on traffic risk patterns. The proposed system develops an ant colony algorithm for the discovery of spatial trends in a GIS traffic risk analysis database, with intelligent ant agents used to evaluate valuable and comprehensible spatial patterns.
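
A minimal sketch of the classification stage, assuming scikit-learn is available and that per-location feature vectors (e.g. accident counts and road-condition statistics) have already been extracted from the relation model; the data below is synthetic, not real GIS data:

    import numpy as np
    from sklearn.svm import SVC

    # Synthetic stand-in for features extracted from the GIS relation model:
    # [accident count, mean traffic density, road-condition score] per place.
    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # 1 = high risk (toy labelling rule)

    clf = SVC(kernel="rbf")    # a complex kernel, as the proposed model suggests
    clf.fit(X[:150], y[:150])  # train the SVM on known risk patterns
    print("held-out accuracy:", clf.score(X[150:], y[150:]))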

(Model flow: data from different sources → form the relation model → extract statistics as features → train SVM classifier with place information → classify risk factor.)
CONCLUSION
This is a survey paper: we have gone through different aspects of data mining, identified the problems in previous approaches, and found a way towards a solution using SVM, for which we have proposed a method. We will take up the implementation as future work.


REFERENCES:
[1] [Agrawal & Srikant1994] Agrawal, R., and Srikant, R. 1994. Fast algorithms for Mining Association Rules. In Proc. of Very
Large Database.
[2] [Anselin1988] Anselin, L. 1988. Spatial Econometrics: Methods and Models. Dordrecht, Netherlands: Kluwer.
[3] [Anselin1994] Anselin, L. 1994. Exploratory Spatial Data Analysis and Geographic Information Systems. In Painho, M., ed.,
New Tools for Spatial Analysis, 45-54.
[4] [Anselin1995] Anselin, L. 1995. Local Indicators of S p a t i a l Association: LISA. Geographical Analysis 27(2):93{115.
[5] [Barnett & Lewis1994] Barnett, V., and Lewis, T. 1994. Outliers in statistical Data. John Wiley, 3rd Edition.
[6] [Besag1974] Besag, J. 1974. Spatial Interaction and Statistical Analysis of Lattice Systems. Journal of Royal Statistical
Society: Series B 36:192-236.
[7] [Bolstad2002] Bolstad, P. 2002. GIS Foundamentals: A Fisrt Text on GIS.Eider Press.
[8] [Cressie1993] Cressie, N. 1993. Statistics for Spatial Data (Revised Edition). New York: Wiley.
[9] [Han, Kamber, & Tung2001] Han, J.; Kamber, M.; and Tung, A. 2001. Spatial Clustering Methods in Data Mining: A
Survey. In Miller, H., and Han, J., eds., Geographic Data Mining and Knowledge Discovery. Taylor and Francis.
[10] [Hawkins1980] Hawkins, D. 1980. Identification of Outliers. Chapman and Hall. [Jain & Dubes1988] Jain, A., and
Dubes, R. 1988. Algorithms for Clustering Data. Prentice Hall.
[11] [Jhung & Swain1996] Jhung, Y., and Swain, P. H. 1996. Bayesian Contextual Classification Based on Modified M-
Estimates and Markov Random Fields. IEEE Transaction on Pattern Analysis and Machine Intelligence 34(1):67.
[12] [Koperski & Han1995] Koperski, K., and Han, J. 1995. Discovery of Spatial Association Rules in Geographic Information
Databases. In Proc. Fourth International Symposium on Large Spatial Databases, Maine. 47-66.
[13] [Shekhar et al.2002] Shekhar, S.; Schrater, P. R.; Vatsavai, R. R.; Wu, W.; and Chawla, S. 2002. Spatial Contextual
Classification and Prediction Models for Mining Geospatial Data. IEEE Transaction on Multimedia 4(2).
[14] [Zhang et al.2003] Zhang, P.; Huang, Y.; Shekhar, S.; and Kumar, V. 2003. Exploiting Spatial Autocorrelation to Efficiently
Process Correlation-Based Similarity Queries. In Proc. of the 8th Intl. Symp. on Spatial and Temporal Databases





Optimization of Process Parameters Influencing MRR, Surface Roughness and
Electrode Wear During Machining of Titanium Alloys by WEDM
P. Abinesh¹, Dr. K. Varatharajan², Dr. G. Satheesh Kumar³
¹Research Scholar, Velammal Engineering College, Chennai
²Faculty
E-mail- abinesh.mkr@gmail.com
ABSTRACT - Wire-cut Electrical Discharge Machining (WEDM) is extensively used in machining conductive materials, producing intricate shapes with high accuracy. This study shows that WEDM process parameters can be altered to improve the material removal rate (MRR), surface roughness (SR) and electrode wear. The objective of this project is to investigate and optimize the process parameters influencing MRR, SR and electrode wear while machining Titanium alloys by WEDM. The work studies the relation between the input process parameters, namely pulse-on time (Ton), pulse-off time (Toff), pulse peak current (IP), wire material and workpiece material, and the process variables. Based on the chosen input parameters and performance measures, an L-16 orthogonal array is selected to find the best-suited machining values for Titanium alloys by WEDM.
Keywords - Ton, Toff, IP, MRR, SR, electrode wear, orthogonal array.
1.INTRODUCTION
Recently, unconventional machining has become well established and is used by component manufacturers as a manufacturing method. Non-conventional machining does not involve high forces and is a green process: it does not produce metal chips, and only deionized water is used as the dielectric. Wire-EDM, as a precision cutting technology, can fabricate anything from small products to large components. All metals of good conductivity, such as mild steel and copper, can be cut using wire-EDM. However, machine settings vary for each type of metal, so certain parameters need to be clearly defined for each material. The setting is easily tuned for a straight line and becomes more difficult for curves or parts involving an angle. Rough cutting in wire EDM is treated as a challenging operation because more than one performance measure, viz. metal removal rate, surface roughness and electrode wear rate (EWR), must be improved simultaneously to obtain precision work. In this research a path to determine parameter settings is proposed. Using the orthogonal array method, the significant machining parameters affecting the performance measures are identified as pulse peak current, pulse-on time and pulse-off time. Using plots of the signal-to-noise ratio, the effect of each control factor on each performance measure is studied separately. The study shows that the WEDM process parameters can be adjusted to achieve maximum metal removal rate together with reduced electrode wear rate and surface roughness.

Atul Kumar et al. [1] investigated the influence of wire-EDM machining variables on the surface roughness of newly developed DC 53 die steel of width, length and thickness 27, 65 and 13 mm, respectively. K. H. Ho et al. [2] found that WEDM is a widespread technique used in industry for high-precision machining of all types of conductive materials such as metals, alloys, some ceramic materials and even graphite.

Y. S. Liao et al. [3] noted that WEDM machines have adopted a pulse-generating circuit using low power for ignition and high power for machining. Nonetheless, this is unsuitable for the finishing process, since the energy produced by the high-voltage sub-circuit is too high to obtain the desired fine surface even when a short pulse-on time is assigned. As newer and more exotic materials are developed and more complex shapes are presented, conventional machining operations will continue to reach their limitations, and the increased use of wire EDM in manufacturing will continue to grow at an accelerated rate [4]. Nihat Tosun et al. [5] investigated the effect and optimization of machining parameters on the kerf (cutting width) and material removal rate (MRR) in wire electrical discharge machining (WEDM) operations.

Hewidy et al. [6] developed mathematical models correlating various WEDM machining parameters such as water pressure, peak current, wire tension and duty factor with metal removal rate, wear ratio and surface roughness based on response surface methodology. Mahapatra [7] studied the relationships between various control factors and responses like SF, MRR and kerf by means of nonlinear regression analysis, resulting in a valid mathematical model. The study demonstrates that the WEDM process parameters can be adjusted to achieve better metal removal rate, surface finish and cutting width simultaneously.
Sarkar et al. [8] (2005, 2006) produced a technology guideline for optimum machining of gamma titanium aluminide based on Pareto-optimal solutions. Additionally, they calculated the wire offset value and used it as an input parameter to enhance the dimensional accuracy of the product.
By applying the multi-response S/N (MRSN) ratio technique, Ramakrishnan & Karunamoorthy (2006, 2008) [9] reported optimal settings for WEDM of tool steel and Inconel 718. Anand et al. [10] used a fractional factorial experiment with an orthogonal array layout to obtain the most desirable process specification for improving the WEDM dimensional accuracy and surface roughness. Miller et al. [11] discussed wire electrical discharge machining (WEDM) of cross-sections with minimal thickness and of compliant mechanisms; this was backed by findings from SEM micrographs of EDM debris, subsurface and surface.
Konda et al. [12] classified the various potential factors affecting WEDM performance and, in addition, applied the Design of Experiments (DOE) technique to study and optimize the possible effects of parameters during process design and development.

C. Bhaskar Reddy et al. [13] investigated the best parameter selection to obtain maximum metal removal rate (MRR), electrode wear rate (EWR) and better surface roughness (SR) by conducting experiments on a CONCORD DK7720C four-axis CNC wire electrical discharge machine on P20 die tool steel with molybdenum wire of 0.18 mm diameter as the electrode. From the literature, several researchers have applied the Taguchi method to optimize the performance parameters in the WEDM process. In the present work a Ti alloy is considered for evaluating the output parameters, namely surface roughness, material removal rate and electrode wear rate (EWR), using the orthogonal array method.

2.EXPERIMENTAL SETUP
The experiments were carried out on a wire-cut EDM machine (ELEKTRA SPRINTCUT 734) at M.S.K Tools Ltd, Chennai, India. The WEDM machine tool (Figure 2.1) has the following specifications:






























Design: fixed column with moving table
Table size: 440 x 650 mm
Max. workpiece height: 200 mm
Max. workpiece weight: 500 kg
Main table traverse (X, Y): 300, 400 mm
Auxiliary table traverse (U, V): 80, 80 mm
Wire electrode diameter: 0.25 mm (standard); 0.15, 0.20 mm (optional)
Generator: ELPULS-40 A DLX
Controlled axes: X, Y, U, V simultaneous
Interpolation: linear & circular
Least input increment: 0.0001 mm
Input power supply: 3-phase, AC 415 V, 50 Hz
Connected load: 10 KVA


3.WORKPIECE MATERIAL SELECTION

Due to their different melting points, evaporation behaviour and thermal conductivity, different materials show different surface quality and MRR under the same machining conditions. Titanium (Grade 5 and Grade 2) is the workpiece material used in this experiment. A titanium plate of 125 mm x 100 mm x 3 mm has been used as the workpiece; a profile of 5 mm x 5 mm x 2 mm is cut with the wire (brass and brass-coated nickel) traversing through the kerf, and the performance of the output parameters with respect to the input parameters is measured.
Grade 5, also known as Ti6Al4V, Ti-6Al-4V or Ti 6-4 (C 0.036, Al 6.30, V 3.99, Ti 89.31), is the most commonly used alloy. Pure titanium undergoes an allotropic transformation from the hexagonal close-packed alpha phase to the body-centred cubic beta phase at a temperature of 882.5 °C (1620.5 °F).

Grade 2 (C 0.008, Fe 0.04, Ti 99.83) is commercially pure (CP), unalloyed titanium; it undergoes the same allotropic transformation at 882.5 °C (1620.5 °F) and at service temperature consists of 100% hcp alpha phase.


4.METHODOLOGY

4.1 Taguchi Method
Taguchi, a Japanese scientist, developed a technique based on orthogonal arrays (OA) of experiments. The integration of DOE with parametric optimization of the process can be accomplished in the Taguchi method. An OA gives a set of well-balanced experiments, and Taguchi's signal-to-noise (S/N) ratios, which are logarithmic functions of the desired output, serve as objective functions for optimization. This makes it possible to explore the whole parameter space with a small number of experimental runs. OAs and S/N ratios are used to study the effects of control factors and noise factors and to determine the best quality characteristics for particular applications.















Fig.2. Principle of wire EDM

The optimal process parameters obtained from the Taguchi method are insensitive to the variation of environmental conditions and other noise factors. However, the Taguchi method is most suitable for optimizing a single performance characteristic.

4.2 Orthogonal Array Method
Orthogonal array testing is a black-box testing technique that is an efficient, methodical and statistical way of testing. It is usually preferred when the number of inputs to the system is small but still too large to permit exhaustive testing of every possible input. The orthogonal approach guarantees pairwise coverage of all variables; the orthogonal array method has applications in user-interface testing, configuration testing and performance testing. The net effect of organizing the experiment in such treatments is that the same information is gathered in the minimum number of experiments. OAs are commonly represented as L_Runs(Levels^Factors), or as OA(N; k, v, t): an N x k array on v symbols such that every N x t sub-array contains every tuple of size t from the v symbols exactly λ times.
Runs (N): the number of rows in the array, which translates into the number of test cases that will be generated.
Factors (k): the number of columns in the array, which translates into the maximum number of variables that can be handled by the array.
Levels (v): the maximum number of values that can be taken on by any single factor.

Strength (t): the number of columns it takes to see all the level combinations an equal number of times.
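
As a small concrete check of this definition, the sketch below verifies that the classical L4(2^3) array has strength 2, i.e. every pair of its columns contains each of the v^t = 4 level pairs exactly λ = N/v^t = 1 time (the array itself is standard; the code is only an illustration):

    from itertools import combinations, product

    # The L4(2^3) orthogonal array: N = 4 runs, k = 3 factors, v = 2 levels.
    L4 = [(1, 1, 1),
          (1, 2, 2),
          (2, 1, 2),
          (2, 2, 1)]

    # Strength t = 2: every pair of columns holds all level pairs exactly once.
    for c1, c2 in combinations(range(3), 2):
        pairs = sorted((run[c1], run[c2]) for run in L4)
        assert pairs == sorted(product((1, 2), repeat=2))
    print("L4 is an orthogonal array of strength 2")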

4.3 Signal-to-Noise Ratio (S/N ratio)
In the Taguchi method, the term "signal" denotes the desirable value (mean) of the output characteristic and the term "noise" represents the undesirable value (standard deviation), so the S/N ratio is the ratio of the mean to the standard deviation. The S/N ratio is used to estimate how far the quality characteristic deviates from the desired value [8-9]. With n repetitions and measured responses y_i, the S/N ratio is defined as
1. Larger the better: S/N = -10 log10[(1/n) Σ (1/y_i²)] ......(1)
2. Smaller the better: S/N = -10 log10[(1/n) Σ y_i²] ......(2)
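
A small numeric sketch of these two definitions (the replicate values below are illustrative only, not measured data from this study):

    import math

    def sn_larger_is_better(y):
        """Taguchi S/N ratio for a 'larger the better' response, Eq. (1)."""
        return -10 * math.log10(sum(1 / yi**2 for yi in y) / len(y))

    def sn_smaller_is_better(y):
        """Taguchi S/N ratio for a 'smaller the better' response, Eq. (2)."""
        return -10 * math.log10(sum(yi**2 for yi in y) / len(y))

    print(sn_larger_is_better([60.5, 63.1]))   # e.g. MRR replicates
    print(sn_smaller_is_better([1.30, 1.38]))  # e.g. surface roughness replicates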

4.4 Material Removal Rate (MRR)
This is a production term usually measured in mm³/s. Increasing the MRR will obviously get a part done quicker, but increasing the material removal rate is often accompanied by increased tool wear and a poorer surface finish. The MRR is expressed as the ratio of the difference in weight of the workpiece before and after machining to the product of the machining time and the density of the material. Therefore, the MRR for the WEDM operation is calculated as
MRR = (Wjb - Wja) / (t × ρ)
where Wjb = weight of the workpiece before machining, Wja = weight of the workpiece after machining, t = machining time, and ρ = density of the material (Titanium Grade 2: 4.51 × 10⁻³ g/mm³; Titanium Grade 5: 4.42 × 10⁻³ g/mm³).
Wjb = [(weight of the plate before machining) - (wire diameter × plate thickness × kerf length)] / ρ
Wja = [(weight of the plate before machining) + (weight of the removed piece)] / (t × ρ)
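
A small numeric sketch of the main MRR formula (the weights and time below are illustrative values, not measurements from this work):

    def mrr(w_before_g, w_after_g, t_s, rho_g_per_mm3):
        """Material removal rate in mm^3/s from weight loss, time and density."""
        return (w_before_g - w_after_g) / (t_s * rho_g_per_mm3)

    # Illustrative only: 0.5 g removed from Ti Grade 2 in 1200 s.
    print(mrr(100.0, 99.5, 1200.0, 4.51e-3))  # about 0.09 mm^3/s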

4.5 Surface Roughness Measurement
The surface roughness is measured using the surface tester SE-1200, which measures roughness and evaluates parameters according to the following standards: ISO 4287, ISO 12085 (MOTIF or CNOMO), DIN, ASME, JIS. Two storage modes are available in the Surfcorder: Memo and Statistics. In Memo mode the measurements are stored in order to display and/or print them. In Statistics mode measurements on up to 12 parameters are stored to perform various statistical analyses, and graphs and histograms can be displayed or printed. Initially the job whose surface roughness is to be tested is mounted on the V-block; a motorized arm holding the stylus then moves along the vertical column until the stylus comes into contact with the surface of the job. Finally, the high-resolution printer provides the surface roughness details in printed form.



4.6 Electrode Wear Rate
The electrode wear rate is measured using a digital vernier caliper: the diameter of the wire before machining is compared with its diameter after machining.
Electrode wear is not just a function of electrode properties, but also a function of the power supply settings. Electrode wear is the percentage ratio of the amount of electrode material lost from the electrode to the cavity, corresponding to the input process parameters.
EWR = [area of the electrode × length of the electrode] / t


5.SELECTION OF CUTTING PARAMETERS

5.1 Orthogonal Array Selector
Orthogonal Array   No. of Experiments   No. of Factors
L-4    4    3
L-8    8    7
L-9    9    4
L-12   12   11
L-16   16   15
L-16   16   5
L-18   18   8
L-25   25   16
L-27   27   13
L-32   32   31
L-32   32   10
L-36   36   23
L-36   36   16
L-50   50   12
L-54   54   26
L-64   64   63
L-64   64   21
L-81   81   40
Table 5.1 Orthogonal Array Selector Table
From the above selector table, the L-16 orthogonal array has been chosen, since there are five factors: pulse-on time, pulse-off time, peak current, wire material and workpiece material [14]. Minitab 15 software was used for graphical analysis of the obtained data.
5.2 Level Values of Input Factor
S.No   Control Factor       Symbol   Level 1                Level 2                          Unit
1      Pulse-on time        A        115                    120                              µs
2      Pulse-off time       B        50                     55                               µs
3      Peak current         C        150                    200                              A
4      Wire material        D        Brass (0.25 mm) [D1]   Brass-coated Ni (0.25 mm) [D2]   mm
5      Workpiece material   E        Grade-2 (E1)           Grade-5 (E2)                     -
Table 5.2 Level Values of Input Factor




5.3 Selection from Array
Ex.No   A    B    C    D    E
1       A1   B1   C1   D1   E1
2       A1   B1   C1   D1   E2
3       A1   B1   C1   D2   E1
4       A1   B1   C1   D2   E2
5       A2   B2   C2   D1   E1
6       A2   B2   C2   D1   E2
7       A2   B2   C2   D2   E1
8       A2   B2   C2   D2   E2
9       A1   B2   C1   D1   E1
10      A1   B2   C1   D1   E2
11      A1   B2   C1   D2   E1
12      A1   B2   C1   D2   E2
13      A2   B1   C2   D1   E1
14      A2   B1   C2   D1   E2
15      A2   B1   C2   D2   E1
16      A2   B1   C2   D2   E2
Table 5.3 Selection from Array Table
To evaluate the effects of the machining parameters on the performance characteristic (MRR), and to identify the performance characteristic under the optimal machining parameters, a specially designed experimental procedure is required.

6.EXPERIMENTAL TESTING
Various experiments were performed to find how the output parameters vary with the input parameters. The experiments were performed in the constant-voltage mode of the WEDM. Across the experiments, pulse-on time (TON) is varied from 105 units to 129 units in steps of 3 units, pulse-off time (TOFF) is varied from 63 units down to 39 units in regular decrements of 3 units, and peak current (IP) is varied from 230 A down to 50 A in decrements of 20 A.

In the first set of experiments the fixed input variables are Ton = 115, Toff = 50 and IP = 150, and the corresponding MRR, SR and EWR are measured while machining Titanium (Grade 5 & 2) with brass and brass-coated electrode wire. In the second set the fixed input variables are Ton = 120, Toff = 55 and IP = 200; in the third set, Ton = 115, Toff = 55 and IP = 150; and in the fourth set, Ton = 120, Toff = 50 and IP = 200, with MRR, SR and EWR measured in the same way in each case.






7.EXPERIMENTAL RESULTS
7.1 Design matrix and Observation table

Table 7.1 Design matrix and Observation table
7.2 Design matrix and Observation table by S/N Ratio


Table 7.2 Design matrix and Observation table by S/N Ratio
7.3 Graphs based on S/N ratio

Fig. 7.3 Optimal set of input parameters for nominal output response

Fig. 7.4 Optimal set of input parameters for minimum EWR


Fig. 7.5 Optimal set of input parameters for maximum MRR

Fig. 7.6 Optimal set of input parameters for minimum SR
8.INFERENCE

It is evident from the results that MRR increases upon increasing pulse-on time, pulse-off time and pulse peak current. By setting the input parameters to A2, B1, C1, D2, E2 we can achieve the nominal output response (i.e. maximum MRR with minimum SR and EWR), and by selecting A2, B2, C2, D1, E1 we can achieve maximum metal removal rate (MRR). Surface roughness decreases upon increasing pulse-on time and increases upon increasing pulse-off time and peak current; with A2, B1, C1, D1, E1 as input parameters we achieve minimal surface roughness (SR). EWR reduces upon increasing pulse-off time and peak current; by choosing A2, B2, C2, D1, E1 we achieve minimal electrode wear rate (EWR).

9.CONFIRMATION EXPERIMENT:

The confirmation experiment is the final step in any design-of-experiments process. Tables 9.1, 9.2 and 9.3 show the comparison of the predicted value with the new experimental value for the selected combinations of the machining parameters. As shown in the tables, the experimental values agree reasonably well with the predictions: an error of 2.65% in the S/N ratio of MRR, 9.12% in the S/N ratio of surface roughness and 4.18% for the electrode wear rate is observed when predicted results are compared with experimental values. Hence, the experimental results confirm the optimization of the machining parameters using the orthogonal array method with the S/N ratio for enhancing the machining performance. The errors in MRR, surface roughness and electrode wear rate can be expected to reduce further if the number of measurements is increased.
9.1 Result of confirmation experiment for MRR
                      Predicted Value    Experimental Value   % error
Optimal Level         A2,B2,C2,D1,E1     A2,B2,C2,D1,E1
MRR (mm²/min)         63.12              60.54
S/N Ratio for MRR     35.5589            34.6524              2.65
Table 9.1 Result of confirmation experiment for MRR

9.2 Result of confirmation experiment for SR
                      Predicted Value    Experimental Value   % error
Optimal Level         A2,B1,C1,D1,E1     A2,B1,C1,D1,E1
SR (µm)               1.29855            1.38
S/N Ratio for SR      -3.37546           -3.04576             9.12
Table 9.2 Result of confirmation experiment for SR

9.3 Result of confirmation experiment for EWR
                      Predicted Value    Experimental Value   % error
Optimal Level         A2,B2,C2,D1,E1     A2,B2,C2,D1,E1
EWR (mm³/s)           2.304              2.016
S/N Ratio for EWR     -6.5287            -4.0797              4.18
Table 9.3 Result of confirmation experiment for EWR

REFERENCES:

[1] Atul Kumar, Dr. D. K. Singh, "Performance Analysis of Wire Electric Discharge Machining (W-EDM)", International Journal of Engineering Research & Technology (IJERT), Vol. 1, Issue 4, 2-9, 2012.
[2] K. H. Ho, S. T. Newman, S. Rahimifard, R. D. Allen, "State of the art in wire electrical discharge machining (WEDM)", International Journal of Machine Tools and Manufacture 44, 1247-1259, 2004.
[3] Y. S. Liao, J. T. Huang, Y. H. Chen, "A study to achieve a fine surface finish in wire EDM", Journal of Materials Processing Technology 149, 165-171, 2004.
[4] Mustafa Ilhan Gokler, Alp Mithat Ozanozgu, "Experimental investigation of effects of cutting parameters on surface roughness in the WEDM process", International Journal of Machine Tools & Manufacture 40, 1831-1848, 1999.
[5] N. Tosun, C. Cogun, G. Tosun, "A study on kerf and material removal rate in wire electrical discharge machining based on Taguchi method", Journal of Materials Processing Technology 152, 316-322, 2004.
[6] M. S. Hewidy, T. A. El-Taweel, M. F. El-Safty, "Modelling the machining parameters of wire electrical discharge machining of Inconel 601 using RSM", Journal of Materials Processing Technology 169, 328-336, 2005.
[7] S. S. Mahapatra, A. Patnaik, "Optimization of wire electrical discharge machining (WEDM) process parameters using Taguchi method", The International Journal of Advanced Manufacturing Technology 34/9-10, 911-925, 2007.
[8] Sarkar S., Mitra S., Bhattacharyya B., "Parametric analysis and optimization of wire electrical discharge machining of gamma-titanium aluminide alloy", Journal of Materials Processing Technology 159, 286-294, 2005.
[9] Ramakrishnan R., Karunamoorthy L., "Modelling and multi-response optimization of Inconel 718 on machining of CNC WEDM process", Journal of Materials Processing Technology 207, 304-349, 2008.
[10] Anand K. N., "Development of process technology in wire cut operation for improving machining quality", Total Quality Management, Vol. 7, 11-28, 1996.
[11] Scott F. Miller, Chen-C. Kao, Albert J. Shih, Jun Qu, "Investigation of wire electrical discharge machining of thin cross-sections and compliant mechanisms", International Journal of Machine Tools & Manufacture 45, 1717-1725, 2005.
[12] Konda R., Rajurkar K. P., Bishu R. R., Guha A., Parson M., "Design of experiments to study and optimize process performance", International Journal of Quality and Reliability Management, Vol. 16, 56-71, 1999.
[13] C. Bhaskar Reddy et al., "Experimental investigation of surface finish and metal removal rate of P20 die tool steel in wire-EDM using multiple regression analysis", GSTF Journal of Engineering Technology, Vol. 1, No. 1, pp. 113-118, June 2012.
[14] Gurusamy Selvakumar, Soumya Sarkar, Souren Mitra, "Experimental analysis on WEDM of Monel 400 alloys in a range of thicknesses", International Journal of Modern Manufacturing Technologies, ISSN 2067-3604, Vol. IV, No. 1, 2012.


















Analysis of Liver MR Images for Cancer Detection using Genetic Algorithm
Yamini Upadhyay¹, Vikas Wasson¹
¹Research Scholar, Chandigarh Group of Colleges, Gharuan
E-mail- vamini308@gmail.com

Abstract- Image segmentation denotes a process by which a raw input image is partitioned into non-overlapping regions such that each region is homogeneous and connected. This paper deals directly with locating and measuring the size of the cancer-affected area. It also proposes to reduce the time and manual effort involved in studying the MR images of a patient, thus saving the precious time of doctors. The aim of this paper is to simplify the obnoxious problems related to the study of MR images. Over time, the study of MR images for cancer detection in the liver or abdominal area has been difficult; the reason is the shape complexity of the liver and its overlap with other organs. The watershed technique has been used as the base technique against which the results of the proposed Genetic algorithm technique are compared, and a tabular comparative analysis of the results of both techniques is given.

Keywords: Image Processing, Liver Segmentation, Cancer Detection, Mutation, Genetic Algorithm, Watershed Algorithm, MR Images

INTRODUCTION
Humans are considered the most unique creation of nature. Over the periods of evolution of human life, the body has adapted itself into its most conducive shape, and the complexity of its internal structure thus poses a challenge to medical study and diagnosis. Cancer, being one of the deadliest and most widespread diseases, is among the most fatal. Across the globe, efforts are being made to cure and eradicate the disease [1]. No 100% cure has yet been developed, but treatments like chemotherapy and other intense radiation passed over the affected area are helpful in controlling the disease [7]. Treatment, however, lies on the secondary side; the prime job of the doctor is to detect the affected area in time, which becomes even more crucial in a medical emergency. Though cancer can be detected in the human body through MRI images, the intensity of MR images and the shape complexity of organs hinder accurate detection. Also, during the diagnosis and treatment phases doctors are mainly interested in the problem area and not the entire human body; that is the reason image segmentation comes into play [10]. Segmentation subdivides the area of interest to provide a better and clearer view of the organ or part under observation. It should be noted that segmentation is only a pre-treatment step. The detection of cells, or of structures interior to cells, can be considered an image segmentation problem within digital image analysis [4]. Different algorithms can be employed to segment an image; the algorithm discussed in this paper is the Genetic algorithm.
Watershed algorithm: Multi-threshold values in CT images are interpreted on the basis of grey levels. Comprehensive methods take advantage of de-noising and gradient construction [5]. The grey level of a pixel is interpreted as its altitude and local minimum values are set; intuitively, the watershed of a relief corresponds to the limits of the adjacent catchment basins of the drops of water.
Genetic Algorithm: Optimization problems are solved using binary strings. This algorithm is inspired by natural evolution. Once the random variables are generated, they can be improved by iteratively applying operators termed selection, crossover and mutation, which mimic the corresponding processes of natural evolution. Selection lets only the fittest individuals be present in the next generation (iteration of the algorithm); crossover lets them exchange tracts of their DNA (corresponding substrings) to generate offspring (new solutions), while mutation randomly introduces new genes (by flipping one or more bits of a solution) [12].
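
A minimal sketch of these three operators acting on binary strings (a generic GA skeleton with an arbitrary toy fitness function, not the segmentation fitness used later in this work):

    import random

    def evolve(pop_size=20, length=16, generations=50):
        """Tiny binary GA: selection, one-point crossover, bit-flip mutation."""
        fitness = lambda s: sum(s)                  # toy objective: count of 1-bits
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]           # selection: keep fittest half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)   # one-point crossover
                child = a[:cut] + b[cut:]
                i = random.randrange(length)        # mutation: flip one bit
                child[i] ^= 1
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())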
Image processing is a developing and growing field in the context of medical applications. Many methods have been developed and replaced by newer ones, so it is of prime importance to develop and select methods that suit the requirements of the current time and problem specification. Likewise, 3D image analysis, reconstruction of the MRI slices and accurate boundary detection are of prime importance. Softening of cells is a major problem in cancer, and various 3D orthogonal planes (sagittal, coronal, transverse) are acquired [2]. Due to the shape of the liver, its overlapping regions with the lungs and heart, and motion and pulsation artifacts, automatic liver segmentation is a difficult process; moreover, CT images show grey values in the range of about 90-92 (out of 0-255) for a normal tumour-free liver, but if there is a tumour the images become darker and the range becomes ambiguous. So it is high time to design and implement a quickly responsive and exactly calculative liver segmentation method for medical image analysis, which supports analyzing the benefits and problems of liver transplantation and the treatment of liver tumours. Magnetic resonance imaging is a far better method than CT, being free of ionizing radiation and also giving a better visualization of soft tissue [3].
Genetic algorithm (GA) is a computing model that mirrors biological heredity and mutation in the natural evolutionary process, manifested through selection, crossover and mutation operators. Its main characteristics are its searching strategy and the exchange of information between individuals in a group. It is particularly appropriate for complex, nonlinear problems that are difficult to resolve with traditional methods, and it demonstrates its unique strength in combinatorial optimization, adaptive control, artificial life and other application areas. It is one of the intelligent computing technologies [1].

In paper [6], contour-based segmentation using a Genetic algorithm is discussed, and it is noted that manual segmentation of a structure of interest is time-consuming and infeasible in a clinical environment. Genetic algorithms are optimization techniques in which solutions of an optimization problem are encoded as binary strings. Once the first set of random solutions is generated, they can be improved by iteratively applying operators such as crossover, mutation and selection. The selection operator is based on the natural evolution phenomenon of survival of the fittest individual. In crossover, offspring are generated by letting parents exchange tracts of their DNA. In the mutation operator, new genes are introduced randomly by flipping one or more bits of a solution [6].
In paper [3], liver segmentation is surveyed, stating that the prevailing segmentation techniques are region growing, threshold-based, level set, statistical model, active contour, clustering, histogram-based and grey-level methods. The region growing approach starts with a small region given as a seed point and proceeds by adding neighbouring pixels. The thresholding approach is implemented using global thresholding; selecting the global threshold value is the main drawback of this method. The level set method adjusts the segmentation using a speed function obtained from a pixel classification algorithm. The model-based approach utilizes a statistical shape model and has the best performance among all the approaches. The histogram-based approach is fully automatic and segments by eliminating neighbouring abdominal organs. The grey-level approach starts with a single user-defined pixel seed inside the liver; the mean and variance of a rectangular neighbourhood around the pixel are then computed. The clustering approach combines K-means, the simplest unsupervised learning algorithm [7].
In paper [18] a segmentation based on the contourlet transform and the watershed algorithm is presented. A medical image usually contains a region of interest which holds the important diagnostic information; due to the irregular shapes of human organs and differing imaging equipment, medical images have low resolution, low contrast and large noise. The new algorithm contains three steps: contourlet transformation of the original image; division of the obtained low-frequency image using the watershed algorithm; and reverting the low-frequency image to high frequency by the inverse contourlet transform. A watershed is a basin-like landform defined by high points and ridgelines that descend into lower elevations and stream valleys. Simple direct projection in the vertical and horizontal directions leads to blurred edges and loss of information; to overcome this problem, the inverse contourlet transform is used [11].

Problem Definition
The problem addressed in this paper is to detect cancer in the liver and to identify its correct location and size, while reducing the time required for this detection compared with the base algorithm. A new technique has been developed that can analyze a patient's data set before treatment, taking into account that the manual process of checking data sets is time consuming, inaccurate, and costly. The work is based on a tabular comparative analysis of the Watershed algorithm and the Genetic algorithm.
There is therefore a dire need to design and implement a quickly responsive and exactly calculative liver segmentation method for medical image analysis that is also cheap. The major problems faced while analyzing a patient's data sets manually are:
- The manual segmentation of the liver parenchyma is extremely laborious and time consuming [6].
- Existing methods are cost intensive, and cost becomes an important factor while treating a patient [7].
- Manual analysis is not accurate compared with a computer-based analysis [10].

- The results are not precise, and hence the doctors again have to spend time on manual analysis after the computer has performed its operation, which again results in a loss of time.
Thus it is vital to sense the root cell in the MRI images, for which the liver images are segmented [7]. So the problem discussed in this paper is to develop a technique that can analyze a patient's data set before treatment, given that the manual process and other existing methods of checking data sets are time consuming, inaccurate, and costly.

Objectives
Considering the problem formulated, the objective of this paper is to de-noise the image and find the ROI. When checking the data sets of a patient with liver cancer, the tumour's size, volume and shape, the structure of its vessels, and its location(s) are important. Hence this work presents a technique that can be used to de-noise the image and to calculate the region of interest. To achieve this objective, a technique is developed that can analyze the patient's data set before treatment. The proposed technique is based on GA; for comparison on the data set, the watershed technique has been used as the base technique. The analysis and comparison are done on the basis of the area of the tumour, the time taken for analysis, the number of iterations of the genetic algorithm (fixed in both techniques), and the pixel difference. The objective is thus to implement a technique that can analyze the data set and provide results on the basis of tumour area, taking less time than the existing technique. The main objectives include:
- Acquire an image and perform pre-processing, which includes de-noising.
- Find the Region of Interest (ROI) and perform segmentation with the proposed algorithm.
- Compare the performance of the proposed algorithm with the existing technique on the basis of tumour area, time taken for analysis, and number of iterations of the genetic algorithm.


PROPOSED ALGORITHM
The algorithm is presented as follows. The region of interest (ROI) is selected from the de-noised image by building a fuzzy-coded binary map. The binary map values are selected and their inverse is mixed with the de-noised values; these values are then written into an empty matrix, which yields the segmented liver image. From the selected region of the segmented MR image, the cancer can be detected using GA.
1. Select the image.
2. Generate random noise in the image.
3. De-noise the image using the contour-based filtering method (a sketch of this step follows the list).
4. Select the desired area.
5. Obtain results by varying the number of iterations.
6. Tabular comparison of the results obtained for the base and proposed techniques.
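
A minimal sketch of steps 2-3 in Python, assuming NumPy is available; a median filter stands in here for the paper's contour-based filtering, and the image is a synthetic array rather than a real MR slice:

    import numpy as np

    def add_noise(img, sigma=20.0):
        """Step 2: add Gaussian noise to a grayscale image (0-255)."""
        return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0, 255)

    def median_denoise(img, k=3):
        """Step 3 stand-in: k x k median filter (not the contour-based filter)."""
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.empty_like(img)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = np.median(padded[i:i + k, j:j + k])
        return out

    img = np.full((64, 64), 128.0)   # synthetic stand-in for an MR slice
    restored = median_denoise(add_noise(img))
    print("mean residual:", np.abs(restored - img).mean())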


















Figure 1: Algorithm (select the MR image to be segmented and pre-process it → de-noised MR image → select the region mask for second-level GA processing → display results in terms of area calculation, time taken by the algorithm, and number of GA iterations)
RESULTS AND COMPARISON

The graphical user interface is shown in Figure 2; it is used to run both algorithms on the same image. The image is loaded and noise is added to it; the image is then de-noised using the contour-based filtering method so as to remove all the noise. Once the noise is removed, the image is available for the further application of the base and proposed algorithms.


Figure 2: Graphical user interface    Figure 3: Image selection with and without noise
Once the GUI is ready, the desired image is selected from the data set and noised, as shown in Figure 3; after the application of contour-based filtering the noise is removed and the pre-processing of the image is done to obtain the region of interest.
With the noise removed after filtering, the base Watershed technique is applied. Figure 4 presents the results of the Watershed technique; the results are given on the basis of area, number of iterations, pixel difference and time consumed.



Figure 4: Results with Watershed technique    Figure 5: Results with Genetic algorithm
Table 1 presents the results of the Watershed technique: for 100 iterations the calculated area was 19.62 sq mm, the time consumed was approximately 60 sec (59.87 sec), and a pixel difference of 1968 was generated; approximately one minute is consumed to analyze the desired area.

Technique    Area          No. of iterations   Pixel difference   Time
Watershed    19.62 sq mm   100                 1968               59.87 sec

Table 1: Results of Watershed technique

The Genetic algorithm is applied on the same image and results are obtained for the same parameters. Figure 5 presents the results of the Genetic algorithm technique.

Technique           Area          No. of iterations   Pixel difference   Time
Genetic algorithm   12.87 sq mm   100                 1287               40.59 sec

Table 2: Results of Genetic algorithm


Table 2 presents the results of the Genetic algorithm: for 100 iterations the calculated area was 12.87 sq mm, the time consumed was approximately 40 sec, and a pixel difference of 1287 was generated. The smaller area indicates that a more precise region has been selected; thus better segmentation has taken place in less time.

From the results presented in Tables 1 and 2, a combined table is generated, shown in Table 3.

Technique           Area          No. of iterations   Pixel difference   Time
Watershed           19.62 sq mm   100                 1968               59.87 sec
Genetic algorithm   12.87 sq mm   100                 1287               40.59 sec

Table 3: Results comparison of Genetic algorithm and Watershed technique
From Table 3 it is clear that in terms of time, area and pixel difference the Genetic algorithm provided better results than the base Watershed technique. The Genetic algorithm takes less time and minutely analyzes the problem area, keeping its area small. The results can be varied and improved by varying the number of iterations.

CONCLUSION AND FUTURE SCOPE

A. Conclusions
In this paper a new automatic segmentation technique for liver cancer detection has been developed. The new proposed strategy is based on the Genetic algorithm. From the results and the generated analysis it can be concluded that the Genetic Algorithm can be considered a new advance for liver cancer detection. Moreover, the results obtained show that the current work is able to segment the cancerous region of the liver quite consistently with the regions identified by doctors. Also, from the results presented in the tables it can be observed that the proposed Genetic-algorithm-based technique performs better than Watershed in terms of the performance measures used.

B. Future Scope
In the initialization stage experts are required to enter the initial values in order to start the initial contour; the future work associated with the current research is to automate this initialization process completely, and to develop a program that can compute the area of liver cancer based on 2D segmented boundaries. In addition, development of a similar technique for 3D liver segmentation can be a subject for further work.





REFERENCES:


1. Pedro Rodrigues, Joao L. Vilaca and Jaime Fonseca, An Image Processing Application for Liver Tumour Segmentation,
IEEE, FEBRUARY 2011.
2. Evgin Goceri, Mehmet Z. Unlu, Cuneyt Guzelis and Oguz Dicle, An Automatic Level Set Based Liver Segmentation from
MRI Data Sets, IEEE, 2012.
3. S. Priyadarshni and Dr. D. Selvathi, Survey on Segmentation of Liver from CT Images, IEEE, 2012.
4. Angelina. S, L. Padma Suresh and S. H. Krishna Veni, Image Segmentation Based On Genetic Algorithm for Region
Growth and Region Merging, IEEE, 2012.
5. Ying Li, Yan-Ning Zhang, Ying-Lei Cheng, Rong-Chun Zhao and Gui-Sheng Liao, An Effective Method For Image
Segmentation, IEEE, 2005.
6. S. Cagnoni, A. B. Dobrzeniecki, J. C. Yanch and R. Poli, Interactive Segmentation of Multi-Dimensional Medical Data
With Contour-Based Application of Genetic Algorithms, IEEE, 1994.
7. Consuelo Cruz-Gomez, Petra Wiederhold and Marco Gudino-Zayas, Automatic Liver Tissue Segmentation in Microscopic
Images Using Fusion Color Space and Multiscale Morphological Reconstruction, IEEE, 2013.
8. Jie Lu, Lin Shi, Min Deng, Simon C.H. YU and Pheng Ann Heng, An Interactive Approach To Liver Segmentation in CT
Based on Deformable Model Integrated With Attractor Force, IEEE, 2011.
9. Amir H. Foruzan, Yen-Wei Chen, Reza A. Zoroofi, Akira Furukawa, Yoshinobu Sato and Masatoshi Hori, Multi-mode
Narrow-band Thresholding with Application in Liver Segmentation from Low-contrast CT Images, IEEE, 2009.
10. Jie Lu, Defeng Wang, Lin Shi and Pheng Ann Heng, Automatic Liver Segmentation in CT Images based on Support Vector
Machine, IEEE, 2012.
11. Mridula J and Dipti Patra, Genetic Algorithm based Segmentation of High Resolution Multispectral Images using GMRF Model, IEEE, 2010.
12. Jiang Hua Wei and Yang Kai, Research of Improved Genetic Algorithm for Thresholding Image Segmentation Based on
Maximum Entropy, IEEE, 2010.
13. S. S. Kumar and Dr. R. S. Moni, Diagnosis of Liver Tumour from CT Images Using Fast Discrete Curvelet Transform,
IJCA, 2010.
14. N. Gopinath, Extraction of Cancer Cells From MRI Prostate Image Using MATLAB, IJESIT, 2012.
15. M. A. Ansari and R. S. Anand, Region Based Segmentation and Image Analysis with Application to Medical Imaging, ICTES, 2007.
16. Om Prakash Verma, Madasu Hanmundlu, Sebe Susan, Murlidhar Kulkarni and Puneet Kumar Jain (2011), A Simple Single Seeded Region Growing Algorithm for Color Image Segmentation using Adaptive Thresholding, International Conference on Communication Systems and Network Technologies (IEEE).
17. Hongwei Ji, Jiangping He, Xin Yang, Rudi Deklerck and Jan Cornelis (2013), ACM-Based Automatic Liver Segmentation
From 3-D CT Images by Combining Multiple Atlases and Improved Mean-Shift Techniques, IEEE.
18. Hongying LIU, Yi LIU, Qian LI, Hongyan LIU, Yongan TONG (2011), Medical Image Segmentation Based on Contourlet Transform and Watershed Algorithm, IEEE.
19. Ruchaneewan Susomboon, Daniela Raicu, Jacob Furst (2006), Automatic Single-Organ Segmentation in Computed
Tomography Images, IEEE.
20. Devendra Joshi, Narendra D Londhe (2013), Automatic Liver Tumour Detection in Abdominal CT Images, IJCTEE, Vol
3, Issue 1





Applications of Intelligent Transportation Systems using RFID Systems
Satheesh Kumar M, Prof. Ramesh Y. Mali
University of Pune
E-mail- satheeshkumar2008@gmail.com

Abstract: Intelligent transport systems vary in the technologies applied, from basic management systems such as car
navigation, traffic signal control systems, container management systems, variable message signs, automatic number plate
recognition or speed cameras, to monitoring applications such as security CCTV systems, and to more advanced applications that
integrate live data and feedback from a number of other sources, such as parking guidance and information systems, weather
information, bridge de-icing systems, and the like. Additionally, predictive techniques are being developed to allow advanced
modelling and comparison with historical baseline data. Some of these technologies are described in the following sections.

Keywords: RFID, ITS, APTS, Vehicle Positioning, Public Transportation, IEEE
INTRODUCTION
A system architecture for ITS is an overall framework for ITS that shows the major ITS components and their
interconnections. A very important part of the system architecture is the identification and description of the interfaces
between major ITS components. These interfaces allow the major components of an overall intelligent transportation
system to communicate with one another and to work together. Many important ITS standards are written to make these
interfaces consistent. An ITS system architecture provides a framework for planning, defining, deploying, and integrating
intelligent transportation systems.
An architecture defines:
The user services that ITS systems and applications are expected to perform.
The entities where these functions exist.
The information flows and data flows that connect functions and entities.
This may sound a little complicated, but informally an ITS system architecture describes what ITS does (the user
services), where this happens (entities), and what information moves between these components (flows).
User Services
User services describe the activities that ITS systems and applications perform or support. Typical user services
include providing traveler information, managing traffic, electronically collecting tolls, helping drivers perform better
(especially in emergency situations), responding to traffic incidents, managing public and private vehicle fleets, etc.

OBJECTIVE AND SCOPE
Over the last couple of years, there has been a significant increase in interest in ITS research. The main objective of
Intelligent Transportation Systems is to provide a solution to the current drawbacks of transportation systems and to provide a reliable
bus service network which will attract more people to use it, reducing air pollution by motivating people to use the public transport
system; it provides a lot of advantages with a modification to the current infrastructure and systems.
Core Objectives of ITS
To Provide Alternate Solution For Current System
Overview Of Future Technological Requirement
To Improve Traffic Safety
To Relieve Traffic Congestion
To Improve Transportation Efficiency
To Reduce Air Pollution

PROJECT DESCRIPTION
In the proposed system, passive RFID tags are fitted on the roofs of the buses used for public transportation, and
RFID readers/receivers are placed at each and every bus stop along with a microcontroller unit which has a unique bus stop
identification number stored in its non-volatile memory. These receivers placed at the bus stops are interconnected
through a wired LAN network, with routers used to connect the different nodes to the server. The microcontroller module
at each bus stop also has an Ethernet data transmitter which converts the serial information from the controller to
Ethernet packets using the Telnet application.
The bus identification number read from the passive RFID tag is sent to a centralized server with the help of the Ethernet network.
The bus route information and the bus running information are estimated and stored in a database server, where they can be
linked to the internet or to an SMS (short messaging service) server. Whenever a user needs information on a bus on a particular
route, he/she can send a short message to the server requesting the status of a particular bus, request the
buses running through that particular route, or access the database information using the internet. The distances
between the bus stops are mapped in the server, and accordingly the distance and expected time are also sent along
with the reply to the user request. A hedged sketch of this server-side estimation is given below.
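As an illustration of the distance and expected-time mapping just described, the sketch below assumes Python on the server side; the stop IDs, segment distances and average bus speed are invented placeholders, not values from the paper.

ROUTE_STOPS = ["STOP01", "STOP02", "STOP03", "STOP04"]          # stop IDs along one route
SEGMENT_KM = {("STOP01", "STOP02"): 1.2, ("STOP02", "STOP03"): 0.8,
              ("STOP03", "STOP04"): 1.5}                         # mapped stop-to-stop distances
AVG_SPEED_KMH = 20.0                                             # assumed average bus speed

def eta_reply(last_seen_stop, query_stop):
    """Estimate distance and arrival time from the bus's last reported stop."""
    i, j = ROUTE_STOPS.index(last_seen_stop), ROUTE_STOPS.index(query_stop)
    if j <= i:
        return "Bus has already passed this stop"
    km = sum(SEGMENT_KM[(ROUTE_STOPS[k], ROUTE_STOPS[k + 1])] for k in range(i, j))
    minutes = 60.0 * km / AVG_SPEED_KMH
    return f"Bus last seen at {last_seen_stop}; {km:.1f} km away, ETA {minutes:.0f} min"

print(eta_reply("STOP01", "STOP03"))   # e.g. "Bus last seen at STOP01; 2.0 km away, ETA 6 min"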

Block Diagram of the System:


Figure 1: Block Diagram
MICROCONTROLLER
A microcontroller is a small computer on a single integrated circuit containing a processor core, memory, and programmable
input/output peripherals. Program memory in the form of NOR flash or OTP ROM is also often included on chip, as
well as a typically small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the
microprocessors used in personal computers or other general purpose applications.
Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems,
implantable medical devices, remote controls, office machines, appliances, power tools, toys and other embedded
systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and
input/output devices, microcontrollers make it economical to digitally control even more devices and processes. Mixed
signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems.
The microcontroller used in the proposed system is a general-purpose PIC18F43K22 controller with two serial UARTs
(Universal Asynchronous Receiver and Transmitter). UART1 is connected to the RFID reader/receiver and
UART2 is connected to the Stellaris serial-to-Ethernet converter. The bus stop ID is saved in the NVM (Non-Volatile
Memory) of the controller. The controller sends the bus stop ID on UART2 along with the received tag information. The
overall UART1 and UART2 communication is carried out at a baud rate of 115200. A minimal sketch of the receiving end of this link is given below.
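The paper does not specify the frame format on this Telnet-style link, so the minimal Python sketch below assumes one ASCII line per tag read, of the form "<bus_stop_id>,<tag_id>", arriving over TCP from the stop controllers, and simply logs each sighting; the port number 2300 and the field layout are illustrative assumptions.

import socketserver
import time

class StopReportHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                        # one report line per tag read
            stop_id, tag_id = raw.decode("ascii").strip().split(",")
            # Log the sighting; a real server would update the route database here.
            print(f"{time.strftime('%H:%M:%S')} bus tag {tag_id} seen at stop {stop_id}")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 2300), StopReportHandler) as server:
        server.serve_forever()                        # accept all stop controllers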

RFID TRANSPONDERS
A standard RFID based system consists of transponders (also referred to as tags), readers, radio frequency
modules, antennas and host computers. The tags are small electronic devices that reflect and modify received continuous
radio wave signals, which are retrieved over the air by the reader. It is hard to subdivide the types of tags or systems into
one or two categories. Tags can vary from read-only to read/write, and they can also be either active or passive. RFID
systems may vary in their transmission methods or the frequency they operate on. Whatever their configuration,
their suitability for application to ITS is undeniable. The reader may be fixed or mobile. Fixed readers create a zone for
interrogation with the tags fixed on the objects. This zone is tightly controlled within the range of the reader. The
fixed reader identifies the movement of tags into and out of the zone. Mobile readers are handheld devices or are mounted
in moving vehicles.
The interrogation between the tag and the reader is done in different ways depending on the radio frequency band used
by the tag. Some tags use the near field, in which low and high frequency radio waves are used. In this condition, the tag
and the reader are closely coupled through radio frequencies. The tag is capable of modulating the signals of the
reader by changing its electrical load.


Figure 2: RFID Transponders

By changing the load between lower and higher values, the tag can produce a change which can be detected by
the reader. Tags using UHF and higher frequencies require a different approach. Here the tag is more than one radio
wavelength away from the reader and backscatters the signal.


SOFTWARE
MPLAB Integrated Development Environment (IDE) is a free, integrated toolset for the development of embedded applications
employing Microchip's PIC 8-bit, 16-bit and 32-bit microcontrollers. The MPLAB IDE tool is easy to use and includes software components
for fast application development and debugging. PICPgm is PC software to program PIC microcontrollers using external
programmer hardware connected to the PC. It allows the user to:

Flash (program) a HEX file into a PIC microcontroller
Read the content of a PIC microcontroller and save it to a HEX file
Erase a PIC microcontroller
Check if a PIC microcontroller is empty, i.e. not programmed (blank check)

CONCLUSION
The results of this literature review have shown that many benefits are obtained through deployments of ITS in an existing
transportation system. Based on documented experience locally and throughout the country, ITS deployments in the current scenario have
the potential to offer major benefits. Even though an Intelligent Transportation System (ITS) provides major advantages over the
existing transportation system, the implementation of such a system depends on the government, and proper awareness of the
applications of the ITS system should be made available to the common public. Finally, there is a need for further development of
this system to make it more convenient and more cost-effective.

FUTURE SCOPE
The proposed work is mainly focused on receiving the data from the remote wireless nodes and finding an alternative
solution to the conventional wiring harness. There are different possibilities for extension of the research work, listed
as under:
In our work only three control nodes are provided; several control nodes with mesh networking could be deployed to
cover maximum functionality.
We have used 8 MHz microcontrollers. In future, low power microcontrollers can be developed for wireless
sensors.
A GUI with a data log facility on a PC can serve the purpose of diagnostics and ease of fault finding.
More expertise is required for the packaging and installation of wireless modules.
ACKNOWLEDGMENT
I am thankful to my Guide and P.G. Coordinator for constant encouragement and guidance. I am also thankful to the Principal of the
Institute and the Head of the E&TC Engineering Department for their valuable support. I take this opportunity to express my deep sense of
gratitude towards those who have helped us in various ways in preparing my seminar. Last but not least, I am thankful to my
parents, who have encouraged and inspired me with their blessings.

REFERENCES:
[1] Ali, K., Hassanein, H., Using passive RFID tags for vehicle-assisted data dissemination in intelligent transportation
systems, IEEE Conference Publications, October 2009.
[2] R. Bishop, Intelligent Vehicle Technology and Trends, Artech House Publishers, 2005
[3] http://ttssh2.sourceforge.jp/index.html.en
[4] http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en547759
[5] H. Togashi and S. Yamada. Study on Detection of Car's location using RFIDs and Examination of its Applications, IPSJ SIG
Technical Reports, 2008-ITS-57. 2008



Permeability Behaviour of Self Compacting Concrete
Monika Dhakla¹, Dr S.K. Verma²
¹Lecturer, Department of Civil Engineering, Amity University, Haryana, India
²Associate Professor, Department of Civil Engineering, PEC University of Technology, Chandigarh, India
E-mail- monikadhakla@yahoo.com

Abstract- For several years, the durability of concrete structures has been a major problem posed to engineers. To make a
durable concrete structure, sufficient compaction is required. Excess vibration causes segregation, whereas under-vibration leads to
improper compaction. The answer to this problem is self compacting concrete (SCC), which can get compacted into every corner of the
formwork and the gaps between steel, purely by means of its own weight and without the need for compaction. The durability of concrete depends
largely upon its permeability, which is defined as the ease with which it allows fluids to pass through it. This paper aims
to focus on an experimental study of the permeability of selected SCC trial mixes after different exposure conditions. A suitable
mix was selected on the basis of self-compactability properties, which can be checked by various tests such as the slump test, U-box test, L-
box test and V-funnel test. The various specimens were exposed to different conditions such as normal (lab environment), heat-cool
cycles and wet-dry cycles. The results for the exposed specimens were compared with the specimens cast and kept at room temperature,
so as to estimate the durability parameters for a duration of one month.
Keywords: Aggregate, Coefficient, Durability, Exposure, Permeability, Self Compacting Concrete, Specimens
INTRODUCTION
Nowadays, the performance required of concrete structures is more complicated and diversified. Concrete is required to have high
fluidity, high strength and self-compactability, and to give long-service-life concrete structures. SCC is a highly engineered concrete that addresses
these requirements. Although compressive strength is a measure of durability to a great extent, it is not true that strong concrete is
always durable concrete. It is now recognized that the strength of concrete alone is not sufficient; the degree of harshness of the
environmental conditions to which concrete is exposed over its entire life is equally important. Therefore, both strength and durability
have to be considered explicitly at the design stage.
Rapid chloride permeability, water absorption, water permeability and drying shrinkage are some of the tests which can be done to measure
durability. Permeability and strength are related to each other through the capillary porosity; as a first approximation, the factors that
influence the strength of concrete also influence the permeability. Permeability tests measure the transfer of a liquid or gas into the
concrete under the action of a pressure gradient. They can be either steady state or non-steady state, depending on the condition of
flow established within the pore system of the concrete.
LITERATURE REVIEW
Okamura proposed the use of SCC in 1986. Studies to develop SCC, including a fundamental study on the workability of concrete,
were carried out by Ozawa and Maekawa at the University of Tokyo, and by 1988 the first practical prototypes of SCC were produced.
By the early 1990s Japan had started to develop and use SCC, and as of 2000 the volume of SCC used for prefabricated products and ready
mixed concrete in Japan was over 520,000 cubic yards.
In India the first development work on SCC was reported by Subramanian and Chattopadhyay in 2002. They mentioned in their study
that the self-compactability test (U-tube) and deformability test (slump test) are adequate for SCC. They also reported that the trial
proportions suggested by Okamura and Ozawa appear to be suitable for rounded gravel aggregate; when using crushed angular
aggregates, the proportions have to be adjusted by incorporating more fines.
Dr. R. Malathy and T. Govindasamy in 2006 developed SCC following EFNARC specifications. They developed mix designs of SCC
for different grades, M 20 and M 60, and studied their flow properties, such as passing ability, filling ability and compaction factor, and strength
properties, such as compressive and split tensile strength. They developed comparison charts for different grades of SCC
and conventional concrete.

In 2009, an article written by M. Hunger and A.G. Entrop examined the behaviour of SCC containing micro-encapsulated phase change
materials (PCM), which have the ability to absorb and release thermal energy at a specific temperature, and the influence of PCM
on the mechanical properties of SCC.
Even though durability is a key factor affecting the longevity of concrete structures, unfortunately very limited information is available in the
literature about this important aspect. The available literature reveals that significant research programmes have been carried out regarding the
fresh properties of self compacting concrete, but only a few studies are available regarding the durability characteristics of self
compacting concrete.

EXPERIMENTAL PROGRAMME
The present study comprises water permeability tests on SCC specimens after different exposure conditions. After 28 days of
water curing, the specimens were divided into three groups and exposed for a duration of one month to the conditions detailed below:
- Control (lab environment)
- Heat-cool (heating at 60°C and then cooling at room temperature on alternate days)
- Wet-dry (wetting for 1 day and then drying for 1 day)

1. MATERIAL

- Cement: The cement used for the experimental studies was Ultratech 43 grade OPC as per the specifications of Indian
Standard Code IS: 8112-1989.
- Aggregate: The source of fine and coarse aggregates was the course of the Ghaggar River, which flows in the foothills of the
Shivalik range. The coarse and fine aggregates were crushed aggregates, procured from a locally installed crusher. The sieve
analysis and physical properties of the coarse and fine aggregates satisfied the requirements of IS: 383-1970, as mentioned
in Table 1 below.
TABLE 1
Type of Aggregate    Specific gravity   Fineness modulus
Coarse Aggregate     2.70               3.7
Fine Aggregate       2.60               2.11

- Superplasticizer: Structuro 202, which is light yellow in colour with a pH value of 6.5, was used as the plasticizer.
- Fly Ash: The fly ash used in the present work was procured from Guru Gobind Singh Thermal Power Plant, Ropar (Punjab). To
assess the properties of the fly ash, laboratory tests conducted by the Central Soil and Material Research Station (CSMRS), New
Delhi and CBRI-Roorkee were considered.
Various trial mixes were made to achieve the acceptance criteria of self-compactability. The trial mix given in Table 2
achieved the acceptance criteria, which are given in Table 3, for all the self compacting test methods.
TABLE 2
Water (litre)   Cement (kg)   Sand (kg)   Coarse aggregate (kg)   Stone dust (kg)   Fly ash (kg)   Superplasticizer (litre)
191.78          400.2         431.72      820.04                  421               87.95          6.62

TABLE 3
Contents              Slump flow (mm)   T50 slump flow (sec)   V-funnel T5min (sec)   L-box (H2/H1)   U-box (H2-H1) (mm)
Trial mix             700               4                      3                      0.9             25
Range of acceptance   650 to 800        2 to 5                 0 to 3                 0.8 to 1        0 to 30



2. WATER PERMEABILITY TEST

- Apparatus: There are three permeability cells for 150 mm cube specimens. Each cell consists of a metal cylinder with a ledge
at the bottom and a flange at the top, with a removable cover plate and a funnel. A control panel is provided with three independent control
circuits for the three permeability cells, each control circuit consisting of a water reservoir, graduated gauge, glass tube, air inlet
valve, pressure regulator, pressure gauge (0-15 kg/sq cm), an air bleed valve, a drain cock for the water reservoir and a shut-off
valve for the permeability cell. A common air inlet for the three units is provided for connecting the air compressor. There are
three glass bottles to collect the percolated water. (Test set-up shown in the picture.)
- Testing of Apparatus: Before testing the specimen for water permeability, the annular space between the cell and the specimen is
tightly filled to a depth of about 10 mm using cotton. Then molten sealing compound is poured into the gap until it is
completely filled. A mixture of commercial wax and rosin was used for making the sealing compound to fill the gap. It is
essential that the seal is watertight. This may be very conveniently checked by bolting on the top cover plate, inverting the cell
and applying an air pressure of 2 kg/cm² from below. A little water poured on the exposed face of the specimen is used to detect
any leaks through the seal, which would show up as bubbles along the edge. In case of leaks, the specimen shall be taken out
and resealed. A number of trial mixes (paraffin wax : rosin) were made, as mentioned in Table 4.
TABLE 4
Trial   Ratio (Paraffin wax : Rosin)
TR1     2 : 1
TR2     2 : 2
TR3     2.5 : 2
TR3 fulfilled the above criteria and was used for sealing the specimen. When water passed freely through the drain cock, it was closed
and the water reservoir filled. The reservoir water inlet and air bleeder valve were then closed. With the system completely filled with
water, the desired test pressure of 50 kg/cm² was applied to the water reservoir. After a number of hours (i.e. 2-3 hours) the pressure
decreased, and then with the help of the air compressor the pressure was increased back to the same value, i.e. 50 kg/cm². In the beginning the rate of
water intake is more than the rate of outflow; as the steady state of flow is approached, the two rates tend to become equal, and the
outflow reaches a maximum and stabilizes. The permeability test continues for 24 hours after the steady rate of flow has been reached, and the
outflow is taken as the average of the entire outflow measured during this period of 24 hours. Then the coefficient of permeability is
calculated using the following formula:

K = Q / (A × T × (H/L))
where:
K = coefficient of permeability
Q = quantity of water, in millilitres, percolating over the entire period of the test after the steady state has been reached
A = area of the specimen face
T = time in seconds over which Q is measured
H/L = ratio of the pressure head to the thickness of the specimen
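As a worked check of the formula, the short Python sketch below converts the units and evaluates K; the assumed outflow Q = 26 ml over 24 hours and the head ratio H/L = 500 m / 0.15 m (a 50 kg/cm² pressure head acting on a 150 mm cube) are illustrative figures chosen only to show that K then lands near the order of the values reported in Table 5, not measured data from this study.

def permeability_coefficient(Q_ml, A_mm2, T_s, head_ratio):
    """K = Q / (A * T * (H/L)), returned in m/s."""
    Q_m3 = Q_ml * 1e-6           # millilitres -> cubic metres
    A_m2 = A_mm2 * 1e-6          # square millimetres -> square metres
    return Q_m3 / (A_m2 * T_s * head_ratio)

# 150 mm cube face, 24 h of steady outflow, 50 kg/cm^2 head (~500 m of water):
K = permeability_coefficient(Q_ml=26.0, A_mm2=150 * 150,
                             T_s=24 * 3600, head_ratio=500.0 / 0.15)
print(f"K = {K:.3e} m/s")        # about 4e-12 m/s, the order seen in Table 5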
The coefficient of water permeability for specimens exposed to the three conditions investigated in this study is presented in
Table 5 below.
TABLE 5
Exposure    Average coefficient of water permeability (m/s)
Normal      4.011 × 10⁻¹²
Wet Dry     1.742 × 10⁻¹²
Heat Cool   3.443 × 10⁻¹²

CONCLUSION
When the concrete permeability was tested after one month of exposure to the three different conditions, the following conclusions were obtained.
- The water permeability coefficient in the case of wet-dry exposure was lower compared to specimens exposed to normal
conditions.
- The values of the water permeability coefficient for normal and heat-cool exposure are almost the same.
- The average water permeability coefficient for the SCC specimens under normal conditions was found to be 4.011 × 10⁻¹² m/s, whereas
the value of the water permeability coefficient in the case of wet-dry and heat-cool exposure was found to be 1.742 × 10⁻¹² m/s and
3.443 × 10⁻¹² m/s, respectively.
The average value of the water permeability coefficient for the SCC specimens under normal conditions is lower than the maximum
permissible value of the water permeability coefficient, 15 × 10⁻¹² m/s, recommended by ACI 301-89.


PERMEABILITY TEST APPARATUS

REFERENCES:
[1] M. Shahul Hameed, V. Saraswathi, A.S.S. Sekar, Rapid chloride permeability test on self-compacting high performance green concrete.
[2] Okamura, H. and Ozawa, K. (1994), Self compactable HPC in Japan, SP 169, ACI, Detroit.
[3] Okamura, H., Mix Design for Self-Compacting Concrete, Concrete Library of JSCE, Vol.25, 107-120, 1995
[4] IS: 383-1970 (Reaffirmed 1997) Specifications for Coarse and Fine Aggregates from natural sources, Bureau of Indian
Standards, New Delhi, 1997.
[5] IS 8112 (1989): Specification for 43 grade ordinary Portland cement, Bureau of Indian Standards, New Delhi, 1989.
[6] Subramanian, S. and Chattopadhyay, D., Experiments for mix proportioning of self compacting concrete.
[7] Y.V.Mahesh and Manu Santhanam, Simple test methods to characterize the rheology of self compacting concrete, The Indian
concrete journal, June 2004

[8] Sonebi, M., Bartos, P.J.M., Zhu, W., Gibbs, J., and Tamimi, A., Final Report Task 4 on the SCC Project; Project No. BE 96-
3801; Self-Compacting Concrete: Properties of Hardened Concrete, Advanced Concrete Masonry Center, University of Paisley,
Scotland, UK, May 2000
[9] Petersson, O., Billberg, P., Van, B.K., A Model for Self-Compacting Concrete, Proceedings of RILEM, International Conference
on Production Methods and Workability of Fresh Concrete, Paisley, Scotland, 1996
[10] Okamura, H., Mix Design for Self-Compacting Concrete, Concrete Library of JSCE, Vol.25, 107-120, 1995
[11] Influence of aggregate characteristics on uniformity of SCC by Anirwan Sengupta and Manu Santhanam in Indian Concrete
Journal, June 2009
[12] EN197-1 Cement Composition, Specifications and Conformity Criteria. EFNARC,
http://www.efnarc.org/efnarc/SandGforSCC.PDF
[13] Bauzoubaa N et al, SCC incorporating high volumes of Class F fly ash, preliminary results, Cement and Concrete Research,
Vol.31, 413-420, 2001


















Preheating of Biodiesel for the Improvement of the Performance Characteristics of DI Engine: A Review
Tushar R. Mohod¹, S.S. Bhansali¹, S.M. Moghe¹, T.B. Kathoke¹
¹Asst. Professor, Deptt. of Mechanical Engineering, J.D.I.E.T. Yavatmal, S.G.B.A. University, India
E-mail- tushar17_mohod@yahoo.co.in

Abstract: As alternative fuels for compression ignition engines, biodiesels are the principal renewable and carbon-neutral
sources. The causes of the technical problems arising from the use of various biodiesels are their high surface tension and high
viscosity. In a CI engine, high surface tension and viscosity lead to improper homogeneity of the charge and poor fuel atomization,
which reduces the overall efficiency of the engine. Transesterification and pyrolysis are the processes generally performed in order to
reduce the viscosity of biodiesel, but it still remains higher than that of diesel. Preheating is thus a technique to decrease the viscosity of
biodiesel. Preheating biodiesel at different temperatures, such as 60°C, 90°C, 120°C and 150°C, reduces the viscosity and surface
tension, which enables better fuel injection and thereby better fuel atomization. To increase the fraction of biodiesel in blends, it is
required to reduce the viscosity by preheating. Preheating of biodiesel can be done in recuperators using the exhaust gases; a special
arrangement of the recuperator and exhaust manifold is made for preheating. The preheating of biodiesel results in more complete
combustion of the fuel, which decreases the amount of carbon dioxide, carbon monoxide and particulate exhaust emissions and gives a
cleaner exhaust, while the elevated temperature of the fuel increases NOx emissions.
Keywords: WFO-Waste frying oil, COME-Cottonseed methyl ester, DF-Diesel Fuel.
1. INTRODUCTION
The industrial development and economy of any country depend mainly on its energy resources, petroleum being
one of them. The depletion of diesel resources and its inherent environmental concerns have led to the pursuit of renewable
biodiesels. Biomass sources, particularly biodiesels, have attracted much attention as an alternative energy source. They are renewable,
non-toxic and can be produced locally from agricultural and plant resources. Their utilization is not associated with adverse effects on
the environment because they emit fewer harmful emissions and greenhouse gases [1]. Biodiesel, a form of biomass fuel particularly
produced from vegetable oils, has recently been considered the best candidate for diesel fuel substitution. Biodiesel is a clean,
renewable fuel, simple to use, biodegradable, non-toxic, and essentially free of sulfur. It can be used in any compression ignition
engine without the need for modification. The use of biodiesel will also allow a balance to be sought between agriculture and economic
development. Researchers have evaluated the use of sunflower, jatropha, rice bran, soyabean, cottonseed, rapeseed and orange
oils as potential renewable fuel sources.
The use of biodiesel is restricted by the variation of its injection, ignition and emission characteristics from those of
diesel. The direct use of vegetable oils is generally considered to be unsatisfactory and impractical for diesel engines. The high
viscosity and density of vegetable oil interfere with the injection process and lead to poor fuel atomization. This results in inefficient
mixing of fuel with air, contributing to incomplete combustion, carbonization of the injector tip, poor cold engine start-up, misfire and
an extended ignition delay period. It is therefore unsuitable to use straight vegetable oils in diesel engines. To overcome the problems caused by
the high viscosity of vegetable oils, a number of techniques have been used, including vegetable oil/diesel blends and preheating the
vegetable oil.
A lot of research work has been conducted to examine engine performance and exhaust emissions using preheated
vegetable oils. Barsic et al. [7] have indicated that it is essential to preheat the vegetable oil to 70-90°C to resolve the fuel filter
clogging problem. Ryan and his co-workers [8] have specified a fuel inlet temperature requirement of 140°C for acceptable viscosity
when using vegetable oil as fuel for both direct injection and indirect injection engines. It was reported that heating the vegetable oils to
140°C would (i) reduce the viscosity to near that of diesel at 40°C, (ii) increase the cetane rating, and (iii) improve the spray
characteristics by increasing the penetration rate accompanied by a decrease in cone angle. Bari et al. have shown that preheating
crude palm oil to 60°C is essential to lower its viscosity, ensure smooth flow and avoid fuel filter clogging. It was also indicated that
the injection system was not affected even by heating to 100°C.
2. LITERATURE REVIEW
The use of biodiesel is restricted by the variation of its injection, ignition and emission characteristics from those of
diesel. The direct use of vegetable oils is generally considered to be unsatisfactory and impractical for diesel engines. The high
viscosity and density of vegetable oil interfere with the injection process and lead to poor fuel atomization. During previous studies,
biodiesel was used in different blends with diesel. These blends, without any prior processing, were found insufficient to decrease the viscosity of
the biodiesel; therefore, the injection related problems remained unsolved.
Studies were carried out on the transesterification process, which decreases the viscosity of vegetable oil, but biodiesel still
has higher viscosity and density when compared with diesel fuel. The viscosity of a fuel has important effects on fuel droplet
formation, atomization, vaporization and the fuel-air mixing process, thus influencing the exhaust emissions and performance parameters
of the engine. The higher viscosity of biodiesel compared to diesel limits the use of neat biodiesel and biodiesel blends in the I.C.
engine. It was found that the higher viscosity affects combustion and the proper mixing of fuel with air in the combustion chamber;
it hinders proper atomization, fuel vaporization and ignition. The transesterification process reduces the viscosity of biodiesel, but it is still
much higher compared to diesel.
Investigations have shown that the B20 blend has good performance and emission characteristics in CI engines; thus it is
preheated at different temperatures such as 30°C, 60°C, 90°C and 120°C. A further increase of the biodiesel fraction in the blends will increase
the viscosity and decrease performance. To increase the fraction of biodiesel in blends, it is required to reduce the viscosity by
preheating. Preheating of biodiesel is an easy, economical and efficient way to address the problem mentioned above. The preheating of
the vegetable oil improves the injection characteristics by decreasing the kinematic viscosity, surface tension and density of the
biodiesel.

3. PREHEATING OF BIODIESEL
The preheating process involves heating the biodiesel before injecting it into the combustion cylinder. Biodiesel can be preheated to
different temperatures of 60°C, 90°C, 120°C and 150°C, but case studies on preheating temperature have found that the preheating
temperature for biodiesel should be 90°C to meet the characteristic requirements mentioned above. Heat exchangers can be used to preheat
the biodiesel. Fig. 1 shows a heat exchanger in which hot exhaust gases from the engine are circulated around the fuel-carrying tubes;
these gases increase the temperature of the fuel flowing through the tubes. Alternatively, heating coils can be used for preheating.



Fig. 1: Preheating setup

Fig. 2: Recuperator for heat transfer


The system setup shown in the block diagram of Fig. 1 consists of a fuel line that connects the already existing fuel tank to a pre-heating
container. The container is designed to hold a volume depending on the size of the exhaust manifold. A recuperator consists of copper
tubes which carry the fuel; the hot exhaust gases passing through the manifold flow over these copper tubes, and heat is thus
transferred to the fuel flowing inside the tubes. This container is placed within the manifold such that it receives heat directly from the
exhaust gases. Fuel line: the material used for the fuel line could be reinforced rubber flex pipe, as it can withstand high pressures of
up to 6 MPa and temperatures of around 250°C.
3.1 PRE-HEATING CONTAINER
The pre-heating container should be made of a material that is a good heat conductor and able to withstand temperatures up to
1200°C [2] without undergoing any chemical change or any considerable change in physical properties. That is, the material should
have:
- A low specific heat capacity.
- A low thermal expansion coefficient.
- Inert chemical properties.
The volume of the pre-heating chamber is assumed to be such that it fits snugly into the exhaust manifold.
3.2 EXHAUST MANIFOLD CHANGES
The exhaust manifold, in order to house the pre-heating container, will have to undergo changes as well. The exhaust
manifold is electronically operated [3] and its operation depends on the temperature in the pre-heating container. When the required
temperature is attained, a valve cuts off the supply of exhaust gases.



Fig. 3: Block diagram of fuel flow

3.3 FUEL PASSAGE DIVERSION
In our setup, the fuel is transported from the fuel tank via the fuel line, which is designed to handle the high pressure of the
fuel coming from the primary pump in the fuel tank. The line feeds the fuel into the pre-heating container.
3.4 HEATING MECHANISM
The fuel in the pre-heating chamber is heated using the exhaust gases. The flow of these exhaust gases is controlled with the
help of an electronically controlled bypass valve. When the engine starts, the valve allows exhaust gases to come in contact with the
pre-heating container. Once the required temperature is attained, the valve automatically switches off so that the exhaust gas
supply is cut off in order to maintain a constant fuel temperature; when the temperature drops below the necessary value, the valve
switches on again to restart the heating process. The biodiesel at elevated temperature then flows through a secondary fuel pipe made of the
same material to the fuel injection mechanism; at this point the fuel is both pressurised and at an elevated temperature. A sketch of this on/off logic is given below.
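A minimal sketch of this on/off (bang-bang) valve logic follows, assuming Python, a 90°C set point and a small hysteresis band to avoid valve chatter; the thresholds are assumptions, not measured values from the cited setup.

SET_POINT_C = 90.0      # assumed target fuel temperature
HYSTERESIS_C = 5.0      # reopen the bypass once the fuel cools this far below target

def bypass_valve_open(fuel_temp_c, currently_open):
    """Return True if exhaust gas should flow over the pre-heating container."""
    if fuel_temp_c >= SET_POINT_C:
        return False                          # target reached: cut off exhaust supply
    if fuel_temp_c <= SET_POINT_C - HYSTERESIS_C:
        return True                           # too cool: restart the heating process
    return currently_open                     # inside the band: hold the last state

state = False
for temp in (60, 80, 88, 91, 89, 87, 84):
    state = bypass_valve_open(temp, state)
    print(f"{temp} C -> valve {'open' if state else 'closed'}")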
3.5 FUEL INJECTION
Once the fuel reaches the fuel injection system, the heated fuel is injected into the engine cylinder. Due to the high
temperature, the droplet size and viscosity of the injected biodiesel are drastically reduced compared to a conventional system. This
has great consequences for the combustion properties, as stated ahead.

4. INJECTION, IGNITION AND EMISSION CHARACTERISTICS.
4.1 FUEL INJECTION
Fuel injection is the process of injecting fuel at very high pressure (up to 200 MPa) through a small orifice or multiple orifices in
the injection nozzle into the combustion chamber, which contains air that has been compressed to high pressure and temperature. This
injection process is characterized by atomization. The atomized fuel droplets go through a process of heating and evaporation due to
heat transfer from the hot cylinder. The evaporation process leads to the disappearance of the small droplets and rapid mixing of the
vaporized fuel with the air, resulting in the formation of a very fuel-rich mixture at the tip of the fuel jets. If the fuel jet penetrates too
far, the fuel interacts with the wall, resulting in degraded mixing, low temperature combustion on the walls and high unburned
hydrocarbon, NOx and smoke emissions. If the fuel vaporizes and mixes too close to the nozzle, the mixture will be overly rich,
leading to high unburned hydrocarbon and smoke emissions. The fuel properties that have the greatest effect on
injection include viscosity, density, and surface tension. Fuel injection is mainly characterized by atomization.
4.1.1 Kinematic Viscosity
Viscosity is a measure of the internal fluid friction of a fuel, which opposes any dynamic change in the fluid motion.
Viscosity, a measure of the fuel's resistance to flow, impacts the fuel spray characteristics through the flow resistance inside the injection
system and in the nozzle holes. Higher viscosity generally results in reduced flow rates for equal injection pressure and degraded
atomization. Fuels with high viscosity tend to form larger droplets on injection, which can cause poor combustion and increased exhaust
smoke and emissions. With fuels of low viscosity, the injection system may not be able to supply sufficient fuel to fill the pumping chamber,
and the effect of this again will be a loss in engine power.
Murat Karabektas [16] reported that the specific gravity and kinematic viscosity of cottonseed oil methyl ester (COME)
gradually decrease with an increase in the preheating temperature. The kinematic viscosity is 6.54 cSt at 30°C and
decreases gradually to 1.26 cSt at 120°C. Additionally, the specific gravity decreases from 0.882 at 30°C to 0.851 at 120°C.

Property                         Diesel   Jatropha oil   Cotton seed oil   Rice bran oil
1. Kinematic viscosity (cSt)
   without preheating            2.25     35.9           5.49              5.49
   after preheating at 120°C     -        5.82           1.26              1.26
2. Cetane number                 48       52             -                 56.2
3. Density (gm/cc)               0.82     0.88           0.82              0.78

Table 4.1: Properties of various oils




Fig. 4.2: Viscosity vs. temperature for rice bran oil and cotton seed oil [6]
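To show how the two COME points quoted above (6.54 cSt at 30°C, 1.26 cSt at 120°C) can be interpolated to the other preheating temperatures, the Python sketch below fits the ASTM D341 (Walther) relation log10(log10(v + 0.7)) = A - B*log10(T); applying this petroleum-oil correlation to a biodiesel is itself an assumption made only for illustration.

import math

def walther_fit(t1_c, v1, t2_c, v2):
    """Fit constants A, B of the Walther equation through two (deg C, cSt) points."""
    z = lambda v: math.log10(math.log10(v + 0.7))
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    B = (z(v1) - z(v2)) / (math.log10(T2) - math.log10(T1))
    A = z(v1) + B * math.log10(T1)
    return A, B

def viscosity_cst(t_c, A, B):
    return 10 ** (10 ** (A - B * math.log10(t_c + 273.15))) - 0.7

A, B = walther_fit(30, 6.54, 120, 1.26)      # the two COME data points from [16]
for t in (30, 60, 90, 120):                  # the preheating temperatures studied
    print(f"{t:3d} C: {viscosity_cst(t, A, B):5.2f} cSt")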
4.1.2 Density
Increased density tends to degrade atomization. Fuel density is important for diesel engine performance as it affects
pump and pipeline design. Most importantly, however, it has a significant effect on the atomization quality of the spray injectors,
with subsequent impacts on the efficiency of the combustion and on emissions. All diesel injection systems meter the fuel on a volume
basis, so fuel density affects the mass of fuel injected. Increased density beyond specification results in higher than designed fuel
injection rates due to the direct relationship between mass, volume and density.

4.1.3 Surface tension
Surface tension, or the tendency of the fuel to adhere to itself at the fuel-air interface, affects the tendency of the fuel to form
drops at the jet-air interface. The surface tension of a liquid is a property that allows it to resist an external force. The relatively high
surface tension of bio-oil presumably results from its high water content, water having a high surface tension due to its strong
hydrogen bonding. For bio-oils, surface tension values of 28-40 mN/m at room temperature have been measured. Greater surface
tension causes more shearing between fuel molecules and thus poorer atomization.
4.1.4 Atomization
The high-pressure injection process results in a breakup of the fuel jets into small droplets due to the shear forces
induced between the high velocity jets and the relatively still air in the combustion chamber. This process is known as atomization. In this
process the fuel is split up into small droplets; spray atomization is typically characterized by the ratio of fluid volume to surface area
in a given spray. Atomization affects spray properties such as droplet size, degree of mixing, penetration and evaporation. A
change in any of these properties might lead to different relative durations of the premixed and diffusive combustion regimes.
The droplet distribution (also called dispersion) is a parameter which describes the distribution of the droplets in a spray
volume. It mainly depends on the properties of the fuel being injected, such as density, viscosity
and surface tension. In the case of biodiesels these fuel properties are found to be higher than those of diesel, which degrades the
atomization of biodiesel. Thus preheating the biodiesel to about 90°C decreases the viscosity to nearer that of diesel and thereby improves
atomization.
A well-formed droplet distribution (atomization) promotes fuel evaporation and flame stability, improves fuel combustion
and reduces emissions. Droplets should be small in order to ensure complete carbon burnout and proper flame shape and length in
existing burners, while large spray droplets may cause incomplete combustion, soot (smoke) formation, etc.
4.1.5 Cloud Point
The key flow property for winter fuel specification is the cloud point. The cloud point is the temperature at which wax (crystal)
formation in the fuel might begin. Below the cloud point, these crystals might plug filters or drop to the bottom of a storage
tank. The cloud point is the most widely used and most conservative estimate of the low temperature operability limit. However, fuels can
usually be pumped at temperatures below the cloud point. It is measured as the temperature of first formation of wax (crystals) as the
fuel is cooled, and is a measure of the fuel gelling point at which the fuel can no longer be pumped.
The cloud point of biodiesel creates an ignition problem, called ignition delay, in the winter season and in cold countries.
Preheating the biodiesel will melt the waxy crystals formed during the cold season or in cold countries and will decrease the ignition
delay time. It thus minimizes the excess delay time taken by the fuel and thereby improves the ignition characteristics associated with
biodiesel.

5. EXPERIMENTAL WORK OVER PREHEATED BIODIESELS
Dhinagar et al. [13] tested neem oil, rice bran oil and karanji oil in a low heat rejection engine. An electric heater was used to
heat the oil; the exhaust gas was also utilized for heating the oil. Without heating, a 14% lower efficiency was reported compared to
that of diesel; with heating, however, the efficiency was improved. Silvico et al. [12] used heated palm oil as the fuel in a diesel
generator. The studies revealed that the exhaust gas temperature and specific fuel consumption increased with an increase in the charge
percentage. The carbon monoxide emission increased with the increase of load. Unburned HC emissions were lower at lower
loads but tended to increase at higher loads; this was due to a lack of oxygen resulting from operation at higher equivalence
ratios. Palm oil NOx emissions were lower compared to diesel fuel. They also reported that a diesel generator can be adapted to
run with heated palm oil and would give better performance.
Masjuki et al. [10] used preheated palm oil to run a compression ignition engine. Preheating reduced the viscosity of the fuel, and
hence better spray and atomization characteristics were obtained. Torque, brake power, specific fuel consumption, exhaust emissions
and brake thermal efficiency were found to be comparable to those of diesel.

6. EMISSION
Diesel engines are considered a good alternative to gasoline engines because they produce lower amounts of
emissions. On the other hand, higher emissions of oxides of nitrogen (NOx) and particulate matter (PM) have been noted as major
problems. The major constituents of diesel exhaust include carbon dioxide (CO2), water vapor (H2O), nitrogen (N2) and oxygen
(O2); carbon monoxide (CO), hydrocarbons (HC), oxides of nitrogen (NOx) and particulate matter (PM) are present in smaller but
environmentally significant quantities. In modern diesel engines, the first four species normally make up more than 99% of the exhaust, while the
last four (the harmful pollutants) account for less than 1%. NOx comprises nitric oxide (NO) and nitrogen dioxide (NO2), and
both are considered deleterious to human as well as environmental health. NO2 is considered more toxic than NO; it
affects human health directly and is a precursor to ozone formation, which is mainly responsible for smog formation.
6.1 CO EMISSION
The increasing trend of CO emissions is due to the increase in volumetric fuel consumption and knock with the engine power
output. The formation of CO emissions mainly depends upon the physical and chemical properties of the fuel used. It is observed that
the CO emission of biodiesel is less than that of diesel fuel. The decrease in CO emission for biodiesel is attributed to its high cetane
number and the presence of oxygen in the molecular structure of the biodiesel. The CO emission levels are further reduced for
preheated biodiesel, which is attributed to its reduced viscosity and density and the increased rate of evaporation due to preheating. The
decrease in CO emission is greater when the preheating temperature is increased from 75°C to 90°C.
M. Pugazhvadivu et al. [11] found that the maximum CO emission was 0.22% and 0.77% with diesel and waste frying oil (WFO)
(without preheating), respectively. The CO emission decreased with preheating due to the improvement in spray characteristics
and better air-fuel mixing. The maximum CO emission was 0.58% and 0.48% with WFO (75°C) and WFO (135°C),
respectively.
Hanbey Hazar et al. [15] found that the CO emission decreased for all test fuels with preheating due to the improvement in
spray characteristics and better air-fuel mixing. When rapeseed oil was preheated, CO emissions decreased by 20.59%, 16.67% and
25.86% for DF, B20 and B50, respectively.
Murat Karabektas et al. [16] reported that the CO emissions arising from incomplete combustion decreased on applying
preheating to the fuel. CO emissions obtained with COME operation were on average 14.40-45.66% lower than those with diesel
fuel operation.
6.2 CO2 EMISSIONS
The increasing trend of CO2 emissions is due to the increase in volumetric fuel consumption. It is observed that the CO2
emission of biodiesel is less than that of diesel fuel. This is attributed to the presence of oxygen and the high cetane number of biodiesel.
The CO2 emission levels are further lowered with preheating, which is attributed to the change in fuel consumption caused by the higher
temperature and to improved combustion.
6.3 NOX EMISSIONS
The NOx emissions against increasing preheating temperature of the biodiesel blends are plotted in Fig. 6. The results show that
blends B20 and B40 at preheating temperatures of 60°C, 75°C and 90°C gave increased NOx emissions. This shows that the formation of NOx is very
sensitive to temperature and that it increases with an increase in the preheating temperature of the biodiesel. This is because preheated fuel
premixes more rapidly with the oxygen molecules present in the air, resulting in the formation of increased NOx emissions. The NOx emission is
higher when the preheating temperature is increased from 75°C to 90°C. This is the disadvantage of preheating biodiesel, which can
further be reduced using the EGR technique.

Fig. 6: NOx emissions against preheated biodiesel temperature (°C)

Hanbey Hazar et al. [15] found that the NOx emission increases with the increase in the fuel inlet temperature of preheated rapeseed
oil (RRO). The average NOx emission increased by 19%, 18% and 15% using DF, O20 and O50, respectively. The increase in
NOx emission with preheating may be attributed to the increase in the combustion gas temperature with an increase in fuel inlet
temperature.
It is seen that the maximum NOx emission was 44% lower for waste frying oil (WFO) (without preheating) compared to
diesel. It was found that the NOx emission increased with the increase in the fuel inlet temperature. The maximum NOx emission
increased by 23% and 25% using WFO (75°C) and WFO (135°C), respectively, compared to WFO (without preheating). The
increase in NOx emission with preheating may be attributed to the increase in the combustion gas temperature with an increase in fuel
inlet temperature. However, the NOx emissions were 26% and 28% lower, respectively, using WFO (75°C) and WFO (135°C)
compared with diesel.


6.4 SMOKE EMISSION
M. Pugazhvadivu reported that the smoke emission for WFO (without preheating) was significantly higher than for diesel. This
may be due to the higher viscosity and poor volatility of WFO compared to diesel; the combustion characteristics are improved by preheating [10-14].
The smoke emission was reduced by 10% and 24%, respectively, using WFO (75°C) and WFO (135°C) compared to WFO (without
preheating). The maximum reduction in smoke emission was observed using WFO (135°C).
Hazar reported that the lowest smoke densities were obtained with oil preheated to 50°C and 20°C. The average smoke densities
decreased by 9.4%, 20.1% and 26.3% for DF, 20°C and 50°C, respectively. This may be due to the reduction in viscosity and the
subsequent improvement in spray injection.

7. CONCLUSION
Various biodiesels like rapeseed oil, waste frying oil (WFO), rice bran oil and cottonseed methyl ester
(COME) can be used in CI engines without making any changes to the engine. The major technical problem of their higher
viscosity and density can be effectively eliminated by heating the biodiesel before injecting it into the combustion chamber. Biodiesel with
its viscosity thus decreased can be successfully used, with the following improved ignition and emission characteristics:
1. Preheating of biodiesel effectively decreases the kinematic viscosity, density and surface tension, which markedly
improves the injection of biodiesel by contributing to better fuel atomization at the elevated temperature of the biodiesel.
2. Moreover, preheating reduces the ignition problem by decreasing the ignition delay time during cold starts of the engine in cold
countries.
3. Preheating contributes to a reduction in the CO and CO2 emissions of biodiesel compared to those of pure diesel and unpreheated biodiesel,
while the NOx emission increases with increasing preheating temperature due to the increase in combustion temperature.

REFERENCES:
[1] Tim Gilles, Automotive Service: Inspection, Maintenance, Repair, 4th Edition, pp 336
[2] Jeff Hartman, How to Tune & Modify Engine Management Systems, MotorBooks International 2004, pp37
[3] Ken Pickerill, Automotive engine performance, Cengage Learning 2009, pp 405
[4] Zplus, High temperature viscosity of multi weight oils, Tech brief 13(June 29, 2008), pp 6-8.
[5] Cameron, Dynamic viscosity calculation using ASTM D341, http://www.jiskoot.com/NetsiteCMS/pageid/356/Viscosity%20temp.html
[6] Paul F. Waters, Jeny C. Trippe, Experimental Viscosity, Directory of Graduate Research(2001), pp 603,604.
[7] Dr J. F. Douglas, Dr J. M. Gasoriek, Prof John Swaffield, Lynne Jack, Fluid mechanics, Pearson education limited (Fifth edition,
2005)
[8] George E. Totten, Steven R. Westbrook, Rajesh J. Shah, Fuels and Lubricants Handbook, ASTM International (01-June-2003).
[9] David John Leeming, Reg Hartley, Heavy Vehicle Technology, Nelson Thornes(1981).
[10] H.H. Masjuki, M.J. Abedin, M.A. Kalam, A. Sanjid, S.M. Ashrafur Rahman, I.M. Rizwanul Fattah, Performance, emissions,
and heat losses of palm and jatropha biodiesel blends in a diesel engine, Industrial Crops and Products, Volume
59, August 2014, Pages 96-104.
[11] M. Pugazhvadivu, K. Jeyachandran, Investigations on the performance and exhaust emissions of a diesel
engine using preheated waste frying oil as fuel
[12] Silvico CA, Carlos R, Marious VG, Leonardodos SR, Guilherme F. Performance of a diesel generator
Fueled with palm oil. Journal of Fuels 2002;81:2097-102.
[13] Dhinagar S, Nagalingam B. Experimental investigation on non-edible vegetable oil operation in a LHR diesel engine for
improved performance, SAE 932846, 1993.
[14] Barsic NJ, Hurnke AL. Performance and emission characteristic of a naturally aspirated diesel engine with vegetable oil
fuels. SAE 1981;117387 (paper no.810262).
[15] Hanbey Hazar, Huseyin Aydin Performance and emission evaluation of a CI engine fueled with preheated raw rapeseed oil
(RRO)diesel blends.
[16] Murat Karabektas, Gokhan Ergen, Murat Hosoz, The effects of preheated cottonseed oil methyl ester on the performance and
exhaust emissions of a diesel engine
[17] International Journal of Advances in Engineering & Technology, Nov. 2012. IJAET ISSN: 2231-1963 593
Vol. 5, Issue 1, pp. 591-600
[18] W. Ryan MNL37-EB/Jun. 2003 Thomas Diesel Fuel Combustion Characteristics.
[19] Experimental investigation of control of NOx emissions in biodiesel-fueled compression ignition engine.
Received 1 June 2005; accepted 6 December 2005



Similarity Solution for Unsteady MHD Flow near a Stagnation Point of a
Three-Dimensional Porous Body with Heat and Mass Transfer, Using HPM
Vivek Kumar Sharma¹, Aisha Rafi¹
¹Department of Mathematics, Jagan Nath University, Jaipur, Rajasthan, India

Abstract: The problem of unsteady mixed convection heat and mass transfer near the stagnation point of a three-dimensional porous body in the presence of a magnetic field, chemical reaction and heat source or sink is analyzed. A similarity transformation and the Homotopy Perturbation Method are used to solve the transformed similarity equations in the boundary layer. Velocity and temperature distributions are shown through graphs for various physical parameters, and the coefficients of skin friction and heat transfer are presented in tables.
Keywords: similarity transformation, Homotopy Perturbation Method, stagnation point; heat source/sink.
Introduction: Hydromagnetic incompressible viscous flow has many important engineering applications such as magnetohydrodynamic power generators and the cooling of reactors. The laminar flow above a line heat source in a transverse magnetic field was studied by Gray (1997). Vajravelu and Hadjinicolaou (1997) analyzed the flow and heat transfer characteristics in an electrically conducting fluid near an isothermal sheet. Chamkha (2003) studied the problem of MHD flow of a uniformly stretched vertical permeable surface in the presence of heat generation/absorption and chemical reaction. Cheng and Huang (2004) considered the problem of unsteady flow and heat transfer in the laminar boundary layer on a linearly accelerating surface with suction or blowing in the absence and presence of a heat source or sink. Unsteady heat and mass transfer from a rotating vertical cone with a magnetic field and heat generation or absorption effects was studied by Chamkha and Al-Mudhaf (2005). Chamkha et al. (2006) presented an analysis of the effect of heat generation or absorption on thermophoretic free convection boundary layer flow from a vertical flat plate embedded in a porous medium. Liao (2006) obtained an accurate series solution of unsteady boundary layer flows over an impulsively stretching plate, uniformly valid for all non-dimensional times. Bararnia et al. (2009) investigated analytically the problem of MHD natural convection flow of a heat-generating fluid in a porous medium. Sharma and Singh (2009) presented a numerical solution for the problem of the effects of variable thermal conductivity and heat source/sink on MHD flow near a stagnation point on a linearly stretching sheet.
The main objective of this paper is to study the effects of heat generation and chemical reaction on unsteady
MHD flow heat and mass transfer near a stagnation point of a three dimensional porous body in the presence of
a uniform magnetic field. An efficient, similarity transformation and Homotopy Perturbation Method are used
to solve the transformed similarity equations in the boundary layer.
The Homotopy Perturbation Method is a combination of the classical perturbation technique and the homotopy technique, which eliminates the limitations of the traditional perturbation methods while retaining their full advantages (He, 1999; He, 2003; Dehghan and Shakeri, 2008). To illustrate the basic idea of the Homotopy Perturbation Method for solving nonlinear differential equations, we consider the following nonlinear differential equation:
A(u) − f(r) = 0, r ∈ Ω, (1)
subject to the boundary condition
B(u, ∂u/∂n) = 0, r ∈ Γ, (2)

where A is a general differential operator, B is a boundary operator, f(r) is a known analytic function, and Γ is the boundary of the domain Ω. The operator A can, generally speaking, be divided into two parts: a linear part L and a nonlinear part N. Equation (1) can therefore be rewritten as follows:
L(u) + N(u) − f(r) = 0. (3)
By the homotopy technique, we construct a homotopy V(r,p): Ω × [0,1] → ℝ which satisfies
H(V,p) = (1 − p)[L(V) − L(u₀)] + p[A(V) − f(r)] = 0, (4)
H(V,p) = L(V) − L(u₀) + pL(u₀) + p[N(V) − f(r)] = 0, (5)
where p ∈ [0,1] is an embedding parameter and u₀ is an initial approximation which satisfies the boundary conditions. Clearly,
H(V,0) = L(V) − L(u₀), H(V,1) = A(V) − f(r). (6)
Thus, the changing process of p from zero to unity is just that of V(r,p) from u₀(r) to u(r). In topology, this is called deformation, and L(V) − L(u₀) and A(V) − f(r) are called homotopic.
According to the HPM, we can first use the embedding parameter p as a small parameter and assume that the solution can be written as a power series in p:
V = V₀ + pV₁ + p²V₂ + … (7)
Setting p = 1 results in the approximate solution
u = lim(p→1) V = V₀ + V₁ + … (8)
The series is convergent for most cases; however, the convergence rate depends upon the nonlinear operator A(V). The second derivative of N(V) with respect to V must be small, because the parameter may be relatively large, that is, p → 1.
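As a minimal worked illustration of the procedure (4)-(8) (our own example, not taken from this paper), consider the linear test problem u′ + u = 0 with u(0) = 1, taking L(u) = u′, N(u) = u, f = 0 and the initial guess u₀ = 1. The homotopy (5) then reduces to V′ + pV = 0. Substituting the expansion (7) and collecting powers of p gives
p⁰: V₀′ = 0, V₀(0) = 1 ⇒ V₀ = 1,
p¹: V₁′ + V₀ = 0, V₁(0) = 0 ⇒ V₁ = −t,
p²: V₂′ + V₁ = 0, V₂(0) = 0 ⇒ V₂ = t²/2,
so that (8) yields u ≈ 1 − t + t²/2 − …, the Taylor series of the exact solution u = e^(−t). The same mechanics are applied below to the coupled similarity equations.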
The aim of this paper is to investigate unsteady mixed convection heat and mass transfer near the stagnation point of a three-dimensional porous body in the presence of a magnetic field; a similarity transformation and the Homotopy Perturbation Method are used to solve the transformed similarity equations in the boundary layer.
Formulation of the problem: Consider unsteady laminar incompressible boundary layer flow of a viscous electrically conducting fluid at a three-dimensional stagnation point with magnetic field, chemical reaction, heat generation/absorption and suction/injection effects. A uniform transverse magnetic field is assumed to be applied normal to the body surface. The fluid properties are assumed to be constant and a chemical reaction is taking place in the flow. The governing boundary layer equations, with u_e and v_e the velocity components of the inviscid flow over the three-dimensional body surface, are:
∂u/∂x + ∂v/∂y + ∂w/∂z = 0, (9)
∂u/∂t + u ∂u/∂x + v ∂u/∂y + w ∂u/∂z = ∂u_e/∂t + u_e ∂u_e/∂x + ν ∂²u/∂z² − (σB₀²/ρ)(u − u_e) + gβ_x(T − T∞) + gβ_c(C − C∞), (10)
∂v/∂t + u ∂v/∂x + v ∂v/∂y + w ∂v/∂z = ∂v_e/∂t + v_e ∂v_e/∂y + ν ∂²v/∂z² − (σB₀²/ρ)(v − v_e) + gβ_y(T − T∞) + gβ_c(C − C∞), (11)
∂T/∂t + u ∂T/∂x + v ∂T/∂y + w ∂T/∂z = (k/ρC_p) ∂²T/∂z² + (Q₀/ρC_p)(T − T∞), (12)
∂C/∂t + u ∂C/∂x + v ∂C/∂y + w ∂C/∂z = D ∂²C/∂z² − k_c(C − C∞), (13)
where C is the dimensional concentration, C_p the specific heat of the fluid, D the mass diffusion coefficient, k the fluid thermal conductivity, k_c the chemical reaction parameter, Q₀ the heat generation/absorption coefficient, T the temperature, t the time, u the velocity component in the x-direction, u_e the free stream velocity component in the x-direction, v the velocity component in the y-direction, v_e the free stream velocity component in the y-direction, w the velocity component in the z-direction, and x, y and z the longitudinal, transverse and normal directions.
The corresponding boundary conditions are
t = 0: u(x,y,z,t) = u_i(x,y,z), v(x,y,z,t) = v_i(x,y,z), w(x,y,z,t) = w_i(x,y,z), T(x,y,z,t) = T_i(x,y,z), C(x,y,z,t) = C_i(x,y,z), (14)
t > 0: u(x,y,z,t) = 0, v(x,y,z,t) = 0, w(x,y,z,t) = w_w, T(x,y,z,t) = T_w, C(x,y,z,t) = C_w at z = 0, (15)
t > 0: u(x,y,z,t) → u_e(x,y), v(x,y,z,t) → v_e(x,y), T(x,y,z,t) → T∞, C(x,y,z,t) → C∞ as z → ∞. (16)
Assuming
η = z [a/(ν(1 − λτ))]^(1/2), τ = at, u = ax(1 − λτ)^(−1) f′(η), v = by(1 − λτ)^(−1) s′(η), λτ < 1,
w = −[aν/(1 − λτ)]^(1/2) [f(η) + c s(η)], θ = (T − T∞)/(T_w − T∞), φ = (C − C∞)/(C_w − C∞), c = b/a,
Re_x = ax²/ν, M = σB₀²ν Re_x/(ρa²x²), Δ = Q₀ν Re_x/(ρC_p a²x²), γ = k_c ν Re_x/(a²x²), f_w = −w_w/[aν(1 − λτ)]^(1/2). (17)
Using assumptions (17) in equations (10)-(13) yields the following similarity equations:
f‴ + (f + cs)f″ − f′² − λ(f′ + (η/2)f″) + λ + 1 + M(1 − f′) + γ₁θ + γ₂φ = 0, (18)
s‴ + (f + cs)s″ − c s′² − λ(s′ + (η/2)s″) + λ + c + M(1 − s′) + γ₁θ + γ₂φ = 0, (19)
θ″ + Pr(f + cs − λη/2)θ′ + Pr Δ θ = 0, (20)
φ″ + Sc(f + cs − λη/2)φ′ − Sc γ φ = 0, (21)
where f and s are dimensionless stream functions, M is the magnetic field parameter, Re_x the Reynolds number, c the ratio of the velocity gradients at the edge of the boundary layer, θ the dimensionless temperature, φ the dimensionless concentration, Pr the Prandtl number, Sc the Schmidt number, and γ₁, γ₂ the buoyancy parameters for temperature and concentration.
We introduce the following homotopies:
D(f, p) = (1 − p)[f‴ − Mf′ − (f_I‴ − Mf_I′)] + p[f‴ + (f + cs)f″ − f′² − λ(f′ + (η/2)f″) + λ + 1 + M(1 − f′) + γ₁θ + γ₂φ] = 0, (22)
D(s, p) = (1 − p)[s‴ + Ms′ − (s_I‴ + Ms_I′)] + p[s‴ + (f + cs)s″ − c s′² − λ(s′ + (η/2)s″) + λ + c + M(1 − s′) + γ₁θ + γ₂φ] = 0, (23)
D(θ, p) = (1 − p)[θ″ + Pr θ′ − (θ_I″ + Pr θ_I′)] + p[θ″ + Pr(f + cs − λη/2)θ′ + Pr Δ θ] = 0, (24)
D(φ, p) = (1 − p)[φ″ − Sc φ − (φ_I″ − Sc φ_I)] + p[φ″ + Sc(f + cs − λη/2)φ′ − Sc γ φ] = 0, (25)
where the subscript I denotes the initial approximation.
With the following assumptions
f = f₀ + pf₁ + p²f₂ + …, (26)
s = s₀ + ps₁ + p²s₂ + …, (27)
θ = θ₀ + pθ₁ + p²θ₂ + …, (28)
φ = φ₀ + pφ₁ + p²φ₂ + …, (29)
substituting equations (26), (27), (28) and (29) into equations (22), (23), (24) and (25) and comparing like powers of p, we get the zeroth-order equations:
f₀‴ − Mf₀′ − (f_I‴ − Mf_I′) = 0, (30)
s₀‴ + Ms₀′ − (s_I‴ + Ms_I′) = 0, (31)
θ₀″ + Pr θ₀′ − (θ_I″ + Pr θ_I′) = 0, (32)
φ₀″ − Sc φ₀ − (φ_I″ − Sc φ_I) = 0, (33)
with the corresponding boundary conditions of the zeroth-order equations:
η = 0: θ₀ = 1, φ₀ = 1, s₀′ = 0, s₀ = 1, f₀ = f_w, f₀′ = 1;
η → ∞: θ₀ = 0, φ₀ = 0, s₀′ = 1, f₀′ = 1. (34)
And the first-order equations are:
f₁‴ + Mf₁′ − e^(−η)(f_w + M − 1) − (λ/2)e^(−η)(2 − c) − (1 + c)e^(−2η) = 0, (35)
s₁‴ + Ms₁′ − e^(−η)(2 + f_w − 2c + M − 1) − (λ/2)e^(−η)(2 − c) − (1 + c)e^(−2η) = 0, (36)
θ₁″ + Pr θ₁′ + (1 + Pr)e^(−η) + e^(−η)(f_w − 1 + c) + c e^(−2η) + Pr Δ e^(−η) = 0, (37)
φ₁″ − Sc φ₁ + e^(−η) + e^(−η)(f_w − 1 + c) + c e^(−2η) − Sc γ e^(−η) = 0, (38)
with the corresponding boundary conditions of the first-order equations:
η = 0: θ₁ = 0, φ₁ = 0, s₁′ = 0, s₁ = 1, f₁ = 0, f₁′ = 0;
η → ∞: θ₁ = 0, φ₁ = 0, s₁′ = 1, f₁′ = 1. (39)
Solving equations (30) to (33) and (35) to (38) under the corresponding boundary conditions (34) and (39), and letting p → 1, the values of equations (26) to (29) are obtained in closed form: f, s, θ and φ are expressed as combinations of the exponentials e^(−η) and e^(−2η), with constants A₁-A₈ and B₁-B₇ determined by the parameters M, c, λ, f_w, Pr, Sc, Δ and γ (expressions (40)-(43)).
SKIN FRICTION: The skin-friction coefficient at the sheet is given by
C_fx (Re_x)^(1/2) = 2 f″(0). (44)

Table 1: Skin friction coefficient

λ     Skin friction coefficient
0.1   1.784514
0.5   1.5623433
2.0   1.953478
3.0   2.1021276
4.0   2.9498989
5.0   3.623243
NUSSELT NUMBER: The rate of heat transfer in terms of the Nusselt number at the sheet is given by
(Re_x)^(−1/2) Nu_x = −θ′(0). (45)

Table 2: Nusselt number

λ     M = 0     M = 1.0    M = 2.0
0.1   1.97856   0.123595   0.0010238
1.0   2.01856   3.262325   1.2512687
2.0   2.97856   3.223675   2.0236522

Conclusion: It is observed from Tables 1 and 2 that as λ and M increase, the numerical values of the skin friction coefficient C_fx(Re_x)^(1/2) and the Nusselt number (Re_x)^(−1/2)Nu_x also increase. Figures 1, 2 and 3 show the effects of the ratio of the velocity gradients at the edge of the boundary layer c, the suction/injection parameter f_w, and the magnetic field parameter M on the stream function f; we observe that as c, f_w and M increase, the value of f′ also increases. Figures 4, 5 and 6 show the effects of c, f_w and M on the stream function s; we observe that as c, f_w and M increase, the value of s′ also increases. Figures 7, 8, 9 and 10 show that the temperature and concentration profiles increase as the ratio of the velocity gradients at the edge of the boundary layer c and the suction/injection parameter f_w decrease. The above results obtained by HPM are in good agreement with the results obtained by an iterative tri-diagonal implicit finite-difference method.
Fig. 1: Effect of the velocity gradient ratio c on f′ for accelerating flow (c = −0.2, −0.1, 0, 0.1, 0.2).
Fig. 2: Effect of the transpiration parameter f_w on f′ (f_w = −0.5, −0.2, 0, 0.2, 0.5).
Fig. 3: Effect of the magnetic parameter M on f′ (M = 0, 0.5, 1, 3, 5).
Fig. 4: Effect of the velocity gradient ratio c on s′ for accelerating flow (c = −0.2 to 0.2).
Fig. 5: Effect of the transpiration parameter f_w on s′ (f_w = −0.5 to 0.5).
Fig. 6: Effect of the magnetic parameter M on s′ (M = 0 to 5).
Fig. 7: Effect of the velocity gradient ratio c on θ for accelerating flow (c = −0.2 to 0.2).
Fig. 8: Effect of the transpiration parameter f_w on θ (f_w = −0.5 to 0.5).
Fig. 9: Effect of the velocity gradient ratio c on φ for accelerating flow (c = −0.2 to 0.2).
Fig. 10: Effect of the transpiration parameter f_w on φ (f_w = −0.5 to 0.5).

REFERENCES:
[1] Yih, K.A. (1999). Free convection effect on MHD coupled heat and mass transfer of a moving permeable vertical surface. International Communication in Heat and Mass Transfer 26, 95-104.
[2] Pop, I. and A. Postelnicu (1999). Similarity solutions of free convection boundary layers over vertical and horizontal surfaces in porous media with internal heat generation. International Communication in Heat and Mass Transfer 26, 1183-1191.
[3] He, J.H. (1999). Homotopy perturbation technique. Computer Methods in Applied Mechanics and Engineering 178 (3-4), 257-262.
[4] Eswara, A.T. and G. Nath (1999). Effect of large injection rates on unsteady mixed convection flow at a three-dimensional stagnation point. International Journal of Non-Linear Mechanics 34, 85-103.
[5] Chamkha, A.J. (2003). MHD flow of a uniformly stretched vertical permeable surface in the presence of heat generation/absorption and chemical reaction. International Communication in Heat and Mass Transfer 30, 413-422.
[6] He, J.H. (2003). Homotopy perturbation method: a new nonlinear analytical technique. Applied Mathematics and Computation 135, 73-79.
[7] Cheng, W.T. and C.N. Huang (2004). Unsteady flow and heat transfer on an accelerating surface with blowing or suction in the absence and presence of a heat source or sink. Chemical Engineering Science 59, 771-780.
[8] Xu, H. and S.J. Liao (2005). Analytic solutions of magnetohydrodynamic flows of non-Newtonian fluids caused by an impulsively stretching plate. Journal of Non-Newtonian Fluid Mechanics 159, 46-55.
[9] Chamkha, A.J. and A. Al-Mudhaf (2005). Unsteady heat and mass transfer from a rotating vertical cone with a magnetic field and heat generation or absorption effects. International Journal of Thermal Science 44, 267-276.
[10] Chamkha, A.J., A. Al-Mudhaf and I. Pop (2006). Effect of heat generation or absorption on thermophoretic free convection boundary layer from a vertical flat plate embedded in a porous medium. International Communication in Heat and Mass Transfer 33, 1096-1102.
[11] Liao, S.J. (2006). An analytic solution of unsteady boundary-layer flows caused by an impulsively stretching plate. Communication in Nonlinear Science and Numerical Simulation 11, 326-339.











A Review on Detection of Motion in Real Time Images Using Pel Approach
¹Mr. Parveen Kumar (M.Tech Student), ²Mrs. Pooja Sharma (Astt. Prof.)
¹,²Department of Electronics & Communication Engineering, Galaxy Global Educational Trust's Group of Institutions, Ambala
Email: parveenporiya@gmail.com, pooja.sharma@galaxyglobaledu.com
Contact No. +919416677456

Abstract: Estimating the motion (or dynamics) manifested in a set of images or an image sequence is a fundamental problem in both image and video processing and computer vision. From a computer vision perspective, much of what is interpretable in a real-world scene is reflected in the apparent motion. For instance, estimating apparent motion in a video sequence provides necessary information for many applications including self-directed navigation, industrial process control, 3-D shape reconstruction, object recognition, robotic motion control, object tracing, and automatic image sequence analysis. In image and video processing, estimation of motion plays a dynamic role in video compression as well as multi-frame image enhancement. Different as these applications may seem, they all share one common thread: the demand for accurate estimates of motion at negligible computational cost. Here, a new technique called Motion Detection using Pixel Processing is proposed. This technique is based on a pattern search algorithm in which the pattern size is dynamically determined, based on the mean of two motion vectors of the neighboring macro-blocks instead of one, as is the case with adaptive rood pattern search. We present enhancement-based motion detection, which will be implemented in MATLAB.
Keywords Motion Detection, Pixel Processing, Real time enhancement, motion estimation.

I. INTRODUCTION
Comparing digitally stored video sequences with those stored on celluloid, and considering the fact that data storage and data transmission capacity in computer technology is still restricted, shows the necessity of compressing video data: using the same approach to store videos digitally as is used classically on celluloid would require at least 25 still images per second. A high-quality 90-minute movie with a resolution of 720×576 pixels and a 24-bit colour depth per pixel would require over 156 GB of storage capacity. Transmitting this amount of data over the Internet is unreasonable, especially when real-time performance is needed; this uncompressed video needs a transmission bandwidth of over 237 Mbit/s. Similar problems occur when storing data to disc: only very few memory devices have the necessary capacity [1].
In video sequences, motion is a key source of information. Motion arises due to moving objects in 3D scene, as well as
camera motion. Apparent motion, also known as optical flow, captures resulting spatial-temporal variations of pixel intensities in
successive images of a sequence. The purpose of motion estimation techniques is to recover this information by analyzing image
content. Efficient and accurate motion estimation is an essential component in domains of image sequence examination, computer
vision and video communication.
In the context of image sequence analysis and computer vision, the main objective of motion estimation algorithms is to model the motion in the scene exactly and faithfully. This information is fundamental for video understanding and object tracking. Relevant
applications include video surveillance, robotics, autonomous vehicles navigation, human motion investigation, quality control in
manufacturing, video search and retrieval, and video restoration. Accurate motion is also important in some video processing tasks
such as frame rate conversion or de-interlacing [2].
As far as video coding is concerned, compression is attained by exploiting data redundancies in both spatial and temporal
dimensions. Spatial redundancies reduction is largely achieved by transform-coding, e.g. using the Discrete Cosine Transform (DCT)
or Discrete Wavelet Transform (DWT), which effectively compacts signal energy into a few significant coefficients. In turn, temporal
redundancies are reduced by means of predictive coding. Observing that temporal correlation is maximized along motion trajectories,

motion compensated prediction is used for this purpose. In this situation, the main objective of motion estimation is no longer to find
'true' motion in the scene, but rather to maximize compression efficiency. In other words, motion vectors should deliver an exact
prediction of signal. Moreover, the motion information should enable a compact representation, as it has to be conveyed as overhead
in the compressed code stream. Efficient motion estimation is a key to achieve high compression in video coding applications such as
TV broadcasting, Internet video streaming, digital cinema, DVD and Blu-ray Disc [3].
A video sequence can be considered to be a discretized three-dimensional projection of the real four-dimensional continuous
space-time. The objects in the real world may move, change, or deform. The movements cannot be observed directly; instead, the light reflected from object surfaces and projected onto an image is observed. The light source can be moving, and reflected light varies depending
on the angle between a surface and a light source. There may be objects occluding the light rays and casting shadows. The objects may
be transparent (so that several independent motions could be observed at the same location of an image) or there might be fog, rain or
snow blurring the observed image. The discretization introduces noise into the video sequence, from which the video encoder makes its motion estimates. There may also be noise in the image capture device (such as a video camera) or in electrical transmission lines.
A perfect motion model would take all factors into account and find the motion that has maximum likelihood from the observed video
sequence [4].
The paper is organized as follows. Section II discusses related work on block-based motion estimation. Section III describes the basic motion estimation system, its standards and the block matching technique. Section IV describes the proposed technique of motion detection. Finally, the conclusion is given in Section V.

II. RELATED WORK
In the literature, authors proposed a search technique based on conjugate directions, and another, simpler technique called one-at-a-time search; based on a comparison of the two methods, the latter technique was adopted as the basis for further research. The accepted technique is compared with brute-force search, the existing 2-D logarithmic search, and a modified version of it, for motion compensated prediction, since estimating the motion on a block-by-block basis by brute force requires extensive computation. These motion estimation techniques are applied to video sequences, and their superior performance compared to the existing techniques is illustrated based on quantitative measures of the prediction errors [5].
Some authors noted that the three-step search (TSS) algorithm has been widely used as a motion estimation technique in low bit-rate video compression applications, owing to its simplicity and effectiveness. However, TSS uses a uniformly allocated checking point pattern in its first step, which becomes inefficient for the estimation of small motions. A new three-step search (NTSS) algorithm was therefore proposed. The features of NTSS are that it employs a centre-biased checking point pattern in the first step, derived by making the search adaptive to the motion vector distribution, and a halfway-stop technique to reduce computation cost. Simulation results show that, compared to TSS, NTSS is much more robust, produces smaller motion compensation errors, and has very well-matched computational complexity [6].
Others proposed a new four-step search (4SS) algorithm with a centre-biased checking point pattern for fast block motion estimation. A halfway-stop technique is employed in the new algorithm, with the number of searching steps ranging from 2 to 4 and the total number of checking points varying from 17 to 27. Simulation results show that the proposed 4SS performs better than the well-known three-step search and has similar performance to the new three-step search (N3SS) in terms of motion compensation errors. In addition, the 4SS also reduces the worst-case computational requirement from 33 to 27 search points and the average computational requirement from 21 to 19 search points as compared with N3SS [7].
Authors also proposed a block-based gradient descent search (BBGDS) algorithm to perform block motion estimation in video coding. The BBGDS evaluates the values of a given objective function starting from a small centralized checking block. The minimum within the checking block is found, and the gradient descent direction where the minimum is expected to lie is used to determine the search direction and the position of the new checking block. The BBGDS is compared with full search (FS), three-step search (TSS), one-at-a-time search (OTS), and new three-step search (NTSS). Experimental results show that the proposed technique provides competitive performance with reduced computational complexity [9].

III. MOTION ESTIMATION
Motion estimation (ME) techniques have been successfully applied in motion compensated predictive coding for reducing temporal redundancies. They belong to the class of nonlinear predictive coding techniques. An effective representation of motion is essential in order to reach high performance in video coding. Estimation techniques should on one hand provide good prediction, but on the other hand should have low computational load [8].
Changes between frames are mainly due to the movement of objects. Using a model of the motion of objects between frames, the encoder estimates the motion that occurred between the reference frame and the current frame. This process is called motion detection. The encoder then uses this motion model and information to shift the contents of the reference frame to provide a better prediction of the current frame. This process is called motion compensation (MC), and the prediction so produced is called the motion-compensated prediction (MCP) or displaced frame (DF). The purpose of motion estimation (ME) is to globally minimize the resulting prediction error. As a compromise, block matching ME, even though not optimal, has been universally used in inter-frame motion compensated (MC) predictive coding, since its computational complexity is much lower than that of optical-flow recursive methods [11].
In block-based ME the image is partitioned into blocks and the same displacement vector is assigned to all pixels within a block. The motion model assumes that an image is usually composed of rigid objects in translational motion. Although the assumption of translational motion is often considered a major drawback in the presence of zoom, the block matching technique is still able to estimate the true zooming motion closely.
A. Video Standards
Since there are endless ways to compress and encode data, and many terminal vendors which each may have a unique idea of
data compression, common standards are required, that rigidly define how video is coded in the transmission channel. There are
mainly two standard series in common use, both having several versions. International Telecommunications Union (ITU) started
developing Recommendation H.261 in 1984, and effort was finished in 1990 when it was approved. The standard is aimed for video
conferencing and video phone services over integrated service digital network (ISDN) with bit rate a multiple of 64 kilobits per
second [10].
MPEG-1 is a video compression standard developed in joint operation by International Standards Organization (ISO) and
International Electro-Technical Commission (IEC). The system development was started in 1988 and finished in 1990, and it was
accepted as standard in 1992. MPEG-1 can be used at higher bit rates than H.261, at about 1.5 megabits per second, which is suitable
for storing the compressed video stream on compact disks or for using with interactive multimedia systems [3]. The standard covers
also audio associated with a video [13].
In 1996 a revised version of the standard, Recommendation H.263, was finalized, which adopts some new techniques for compression, such as half-pixel accuracy and optionally smaller block sizes for motion compensation. As a result it gives better video quality than H.261. Recommendation H.261 divides each frame into 16×16 picture element (pixel) blocks for backward motion compensation, and H.263 can also take advantage of 8×8 pixel blocks. A new ITU standard in development is called H.26L, and it allows motion compensation with greater variation in block sizes.
For motion estimation, MPEG-1 uses the same block size as H.261, 16×16 pixels, but in addition to backward compensation, MPEG can also apply bidirectional motion compensation. A revised standard, MPEG-2, was approved in 1994. It is targeted at higher bit rates than MPEG-1, from 2 to 30 megabits per second, where applications may be digital television or video services through a fast computer network. The latest ISO/IEC video coding standard is MPEG-4, which was approved at the beginning of 1999. It is targeted at very low bit rates (8-32 kilobits per second) suitable for e.g. mobile video phones. MPEG-4 can also be used with higher bit rates, up to 4 megabits per second.

B. Block Matching Technique
In a typical block matching algorithm, each frame is divided into blocks, each of which consists of luminance and chrominance blocks. Usually, for coding efficiency, motion estimation is performed only on the luminance block. Each luminance block in the present frame is matched against candidate blocks in a search area on the reference frame. These candidate blocks are just displaced versions of the original block. The best candidate block is found and its displacement (motion vector) is recorded. In a typical inter-frame coder, the input frame is subtracted from the prediction of the reference frame. Consequently the motion vector and the resulting error can be transmitted instead of the original luminance block; thus inter-frame redundancy is removed and data compression is achieved. At the receiver end, the decoder builds the frame difference signal from the received data and adds it to the reconstructed reference frames [15].

International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

765 www.ijergs.org



Figure 1: Block-Matching Motion Estimation [15]


This algorithm is based on a translational model of the motion of objects between frames. It also assumes that all pels within a block undergo the same translational movement.
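To make the matching mechanics concrete, the following is a minimal sketch of the classical full-search block matcher described above (Python/NumPy; the function name, 16×16 block size and ±7 pel search range are illustrative assumptions, not values prescribed by the paper):

import numpy as np

def full_search(cur, ref, block=16, dm=7):
    # For each block of the current frame, find the displacement
    # (dy, dx) within +/- dm pels that minimises the sum of absolute
    # differences (SAD) against the reference frame.
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(int)
            best_sad, best_v = None, (0, 0)
            for dy in range(-dm, dm + 1):
                for dx in range(-dm, dm + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block would leave the frame
                    ref_blk = ref[y:y + block, x:x + block].astype(int)
                    sad = int(np.abs(cur_blk - ref_blk).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors

Fast algorithms such as TSS, NTSS, 4SS and BBGDS replace the exhaustive loop over (dy, dx) with a small, adaptively chosen set of checking points.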

C. Matching Criteria for Motion Estimation
Inter frame predictive coding is used to eliminate the large amount of temporal and spatial redundancy that exists in video
sequences and helps in compressing them. In conventional predictive coding difference between the current frame and the predicted
frame is coded and transmitted. The better the prediction, smaller the error and hence the transmission bit rate when there is motion in
a sequence, then a pel on same part of the moving object is a better prediction for the current pel [12].
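For reference, the most common matching criterion is the sum of absolute differences (the standard textbook definition; the paper does not state it explicitly): for an N×N block at position (x, y) in frame t and a candidate displacement (dx, dy),

SAD(dx, dy) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} |I_t(x+i, y+j) − I_{t−1}(x+i+dx, y+j+dy)|,

and the motion vector is the (dx, dy) minimising SAD over the search range; the mean squared error (MSE) is used analogously.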
1. Block Size
The most important parameter of the BMA is the block size. A smaller block size attains better prediction quality, for a number of reasons: it reduces the effect of the accuracy problem; there is less possibility that the block will contain different objects moving in different directions; and it provides a better piecewise translational approximation to non-translational motion. Since a smaller block size means that there are more blocks (and consequently more motion vectors) per frame, this better prediction quality comes at the expense of a larger motion overhead. Most video coding standards use a block size of 16×16 as a compromise between prediction quality and motion overhead. A number of variable-block-size motion estimation methods have also been proposed in the literature [14].
2. Search Range
The maximum allowed motion displacement d_m, also known as the search range, has a direct impact on both the computational complexity and the prediction quality of the BMA. A small d_m results in poor compensation for fast-moving areas and consequently poor prediction quality. A large d_m, on the other hand, results in better prediction quality but leads to an increase in computational complexity. A larger d_m can also result in longer motion vectors and consequently a slight increase in motion overhead [6]. In general, a maximum allowed displacement of d_m = 15 pels is sufficient for low-bit-rate applications. The MPEG standard uses a maximum displacement of about 15 pels, although this range can optionally be doubled with the unrestricted motion vector mode [15].
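To see why the search range drives complexity, note that a full search examines (2d_m + 1)² candidate positions per block; for d_m = 15 this is 31² = 961 SAD evaluations per block, which is exactly the cost that fast patterns such as TSS, NTSS and 4SS are designed to avoid.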
3. Search Accuracy
Initially, the BMA was designed to estimate motion displacements with full-pel accuracy. Clearly, this limits the performance of the algorithm, since in reality the motion of objects is completely unrelated to the sampling grid. A number of workers in the field have proposed extending the BMA to sub-pel accuracy. For example, Ericsson demonstrated that a prediction gain of about 2 dB can be obtained by moving from full-pel to 1/8-pel accuracy. Girod presented an elegant theoretical analysis of motion-compensated prediction with sub-pel accuracy. He termed the resulting prediction gain the accuracy effect. He also showed that there is a critical accuracy beyond which the possibility of further improving prediction is very small. He concluded that with block sizes of 16×16, quarter-pel accuracy is desirable for broadcast TV signals, whereas half-pel accuracy appears to be sufficient for videophone signals [14].

IV. MOTION DETECTION USING PIXEL APPROACH
The proposed steps for motion estimation using pixel approach are:
1. Interface WEBCAM with MATLAB
a) Install image acquisition device.
b) Retrieve hardware information.
c) Create a video input object.

d) Preview video stream (optional)
e) Configure object properties.
f) Acquire image data.
g) Starting the video input object.
h) Triggering the acquisition.
2. Image Reading
3. Image Enhancement
4. Image Conversion
5. Image Segmentation
6. Apply thresholding process
7. Feature Extraction
8. Detect the final motion
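A minimal sketch of steps 2-8 above (simple frame differencing with a global threshold; the function name, threshold and minimum-area values are illustrative assumptions, not the authors' implementation, and Python/NumPy is used here in place of MATLAB):

import numpy as np

def detect_motion(prev_gray, cur_gray, thresh=25, min_pixels=500):
    # Steps 2-8 in miniature: difference two grey-scale frames,
    # threshold the absolute difference into a binary motion mask
    # (segmentation), and use the motion area as a crude feature
    # to decide whether significant motion occurred.
    diff = np.abs(cur_gray.astype(int) - prev_gray.astype(int))
    mask = diff > thresh                # thresholding step
    moving_pixels = int(mask.sum())     # feature extraction: motion area
    return mask, moving_pixels >= min_pixels

In a live setup the two frames would come from the acquired video stream described next.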

A. Basic Image Acquisition Procedure
This section illustrates basic steps required to create an image acquisition application:
1. Install Your Image Acquisition Device
Follow the setup instructions that come with your image acquisition device. Setup typically involves installing the frame grabber board in your computer, connecting a camera to a connector on the frame grabber board (both supplied by the device vendor), and verifying that the camera is working properly by running the application software that came with it and viewing a live video stream. Generic Windows image acquisition devices, such as webcams and digital video camcorders, typically do not require installation of a frame grabber board; you connect these devices directly to your computer via a USB or FireWire port.
2. Retrieve Hardware Information
You may use the imaqhwinfo function to retrieve Adaptor name, Device ID, Video format. You can optionally specify the
video format of the video input object. To define which video formats an image acquisition device supports, look in the Supported
Formats field of the device info structure returned by the imaqhwinfo function.
3. Create a Video Input Object
In this step you create the video input object that the toolbox uses to represent the connection between MATLAB and an image acquisition device. Using the properties of a video input object, you can control many aspects of the image acquisition process. To create a video input object, enter the videoinput function at the MATLAB prompt. The videoinput function uses the adaptor name, device ID, and video format that you retrieved in step 2 to create the object. The adaptor name is the only required argument; the videoinput function can use defaults for the device ID and video format. For more information about image acquisition objects, see the toolbox documentation on connecting to hardware.

vid = videoinput('matrox');

4. Image Reading
img = imread(strcat(a, num2str(i), b));

This takes the grey values of all the pixels in the grey scale image and puts them all into a matrix img. This matrix img is
now a MATLAB variable, and so we can perform many matrix operations on it. In general the imread function reads pixel values
from an image file, and returns a matrix of all pixel values.

5. Image Enhancement
Image quality is an important factor in performance of minutiae extraction and matching algorithms. A good quality image has high
contrast between ridges and valleys. A poor quality image is low in contrast, noisy, wrecked, or smudgy, causing spurious and missing
minutiae. Poor quality can be due to cuts, creases, or bruises on surface of fingertip, excessively wet or dry skin condition,
uncooperative attitude of subjects, broken and unclean scanner devices, low quality fingers (elderly people, manual worker), and other
factors [11].


6. Image Segmentation
- Segmentation refers to the process of partitioning a digital image into component parts, separate objects, or multiple segments.
- The aim of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse.
- Image segmentation is normally used to locate objects and boundaries in images.
- More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.
- The outcome of image segmentation is a set of segments that collectively cover the entire image.
7. Thresholding
- A grey-scale image is turned into a binary (black and white) image by first choosing a grey level T in the original image, and then converting every pixel to black or white according to whether its grey value is greater than or less than T: a pixel becomes white if its grey level is > T, and black if its grey level is < T.
- Thresholding is a vital part of image segmentation, where we wish to isolate objects from the background. It is also an important component of robot vision. Thresholding can be done very simply in MATLAB.
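A one-line sketch of this operation in Python/NumPy (the threshold T = 128 is an arbitrary example value):

import numpy as np

def to_binary(gray, T=128):
    # Pixels with grey level > T become white (1), the rest black (0).
    return (gray > T).astype(np.uint8)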

B. Implementation

MATLAB is one of a number of commercially available, sophisticated mathematical computation tools, which also include Maple, Mathematica, and MathCAD. Despite what supporters may claim, no single one of these tools is the best; each has strengths and weaknesses. Each allows you to perform simple mathematical computations, and they differ in the way they handle symbolic calculations and more complicated mathematical processes, such as matrix manipulation. For example, MATLAB (short for Matrix Laboratory) excels at computations involving matrices, whereas Maple excels at symbolic calculations. At a basic level, you can think of these programs as sophisticated computer-based calculators: they can perform the same functions as your scientific calculator, and many more. If you have a computer on your desk, you may find yourself using MATLAB instead of your calculator for even the simplest mathematical applications, for example balancing your check book. In many engineering classes, the use of programs such as MATLAB to perform computations is replacing more traditional computer programming.



Figure 2: MATLAB Tool
CONCLUSION
Because the Internet is becoming more and more universal and multimedia technology has progressed, the communication of image data is now a part of daily life. In order to be effective within a limited transmission bandwidth and to convey the most high-quality user information, more advanced compression methods for image data are necessary. Pel-recursive motion estimation is a well-established approach. The proposed algorithm can decrease computational time as compared to block-based techniques. Motion estimation (ME) and compensation techniques, which can effectively remove temporal redundancy between adjacent frames, have been extensively applied in popular video compression coding standards such as MPEG-2 and MPEG-4. The displacement of each picture element in each frame forms the displacement vector field (DVF), and its estimation can be done using at least two successive frames. Pixel-based approaches depend upon the intensity of the image and their performance is affected by the presence of noise, while block-based techniques depend upon motion vectors and have high computation time compared to pixel approaches.

REFERENCES:

[1] Ram Srinivasan and K. R. Rao, "Predictive Coding Based on Efficient Motion Estimation", IEEE Transactions on Communications, Vol. COM-33, No. 7, August 1985.
[2] Renxiang Li, Bing Zeng, and Ming L. Liou, "A New Three-Step Search Algorithm for Block Motion Estimation", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 4, No. 3, August 1994.
[3] Lai-Man Po and Wing-Chung Ma, "A Novel Four-Step Search Algorithm for Fast Block Motion Estimation", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 6, No. 3, pp. 312-317, June 1996.
[4] Lurng-Kuo Liu and Ephraim Feig, "A Block-Based Gradient Descent Search Algorithm for Block Motion Estimation in Video Coding", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 6, No. 3, August 1996.
[5] Jo Yew Tham, Surendra Ranganath, Maitreya Ranganath, and Ashraf Ali Kassim, "A Novel Unrestricted Center-Biased Diamond Search Algorithm for Block Motion Estimation", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 8, No. 3, August 1998.
[6] Ce Zhu, Xiao Lin, and Lap-Pui Chau, "Hexagon-Based Search Pattern for Fast Block Motion Estimation", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 4, May 2002.
[7] Chun-Ho Cheung and Lai-Man Po, "Novel Cross-Diamond-Hexagonal Search Algorithms for Fast Block Motion Estimation", IEEE Transactions on Multimedia, Vol. 7, No. 1, February 2005.
[8] Michael Gallant and Faouzi Kossentini, "An Efficient Computation-Constrained Block-Based Motion Estimation Algorithm for Low Bit Rate Video Coding".
[9] Ishfaq Ahmad, Weiguo Zheng, Jiancong Luo, and Ming Liou, "A Fast Adaptive Motion Estimation Algorithm", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 16, No. 3, March 2006.
[10] Ka-Ho Ng, Lai-Man Po and Ka-Man Wong, "Search Patterns Switching for Motion Estimation Using Rate of Error Descent", ICME 2007.
[11] Sumeer Goel and Magdy A. Bayoumi, "Multi-Path Search Algorithm for Block-Based Motion Estimation", ICIP 2006.
[12] Alexis M. Tourapis, "Enhanced Predictive Zonal Search for Single and Multiple Frame Motion Estimation".
[13] B. Kasi Viswanatha Reddy and Sukadev Meher, "Three Step Diamond Search Algorithm for Fast Block-Matching Motion Estimation", International Conference on Electrical, Electronics, Communications and Photonics, ISBN: 978-93-81693-88-19, Goa, 31st March, 2013.
[14] Sven Klomp, Marco Munderloh, Yuri Vatis, Jörn Ostermann, "Decoder-Side Block Motion Estimation for H.264/MPEG-4 AVC Based Video Coding", Institut für Informationsverarbeitung, Leibniz Universität Hannover, Appelstr. 9a, 30166 Hannover, Germany, 2009 IEEE.
[15] Lai-Man Po, Chi-Wang Ting, Ka-Man Wong, and Ka-Ho Ng, "Novel Point-Oriented Inner Searches for Fast Block Motion Estimation", IEEE Transactions on Multimedia, Vol. 8, No. 1, January 2007.











Design Consideration in an Automatic Can/Plastic Bottle Crusher Machine
Vishal N. Kshirsagar¹, Dr. S.K. Choudhary², Prof. A.P. Ninawe²
¹Research Scholar (M.Tech pursuing, M.E.D.), KDK College of Engineering, Nagpur, Maharashtra, India
²KDK College of Engineering, Nagpur, Maharashtra, India
E-mail: vishalk031@gmail.com

Abstract: This paper describes the design of the various components of a can or plastic bottle crusher machine. This machine is widely used in beverage industries or in scrap dealers' shops to reduce the volume of cans/bottles. The design of the various parts is therefore necessary, and through it the design quality of those parts will be improved. Many researchers have worked on the design and analysis of such machines, but there are still many areas of scope regarding this design. Overall, this project involves processes like design, fabrication and assembly of the different components. After all processes have been done, this crusher may help us to understand the fabrication and designing involved in this project.
Keywords: Design consideration, calculations, design procedure, Crushing force, force analysis, load diagram.


I. INTRODUCTION
The sole purpose of this paper is to convey the fundamental knowledge of design and mechanism. The design is environment friendly and uses simple mechanical principles such as the fulcrum system, the single slider crank mechanism and automation. A certain crushing force is needed to crush the cans/bottles so as to reduce their volume by a large extent, and the design is carried out so that knowledge of designing, mechanisms and forces is deepened. This project consists of designing and fabricating an automatic can crusher machine considering various important parameters: developing a recycle-bin can/bottle crusher so that the can is crushed as flat and as symmetrically as possible and then dropped into the bin. The study of manufacturing was also very important in order to carry out this project and to establish what needs to be done. This project involves the process of designing the different parts of the crusher machine considering the forces and ergonomic factors for the people who use it. This project is mainly about generating a new concept of can/bottle crusher that is easier to carry anywhere and easier to use for crushing cans or bottles. After the design was completed, it was transformed into the real product, with the design used as the guideline.


II. DESIGN PROCEDURE
The aim of this section is to give complete design information about the can crusher machine; the explanations and the other parameters related to the project are included. With references from various sources such as journals, theses and design data books, a literature review has been carried out to collect information related to this project.


Fig.-1: Modeling of machine

A. Design consideration
Considered elements:
- Maximum force required to crush the cans/plastic bottles
- Standard size of cans/plastic bottles
- Material of cans: aluminium

B. Design calculations
The crushing force is determined experimentally: the force required to crush a plastic bottle and the force required to crush a soda/Pepsi can are both measured, and the larger of the two is taken as the design crushing force.

Torque, T = F × r
where r is the radius (length) of the crank and F is the required crushing force.

Power is given by
P = 2πNT/60
where T is the torque required and ω = 2πN/60 is the angular velocity, N being the speed of the crank in rpm.
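As a numerical illustration of these two relations (the crushing force, crank radius and speed below are placeholder values, not the paper's measured data):

import math

def crushing_power(force_n, crank_radius_m, speed_rpm):
    # P = T * omega, with T = F x r and omega = 2*pi*N/60.
    torque = force_n * crank_radius_m           # T = F x r, in N.m
    omega = 2.0 * math.pi * speed_rpm / 60.0    # angular velocity, rad/s
    return torque * omega                       # power, watts

# Example: 1000 N crushing force, 50 mm crank, 60 rpm -> about 314 W
print(crushing_power(1000.0, 0.050, 60.0))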
Again, Power can be calculated by static force analysis,





Fig.-2: Single Slider Crank Mechanism





Fig.-3: Force Analysis of Links 2, 3 and 4

Force F43 is opposite to force F34, and force F43 is also opposite to force F23:
F43 = F23 = F34
Again, force F23 is opposite to force F32, and force F32 is opposite to force F12:
F32 = F23 = F12
Link 2, shown in the figure, carries the forces F32 and F12 separated by a distance d, forming a clockwise couple. Therefore a resisting couple C, equal to F32 × d or F12 × d, acts in the counter-clockwise direction:
C = F32 × d (CCW)
or
Torque, T = F × r
Power is given by
P = 2πNT/60
So, from this we can decide the crushing power required.
C. Design of V-Belt
Design power, P_d = P_R × K_L
where P_R = rated power and load factor K_L = 1.10.
The belt is selected on the basis of the design power: nominal width w, nominal thickness t, recommended diameter D, centrifugal tension factor K_C, bending stress factor K_b.

Peripheral velocity, V_P = πD1N1/60
where D1 = diameter of the smaller pulley (the electric motor shaft pulley) and N1 = speed of the electric motor shaft pulley. If this velocity V_P is in range, then OK.

Now, assuming a velocity ratio VR to calculate the speed of the driven pulley,
N1/N2 = VR
By using the velocity ratio and neglecting slip,
N2/N1 = D1/D2
where D2 = diameter of the larger pulley.

Centre to centre distance for the V-belt,
C = (D1 + D2) or C = D2

Angle of lap (contact) on the smaller pulley,
θ1 = π − (D2 − D1)/C
Angle of lap (contact) on the larger pulley,
θ2 = π + (D2 − D1)/C
Since the smaller value of θ governs the design, θ1 is used.

Belt tension ratio,
F1/F2 = e^(μθ cosec(α/2))
where α = groove angle = 34°, μ = coefficient of friction = 0.3, F1 = tension in the tight side, F2 = tension in the slack side.

Belt tension, (F1 − F2) = P_d/V_P

Power rating per belt = (F_W − F_C) × [(e^(μθ/sin(α/2)) − 1)/e^(μθ/sin(α/2))] × V_P
with working load F_W and centrifugal tension F_C = K_C (V_P/5)².

No. of strands = P_d / (power rating per belt)

Length of the belt,
L = (π/2)(D1 + D2) + 2C + (D1 − D2)²/(4C)

Bending load, F_b = K_b/D
where K_b = bending stress factor and D = diameter of the pulley (smaller or larger).

Initial tension, 2F_i = F1 + F2
Fatigue life of belt, F = F_i + F_C + F_b,max
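The chain of belt relations above can be evaluated with a short script (a sketch only; the rated power, pulley diameters and speed are illustrative assumptions, not the paper's final selection):

import math

def v_belt(pd_w, d1, d2, n1_rpm, mu=0.3, groove_deg=34.0):
    # Peripheral velocity, lap angle on the smaller pulley,
    # tension ratio, individual tensions and belt length.
    vp = math.pi * d1 * n1_rpm / 60.0        # V_P = pi*D1*N1/60, m/s
    c = d1 + d2                              # centre distance, C = D1 + D2
    theta1 = math.pi - (d2 - d1) / c         # lap angle on smaller pulley, rad
    ratio = math.exp(mu * theta1 / math.sin(math.radians(groove_deg / 2.0)))
    f1_minus_f2 = pd_w / vp                  # F1 - F2 = Pd / V_P
    f2 = f1_minus_f2 / (ratio - 1.0)         # slack-side tension, N
    f1 = f2 * ratio                          # tight-side tension, N
    length = (math.pi / 2.0) * (d1 + d2) + 2.0 * c + (d1 - d2) ** 2 / (4.0 * c)
    return vp, theta1, f1, f2, length

# Example: 746 W design power, 75 mm and 300 mm pulleys, 1440 rpm motor
print(v_belt(746.0, 0.075, 0.300, 1440.0))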


D. Design of Shaft
Design torque, T_d = 60 P K_L/(2πN)
Load factor, K_L = 1.75 (for a line shaft).

Selecting shaft material SAE 1030:
S_ut = 527 MPa, S_yt = 296 MPa
τ_max ≤ 0.30 S_yt and τ_max ≤ 0.18 S_ut
Considering F.O.S. = 2, for ductile material with dynamic heavy shocks for machines like forging, shearing and punching:
τ_max = 0.30 × 296/2 = 44.4 N/mm²
τ_max = 0.18 × 527/2 = 47.43 N/mm²
Considering the minimum of the two, τ_max = 44.4 N/mm².

Consider Shaft-2 under loading:
W_P4 = weight of the pulley.

Fig.: Vertical Load Diagram

Resolving all the forces vertically,
R_AV + R_BV = W_P4
Taking moments about A,
W_P4 × 90 = R_BV × 270
where R_BV = vertical reaction at B and R_AV = vertical reaction at A.
The bending moments at A and B are zero:
M_AV = M_BV = 0
where M_AV and M_BV are the vertical bending moments at points A and B respectively.
B.M. at C, M_CV = R_AV × 90

Now, resolving all the forces horizontally,
R_AH + R_BH = F3 + F4

Fig.: Horizontal Load Diagram

Taking moments about A,
(F3 + F4) × 90 = R_BH × 270
The bending moments at A and B are zero:
M_AH = M_BH = 0
where M_AH and M_BH are the horizontal bending moments at points A and B respectively.
B.M. at C, M_CH = R_AH × 90

Resultant bending moment,
M_C = √(M_CV² + M_CH²)

Now, for the diameter of the shaft,
τ_max = [16/(πd³)] √((K_b M)² + (K_t T_d)²)
Recommended values of K_b and K_t for a rotating shaft with suddenly applied load (heavy shocks):
K_b = 2 to 3 = 2.5
K_t = 1.5 to 3 = 2.3
τ_max = 44.4 N/mm²
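Solving this equation for the diameter gives a quick check (a sketch; the bending moment and design torque below are illustrative, not the paper's computed loads):

import math

def shaft_diameter(m_nmm, td_nmm, kb=2.5, kt=2.3, tau_allow=44.4):
    # d^3 = 16/(pi*tau_max) * sqrt((Kb*M)^2 + (Kt*Td)^2), with
    # M and Td in N.mm and tau_allow in N/mm^2; returns d in mm.
    te = math.sqrt((kb * m_nmm) ** 2 + (kt * td_nmm) ** 2)
    return (16.0 * te / (math.pi * tau_allow)) ** (1.0 / 3.0)

# Example: M = 30 kN.mm, Td = 50 kN.mm -> about 25 mm
print(shaft_diameter(30e3, 50e3))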


Consider Shaft-1 under loading:
W_P2 = weight of pulley 2, W_P3 = weight of pulley 3.

Fig.: Vertical Load Diagram

Resolving all the forces vertically,
R_AV + R_BV = W_P2 + W_P3
Taking moments about A,
W_P3 × 35 + W_P2 × 120 = R_BV × 180
where R_BV = vertical reaction at B and R_AV = vertical reaction at A.
The bending moments at A and B are zero:
M_AV = M_BV = 0
where M_AV and M_BV are the vertical bending moments at points A and B respectively.
B.M. at C, M_CV = R_AV × 35
B.M. at D, M_DV = R_BV × 60

Resolving all the forces horizontally,
R_AH + R_BH = (F1 + F2) + (F3 + F4)

Fig.: Horizontal Load Diagram

Taking moments about A,
R_BH × 180 = (F1 + F2) × 120 + (F3 + F4) × 35
where R_BH = horizontal reaction at B and R_AH = horizontal reaction at A.
The bending moments at A and B are zero:
M_AH = M_BH = 0
where M_AH and M_BH are the horizontal bending moments at points A and B respectively.
B.M. at C, M_CH = R_AH × 35
B.M. at D, M_DH = R_BH × 60

Resultant bending moments,
M_C = √(M_CV² + M_CH²)
M_D = √(M_DV² + M_DH²)

Now, for the diameter of the shaft,
τ_max = [16/(πd³)] √((K_b M)² + (K_t T_d)²)
For a rotating shaft with suddenly applied load (heavy shocks):
K_b = 2 to 3 = 2.5
K_t = 1.5 to 3 = 2.3
τ_max = 44.4 N/mm²



E. Design of Pulley

L
P
= 11 mm;
b = 3.3 mm;
h = 8.7 mm
e = 15 0.3;
f = 9-12 = 10.5;
= 34 ;
Min. Pitch Diameter, D
P
= 75 mm

Types of construction Web construction for pulley diameter below 150 mm
Types of construction Arm construction for pulley diameter above 150 mm i.e. for bigger pulleys.

No. Of Arms = 4
No. Of Sets = 1
Rim thickness, t = 0.375 D + 3 (Heavy Duty Pulley)
D = Diameter of pulley

Width of Pulley, W = (n - 1) e + 2f
International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

778 www.ijergs.org


Where n is no. of belts = 1.

Hub Proportions

Hub diameter, D
h
= 1.5 d
s
+ 25 mm
d
s
= Diameter of shaft = 18 mm
Length of Hub, L
h
= 1.5 d
s

Moment on each arm:

M = (F_1 − F_2)(D − D_h)/n

where n = number of arms and D_h = hub diameter.
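The pulley proportioning rules above chain together directly. The sketch below evaluates them in C++ for the values given in this section; the belt tensions F1 and F2 are assumed placeholders, since they are not stated here:

```cpp
// Evaluate the pulley/hub proportioning rules from the design section.
#include <cmath>
#include <cstdio>

int main() {
    const double ds = 18.0;                  // shaft diameter, mm
    const double e = 15.0, f = 10.5;         // groove spacing, edge margin, mm
    const int nBelts = 1, nArms = 4;

    double Dh = 1.5 * ds + 25.0;             // hub diameter, mm
    double Lh = 1.5 * ds;                    // hub length, mm
    double W  = (nBelts - 1) * e + 2.0 * f;  // pulley width, mm

    const double D = 75.0;                   // pitch diameter, mm (min. given)
    double t = 0.375 * std::sqrt(D) + 3.0;   // rim thickness, heavy duty, mm

    const double F1 = 200.0, F2 = 80.0;      // belt tensions, N (assumed)
    double M = (F1 - F2) * (D - Dh) / nArms; // moment on each arm, N.mm

    std::printf("Dh=%.1f Lh=%.1f W=%.1f t=%.2f M=%.1f\n", Dh, Lh, W, t, M);
    return 0;
}
```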


III. FABRICATION


Mechanical Components:
Shaft (2 Nos.)
Pulley (4 Nos.)
Belt (2 Nos.)
Single Slider Mechanism
Separating Bin
Crushing Tray
Angles (for Frame)


IV. CONCLUSION

The above design procedure has been adopted for the fabrication of an automatic can/plastic bottle crusher machine, which will make the product durable for a long time as well as efficient, and it also helps to understand the design concept. Thus, with the help of this design and some other electronic components, an automatic can/plastic bottle crusher machine can be fabricated to reduce the volume of cans/plastic bottles as well as human fatigue.

REFERENCES:
[1] Ramkrushna S. More, Sunil J. Rajpal, "Study of Jaw Plates of Jaw Crusher" (review paper), International Journal of Modern Engineering Research (IJMER), Vol. 3, Issue 1, Jan.-Feb. 2013, pp. 518-522, ISSN: 2249-6645.

[2] Shadab Husain, Mohammad Shadab Sheikh, "Can crusher machine using scotch yoke mechanism", IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), e-ISSN: 2278-1684, p-ISSN: 2320-334X, pp. 60-63.

[3] Che Mohd Akhairil Akasyah B Che Anuar, Faculty of Mechanical Engineering, University Malaysia Pahang, Nov. 2008, project report entitled "Development of the Can Crusher Machine".

[4] Cao Jinxi, Qin Zhiyu, Rong Xingfu, Yang Shichun, "Experimental Research on Crushing Force and its Distribution Feature in Jaw Crusher", 2007 Second IEEE Conference on Industrial Electronics and Applications, 1-4244-0737-0/07/$20.00, 2007 IEEE.

[5] Tiehong Gao, Junyi Cao, Dunyu Zhu, Jinzhang Zhi, "Study on Kinematics Analysis and Mechanism Realization of a Novel Robot Walking on Water Surface", Proceedings of the 2007 IEEE International Conference on Integration Technology, March 20-24, 2007, Shenzhen, China, 1-4244-1092-4/07/$25.00, 2007 IEEE.

[6] "Design Data for Machine Elements" by B.D. Shiwalkar, 2013 edition, and "A Textbook of Machine Design" by R.S. Khurmi and J.K. Gupta, 14th revised edition, S. Chand publication.





















Implementation of dual axis solar tracker model by using microcontroller

Mr. Vishal Bhote¹, Prof. Jaikaran Singh²

¹Research Scholar (M-Tech), SriSatya Sai Institute of Science and Technology, Sehore
²HOD, Digital Communication Engineering Department, SriSatya Sai Institute of Science and Technology, Sehore
Vishal.bhute16@gmail.com, 09423423286

Abstract — As the demand for power keeps increasing, the power sector plays a vital role in our day-to-day life. To meet this ever-increasing demand, solar energy comes to mind, because it is one of the most important renewable energy sources on earth, and it should be collected and utilized to its maximum efficiency. Considering the utilization of solar power, we have tried to develop a dual axis model of a solar panel which can capture the maximum solar power within practical efficiency limits. Experimental designs have shown that solar energy utilization could readily ease the world's power problem if exploited to its maximum. Single axis models have reached up to 50% efficiency, and we have tried to increase the efficiency by a further 20-30%. This paper describes the design of a dual axis solar panel model which tracks the maximum solar energy with the help of a microcontroller. The system comprises a solar panel, microcontroller, gears, sensors and a stepper motor.
Keywords Solar cell, solar panel, solar tracker, photocell, microcontroller, sensor, stepper motor.
INTRODUCTION
In recent years there has been increasing interest in the solar cell as an alternative source of energy. When we consider that the power density received from the sun at sea level is about 100 mW/cm², it is certainly an energy source that requires further research and development to maximise the conversion efficiency from solar to electrical energy. This document describes the control of a solar panel with the help of a microcontroller to track the maximum solar energy. The precise positioning of the solar panel is done by a stepper motor; the microcontroller is the heart of the design for the controlling action. The microcontroller senses the photon energy with the help of a sensor, which provides the interrupt to turn on the controlling action. Photon energy is captured at right angles to the solar panel by the stepper motor. The solar panel consists of a series of solar cells whose output power, in terms of electrical voltage, is provided to the battery for storage. Efficiency calculations are provided at the end to give an exact idea of the dual axis model. This dual axis model is totally interactive in nature due to the microcontroller action. The ports of the microcontroller define the specific functions of the design: port 1 takes the input signal from the sensor, port 2 handles the stepper motor, port 3 defines the excited solar cells, and the converted power is defined by port 4. Environmental conditions, such as cloudy conditions, are also sensed by the microcontroller.
I. SOLAR CELL
The basic construction of a solar cell is as shown in the figure. As shown in the top view, every effort is made to ensure that the surface area perpendicular to the sun is maximum. Also note that the metallic conductor connected to the p-type material and the thickness of the p-type material are such that they ensure a maximum number of photons of light energy will reach the junction.
A photon of light energy in this region may collide with a valence electron and impart to it sufficient energy to leave the parent atom. The result is the generation of free electrons and holes. This phenomenon occurs on each side of the junction. In the p-type material the newly generated electrons are minority carriers and will move rather freely across the junction; a similar discussion is true for the holes created in the n-type material. The result is a minority carrier flow which is opposite in direction to the conventional forward current of the p-n junction. This increase in forward current is as shown in the figure.







Fig 1-Voc and Isc versus illumination for a solar cell


Since v = 0 anywhere on the vertical axis represents a short-circuit condition, the current at this intersection is called the short-circuit current and is represented by the notation I_sc. Under the open-circuit condition (I_d = 0) the photovoltaic voltage V_oc will result. This is a logarithmic function of illumination, as shown in the figure below. V_oc is the terminal voltage of the battery under no-load or open-circuit conditions. Note, however, in the same figure that the short-circuit current is a linear function of the illumination, while the change in V_oc is smaller in this region. The major increase in V_oc occurs at lower-level increases in illumination; eventually a further increase in illumination has very little effect on V_oc, although I_sc will increase, causing the power capability to increase. Selenium and silicon are the most widely used materials for solar cells, although gallium arsenide, indium arsenide and cadmium sulphide, among others, are also used.
The wavelength of the incident light affects the response of the p-n junction to the incident photons. In general, silicon has the higher conversion efficiency and greater stability and is less subject to fatigue. Both materials have excellent temperature characteristics; that is, they can withstand extremely low and high temperatures without a significant drop-off in efficiency. A very recent innovation in the use of solar cells appears in the following figure. The series arrangement of solar cells permits a voltage beyond that of a single element. The performance of a typical four-array solar cell appears in the same figure. At a current of about 2.6 mA, the output voltage is about 1.6 V, resulting in an output power of P = V·I = 1.6 V × 2.6 mA ≈ 4.16 mW.







Fig 2: Relative spectral response for Si, Ge and selenium as compared to the human eye.
The Schottky barrier diode is included to prevent battery current drain through the power converter: it appears as an open circuit to the rechargeable battery and does not draw current from it.
The efficiency of operation of a solar cell is determined by the electrical power output divided by the power provided by the light source, i.e.
η = P_o(electrical) / P_i(light energy) × 100%
Typical levels of efficiency range from 10% to 40%, a level that should improve measurably if the present interest in the dual axis model continues.
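As a quick worked illustration of this relation (a sketch only: the illuminated cell area, and hence P_i, is an assumed figure, not from this paper):

```cpp
// Efficiency check: eta = Po/Pi * 100%, using the four-cell array
// output quoted above (1.6 V at 2.6 mA). The illuminated area is an
// assumed value used only to fix the incident light power Pi.
#include <cstdio>

int main() {
    double V = 1.6, I = 2.6e-3;   // array output: 1.6 V, 2.6 mA
    double Po = V * I * 1000.0;   // electrical output, mW (~4.16 mW)
    double density = 100.0;       // incident power density, mW/cm^2
    double area = 0.2;            // illuminated area, cm^2 (assumed)
    double Pi = density * area;   // light power input, mW
    std::printf("Po = %.2f mW, eta = %.1f%%\n", Po, 100.0 * Po / Pi);
    return 0;
}
```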

II. MICROCONTROLLER INTERFACING
Automatic solar tracker:- We are all familiar with Newton's corpuscular theory of light, according to which light is made up of small particles called corpuscles which travel in straight lines with finite velocity and energy. Solar energy is the major eco-friendly and pollution-free method of producing electricity today. According to the U.S. solar research center, if the total solar energy reaching the earth at one time were converted into electricity, it would be more than the whole power used by all the nations in a year.
Solar panel:- A large component made up of a number of photovoltaic cells connected internally with each other, used to capture sunlight and convert it into electricity.
Solar tracker:- A solar tracker is a device for orienting a solar photovoltaic panel or lens towards the sun by using solar or light sensors connected with the machine (e.g. stepper motor, servo motor, gas-filled piston). Hence, sun tracking systems can collect more energy than a fixed panel system collects.

Need of Solar tracker:-
Increase solar panel output
Maximum efficiency of the panel
Maximize power per unit area
Able to capture energy throughout the day

Types of Solar Trackers:- The sun's position in the sky varies both with the seasons (elevation) and the time of day as the sun moves across the sky. Hence there are two types of solar trackers:


1. Single Axis Solar Tracker.
2. Dual Axis Solar Tracker.

1. Single Axis Solar Tracker:- Single axis solar trackers can have either a horizontal or a vertical axle. The horizontal type is used in tropical regions where the sun gets very high at noon but the days are short. The vertical type is used at high latitudes (such as in the UK) where the sun does not get very high but summer days can be very long.

2. Dual Axis Solar Tracker:- Dual axis solar trackers have both a horizontal and a vertical axle and so can track the sun's apparent motion exactly anywhere in the world. This type of system is used to control astronomical telescopes, so there is plenty of software available to automatically predict and track the motion of the sun across the sky. Dual axis trackers track the sun both east to west and north to south for added power output (approx. 40% gain) and convenience.

III. BLOCK DIAGRAM DESCRIPTION
A. Microcontroller:- It is the major part of the system. The microcontroller controls all the operations; the solar panel is aligned according to the intensity of sunlight under its control.
B. Sensor:- The system consists of two sensor units, each composed of LDRs. One unit is made up of four LDRs placed at the four corners of the solar panel. The intensity of sunlight is sensed by the LDRs and the output is sent to the controller. The control unit analyses it and decides the direction in which the panel has to be rotated so that it receives the maximum intensity of light. The other sensor unit is also composed of LDRs and is meant for the control of a lighting load.
C. Servo motor:- The servo motor is used to rotate the panel in the desired direction. It is controlled by the controller.



Fig 3: block diagram of dual axis model


D. Solar panel:-Solar panel is used for the conversion of solar energy directly into electricity. It is composed of photo voltaic cells,
which convert solar energy into electrical energy.

E. Charge control:-It is meant to control the charging of battery. It sends the status of battery to the microcontroller unit.

F. Battery:-It is for the storage of energy received from the panel. A rechargeable battery is normally employed for this purpose.
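To make the sense-and-rotate loop concrete, here is a minimal control sketch of the four-LDR comparison described above, written in C++ as generic firmware pseudocode. The readLdr/stepMotor helpers, the channel mapping and the deadband threshold are hypothetical stand-ins, not taken from the paper's actual implementation:

```cpp
// One tracking iteration: compare corner LDR readings and step each axis
// towards the brighter side. Hardware access is stubbed out.
#include <cstdlib>
#include <cstdio>

int readLdr(int /*channel*/) { return 512; }  // stub: replace with ADC read
void stepMotor(int axis, int dir) {           // stub: replace with driver code
    std::printf("axis %d step %+d\n", axis, dir);
}

void trackOnce() {
    const int deadband = 20;                  // ignore small imbalances (assumed)
    int tl = readLdr(0), tr = readLdr(1);     // top-left, top-right corners
    int bl = readLdr(2), br = readLdr(3);     // bottom-left, bottom-right

    int ew = (tl + bl) - (tr + br);           // east-west imbalance
    if (std::abs(ew) > deadband) stepMotor(0, ew > 0 ? -1 : +1);

    int ns = (tl + tr) - (bl + br);           // north-south imbalance
    if (std::abs(ns) > deadband) stepMotor(1, ns > 0 ? -1 : +1);
}

int main() { trackOnce(); return 0; }
```

Calling trackOnce() periodically (e.g. every few seconds) keeps the panel aligned while ignoring small sensor noise inside the deadband.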
IV. PROBLEM IDENTIFICATION & PROPOSED METHODOLOGY
The main goal of this project is to develop and implement a prototype of a two-axis solar tracking system based on a microcontroller. The parabolic reflector or parabolic dish is constructed around two feed diameters to capture the sun's energy. The focus of the parabolic reflector is theoretically calculated down to an infinitesimally small point to get an extremely high temperature. This two-axis auto-tracking system has been constructed using an AT89C51 microcontroller. The assembly programming language

is used to interface the AT89C51 with the two-axis solar tracking system. The temperature at the focus of the parabolic reflector is measured with temperature probes. The auto-tracking system is controlled with two 12 V, 6 W DC gearbox motors. Five light sensors (LDRs) are used to track the sun and to start the operation (day/night operation). Time delays are used for stepping the motor and for returning the reflector to its original position. The two-axis solar tracking system is constructed with both hardware and software implementations. The designs of the gear and the parabolic reflector are carefully considered and precisely calculated. The solar tracker can still be enhanced with additional features like rain and wind protection, which can be done in the future; also, a dual axis solar tracker can be constructed using an AVR microcontroller such as the ATmega 8/16/32, which has 32 KB of inbuilt flash memory and an inbuilt analog-to-digital converter.
ACKNOWLEDGMENT
I would like to thank Prof. Mukesh Tiwari for providing me the chance to work on the concept of a dual axis model using a microcontroller operating on a solar panel.

CONCLUSION
In this paper an attempt has been made to implement a dual axis model using a microcontroller operating on a solar panel. The design extracts maximum power from the sun by tracking it using a dual axis solar panel; this is possible when the solar panel is perpendicular to the light coming from the sun. The paper puts forward a novel approach to improving the output power as well as the protection requirements for the circuit against wind and rain.

REFERENCES:
[1] S. Rahman, "Green power: what is it and where can we find it?", IEEE Power and Energy Magazine, vol. 1, no. 1, pp. 30-37, 2003.
[2] D. A. Pritchard, "Sun tracking by peak power positioning for photovoltaic concentrator arrays", IEEE Contr. Syst. Mag., vol. 3, no. 3, pp. 2-8, 1983.
[3] A. Konar and A. K. Mandal, "Microprocessor based automatic sun tracker", IEE Proc. Sci., Meas. Technol., vol. 138, no. 4, pp. 237-241, 1991.
[4] B. Koyuncu and K. Balasubramanian, "A microprocessor controlled automatic sun tracker", IEEE Trans. Consumer Electron., vol. 37, no. 4, pp. 913-917, 1991.
[5] J. D. Garrison, "A program for calculation of solar energy collection by fixed and tracking collectors", Sol. Energy, vol. 72, no. 4, pp. 241-255, 2002.
[6] P. P. Popat, "Autonomous, low-cost, automatic window covering system for daylighting applications", Renew. Energy, vol. 13, no. 1, p. 146, 1998.
[7] M. Berenguel, F. R. Rubio, A. Valverde, P. J. Lara, M. R. Arahal, E. F. Camacho, and M. López, "An artificial vision-based control system for automatic heliostat positioning offset correction in a central receiver solar power plant", Sol. Energy, vol. 76, no. 5, pp. 563-575, 2004.
[8] J. Wen and T. F. Smith, "Absorption of solar energy in a room", Sol. Energy, vol. 72, no. 4, pp. 283-297, 2002.
[9] T. F. Wu, Y. K. Chen, and C. H. Chang, Power Provision and Illumination of Solar Light, Chuan Hwa Science Technology Book CO., LTD, 2007.
[10] C. C. Chuang, Solar Energy Engineering - Solar Cells, Chuan Hwa Science Technology Book CO., LTD, 2007.
[11] "Solar Tracking Application", A Rockwell Automation Publication.
[12] "Azimuth-Altitude Dual Axis Solar Tracker", Worcester Polytechnic Institute, December 16, 2010.
[13] http://en.wikipedia.org/wiki/Microcontroller#Other_microcontroller_features, November 2009







SNM Analysis of SRAM Cells at 45nm, 32nm and 22nm Technology
Gourav Arora¹, Poonam², Anurag Singh³
Department of Electronics & Communication Engineering
OM Institute of Technology and Management, Hisar, India (125001)
¹gouravarora17@gmail.com, ²poonamj2110@gmail.com, ³anurag.sangwan@hotmail.com
Abstract: High read and write noise margin is one of the important challenges of SRAM design. This paper analyzes the read stability and write ability of 6T and 7T SRAM cell structures at different technologies. SRAM cell stability analysis is typically based on Static Noise Margin (SNM) investigation, and SNM affects both the read and write margins. This paper presents the simulation of both SRAM cells and their comparative analysis on the basis of RNM and WNM. The 7T SRAM cell provides higher read and write noise margins as compared to the 6T SRAM cell at different technologies. All simulations of the SRAM cells have been carried out at 45nm, 32nm and 22nm CMOS technology using the Tanner EDA tool.
Keywords: 6T memory, 7T memory, Static Noise Margin (SNM), read stability, read noise margin (RNM), write noise margin (WNM)
1. INTRODUCTION
Static random access memory (SRAM) is a type of semiconductor memory that uses bistable circuitry to store each bit. The memory circuit is said to be static if the stored data can be retained indefinitely (as long as sufficient power supply voltage is provided), without any need for a periodic refresh operation. An SRAM is designed to fill two needs: to provide a direct interface with the CPU at speeds not attainable by DRAMs, and to replace DRAMs in systems that require very low power consumption. In the first role, the SRAM serves as cache memory, interfacing between DRAMs and the CPU. The second driving force for SRAM technology is low power applications. Figure 1 shows a typical PC microprocessor memory configuration [2].

Figure 1 Typical PC Microprocessor Memory Configuration
2. LITERATURE REVIEW OF SRAM CELLS
2.1.6T SRAM CELL
Figure 2 shows the schematic of the 6T SRAM cell. This SRAM cell is composed of six transistors: four transistors (Q1-Q4) comprise two cross-coupled CMOS inverters, plus two NMOS transistors (Q5 and Q6) for access. This configuration is called a 6T cell. Each bit in an SRAM is stored on the four transistors that form the two cross-coupled inverters. This storage cell has two stable states, which are used to denote either 0 or 1. The two access transistors (Q5 and Q6) serve to control access to the storage cell during read and write operations. Access to the cell is enabled by the word line (WL in Figure 2), which controls the two access transistors which, in turn, control whether the cell should be connected to the bit lines BL and BLB. These are used to transfer data for both read and write operations. During read, the WL voltage VWL is raised, and the memory cell discharges either BL (bit line true) or BLB (bit line complement), depending on the data stored on nodes Q and QB. A sense amplifier converts the differential signal to a logic-level output. Then, at the end of the read cycle, the BLs return to the positive supply rail. During write, VWL is raised and the BLs are forced to either VDD or GND (depending on the data), overpowering the contents of the memory cell. During hold, VWL is held low and the BLs are left floating or driven to VDD. The 6T CMOS SRAM cell is the most popular SRAM cell due to its superior robustness, low power and low voltage operation.


Figure 2 Schematic of 6T SRAM Cell
i. Write Operation
The write operation is similar to a reset operation of an SR latch. A write cycle begins by applying the value to be written to the bit lines. If we wish to write a 0, we apply a 0 to the bit lines, i.e. setting BL to 0 and BLB to 1. Similarly, a 1 is written by inverting the values of the bit lines, i.e. setting BL to 1 and BLB to 0. WL is then asserted and the value is latched in. Suppose we want to write a 1 to this SRAM cell: we apply 1 to BL, 0 to BLB, and the word line (WL) is at VDD. When WL is turned ON, current flows from the bit lines to the storage nodes (Q and QB). Transistor Q4 turns ON as the potential at the inverse storage node (QB) falls, so current flows from VDD to the node, and transistor Q1 turns ON as the potential at the storage node (Q) rises, so current flows from the inverse node to ground. Figure 3 shows the simplified model of a 6T CMOS SRAM cell for writing a 1. In this case, transistor Q5 has to be stronger than transistor Q4 to change its state. Transistor Q4 is a PMOS transistor and is inherently weaker than the NMOS transistor Q5 (the mobility of NMOS is higher than the mobility of PMOS). Careful sizing of the transistors in an SRAM cell is needed to ensure proper operation [5].

Figure 3 Simplified model of a 6T CMOS SRAM cell during for write 1
ii. Read Operation
Assume that the content of the memory is a 1, stored at Q. Figure 4 shows the simplified model of a 6T CMOS SRAM cell reading a 1. The read cycle is started by precharging both bit lines (BL and BLB) to logical 1, then asserting the word line WL, enabling both access transistors (Q5 and Q6). Upon read access, the bit line voltage V_BL remains at the precharge level, while the complementary bit line voltage V_BLB is discharged to logical 0 through the series-connected transistors Q1 and Q5 (it eventually discharges through transistor Q1, which is turned on because Q is logically set to 1). On the BL side, transistors Q4 and Q6 pull the bit line towards VDD, a logical 1 (BL is eventually charged by transistor Q4, which is turned on because its gate node is at logical 0). BL and BLB then develop a small voltage difference between them, which reaches a sense amplifier; the amplifier senses which line has the higher voltage and thus determines that a 1 was stored. The higher the sensitivity of the sense amplifier, the faster the read operation. Similarly, if the content of the memory were a 0, the opposite would happen and BLB would be pulled towards 1 and BL towards 0 [5].


Figure 4 Simplified model of a 6T CMOS SRAM cell during to read 1
2.2.7T SRAM CELL
The circuit of the 7T SRAM cell consists of two CMOS inverters (Q1-Q4) connected cross-coupled to each other through an additional NMOS transistor (Q7), whose gate is connected to the write line (W), and two access NMOS transistors (Q5 and Q6) connected to the bit line (BL) and bit line bar (BLB) respectively. Figure 5 shows the schematic of the 7T SRAM cell, where access transistor Q5 is connected to the word line (WL) to perform the write access and Q6 is connected to the read line (R) to perform the read operations. The bit lines act as I/O nodes, carrying data from the SRAM cell to a sense amplifier during a read operation, or into the memory cell during write operations. The proposed 7T SRAM cell depends on cutting off the feedback connection between the two inverters, inv1 and inv2, before a write operation. The feedback connection and disconnection are performed by the extra NMOS transistor Q7, and the cell depends only on BLB to perform a write operation [7].

Figure 5 Schematic of 7T SRAM Cell
i. Write Operation
The write operation of the 7T SRAM cell starts by turning transistor Q7 off, which cuts off the feedback connection. BLB carries the complement of the input data; Q5 is turned ON while Q6 is off. The 7T SRAM cell then looks like two cascaded inverters, inv2 followed by inv1: access transistor Q5 transfers the data from BLB to the node driving inv2 (Q1 and Q3) to develop Q, the cell data. Similarly, Q drives inv1 (Q2 and Q4) to develop QB. Then the word line (WL) is turned off and transistor Q7 is turned ON to reconnect the feedback between the two inverters and stably store the new data.
ii. Read Operation
In the read operation of the 7T SRAM cell, both the word line (WL) and the read signal R are turned on, while transistor Q7 is kept ON. When Q = 0, the read path consists of transistors Q1 and Q6 and the cell behaves like a conventional 6T cell. When Q = 1, the read path consists of

transistor Q2, Q7 and Q5, which represents a read path. In this, the three transistors are connected in series, which reduces the driving
capability of the cell unless these transistors are carefully sized.
3. STATIC NOISE MARGIN
The stability of SRAM circuits depends on the static noise margin. There are two methods to measure the SNM of an SRAM cell. The first is a graphical approach in which the SNM is obtained by drawing and mirroring the inverter characteristics and then finding the maximum possible square between them. Figure 6 shows the standard setup for SNM. The second approach involves the use of noise source voltages at the nodes.

Figure 6 The standard setup for SNM
In the graphical technique, plot the voltage transfer characteristic (VTC) of inverter 2 (inv2) and the inverse VTC of inverter 1 (inv1). The resulting two-lobed curve is called a butterfly curve and is used to determine the SNM. Figure 7 shows the general SNM characteristics during standby. The SNM is defined as the length of the side of the largest square that can be embedded inside the lobes of the butterfly curve [6].

Figure 7 General SNM characteristics during Standby
SNM calculation: with respect to the butterfly curve above, we compute the SNM as follows.
SNM = side of the largest square embedded in a lobe.
For a square, side = diagonal/√2.
So, SNM = maximum diagonal length of the embedded square/√2.
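The embedded-square definition translates directly into a small numerical search. The C++ sketch below finds the side of the largest axis-aligned square that fits inside the upper lobe (equivalent to the diagonal/√2 rule above); the tanh-shaped VTC is stand-in data, not simulator output:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

const double VDD = 1.0;

// Stand-in inverter VTC (monotone decreasing); replace with simulated data.
double f(double vin) {
    return 0.5 * VDD * (1.0 - std::tanh(8.0 * (vin / VDD - 0.5)));
}

// Inverse VTC by bisection (valid because f is strictly decreasing).
double finv(double y) {
    double lo = 0.0, hi = VDD;
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        if (f(mid) > y) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    // Upper lobe of the butterfly curve: bounded above by U(x) = f(x)
    // (inverter 1) and below by L(x) = finv(x) (mirrored inverter 2).
    // A square with left edge a and side s fits iff s <= U(a+s) - L(a),
    // since both boundaries are decreasing; solve for s by bisection.
    double snm = 0.0;
    for (int i = 0; i <= 500; ++i) {
        double a = 0.5 * VDD * i / 500.0;
        double lo = 0.0, hi = VDD;
        for (int k = 0; k < 50; ++k) {
            double s = 0.5 * (lo + hi);
            if (f(a + s) - finv(a) >= s) lo = s; else hi = s;
        }
        snm = std::max(snm, lo);
    }
    std::printf("SNM (upper lobe) = %.1f mV\n", 1000.0 * snm);
    return 0;
}
```

With symmetric inverters both lobes give the same value; for real cells the smaller of the two lobes is reported as the SNM.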
i. Read Noise Margin
The cell is most vulnerable when accessed during a read operation, because it must retain its state in the presence of the bit line precharge voltage. If the cell is not designed properly, it may change its state during a read cycle, which results in either wrong data being read or a destructive read where the cell changes state. Thus, the worst-case noise margin is obtained during read access. Figure 8 shows the VTC curve for the RSNM characteristics. During a write operation the situation is reversed: the requirement is to switch the contents of Q and QB easily. The read margin (RM) is calculated based on a transistor current model; the read margin defines the read stability of the SRAM cell.


Figure 8 VTC curve for RSNM characteristics
ii. Write Noise Margin
The write noise margin is defined as the minimum bit line voltage needed to flip the state of the cell. The value of the write margin depends on the cell design, the SRAM array size and process variation. Write SNM (WSNM) is measured using butterfly or VTC curves as shown in Figure 9, which are obtained from a DC simulation sweeping the inputs of the inverters (QB and Q). WSNM for writing 1 is the width of the smallest square that can be embedded between the lower-right half of the curves. WSNM for writing 0 can be obtained from a similar simulation.

Figure 9 VTC curve for WSNM characteristics (writing 0)
4. SIMULATION AND RESULTS
Tables 1-3 present the static noise margins of the 6T and 7T SRAM cells at 45nm, 32nm and 22nm technology. The results show that the RSNM and WSNM of the 7T SRAM cell are higher than those of the 6T SRAM cell. In Figures 10-12, the profiles of the read and write margins for both cells are shown at the different technologies. To make the testing environment impartial, all the circuits have been simulated with the same input patterns. All the circuits have been simulated at 45nm, 32nm and 22nm technology on the Tanner EDA tool with a supply voltage of 1 V.
i. AT 45nm TECHNOLOGY
Table 1 Static Noise Margin of SRAM Cells at 45nm

SRAM CELLS   READ SNM (mV)   WRITE SNM (mV)
6T           170.97          282.84
7T           206.56          301.72



Figure 10 Calculation of SNM at 45nm technology
ii. AT 32nm TECHNOLOGY
Table 2 Static Noise Margin of SRAM Cells at 32nm

SRAM CELLS   READ SNM (mV)   WRITE SNM (mV)
6T           153.21          268.70
7T           202.71          290.49


Figure 11 Calculation of SNM at 32nm technology

iii. AT 22nm TECHNOLOGY
Table 3 Static Noise Margin of SRAM Cells at 22nm

SRAM CELLS   READ SNM (mV)   WRITE SNM (mV)
6T           145.32          261.62
7T           180.67          270.97



Figure 12 Calculation of SNM at 22nm technology
5. CONCLUSIONS

In this paper, stability analyses of the 6T and 7T SRAM cell topologies have been presented. The 6T SRAM cell provides very low RNM and WNM, while the 7T SRAM cell provides higher read and write noise margins at all the technologies considered. The main reason for the read and write stability improvement in the 7T SRAM cell is that it cuts off the feedback connection between the two inverters, inv1 and inv2, before a write operation. The feedback connection and disconnection are performed by an extra NMOS transistor, and the cell depends only on BLB to perform a write operation. The figures above show that the 7T SRAM cell at 45nm, 32nm and 22nm technology exhibits better read and write stability than the conventional 6T SRAM cell. This paper thus identifies an efficient SRAM memory cell with higher read and write stability across technologies.

REFERENCES:
[1] Prajna Mishra, Eugene John and Wei-Ming Lin, "Static Noise Margin and Power Dissipation Analysis of various SRAM Topologies", IEEE 56th International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 469-472, 2013
[2] Integrated Circuit Engineering Corporation, "SRAM Technology", pp. 8.1-8.11.
[3] Wikipedia, "Static Random Access Memory", en.wikipedia.org/wiki/Static_random-access_memory, 2014
[4] Sung-Mo Kang, Yusuf Leblebici, "CMOS Digital Integrated Circuits Analysis and Design", Tata McGraw-Hill, Third Edition, 2003
[5] Deepa Yagain, Ankit Parakh, Akriti Kedia, Gunjan Kumar Gupta, "Design and implementation of High speed, Low area Multiported Loadless 4T Memory Cell", IEEE Fourth International Conference on Emerging Trends in Engineering & Technology, pp. 268-273, 2011
[6] Nahid Rahman, B. P. Singh, "Static-Noise-Margin Analysis of Conventional 6T SRAM Cell at 45nm Technology", International Journal of Computer Applications, pp. 19-23, 2013
[7] Deependra Singh Rajput, Manoj Kumar Yadav, Pooja Johri, Amit S. Rajput, "SNM analysis during read operation of 7T SRAM cells in 45nm technology for increased cell stability", International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), pp. 2112-2117, 2012
[8] Anie Jain, Shyam Akashe, "Optimization of low power 7T SRAM cell in 45nm Technology", IEEE Second International Conference on Advanced Computing & Communication Technologies (ACCT), pp. 324-327, 2012
[9] Aminul Islam and Mohd Hassan, "Variability Analysis of 6T and 7T SRAM Cell in Sub-45nm Technology", IIUM Engineering Journal, pp. 13-29, 2011
[10] Andrei Pavlov, Manoj Sachdev, "CMOS SRAM Circuit Design and Parametric Test in Nano-Scaled Technologies", Intel Corporation, University of Waterloo, 2008
[11] Sapna Singh, Neha Arora, Meenakshi Suthar and Neha Gupta, "Performance Evaluation of Different SRAM Cell Structures at Different Technologies", International Journal of VLSI Design & Communication Systems (VLSICS), pp. 97-109, 2012
[12] Christiensen D.C. Arandilla, Anastacia B. Alvarez, and Christian Raymund K. Roque, "Static Noise Margin of 6T SRAM Cell in 90-nm CMOS", IEEE UKSim 13th International Conference on Modelling and Simulation, pp. 534-539, 2011

Analysis of a Regenerative Gas Turbine Cycle for Performance Evaluation
Rajiv Ranjan¹, Dr. Mohammad Tariq²

¹Research Scholar (Ph.D), Department of Mechanical Engineering, SSET, SHAITS-DU, Naini, Allahabad, India
²Assistant Professor, Department of Mechanical Engineering, SSET, SHAITS-DU, Naini, Allahabad, India
E-mail: mohdtariq7@gmail.com

ABSTRACT
The effect of a regenerative heat exchanger in a gas turbine is analyzed using a regenerative Brayton cycle model, where all fluid friction losses in the compressor are quantified by an isentropic efficiency term and all global irreversibilities in the heat exchanger are taken into account by means of an effective efficiency. This analysis, which generalizes that reported by Gordon and Huleihil for a simple, non-regenerative Brayton cycle, provides a theoretical tool for the selection of optimal operating conditions in a regenerative gas turbine for the optimum value of compressor efficiency. A regenerative gas turbine engine cycle is presented that yields higher cycle efficiencies than a simple cycle operating under the same conditions. The power output, efficiency and specific fuel consumption are simulated with respect to operating conditions. Analytical formulae for the thermal efficiency are derived taking into account the relevant operating conditions (ambient temperature, compression ratio, regenerator effectiveness, compressor efficiency and turbine inlet temperature). Model calculations for a wide range of parameters are presented, as are comparisons with the simple gas turbine cycle. The power output and thermal efficiency are found to increase with the regenerative effectiveness and the compressor efficiency. The efficiency increases as the compression ratio rises to 15, then decreases with further increase in compression ratio, whereas in the simple cycle the thermal efficiency always increases with compression ratio. An increase in ambient temperature decreases the thermal efficiency, while an increase in turbine inlet temperature increases it.
Keywords: Gas turbine cycle, regeneration, compressor efficiency, TIT, OPR, thermal efficiency, regenerative effectiveness.

1. Introduction
Today, gas turbines are one of the most widely used power generating technologies. Gas turbines are a type of internal combustion (IC) engine in which the burning of an air-fuel mixture produces hot gases that spin a turbine to produce power. It is the production of hot gas during fuel combustion, not the fuel itself, that gives gas turbines their name. Combustion occurs continuously in gas turbines, as opposed to reciprocating IC engines, in which combustion occurs intermittently. To understand the history of the gas turbine, one would have to read several different papers, select material written by personnel from the aviation and land-based sectors, and at that point fill in the gaps. What follows, therefore, are two different accounts of the gas turbine's development; neither of them is wrong. The first presents an aircraft engine development perspective [1].

In the original 19th-century Brayton engine, ambient air is drawn into a piston compressor, where it is compressed, ideally an isentropic process. The compressed air then runs through a mixing chamber where fuel is added, an isobaric process. The heated (by compression), pressurized air and fuel mixture is then ignited in an expansion cylinder and energy is released, causing the heated air and combustion products to expand through a piston/cylinder, another ideally isentropic process. Some of the work extracted by the piston/cylinder is used to drive the compressor through a crankshaft arrangement.

2. Materials and Methods:
2.1 Analysis of the Ideal Cycle
The Air Standard cycle analysis is used here to review analytical techniques and to provide quantitative insights into the performance
of an ideal-cycle engine. Air Standard cycle analysis treats the working fluid as a calorically perfect gas, that is, a perfect gas with
constant specific heats evaluated at room temperature. In Air Standard cycle analysis the heat capacities used are those for air. In the
present work the heat capacities for air and gas are taken as per their chemical constituents. Regeneration involves the installation of a
heat exchanger (recuperator) through which the turbine exhaust gases (point 4 in Fig.1) pass. The compressed air (point 2 in Fig.1) is
then heated in the exhaust gas heat exchanger, before the flow enters the combustor (Fig.1).


Fig.1 Schematic diagram of Regenerative gas turbine cycle

A gas turbine cycle is usually defined in terms of the compressor inlet pressure and temperature, p_i and T_i, the compressor pressure ratio, r_p = p_e/p_i, and the turbine inlet temperature, TIT, where the subscripts correspond to the inlet and exit.

2.2 Thermodynamic analysis of various components
Gas Model
The thermodynamic properties of air and products of combustion are calculated by considering the variation of specific heat, with no dissociation. Tables containing the values of the specific heats against temperature have been published in many references, such as Chappel and Cockshutt [10]. Curve fits of the data are used to calculate the specific heats, specific heat ratio, and enthalpy of air and fuel separately from the given values of temperature. The mixture property is then obtained from the properties of the individual components and the fuel-air ratio (FAR). The following equations are used to calculate the specific heats of air and gas as functions of temperature.

(1) If T_a ≤ 800 K:

c_pa = 1.0189 × 10³ − 0.13784 T_a + 1.9843 × 10⁻⁴ T_a² + 4.2399 × 10⁻⁷ T_a³ − 3.7632 × 10⁻¹⁰ T_a⁴   (1)

(2) If T_a > 800 K:

c_pg = 0.0086 c_p,Ar + 0.7154 c_p,N2 + 0.0648 c_p,O2 + 0.1346 c_p,H2O + 0.0666 c_p,CO2

In the above equations, T stands for the gas or air temperature in K, and t = T/100.


Combustion Chamber
One of the goals of combustion chamber design is to minimize the pressure loss from the compressor to the turbine. Ideally, then, p3 = p2, as assumed by the Air Standard analysis. More realistically, a fixed value of the combustor fractional pressure loss, Δp_cc (perhaps about 0.05, or 5%), may be used to account for burner losses:

Δp_cc = p_b,i − p_b,e   (2)

The rate of heat released by the combustion process may then be expressed as:

Q_a = m_a (1 + FAR) c_pg (T_3 − T_2) [kW]   (3)

where FAR is the mass fuel-air ratio.

Energy balance:

η_b · m_f,cc · LCV_f = m_g,e · h_g,e − m_a,i · h_a,i   (4)

The fuel-to-air ratio (FAR) is calculated as:

FAR = (c_pg T_e − c_pa T_i) / (η_b LCV_f − c_pg T_e)   (5)

In this equation T_e is the turbine inlet temperature, T_i is the stagnation (total) exit temperature of the HP compressor, η_b is the combustion efficiency of the main combustion chamber, normally taken between 0.98 and 0.99, and LCV_f is the lower calorific value of the fuel, taken as 42000 kJ/kg assuming the fuel is diesel. Values of the specific heats of air and gas are calculated from the gas model.



Regenerator
A regenerator is modeled as a gas-to-gas counter-flow heat exchanger (Fig. 2). In it, heat is exchanged between the compressed air coming out of the compressor and the hot gas exiting the gas turbine after expansion, before the gas enters the HRSG.

The advantage of using a recuperator is that some heat is added to the compressed air in the recuperator itself before it enters the combustion chamber, so the same turbine inlet temperature is achieved as when no recuperator is employed, but with lower fuel consumption; hence the efficiency of the plant increases. The following are the assumptions for modeling a recuperator:
- A concept of effectiveness of the recuperator is introduced to account for its inefficiencies.
- There is a pressure drop in the streams passing through the recuperator, which is taken as a percentage of the inlet pressure.
The recuperator effectiveness and the energy balance for the air entering the combustion chamber are given by:

ε_rc = (T_rc,g,i − T_rc,g,e) / (T_rc,g,i − T_rc,a,i)   (6)

and

m_rc,g · c_p,g · ε_rc · (T_rc,g,in − T_rc,g,out) = m_rc,a · c_p,a · (T_rc,a,out − T_rc,a,in)   (7)



Fig. 2 Schematic and temperature vs. heat transfer surface area for a counter-flow surface type recuperator

Figure 2 shows a gas turbine with a counter-flow heat exchanger that extracts heat from the turbine exhaust gas to preheat the compressor discharge air to T_5 ahead of the combustor. As a result, the temperature rise required in the combustor is reduced to T_4 − T_5, a reduction reflected in a direct decrease in fuel consumed.

Note that the compressor and turbine inlet and exit states can be the same as for a simple cycle. In this case the compressor, turbine,
and net work as well as the work ratio are unchanged by incorporating a heat exchanger.

The effectiveness of the heat exchanger, or regenerator, is a measure of how well it uses the available temperature potential to raise the
temperature of the compressor discharge air. Specifically, it is the actual rate of heat transferred to the air divided by the maximum
possible heat transfer rate that would exist if the heat exchanger had infinite heat transfer surface area.

The actual heat transfer rate to the air is c_p(T_5 − T′_2), and the maximum possible rate is c_p(T′_4 − T′_2). Thus the regenerator effectiveness can be written as:

ε_rc = (T_5 − T′_2) / (T′_4 − T′_2)   (8)
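Rearranged, eq. (8) gives the combustor inlet temperature directly: T_5 = T′_2 + ε_rc (T′_4 − T′_2). A one-line check in C++, using illustrative temperatures rather than values from the model runs:

```cpp
// Eq. (8) rearranged: air leaves the regenerator at T5 = T2 + eps*(T4 - T2).
#include <cstdio>

int main() {
    double T2 = 600.0;   // compressor discharge temperature, K (assumed)
    double T4 = 850.0;   // turbine exhaust temperature, K (assumed)
    double eps = 0.9;    // regenerator effectiveness
    double T5 = T2 + eps * (T4 - T2);
    std::printf("combustor inlet air preheated from %.0f K to %.0f K\n", T2, T5);
    return 0;
}
```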

It is seen that the combustor inlet temperature varies from T′_2 to T′_4 as the regenerator effectiveness varies from 0 to 1. The regenerator effectiveness increases as its heat transfer area increases: increased heat transfer area allows the cold fluid to absorb more heat from the hot fluid and therefore leave the exchanger with a higher T_5.

On the other hand, increased heat transfer area implies increased pressure losses on both air and gas sides of the heat exchanger, which
in turn reduces the turbine pressure ratio and therefore the turbine work. Thus, increased regenerator effectiveness implies a tradeoff,
not only with pressure losses but with increased heat exchanger size and complexity and, therefore, increased cost.


Fig. 3 Schematic of a Regenerative gas turbine


Fig. 4 T-s representation of Regenerative gas turbine cycle

The exhaust gas temperature at the exit of the heat exchanger may be determined by applying the steady-flow energy equation to the regenerator, assuming that the heat exchanger is adiabatic, that the mass flow of fuel is negligible compared with the air flow, and noting that no shaft work is involved.

2.3 Gas Turbine Analysis with Regeneration
Fig. 4 shows the T-s diagram for the regenerative gas turbine cycle. The actual and ideal processes are represented by dashed and full lines respectively. The compressor efficiency (η_c), the turbine efficiency (η_t) and the effectiveness of the regenerator (heat exchanger) are considered in this study. These parameters, in terms of temperature, are defined as [8]:

η_c = (T′_2 − T_1) / (T_2 − T_1)   (9)

η_t = (T_4 − T_5) / (T_4 − T′_5)   (10)

The regenerative effectiveness is given by:

ε = (T_3 − T_2) / (T_5 − T_2)   (11)
The work required to run the compressor is expressed as:

W_c = c_pa T_1 (r_p^((γ_a−1)/γ_a) − 1) / η_c   (12)
The work developed by the turbine is:

W_t = c_pg T_4 η_t (1 − 1/r_p^((γ_g−1)/γ_g))   (13)

where the turbine inlet temperature (TIT) = T_4. The net work is expressed as:

W_net = W_t − W_c   (14)

i.e.

W_net = c_pg T_4 η_t (1 − 1/r_p^((γ_g−1)/γ_g)) − c_pa T_1 (r_p^((γ_a−1)/γ_a) − 1) / η_c   (15)
In the combustion chamber, the heat supplied by the fuel is equal to the heat absorbed by the air. Hence:

Q_ad = c_pg { T_4 − T_1 (1 − ε) [1 + (r_p^((γ_a−1)/γ_a) − 1)/η_c] − ε T_4 [1 − η_t (1 − 1/r_p^((γ_g−1)/γ_g))] }   (16)
The power output is given by:

P = m_a W_net   (17)
The air-fuel ratio follows from the heat input and the fuel calorific value:

AFR = C.V. / Q_ad   (18)

and the specific fuel consumption follows as:

SFC = 3600 / (AFR · W_net)   (19)
The thermal efficiency is given by:

η_th = W_net / Q_ad   (20)
Work Ratio:
The work ratio is given by the following equation:

WR = W_net / W_t   (21)
Specific Fuel Consumption:
If the mass of fuel consumed is given in kg/s and the net work developed is in kW, then the specific fuel consumption is obtained in kg/kWh:

SFC = 3600 m_f / W_net   [kg/kWh]   (22)
Net Power:
The net power available is calculated from the following equation:

Net Power = m_a w_net η_gen   (23)

If the mass flow of air is given in kg/s and the net specific work in kJ/kg, then the net power is obtained in kW.
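Since the calculations reported below were implemented in C++, a compact sketch of eqs. (12)-(22) is given here. The parameter values (ambient temperature, constant c_p and γ values, OPR = 10) are assumed for illustration only; the actual model uses the temperature-dependent gas properties of Section 2.2, so the numbers will differ from the plotted results:

```cpp
// Regenerative-cycle performance from eqs. (12)-(22), constant-property sketch.
#include <cmath>
#include <cstdio>

int main() {
    const double T1 = 300.0, TIT = 1500.0;   // ambient and turbine inlet, K
    const double rp = 10.0;                  // overall pressure ratio
    const double etaC = 0.85, etaT = 0.90, eps = 0.9;
    const double cpa = 1.005, cpg = 1.148;   // kJ/(kg.K), assumed constants
    const double ga = 1.4, gg = 1.333;       // specific heat ratios
    const double CV = 42000.0;               // fuel lower calorific value, kJ/kg

    double xa = (ga - 1.0) / ga, xg = (gg - 1.0) / gg;

    double Wc = cpa * T1 * (std::pow(rp, xa) - 1.0) / etaC;      // eq. (12)
    double Wt = cpg * TIT * etaT * (1.0 - std::pow(rp, -xg));    // eq. (13)
    double Wnet = Wt - Wc;                                       // eq. (14)

    double T2 = T1 * (1.0 + (std::pow(rp, xa) - 1.0) / etaC);    // compressor exit
    double T5 = TIT * (1.0 - etaT * (1.0 - std::pow(rp, -xg)));  // turbine exit
    double T3 = T2 + eps * (T5 - T2);                            // regenerator exit
    double Qad = cpg * (TIT - T3);                               // heat added, cf. eq. (16)

    double etaTh = Wnet / Qad;                                   // eq. (20)
    double AFR = CV / Qad;                                       // eq. (18)
    double SFC = 3600.0 / (AFR * Wnet);                          // kg/kWh, eq. (19)

    std::printf("Wnet = %.1f kJ/kg, eta_th = %.3f, SFC = %.4f kg/kWh\n",
                Wnet, etaTh, SFC);
    return 0;
}
```

Sweeping rp, eps or TIT in this sketch reproduces the qualitative trends discussed in the results below.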

3. Results and Discussion
In the following paragraphs the results of the present work are discussed with the help of graphs. The results are based on software developed in C++, and the graphs were subsequently plotted with the menu-driven software Origin 50. The graphs are plotted for various parameters at different compressor and turbine efficiencies, turbine inlet temperatures, overall pressure ratios and regenerative effectiveness values. The results give the thermal efficiency, compressor work, specific fuel consumption, power developed, etc., of the gas turbine cycle. Various compressor efficiencies have been considered for the output values.
[Figure: heat input to combustor (kJ) versus regenerative effectiveness (0.5-0.9) for EFFC = 0.80 to 0.90; EFFT = 0.9, TIT = 1500 K, OPR = 30]

Fig. 5 Variation of Heat input to combustor with Regenerative effectiveness

Figure 5 shows the heat input to the combustor against regenerative effectiveness for different compressor polytropic efficiencies. On increasing the compressor efficiency, the heat input to the combustor increases; on the other hand, an increase in the regenerative effectiveness decreases the heat input to the combustor of the gas turbine cycle.
Figure 6 shows the compressor work against regenerative effectiveness for different compressor polytropic efficiencies. On increasing the compressor efficiency, the compressor work decreases; on the other hand, an increase in the regenerative effectiveness results in an increase in the compressor work of the cycle.
Figure 7 shows the thermal efficiency against regenerative effectiveness for different compressor polytropic efficiencies. On increasing the compressor efficiency, the thermal efficiency increases; an increase in the regenerative effectiveness also increases the thermal efficiency of the gas turbine cycle.
Figure 8 shows the specific fuel consumption against regenerative effectiveness for different compressor polytropic efficiencies. On increasing the compressor efficiency, the specific fuel consumption decreases; an increase in the regenerative effectiveness also decreases the specific fuel consumption of the gas turbine cycle.

[Figure: compressor work (kJ) versus regenerative effectiveness (0.5-0.9) for EFFC = 0.80 to 0.90; EFFT = 0.9, TIT = 1500 K, OPR = 30]

Fig. 6 Variation of Compressor work with Regenerative effectiveness
[Figure: thermal efficiency versus regenerative effectiveness (0.5-0.9) for EFFC = 0.80 to 0.90; EFFT = 0.9, TIT = 1500 K, OPR = 30]

Fig. 7 Variation of Thermal efficiency with Regenerative effectiveness


[Figure: specific fuel consumption versus regenerative effectiveness (0.5-0.9) for EFFC = 0.80 to 0.90; EFFT = 0.9, TIT = 1500 K, OPR = 30]

Fig. 8 Variation of Specific fuel consumption with Regenerative effectiveness
Figure 9 shows the heat supplied to the combustor against regenerative effectiveness for different compression ratios. On increasing the compression ratio, the heat supplied to the combustor increases; on the other hand, an increase in the regenerative effectiveness decreases the heat supplied to the combustor of the cycle.
Figure 10 shows the thermal efficiency against regenerative effectiveness for different compression ratios or overall pressure ratios (OPR). On increasing the OPR, the thermal efficiency decreases; on the other hand, an increase in the regenerative effectiveness increases the thermal efficiency of the gas turbine cycle. At low OPR the thermal efficiency increases with increasing regenerative effectiveness, but at higher values of OPR the rate of increase in the thermal efficiency is very slow. Therefore, the thermal efficiency has an optimum value of overall pressure ratio.
Figure 11 shows the heat input to the combustor against turbine inlet temperature (TIT) for different compressor efficiencies. On increasing the TIT, the heat input to the combustor increases.
Figure 12 shows the compressor work against turbine inlet temperature (TIT) for different compressor efficiencies. On increasing the turbine inlet temperature, the compressor work increases; on the other hand, an increase in the compressor efficiency decreases the compressor work of the gas turbine cycle.
Figure 13 shows the thermal efficiency against turbine inlet temperature (TIT) for different compressor efficiencies. On increasing the turbine inlet temperature, the thermal efficiency increases; an increase in the compressor efficiency also increases the thermal efficiency of the gas turbine cycle.
Figure 14 shows the specific fuel consumption against turbine inlet temperature (TIT) for different compressor efficiencies. On increasing the turbine inlet temperature, the specific fuel consumption increases; on the other hand, an increase in the compressor efficiency decreases the specific fuel consumption of the gas turbine cycle.

[Figure: heat input to combustion chamber (kJ) versus regenerative effectiveness (0.5-0.9) for OPR = 10 to 40; EFFT = 0.9, TIT = 1500 K]

Fig.9 Variation of Heat input to combustor with Regenerative effectiveness

[Figure: thermal efficiency versus regenerative effectiveness (0.5-0.9) for OPR = 10 to 40; EFFT = 0.9, TIT = 1500 K]

Fig. 10 Variation of Thermal Efficiency with Regenerative effectiveness
[Figure: heat input to combustion chamber versus turbine inlet temperature (1000-1600 K) for EFFC = 0.80 to 0.90; EFFT = 0.9, REGEFF = 0.9, OPR = 30]

Fig.11 Variation of Heat input to combustor with TIT

[Figure: compressor work (kJ) versus turbine inlet temperature (1000-1600 K) for EFFC = 0.80 to 0.90; EFFT = 0.9, REGEFF = 0.9, OPR = 30]

Fig. 12 Variation of Compressor work with Turbine inlet temperature

[Figure: thermal efficiency versus turbine inlet temperature (1000-1600 K) for EFFC = 0.80 to 0.90; EFFT = 0.9, REGEFF = 0.9, OPR = 30]

Fig. 13 Variation of Thermal efficiency with Turbine inlet temperature
[Figure: specific fuel consumption versus turbine inlet temperature (1000-1600 K) for EFFC = 0.80 to 0.90; EFFT = 0.9, REGEFF = 0.9, OPR = 30]

Fig. 14 Variation of Specific fuel consumption with Turbine inlet temperature

ACKNOWLEDGEMENT
I would like to express my sincere gratitude and appreciation to my supervisor, Dr. Mohammad Tariq, for his guidance, advice, effort and cooperation throughout the stages of this study; special thanks are due to the HOD of the Mechanical Engineering Department for his kind cooperation and help. Great thanks are extended to my family for their support and prayers, and to everyone who supported, encouraged and helped me.
4. Conclusion
The regenerative gas turbine power plant has been analyzed for various parameters. The most important parameter covered in this work is the polytropic efficiency of the compressor. The gas turbine regeneration package is designed to increase the efficiency of gas turbines with exhaust gas recirculation. Besides the gas turbine itself, four basic components make up the efficiency package: an exhaust gas cooler, an exhaust gas cooling fan, a recycle exhaust gas regenerator and a compressed ambient air regenerator. A regenerative gas turbine engine cycle is presented that yields higher cycle efficiencies than a simple cycle operating under the same conditions. The power output, efficiency and specific fuel consumption are simulated with respect to operating conditions. The analytical formulae for the thermal efficiency are derived taking into account the relevant operating conditions (ambient temperature, compression ratio, regenerator effectiveness, compressor efficiency and turbine inlet temperature). Model calculations for a wide range of parameters are presented, as are comparisons with variable turbine and compressor efficiencies of the gas turbine cycle. The power output and thermal efficiency are found to increase with the regenerative effectiveness and with the compressor and turbine efficiencies. The efficiency increases as the compression ratio rises to 15, then decreases with further increase in compression ratio, whereas in the simple cycle the thermal efficiency always increases with compression ratio. An increase in ambient temperature decreases the thermal efficiency, while an increase in turbine inlet temperature increases it.

REFERENCES:
[1] St. Peter, J., "The History of Aircraft Gas Turbine Development in the United States", IGTI, ASME, 1999.
[2] Sanjay et al. (2008), "Influence of different means of turbine blade cooling on the thermodynamic performance of combined cycle", Applied Thermal Engineering 28, 2315-2326.
[3] Sanjay et al., "Comparative performance analysis of cogeneration gas turbine cycle for different blade cooling means", International Journal of Thermal Sciences 48 (2009) 1432-1440.
[4] Ashley De S and Sarim Al Zubaidy, "Gas turbine performance at varying ambient temperature", Applied Thermal Engineering 31 (2011) 2735-2739.
[5] J.W. Baughn, R.A. Kerwin, "A comparison of the predicted and measured thermodynamic performance of a gas turbine cogeneration system", ASME Journal of Engineering for Gas Turbine and Power 109 (1987) 32-38.
[6] I.G. Rice, "Thermodynamic evaluation of gas turbine cogeneration cycles: Part 1. Heat balance method analysis", ASME Journal of Engineering for Gas Turbine and Power 109 (1987).
[7] R. Bhargava, A. Peretto, "A unique approach for thermo-economic optimization of an intercooled, reheated and recuperated gas turbine for cogeneration application", ASME Journal of Engineering for Gas Turbine and Power 124 (2001) 881-891.
[8] F.S. Basto, H.P. Blanco, "Cogeneration system simulation and control to meet simultaneous power, heat and cooling demands", ASME Journal of Engineering for Gas Turbine and Power 127 (2005) 404-409.
[9] M. Bianchi, G.N. Montenegro, A. di Peretto, "Cogenerative below ambient gas turbine performance with variable thermal power", ASME Journal of Engineering for Gas Turbine and Power 127 (2005) 592-598.
[10] A. Poullikkas, "An overview of current and future sustainable gas turbine technologies", Renewable and Sustainable Energy Reviews 9 (2005) 409-443.
[11] Torbidini, L. and A. Massardo, "Analytical Blade Row Cooling Model for Innovative Gas Turbine Cycle Evaluations Supported by Semi-Empirical Air-Cooled Blade Data", Journal of Engineering for Gas Turbines and Power, 2004, 126: pp. 498-506.
[12] Vittal, S., P. Hajela, and A. Joshi, "Review of approaches to gas turbine life management", 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization, 2004, AIAA: Albany, NY.
[13] Zifeng Yang and Hui Hu, "An experimental investigation on the trailing edge cooling of turbine blades", Propulsion and Power Research 2012; 1(1): 36-47.
[14] Cun-liang Liu et al., "Film cooling performance of converging slot-hole rows on a gas turbine blade", International Journal of Heat and Mass Transfer 53 (2010) 5232-5241.
[15] Mahmood Farzaneh-Gord and Mahdi Deymi-Dashtebayaz, "Effect of various inlet air cooling methods on gas turbine performance", Energy 36 (2011) 1196-1205.
[16] J. H. Horlock (2003), "Advanced Gas Turbine Cycles", Elsevier Science Ltd, The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK.
[17] Thamir K. Ibrahim et al., "Improvement of gas turbine performance based on inlet air cooling systems: A technical review", International Journal of Physical Sciences Vol. 6(4), pp. 620-627, 18 February 2011.
[18] Ashok D. Rao and David J. Francuz, "An evaluation of advanced combined cycles", Applied Energy 102 (2013) 1178118.











International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014
ISSN 2091-2730

802 www.ijergs.org

Review Paper of Adaptive Work Performance Analysis of Turbojet Engines
Kamal Kumar Pradhan¹, Bhoumika Sahu¹, Hemrani Gajendra¹
¹Research Scholar (PG), Department of Mechanical Engineering, MATS University, Raipur

ABSTRACT - This review paper describes major designs of adaptive jet engines as well as their structural and operational advantages. Particular attention is paid to a double-rotor engine designed by Pratt & Whitney, where a portion of air is bled from downstream of the compressor and then supplied to the area downstream of the turbine when the engine is operated at its maximum performance (turbine bypass engine). Actual thermodynamic cycles of such engines and the energy balance of their flows are presented. It is shown that the real working cycles of these engines represent figures with variable surface areas, which is the reason for the second name of these units: engines with variable thermodynamic cycle. Engine parameters were measured in order to facilitate the recognition of incipient engine difficulties. In addition, a successful effort was made to operate the engines satisfactorily when they were severely damaged. The analysis has been conducted separately for the internal and external channels. The sensitivity analysis for the working cycle makes it possible to select parameters that are potentially controllable and adjustable. To sum up the foregoing deliberations, one has to state that operation of turbojet adaptive engines is a topic that needs much more investigation. Even the design solution of the engine itself, although really exciting and promising, is troublesome in the aspects of design and process engineering, and will lead to a series of operation and maintenance problems.

Keywords - Parameters of the engine air, sensitivity of the work cycle, adaptive engine

INTRODUCTION
The turbojet is an engine, shown in Fig. 1, usually used in aircraft. It consists of a gas turbine with a propelling nozzle. The compressed air from the compressor is heated by burning fuel in the combustion chamber and then allowed to expand through the turbine.
Fig. 1 Turbojet engine
Turbojets have been replaced in slower aircraft by turboprops which use less fuel. At higher speeds,
where the propeller is no longer efficient, they have been replaced by turbofans. The turbofan is
quieter and uses less fuel than the turbojet. Turbojets are still common in medium range cruise
missiles, due to their high exhaust speed, small frontal area, and relative simplicity.
One of the most innovative and original directions of research is devoted to designs of so-called turbine adaptive engines (also referred to in literature sources as engines with variable thermodynamic cycle). The basic aim of such developments is to fill the existing gap between single-flow and double-flow engines.

The jet engine is only efficient at high vehicle speeds, which limits its usefulness in applications other than aircraft. Turbojet engines have been used in isolated cases to power vehicles other than aircraft, typically for attempts on land speed records. Where vehicles are 'turbine powered' this is more commonly by use of a turboshaft engine, a development of the gas turbine engine in which an additional turbine is used to drive a rotating output shaft. These are common in helicopters and hovercraft. Turbojets have also been used experimentally to clear snow from switches in rail yards.

The common use of turbine jet engines as basic drive units for both military and civil aircraft, together with tremendously increased demands on cost-effectiveness, noise and emission of toxic pollutants, has led to the drawing up of new research directions for their further development.

GAS TURBINES IN AIRCRAFT - JET ENGINES

Although the analysis of the jet engine is similar to that of the gas turbine, the configuration and design of jet
engines differ significantly from those of most stationary gas turbines. The criteria of light weight and small
volume, mentioned earlier, apply here as well. To this we can add the necessity of small frontal area to
minimize the aerodynamic drag of the engine, the importance of admitting air into the engine as efficiently
(with as little stagnation pressure loss) as possible, and the efficient conversion of high-temperature turbine exit
gas to a high-velocity nozzle exhaust. The resulting configuration is shown schematically in Figure 2.


Fig. 2 Jet engine notation and temperature-entropy diagram.

In early turbojet engines with solid blades, the maximum admissible temperature was directly tied to improvements in structural materials (Tmax ≈ 1100 °C).

From 1960-70: development of early air-cooled turbine blades:
- Hollow blades
- Internal cooling of blades (casting using the lost-wax technique)

JET ENGINE PERFORMANCE

It is seen that engine thrust is proportional to the mass flow rate through the engine and to the excess of the jet velocity over the flight velocity. The specific thrust of an engine is defined as the ratio of the engine thrust to its mass flow rate. From this definition, the specific thrust is

$$\frac{F}{\dot m} = u_e - u_0,$$

where $u_e$ is the nozzle exit (jet) velocity and $u_0$ the flight velocity.

Because the engine mass flow rate is proportional to its exit area, $A_5/\dot m$ depends only on design nozzle exit conditions. As a consequence, $F/\dot m$ is independent of mass flow rate and depends only on flight velocity and altitude. Assigning an engine design thrust then determines the required engine mass flow rate and nozzle exit area, and thus the engine diameter. Thus the specific thrust, $F/\dot m$, is an important engine design parameter for scaling engine size with required thrust at given flight conditions.

Another important engine design parameter is the thrust specific fuel consumption, TSFC, the ratio of the mass rate of fuel consumption to the engine thrust:

$$\mathrm{TSFC} = \frac{\dot m_f}{F}.$$

ENERGY BALANCE

Real working cycles of these engines represent figures with variable surface areas, which is the reason for the second name of these units: engines with variable thermodynamic cycle. Working cycles for adaptive engines of the VCE or VSCE types are typical cycles of a double-flow engine, where the working area is split between bypass channels, depending on the aircraft flight conditions and the operating range of the engine.




Fig. 3 Real cycle of a turbojet adaptive engine




Fig. 4 Real cycle of a turbojet engine of the bypass type

The working cycle of the bypass engine is a typical cycle of a single-flow engine with variable area of the corresponding thermal cycle. The curve shape depends on the amount of air that is bled from downstream of the compressor and fed downstream of the turbine (Fig. 4).

SENSITIVITY OF THE WORK CYCLE

The sensitivity analysis of a mathematical model is understood as estimation of the increments of the model's variables caused by variations of its parameters. Increments of variables are usually evaluated by differential approximations.
The sensitivity analysis for the work cycle makes it possible to select parameters that are potentially controllable and adjustable. In addition, sensitivity of the work cycle serves as a measure of how individual design and operational parameters of the engine affect its internal and external characteristics. The sensitivity can be determined when the relationship that determines the work cycle is expanded into a Taylor series, where only the first two terms of the expansion are taken into account:



$$\Delta l_{ob} \approx \frac{\partial l_{ob}}{\partial x_1}\,\Delta x_1 + \frac{\partial l_{ob}}{\partial x_2}\,\Delta x_2 + \dots$$

where $l_{ob}$ is the effective work for a cycle of an equivalent single-flow engine and the $x_i$ are the status parameters of the cycle.



Next, it is necessary to find out appropriate relationships between the partial differentials and selected status
parameters.
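A minimal sketch of this differential-approximation idea (Python; the cycle-work function and parameter values below are placeholders, not the paper's engine model):

```python
# Sensitivity via first-order (two-term Taylor) differentials, estimated by
# central finite differences. l_ob and its parameters are placeholder examples.

def sensitivity(l_ob, params, name, rel_step=1e-4):
    """Approximate d(l_ob)/d(param) by a central finite difference."""
    x = dict(params)
    h = abs(x[name]) * rel_step or rel_step
    x[name] += h
    plus = l_ob(**x)
    x[name] -= 2 * h
    minus = l_ob(**x)
    return (plus - minus) / (2 * h)

# Placeholder cycle-work model: any smooth function of the status parameters.
def l_ob(pi_c, eta_c, m):
    return (1.0 - pi_c**-0.2857) * eta_c / (1.0 + m)

base = {"pi_c": 20.0, "eta_c": 0.85, "m": 1.0}
for p in base:
    print(p, sensitivity(l_ob, base, p))
```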

Any increase of the engine compression results in a continuous drop of the work cycle sensitivity to variation of that parameter (Fig. 5; the sensitivity of the work value within the operation cycle is related to the work value for the specific channel at the point of expansion). However, the effect is very weak, so that parameter is not used as a control factor. The work cycle sensitivity of the internal channel decreases as the efficiency of the compression process increases (Fig. 6); but again, the sensitivity within the range of compression efficiencies that are commonly applied (i.e. η > 0.80) is insignificant, which confirms the only slight effect of that parameter on the work value within the operation cycle. For a more precise evaluation of how
parameters of the engine thermal cycle affect efficiency of its operation, the information should be sought by
estimating values of natural derivatives for specific parameters.



Fig.5. Sensitivity of the work value within the
thermal cycle to variations of the engine
compression.




Fig.6. Sensitivity of the work value within the
thermal cycle to variations of the
compression process efficiency


The foregoing relationship demonstrates that an increase of the split factor (degree of double-flow operation) m between the two streams always leads to a drop of the work value for the thermal cycle in the external channel (Fig. 7).



Fig. 7.Sensitivity of the work value within the
thermal cycle to variation of the double
flow factor used

However, the nature of the curve reveals very high sensitivity of the work value within the thermal cycle to variations of that parameter within the low range of parameter values, whereas from m > 1.5 onwards the dependence is insignificant. All the above serves as proof that a further increase of the m value above the mentioned threshold no longer decides the work values within the thermal cycle of the external channel. This may be the reason why, for all the adaptive engines examined so far, the air bled via the external channels is no more than 15-20%, i.e. within the limits of the highest sensitivity of the work value within the thermal cycle to that parameter.

Therefore it is impossible to determine unambiguously how the mentioned parameter affects the value of work within the thermal cycle, as the interrelationship changes in pace with the variation of m.

In the case of bypass engines, the sensitivity analysis for the value of work within the thermal cycle can be carried out with consideration of only the second term of the relationship, as it is this term that decides the difference in the value of work within the thermal cycle compared to an engine where such air bleeds are not applied. When the second part of the relationship is expressed in the form of functions, the following form is achieved:



CONCLUSION

In summary, operation of turbojet adaptive engines is a topic that needs much further investigation. Even the design solution of the engine itself, although really exciting and promising, is troublesome in the aspects of design and technology, and will lead to a series of operation and maintenance problems. The most difficult issue seems to be a solution meant to control the bleeding of air depending on the flight speed of the aircraft. Therefore future prospects for design solutions of adaptive engines are open and offer big opportunities for further development.

In addition, the degree of separation of the stream in adaptive engines can serve as a parameter that adjusts the value of the working cycle of the engine because of its high sensitivity, especially at small values of m, the sensitivity being very small for m > 1.5. This may be the reason why, for all the adaptive engines examined so far, the air bled via the external channels is no more than 15-20%, i.e. within the limits of the highest sensitivity of the work value within the thermal cycle to that parameter.

This topic was chosen because it is related to turbomachinery and offers scope to attempt something new.


Effect of Fiber Orientations on Tribological Behaviour of PALF Reinforced
Bisphenol-A Composite
Manu Prasad M. P.¹, Vinod B.², Dr. L. J. Sudev³
¹Research Scholar, Department of Mechanical Engineering, Vidya Vardhaka College of Engineering, Mysore, Karnataka
²Assistant professor, Department of Mechanical Engineering, Vidya Vardhaka College of Engineering, Mysore, Karnataka
³Professor, Department of Mechanical Engineering, Vidya Vardhaka College of Engineering, Mysore, Karnataka
Email-manuprsd340@gmail.com, 9008727017
Abstract - Ironically, despite the growing familiarity with composite materials and an ever-increasing range of applications, the use of natural fibers as reinforcement in polymeric composites for technical applications is still an active research subject. Pineapple leaf fibre (PALF) is one fiber that has good potential as reinforcement in polymer composites. In the present work, an experimental study has been conducted to determine the effect of fiber orientation, namely unidirectional, bidirectional and 45° orientation, on the specific wear rate and frictional coefficient of PALF reinforced Bisphenol-A (BPA) composite using a pin-on-disc wear and friction testing machine. The wear samples are slid against a stainless steel disc at varying loads of 5 N, 10 N and 15 N under constant sliding distance, velocity and speed, in dry conditions. It is found that the wear resistance of pure Bisphenol-A resin is improved after PALF reinforcement. Among the three types of fiber orientation, the bidirectional composite shows the least specific wear rate and coefficient of friction.

Keywords - PALF, BPA, Natural fibers, Alkaline treatment, Fiber orientations, Specific wear rate, Coefficient of friction.
I. INTRODUCTION
Nowadays polymeric materials are used in almost all applications because of their specific characteristics such as light weight, self-lubrication and reduced noise. Natural fibers are advantageous over synthetic fibers as they are renewable, eco-friendly, low in density, biodegradable and less abrasive. The abundant availability of natural fibers and the ease of composite manufacturing have triggered interest among researchers in studying their tribological behaviour as reinforcement in polymers. In tribological applications like bearings, gears, cams etc., the major failure mechanism experienced is wear, which takes place during relative movement between tribo-materials.

Pineapple leaf fibre (PALF), serving as reinforcement fibre in most plastic matrices, has shown a significant role as it is cheap and exhibits superior properties when compared to other natural fibres. PALF is a multi-cellular, lignocellulosic material extracted from the leaves of the plant Ananas comosus, belonging to the Bromeliaceae family, by retting (separation of fibre bundles from the cortex). PALF has a ribbon-like structure and is cemented together by lignin, which contributes to the strength of the fibre.

C. H. Chandra Rao et al [1] investigated the wear behavior of coir fiber reinforced epoxy composites. L. Boopathi et al [2] studied the wear behaviour of Borassus fruit fiber reinforced epoxy composites. S. R. Chauhan [3] studied the friction and wear behaviour of vinylester composites under dry and water lubricated conditions. Mohit Sharma et al [4] studied the influence of fiber orientation on abrasive wear of unidirectionally reinforced carbon fiber-polyetherimide composites. Punyapriya Mishra [5] studied the abrasive wear behavior of bagasse fiber reinforced polymer composite. Therefore, in the present work an attempt has been made to investigate the specific wear rate and coefficient of friction of long PALF reinforced Bisphenol-A composite for different fiber arrangements under loads from 5 N to 15 N.
II. EXPERIMENT
A. Materials
PALF extracted from the leaf of the pineapple plant by a biological method was supplied by Chandra Prakash Co., Jaipur, Rajasthan. Bisphenol-A resin was supplied by Balaji Fabrications, Mysore, Karnataka.
B. Chemical treatment of fiber
Alkali treatment, or mercerization, using sodium hydroxide (NaOH) is the most commonly used treatment for bleaching and cleaning the surface of natural fibers to produce high-quality fibers. A 5% NaOH solution was prepared using sodium hydroxide pellets and distilled water. Pineapple leaf fibers were dipped in the solution for 1 hour. After 1 hour the fibers were washed with 1% HCl solution to neutralize them, and then washed with distilled water. They were then kept in a hot air oven for 3 hours at 65-70 °C.

C. Preparation of composites and samples
All specimens in this study were manufactured by the hand lay-up technique. The mould used is made of polypropylene with dimensions 100×70×10 mm³ and is shown in Figure 2.1. The chemically treated fiber yarns were woven into unidirectional, bidirectional and 45° orientation mats. The mould was filled with a mixture of Bisphenol-A resin and hardener (HY 951) in a 10:1 ratio at room temperature. The mats (30% volume fraction) were added to the mixture of resin and hardener. A load was applied for solidification; when the solidification of all moulds was completed after 24 hours, the casts were released from the moulds. The composite laminates with different fiber orientations are shown in Figure 2.2. The laminates were cut into samples of dimensions 8×8×10 mm³. The samples were attached to sample holders of cross-section 8×8 mm and length 32 mm (8×8×32 mm³). Samples attached to sample holders are shown in Figure 2.3.

D. Pin-on-disc wear test
Wear tests were carried out using a pin-on-disc machine. The samples, attached to the sample holder, are connected to the chuck of 8×8 mm cross-section and slid against the rotating wear disc of EN-31 (56-60 HRC), 165 mm diameter and 8 mm thick. The disc was made to rotate at a constant speed of 160 rpm, velocity of 1 m/s and sliding distance of 180 m under different applied loads (5 N, 10 N and 15 N). Three samples, namely A, B and C, from each of the unidirectional, bidirectional and 45° orientation composites were used for the 5 N, 10 N and 15 N loads respectively. At the end of testing the specimens were removed and weighed to determine the weight loss due to wear; the difference in weight measured before and after the test gives the wear of the composite specimen. The following relations (2.1) and (2.2) are used to determine the specific wear rate and the coefficient of friction respectively, where V = volume of material removed by wear in cm³, P = normal load in N, L = sliding distance in m, and F = tangential frictional force in N. Figure 2.4 shows a sample against the disc.

$$K_o = \frac{V}{P \cdot L} \qquad (2.1)$$

$$\mu = \frac{F}{P} \qquad (2.2)$$
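A minimal sketch of relations (2.1) and (2.2) in code (Python; the conversion from weight loss to wear volume and the density value are my assumptions, not the paper's data):

```python
# Sketch of the specific wear rate and friction coefficient relations, assuming
# wear volume V is obtained from measured weight loss and composite density.

def specific_wear_rate(weight_loss_g, density_g_cm3, load_N, distance_m):
    """K_o = V / (P * L), with V converted to mm^3, giving mm^3/(N*m)."""
    volume_mm3 = (weight_loss_g / density_g_cm3) * 1000.0  # cm^3 -> mm^3
    return volume_mm3 / (load_N * distance_m)

def friction_coefficient(tangential_force_N, load_N):
    """mu = F / P."""
    return tangential_force_N / load_N

# Illustrative values only; the density 1.2 g/cm^3 is a placeholder assumption.
print(specific_wear_rate(weight_loss_g=0.01, density_g_cm3=1.2, load_N=5, distance_m=180))
print(friction_coefficient(tangential_force_N=3.8, load_N=5))
```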


Figure 2.1: 100×70×10 mm³ mould

Figure 2.2: (a) unidirectional, (b) bidirectional, (c) 45° orientation laminates



Figure 2.3 Samples with sample holder Figure 2.4 Sample against steel disc


III. RESULTS AND DISCUSSION
Table 3.1 shows the wear and frictional properties of the composite for different fiber orientations under constant parameters: velocity 1 m/s, sliding distance 180 m and speed 160 rpm. The study of the frictional and wear properties is important for applications where the composite will be subjected to friction and wear. The properties required to analyze the wear behavior of the composite are the specific wear rate (Ko) and the frictional properties, namely the coefficient of friction (μ) and the tangential friction force (Ft). The specific wear rate is not an inherent property of the composite, but it gives the wear resistance of the composite sample being tested in terms of the volume of material removed with respect to the applied normal load and sliding distance; the relation for the specific wear rate is given in Equation 2.1. The coefficient of friction relates the tangential friction force existing between the composite sample surface and the abrasive disc surface to the normal load. The specific wear rate and coefficient of friction for the different fiber arrangements are listed in Table 3.1, and their values and trends with respect to loads of 5 N, 10 N and 15 N are shown in Figures 3.1, 3.2, 3.3 and 3.4 respectively.

The specific wear rate (Ko) as a function of load can be seen in Figure 3.1. It is observed that the specific wear rate decreased with increase in load, consistent with Equation 2.1, since Ko is inversely proportional to the normal load. The bidirectional composite showed a lower specific wear rate than the unidirectional and 45° orientation fiber composites. The specific wear rate of the bidirectional fiber composite is 62.28%, 20% and 54% less than that of the unreinforced Bisphenol-A resin, the 45° orientation composite and the unidirectional composite respectively at 5 N; 65.52%, 37% and 58% less at 10 N; and 63.17%, 38% and 53% less at 15 N. The reason for the lower specific wear rate of the bidirectional fiber composite is the presence of fibers in both directions (longitudinal and transverse); because of this, less of the applied load is transmitted through the bidirectional fiber orientation. In addition, as the load increases more material is removed, and this removed material induces self-lubrication as it is held between the contact surfaces of the sample and the abrasive disc. This results in a lower tangential frictional force (Ft) between the mating surfaces of sample and disc, which in turn results in a lower coefficient of friction (μ) and a lower specific wear rate. In the unidirectional fiber composite, since the fibers were arranged at 90° to the sliding direction, the fibers transmitted most of the applied load, causing a high tangential frictional force and coefficient of friction between the mating surfaces and hence a high specific wear rate. In the 45° orientation fiber composite, since the fibers are oriented at 45° to the sliding direction, the specific wear rate is less than that of the unidirectional composite. According to Mohit Sharma et al [4], the specific wear rate of a fiber reinforced composite increases with increase in the angle of the fiber arrangement with respect to the sliding direction; they also note that the presence of micro cracks and voids increases the specific wear rate. Figure 3.2 shows the variation of the coefficient of friction with respect to the different normal loads; it can be observed that the coefficient of friction decreased with increased load, due to self-lubrication of the samples. Figures 3.3 and 3.4 show the trend of specific wear rate and coefficient of friction with respect to the different fiber orientations.


Table 3.1 Wear & frictional properties of composite for different fiber orientations

Sl no | Specimen | Load (N) | Depth of wear (microns) | Frictional force (N) | Specific wear rate ×10⁻³ (mm³/N-m) | Frictional co-efficient
1 | Bisphenol-A | 5 | 175 | 4.7 | 12.4444 | 0.94
  |             | 10 | 236 | 8.5 | 8.3911 | 0.85
  |             | 15 | 277 | 11.4 | 6.5659 | 0.76
2 | Unidirectional | 5 | 142 | 4.5 | 10.0977 | 0.90
  |                | 10 | 197 | 8.1 | 7.0044 | 0.81
  |                | 15 | 221 | 11.2 | 5.2385 | 0.74
3 | Bidirectional | 5 | 65 | 3.8 | 4.6220 | 0.76
  |               | 10 | 82 | 7.1 | 2.9155 | 0.71
  |               | 15 | 102 | 10.5 | 2.4177 | 0.70
4 | 45° orientation | 5 | 82 | 4.1 | 5.8311 | 0.82
  |                 | 10 | 132 | 7.7 | 4.6933 | 0.77
  |                 | 15 | 167 | 11.0 | 3.9585 | 0.73

Figure 3.1 Load v/s Specific wear rate
[Plot: specific wear rate, Ko (mm³/N-m), versus load (N) for the unidirectional, bidirectional, 45° orientation and Bisphenol-A specimens.]


Figure 3.2 Load v/s Coefficient of friction
[Plot: coefficient of friction, μ, versus load (N) for the unidirectional, bidirectional, 45° orientation and Bisphenol-A specimens.]

Figure 3.3 Specific wear rate for the resin & PALF composite for different fiber orientation
[Bar chart: specific wear rate, Ko (mm³/N-m), for each fiber orientation at 5 N, 10 N and 15 N.]

Figure 3.4 Coefficient of friction for the resin & PALF composite for different fiber orientation
[Bar chart: coefficient of friction, μ, for each fiber orientation at 5 N, 10 N and 15 N.]

IV. CONCLUSION
In the study of wear and frictional properties, composites with different fiber orientations, namely unidirectional, bidirectional and 45° orientation, were subjected to different loading conditions under constant sliding distance, speed and velocity. All three types of composite specimens show a lower wear rate than unreinforced Bisphenol-A resin. The specific wear rate decreases with increase in normal load. The bidirectional composite shows the least specific wear rate: the presence of fibers in both directions makes the composite capable of absorbing more load, and it shows less frictional force at the mating surfaces of the samples and the abrasive disc. The coefficient of friction for all three fiber orientations decreases with increase in normal load, due to self-lubrication of the test samples: as the normal load increases, more material is removed, and the removed material is held at the mating surfaces of the composite and the abrasive disc, resulting in self-lubrication. Hence fiber orientation greatly influences the tribological behaviour of PALF reinforced Bisphenol-A composite.

REFERENCES:
[1] C. H. Chandra Rao, S. Madhusudan, G. Raghavendra and E. Venkateswara Rao, "Investigation into Wear Behavior of Coir Fiber Reinforced Epoxy Composites with the Taguchi Method", International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 2, Issue 5, September-October 2012, pp. 371-374.

[2] L. Boopathi, P. S. Sampath and K. Mylsamy, "Influence of Fiber Length in the Wear Behaviour of Borassus Fruit Fiber Reinforced Epoxy Composites", ISSN: 0975-5462, Vol. 4, No. 09, September 2012.

[3] S. R. Chauhan, Bharti Gaur and Kali Dass, "Effect of Fiber Loading on Mechanical Properties, Friction and Wear Behaviour of Vinylester Composites Under Dry and Water Lubricated Conditions", IJMS, Vol. 1, Iss. 1, 2011, pp. 1-8.

[4] Mohit Sharma, I. Mohan Rao and Jayashree Bijwe, "Influence of Fiber Orientation on Abrasive Wear of Unidirectionally Reinforced Carbon Fiber-Polyetherimide Composites", Tribology International 43 (2010) 959-964.

[5] Punyapriya Mishra, "Statistical Analysis for the Abrasive Wear Behavior of Bagasse Fiber Reinforced Polymer Composite", International Journal of Applied Research in Mechanical Engineering (IJARME), ISSN: 2231-5950, Vol. 2, Iss. 2, 2012.

[6] Suresha B., Chandramohan G. and Prakash J. N., "The Role of Fillers on Friction and Slide Wear Characteristics in Glass-Epoxy Composite System", Journal of Minerals and Materials Characterization and Engineering, Vol. 5, No. 1, pp. 87-101, 2006.

[7] Suresha B., G. Chandramohan, J. N. Prakash, V. Balusamy and K. Sankarayanasamy, "The Role of Fillers on Friction and Slide Wear Characteristics in Glass-Epoxy Composite Systems", Journal of Minerals and Materials Characterization and Engineering, Vol. 5, No. 1, 2006.

[8] Verma, A. P. and Sharma, P. C., "Abrasive Wear Behaviour of GRP Composite", The Journal of the Institute of Engineers (India), Pt MC2, Vol. 72, pp. 124, 19

[9] Kishore, Sampathkumaran, P., Seetharamu, S., Vynatheya, S., Murali, A. and Kumar, R. K., "SEM Observations of the Effect of Velocity and Load on the Slide Wear Characteristics of Glass-Fabric Epoxy Composites with Different Fillers", Wear, Vol. 237, pp. 20-27, 2000.

[10] Wang, J., Gu, M., Songhao and Ge, S., "The Role of the Influence of MoS2 on the Tribological Properties of Carbon Fiber Reinforced Nylon 1010 Composites", Wear, Vol. 255, pp. 774-779, 2003.

[11] Pedro V., Jorge F., Antonio M. and Rui L., "Tribological Behavior of Epoxy Based Composites for Rapid Tooling", Wear 260, pp. 30-39, 2006.

[12] Patel R., Kishorekumar B. and Gupta N., "Effect of Filler Materials and Preprocessing Techniques on Conduction Processes in Epoxy-based Nano dielectrics", IEEE Electrical Insulation Conference, Montreal, QC, Canada, 31 May - 3 June 2009.

[13] D. Hull and T. W. Clyne, "An Introduction to Composite Materials", second edition, Cambridge University Press, London, 1996.


Study and Analysis of Flow of an Incompressible Fluid past an Obstruction
Deepak Kumar, Assistant Professor
Department of Mechanical and Automation Engineering, Amity University, Haryana, India
E-mail: deepak209476@gmail.com
Contact No: 09416627599
Abstract - In the present work, the time-independent laminar flow of a viscous, incompressible fluid in two dimensions has been studied. The fluid is allowed to flow in a channel with an obstruction, namely a rectangular plate of definite dimensions. The flow of water is considered steady, with uniform incident velocity and various boundary conditions. The resulting Navier-Stokes equations are solved with the help of the software FLEX PDE. The Reynolds number is varied as 10, 50 and 100, and the variation of streamlines, velocity and pressure, along with the flow pattern in the form of velocity vectors, is investigated.
Keywords - Rectangular Obstruction, Incompressible Fluid Flow, FLEX PDE, Computational Fluid Dynamics, Navier-Stokes Equations, Streamlines, Reynolds Number.
INTRODUCTION
Understanding the complexities of laminar and turbulent flow is a problem that has been studied for many years. Researchers in
this field have been creative and innovative by introducing several new techniques and definitions. Here, the well-known problem of
laminar flow in a channel with a rectangular obstruction had been studied and investigated. Alvaro Valencia [1] studied laminar flow
past square bars arranged side by side in a plane channel. His aim was to provide information on the unsteady flow processes. He
captured the effects of vortex shedding by solving the continuity and the Navier-Stokes equations in two dimensions. He made
computations for 11 transverse bar separation distances for constant Reynolds number. The numerical results reveal the complex
structure of the flow. B. N. Rajani [2] focused on the analysis of two- and three-dimensional flow past a circular cylinder in different laminar flow regimes. Here an implicit pressure-based finite volume method is used for time-accurate computation of compressible flow. Results are studied for pressure, skin friction coefficients, and also for the Strouhal frequency of vortex shedding. The complex three-dimensional flow structure of the cylinder wake is also reasonably captured. M. Boubekri and M. Afrid [3] considered the
numerical simulation of the two-dimensional viscous flow over a solid ellipse with an aspect ratio equal to 3.5. Sufficiently far from the ellipse, the flow is assumed potential. The flow is modeled by the two-dimensional partial differential equations of conservation of mass and momentum. The numerical solutions revealed that the flow over the ellipse is steady with zero vortices up to Re = 40. For Reynolds numbers between 50 and 190, the flow is steady with two vortices in the wake. For Re = 210 the flow becomes unstable. B.
H. Lakshmana Gowda and Myong-Gun Ju [4] analyzed the reverse flow in a square duct with an obstruction at the front (which is a
square plate). The gap g between the obstruction and the entry to the duct was systematically varied, and it was found that maximum
reverse flow occurs around a g/w value of 0.75. Wisam K. Hussam et al. [5] noted that for shallow flow past an obstacle in a channel, the channel depth and blockage ratio play a significant role. In their study, the flow past a confined circular cylinder is investigated numerically using a spectral element algorithm. The incompressible Navier-Stokes equations are solved over a two-dimensional domain. A parametric study is performed for the two-dimensional flow by varying the Reynolds number (Re) and blockage ratio (β) over the ranges 20 ≤ Re ≤ 2000 and 0.2 ≤ β ≤ 0.6.
Shivani T. Gajusingh [6] performed an experimental study to investigate the impact of a rectangular baffle inside a square channel. The measurements were conducted for two Reynolds numbers in the fully turbulent regime. The changes to the flow structure due to the insertion of a baffle were quantified by a direct comparison with the flow structure in the absence of a baffle, under similar conditions. Significant enhancement of turbulence was observed in a region up to two times the baffle height immediately downstream of the baffle, and the thickness of this layer increased to three times the baffle height further downstream. Zou Lin [7] carried out
a three-dimensional numerical investigation of cross-flow past four circular cylinders in a diamond arrangement at Reynolds number
of 200. With the spacing ratios (L/D) ranging from 1.2 to 5.0, the flow patterns can be classified into three basic regimes. The
relationship between the three-dimensional flow patterns and force characteristics around the four cylinders shows that the variation of
forces and Strouhal numbers against L/D are generally governed by these three kinds of flow patterns. It is concluded that the spacing
ratio has important effects on the force and pressure characteristics of the four cylinders. S. B. Doma et al. [8] described the motion of
steady flow of a viscous incompressible fluid passing a rectangular plate. The cross section of the plate is considered to be in the form
of a rectangle. The fluid is assumed to be steady flow of water. The boundary conditions are discussed in details. The resulting
equations are solved numerically. The Reynolds number is varied as 0.5, 1, 10, 20, 100, 200 and 300 and the variation of streamlines
is studied. Also the values of the pressure force, the velocity magnitude, vorticity magnitude are analyzed at each position point.
Gera B. et al. [9] carried out a numerical simulation of two-dimensional unsteady flow past a square cylinder for Reynolds numbers (Re) in the range 50-250, so that the flow is laminar. The main objective of this study was to capture the features of flow past a square cylinder in a domain with the use of CFD. The variation of Strouhal number with Reynolds number was found from the analysis. It was found that up to a Reynolds number of 50 the flow is steady. Between Reynolds numbers 50 and 55, instability occurs, vortex shedding appears and the flow becomes unsteady. Vikram C. K. [10] carried out a numerical investigation of two-dimensional unsteady flow past two square cylinders with in-line arrangements in a free stream. The main aim of the study is to
systematically investigate the influences on size of the eddy, velocity, frequency of vortex shedding, pressure coefficient and lift
coefficient by varying pitch to perimeter Ratio of two square cylinders. It has been found that the size of the eddy and the monitored
velocity in between the square cylinders increases with increase in PPR. Frequency of vortex shedding is found to be same in between
the cylinders and in the downstream of the cylinder. The pressure distribution near to the surface of the cylinder is quite low due to
viscous effects. The upstream cylinder is found to experience higher lift compared to the downstream cylinder.
PROBLEM SPECIFICATION


Figure A
In order to solve the flow of a viscous, incompressible fluid in a channel with a rectangular obstruction, some assumptions are necessary; they make the mathematical expressions simpler while the solutions remain close to real cases.
Assumptions
- Flow is 2-Dimensional and laminar
- The fluid is water which is considered incompressible and Newtonian
- Flow is not temperature dependent
- Flow is not affected by the gravity field

GOVERNING EQUATIONS
The flow in a channel with an obstruction was computed by solving the Navier-Stokes equations for incompressible fluid in a two-
dimensional geometry. The governing equations are:
A. Continuity Equation
This equation states that the mass of a fluid is conserved: the rate of increase of mass in a fluid element equals the net rate of flow of mass into the fluid element.
For three-dimensional, unsteady flow:

$$\frac{\partial\rho}{\partial t}+\frac{\partial(\rho u)}{\partial x}+\frac{\partial(\rho v)}{\partial y}+\frac{\partial(\rho w)}{\partial z}=0$$

For 2-D, incompressible, steady flow:

$$\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0$$
B. X-Momentum Equation
Momentum equations are based on Newton's second law, which states that the rate of change of momentum equals the sum of the forces on a fluid particle.
For three-dimensional, unsteady flow:

$$\frac{\partial(\rho u)}{\partial t}+\frac{\partial(\rho u^{2})}{\partial x}+\frac{\partial(\rho u v)}{\partial y}+\frac{\partial(\rho u w)}{\partial z}=-\frac{\partial p}{\partial x}+\frac{\partial}{\partial x}\Big[\lambda\,\nabla\!\cdot\!\mathbf{V}+2\mu\frac{\partial u}{\partial x}\Big]+\frac{\partial}{\partial y}\Big[\mu\Big(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\Big)\Big]+\frac{\partial}{\partial z}\Big[\mu\Big(\frac{\partial w}{\partial x}+\frac{\partial u}{\partial z}\Big)\Big]+\rho f_{x}$$

where $\mathbf{V}=(u,v,w)$ is the velocity vector field, $f$ is the body force per unit mass with $f_{x}$ its x component, and $\lambda=-\tfrac{2}{3}\mu$.
For 2-D, incompressible, unsteady flow with no body forces:

$$\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial x}+\nu\Big(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\Big)$$

C. Y-Momentum Equation
For three-dimensional, unsteady flow:

$$\frac{\partial(\rho v)}{\partial t}+\frac{\partial(\rho u v)}{\partial x}+\frac{\partial(\rho v^{2})}{\partial y}+\frac{\partial(\rho v w)}{\partial z}=-\frac{\partial p}{\partial y}+\frac{\partial}{\partial x}\Big[\mu\Big(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\Big)\Big]+\frac{\partial}{\partial y}\Big[\lambda\,\nabla\!\cdot\!\mathbf{V}+2\mu\frac{\partial v}{\partial y}\Big]+\frac{\partial}{\partial z}\Big[\mu\Big(\frac{\partial v}{\partial z}+\frac{\partial w}{\partial y}\Big)\Big]+\rho f_{y}$$

where $\mathbf{V}=(u,v,w)$ is the velocity vector field, $f_{y}$ is the y component of the body force per unit mass, and $\lambda=-\tfrac{2}{3}\mu$.
For 2-D, incompressible, unsteady flow with no body forces:

$$\frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial y}+\nu\Big(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\Big)$$

Now we convert the continuity, x-momentum and y-momentum equations into non-dimensional form. Let us consider

$$x^{*}=\frac{x}{L},\quad y^{*}=\frac{y}{L},\quad u^{*}=\frac{u}{U_{\infty}},\quad v^{*}=\frac{v}{U_{\infty}},\quad t^{*}=\frac{tU_{\infty}}{L},\quad p^{*}=\frac{p}{\rho U_{\infty}^{2}}.$$

To make these equations dimensionless, we must derive the non-dimensional form of the various time and space derivatives. The time derivative with respect to the dimensional variable can be written as

$$\frac{\partial(\;)}{\partial t}=\frac{U_{\infty}}{L}\,\frac{\partial(\;)}{\partial t^{*}}.$$

Similarly, the spatial derivatives are given by

$$\frac{\partial(\;)}{\partial x}=\frac{1}{L}\,\frac{\partial(\;)}{\partial x^{*}},\qquad \frac{\partial(\;)}{\partial y}=\frac{1}{L}\,\frac{\partial(\;)}{\partial y^{*}},$$

and

$$\frac{\partial^{2}(\;)}{\partial x^{2}}=\frac{1}{L^{2}}\,\frac{\partial^{2}(\;)}{\partial x^{*2}},\qquad \frac{\partial^{2}(\;)}{\partial y^{2}}=\frac{1}{L^{2}}\,\frac{\partial^{2}(\;)}{\partial y^{*2}}.$$

Thus the continuity equation becomes

$$\frac{U_{\infty}}{L}\frac{\partial u^{*}}{\partial x^{*}}+\frac{U_{\infty}}{L}\frac{\partial v^{*}}{\partial y^{*}}=0,$$

so the non-dimensional form of the continuity equation is given by

$$\frac{\partial u^{*}}{\partial x^{*}}+\frac{\partial v^{*}}{\partial y^{*}}=0.$$

Using a similar process, the non-dimensional Navier-Stokes equations can be given by

$$\frac{\partial u^{*}}{\partial t^{*}}+u^{*}\frac{\partial u^{*}}{\partial x^{*}}+v^{*}\frac{\partial u^{*}}{\partial y^{*}}=-\frac{\partial p^{*}}{\partial x^{*}}+\frac{1}{Re}\left(\frac{\partial^{2}u^{*}}{\partial x^{*2}}+\frac{\partial^{2}u^{*}}{\partial y^{*2}}\right),$$

$$\frac{\partial v^{*}}{\partial t^{*}}+u^{*}\frac{\partial v^{*}}{\partial x^{*}}+v^{*}\frac{\partial v^{*}}{\partial y^{*}}=-\frac{\partial p^{*}}{\partial y^{*}}+\frac{1}{Re}\left(\frac{\partial^{2}v^{*}}{\partial x^{*2}}+\frac{\partial^{2}v^{*}}{\partial y^{*2}}\right).$$

There are three dimensionless groups in the non-dimensional Navier-Stokes equations; the one that remains explicit with the scalings used here is the Reynolds number, $Re=\rho U_{\infty}L/\mu$.

Therefore, in the continuity and Navier-Stokes equations in dimensionless form for two-dimensional channel flow, the width w of the channel is the characteristic length L and $U_{\infty}$ is the free-stream uniform velocity at the entry of the test channel. Here $\mu$ is the dynamic viscosity, $\rho$ is the density of the fluid, and Re is the Reynolds number. The continuity and momentum equations for steady flow in dimensionless form can be written as

$$\frac{\partial u^{*}}{\partial x^{*}}+\frac{\partial v^{*}}{\partial y^{*}}=0,$$

$$u^{*}\frac{\partial u^{*}}{\partial x^{*}}+v^{*}\frac{\partial u^{*}}{\partial y^{*}}=-\frac{\partial p^{*}}{\partial x^{*}}+\frac{1}{Re}\left(\frac{\partial^{2}u^{*}}{\partial x^{*2}}+\frac{\partial^{2}u^{*}}{\partial y^{*2}}\right),$$

$$u^{*}\frac{\partial v^{*}}{\partial x^{*}}+v^{*}\frac{\partial v^{*}}{\partial y^{*}}=-\frac{\partial p^{*}}{\partial y^{*}}+\frac{1}{Re}\left(\frac{\partial^{2}v^{*}}{\partial x^{*2}}+\frac{\partial^{2}v^{*}}{\partial y^{*2}}\right).$$
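As a small worked example of this group, the sketch below (Python; water properties and the channel width are assumed illustrative values, not data from the paper) computes the inlet velocity that yields each Reynolds number studied later:

```python
# Sketch of the Reynolds number defined above; property values for water and
# the channel width w are illustrative assumptions.

def reynolds_number(rho, U, L, mu):
    """Re = rho * U * L / mu, with L the channel width w."""
    return rho * U * L / mu

rho_water, mu_water = 1000.0, 1.0e-3   # kg/m^3, Pa*s (water near 20 C)
w = 0.01                               # assumed channel width, m
for Re in (10, 50, 100):
    U = Re * mu_water / (rho_water * w)   # inlet velocity giving this Re
    print(Re, U, "m/s")
```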


D. Equation of Stream Function
We know that the vorticity ($\omega$) is given by

$$\omega=\left(\nabla\times\mathbf{V}\right)_{z}=\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}.$$

Also,

$$\frac{\partial\psi}{\partial y}=u\qquad\text{and}\qquad\frac{\partial\psi}{\partial x}=-v.$$

Putting these values in the equation of vorticity we get

$$\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{\partial^{2}\psi}{\partial y^{2}}=-\omega.$$

Hence the governing equations are the dimensionless continuity and momentum equations above, together with this Poisson equation for the stream function.
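To make the numerical side concrete, here is a minimal sketch (Python; not the paper's FLEX PDE script; the grid size, vorticity field and boundary values are illustrative assumptions) of solving the stream-function Poisson equation above by Jacobi iteration on a uniform grid:

```python
import numpy as np

# Jacobi iteration for laplacian(psi) = -omega on a uniform grid with
# psi = 0 on the boundary; a stand-in for what FLEX PDE does internally.

def solve_stream_function(omega, h, n_iter=5000):
    psi = np.zeros_like(omega)
    for _ in range(n_iter):
        # Five-point stencil: psi = (sum of neighbours + h^2 * omega) / 4
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] +
                                  h * h * omega[1:-1, 1:-1])
    return psi

# Illustrative use: a single patch of vorticity in a unit square domain.
n = 64
h = 1.0 / (n - 1)
w = np.zeros((n, n))
w[24:40, 24:40] = 1.0
psi = solve_stream_function(w, h)
print(psi.max())
```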


RESULTS AND DISCUSSION
A computer program was developed for quantitative and qualitative analysis of laminar incompressible flow through a channel with a rectangular obstruction; the equations solved are the Navier-Stokes equations. Different Reynolds numbers of 10, 50 and 100 were taken, and at these Reynolds numbers the variation of the following quantities was examined, keeping the height of the obstruction constant at 0.5:
(1) Streamlines
(2) Velocity in x-direction (u)
(3) Flow pattern
(4) Pressure


Figure 1: Streamlines at Re=10 Figure 2: Velocity in x-direction (u) at Re=10

Figure 3: Flow at Re=10 Figure 4: Pressure at Re=10



Figure 5: Streamlines at Re=50 Figure 6: Flow at Re=50


Figure 7: Velocity in x-direction (u) at Re=50 Figure 8: Pressure at Re=50




Figure 9: Streamlines at Re=100 Figure 10: Flow at Re=100



Figure 11: Velocity in x-direction (u) at Re=100 Figure 12: Pressure at Re=100





Figure 13: Velocity in x - direction at inlet of obstruction Figure 14: Velocity in x - direction at centre of obstruction


Figure 15: Velocity in x - direction at outlet of obstruction Figure 16: Velocity in y - direction at outlet of obstruction

Figure 17: Velocity in y - direction at inlet of obstruction Figure 18: Pressure at inlet of obstruction



Figure 19: Velocity in y - direction at centre of obstruction Figure 20: Pressure at centre of obstruction


Figure 21: Pressure at outlet of obstruction
CONCLUSION
The FLEX PDE software enables rapid evaluation of the flow characteristics past an obstruction (rectangular plate). It is found to be very effective for solving the partial differential equations; it makes the solution of coupled PDEs straightforward and requires less calculation effort. The main points of conclusion are:
- There is a variation in the magnitude of velocity in the x direction at different Reynolds numbers, but the variation in the magnitude of velocity in the y direction is negligible; it remains nearly constant.
- With the increase in Reynolds number, the magnitude of the stream function also increases.
- As the Reynolds number increases, the size of the right vortex also increases.




REFERENCES:
[1] B. H. Lakshmana Gowda and E. G. Tulapurkara, "Reverse flow in a channel with an obstruction at the entry", J. Fluid Mech. (1989), Vol. 204, pp. 229-244.
[2] M. M. Zdravkovich, "Conceptual overview of laminar and turbulent flows past smooth and rough circular cylinders", Journal of Wind Engineering and Industrial Aerodynamics, 33 (1990) 53-62.
[3] J. Li, A. Chambarel, M. Donneaud and R. Martin, "Numerical study of laminar flow past one and two circular cylinders", Computers & Fluids, Vol. 19, No. 2, pp. 155-170, 1991.
[4] Yinglong Zhang and Songping Zhu, "Open channel flow past a bottom obstruction", Journal of Engineering Mathematics 30: 487-499, 1996.
[5] Alvaro Valencia, "Laminar flow past square bars arranged side by side in a plane channel".
[6] K. M. Lam and M. Y. H. Leung, "Asymmetric vortex shedding flow past an inclined flat plate at high incidence", European Journal of Mechanics B/Fluids 24 (2005) 33-48.
[7] A. K. Singha, A. Sarkar and P. K. De, "Wall effect on heat transfer past a circular cylinder - a numerical study", International Conference on Mechanical Engineering 2005 (ICME2005), 28-30 December 2005, Dhaka, Bangladesh.
[8] A. Mueller, J. Anthoine and P. Rambaud, "Vortex shedding in a confined laminar flow past a square cylinder", 5th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2008), June 30 - July 5, 2008, Venice, Italy.
[9] B. N. Rajani, A. Kandasamy and Sekhar Majumdar, "Numerical simulation of laminar flow past a circular cylinder", Applied Mathematical Modelling 33 (2009) 1228-1247.
[10] Chang-Hyun Sohn, B. H. Lakshmana Gowda and Myong-Gun Ju, "Reverse flow in a square duct with an obstruction at the entry", Journal of Mechanical Science and Technology 23 (2009) 2376-2389.
[11] Shivani T. Gajusingh, Nasiruddin Shaikh and Kamran Siddiqui, "Influence of a rectangular baffle on the downstream flow structure", Experimental Thermal and Fluid Science 34 (2010) 590-602.
[12] Zou Lin, Lin Yu-feng and Lu Hong, "Flow patterns and force characteristics of laminar flow past four cylinders in diamond arrangement", 2011, 23(1): 55-64. DOI: 10.1016/S1001-6058(10)60088-1.
[13] S. B. Doma, I. H. El-Sirafy and A. H. El-Sharif, "Two-Dimensional Fluid Flow Past a Rectangular Plate with Variable Initial Velocity".
[14] Vikram C. K., Y. T. Krishne Gowda and H. V. Ravindra, "Numerical simulation of two dimensional unsteady flow past two square cylinders", International Journal of Technology and Engineering System (IJTES), Jan-March 2011, Vol. 2, No. 3.
[15] M. A. Kabir, M. M. K. Khan and M. G. Rasul, "Numerical Modelling of Reverse Flow Phenomena in a Channel with Obstruction Geometry at the Entry", WSEAS Transactions on Fluid Mechanics.
[16] D. Greenspan, "Numerical Studies of Steady, Viscous, Incompressible Flow in a Channel with a Step", Journal of Engineering Mathematics, Vol. 3, No. 1, January 1969.
[17] A. K. Dhiman, R. P. Chabra and V. Eswaran, "Steady Flow Across a Confined Square Cylinder: Effects of Power-Law Index and Blockage Ratio", J. Non-Newtonian Fluid Mech. (2008).
[18] S. Bhattacharyya and D. K. Maiti, "Vortex shedding suppression for laminar flow past a square cylinder near a plane wall: a two-dimensional analysis", Acta Mechanica, 184: 15-31 (2006).
[19] John D. Anderson, Jr., "Computational Fluid Dynamics", McGraw-Hill International Editions.
[20] Wisam K. Hussam, Mark C. Thompson and Gregory J. Sheard, "A quasi-two-dimensional investigation of unsteady transition in shallow flow past a circular cylinder in a channel", Seventh International Conference on CFD in the Minerals and Process Industries, CSIRO, Melbourne, Australia, 9-11 December 2009.
[21] S. Kumar, C. Cantu and B. Gonzalez, "Flow past a rotating cylinder at low and high rotation rates", Journal of Fluids Engineering, ASME, April 2011, Vol. 133 / 041201-1.






Influence of Fiber Length on the Tribological Behaviour of Short PALF Reinforced Bisphenol-A Composite
Supreeth S¹, Vinod B², Dr. L. J. Sudev³
¹Research Scholar, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
²Assistant Professor, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
³Professor, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
Email-supreeth.mechanical@gmail.com, 9738412597
Abstract - In recent years, natural fiber reinforced composites have received increasing attention in light of growing environmental awareness. The use of natural fibers as reinforcement in polymeric composites for technical applications has been an active research subject. Among several natural fibers, pineapple leaf fibre (PALF) is one that has good potential as reinforcement in polymer composites. PALF was extracted from raw pineapple leaves; it was then chemically treated and dried in a hot air oven to remove the water content. In the present work the composite specimens were prepared using Bisphenol-A (BPA) as the matrix and short PALF fibers of length < 15 mm at 30% volume fraction as reinforcement. The composites were prepared by the hand lay-up technique. The objective of the present work is to investigate the tribological behavior of short PALF reinforced Bisphenol-A composite. Composites reinforced with fiber lengths of 2 mm, 4 mm, 6 mm, 8 mm, 10 mm, 12 mm and 14 mm were subjected to wear tests. The wear behavior of the composites was evaluated using a pin-on-disc machine at varying loads of 5 N, 10 N and 15 N and at constant sliding distance, velocity and speed. The results show that the wear rate increases with increase in load for the composite specimens with lower interfacial bond strength. From this experimental study, it was observed that the fiber length greatly influences the wear properties of reinforced composites.
Keywords - PALF, BPA, Fiber length, Natural fibers, Alkaline treatment, Specific wear rate, Co-efficient of friction.
I. INTRODUCTION

Composite materials are materials made from two or more constituent materials with significantly different physical or
chemical properties, that when combined, produce a material with characteristics different from the individual components. The
individual components remain separate and distinct within the finished structure. The new material may be preferred for many
reasons: common examples include materials which are stronger, lighter or less expensive when compared to traditional materials.

Natural fibers are plant based, lignocellulosic in nature, and composed of cellulose, hemicelluloses, lignin, pectin and waxy substances. Cellulose gives the strength, stiffness and structural stability of the fibre and is its major framework component. Pineapple leaf fibre (PALF) is one such fiber source, known for a long time, obtained from the leaves of the pineapple plant. Pineapple leaves from the plantations are being wasted, as they are cut after the fruits are harvested before being either composted or burnt. Additionally, burning of these beneficial agricultural wastes causes environmental pollution. Over the past decade, cellulosic fillers have been of greater interest, since they give improved mechanical properties to composite materials compared to those containing non-fibrous fillers.

Bisphenol-A (BPA) resin is a thermoset resin with good thermal and environmental stability, high strength and wear resistance. This combination of properties permits the application of BPA in polymer-based heavy duty sliding bearings. For these purposes, BPA is usually compounded with reinforcements like glass or carbon fibers, ceramic or mineral oxides, and inorganic fillers. The use of fibers in polymeric composites helps to improve tensile and compressive strengths, tribological characteristics, toughness (including abrasion resistance), dimensional stability, thermal stability, and other properties.

II. MATERIALS AND METHODOLOGY

Pineapple leaf fibre (PALF) is one such fiber source, known for a long time, obtained from the leaves of the pineapple plant (Ananas comosus) of the family Bromeliaceae.
Bisphenol-A (BPA) is an organic compound which belongs to the group of diphenylmethane derivatives and bisphenols. Its chemical formula is (CH3)2C(C6H4OH)2. BPA is used to make certain plastics and epoxy resins; it has been in commercial use since 1957.


A. Materials

PALF extracted from the leaf of the pineapple plant by a biological method was supplied by Chandra Prakash Co., Jaipur, Rajasthan. Bisphenol-A resin was supplied by Balaji Fabrications, Mysore, Karnataka.

B. Chemical treatment

The extracted fibers were subjected to alkali treatment (mercerization) using sodium hydroxide (NaOH), the most commonly used treatment for bleaching and cleaning the surface of natural fibers to produce high-quality fibers. Modifying natural fibers with alkali has greatly improved the mechanical properties of the resultant composites. First, a 5% NaOH solution was prepared using sodium hydroxide pellets and distilled water. Pineapple leaf fibers were dipped in the solution for 1 hour. After 1 hour the fibers were washed with 1% HCl solution to neutralize them, and then washed with distilled water. They were then kept in a hot air oven for 3 hours at 65-70 °C, after which the fibers were chopped to different fiber lengths.

C. Manufacturing of composite

A polypropylene (PP) mould having dimensions of 80 × 60 × 10 mm is used for composite fabrication. The mass fraction for the prepared mould is calculated using the equations for the volume fraction and density of the fiber given below. The mould was first coated with wax so that the laminate easily comes out of the die after hardening. Then around 15 to 20 ml of promoter and accelerator are added to the Bisphenol resin, and the color of the resin changes from pale yellow to dark yellow with the addition of these two agents. The laminates of different fiber lengths of short PALF are prepared using the hand lay-up method, a relatively simple method compared to others such as vacuum bag molding, resin transfer molding and autoclave molding. Fig. 2.1 shows the PALF reinforced laminated composites with fiber lengths of 2, 4, 6, 8, 10, 12 and 14 mm respectively.



2 mm   4 mm   6 mm   8 mm   10 mm   12 mm   14 mm

Figure 2.1: PALF reinforced composites of different fiber length



Figure 2.2: 80 × 60 × 10 mm mould


The mass fraction for the prepared mould and for the desired volume fraction of fibre is calculated using the equations:

Volume fraction of the fibre: V_F = v_f / v_c   (1)

Density of the fibre: ρ = m_f / v_f   (2)

where v_f = volume of the fibre, v_c = volume of the composite, and m_f = mass of the fibre.
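As an illustration of Eqs. (1) and (2) (an added sketch, not part of the original paper; the 20% volume fraction and the fibre density are assumed, illustrative values), the fibre mass needed for the mould can be computed in Python as:

def fiber_mass(volume_fraction, mould_volume_mm3, fiber_density_g_per_mm3):
    """Mass of fibre (g) for a desired fibre volume fraction: m_f = rho * V_F * v_c."""
    fiber_volume_mm3 = volume_fraction * mould_volume_mm3  # v_f = V_F * v_c, from Eq. (1)
    return fiber_density_g_per_mm3 * fiber_volume_mm3      # m_f = rho * v_f, from Eq. (2)

mould_volume = 80 * 60 * 10                     # the 80 x 60 x 10 mm mould, in mm^3
print(fiber_mass(0.20, mould_volume, 1.44e-3))  # assumed 20% fibre, ~1.44 g/cm^3 density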


D. Pin-on-Disc Wear Test

Wear tests were carried out on a pin-on-disc wear test rig at room temperature, as shown in Fig 2.3. Square bar specimens of
size 8 × 8 × 10 mm were prepared from the different fibre-length PALF samples and glued to mild-steel pins of dimension 8 × 8 × 32 mm, as
shown in Fig 2.4. The pin with the attached specimen is mounted on the arm of the tribometer, and the sample rotates against a wear disc of
EN-31 steel (56-60 HRC), 165 mm in diameter and 8 mm thick. The desired load is applied to the pin through a pulley arrangement at 5 N,
10 N and 15 N. The disc rotated at a constant speed of 160 rpm, sliding velocity 1 m/s and sliding distance 180 m under the
different loads. Three samples, A, B and C, from each fibre-length PALF composite were used for the three
loads respectively, and each sample was weighed before and after the wear test to determine the weight loss due to wear.

Figure 2.3: Pin-on-disc machine
Figure 2.4: Wear samples
III. Results and Discussion
Table 3.1 shows the wear and frictional properties of the different fibre-length PALF composites for constant parameters:
velocity 1 m/s, sliding distance 180 m and speed 160 rpm. The study of frictional and wear properties is important for
applications in which the composite will be subjected to friction and wear. The properties required to
analyse the wear behaviour of the composite are the specific wear rate (K_o), the coefficient of friction (μ) and the tangential friction force (F_t).
The coefficient of friction relates the tangential friction force existing between the composite sample surface and the
abrasive disc surface to the applied normal load. The specific wear rate is not an inherent property of the composite, but it expresses the wear resistance of the tested sample
in terms of the volume of material removed with respect to the applied normal load and sliding distance.

It was observed that the coefficient of friction decreased with increasing load. At the beginning the fibres
offered resistance to wear; this increased the friction coefficient and resulted in thermo-mechanical loading, so micro-cracks
developed on the surface and the wear resistance was reduced. The sliding direction and the random orientation of the
fibres were the factors influencing the friction coefficient values [6]. The coefficient of friction with respect to varying load is plotted in
Fig 3.1.

When the load increases, more material is removed, and this removed material provides self-lubrication as it is held between the
contact surfaces of the sample and the abrasive disc. This results in a lower tangential friction force (F_t) between the mating
surfaces of sample and disc, which in turn gives a lower coefficient of friction (μ) and a lower specific wear rate. In the case of the
2 mm and 14 mm PALF composites, the fibres delaminate easily when shear force acts between the sample and the disc. The fibres
therefore transmitted most of the applied load and caused a high tangential friction force and coefficient of friction between the mating
surfaces of sample and disc, which in turn resulted in a high specific wear rate.

The coefficient of friction of the 8 mm fibre-length PALF composite was lower than that of unreinforced
Bisphenol-A by 18.75%, 17.33% and 14.2% at 5 N, 10 N and 15 N respectively. As the fibre length increases, the fibre-matrix adhesion increases up to 8 mm
and then decreases: fibres that are too short, or too long, act as flaws in the composite, so the composite strength decreases [15].
Hence, from the experimental results, 8 mm is considered the optimum fibre length.


The specific wear rate decreased with increase in load. The unreinforced Bisphenol-A resin showed a higher specific wear rate than
the reinforced composites; this is due to the presence of reinforcing fibres, which increase the average hardness of the
composite. The lowest specific wear rate is observed for the 8 mm PALF composite, owing to the strong interfacial bond
between matrix and fibres, which prevents delamination and fibre pull-out during the wear process. The specific
wear rate of the 8 mm fibre-length PALF composite was lower than that of unreinforced Bisphenol-A by 75.60%, 77.15% and 61.62% at
5 N, 10 N and 15 N respectively. The specific wear rate with respect to varying load is plotted in Fig 3.2.

Table 3.1: Wear and frictional properties of composites for different fibre lengths of PALF composites
(Sliding distance = 180 m, velocity = 1 m/s, time = 3 min)

Fibre length        Load (N)   Wear (microns)   Frictional force (N)   Coefficient of friction   Specific wear rate (mm^3/N-m)
Bisphenol-A resin    5         279               4.5                   0.80                      19.84E-3
                    10         356               7.7                   0.75                      12.65E-3
                    15         385              10.7                   0.70                       9.12E-3
2 mm                 5         268               4.0                   0.80                      19.05E-3
                    10         340               7.5                   0.75                      12.08E-3
                    15         375              10.5                   0.70                       8.80E-3
4 mm                 5         212               3.7                   0.75                      15.075E-3
                    10         256               7.2                   0.72                       9.10E-3
                    15         292              10.1                   0.67                       6.92E-3
6 mm                 5         210               3.6                   0.73                      14.90E-3
                    10         221               7.0                   0.70                       7.85E-3
                    15         268              10.1                   0.67                       6.35E-3
8 mm                 5          68               3.3                   0.65                       4.85E-3
                    10          80               6.2                   0.62                       3.48E-3
                    15         147               9.0                   0.60                       2.84E-3
10 mm                5         107               3.4                   0.67                       7.60E-3
                    10         213               6.5                   0.65                       7.57E-3
                    15         280               9.3                   0.62                       6.60E-3
12 mm                5         178               3.6                   0.72                      12.65E-3
                    10         213               6.9                   0.69                       7.57E-3
                    15         280              10.1                   0.67                       6.60E-3
14 mm                5         225               3.8                   0.76                      16.00E-3
                    10         272               7.0                   0.70                       9.67E-3
                    15         310              10.3                   0.69                       7.34E-3


The specific wear rate and coefficient of friction are calculated experimentally from the equations:


K_o = V / (P × L)   (3)

μ = F / P   (4)

where K_o = specific wear rate in mm^3/N-m, V = volume of material removed by wear in mm^3, P = normal load in N, L = sliding
distance in metres (m), μ = friction coefficient, and F = tangential friction force in N.
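A minimal added Python sketch of Eqs. (3) and (4) (an illustration, not part of the original paper), checked against one row of Table 3.1:

def specific_wear_rate(volume_removed_mm3, load_N, sliding_distance_m):
    """Specific wear rate K_o in mm^3/(N-m), Eq. (3)."""
    return volume_removed_mm3 / (load_N * sliding_distance_m)

def friction_coefficient(tangential_force_N, load_N):
    """Coefficient of friction mu = F / P, Eq. (4)."""
    return tangential_force_N / load_N

# Check against the 8 mm / 5 N row of Table 3.1 (F = 3.3 N, P = 5 N):
print(friction_coefficient(3.3, 5.0))   # 0.66, close to the tabulated 0.65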
Figure 3.1: Coefficient of friction with respect to varying load


Figure 3.2: Specific wear rate with respect to varying load
IV. Conclusion

In the study of wear and frictional properties, the unreinforced resin and the composites with different fibre lengths were
subjected to varying load at constant sliding distance and velocity. The PALF composite with 8 mm fibre length shows the lowest specific wear
rate and coefficient of friction. The present investigation shows that both the specific wear rate and the coefficient of friction decreased with increase in
normal load. As the fibre length increases, the strength and the fibre-matrix adhesion increase up to the optimum length of 8 mm and then
decrease; fibres that are too short or too long act as flaws in the composite, so the composite strength decreases. The random
distribution of fibres enabled the composite material to absorb more load with less friction force at the mating surfaces of the samples and the
abrasive disc. The coefficient of friction of all the fibre-length PALF composites decreased with increase in normal load due to
self-lubrication of the test samples: as the normal load increased, more material was removed and was held
at the mating surfaces of the composite and the abrasive disc, producing self-lubrication. From this experimental study, it was observed
that fibre length greatly influences the wear properties of reinforced composites.


REFERENCES:

[1] Pedro V., Jorge F., Antonio M. and Rui L., "Tribological behavior of epoxy based composites for rapid tooling", Wear, Vol. 260, pp. 30-39, 2006.

[2] Patel R., Kishorekumar B. and Gupta N., "Effect of Filler Materials and Preprocessing Techniques on Conduction Processes in Epoxy-based Nano dielectrics", IEEE Electrical Insulation Conference, Montreal, QC, Canada, 31 May-3 June 2009.

[3] Suresha B., Chandramohan G. and Prakash J. N., "The role of fillers on friction and slide wear characteristics in Glass-Epoxy composite systems", J. Miner. Mater. Charact. Eng., Vol. 5, No. 1, pp. 87-101, 2006.

[4] Suresha B., Chandramohan G., Prakash J. N., Balusamy V. and Sankarayanasamy K., "The Role of Fillers on Friction and Slide Wear Characteristics in Glass-Epoxy Composite Systems", Journal of Minerals and Materials Characterization and Engineering, Vol. 5, No. 1, 2006.

[5] Punyapriya Mishra, "Statistical Analysis for the Abrasive Wear Behavior of Bagasse Fiber Reinforced Polymer Composite", International Journal of Applied Research in Mechanical Engineering (IJARME), ISSN: 2231-5950, Vol. 2, Iss. 2, 2012.

[6] Mohit Sharma, I. Mohan Rao and Jayashree Bijwe, "Influence of Fiber Orientation on Abrasive Wear of Unidirectional Reinforced Carbon Fiber-Polyetherimide Composites", Tribology International, Vol. 43, pp. 959-964, 2010.

[7] K. Suganuma, T. Fujita, K. Nihara and N. Suzuki, "AA6061 composite reinforced with potassium titanate whisker", J. Mater. Sci. Lett., Vol. 8, No. 7, pp. 808-810, 1989.

[8] De S. K. and White J. R., Short Fibre Polymer Composites, Woodhead Publishing Limited, Cambridge, UK, 1996.

[9] Verma A. P. and Sharma P. C., "Abrasive Wear Behaviour of GRP Composite", The Journal of the Institute of Engineers (India), Pt MC2, Vol. 72, pp. 124, 19

[10] Kishore, Sampathkumaran P., Seetharamu S., Vynatheya S., Murali A. and Kumar R. K., "SEM observations of the effect of velocity and load on the slide wear characteristics of glass-fabric epoxy composites with different fillers", Wear, Vol. 237, pp. 20-27, 2000.

[11] D. Hull and T. W. Clyne, An Introduction to Composite Materials, 2nd edition, Cambridge University Press, London, 1996.

[12] Wang J., Gu M., Songhao and Ge S., "The role of the influence of MoS2 on the tribological properties of carbon fiber reinforced Nylon 1010 composites", Wear, Vol. 255, pp. 774-779, 2003.

[13] Kishore, Sampathkumaran P., Seetharamu S., Thomas P. and Janardhana M. A., "A study on the effect of the type and content of filler in epoxy-glass composite system on the friction and wear characteristics", Wear, Vol. 259, pp. 634-641, 2005.

[14] Youxi Lin, Chenghui Gao and Ning Li, "Influence of CaCO3 whisker content on mechanical and tribological properties of polyetheretherketone composites", J. Mater. Sci. Technol., Vol. 22, No. 5, pp. 584-588, 2006.

[15] Uma Devi L., Bhagawan S. S. and Thomas S., "Mechanical Properties of Pineapple Leaf Fiber-Reinforced Polyester Composites", Journal of Applied Polymer Science, Vol. 64, pp. 1739-1748, 1997.

Applications of MATLAB's Toolbox to Recognize Handwritten Characters
Part 2: Experimental Results
Pallavi Aggarwal¹, Yashasvi Rawal
Bharti Vidyapeeth College of Engineering, New Delhi, India
¹iampallavi15@gmail.com, (+91)-7838907734

Abstract - Handwritten character recognition is a challenging task in the field of research on image processing, artificial
intelligence as well as machine vision since the handwriting varies from person to person. Moreover, the handwriting styles, sizes and
its orientation make it even more complex to interpret the text. The numerous applications of handwritten text in reading bank
cheques, Zip Code recognition and in removing the problem of handling documents manually has made it necessary to acquire
digitally formatted data. This paper presents the recognition of handwritten characters using either a scanned document, or direct
acquisition of image using Matlab, followed by the implementation of various other Matlab toolboxes like Image Processing and
Neural Network Toolbox to process the scanned or acquired image. Experimental Results are given to present the proposed model in
order to recognize handwritten characters accurately.

Keywords - Image Acquisition, Image Rendering, Character Extraction, Image Processing, Edge Detection, Neural Network, Back
Propagation Network, Multi Layer Perceptron Network

1. INTRODUCTION
With the advancement of technology, the interfacing between man and machine has increased the scope of research in various
domains, thereby making the majority of tasks automated and easier to perform. MATLAB is one such powerful tool, in which
the availability of the Image Acquisition Toolbox, Image Processing Toolbox and Neural Network Toolbox simplifies the task of
obtaining and understanding handwritten text.
[2] The two commonly used methods of handwritten character recognition, on-line and off-line, have their own advantages
and disadvantages: while the off-line method provides more accuracy, the on-line method is superior in recognizing characters due to
the temporal information available to it. [10] Handwriting recognition principally entails optical character recognition (OCR). [9] OCR
systems process the image using several steps, including segmentation, feature extraction and classification. [1] They match the
images against stored bitmaps based on specific fonts, which poses a problem because the limited patterns
available for comparison make the results inaccurate. Though [3] both handwritten and printed characters may be recognized using
OCR, the performance depends directly on the quality of the input documents.
The presented procedure uses MATLAB's neural network technology to overcome these challenges through analysis of hand strokes and
irregularities in written characters, and by matching them against multiple stored characters. [7] Other methods using the Euclidean
distance metric have been employed earlier with neural networks, but later research has shown them to be ineffective.
To recognize these characters, the first step is to acquire an image for processing. Next, the Image Processing toolbox
is used to exploit various image properties in order to extract the characters, and the Neural Network toolbox is then used to train a suitable dataset. After
training the network, testing is done and the performance curve is generated along with the individual required characters. The various
steps for performing this task are illustrated as follows:
- Acquiring an Image
- Image rendering
- Character extraction
- Training and testing


2. ACQUIRING AN IMAGE
To acquire an image, either of two methods can be followed. In the first, the image is scanned in order to make it machine
editable; the image can be of any specific format, such as JPEG or BMP, and a series of operations is performed after the image is taken as
input. The other method uses the Image Acquisition toolbox to take the input image. [11] By opening the tool from the Matlab window
(Start → Toolboxes → More → Image Acquisition Tool), one can directly set the image acquisition parameters and preview the
acquired image. This can later be exported as image data to Matlab for further operations.


3. IMAGE RENDERING
After the image is acquired, the Image Processing toolbox comes into play. The image is first converted into a greyscale image; the
purpose of converting an RGB image into greyscale is that it eliminates the hue and saturation information while retaining
the needed luminance.
The greyscale image is further processed into a binary image, which replaces all pixels in the input image with luminance greater than
a threshold level with the value 1 (white) and replaces all other pixels with the value 0 (black).
This is followed by edge detection to find the edges in the image. [13] It finds the places in the image where there is a rapid
change in intensity, by following one of the two given definitions:
- it looks for places where the first derivative of the intensity has a magnitude larger than some threshold, or
- it looks for places where the second derivative of the intensity has a zero crossing.
The toolbox gives a number of derivative estimators, and using these estimators the operational sensitivity to horizontal edges,
vertical edges or both can be specified. Where edges are found, the binary image is returned with 1s, else 0s. After detecting the edges,
morphology is used for dilating and filling the image, which defines the region to fill by selecting the required points. An illustrative sketch of this pipeline follows below.
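The paper implements this pipeline with MATLAB's Image Processing Toolbox; the following is only an illustrative Python sketch of the same sequence, with an assumed input file name and default thresholds:

import numpy as np
from scipy import ndimage
from skimage import io, color, filters, feature

image = io.imread("handwriting.jpg")            # assumed input file
gray = color.rgb2gray(image)                    # drop hue/saturation, keep luminance
binary = gray > filters.threshold_otsu(gray)    # 1 (white) above the level, else 0
edges = feature.canny(gray)                     # rapid intensity change -> edge pixels
dilated = ndimage.binary_dilation(edges, structure=np.ones((3, 3)))
filled = ndimage.binary_fill_holes(dilated)     # fill enclosed character regions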






Fig1: Block Diagram to represent Image Rendering


4. CHARACTER EXTRACTION
The image is then put through a connectivity test to check for the maximally connected components and the properties of each
component, which is in the form of a bounding box. After locating each box, the individual characters are cropped into separate sub-images,
which are the raw data for the following feature extraction routine. The size of the sub-images is not fixed, since they are exposed to noise,
which makes the cropping vary from one character to another; this would make the network input non-standard
and prevent the data from being fed through the network. To avoid this, the sub-images are resized, and then, by finding
the average value in each 10 by 10 block, the inputs for the network can be determined (a sketch of this step follows below).
In this way the character can be extracted and passed to the next stage for classification and for training the
neural network.
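A minimal added sketch of this resizing and block-averaging step (the 50 x 50 resize giving 25 inputs is an assumption; the paper does not state the resized dimensions):

import numpy as np
from skimage.transform import resize

def block_features(sub_image, size=50, block=10):
    """Resize a cropped character and average every block-by-block tile."""
    img = resize(sub_image.astype(float), (size, size))
    n = size // block
    tiles = img.reshape(n, block, n, block)   # split into n x n tiles
    return tiles.mean(axis=(1, 3)).ravel()    # one averaged value per tile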



5. TRAINING AND TESTING A NETWORK
Next, we create a training vector for the neural network in order to match the input accepted by the neural network function. The steps
performed in creating and training the neural network are illustrated below:
Type nntool in Matlab. A dialog box appears in which we import the Inputs and Targets from the MATLAB
workspace. After importing, the created network appears in the network list. Open the network, select the training tab, choose
the training parameters and data (inputs and targets), and finally click the Train option to train the network. We used a feed-forward
back-propagation neural network; in other words, [4] an implementation based on a Multi-Layer Perceptron Network (MLPN) trained with
back propagation. [6] Other, more complex training methods employing the Error Back Propagation Algorithm have been used earlier.
Two hidden layers with the TANSIG (tan-sigmoid) transfer function were used. The experimental results in the next section illustrate
the steps performed; an illustrative sketch of an equivalent training step is shown below.
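The paper performs this step with MATLAB's nntool; the following Python sketch shows an analogous setup under stated assumptions (the hidden-layer sizes and the placeholder dataset are illustrative, not from the paper):

import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder dataset: 25 block-averaged features per character (assumed shape).
rng = np.random.default_rng(0)
X = rng.random((200, 25))                          # stand-in feature vectors
y = rng.integers(0, 26, size=200)                  # stand-in labels 'A'..'Z' as 0..25

clf = MLPClassifier(hidden_layer_sizes=(50, 50),   # two hidden layers, as in the paper
                    activation="tanh",             # analogue of the TANSIG function
                    solver="sgd", max_iter=500)    # back propagation by gradient descent
clf.fit(X, y)                                      # train on the prepared dataset
print(clf.score(X, y))                             # accuracy on the toy data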

[Fig 1 block diagram: reading an image → converting to greyscale image → converting to binary image → detecting edges in the image → image dilating using edges → image filling]



Fig2: Neural network training tool [1]

5.1 ABOUT NEURAL NETWORK
As the name suggests, "neural" relates to neurons, an important part of the biological nervous system. Just as [5] the human nervous
system processes the information it receives from nerves, this artificial network processes information to solve
specific problems. Every neural network comprises interconnected neurons and is trained or configured for a specific application.
Neural networks are used in various fields of study, such as pattern recognition and data classification, to analyse a problem and adjust their
parameters accordingly.
The need for neural networks can be appreciated by comparison with conventional computers, which require an algorithm to
solve a specific problem. Unlike computers, neural networks follow a parallel processing architecture, resulting in maximum
efficiency. Moreover, there are multiple network types, such as perceptron, feed-forward and feedback networks, which offer different
ways to associate inputs with outputs.
Neural networks are not confined to MATLAB but are also suitable for real-time systems. They also contribute to research in medicine,
for example in neurology, to study brain mechanisms in detail. The scope of neural networks is not limited to use in isolation: [8] they can be
used to solve the Zip Code recognition problem, and they can be integrated with other related subjects such as fuzzy logic and
artificial intelligence for faster response and computation.



Fig3: Neural Network Architecture [12]




PART 2: EXPERIMENTAL RESULTS


Fig 4: Reading an image
Fig 5: Binary image
Fig 6: Character location using edge detection
Fig 7: Image dilation
Fig 8: Image filling
Fig 9: Character extraction on binary image
Fig 10: Character extraction on input image
Fig 11: Extracted character blocks

ACKNOWLEDGMENT
We would sincerely like to thank the Head of Department, Mrs. Arti Kane, who provided us with the necessary lab equipment and timely
assistance in the work that we carried out. Our professor, Mr. Arvind Rehalia, was a great support: he read the paper, suggested
the necessary changes, encouraged us and offered detailed advice on drafting the paper. The lab assistant and the
other staff members also helped during the research work. Our heartiest thanks go to our parents and friends, who provided advice and
financial support. This research paper would not have been possible without all of them.

CONCLUSION
As presented in this paper, handwritten character recognition is divided into three main stages: image acquisition, image
processing and neural network training on the dataset. The experimental results illustrate how an input image leads to character
extraction, after which the neural network recognizes the handwritten patterns accordingly.
Artificial neural networks were chosen for character recognition because of their high noise
tolerance. The designed system can yield accurate results, provided the correct dataset is available at the time of
training the network. The current stage of the research shows that the software performs well in terms of both speed and accuracy, but
character location is not yet efficient, since the size of every block varies; this can be addressed by initializing the weights during
training of the dataset. There is scope for improving the current system. In summary, a simple yet effective approach to the recognition of
handwritten characters using artificial neural networks has been described.

REFERENCES:
[1] Žiga Zadnik, "Handwritten Character Recognition: Training a Simple NN for Classification Using MATLAB".
[2] Kauleshwar Prasad, Devvrat C. Nigam, Ashima Lokhtiya and Dheeren Umre, "Character Recognition Using Matlab's Neural Network Toolbox", International Journal of u- and e-Service, Science and Technology, Vol. 6, No. 1, February 2013.
[3] Sandeep Tiwari, Shivangi Mishra, Priyank Bhatia and Praveen Km. Yadav, "Optical Character Recognition using MATLAB", International Journal of Advanced Research in Electronics and Communication Engineering (IJARECE), Vol. 2, Issue 5, May 2013.
[4] Mathias Wellner, Jessica Luan and Caleb Sylvester, "Recognition of Handwritten Digits Using a Neural Network", 2002.
[5] http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
[6] Vijay Patil and Sanjay Shimpi, "Handwritten English character recognition using neural network", Elixir Comp. Sci. & Engg., Vol. 41, 2011.
[7] Sumit Saha and Tanmoy Som, "Handwritten character recognition by using Neural-network and Euclidean distance metric", IJCSIC - International Journal of Computer Science and Intelligent Computing, Vol. 2, No. 1, November 2010.
[8] O. S. Matan, R. K. Kiang and C. E. Stenard, "Handwritten Character Recognition Using Neural Network Architectures", 4th USPS Technology Conference, November 1990.
[9] A Matlab project in OCR by Jesse Hansen: www.ele.uri.edu/~hansenj/projects/ele585/OCR/OCR.pdf
[10] http://en.wikipedia.org/wiki/Handwriting_recognition
[11] http://www.mathworks.in/products/imaq/description3.html
[12] http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html
[13] http://www.mathworks.in/help/images/detect-edges-in-images.html#f11-12512








Review on Relay Node Placement Techniques to Increase System Capacity in
WSN
Baldeep Kaur Brar¹, Abhilasha²
¹Student, GZS PTU Campus, Bathinda
²Associate Professor, GZS PTU Campus, Bathinda
E-mail: baldeepkaurbrar@gmail.com, 9646024306

Abstract - A wireless sensor network consists of sensor nodes capable of sensing, computation and transmission.
These sensor nodes have limited battery power, which is difficult to replace because of the hostile environment. Therefore, to increase the
lifetime of a wireless sensor network, it is necessary to develop techniques that consume less energy; lower consumption of the network's overall
energy results in an increase in system capacity. One such technique is to deploy relay nodes that
communicate with the sensor nodes, with other relay nodes and with the base stations. The relay node placement problem for wireless sensor
networks is concerned with placing the minimum number of relay nodes into the wireless sensor network to meet certain connectivity
requirements.
Index Terms - wireless sensor networks (WSN), sensor node, relay node placement, direct communication, amplify and forward, decode and forward, system capacity
I. Introduction
Hundreds or thousands of sensor nodes (SNs) are spatially distributed in a WSN, with limited battery power, to sense the environment and
send this information to a sink or base station (BS). Sensor nodes act both as sensors and as data routers, while the sink node is equipped with an unlimited
power supply. The process of sensing and communication from sensor nodes to the BS is shown in Figure 1. As SNs are deployed in hostile
environments, it is practically impossible to replace their batteries, and various researchers have proposed techniques and methods
to extend the lifetime of the WSN. Energy consumption for a transmission in a WSN is directly proportional to the square or fourth power
of the distance from the source to the destination node, so when sensor nodes are far from each other and from the sink, energy
consumption rises and becomes uneconomical. To overcome this problem, relay nodes may be placed to reduce the energy
consumption; compared to sensor nodes, relay nodes are much less costly. Relay nodes are needed to forward readings from each
individual sensor over multiple hops to the sink (a simple illustrative energy comparison follows Figure 1).
Fig.1 Wireless Sensor Network
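As a rough illustration of this distance law (a minimal sketch under an assumed first-order radio model, not a scheme from the surveyed papers), compare direct transmission with two half-distance hops via a relay:

def tx_energy(distance_m, path_loss_exponent=2, k=1.0):
    """Illustrative transmission energy E = k * d**n (assumed model)."""
    return k * distance_m ** path_loss_exponent

d = 100.0
direct = tx_energy(d, path_loss_exponent=4)
relayed = 2 * tx_energy(d / 2, path_loss_exponent=4)  # one relay placed midway
print(direct, relayed)   # 1e8 vs 1.25e7: two short hops cost far less than one long hop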


A relay node works like a repeater that amplifies the signal and forwards it to the sink. Its job is only to relay data generated by other
sensor nodes, without sensing the environment, so it can remove the burden from overloaded nodes. The relay node's main task is to
communicate with the sensor nodes and with other relay nodes. The placement of relay nodes plays a critical role in system
performance: one or more relay nodes can be placed between the sensor nodes and the sink, depending on the situation. When the
distance between a sensor node and the sink is greater than the transmission range, so that the sensor node cannot send data directly to the
sink, relay node placement is needed.







Figure 2: Showing Relay Node position with source and Sink
The deployment of relay nodes in sensor networks has been proposed for maximizing network lifetime, for energy-efficient data
gathering to increase the system capacity, for load-balanced data gathering, and for making the network fault tolerant with lower energy
consumption, since energy conservation is directly related to the lifetime of sensor networks. Relaying is especially beneficial
when there is no line-of-sight path between the source and the destination. Figure 3 shows the basic functionality of a relay
node. Many researchers have presented relay node placement techniques for enhancing the lifetime of the network; in this
paper a survey of relay placement techniques is presented.









Figure 3: Basic functionality of Relay node

Existing Techniques
Wireless networks support relay-based communication, in which a well-placed relay node receives a message from a
source node, amplifies it, and forwards it to its destination node, as shown in Figure 4. This results in performance gains
for end-users. Relaying is especially beneficial when there is no line-of-sight path between the source and
the destination; the relay node acts as an intermediary between sender and receiver.


Bredin et al. [9] extended the relay node placement problem from 1-connectivity to k-connectivity, that is, the
problem of deploying relay nodes to provide the desired fault tolerance through multipath (k-connected) connectivity between all sensor
nodes in the WSN, and presented polynomial-time O(1)-approximation algorithms for any fixed k.



Figure 4: Relay with sender and receiver


In [10], Kashyap et al. presented a 10-approximation algorithm ensuring 2-connectivity, assuming that the transmission range of
the relay nodes is the same as that of the sensor nodes, so that relay nodes can send data at the same rate as the sensor nodes.
In [11], Lloyd and Xue consider the case where relay nodes have a transmission range greater than that of the sensor
nodes. Relay nodes are needed because sensor nodes cannot directly transfer data to the sink due to their shorter transmission
range.
MIMO relay channels: SISO relay channels are those in which each terminal employs a single antenna. Under
this setup, however, there are many channel conditions in which the relay may not be able to assist the source in its
transmission; for example, the minimum of the source-relay and relay-sink channel gains may be less than the source-sink
channel gain. This issue can be avoided by considering MIMO relay channels, where each terminal employs multiple
antennas. MIMO relay channels introduce additional degrees of freedom that allow partial cooperation between the
transmitter and the relay. Under this setup, the multiple antennas at the source node and the relay node can be exploited to
perform more sophisticated encoding and decoding schemes, which leads to improved performance.

1. The More Relay Nodes, the More Energy Efficient?
It is not possible to keep reducing energy consumption simply by increasing the number of relay nodes. Slightly increasing the number of
relay nodes can significantly reduce energy consumption; on the contrary, blindly adding relay nodes does not necessarily
improve energy efficiency once their number exceeds a certain threshold, as Zhu et al. proved in 2009 [1].

2. Relay Node Placement Techniques: For the proper placement of relay nodes, different researchers have proposed different
techniques, which are discussed as follows.
2.1 Minimum-energy transmission model: In the minimum-energy transmission model, nodes located closer to the base station
need to relay data at a much higher rate than nodes located further away from the base station. This uneven energy dissipation
among the nodes may lead to the faster death of the burdened nodes, assuming that the initial energy provisioning for all nodes is
equal. The death of some nodes due to unbalanced energy dissipation may have an undesirable effect on the functionality of
the sensor network, as the dead nodes can perform neither sensing nor routing, and may even cause the
network to lose its usefulness. This problem has been addressed by optimally balancing the energy dissipation among all nodes in a
sensor network: relay nodes can be placed along the longer-distance transmission paths. Star and the Steiner Minimum Tree (SMT-II) are two techniques used for optimal relay node
placement.

2.1.1 STAR technique: In this technique each sensor node and the sink are in line of sight, in order to reduce the energy consumption,
which is directly proportional to distance: when the distance between sensor nodes and the sink increases, more energy
is consumed. A star topology is the most efficient structure for minimising the distance between sensor nodes and the sink. In
this technique each sensor node sends its own data directly to the sink, and relay nodes are placed in a straight line between each sensor node
and the sink.

2.1.2 SMT-II (Steiner Minimum Tree): The minimum number of relay nodes needed to maintain network connectivity
has been modelled as a Steiner Minimum Tree. One can construct a transmission structure from the SMT that connects the entire sensor
network using approximately the minimum number of relay nodes, such that every node can send its data directly, in one hop or in
more than one hop. The general idea is to start with an initial structure generated from the SMT and then gradually add the
remaining relay nodes so as to reduce the average energy consumption of the network and increase the system
capacity.




Figure 5: STAR TECHNIQUE


The SMT provides a transmission structure that requires the minimum number of relay nodes, but at the price of very high energy
consumption. To avoid this, the SMT-II algorithm provides an energy-efficient relay node placement and
transmission structure with the limited number of available relay nodes. The technique uses a search radius to divide the sensor and
relay nodes into different distance levels according to their distance from the sink. If relay nodes are still available after this placement,
the search radius is incremented, the nodes are divided into new distance levels, and the procedure is repeated. The process stops
when all the relay nodes are placed or all the sensor nodes fall in the same distance level (a loose illustrative sketch of the level assignment follows below).
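A very loose added sketch of this level-assignment idea (the data layout and the helper itself are assumptions, not the published SMT-II algorithm):

import math

def distance_levels(node_positions, sink, search_radius):
    """Bucket nodes into distance levels of width search_radius from the sink."""
    levels = {}
    for node, (x, y) in node_positions.items():
        dist = math.hypot(x - sink[0], y - sink[1])
        levels.setdefault(int(dist // search_radius), []).append(node)
    return levels

# Example: widening the radius and re-running mimics the repeat step described above.
print(distance_levels({"s1": (10, 0), "s2": (35, 0), "r1": (22, 0)}, (0, 0), 15))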
2.2 Hierarchical architectures: In this architecture each sensor node belongs to only one cluster, and the relay node acts as the cluster
head. Each sensor node sends data to its respective cluster-head node. The cluster-head nodes, in turn, bear much greater
responsibilities, e.g. data gathering, data aggregation and routing. These nodes may form networks among themselves and forward the
data, gathered from their own cluster as well as from other cluster heads, towards the base station using multi-hop paths. There are
many schemes for relay node placement in hierarchical architectures; one of these is the cluster string topology [ ]. In this technique n
relay nodes are divided into y clusters uniformly between source and sink, as shown in Figure 6.

Figure 6: Cluster string topology
One node in the leftmost cluster is assigned to be the source. To minimise interference, any node in one cluster is only within
communication range of the next immediate cluster/sink, such that

x/d - 1 <= y < 2x/d - 1

where x is the distance from source to sink and d is the distance between two adjacent cluster heads (see the sketch below).
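A small added sketch (the example values of x and d are assumptions) of the admissible cluster counts implied by this bound:

import math

def cluster_range(x, d):
    """Integer cluster counts y with x/d - 1 <= y < 2x/d - 1."""
    lo = math.ceil(x / d - 1)
    hi = math.ceil(2 * x / d - 1)   # exclusive upper bound
    return list(range(lo, hi))

print(cluster_range(100, 20))       # x=100, d=20 -> y in [4, 5, 6, 7, 8]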
2.3 Single-tier relay node placement: In this scheme a sensor node sends its information to a relay node or to the base station, and both relay
nodes and sensor nodes participate in forwarding received packets.
2.4 Two-tier relay node placement: In this architecture relay nodes are added to overcome the problem of long-distance
communication between the base station and edge nodes. To specify the relay node placement and the mapping from edge nodes to relay
nodes, a method named binary integer programming (BIP) has been developed. It is a three-step recursive process:
1. Based on the location of each relay node, calculate its power so as to maximize its capability.
2. Build the optimal relay routing table using BIP, which provides the mapping from edge nodes to the relay nodes.
3. Update each relay node's position using a clustering method.

Conclusion
In this paper, techniques for the proper placement of relay nodes between source and destination pairs, so as to decrease energy
consumption, increase link capacity and increase system capacity, have been discussed. We have reviewed several techniques for
the optimal placement of relay nodes, including the minimum-energy transmission model, the Star technique, the Steiner Minimum Tree (SMT-II),
and hierarchical architectures.

References
[1] Ying Zhu and Qi Han, "The More Relay Nodes, the More Energy Efficient?", University of Electronic Science and Technology of China and Colorado School of Mines, Golden, CO, USA.
[2] C. K. Lo, R. W. Heath, Jr., and S. Vishwanath, "The Impact of Channel Feedback on Opportunistic Relay Selection for Hybrid-ARQ in Wireless Networks", submitted to IEEE Trans. Veh. Technol., June 2007.
[3] C. K. Lo, S. Vishwanath, and R. W. Heath, Jr., "Relay Subset Selection in Wireless Networks Using Partial Decode-and-Forward Transmission", submitted to Proc. of the IEEE VTC-Spring, May 2008.
[4] Q. Han, A. P. Jayasumana, T. Illangasekare, and T. Sakaki, "A wireless sensor network based closed-loop system for subsurface contaminant monitoring", in Proceedings of the NSF Workshop on Next Generation Software (NSFNGS), 2008.
[5] G.-H. Lin and G. Xue, "Steiner tree problem with a minimum number of Steiner points and bounded edge-length", Inf. Process. Lett., vol. 69, no. 2, pp. 53-57, 1999.
[6] J. Augustine, Q. Han, P. Loden, S. Lodha, and S. Roy, "Energy efficient shortest path algorithms for convergecast in sensor networks", submitted for publication, 2009.
[7] D. Chen, D.-Z. Du, X.-D. Hu, G.-H. Lin, L. Wang, and G. Xue, "Approximations for Steiner trees with a minimum number of Steiner points", J. Glob. Optim., vol. 18, no. 1, pp. 17-33, 2000.
[8] X. Cheng, D.-Z. Du, L. Wang, and B. Xu, "Relay sensor placement in wireless sensor networks", Wirel. Netw., vol. 14, no. 3, pp. 347-355, 2008.
[9] J. L. Bredin, E. D. Demaine, M. Hajiaghayi, and D. Rus, "Deploying sensor networks with guaranteed capacity and fault tolerance", MobiHoc '05: Proceedings of the 6th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 309-319, 2005.
[10] A. Kashyap, S. Khuller, and M. Shayman, "Relay placement for higher order connectivity in wireless sensor networks", in INFOCOM 2006: 25th IEEE International Conference on Computer Communications, pp. 1-12, April 2006.
[11] E. L. Lloyd and G. Xue, "Relay node placement in wireless sensor networks", IEEE Trans. Comput., vol. 56, no. 1, pp. 134-138, 2007.
[12] J. Augustine, Q. Han, P. Loden, S. Lodha, and S. Roy, "Energy efficient shortest path algorithms for convergecast in sensor networks", submitted for publication, 2009.
[13] D. Chen, D.-Z. Du, X.-D. Hu, G.-H. Lin, L. Wang, and G. Xue, "Approximations for Steiner trees with a minimum number of Steiner points", J. Glob. Optim., vol. 18, no. 1, pp. 17-33, 2000.












Evaluation of Under-Five Malaria Treatment in Sierra Leone: A Case Study of Kenema District Hospital
Gegbe B.¹, Kokofele I.¹
¹Department of Mathematics and Statistics, School of Technology, Njala University
E-mail: bgegbe@njala.edu

Abstract - Malaria remains the leading cause of both morbidity and mortality in Sierra Leone, especially among young children,
despite efforts to reduce infection rates over the past decade; if robust mechanisms are not put in place to eradicate
the disease, the nation will continue to face challenges in controlling malaria, with the greatest effect on under-five
children. The main purpose of this research is to evaluate the efficiency of the under-five malaria treatment administered in the Kenema
Government Hospital. The study was descriptive and designed to evaluate under-five malaria treatment using a chi-square test. The
researcher used 12274 reported cases of morbidity in under-five children within the period 2010 to 2012, of which 385 were mortality
cases (deaths as a result of malaria) and 11916 were reported cases of children treated and recovered.
In 2012 there was a rapid increase of 42% in reported morbidity, with 4028 cases, more than one and a half times the 2010 figure,
which sounds alarming and worrisome for a district in the eastern part of Sierra Leone. It can be observed that the rate of morbidity
in under-five children in Kenema District was getting higher every year. In 2012, 3897 under-five children were reported to have survived
out of 4072 morbidity cases, indicating a 40% increase in survival for under-five children in the Kenema Government District Hospital.
Again, from 2011 to 2012 the chances of survival for children grew at almost the same rate as the morbidity cases. Moving toward 2013,
there were clear indications that morbidity might increase at a rate equal to the recovery rate. There is no significant
difference between the morbidity cases and those treated and recovered (survived), which implies that the many challenges
faced by the hospital management were overcome. These challenges may exist in the form of the number of beds available, the number of
qualified doctors and nurses, and the use of appropriate and available drugs for the treatment of malaria. In the midst of these challenges
the management was able to treat and recover a large number of cases, though more morbidity cases were waiting in the queue.
Keywords: Morbidity; mortality; recovered; survival; treated

ACKNOWLEDGMENT
I owe a debt of gratitude to God Almighty through Jesus for giving me knowledge, wisdom and understanding throughout my
academic pursuit.
My sincere thanks go to Miss Marian Johnson, who worked assiduously as a typist to ensure that this work was completed. I am
particularly grateful to my wife for her architectural role in my academic activities. Thanks and appreciation go to my mother and late
father, who nurtured me to the level I am at today.

INTRODUCTION
Malaria kills a child somewhere in the world every minute. It infects approximately 219 million people each year (range
154-289 million), with an estimated 660,000 deaths, mostly of children in Africa. Ninety percent of malaria deaths occur
in Africa, where malaria accounts for about one in six of all childhood deaths. This disease also contributes greatly to anaemia
among children, a major cause of poor growth and development (UNICEF 2013).
According to the World Health Organization (WHO) report of 2013, malaria is a disease caused by parasites that are transmitted to
humans via mosquito bites. Symptoms of infection may include fever, chills, headache, muscle pain, fatigue, nausea and
vomiting; in severe cases the disease can be life threatening. In older children abdominal distress is often observed: the
abdomen is distended and tender, especially in the hepatic and spleen areas. Other complications that may arise in
children with malaria are cerebral malaria and severe anaemia. The prognosis of this disease, especially in children, is that
growth and development may be seriously impaired; convulsions are common in children with cerebral malaria and thus
contribute significantly to mortality.
Malaria is the greatest contributor to rising morbidity of all infectious diseases, followed by acute respiratory infection
(ARI). Thirty percent of Sierra Leonean children die from malaria before their fifth birthday, and currently forty (40) children die in Sierra Leone
daily from the disease (WHO, UNICEF, 2005). Malaria is responsible for the greatest number of consultations (30% of
new cases in health centres) within the public services, and it is the most common cause of hospital admission. The contraceptive
prevalence rate (percent of women) is 8% and the adolescent fertility rate (births per 1,000 women aged 15-19) is 98. In Sierra
Leone, where youths under age 15 account for 43% of the population, one begins to wonder what the trend of
malaria in under-five children will be if drastic control measures are not considered (WHO report 2012).
According to the WHO report of 2013 on the malaria situation, nearly one million cases of malaria are estimated to be
reported every year in Sierra Leone, with an estimated 6,000 children under five years old killed yearly.
THEORETICAL AND CONCEPTUAL FRAMEWORK

The chart above is a simple concept that displays the inflow and outflow of malaria for under-five children. The pool of
children is the first category. Children who test positive for infection enter the morbidity category, while those
who test negative move down to join the category of those not infected with malaria.
From the morbidity category, children then face many factors in the hospital: the quality of medical treatment, the
medicine prescribed, the availability of qualified doctors and nurses, and so on. These allow children to recover and be
discharged, after which they return to the category of those who tested negative and move back up to the
pool of children. This recycling process continues for the infection of under-five children. With all these challenges
put together, there is an absolute need to find out whether there is any significant difference between the morbidity cases
and the children treated and recovered.

THE PURPOSE OF THE STUDY
The purpose of the study is to evaluate under-five malaria treatment in Kenema District Hospital.
RESEARCH QUESTIONS
The research questions are:
- How many morbidity cases of malaria were reported in Kenema District Hospital within the period 2010-2013?
- How many children were successfully treated and recovered from malaria treatment?
- How many mortality cases attributed to malaria were reported among both out-patients and in-patients within the period 2010-2013?
- Is there any significant difference between morbidity cases and those treated and recovered?

OBJECTIVES
The specific objectives of this research are to:
- Give a comparative analysis of reported malaria morbidity cases within the period 2010-2013.
- Identify successfully treated patients within the period 2010-2013.
- Evaluate the number of mortality and morbidity cases attributed to malaria in Kenema District Hospital.
- Evaluate whether there is any significant difference between morbidity cases and those treated and recovered in Kenema District Hospital.

DESCRIPTION OF DATA SOURCE
The paediatric ward of Kenema Government Hospital is situated in Nongowa chiefdom in the Eastern part of Sierra
Leone, which serves as the provincial headquarters of the region. The paediatric ward serves an area of about three miles radius; it
shares boundaries with a village called Tissoh in the north-east, Combema in the east, Gbenderu in the south, Bandama
in the west and the Komboi hills in the north and north-east.
Kenema Government Hospital is the main hospital for the treatment of all diseases for the people of the Eastern Province. It is well staffed
with a qualified paediatrician, other medical officers and a good number of State Registered Nurses, and it is attended by
children from the entire district and beyond.
STUDY DESIGN
The study was descriptive and designed to evaluate under-five malaria treatment using a chi-square test. The focus was
on using a non-parametric test to establish the relation between the number of morbidity cases (both in- and out-patient)
and the number of cases treated and recovered from malaria infection in the Kenema Government Hospital, Sierra Leone, within the
period 2010-2013 inclusive.
STUDY POPULATION
All reported cases of under-five malaria treatment in the Kenema Government Hospital within the
period 2010-2013 inclusive were considered.



DATA SOURCE
Data were extracted from the National Malaria Control Programme Reports (2010-2013) database system, which includes
information gathered via acute case investigation on all malaria cases contacted and followed up between 2010 and 2013
inclusive. The researcher obtained the numbers of children diagnosed with malaria infection, those treated
successfully, and the deaths attributed to malaria infection.
SAMPLE SIZE
The researcher considered a sample of 12274 reported cases of morbidity in under-five children within the period 2010 to 2012,
of which 385 were mortality cases (deaths as a result of malaria) and 11916 were reported cases of children treated
and recovered.
DATA ANALYSIS
Simple multiple bar charts were used for data presentation. Using SPSS, a chi-square test was used to test whether there is any
significant difference between morbidity and the number of recovered cases.
VARIABLES
For the purpose of this study the variables are categorised as:
- Y_t (dependent variable) = total number of morbidity cases (both in- and out-patient) attributed to malaria
- X_1 (independent variable) = total number of mortality cases attributed to malaria
- X_2 (independent variable) = total number successfully treated for malaria

ASSUMPTIONS
- The data do not include children who were sick but tested negative for malaria;
- only those who tested positive were considered.
- Morbidity cases were considered the dependent variable.
- Mortality (deaths as a result of malaria) and those treated for malaria and recovered (survived) were considered independent variables.
HYPOTHESES TESTED
H0: There is no significant difference between morbidity cases and the cases treated and recovered (survived).
H1: There is a significant difference between morbidity cases and the cases treated and recovered (survived).
RESULTS AND DISCUSSION

Chart 1: Reported cases of malaria for under-five children, Kenema District Hospital (2010)








MORBIDITY AND MORTALITY, 2010
In 2010, 2627 cases of malaria were reported in Kenema for under-five children, of whom 89 died and
2548 were treated and discharged; 2610 cases were in-patients and 17 were out-patients. There were no reported
deaths among out-patients. The month of July recorded the highest number of reported malaria cases, while
October and November equally recorded the highest numbers of deaths. The morbidity cases peaked between
May and the end of August, while the death cases occurred in roughly equal proportion throughout the year.


Chart 2: Reported cases in 2011






MORBIDITY AND MORTALITY, 2011
In 2011, 2865 under-five malaria cases and 107 deaths were reported; 2758 children were successfully treated,
with 2832 in-patient cases and 33 out-patient cases. October showed the highest morbidity and mortality.
There was a sharp increase in morbidity in June and again from July through October.
[Chart residue: Charts 1 and 2 plot monthly counts (0-400), January to December, for morbidity, mortality, and treated-and-recovered cases; Chart 2 is titled "Malaria Reported Cases for Under-Five Children in Kenema Government Hospital, 2011".]



Chart 3: Reported cases in 2012







MORBIDITY AND MORTALITY, 2012
In 2012, 4072 morbidity cases were reported, of which 25 were out-patients and 4052 were in-patients, and
175 mortality cases were reported. October showed the highest morbidity and July the highest mortality.
There were no reported death cases in January and September.

Chart 4: Reported cases in 2013






MORBIDITY AND MORTALITY, 2013
Halfway through 2013, 2710 morbidity cases and 91 mortality cases had been reported; 6 cases were out-patients
and 2704 were in-patients.
[Chart residue: Chart 3, "Malaria Reported Cases for Under-Five Children, Kenema Government Hospital, 2012", plots monthly counts (0-600) of reported malaria cases, reported death cases, and treated-and-recovered cases from February to December; Chart 4, "Malaria Reported Cases for Under-Five Children, Kenema District Hospital, 2013", plots the same series (0-600) from January to June.]

RELIABILITY AND VALIDITY OF DATA

Table 1: Tests of Normality (dependent variable: Morbidity, grouped by Mortality)

Mortality   Kolmogorov-Smirnov (Statistic, df, Sig.)   Shapiro-Wilk (Statistic, df, Sig.)
5           .288    5    .200*                         .859    5    .226
6           .261    5    .200*                         .853    5    .204
7           .260    2    .
8           .260    2    .
10          .248    7    .200*                         .922    7    .482
12          .260    2    .
14          .260    2    .
15          .195    3    .                             .996    3    .883
17          .260    2    .
20          .260    2    .
For the validity and reliability of the data, the Shapiro-Wilk test of normality was performed. Since the p-values for the Shapiro-Wilk test are
0.226 and 0.204, the morbidity and survival (treated and recovered) series for under-five malaria reported cases
can be treated as normal, because their p-values are greater than alpha = 0.05.
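The paper runs this test in SPSS; a minimal Python sketch of the same check, with placeholder monthly counts rather than the study data, would be:

from scipy import stats

# Placeholder monthly morbidity counts (illustrative, not the study data).
morbidity = [210, 198, 250, 265, 310, 340, 355, 330, 280, 260, 240, 215]
w, p = stats.shapiro(morbidity)      # Shapiro-Wilk W statistic and p-value
print(f"W = {w:.3f}, p = {p:.3f}")   # p > 0.05 means no evidence against normality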


Chart 5: Morbidity against treated and recovered (survived)

The figure above shows a positive trend between morbidity and the number of children treated and
recovered: the direction displayed by the scattered points is a rising straight line. The line is not a perfect fit,
which shows that there is a strong, but not perfect, positive relation between morbidity cases and those children treated and recovered;
the points deviating from the line are the result of mortality cases.
TEST OF HYPOTHESIS
Table 2: Chi-Square Tests

                                Value        df      Asymp. Sig. (2-sided)
Pearson Chi-Square              1.520E3(a)   1444    .080
Likelihood Ratio                292.338      1444    1.000
Linear-by-Linear Association    37.431       1       .000
N of Valid Cases                40

a. 1521 cells (100.0%) have an expected count less than 5. The minimum expected count is .03.


DECISION
Since the Pearson chi-square statistic is 1.520E3 with df = 1444 and p = 0.080, which is greater than alpha = 0.05, the null hypothesis is not rejected: there is no significant difference between the morbidity cases of malaria for under-five children and those treated and survived.
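The tail probability behind this decision can be recomputed from the reported statistic alone; a minimal sketch in Python, taking the statistic and degrees of freedom from Table 2:

from scipy.stats import chi2

# Pearson statistic and degrees of freedom as reported in Table 2.
value, df = 1520.0, 1444
p = chi2.sf(value, df)  # upper-tail probability, approximately 0.08
print(f"p = {p:.3f}")
print("reject H0" if p < 0.05 else "fail to reject H0 at alpha = 0.05")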
CONCLUSION
In 2010, 2627 cases of morbidity were reported, followed by a 9% increase in 2011 with 2865 reported cases.
In 2012, there was a rapid increase of 42% in reported morbidity cases, with 4072 cases, more than one and a half times the morbidity cases of 2010. This is alarming and worrisome, and it can be observed that the rate of morbidity among under-five children in Kenema District was rising every year.
In 2010, 2538 cases were reported to have survived after treatment out of 2627 cases of morbidity. In 2011, there were 2758 reported cases of under-five children who survived after treatment, a 9% increase. So from 2010 to 2011 the chances of recovery were increasing at the same rate as morbidity.
In 2012, 3897 under-five children were reported to have survived out of 4072 morbidity cases, a 40% increase in the chances of survival for under-five children in the Kenema Government District Hospital. Again, from 2011 to 2012 the chances of survival for children grew at almost the same rate as the morbidity cases. Moving toward 2013, there were clear indications that the rate of morbidity may increase at a rate equal to the rate at which children are treated and recovered.
In 2010, 89 death cases were reported out of 2627 morbidity cases, and in 2011, 107 death cases were reported out of 2865 reported morbidity cases, an increase of 20% from 2010 to 2011.
In 2012, 175 mortality cases were reported out of 4072 reported morbidity cases, a 64% increase in mortality cases from 2011 to 2012.
From the analysis, the morbidity, mortality and recovery (survival) cases move along the same trend. This was manifested in the graph of morbidity plotted against treated and recovered (survived): the graph portrays a linear trend, which implies that the more morbidity cases are reported, the more deaths occur and the more children are treated and recovered.
The alarming rate at which the disease is increasing leaves no doubt that malaria is one of the leading causes of mortality and morbidity among children in Sub-Saharan Africa, from which Sierra Leone is not excluded.
High rates of morbidity from malaria among under-five children occurred between June and October in all the years considered, a manifestation of the rainy season.
There is no significant difference between the morbidity cases and those treated and recovered (survived), which implies that the many challenges faced by the hospital management were overcome. These challenges may take the form of the number of beds available, the number of qualified doctors and nurses, and the use of appropriate and available drugs for the treatment of malaria. However, in the midst of these challenges, the management was able to treat and recover a large number of cases, though more morbidity cases were waiting in queue.























Environmental Contamination By Radionuclides And Heavy Metals Through The Application Of Phosphate Rocks During Farming And Mathematical Modeling Of Their Impacts To The Ecosystem
Meserecordias W. Lema^1, Jasper N. Ijumba^1, Karoli N. Njau^1, Patrick A. Ndakidemi*^1

ndakidemipa@gmail.com, +255 757744772, P.O. Box 447, NM-AIST, Arusha, United Republic of Tanzania.

Abstract: Most rock phosphates contain radioactive elements and heavy metals because they originate from phosphate deposits. The application of these rock phosphates may result in the transfer of these dangerous materials into the ecosystem. Once these dangerous minerals become readily available for plant uptake and animal consumption, negative impacts may prevail for both plants and animals, especially human beings. This review focuses on the environmental contamination by radioactive elements and heavy metals as a result of the application of rock phosphates during farming and on the need to develop a mathematical model that can be used to predict the associated impacts to the ecosystem.
Keywords: radioactivity, heavy metal, ecosystem, phytotoxic, rock phosphates, soil, plant.
INTRODUCTION
Research [1] has revealed that the main factor limiting plant growth in highly weathered tropical acidic soils is phosphorus (P). These types of soils are characterized by low total and available phosphorus content and high phosphate retention capacities [2], [3]. They also have low P availability to crops and a high capacity for fixing phosphorus, and therefore phosphorus deficiency is a major constraint to crop production [4]. Research conducted in several areas, including Europe, America, Asia and Africa, has revealed that phosphorus deficiency is a widespread fertility constraint in many acidic and calcareous soils [5], [6], [7], [8], [9]. Furthermore, it is estimated that 50% of all cultivated acidic soils in the United Republic of Tanzania (URT) are highly deficient in phosphorus [10].

From 1950 to date, the application of plant nutrients such as rock phosphates to nutrient-deficient soils has increased substantially [8]. All over the world, farmers are advised to use phosphatic fertilizers, including rock phosphate (RP), to increase crop production and improve nutrient availability in unfertile, P-deficient soils. However, the main source of phosphate fertilizers for small-scale farmers, especially in rural areas, is naturally occurring phosphate deposits. For example, in East Africa, Minjingu phosphate rock is commonly applied directly to crops during farming [11], [12], [13].

Despite their positive value in enhancing crop productivity, research [9], [14], [15], [16], [17] has revealed that phosphate rocks contain substantial concentrations of uranium, thorium, radium and their decay products. It has also been estimated that when these phosphate rocks are applied to unfertile fields during farming, they could raise radioactivity levels in soils [18], [19], [20]. Furthermore, several studies [21], [22], [23], [24] conducted in different parts of the world have shown that phosphate rocks contain substantial amounts of heavy metals and rare earth metals. These phosphate rocks have also been identified as being among the sources of heavy metal pollution of air, soil, water, plants and animals, through the soil-plant-man chain [23], [24].

The transfer of natural radionuclides and heavy metals from the RP through the biosphere is an important subject of study considering their presence, persistence and effects on the natural ecosystem [25]. Soil-plant-man is recognized as one of the major pathways for the transfer of radionuclides to human beings [26]. Contamination of cultivated lands by trace metals and naturally occurring radioactive materials caused by the application of these rock phosphates may become a potential threat to human beings and animals [27]. The radionuclides accumulated in arable soil can be incorporated metabolically into plants and ultimately transferred into the bodies of animals (including humans) when contaminated foods are consumed.

The radionuclides accumulated in different plant parts may be consumed by human beings or animals in the form of food and finally accumulate in different organs of their bodies. Accumulation of radionuclides in human bodies may be harmful if the maximum dose is exceeded [28], [29], a situation which may cause serious health problems. For example, when ^226Ra is deposited in bone tissue, it has a high potential for causing biological damage through continuous irradiation of the human skeleton over many years and may induce bone sarcoma [30]. On the other hand, leaching of these radioactive minerals is another source of dissemination and possible transfer to waters and finally to human beings and animals [16].

Likewise, heavy metals are toxic, especially to plant and animal health. In plants, the accumulation of heavy metals beyond established phytotoxic levels may cause growth abnormalities such as alterations in the germination process, leaf chlorosis or death of the whole plant [22], [31], [33], [34]. Higher concentrations of heavy metals in soil may further pose health risks to animals, especially human beings, through the soil-plant-man pathway, the water-man pathway or direct contact. Excessive concentrations of some heavy metals in human beings are highly dangerous to human health and may even cause death [31]. For example, heavy metals such as cadmium, nickel and arsenic are known to be major causes of different types of cancer in human beings [52].

The transfer of nutrients through the soil-plant-man pathway has been described mathematically by some authors through mathematical modeling [35], [36], [37], [38]. In this context, mathematical modeling is a vital tool for investigating the impact of variations in different parameters of the farming environment on plant mineral uptake, especially of radioactive elements such as uranium and heavy metals such as cadmium that may in one way or another cause negative effects to plants. Soil and nutrient properties, through their influence on nutrient diffusion rates in the soil, may play a key role in determining the outcome of plant competition for nutrients [38].
AIM OF THIS REVIEW
The main aim of this paper is to provide a review of the environmental contamination by radioactive elements and heavy metals resulting from the application of rock phosphates during farming, and of the mathematical modelling of the associated impacts to the ecosystem. This review is intended to gather useful information on the topic as highlighted by other authors and researchers, as well as to identify research gaps that need attention through further research and experimentation.
A. RADIONUCLIDES AND HEAVY METALS CONTAMINATION IN SOIL AND PLANTS AND THEIR ASSOCIATED HEALTH RISKS
Research has revealed that phosphate deposits contain a wide range of heavy metals, i.e. Hg, Cd, As, Pb, Cu and Ni [21], [22], [23], [24], and naturally occurring radioactive materials (NORM), i.e. U, Th and K [9], [14], [15], [16], [17], [30]. These phosphate ores can be used directly (without any industrial processing) to increase the fertility of many unfertile soils of the world. Several studies conducted on rock phosphates to investigate the amounts of heavy metals and NORM also revealed substantial concentrations of both heavy metals [21], [22], [23], [24] and NORM [39], [40], [41], [42].

Although it is common for farmers to use rock phosphates to increase crop production and improve nutrient availability in unfertile soils [14], it is very unfortunate that, to date, there are no standards for the acceptable concentrations of heavy, toxic or radioactive minerals below which rock phosphates are safe for agricultural use [26], [28], [29], [43]. This raises a rather challenging concern about the use of these rock phosphates during farming, as their distribution and health effects might be significant for our natural environment. In light of that, it is very important to undertake studies on the levels of contamination caused by the application of such rock phosphates in agricultural soils and on the transfer of dangerous minerals (radionuclides and heavy metals) to plants and animals. This will serve two purposes: one is to determine the level of toxicity these elements may pose to plants and animals (including human beings), and two is to establish minimum limits for such elements in rock phosphates for safe application during farming.

a) In soil
i. Radionuclides
Research [25], [39], [40], [44] has shown that the use of rock phosphates during agricultural activities is one of the mechanisms through which significant amounts of radioactive materials, i.e. uranium, thorium, potassium, radium and its decay products, are redistributed throughout the agricultural soils of the world. Although radioactivity measurements in soil may yield different results from one soil type to another, research [5] has shown that one of the sources of radioactivity in soils, apart from those of natural origin, is the extensive use of rock phosphates during farming.

In 1968, Menzel [40] undertook a study to measure the uranium, radium and thorium content of phosphate rocks and their possible radiation hazards in Florida, United States of America (USA). This study showed that phosphate rocks in Florida contain substantial amounts of uranium, radium and thorium. Furthermore, another study was undertaken in Jordan and Pakistan by Tufail et al. in 2006 [46] to measure radioactivity levels in rock phosphates. Results from this study showed that the activity mass concentrations of ^238U (^226Ra) in the rock phosphates (428 ± 11 Bq kg^-1 in Jordan and 799 ± 10 Bq kg^-1 in Pakistan) were higher than the world average ranges [47].

Furthermore, another study was undertaken in Brazil by Saueia and Mazzilli [42] in 2006 to study the distribution of natural radionuclides in the production and use of phosphate fertilizers in agriculture. One of the main components of the study was to predict the increase in the concentrations of radionuclides in the soil as a result of the application of phosphate fertilizers during farming. The
study used the following mathematical formula to estimate the increase in activity concentrations in the soil:

C_{s,i} = (C_{f,i} · a_f) / (P · λ_{E,i}) · (1 − exp(−λ_{E,i} · t_d))

Whereby;
C_{s,i} = activity concentration of radionuclide i in soil (Bq/kg)
C_{f,i} = activity concentration of radionuclide i in fertilizer (Bq/kg)
a_f = surface application rate of fertilizer (kg/m^2/y)
λ_{E,i} = effective rate constant for reduction of the activity concentration of radionuclide i in the root zone of soil (1/y); λ_{E,i} = λ_i + λ_s
λ_i = rate constant for radioactive decay of radionuclide i (1/y)
λ_s = rate constant for reduction of the concentration of material deposited in the root zone of soils owing to processes other than radioactive decay (1/y)
t_d = time of deposition in soil (y)
P = surface density of the effective root zone in soil (kg/m^2)
Bq = becquerel
y = year
It was found that there was an increase of up to 0.87 Bq/kg for grain crops and 7.6 Bq/kg for green crops.
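As an illustration of how this buildup formula behaves, the short sketch below evaluates it for one radionuclide. All parameter values are hypothetical placeholders chosen for demonstration, not values taken from the Brazilian study.

import math

def soil_activity_increase(c_f, a_f, lam_e, t_d, p):
    """Increase in soil activity concentration C_s,i (Bq/kg) after t_d years
    of applying fertilizer with activity c_f (Bq/kg) at rate a_f (kg/m^2/y),
    for a root-zone surface density p (kg/m^2) and an effective removal
    constant lam_e (1/y)."""
    return (c_f * a_f) / (p * lam_e) * (1.0 - math.exp(-lam_e * t_d))

# Hypothetical example: 500 Bq/kg fertilizer applied at 0.05 kg/m^2/y for
# 20 years, a 260 kg/m^2 root zone and an effective constant of 0.1 1/y.
print(soil_activity_increase(c_f=500.0, a_f=0.05, lam_e=0.1, t_d=20.0, p=260.0))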

Another study, by Wassila and Ahmed [9], was undertaken in 2011 to measure radioactivity levels in soil and phosphate fertilizers in Algeria. In this study, both virgin and fertilized soil samples from Setif, Algeria, were collected. The results showed a significant increase in radionuclides in the fertilized soils compared with the virgin soils. However, the measured concentrations of the radionuclides K, U, Th and Ra were within the world average ranges [47].

From a research point of view [9], [25], [39], [40], [42], [44-47], it is evident that significant amounts of radionuclides are distributed in farm soils as a result of the application of phosphate fertilizers during agricultural activities. Very little is known about the distribution of such dangerous minerals in the farm soils of the East African region, including Tanzania, as a result of the application of rock phosphates, especially those locally mined from the Minjingu Phosphate Deposit (MPD). Based on this fact, there is a great need to undertake further studies to measure the concentrations of radioactive elements in Tanzanian farm soils resulting from the application of locally available rock phosphates. This will provide good information to environmental stakeholders and may trigger the establishment of minimum allowable limits for such elements in rock phosphates by relevant organs, i.e. the World Health Organization (WHO) and the Food and Agriculture Organization (FAO).

ii. Heavy metals
Research [21], [22], [23], [40] has also shown that continuous application of rock phosphates in farm soils could increase the concentration of heavy metals to levels above their natural abundances in soils. According to one study [48], concern about heavy metals in soil applies mainly to acidic soil with low cation exchange capacity and low phosphorus fertility which is fertilized with phosphate rock fertilizers. Research [49] has revealed that normal agricultural practices do not have a significant impact on the heavy metal content of farm soils, but that the use of rock phosphates could, in the long run, cause dangerous heavy metals to accumulate in them. A review [50] by Jiao et al. in 2012 on the environmental risks of trace elements associated with long-term P-fertilizer application concluded that the application of P-fertilizers can significantly contribute potentially dangerous heavy metals and trace elements, i.e. arsenic, cadmium and lead, to farm soils.

In 2001, Abdel-Haleem et al. undertook a study [21] aimed at determining the elemental pattern in phosphate ingredients/raw materials (rock phosphate, limestone and sulfur) as well as in the produced phosphate fertilizer. In this study it was revealed that there
was an elevated content of the heavy metals Fe, Zn, Co, Cr and Sc, as well as of the rare earth elements La, Ce, Hf, Eu, Yb and Sm, in all the phosphate-related materials investigated. Another study, by Giuffré et al. in 1997 [22], determining the concentrations of chromium, cadmium, copper, zinc, nickel and lead in commonly used fertilizers in Argentina, revealed that rock phosphate contained the highest levels of cadmium, zinc and chromium. The study also concluded that continuous application of P-fertilizer in farm soils increased the concentration of heavy metals to levels above their natural abundances, and recommended that special attention be paid to the transfer of these metals to the human food chain.

Another study, by Javied et al. in 2008 [24], investigating heavy metal pollution from phosphate rock used for the production of fertilizer in Pakistan, revealed that phosphate rock is among the sources of heavy metal pollution of air, soil, water and the food chain. The study therefore concluded that heavy metals need to be removed from the rock prior to its use, as the presence of such dangerous heavy metals in phosphate rock may cause detrimental effects to both humans and plants. A field trial to determine the ideal combination of rock phosphate (RP) and cow dung fertilizer with respect to heavy metal contamination of soil and crops was undertaken in Nigeria by Awotoye et al. [23] in 2010. The results reported an increase in the levels of Pb, Zn, Cu and Cd in the measured soils. In 2008, research by Nziguheba and Smolders [51] was undertaken in 12 different European countries to establish the concentrations of trace elements in agricultural soils resulting from the application of phosphate fertilizers. The research concluded that mineral P-fertilizers are one of the major sources of heavy metal accumulation in agricultural soils.

The review presented above [21-24], [40], [51] raises a sound concern about the presence of dangerous heavy metals in farm soils where rock phosphates have been applied to increase the agricultural productivity of croplands. The studies conducted in East Africa, and in Tanzania in particular, where Minjingu phosphate rock is mined and used, do not provide satisfactory information on the concentrations of these toxic elements in farm soils where rock phosphates have been applied for agricultural purposes. It therefore seems very important to undertake studies that will provide better information on the amounts of heavy metals distributed to farm soils as a result of the application of local phosphate rocks. This information will be vital to environmental stakeholders and may assist in the process of establishing critical levels for such elements in phosphate rocks.

iii. Health impacts to animals, humans and plants
The presence of radioactive elements, i.e. K, U, Th and Ra, as well as heavy metals, i.e. Cd, Pb, Ni and As, in agricultural soils is associated with negative impacts to the ecosystem [21], [25], [31-34]. Danger may be posed to animals, human beings and plants [28-29], [31], [34]. Detrimental effects may be caused either directly or indirectly, through different pathways, i.e. direct ingestion/inhalation, drinking of contaminated water, contact with contaminated soil and the food chain [21], [25], [52].

The transfer of radionuclides from farm soils to human bodies may be harmful if the maximum dose is exceeded [28-29], [47], a situation which may cause serious health problems. For example, once radionuclides accumulate in human body tissues at levels higher than the standard limit [43], they may cause severe health problems such as cancer [22], [27]. Also, when ^226Ra is deposited in bone tissue, it has a high potential for causing biological damage through continuous irradiation of the human skeleton over many years and may induce bone sarcoma [30].

Heavy metal contamination in soil may pose risks and hazards to human beings. Excessive concentrations of some heavy metals in biological systems, especially animals (human beings in particular), are highly dangerous to health and may even cause death [31]. For example, heavy metals such as cadmium, nickel and arsenic are carcinogenic. Table 1 gives a summary of some dangerous heavy metals commonly present in farm soils and their health impacts to human beings [52].

Table 1: Dangerous heavy metals and their health impacts to human beings (as stipulated by Wuana and Okieimen [52])

Heavy Metal Health Impact/s
Pb Mental lapse or even death
Cr Allergic dermatitis
As Skin damage, cancer, affects kidney and central nervous system
Zn Zinc shortages can cause birth defects
Cd Affects kidney, liver and GI tract
Cu Anaemia, liver and kidney damage, and stomach/intestinal irritation
Hg Kidney damage
Ni Various kinds of cancer

Soils contaminated with heavy metals may also cause health impacts to plants [31], [34]. In plants, the accumulation of heavy metals beyond established phytotoxic levels may cause growth abnormalities such as alterations in the germination
process, leaf chlorosis or death of the whole plant. Table 2 gives a summary of the toxic limits (concentration, mg/kg) of a few heavy metals in soil and their health impacts to plants [31], [32], [33].


Table 2: Toxic levels of some heavy metals in soil and their health impacts to plants

Heavy Metal   Phytotoxic limit in soil (mg/kg)   Health impacts to plants                                                                Reference
Cd            4                                  Chlorosis, necrosis, purple coloration                                                  [31]
Pb            50                                 Dark-green leaves                                                                       [31]
Ni            30                                 Decrease in leaf area, chlorosis, necrosis and stunting                                 [33]
Cr            1                                  Alterations in the germination process, stunted growth, reduced yield and mutagenesis  [31]
Zn            50                                 Stunting and reduction of leaf elongation                                               [32]
Cu            100                                Chlorosis, yellow coloration, inhibition of root growth and less branched roots        [33]
Fe            100                                Dark green foliage, thickening of roots, brown spots on leaves                          [31]
Mn            300                                Marginal chlorosis and necrosis of leaves, crinkled leaves                              [31]

Reviews [21-22], [25], [27-29], [30-34], [43], [47], [52] of the health impacts associated with the application of rock phosphates to farm soils have shown that the presence of heavy metals in soil brings more negative impacts to plants than to animals. This is because plants accumulate these dangerous minerals directly into their systems through the roots during mineral uptake for nutritional purposes. In this way, plants are more affected than animals, whose direct contamination from soil is mainly through exposure. It is therefore very necessary to study the uptake and distribution of these heavy metals in plants and to measure the concentrations of these toxic elements resulting from the application of rock phosphates to farm soils. This will help to identify which elements are taken up by plant species in excessive amounts from the locally available rock phosphates. The information may be used by environmental planners and activists to call for restriction or control of these elements in rock phosphates and hence minimise negative impacts to plants.

b) In plants
i. Radionuclides
The study of the transfer mechanisms and plant mineral uptake of natural radionuclides such as ^238U and other dangerous heavy metals through our natural environment is of high importance given their complexity in terms of existence and persistence [25], [48]. Mineral uptake by plants is usually described by the concentration ratio (CR), sometimes known as the transfer factor (TF) [53]. CR is calculated by dividing the mineral concentration in the plant by the mineral concentration in the soil. The most dominant factor in mineral nutrient acquisition by plants is the root surface properties [37]. These include root size (and its increase with time), nutrient inflow into the roots as related to the nutrient concentration in the soil solution near the root surface (this incorporates both kinetic and plant demand factors), and nutrient transport in the soil by convection or diffusion.
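Since the CR/TF is a simple ratio, it is straightforward to compute; the short sketch below illustrates the calculation with hypothetical concentrations (the nuclide values are placeholders, not measurements from the cited studies).

def concentration_ratio(c_plant, c_soil):
    """Transfer factor: mineral concentration in the plant divided by the
    concentration in the soil, as defined in the text."""
    return c_plant / c_soil

# Hypothetical dry-weight activity concentrations (plant, soil) in Bq/kg.
samples = {"U-238": (1.2, 40.0), "Th-232": (0.6, 35.0), "Ra-226": (2.5, 30.0)}
for nuclide, (plant, soil) in samples.items():
    print(f"{nuclide}: CR = {concentration_ratio(plant, soil):.3f}")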

Dreesen et al. in 1978 [54] undertook a study to investigate contaminant transport, revegetation and trace element behaviour at inactive uranium mill tailings piles in the USA. In this study, among other objectives, the uptake of toxic trace elements and radionuclides by vegetation growing on soil-covered tailings was examined as a mechanism of contaminant transport. The results showed that the uptake of trace elements and radionuclides may constitute a significant contaminant transport mechanism, particularly from covered tailings areas. Measurements of As, Se and Ra showed that the uptake of radionuclides by plants depends on the plant species, the type of radionuclide and the substrate characteristics.

In 1985, Rumble and Bjugstad [56] undertook a study to measure the concentrations of uranium and radium in plants growing in soils around uranium mill tailings in South Dakota, USA. It was observed that plants growing at the study sites had elevated levels of uranium and radium compared with control sites. It was also observed that the amount of radionuclides taken up by plants from soils depends on the radionuclide form, the soil moisture and the chemical and mineralogical composition of the soil.

In 2003, another study, by Pulhani et al. [25], investigated the uptake and distribution of natural radioactivity in wheat plants from soil in India. The uptake of uranium, thorium, radium and potassium by wheat plants from two morphologically different types of soil was studied under natural field conditions. Transfer factors were calculated and used to study the uptake of essential and non-essential elements by plants. It was observed that the availability of calcium and potassium in the soil for uptake affects the uranium, thorium and radium content of the plant. The availability of these radionuclides in soil for plants was
also observed to be hindered by the fact that the illite clays of alluvial soil trap potassium in their crystal lattice and that phosphates form insoluble compounds with thorium. It was also observed that a major percentage (54-75%) of the total ^238U, ^232Th and ^226Ra activity in the plant is concentrated in the roots.

Mlwilo et al. [20] undertook a study in 2006 to measure the radioactivity levels of staple foodstuffs and dose estimates for most of the Tanzanian population. Staple food products, including maize and rice, from various localities of the United Republic of Tanzania were measured to establish the activity levels of ^40K, ^232Th and ^238U. The results showed that one type of foodstuff (maize) contained relatively high average concentrations of the measured radionuclides, attributed to the extensive use of rock phosphates during maize farming.

Research [20], [25], [37], [48], [53-56] has shown that significant amounts of radionuclides are distributed into foodstuffs as a result of the application of rock phosphates to farm soils during agricultural activities. Information about this phenomenon in East Africa, and in Tanzania in particular, where different forms of phosphate rock fertilizer are used by farmers, is almost non-existent. Studies of the same nature should be undertaken in these areas to determine the actual levels of radioactive elements present in plant species as a result of the application of rock phosphates. This will provide good information on the extent of the danger to which plant species in Tanzania are exposed through the application of rock phosphates to farm soils.

ii. Heavy metals
A study by Mortvedt and Beaton [48] in 1995 investigating heavy metal and radionuclide contaminants in phosphate fertilizers indicates significant differences among plant species in their ability to take up different heavy metals supplied through the application of phosphate fertilizers. The toxic impacts of heavy metals within a plant system are associated with their accumulation in different plant tissues [31]. For instance, studies [57-58] have shown that different plants accumulate certain heavy metals at different concentrations in their leaves, stems and roots, and that the critical levels vary among species (see Table 2).

In 1974, Bazzaz et al. [59] conducted a study investigating the effect of heavy metals on plants in Illinois, USA. The measured heavy metals were Pb, Ni, Cd and Tl, and the plant used was sunflower (Helianthus annuus L.). The study revealed that relatively low concentrations of Pb, Cd, Ni and Tl inhibited photosynthesis and transpiration of detached sunflower leaves. The primary mode of action is interference with stomatal function, which reduces photosynthesis by 50% when the leaf tissue concentration (in ppm) is 63, 96, 193 and 79 for Tl, Cd, Pb and Ni respectively.

In 1995, Kumar et al. [60] undertook a study investigating the use of plants to remove heavy metals from soils. This study was motivated by previous research that had shown the ability of some wild plants grown on metal-contaminated soil to accumulate large amounts of heavy metals. Various crop plants were compared for their ability to accumulate the heavy metals Pb, Cr, Cd, Ni, Zn and Cu. It was found that Brassica juncea (L.) Czern accumulated large amounts of these heavy metals in both roots and shoots, and the plant was therefore identified as a favoured phytoextraction agent.

Furthermore, Wenzel and Jockwer in 1999 [61] undertook research on the accumulation of heavy metals in plants grown on mineralized soils of the Austrian Alps. A field survey of higher terrestrial plants growing on eighteen metalliferous sites of the Austrian Alps was conducted to identify species that accumulate exceptionally large concentrations of selected heavy metals (Cd, Cu, Ni, Pb and Zn). Several plant species (Minuartia verna, Biscutella laevigata, Thlaspi rotundifolium ssp. cepaeifolium, Cardaminopsis halleri and Thlaspi goesingense) were found to contain elevated levels of heavy metals and were therefore categorised as hyperaccumulators.

Awotoye et al. in 2010 [23] conducted field experiments to determine the ideal combination of rock phosphate (RP) and cow dung fertilizer with respect to heavy metal contamination of soil and crops. The plants used in these experiments were maize (Zea mays L.) and okra (Abelmoschus esculentus). Results showed that the application of RP in combination with various levels of cow dung elevated the Pb, Zn and Cu content in maize tissue relative to the control. Only Cu and As were found in excessive amounts in okra.

The review of heavy metals in plants [23], [31], [48], [57-61] has revealed that potentially dangerous heavy metals are found in plants grown on farm soils where the use of phosphate fertilizers is extensive. In East Africa, and in Tanzania in particular, few studies have so far investigated the amounts of dangerous heavy metals present in plant species grown on farm soils supplied with raw phosphate rock fertilizers or fertilizers of similar origin. It is therefore very important to quantify the amounts of these dangerous elements in plants so as to limit the transfer of such minerals to animals, especially human beings, through the consumption of plant food products. Furthermore, these dangerous minerals may be one of the reasons for the under-productivity of certain plant species, so it is of high importance to investigate their presence in plants.

iii. Health impacts
One of the major pathways through which radioactive materials are transferred from farm soils to human beings is soil-plant-man [26]. In this regard, plants play a key role in the transfer of radionuclides from soil to human beings. Other pathways include direct inhalation of contaminated air, drinking of contaminated water and direct contact with contaminated soils [21], [25], [53]. Many studies conducted in different areas of the world [20], [25], [48], [52] measuring dose rates due to the intake of radionuclides by human beings through the consumption of plant foodstuffs have shown concentrations lower than the doses recommended for the general public [26], [28-29], [62-65].

On the other hand, several studies [66-69] have shown that heavy metals in foodstuffs derived from plants are associated with health hazards to human beings. For example, research was conducted in Tianjin, China, by Wang et al. in 2005 [66] to study the health effects of the heavy metals Cu, Zn, Pb, Cd, Hg and Cr on the general public through the consumption of vegetables and fish. The results showed that consuming both vegetables and fish may lead to potential health risks, especially for children, because the target hazard quotients (THQ) of the two foodstuffs sum to more than the recommended value [29], [62], [64-65]. The health risk to adults from consuming both vegetables and fish was mainly associated with Cd.

Another study was conducted by Zheng et al. in 2007 [67] to investigate the health risks associated with Hg, Pb, Cd, Zn and Cu to the inhabitants around the Huludao zinc plant in China through the consumption of vegetables. The THQs for Cd and Pb were found to be higher than the recommended value [29], [62], [64-65] for both adults and children, which may lead to potential health risks since the THQs for these two heavy metals exceed the maximum allowed limits in foodstuffs.
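The THQ itself is not defined in this paper; a commonly used formulation (following the US EPA risk-assessment convention) divides the estimated chronic daily intake of a metal by its oral reference dose. The sketch below is a minimal illustration under that assumption, with all exposure parameters and concentrations as hypothetical placeholders.

def target_hazard_quotient(c_food, intake_rate, ef, ed, rfd, body_weight, at_days):
    """THQ under the common US EPA formulation (assumed here, not taken from
    this paper): chronic daily intake divided by the oral reference dose.
    c_food in mg/kg, intake_rate in kg/day, rfd in mg/kg/day."""
    cdi = (c_food * intake_rate * ef * ed) / (body_weight * at_days)
    return cdi / rfd

# Hypothetical Cd exposure through vegetables for an adult:
thq = target_hazard_quotient(
    c_food=0.15,        # Cd in vegetables, mg/kg (placeholder)
    intake_rate=0.3,    # kg of vegetables per day (placeholder)
    ef=365,             # exposure frequency, days/year
    ed=30,              # exposure duration, years
    rfd=1e-3,           # oral RfD for Cd, mg/kg/day (commonly cited value)
    body_weight=60.0,   # kg
    at_days=30 * 365,   # averaging time for non-carcinogens, days
)
if thq > 1:
    print(f"THQ = {thq:.2f}: potential non-carcinogenic risk")
else:
    print(f"THQ = {thq:.2f}: below the level of concern")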

In 2008, Khan et al. [68] studied the health risks associated with heavy metals in contaminated soils and in food crops (vegetables) irrigated with wastewater in Beijing, China. The results indicated a substantial build-up of heavy metals in the wastewater-irrigated soils collected from the study sites. Heavy metal concentrations in plants grown at these sites were found to exceed the permissible limits set by the State Environmental Protection Administration (SEPA) [70-71] in China and by the World Health Organization (WHO) [72-73]. However, the health risk index values were less than one, indicating a relative absence of health risks associated with the ingestion of the contaminated vegetables.

In 2009, another study was conducted by Zhuang et al. [69] around the Dabaoshan mine, South China, investigating the health risks from heavy metals (Cu, Zn, Pb and Cd) consumed through food crops, namely rice and vegetables. The results showed that the estimated daily intakes (EDI) and THQs for Cd and Pb in rice and vegetables exceeded the FAO/WHO permissible limits [72-73].

According to these studies [21-22], [25], [27-29], [30-34], [43], [47], [52], a number of investigations have reported heavy metal concentrations above the permissible limits in plant foodstuffs such as rice and vegetables. It is therefore important to study the uptake and distribution of these heavy metals in plants and to measure the concentrations of these toxic elements resulting from the application of rock phosphates to farm soils. This will help to identify which elements are taken up by plant species in excessive amounts from the locally available rock phosphates.
B. MATHEMATICAL MODELING
Plant nutrient uptake from soil depends on the interactions between the plants themselves and the soil [74]. However, the concentration of a particular nutrient available at the root surface in the soil solution dictates the rate of uptake of that nutrient. Research shows that mathematical models of nutrient uptake by plants have been used successfully to investigate the effect of various soil and plant factors on the nutrient flux to plant roots [36]. Most mathematical models that describe the processes involved in nutrient uptake through the root system in soil integrate values for root size (usually length) and its increase with time, nutrient inflow into the roots as related to the nutrient concentration in the soil solution near the root surface (this incorporates both kinetic and plant demand factors), and nutrient transport in the soil by convection or diffusion [35], [37], [38]. Root hairs, which are lateral extensions of epidermal cells, are also involved; they increase the effective surface area of the root system available for water and nutrient uptake [35].

Soil and nutrient properties, through their influence on nutrient diffusion rates in the soil, may play a key role in determining the outcome of plant competition for nutrients [38]. Epstein [75] and Nielsen [76] quantitatively described the relationship between nutrient concentration and its rate of uptake. Another study, by Barber [77], has shown that the transport of nutrients from soil to plant through plant roots is a function of mass flow and diffusion.

In the early 1980s, a mathematical model was developed by S.A. Barber and J.H. Cushman (the Barber-Cushman model) to simulate nutrient uptake by roots [77]. The model assumes that plant roots are evenly distributed in the soil and that nutrient flow in the soil toward the roots can be described by one-dimensional radial flow. The main governing relationships of the model were as follows:
i. The concentration in the liquid phase is linearly related to the concentration on the solid phase, with the buffer power b as the constant of proportionality:

C_s = b · C_l

ii. The flux of a nutrient from one node to the next is described by the combined effect of diffusion (Fick's law) and mass flow (which works on the liquid concentration and therefore needs to be multiplied by b):

J_{i,i+1} = D_e · b · (C_i − C_{i+1}) / Δx + v · C_i

iii. The flux from the innermost node into the root is described by Michaelis-Menten kinetics:

J(r_0) = I_max · C / (K_m + C) − E

where the efflux E is independent of the concentration outside the root; sometimes E is represented by a minimum uptake concentration C_min for which C is corrected.

iv. The flux over the outside boundary is zero:

J(r_1) = 0
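As a minimal sketch of these relationships (assuming the reconstructed forms above; the function and parameter names are illustrative, not taken from the original implementation of the model):

def buffered_solid_conc(c_liquid, b):
    """Item i: solid-phase concentration, linear in the liquid concentration."""
    return b * c_liquid

def internode_flux(c_i, c_next, d_e, b, dx, v):
    """Item ii: diffusive (Fick's law) plus mass-flow flux between two nodes."""
    return d_e * b * (c_i - c_next) / dx + v * c_i

def root_influx(c, i_max, k_m, efflux=0.0):
    """Item iii: net Michaelis-Menten influx at the root surface."""
    return i_max * c / (k_m + c) - efflux

# Hypothetical numbers: liquid concentrations in umol/cm^3, fluxes per unit area.
print(internode_flux(c_i=0.5, c_next=0.3, d_e=1e-7, b=20.0, dx=0.05, v=1e-7))
print(root_influx(c=0.5, i_max=4e-6, k_m=0.02))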


In 1983, Silberbush and Barber [22] used the Cushman simulation model, with its eleven plant and soil parameters, for a sensitivity analysis of the parameters involved in P uptake. The simulation models were verified for P uptake by corn and soybeans. The study revealed that root growth rate and root radius were the most sensitive parameters influencing P uptake from the soil. It was also found that the soil P supply parameters were more sensitive than the root physiological uptake parameters. Furthermore, the P concentration in the soil solution affected P uptake more than the diffusion coefficient and the buffer power did. Reducing the root radius while keeping root volume constant increased P uptake.

Also, in 1986, an important step in the overall process of modelling nutrient uptake from soil was reached when a test was undertaken to compare measured and calculated nutrient depletion next to root surfaces [74]. A mathematical model was first developed based on ion transport from the soil to the roots by mass flow and diffusion and on Michaelis-Menten kinetics of nutrient uptake from the soil solution by plant roots. The model was based on a study by Nye and Marriott [80], which describes the transport of nutrients to the root by mass flow and diffusion:

∂C_l/∂t = (1/r) · ∂/∂r [ r · D_e · (∂C_l/∂r) + (r_0 · V_0 · C_l) / b ]

Where;
C_l = concentration of the soil solution
r = radial distance from the root axis
r_0 = root radius
D_e = effective diffusion coefficient
b = buffer power
V_0 = rate of water uptake
t = time

As described by Barber [80], a comparison was made between the calculated and measured total K uptake by a growing root system under different soil conditions. The results showed that the model is useful for simulating the uptake of available soil nutrients by plants.
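A numerical illustration of this transport equation is easy to sketch. The following is a minimal explicit finite-volume scheme for the diffusive part only (the mass-flow term is omitted for brevity), with Michaelis-Menten uptake imposed at the root surface; all parameter values are hypothetical placeholders, and this is not the authors' implementation.

import numpy as np

# Radial depletion of a nutrient around a single root (diffusion only).
r0, r1 = 0.02, 0.20            # root radius and no-flux outer radius (cm)
de, b = 1e-7, 20.0             # effective diffusion coeff. (cm^2/s), buffer power
i_max, k_m = 4e-9, 2e-5        # Michaelis-Menten parameters (mol/cm^2/s, mol/cm^3)

n = 60
r = np.linspace(r0, r1, n)     # cell centres
dr = r[1] - r[0]
c = np.full(n, 5e-5)           # initial soil-solution concentration (mol/cm^3)

dt = 0.2 * dr * dr / de        # explicit stability limit for the liquid phase
for _ in range(int(3 * 24 * 3600 / dt)):          # simulate roughly 3 days
    # G = -r * (radial flux of the buffered nutrient); diffusive part only.
    r_face = 0.5 * (r[:-1] + r[1:])
    g = r_face * de * b * (c[1:] - c[:-1]) / dr   # interior cell faces
    uptake = i_max * c[0] / (k_m + c[0])          # Michaelis-Menten root influx
    g_in, g_out = r0 * uptake, 0.0                # root face and no-flux outer face
    flux_div = np.diff(np.concatenate(([g_in], g, [g_out])))
    c += dt * flux_div / (b * r * dr)             # update liquid concentration

print(f"concentration at root surface: {c[0]:.2e} mol/cm^3 (bulk {c[-1]:.2e})")

Running this shows the depletion zone that develops next to the root surface, the quantity that the 1986 test compared against measurements.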

In 2000, a study by Adhikari and Rattan [81] used the Barber-Cushman mechanistic nutrient uptake model to describe and predict nutrient uptake by crop plants at different stages of growth. The aim of the study was to compare the predicted Zn uptake at different stages of growth with the measured Zn uptake by rice cultivars grown on a sandy loam soil under greenhouse conditions. At the end of the experiments, the predicted Zn uptake was significantly correlated with the observed uptake (r^2 = 0.99).

In 2003, the Barber-Cushman mechanistic P uptake model was used to examine the predictability of phosphorus uptake in maize plants [82]. This study was undertaken in South Dakota, USA, and its primary goal was to examine how phosphorus (P) fertilizer applied to a silt loam soil affected the predictability of P uptake in maize. Results showed that the model predicted 86-90% of
the observed P uptake. This shows that mathematical models have become vital tools in estimating mineral uptake by plants.

Most studies [22], [35-38], [74-77], [80-81] have shown that the existing mathematical models integrate values for root size (usually length) and its increase with time, nutrient inflow into the roots as related to the nutrient concentration in the soil solution near the root surface, and nutrient transport in the soil by convection or diffusion. There is a need to develop a mathematical model that can use input parameters, i.e. soil type (acidity, organic matter content, cation exchange capacity), type of fertilizer, amount of fertilizer, time of application of the fertilizer and amount of water, to predict the amount of a radionuclide or heavy metal to be taken up by a given plant species, so that precautionary measures against contamination and health risks can be taken.
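To make the proposal concrete, the sketch below outlines what the interface of such a predictive model might look like: the parameter set mirrors the inputs listed above, and the internal calculation simply chains the soil buildup formula and a transfer factor from earlier sections. Everything here (names, coefficients, structure) is a hypothetical illustration, not an implemented model.

import math
from dataclasses import dataclass

@dataclass
class FarmingScenario:
    # Input parameters proposed in the text (all values illustrative).
    soil_ph: float                 # soil acidity
    organic_matter_pct: float      # organic matter content
    cec: float                     # cation exchange capacity (cmol/kg)
    fertilizer_conc: float         # element concentration in fertilizer (Bq/kg or mg/kg)
    application_rate: float        # kg/m^2/y
    years_applied: float
    removal_constant: float        # effective loss constant in the root zone (1/y)
    root_zone_density: float       # kg/m^2
    transfer_factor: float         # plant/soil concentration ratio for the element

def predict_plant_concentration(s: FarmingScenario) -> float:
    """Soil buildup (Saueia and Mazzilli-type formula) chained with a CR/TF.
    A sketch only: the effects of pH, organic matter and CEC on the transfer
    factor are not modelled here and would need empirical calibration."""
    soil_conc = (s.fertilizer_conc * s.application_rate) / (
        s.root_zone_density * s.removal_constant
    ) * (1.0 - math.exp(-s.removal_constant * s.years_applied))
    return s.transfer_factor * soil_conc

scenario = FarmingScenario(soil_ph=5.2, organic_matter_pct=2.0, cec=8.0,
                           fertilizer_conc=500.0, application_rate=0.05,
                           years_applied=20, removal_constant=0.1,
                           root_zone_density=260.0, transfer_factor=0.05)
print(predict_plant_concentration(scenario))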

CONCLUSION
This review has shown that the application of rock phosphates during farming may be a major source of radionuclide and heavy metal contamination of farm soils, plants and animals beyond that of natural origin. Several studies have recommended the establishment of allowable limits for both radionuclides and heavy metals in rock phosphates for safe application during farming. It has also been recommended that, whenever possible, radioactive elements and heavy metals be removed before rock phosphates are used for farming. Furthermore, it has been recommended that studies investigating the levels of these dangerous elements (radionuclides and heavy metals) in the farm soils of the Eastern African region resulting from the application of rock phosphates during farming be undertaken for environmental purposes. It has also been revealed that the existing mathematical models are not sufficient to predict the outputs, i.e. mineral concentrations, when input parameters such as soil parameters and fertilizer type are given.

ACKNOWLEDGEMENT
Special thanks to the Nelson Mandela African Institution of Science and Technology (NM-AIST) and the Commission for Science and Technology (COSTECH) of Tanzania, which supported this study.

REFERENCES:

[1] Rajan SSS, Watkinson JH, Sinclair AG. Phosphate rocks for direct application to soils. Advances in Agronomy. 1996; 57:77-159.
[2] Friesen DK, Rao IM, Thomas RJ, Oberson A, Sanz JL. Phosphorus acquisition and cycling in crop and pasture systems in low fertility tropical soils. Journal of Plant Nutrition and Soil Science. 1997; 196:289-294.
[3] Borggaard OK, Elberling B. Pedological Biogeochemistry. Royal Veterinary and Agricultural University and University of Copenhagen, Copenhagen, 2004.
[4] Mokwunye U. Can Africa feed more than 40% of its population in 2005? 2000. Accessed: 2 August 2013. Available: http://www.zef.de/gdialogue/Program/Paper day1/Sid-soil-Mokwunye.pdf
[5] Szilas C. The Tanzanian Minjingu phosphate rock-possibilities and limitations for direct application. PhD Thesis, Royal
Veterinary and Agricultural University, Copenhagen, 2002.
[6] Mnkeni PNS, Semoka JMR, Buganga BBS. Effectiveness of Minjingu phosphate rocks as a source of phosphorus for maize in some soils of Morogoro, Tanzania. Zimbabwe Journal of Agricultural Research. 1991; 29:27-37.
[7] Ibrikci H, Ryan J, Ulger AC, Buyuk G, Cakir B, Korkmaz K, Karnez E, Ozgenturk G, Konuskan O. Maintenance of phosphorus
fertilizer and residual phosphorus effect on corn production. Nutrient Cycling in Agroecosystems. 2005; 72(3):279-286.
[8] Ashraf EMK, Al-Sewaidan HA. Radiation exposure due to agricultural uses of phosphate fertilizers. Journal of Radiation Measurements. 2008; 43:1402-1407.
[9] Wassila B, Ahmed B. The radioactivity measurements in soils and fertilizers using gamma spectrometry technique. Journal of
Environmental Radioactivity. 2011; 102:336-339.
[10] Semoka JMR, Kalumuna M. Potential and constraints of using rock phosphate for crop production in Tanzania. Tanzania Soil Fertility Initiative: background paper. Ministry of Agriculture and Co-operatives, Dar es Salaam/FAO, Rome, 2000.
[11] Semoka JMR, Mnkeni PNN, Ringo HD. Effectiveness of Tanzanian phosphate rocks of igneous and sedimentary origin as
sources of phosphorus for maize. Zimbabwe J. Agric. Res. 1992; 30: 127-136.
[12] Sikora FJ. Evaluating and quantifying the liming potential of phosphate rocks. Nutr. Cycl. Agroecosyst. 2002; 63: 59-67.
[13] Butegwa CN, Mullins GL, Chien SH. Agronomic evaluation of fertilizer products derived from Sukulu hills phosphate rocks.
Fert. Res. 1996; 44: 113-122.
[14] Banzi FP, Kifanga LD, Bundala FM. Natural radioactivity and radiation exposure at the Minjingu phosphate mine in Tanzania. Journal of Radiological Protection. 2000; 20:41-51.
[15] Skorovarov JI, Rusin LI, Lomonsov AV, Chaforian H, Hashemi A, Novaseqhi H. Development of uranium extraction
technology from phosphoric acid solutions with extract. In Procurement International Conference of Uranium Extraction from
Soil. 2000; 217:106-113.
[16] Azouazi M, Ouahidi Y, Fakhi S, Andres Y, Abbe JCh, Benmansour M. Natural radioactivity in phosphates, phosphogypsum
and natural waters in Morocco. Journal of Environmental Radioactivity. 2000; 5:231-242.
[17] Makweba MM, Holm E. The natural radioactivity of the rock phosphate, phosphatic products and their environmental implications. Science of the Total Environment. 1993; 133:99-110.
[18] Akhtar N, Tufail M, Ashraf M. Natural environmental radioactivity and estimation of radiation exposure from saline soils.
International Journal of Environmental Science and Technology. 2005; 1(4):279-285.
[19] Akhtar N, Tufail M, Ashraf M, Mohsin-Iqbal M. Measurement of environmental radioactivity for estimation of radiation exposure from saline soil of Lahore, Pakistan. Journal of Radiation Measurements. 2005; 39:11-14.
[20] Mlwilo NA, Mohammed NK, Spyrou NM. Radioactivity levels of staple foodstuffs and dose estimates for most of the
Tanzanian population. Journal of Radiological Protection. 2007; 27:471-480.
[21] Abdel-Haleem AS, Sroor A, El-Bahi SM, Zohny E. Heavy metals and rare earth elements in phosphate fertilizer components
using instrumental neutron activation analysis. Journal of Applied Radiation and Isotopes. 2001; 55(4):569-573.
[22] Giuffré LCL, Ratto MS, Marbán L. Heavy metals input with phosphate fertilizers used in Argentina. Science of the Total Environment. 1997; 204(3):245-250.
[23] Awotoye OO, Oyedele DJ, Anwadike BC. Effects of cow-dung and rock phosphate on heavy metal content in soils and plants.
Journal of Soil Science and Environmental Management. 2010; 2(7):193-197.
[24] Javied S, Mehmood T, Chaudhry MM, Tufail M, Irfan N. Heavy metal pollution from phosphate rock used for the production of fertilizer in Pakistan. Microchemical Journal. 2008; 91:94-99.
[25] Pulhani VA, Dafauti S, Hegde AG, Sharma RM, Mishra UC. Uptake and distribution of natural radioactivity in wheat plants from soil. Journal of Environmental Radioactivity. 2003; 79:331-346.
[26] International Atomic Energy Agency, (IAEA). Generic Models and Parameters for Assessing the Environmental Transfer of
Radionuclides from Routine Releases, Exposure of Critical Groups. Safety Series No. 57, IAEA, Vienna, 1982.
[27] Lambert R, Grant C, Sauve C. Cadmium and zinc in soil solution extracts following the application of phosphate fertilizers. Science of the Total Environment. 2007; 378:293-305.
[28] International Commission for Radiation Protection (ICRP). Annual limits of intake of radionuclides by workers based on the
1990 recommendations. A report from Committee 2 of the ICRP, ICRP Publication 61 (Ann. ICRP 21 (4)) (Oxford: Pergamon),
1990.
[29] United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Ionizing radiation: source effects and biological effects. Report to the General Assembly with Annexes, 107-140, 1982.
[30] Tomislav B, Gordana M, Zdenko F, Jasminka S, Maja B. Radioactive contamination in Croatia by phosphate fertilizer production. Journal of Hazardous Materials. 2008; 162:1199-1203.
[31] Ayeni OO, Ndakidemi PA, Snyman RG, Odendaal JP. Chemical, biological and physiological indicators of metal pollution in
wetlands. Scientific Research and Essays. 2010; 5(15):1938-1949.
[32] Bonnet M, Camares O, Veisseire P. Effects of zinc and influence of Acremonium lolii on growth parameters, chlorophyll fluorescence and antioxidant enzyme activities of ryegrass (Lolium perenne L. cv Apollo). J. Exp. Bot. 2000; 51:945-953.
[33] Kukkola E, Raution P, Huttunen S. Stress indications in copper and nickel exposed Scots pine seedlings. Environ. Exp. Bot.
2000; 43: 197-210.
[34] EPA (United States Environmental Protection Agency). XRF Technologies 71 for Measuring Trace Elements in Soil and
Sediment. Innovative Technology Verification Report, Contract No. 68-C-00-181 Task Order No. 42. 2006.
[35] Leitner D, Klepsch S, Ptashnyk M, Marchant A, Kirk GJD, Schnepf A, Roose T. A dynamic model of nutrient uptake by root hairs. New Phytologist. 2010; 185:792-802.
[36] Claassen N, Barber SA. Simulation Model for Nutrient Uptake from Soil by a Growing Plant Root System. Agronomy Journal.
1976; 68(6):961-96.
[37] David TC. Factors affecting mineral nutrient acquisition by plants, Journal of Annual Review of Plant Physiology. 1985;
36:77-115.
[38] Reynaud X, Paul WL. Soil characteristics play a key role in modeling nutrient competition in plant communities. Journal of Ecology. 2004; 85:2200-2214.
[39] Barisic D, Lulic S, Miletic P. Radium and uranium in phosphate fertilizers and their impact on the radioactivity of waters. Journal of Water Resources. 1992; 26:607-611.
[40] Menzel RG. Uranium, radium, and thorium content in phosphate rocks and their possible radiation hazard. Journal of
Agricultural and Food Chemistry. 1968; 16(2): 231-234.
[41] Righi S, Lucialli P, Bruzzi L. Health and environmental impacts of a fertilizer plant - Part I: Assessment of radioactive
pollution. Journal of Environmental Radioactivity. 2005; 82:167-182.
[42] Saueia CHR, Mazzilli BP. Distribution of natural radionuclides in the production and use of phosphate fertilizers in Brazil.
Journal of Environmental Radioactivity. 2006; 89:229-239.
[43] International Atomic Energy Agency, (IAEA). Measurement of radionuclides in food and environment. IAEA Technical Report
Series 295, Vienna, IAEA, 1990.
[44] Ahmed NK, Abdel GM. Natural radioactivity in farm soil and phosphate fertilizer and its environmental implications in Qena
governorate, Upper Egypt. Journal of Environmental Radioactivity. 2005; 84:51-64.
[45] Paivi R, Sari M, Toini H, Jukka J. Soil-to-plant transfer of uranium and its distribution between plant parts in four boreal forest
species. Boreal Environmental Research Journal. 2011; 16:158-166.
[46] Tufail M, Akhtar N, Waqas M. Radioactive rock phosphate: the feed stock of phosphate fertilizers used in Pakistan. Journal of
Health Physics. 2006; 90(4):361-70.
[47] United Nations Scientific Committee on the Effects of Atomic Radiation. UNSCEAR, 2000. Report to the General Assembly,
with Scientific Annexes. Sources and Effects of Ionizing Radiation United Nations, New York.
[48] Mortvedt JJ, Beaton JD. Heavy metal and radionuclides contaminants in phosphate fertilizers. Scope-scientific Committee on
Problems of the Environment International Council of Scientific Unions 54, pp. 93-106, 1995.
[49] Chen W, Chang AC, Wu L. Assessing long-term environmental risks of trace elements in phosphate fertilizers. Journal of Ecotoxicology and Environmental Safety. 2007; 67(1):48-58.
[50] Jiao W, Chen W, Chang AC, Page AL. Environmental risks of trace elements associated with long-term phosphate fertilizers
applications: A review. Journal of Environmental Pollution. 2012; 168(0):44-53.
[51] Nziguheba G, Smolders E. Inputs of trace elements in agricultural soils through phosphate fertilizers in European countries.
Journal of Science for Total Environment. 2008; 390(1):53-57.
[52] Wuana RA, Okieimen FE. Heavy metals in contaminated soils: a review of sources, chemistry, risks and best available strategies for remediation. ISRN Ecology, vol. 2011, Article ID 402647, 20 pages, 2011. doi:10.5402/2011/402647.
[53] Avila R. Model of the long term transfer of radionuclides in forests. Technical report TR-06-08, Swedish Nuclear Fuel and
Management Co., Stockholm, 2006.
[54] Dreesen DR, Marple ML, Kelley NE. Contaminant transport, revegetation, and trace element studies at inactive uranium mill
tailings piles. In Symposium on Uranium Mill Tailings Management, 20-21 November 1978, Civil Engineering Department,
Colorado State University, Fort Collins, CO, pp. 111-113, 1978.
[55] Dreesen DR, Williams JM, Marple ML, Gladney ES, Perrin DR. Mobility and bioavailability of uranium mill tailings
contaminants. Journal of Environmental Science Technology. 1982; 16:702-709.
[56] Rumble, Mark A., and Ardell J. Bjugstad. "Uranium and radium concentrations in plants growing on uranium mill tailings in
South Dakota." Reclamation and Revegetation Research 4.4 (1986): 271-277.
[57] Breckler SW, Kahle H. Effects of toxic heavy metals (Cd, Pb) on growth and mineral nutritional growth of beech
(Fagussylvatica L.). Plant Ecology. 1992; 101:43-53.
[58] Baker AJM, Reeves RD, Hajar ASM, Heavy metal contamination and tolerance in British population of the metallophyte
Thalassic caerulescens J. and C. Presl (Brassicaceae). New Phytology. 1994; 127: 61-68.
[59] Bazzaz FA, Carlson RW, Rolfe GL. The effect of heavy metals on plants: Part I. Inhibition of gas exchange in sunflower
by Pb, Cd, Ni and Tl. Environmental Pollution. 1974; 7(4):241-246.
[60] Kumar PBAN, Dushenkov V, Motto H, Raskin I. Phytoextraction: The use of plants to remove heavy metals from
soils. Environmental Science & Technology. 1995; 29(5):1232-1238.
[61] Wenzel WW, Jockwer F. Accumulation of heavy metals in plants grown on mineralized soils of the Austrian Alps.
Environmental Pollution. 1999; 104(1):145-155.
[62] International Atomic Energy Agency, (IAEA). Measurement of radionuclides in food and the environment. Technical Report
Series No. 295, IAEA, Vienna, 1989.
[63] International Atomic Energy Agency (IAEA). International Basic Safety Standards for Protection against Ionizing Radiation
and for the Safety of Radiation Sources. Safety Series No. 115, IAEA, Vienna, 1996.
[64] International Commission on Radiological Protection (ICRP). Report of the Task Group on Reference Man. ICRP Publication 23.
Oxford: Pergamon; 1975.
[65] International Commission on Radiological Protection (ICRP). Age-dependent doses to members of the public from intake of
radionuclides: ingestion and inhalation dose coefficients. ICRP Publication 72 (Ann. ICRP 26(1)). Oxford: Pergamon; 1996.
[66] Wang X, Sato T, Xing B, Tao S. Health risks of heavy metals to the general public in Tianjin, China via consumption of
vegetables and fish. Science of The Total Environment. 2005; 350(1-3):28-37.
[67] Zheng N, Wang Q, Zheng D. Health risk of Hg, Pb, Cd, Zn, and Cu to the inhabitants around Huludao Zinc Plant in China via
consumption of vegetables. Science of The Total Environment. 2007; 383(1-3):81-89.
[68] Khan S, Cao Q, Zheng YM, Huang YZ, Zhu YG. Health risks of heavy metals in contaminated soils and food crops
irrigated with wastewater in Beijing, China. Environmental Pollution. 2008; 152(3):686-692.
[69] Zhuang P, McBride MB, Xia H, Li N, Li Z. Health risk from heavy metals via consumption of food crops in the vicinity of
Dabaoshan mine, South China. Science of The Total Environment. 2009; 407(5):1551-1561.
[70] State Environmental Protection Administration (SEPA). Environmental quality standard for soils. China. GB15618. 1995.
[71] State Environmental Protection Administration (SEPA). The Limits of Pollutants in Food. China. GB2762. 2005.
[72] World Health Organization (WHO). Evaluation of certain food additives and contaminants. 33rd Report of the Joint FAO/WHO
Expert Committee on Food Additives. Technical Report Series. Geneva: WHO; 1989.
[73] World Health Organization (WHO). Evaluation of certain food additives and contaminants. 41st Report of the Joint FAO/WHO
Expert Committee on Food Additives. Technical Report Series. Geneva: WHO; 1993.
[74] Claassen N, Syring KM, Jungk A. Verification of a mathematical model by simulating potassium uptake from soil. Plant and
Soil. 1986; 95:209-220.
[75] Epstein E. Mineral nutrition of plants: Principles and Perspectives. John Wiley and Sons Inc., New York. 1972.
[76] Nielsen NE. A transport kinetic concept for ion uptake by plants. III. Test of the concept by results from water culture and pot
experiments. Plant and Soil. 1976; 45:659-677.
[77] Penn State University. Plant Science: Barber-Cushman Model. 2013. Accessed: 8 August 2013.
Available: http://plantscience.psu.edu/research/labs/roots/methods/computer/simroot/simroot-components/barber-cushman-model.
[78] Silberbush M, Barber SA. Sensitivity of simulated phosphorus uptake to parameters used by a mechanistic-mathematical model.
Plant and Soil. 1983; 74(1):93-100.
[79] Nye PH, Tinker PB. Solute movement in the soil-root system. Blackwell Scientific Publications, Oxford, England, 1977.
[80] Barber SA. A diffusion and mass flow concept of soil nutrient availability. Soil Science. 1962; 93:39-49.
[81] Adhikari T, Rattan RK. Modelling zinc uptake by rice crop using a Barber-Cushman approach. Plant and Soil. 2000;
227(1-2):235-242.
[82] Macariola-See N, Woodard HJ, Schumacher T. Field verification of the Barber-Cushman mechanistic phosphorus uptake model
for maize. Journal of Plant Nutrition. 2003; 26(1):139-158.