
WIRELESS SENSOR NETWORK

Acknowledgement
We would like to thank Mr. Manoj Joseph, who has always been there to help us in carrying out this project and who acted as the guiding spirit behind the compilation of this report.

Certificate
This is to certify that Neeraj Kumar Baswal and Pramod Kumar Sankhla students of Ajmer Institute of Technology (ECE, 6th Semester) have completed the following project under my supervision.
WIRELESS SENSOR NETWORK

(Manoj Joseph)

INTRODUCTION
This report contains the following:
1. About ISRO, NRSC, RRSC
2. Introduction to remote sensing & image processing
3. Introduction to GIS & GPS
4. Abstract of the project

1. ABOUT ISRO:
ISRO stands for Indian Space Research Organisation. The Indian Space Research Organisation is the primary body for space research under the control of the Government of India, and one of the leading space research organisations in the world. It was established in its modern form in 1969 as a result of coordinated efforts initiated earlier. Taking its budget into consideration, it is probably one of the most efficient space organisations on the globe. The objective of ISRO is to develop space technology and its application to various tasks.

Goals and objectives


The prime objective of ISRO is to develop space technology and its application to various national tasks. The Indian space program was driven by the vision of Dr Vikram Sarabhai, considered the father of the Indian Space Programme. As stated by him: "There are some who question the relevance of space activities in a developing nation. To us, there is no ambiguity of purpose. We do not have the fantasy of competing with the economically advanced nations in the exploration of the moon or the planets or manned space-flight. But we are convinced that if we are to play a meaningful role nationally, and in the community of nations, we must be second to none in the application of advanced technologies to the real problems of man and society."

As also pointed out by Dr APJ Kalam: "Many individuals with myopic vision questioned the relevance of space activities in a newly independent nation which was finding it difficult to feed its population. Their vision was clear: if Indians were to play a meaningful role in the community of nations, they must be second to none in the application of advanced technologies to their real-life problems. They had no intention of using it as a means to display our might."

India's economic progress has made its space program more visible and active as the country aims for greater self-reliance in space technology. Hennock et al. hold that India also connects space exploration to national prestige, further stating: "This year India has launched 11 satellites, including nine from other countries, and it became the first nation to launch 10 satellites on one rocket." Some critics maintain that India's spending on space exploration is unjustifiable given the high levels of poverty and lack of basic services throughout parts of the country. The Indian space programme was born in a church. From the beginning, space activities in the country concentrated on achieving self-reliance and developing the capability to build and launch communication satellites for television broadcast, telecommunications and meteorological applications, and remote sensing satellites for the management of natural resources. ISRO has established and operationalised two major satellite systems: the Indian National Satellites (INSAT) for communication, television broadcasting and meteorological services, and the Indian Remote Sensing (IRS) satellites for the monitoring and management of natural resources. To place these satellites in their required orbits, ISRO has developed two launch vehicles: the Polar Satellite Launch Vehicle (PSLV) for launching IRS-type satellites and the Geostationary Satellite Launch Vehicle (GSLV) for launching INSAT-type satellites.

Launch vehicle fleet

Comparison of Indian carrier rockets. Left to right: SLV, ASLV, PSLV, GSLV, GSLV III.

Geopolitical and economic considerations during the 1960s and 1970s compelled India to initiate its own launch vehicle program. During the first phase (1960s-1970s) the country successfully developed a sounding rocket program, and by the 1980s research had yielded the Satellite Launch Vehicle-3 and the more advanced Augmented Satellite Launch Vehicle (ASLV), complete with operational supporting infrastructure. ISRO further applied its energies to the advancement of launch vehicle technology, resulting in the Polar Satellite Launch Vehicle (PSLV) and the Geosynchronous Satellite Launch Vehicle (GSLV).

Satellite Launch Vehicle (SLV)


The Satellite Launch Vehicle, usually known by its abbreviation SLV or SLV-3, was a four-stage solid-fuel light launcher. It was intended to reach a height of 500 km and carry a payload of 40 kg. Its first launch took place in 1979, with further launches in 1980 and 1981 and the final launch in 1983. Only two of its four test flights were successful.

Augmented Satellite Launch Vehicle (ASLV)


The Augmented Satellite Launch Vehicle, usually known by its abbreviation ASLV, was a five-stage solid-propellant rocket with the capability of placing a 150 kg satellite into LEO. The project was started by ISRO during the early 1980s to develop the technologies needed to place a payload into geostationary orbit. Its design was based on the Satellite Launch Vehicle. The first test launch was held in 1987, followed by three others in 1988, 1992 and 1994, of which only two were successful, before the vehicle was decommissioned.

Polar Satellite Launch Vehicle (PSLV)


The Polar Satellite Launch Vehicle, usually known by its abbreviation PSLV, is an expendable launch system developed to allow India to launch its Indian Remote Sensing (IRS) satellites into sun-synchronous orbits, a service that was, until the advent of the PSLV, commercially viable only from Russia. The PSLV can also launch small satellites into geostationary transfer orbit (GTO). The reliability and versatility of the PSLV are proven by the fact that it has launched 30 spacecraft (14 Indian and 16 from other countries) into a variety of orbits so far. In April 2008, it successfully launched 10 satellites at once, breaking a world record held by Russia.

Geosynchronous Satellite Launch Vehicle (GSLV)


The Geosynchronous Satellite Launch Vehicle, usually known by its abbreviation GSLV, is an expendable launch system developed to enable India to launch its INSAT-type satellites into geostationary orbit and to make India less dependent on foreign rockets. At present, it is ISRO's heaviest satellite launch vehicle and is capable of putting a total payload of up to 5 tons into low Earth orbit.

Geosynchronous Satellite Launch Vehicle Mark-III (GSLV III)


The Geosynchronous Satellite Launch Vehicle Mark-III is a launch vehicle currently under development by the Indian Space Research Organisation. It is intended to launch heavy satellites into geostationary orbit, and will allow India to become less dependent on foreign rockets for heavy lifting. The rocket is the technological successor to the GSLV; however, it is not derived from its predecessor. The maiden flight is scheduled to take place in 2011.

Earth observation and communication satellites

INSAT-1B.

India's first satellite, Aryabhata, was launched by the Soviets in 1975. This was followed by the Rohini series of experimental satellites, which were built and launched indigenously. At present, ISRO operates a large number of earth observation satellites.

The INSAT series


INSAT (Indian National Satellite System) is a series of multipurpose geostationary satellites launched by ISRO to satisfy the telecommunications, broadcasting, meteorology and search-and-rescue needs of India. Commissioned in 1983, INSAT is the largest domestic communication system in the Asia-Pacific region. It is a joint venture of the Department of Space, Department of Telecommunications, India Meteorological Department, All India Radio and Doordarshan. The overall coordination and management of the INSAT system rests with the Secretary-level INSAT Coordination Committee.

The IRS series


Indian Remote Sensing satellites (IRS) are a series of earth observation satellites built, launched and maintained by ISRO. The IRS series provides remote sensing services to the country and is the largest constellation of civilian remote sensing satellites in operation in the world today. All the satellites are placed in polar sun-synchronous orbit and provide data in a variety of spatial, spectral and temporal resolutions, enabling several programs relevant to national development to be undertaken.

Oceansat series
Oceansat is a series of satellites, part of the IRS series, built primarily to study the oceans. IRS-P4, also known as Oceansat-1, was launched on 27 May 1999; Oceansat-2 followed on 23 September 2009.

Other satellites
ISRO has also launched a set of experimental geostationary satellites known as the GSAT series. Kalpana-1, ISRO's first dedicated meteorological satellite, was launched by the Polar Satellite Launch Vehicle on 12 September 2002. The satellite was originally known as MetSat-1. In February 2003 it was renamed Kalpana-1 by the then Indian Prime Minister Atal Bihari Vajpayee, in memory of Kalpana Chawla, a NASA astronaut of Indian origin who perished in the Space Shuttle Columbia disaster.

Extraterrestrial exploration
India's first mission beyond Earth's orbit was Chandrayaan-1, a lunar spacecraft which successfully entered the lunar orbit on 8 November 2008. ISRO plans to follow up Chandrayaan-1 with Chandrayaan-2 and unmanned missions to Mars and Near-Earth objects such as asteroids and comets.

Lunar exploration
Chandrayaan-1 is India's first mission to the Moon. The unmanned lunar exploration mission includes a lunar orbiter and an impactor called the Moon Impact Probe. India launched the spacecraft using a modified version of the PSLV (PSLV-C11) on 22 October 2008 from the Satish Dhawan Space Centre, Sriharikota. The vehicle was successfully inserted into lunar orbit on 8 November 2008. It carries high-resolution remote sensing equipment for visible, near-infrared, and soft and hard X-ray frequencies. Over its two-year operational period, it is intended to survey the lunar surface to produce a complete map of its chemical characteristics and three-dimensional topography. The polar regions are of special interest, as they might contain ice. The lunar mission carries five ISRO payloads and six payloads from other international space agencies, including NASA, ESA and the Bulgarian Aerospace Agency, which were carried free of cost. Chandrayaan-1, along with NASA's LRO, played a major role in discovering the existence of water on the Moon.

Planetary exploration
The Indian Space Research Organisation had begun preparations for a mission to Mars and had received seed money of Rs 10 crore from the government. The space agency was looking at launch opportunities between 2013 and 2015. It would use its Geosynchronous Satellite Launch Vehicle (GSLV) to put the satellite in orbit and was considering ion thrusters, liquid engines or nuclear power to propel it further towards Mars. The Mars mission studies had already been completed, and space scientists were collecting scientific proposals and objectives.

Human spaceflight program

Indian Navy frogmen recovering the SRE-1.

The Indian Space Research Organisation has been sanctioned a budget of Rs 12,400 crore for its human spaceflight program. According to the Space Commission, which passed the budget, an unmanned flight will be launched in 2013-2014 and a manned mission is likely to launch by 2014-2015. If realised in the stated time-frame, India will become only the fourth nation, after the USSR, the USA and China, to carry out manned missions indigenously.

Technology demonstration
The Space Capsule Recovery Experiment (SCRE or more commonly SRE or SRE-1) is an experimental Indian spacecraft which was launched using the PSLV C7 rocket, along with three other satellites. It remained in orbit for 12 days before re-entering the Earth's atmosphere and splashing down into the Bay of Bengal. The SRE-1 was designed to demonstrate the capability to recover an orbiting space capsule, and the technology for performing experiments in the microgravity conditions of an orbiting platform. It was also intended to test thermal protection, navigation, guidance, control, deceleration and flotation systems, as well as study hypersonic aero-thermodynamics, management of communication blackouts, and recovery operations.

Astronaut training and other facilities


ISRO will set up an astronaut training centre in Bangalore by 2012 to prepare personnel for flights onboard the crewed vehicle. The centre will use water simulation to train the selected astronauts in rescue and recovery operations and survival in zero gravity, and will undertake studies of the radiation environment of space.

ISRO will build centrifuges to prepare astronauts for the acceleration phase of the mission. It also plans to build a new launchpad to meet the target of launching a manned space mission by 2015. This would be the third launchpad at the Satish Dhawan Space Centre, Sriharikota.

Development of crew vehicle


The Indian Space Research Organisation (ISRO) is working towards a maiden manned Indian space mission vehicle that can carry three astronauts for seven days in near-Earth orbit. The Indian manned spacecraft, temporarily named the Orbital Vehicle, is intended to be the basis of the indigenous Indian human spaceflight program. The capsule will be designed to carry three people, and a planned upgraded version will be equipped with rendezvous and docking capability. In its maiden manned mission, ISRO's largely autonomous 3-ton capsule will orbit the Earth at an altitude of 248 miles (400 km) for up to seven days with a two-person crew on board. The crew vehicle would launch atop ISRO's GSLV Mk II, currently under development, which features an indigenously developed cryogenic upper-stage engine. The first flight test of the cryogenic engine was held on 15 April 2010; a success would have made India the sixth country to develop such complex cryogenic technology, after the United States, Russia, France, China and Japan. Unfortunately the test was a failure, as the cryogenic engine did not work as expected, and the next launch has been rescheduled to 2011.

Planetary sciences and astronomy


The Indian space era dawned when the first two-stage sounding rocket was launched from Thumba in 1963. However, even before this epoch-making event, noteworthy contributions had been made by Indian scientists in the following areas of space science research:

Cosmic rays and high-energy astronomy, using both ground-based and balloon-borne experiments such as neutron/meson monitors and Geiger-Muller particle detectors/counters.
Ionospheric research, using ground-based radio propagation techniques such as ionosondes, VLF/HF/VHF radio probing and a chain of magnetometer stations.
Upper atmospheric research, using ground-based optical techniques such as Dobson spectrometers for measurement of total ozone content and airglow photometers.
Optical and radio astronomy: Indian astronomers have been carrying out major investigations using a number of ground-based optical and radio telescopes of varying sophistication.

With the advent of the Indian space program, emphasis was laid on indigenous, self-reliant and state-of-the-art development of technology for immediate practical applications in the fields of space science research in the country. There is a national balloon launching facility at Hyderabad, jointly supported by TIFR and ISRO. This facility has been extensively used for carrying out research in high-energy (i.e., X- and gamma-ray) astronomy, IR astronomy, middle-atmospheric trace constituents including CFCs and aerosols, ionization, electric conductivity and electric fields. The flux of secondary particles and of X-rays and gamma-rays of atmospheric origin produced by the interaction of cosmic rays is very low. This low background, in the presence of which one has to detect the feeble signal from cosmic sources, is a major advantage in conducting hard X-ray observations from India. The second advantage is that many bright sources like Cyg X-1, the Crab Nebula, Scorpius X-1 and Galactic Centre sources are observable from Hyderabad owing to their favourable declination. With these considerations, an X-ray astronomy group was formed at TIFR in 1967, and development of an instrument with an orientable X-ray telescope for hard X-ray observations was undertaken. The first balloon flight with the new instrument was made on 28 April 1968, in which observations of Scorpius X-1 were successfully carried out. In a succession of balloon flights made with this instrument between 1968 and 1974, a number of binary X-ray sources including Scorpius X-1, Cyg X-1 and Her X-1, as well as the diffuse cosmic X-ray background, were studied. Many new and astrophysically important results were obtained from these observations. One of the most important achievements of ISRO in this field was the discovery of three species of bacteria in the upper stratosphere at altitudes between 20 and 40 km. The bacteria, highly resistant to ultraviolet radiation, are not found elsewhere on Earth, leading to speculation on whether they are extraterrestrial in origin. These three bacteria can be considered extremophiles. Until then, the upper stratosphere was believed to be inhospitable because of the high doses of ultraviolet radiation. The bacteria were named Bacillus isronensis, in recognition of ISRO's contribution to the balloon experiments that led to the discovery, Bacillus aryabhata, after India's celebrated ancient astronomer Aryabhata, and Janibacter hoylei, after the distinguished astrophysicist Fred Hoyle.

Field installations
ISRO's headquarters is located at Antariksh Bhavan in Bangalore.

Research facilities
Physical Research Laboratory, Ahmedabad: Solar planetary physics, infrared astronomy, geo-cosmophysics, plasma physics, astrophysics, archaeology and hydrology are some of the branches of study at this institute. An observatory at Udaipur also falls under the control of this institution.

Semi-Conductor Laboratory, Chandigarh: Research and development in the fields of semiconductor technology, micro-electromechanical systems and process technologies relating to semiconductor processing.

National Atmospheric Research Laboratory, Chittoor: The NARL carries out fundamental and applied research in atmospheric and space sciences.

Raman Research Institute (RRI), Bangalore: RRI carries out research in selected areas of physics, such as astrophysics and astronomy.

Space Applications Centre (SAC), Ahmedabad: The SAC deals with the various aspects of the practical use of space technology. Among the fields of research at the SAC are geodesy, satellite-based telecommunications, surveying, remote sensing, meteorology and environment monitoring. The SAC additionally operates the Delhi Earth Station.

Test facilities
Liquid Propulsion Systems Centre (LPSC), Bangalore, Thiruvananthapuram and Mahendragiri: The LPSC handles testing and implementation of liquid propulsion control packages and helps develop engines for launch vehicles and satellites. The testing is largely conducted at Mahendragiri. The LPSC also constructs precision transducers.

Construction and launch facilities


ISRO Satellite Centre, Bangalore: The venue of eight successful spacecraft projects, and one of the main satellite technology bases of ISRO, this facility serves as a venue for implementing indigenous spacecraft in India. The satellites Aryabhata, Bhaskara, APPLE and IRS-1A were constructed at this site, and the IRS and INSAT satellite series are presently under development here.

Satish Dhawan Space Centre, Andhra Pradesh: With multiple sub-sites, the Sriharikota island facility acts as a launching site for India's satellites and is the main launch base for India's sounding rockets. The centre is also home to India's largest Solid Propellant Space Booster Plant (SPROB) and houses the Static Test and Evaluation Complex (STEX).

Vikram Sarabhai Space Centre, Thiruvananthapuram: The largest ISRO base is also the main technical centre and the venue of development of the SLV-3, ASLV and PSLV series. The base supports India's Thumba Equatorial Rocket Launching Station and the Rohini Sounding Rocket program, and is also developing the GSLV series.

Thumba Equatorial Rocket Launching Station (TERLS), Thumba: TERLS is used to launch sounding rockets.

Tracking and control facilities


Indian Deep Space Network (IDSN), Bangalore: This network receives, processes, archives and distributes spacecraft health data and payload data in real time. It can track and monitor satellites up to very large distances, even beyond the Moon.

National Remote Sensing Centre (NRSC), Hyderabad: The NRSC applies remote sensing to manage natural resources and carries out aerial surveying. With centres at Balanagar and Shadnagar, it also has training facilities at Dehradun in the form of the Indian Institute of Remote Sensing.

ISRO Telemetry, Tracking and Command Network (ISTRAC), Bangalore (headquarters), with ground stations throughout India and the world: Software development, ground operations, telemetry, tracking and command (TTC), and related support are provided by this institution. ISTRAC has ground stations throughout the country and around the world, in Port Louis (Mauritius), Bearslake (Russia), Biak (Indonesia) and Brunei.

Master Control Facility (MCF), Hassan and Bhopal: Geostationary satellite orbit raising, payload testing and in-orbit operations are performed at this facility. The MCF has earth stations and a Satellite Control Centre (SCC) for controlling satellites. A second MCF-like facility, named 'MCF-B', is being constructed at Bhopal.

Human resource development


Indian Institute of Space Science and Technology (IIST), Thiruvananthapuram: The institute offers undergraduate and graduate courses in avionics and aerospace engineering.

Indian Institute of Astrophysics (IIA), Bangalore: IIA is a premier institute devoted to research in astronomy, astrophysics and related physics.

Development and Educational Communication Unit (DECU), Ahmedabad: The centre works on education, research and training, mainly in conjunction with the INSAT program. The main activities carried out at DECU include the GRAMSAT and EDUSAT projects. The Training and Development Communication Channel (TDCC) also falls under the operational control of the DECU.

Commercial wing
Antrix Corporation, Bangalore: The marketing agency under government control that markets ISRO's hardware, manpower and software.

Other facilities include:
Satish Dhawan Space Centre
Balasore Rocket Launching Station (BRLS), Orissa
INSAT Master Control Facility (IMCF), Bhopal
ISRO Inertial Systems Unit (IISU), Thiruvananthapuram
Indian Regional Navigational Satellite System (IRNSS)
Aerospace Command of India (ACI)
Indian National Committee for Space Research (INCOSPAR)
Inter University Centre for Astronomy and Astrophysics (IUCAA)
Indian Department of Space (IDS)
Indian Space Science Data Centre (ISSDC)
Spacecraft Control Centre (SCC)
Regional Remote Sensing Service Centres (RRSSC)
Development and Educational Communication Unit (DECU)

Vision for the future

A model of the Geosynchronous Satellite Launch Vehicle III

A model of the RLV-TD

ISRO plans to launch a number of new-generation Earth observation satellites in the near future. It will also undertake the development of new launch vehicles and spacecraft. ISRO has stated that it will send unmanned missions to Mars and Near-Earth Objects.

Indian lunar exploration programme

Following the success of Chandrayaan-1, the country's first Moon mission, ISRO is planning a series of further lunar missions in the next decade, including a manned mission stated to take place in 2020, approximately the same time as the China National Space Administration (CNSA) manned lunar mission and NASA's Project Constellation plan to return to the Moon with the Orion-Altair project. Chandrayaan-2 is the second unmanned lunar exploration mission proposed by ISRO, at a projected cost of Rs 425 crore (US$ 90 million). The mission includes a lunar orbiter as well as a lander/rover. The wheeled rover will move on the lunar surface and pick up soil or rock samples for on-site chemical analysis; the data will be sent to Earth via the orbiter.

Space exploration

ISRO plans to carry out an unmanned mission to Mars in this decade. According to ISRO, the Mars mission remains at a conceptual stage but is expected to be finalized shortly. The current version of India's geo-synchronous satellite launch vehicle will be used to loft the new craft into space.

ISRO is designing a solar probe named Aditya, a mini-satellite intended to study the coupling between the Sun and the Earth. It is planned for launch in 2012.

IRNSS
The Indian Regional Navigational Satellite System (IRNSS) is an autonomous regional satellite navigation system being developed by the Indian Space Research Organisation, which would be under the total control of the Indian government. The requirement for such a navigation system is driven by the fact that access to global navigation satellite systems like GPS is not guaranteed in hostile situations. ISRO plans to launch the constellation of satellites between 2010 and 2012.

Development of new launch vehicles


ISRO is currently developing two new-generation launch vehicles, the GSLV-Mk III and the AVATAR RLV. These launch vehicles will increase ISRO's present launch capability and provide India with a greater share of the global satellite launch market.

Applications
India uses its satellite communication network, one of the largest in the world, for applications such as land management, water resources management, natural disaster forecasting, radio networking, weather forecasting, meteorological imaging and computer communication. Business, administrative services, and schemes such as the National Informatics Centre's NICNET are direct beneficiaries of applied satellite technology. ISRO applied its technology to improving rural education: the Satellite Instructional Television Experiment (SITE) conducted large-scale video broadcasts, resulting in significant improvement in rural education. ISRO has applied its technology to "telemedicine", directly connecting patients in rural areas to medical professionals in urban locations via satellite. Since high-quality healthcare is not universally available in some of the remote areas of India, patients in remote areas are diagnosed and analysed by doctors in urban centres in real time via video conferencing; the patient is then advised on medicine and treatment. ISRO has also helped implement India's Biodiversity Information System, completed in October 2002. Nirupa Sen details the program: "Based on intensive field sampling and mapping using satellite remote sensing and geospatial modelling tools, maps have been made of vegetation cover on a 1:250,000 scale. This has been put together in a web-enabled database which links gene-level information of plant species with spatial information in a BIOSPEC database of the ecological hot spot regions, namely northeastern India, Western Ghats, Western Himalayas and Andaman and Nicobar Islands. This has been made possible with collaboration between the Department of Biotechnology and ISRO."

ABOUT N.R.S.C.
1. INTRODUCTION
National Remote Sensing Centre (NRSC) is one of the centres of the Indian Space Research Organisation under the Department of Space, Govt. of India, engaged in operational remote sensing activities. The operational use of remote sensing data spans a wide spectrum of themes, including water resources, agriculture, soil and land degradation, mineral exploration, groundwater targeting, geomorphological mapping, coastal and ocean resources monitoring, environment, ecology and forest mapping, land use and land cover mapping, urban area studies and large-scale mapping. The chief activities of NRSC are satellite and aerial data reception, data processing, data dissemination, applications for providing value-added services, training, and distribution of data from the foreign satellites RADARSAT, IKONOS, QUICKBIRD and ORBVIEW. NRSC has its own ground station at Shadnagar, 60 km south of Hyderabad, to acquire remote sensing satellite data from the Indian Remote Sensing satellites, the latest being Cartosat-1 (IRS-P5), and from other foreign satellites like Landsat, NOAA, ERS, TERRA and AQUA. Two modern aircraft (Beechcraft 200) with INS and K-GPS, fitted with a multi-spectral scanner, photogrammetric cameras, SAR and electromagnetic sensors, are available for aerial remote sensing. NRSC has acquired the capability to design, develop, deploy and operationalise multi-sensor satellite-based ground systems, comprising ground and application segments, to meet domestic and international requirements. The acquired data are processed in-house, churning out a variety of data products for distribution among the user community. Apart from data supply, NRSC has the capability to undertake projects in a variety of disciplines. NRSC has organised training activities to train professionals, scientists and decision makers in remote sensing and GIS. A dedicated training centre, the Indian Institute of Remote Sensing, located at Dehradun, offers courses ranging from a 10-month PG Diploma to 4-day decision-makers' courses. The NRSC headquarters at Hyderabad has a Training Division, which organises similar training courses as well as custom-made courses. NRSC has a collaboration with ITC, The Netherlands, for training. NRSC is a one-stop centre for all users' remote sensing data solutions.

2. PRODUCTS:-
The National Remote Sensing Centre (NRSC) is the focal point for distribution of remote sensing satellite data products in India and its neighbouring countries. NRSC has an earth station at Shadnagar, about 55 km from Hyderabad, to receive data from almost all contemporary remote sensing satellites, such as IRS-P5, IRS-P6, IRS-P4, IRS-1D, IRS-1C, IRS-P3, ERS-1/2, the NOAA series, and the AQUA and TERRA satellites.

The data are recorded at Shadnagar/Balanagar on Digital Linear Tapes (DLTs), CD-ROMs or DVDs, depending on the mission, and archived for providing data products to users as and when orders are received. The NRSC Data Centre (NDC) is a one-stop shop for a range of data products with a wide choice of resolutions, processing levels, product media, output scales, area coverage, revisit, season and spectral bands. Data products can be supplied on a wide variety of media and formats.

3. IMAGES OF N.R.S.C.
This image gallery contains several images acquired by IRS-1C/1D/P4 sensors, a few special products such as merged images and mosaics, and a few images from ERS, Landsat and Radarsat.

PAN Images

LISS-III Images

WiFS Images

OCM Images

PAN + LISS-III Images

MOSAICS

SAR Image

LANDSAT TM Image

ABOUT REMOTE SENSING:-
Remote sensors measure electromagnetic (EM) radiation that has interacted with the Earth's surface. Interactions with matter can change the direction, intensity, wavelength content, and polarization of EM radiation. The nature of these changes depends on the chemical make-up and physical structure of the material exposed to the EM radiation. Changes in EM radiation resulting from its interactions with the Earth's surface therefore provide major clues to the characteristics of the surface materials.

BASIC PRINCIPLE:-
Electromagnetic radiation that is transmitted passes through a material (or through the boundary between two materials) with little change in intensity. Materials can also absorb EM radiation. Usually absorption is wavelength-specific: that is, more energy is absorbed at some wavelengths than at others. EM radiation that is absorbed is transformed into heat energy, which raises the material's temperature. Some of that heat energy may then be emitted as EM radiation at a wavelength dependent on the material's temperature. The lower the temperature, the longer the wavelength of the emitted radiation. As a result of solar heating, the Earth's surface emits energy in the form of longer-wavelength infrared radiation. For this reason the portion of the infrared spectrum with wavelengths greater than 3 micrometres is commonly called the thermal infrared region. Electromagnetic radiation encountering a boundary such as the Earth's surface can also be reflected. If the surface is smooth at a scale comparable to the wavelength of the incident energy, specular reflection occurs: most of the energy is reflected in a single direction, at an angle equal to the angle of incidence. Rougher surfaces cause scattering, or diffuse reflection, in all directions.

INTERACTION PROCESSES IN REMOTE SENSING:-
As sunlight initially enters the atmosphere, it encounters gas molecules, suspended dust particles, and aerosols. These materials tend to scatter a portion of the incoming radiation in all directions, with shorter wavelengths experiencing the strongest effect. (The preferential scattering of blue light in comparison to green and red light accounts for the blue color of the daytime sky. Clouds appear opaque because of intense scattering of visible light by tiny water droplets.) Although most of the remaining light is transmitted to the surface, some atmospheric gases are very effective at absorbing particular wavelengths. (The absorption of dangerous ultraviolet radiation by ozone is a well-known example.)
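As a concrete check on the temperature-wavelength rule stated under BASIC PRINCIPLE above, the short Python sketch below evaluates Wien's displacement law (peak wavelength = b / T, with b approximately 2898 micrometre-kelvins); the two temperatures are illustrative round figures for the Sun and the Earth's surface, not measured values.

# Wien's displacement law: hotter bodies emit at shorter wavelengths.
WIEN_B_UM_K = 2898.0  # Wien's constant in micrometre-kelvins

def peak_wavelength_um(temperature_k):
    # Wavelength (micrometres) of peak blackbody emission at this temperature.
    return WIEN_B_UM_K / temperature_k

# Illustrative round temperatures (assumptions, not measurements):
print("Sun, ~5800 K: %.2f micrometres (visible)" % peak_wavelength_um(5800.0))
print("Earth, ~300 K: %.1f micrometres (thermal infrared)" % peak_wavelength_um(300.0))

The Earth's roughly 300 K surface peaks near 10 micrometres, which is consistent with the region beyond 3 micrometres being called the thermal infrared.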

EM SOURCES, INTERACTIONS AND SENSORS:-
As a result of these effects, the illumination reaching the surface is a combination of highly filtered solar radiation transmitted directly to the ground and more diffuse light scattered from all parts of the sky, which helps illuminate shadowed areas.

Reflected solar radiation sensors: These sensor systems detect solar radiation that has been diffusely reflected (scattered) upward from surface features. The wavelength ranges that provide useful information include the ultraviolet, visible, near-infrared and middle-infrared ranges. Reflected solar sensing systems discriminate materials that have differing patterns of wavelength-specific absorption, which relate to the chemical make-up and physical structure of the material. Because they depend on sunlight as a source, these systems can only provide useful images during daylight hours, and changing atmospheric conditions and changes in illumination with time of day and season can pose interpretive problems. Reflected solar remote sensing systems are the most common type used to monitor Earth resources.

Thermal infrared sensors: Sensors that detect the thermal infrared radiation emitted by surface features can reveal information about the thermal properties of these materials. Like reflected solar sensors, these are passive systems that rely on solar radiation as the ultimate energy source. Because the temperature of surface features changes during the day, thermal infrared sensing systems are sensitive to the time of day at which the images are acquired.

Imaging radar sensors: Rather than relying on a natural source, these active systems illuminate the surface with broadcast microwave radiation, then measure the energy that is diffusely reflected back to the sensor. The returning energy provides information about the surface roughness and water content of surface materials and the shape of the land surface. Long-wavelength microwaves suffer little scattering in the atmosphere, even penetrating thick cloud cover. Imaging radar is therefore particularly useful in cloud-prone tropical regions.

IMAGE ACQUISITION:-
We have seen that the radiant energy measured by an aerial or satellite sensor is influenced by the radiation source, the interaction of the energy with surface materials, and the passage of the energy through the atmosphere. In addition, the illumination geometry (source position, surface slope, slope direction, and shadowing) can also affect the brightness of the upwelling energy. Together these effects produce a composite signal that varies spatially and with the time of day or season. In order to produce an image which we can interpret, the remote sensing system must first detect and measure this energy. The electromagnetic energy returned from the Earth's surface can be detected by a light-sensitive film, as in aerial photography, or by an array of electronic sensors. Light striking photographic film causes a chemical reaction, with the rate of the reaction varying with the amount of energy received by each point on the film. Developing the film converts the pattern of energy variations into a pattern of lighter and darker areas that can be interpreted visually. Electronic sensors generate an electrical signal with strength proportional to the amount of energy received. The signal from each detector in an array can be recorded and transmitted electronically in digital form (as a series of numbers). Today's digital still and video cameras are examples of imaging systems that use electronic sensors. All modern satellite imaging systems also use some form of electronic detectors.

An image from an electronic sensor array (or a digitally scanned photograph) consists of a two-dimensional rectangular grid of numerical values that represent differing brightness levels. Each value represents the average brightness for a portion of the surface, represented by the square unit areas in the image. In computer terms the grid is commonly known as a raster, and the square units are cells or pixels. When displayed on a computer, the brightness values in the image raster are translated into display brightness on the screen.

SPATIAL RESOLUTION:-
Spatial resolution is a measure of the spatial detail in an image, which is a function of the design of the sensor and its operating altitude above the surface. Each of the detectors in a remote sensor measures energy received from a finite patch of the ground surface. The smaller these individual patches are, the more detailed the spatial information that can be interpreted from the image. For digital images, spatial resolution is most commonly expressed as the ground dimensions of an image cell.

Figure 1: Spatial resolution

SPECTRAL RESOLUTION:-
The spectral resolution of a remote sensing system can be described as its ability to distinguish different parts of the range of measured wavelengths. In essence, this amounts to the number of wavelength intervals (bands) that are measured, and how narrow each interval is. An image produced by a sensor system can consist of one very broad wavelength band, a few broad bands, or many narrow wavelength bands. The names usually used for these three image categories are panchromatic, multispectral, and hyperspectral.

Figure 2: Spectral resolution

FUSING DATA FROM DIFFERENT SENSORS:-
Materials commonly found at the Earth's surface, such as soil, rocks, water, vegetation, and man-made features, possess many distinct physical properties that control their interactions with electromagnetic radiation. In the preceding pages we have discussed remote sensing systems that use three separate parts of the radiation spectrum: reflected solar radiation (visible and infrared), emitted thermal infrared, and imaging radar. Because the interactions of EM radiation with surface features in these spectral regions are different, each of the corresponding sensor systems measures a different set of physical properties. Although each type of system by itself can reveal a wealth of information about the identity and condition of surface materials, we can learn even more by combining image data from different sensors. Interpretation of the merged data set can employ rigorous quantitative analysis, or more qualitative visual analysis.

AN INTRODUCTION TO DIGITAL IMAGE PROCESSING

1. INTRODUCTION
Images are produced by a variety of physical devices, including still and video cameras, X-ray devices, electron microscopes, radar, and ultrasound, and are used for a variety of purposes, including entertainment, medical, business (e.g. documents), industrial, military, civil (e.g. traffic), security, and scientific applications. Digital image processing is a subset of the electronic domain wherein the image is converted to an array of small integers, called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by computer or other digital hardware. Digital image processing, whether as enhancement for human observers or for autonomous analysis, offers advantages in cost, speed, and flexibility, and with the rapidly falling price and rising performance of personal computers it has become the dominant method in use.

2. DIGITAL IMAGE
A digital remotely sensed image is typically composed of picture elements (pixels) located at the intersection of each row i and column j in each of the K bands of imagery. Associated with each pixel is a number known as the Digital Number (DN) or Brightness Value (BV), which depicts the average radiance of a relatively small area within a scene. A smaller number indicates low average radiance from the area, while a higher number indicates higher radiance. The size of this area affects the reproduction of details within the scene: as pixel size is reduced, more scene detail is preserved in the digital representation.
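As a minimal illustration of this structure, the hypothetical NumPy sketch below (invented for this report, not NRSC software) stores one band of a tiny image as an array of Digital Numbers and reads the DN at row i, column j; a K-band image is simply the same idea stacked K deep.

import numpy as np

# A toy 4 x 4 single-band image: each cell holds a Digital Number (DN)
# depicting the average radiance of a small patch of ground.
dn = np.array([[12, 15, 20, 22],
               [14, 60, 180, 25],
               [13, 55, 170, 24],
               [11, 12, 18, 21]], dtype=np.uint8)  # 8-bit DNs: 0 (dark) to 255 (bright)

i, j = 1, 2                                         # row i, column j
print("DN at row", i, "column", j, "is", dn[i, j])  # 180: a high-radiance area

# A K-band image stacks K such grids, giving shape (rows, columns, K):
k_bands = np.stack([dn, dn // 2, dn // 4], axis=-1)
print("multiband shape:", k_bands.shape)            # (4, 4, 3)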
3. COLOR COMPOSITES

When displaying the different bands of a multispectral data set, if images obtained in different bands are displayed in image planes other than their own, the color composite is regarded as a False Color Composite (FCC). High spectral resolution is important when producing color composites. For a true color composite, image data from the red, green and blue spectral regions must be assigned to the red, green and blue planes of the image processor frame buffer memory. A color infrared composite (standard false color composite) is displayed by placing the infrared, red and green bands in the red, green and blue frame buffer memory respectively (Fig. 2). In this composite, healthy vegetation shows up in shades of red because vegetation absorbs most of the green and red energy but reflects approximately half of the incident infrared energy. Urban areas reflect equal portions of NIR, R and G, and therefore appear steel grey.

Figure 2: False Color Composite (FCC) of IRS LISS-II, Paonta area
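A standard false color composite can be sketched in a few lines of NumPy by routing the near-infrared, red and green bands into the red, green and blue display planes; the 2 x 2 band arrays below are invented stand-ins for real sensor data.

import numpy as np

def false_color_composite(nir, red, green):
    # Standard FCC: NIR -> red plane, red -> green plane, green -> blue plane.
    return np.dstack([nir, red, green])

# Invented 2 x 2 bands (0-255). The top row mimics healthy vegetation:
# high NIR reflectance, low red and green reflectance.
nir = np.array([[200, 190], [60, 55]], dtype=np.uint8)
red = np.array([[30, 25], [90, 95]], dtype=np.uint8)
green = np.array([[40, 35], [85, 90]], dtype=np.uint8)

fcc = false_color_composite(nir, red, green)
print(fcc.shape)   # (2, 2, 3): an RGB array ready for display
print(fcc[0, 0])   # [200  30  40]: the vegetation pixel lands mostly in the red channel

The vegetation pixels, dominated by NIR, end up bright in the red display channel, matching the red appearance of healthy vegetation described above.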

4. IMAGE RECTIFICATION AND REGISTRATION

Geometric distortions manifest themselves as errors in the position of a pixel relative to other pixels in the scene and with respect to its absolute position within some defined map projection. If left uncorrected, these geometric distortions render any data extracted from the image useless. This is particularly so if the information is to be compared to other data sets, be it from another image or a GIS data set. Distortions occur for many reasons: for instance, due to changes in platform attitude (roll, pitch and yaw), altitude, earth rotation, earth curvature, panoramic distortion and detector delay. Most of these distortions can be modelled mathematically and are removed before you buy an image. Changes in attitude, however, can be difficult to account for mathematically, and so a procedure called image rectification is performed. Satellite systems are, however, geometrically quite stable, and geometric rectification is a simple procedure based on a mapping transformation relating real ground coordinates, say in easting and northing, to image line and pixel coordinates.

Rectification is a process of geometrically correcting an image so that it can be represented on a planar surface, conform to other images or conform to a map (Fig. 3). That is, it is the process by which the geometry of an image is made planimetric. It is necessary when accurate area, distance and direction measurements are required to be made from the imagery. It is achieved by transforming the data from one grid system into another grid system using a geometric transformation.

Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning and digitizing produce images that are planar, but do not contain any map coordinate information. These images need only to be geo-referenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information. This involves redefining the map coordinate of the upper left corner of the image and the cell size (the area represented by each pixel).

Ground Control Points (GCPs) are specific pixels in the input image for which the output map coordinates are known. By using more points than necessary to solve the transformation equations, a least squares solution may be found that minimises the sum of the squares of the errors. Care should be exercised when selecting ground control points, as their number, quality and distribution affect the result of the rectification.

Once the mapping transformation has been determined, a procedure called resampling is employed. Resampling matches the coordinates of image pixels to their real-world coordinates and writes a new image on a pixel-by-pixel basis. Since the grid of pixels in the source image rarely matches the grid for the reference image, the pixels are resampled so that new data file values for the output file can be calculated.

Figure 3: Image rectification. (a & b) Input and reference image with GCP locations; (c) using polynomial equations the grids are fitted together; (d) using a resampling method the output grid pixel values are assigned.
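The first-order (affine) case of the mapping transformation can be sketched as follows: given GCPs whose map coordinates are known, a NumPy least-squares fit recovers the six polynomial coefficients, and the residuals give the per-GCP errors used to judge GCP quality. The GCP values are invented for illustration and happen to fit an affine model exactly, so the reported RMS error is essentially zero.

import numpy as np

# Invented GCPs: image (column x, row y) -> map (easting, northing).
img_xy = np.array([[10.0, 10.0], [90.0, 12.0], [88.0, 85.0],
                   [12.0, 80.0], [50.0, 45.0]])
map_en = np.array([[500100.0, 820900.0], [500900.0, 820880.0],
                   [500880.0, 820150.0], [500120.0, 820200.0],
                   [500500.0, 820550.0]])

# First-order polynomial (affine): E = a0 + a1*x + a2*y and N = b0 + b1*x + b2*y.
A = np.column_stack([np.ones(len(img_xy)), img_xy])        # design matrix [1, x, y]
coef_e = np.linalg.lstsq(A, map_en[:, 0], rcond=None)[0]   # least-squares fit for easting
coef_n = np.linalg.lstsq(A, map_en[:, 1], rcond=None)[0]   # least-squares fit for northing

# With more GCPs than unknowns, the fit minimises the sum of squared errors;
# the residuals are the per-GCP errors used to judge GCP quality.
resid = A @ np.column_stack([coef_e, coef_n]) - map_en
print("RMS error (map units): %.6f" % np.sqrt((resid ** 2).mean()))

Resampling would then invert this transformation for each output cell and assign it a value from the nearest (or interpolated) input pixels.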

IMAGE ENHANCEMENT TECHNIQUES


Image enhancement techniques improve the quality of an image as perceived by a human. These techniques are most useful because many satellite images, when examined on a color display, give inadequate information for image interpretation. There is no conscious effort to improve the fidelity of the image with regard to some ideal form of the image. A wide variety of techniques exists for improving image quality; the contrast stretch, density slicing, edge enhancement, and spatial filtering are the more commonly used. Image enhancement is attempted after the image is corrected for geometric and radiometric distortions. Image enhancement methods are applied separately to each band of a multispectral image. Digital techniques have been found to be more satisfactory than photographic techniques for image enhancement, because of the precision and wide variety of digital processes.

Contrast
Contrast generally refers to the difference in luminance or grey-level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image. The contrast ratio has a strong bearing on the resolving power and detectability of an image. The larger this ratio, the easier it is to interpret the image. Satellite images often lack adequate contrast and require contrast improvement.

Contrast Enhancement
Contrast enhancement techniques expand the range of brightness values in an image so that the image can be efficiently displayed in a manner desired by the analyst. The density values in a scene are literally pulled farther apart, that is, expanded over a greater range. The effect is to increase the visual contrast between two areas of different uniform densities. This enables the analyst to discriminate easily between areas initially having a small difference in density.
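A minimal min-max linear stretch, the simplest form of contrast enhancement, might look like the sketch below; the input band is an invented low-contrast example.

import numpy as np

def linear_stretch(band):
    # Min-max linear stretch: pull the band's values apart to fill 0..255.
    lo, hi = int(band.min()), int(band.max())
    if hi == lo:
        return band.copy()          # a flat band cannot be stretched
    out = (band.astype(float) - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)

# Invented low-contrast band: DNs occupy only 90..110 of the 0..255 range.
band = np.array([[90, 95, 100],
                 [98, 104, 110]], dtype=np.uint8)
print(linear_stretch(band))         # densities now expanded across 0..255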

SPATIAL FILTERING
A characteristic of remotely sensed images is a parameter called spatial frequency, defined as the number of changes in brightness value per unit distance for any particular part of an image. If there are very few changes in brightness value over a given area in an image, this is referred to as a low-frequency area. Conversely, if the brightness value changes dramatically over short distances, this is an area of high frequency. Spatial filtering is the process of dividing the image into its constituent spatial frequencies and selectively altering certain spatial frequencies to emphasize some image features. This technique increases the analyst's ability to discriminate detail. The three types of spatial filters used in remote sensor data processing are low-pass filters, band-pass filters and high-pass filters.
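As an illustration, the sketch below applies a textbook 3 x 3 low-pass (mean) kernel and a 3 x 3 high-pass kernel to an invented band using scipy.ndimage; real processing chains use the same idea with full-size images and tuned kernels.

import numpy as np
from scipy import ndimage

low_pass = np.full((3, 3), 1.0 / 9.0)       # mean filter: passes low frequencies
high_pass = np.array([[-1.0, -1.0, -1.0],
                      [-1.0,  8.0, -1.0],
                      [-1.0, -1.0, -1.0]])  # emphasises abrupt brightness changes

# Invented band: uniform background with one high-frequency spike.
band = np.full((5, 5), 10.0)
band[2, 2] = 90.0

print(ndimage.convolve(band, low_pass))     # the spike is smeared into its neighbours
print(ndimage.convolve(band, high_pass))    # the spike and its edges are emphasised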

IMAGE FUSION TECHNIQUES


The satellites cover different portions of the electromagnetic spectrum and record the incoming radiation at different spatial, temporal, and spectral resolutions. Most of these sensors operate in two modes: a multispectral mode and a panchromatic mode. The panchromatic mode corresponds to observation over a broad spectral band (similar to a typical black-and-white photograph), while the multispectral (color) mode corresponds to observation in a number of relatively narrower bands. For example, on IRS-1D, LISS-III operates in the multispectral mode: it records energy in the green (0.52-0.59 µm), red (0.62-0.68 µm), near-infrared (0.77-0.86 µm) and mid-infrared (1.55-1.70 µm) bands. On the same satellite, PAN operates in the panchromatic mode. SPOT is another satellite with a combination of sensors operating in the multispectral and panchromatic modes. The above is also expressed by saying that the multispectral mode has a better spectral resolution than the panchromatic mode. As for spatial resolution, on most satellites the panchromatic mode has a better spatial resolution than the multispectral mode; for example, on IRS-1C, PAN has a spatial resolution of 5.8 m whereas for LISS it is 23.5 m. The better the spatial resolution, the more detailed the land-use information present in the imagery; hence PAN data is usually used for observing and separating various features. Both these types of sensors have their particular utility as per the need of the user: if the need is to separate two different kinds of land use, LISS-III is used, whereas for detailed map preparation of an area, PAN imagery is extremely useful. Image fusion is the combination of two or more different images to form a new image (by using a certain algorithm).
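One widely used fusion algorithm is the Brovey transform, which rescales each multispectral band by the ratio of the panchromatic band to the band sum; the sketch below assumes the multispectral bands have already been resampled and co-registered on the PAN grid, and all arrays are invented.

import numpy as np

def brovey_fuse(pan, red, green, blue, eps=1e-6):
    # Brovey transform: scale each band by PAN / (band sum), injecting the
    # PAN band's spatial detail while keeping the spectral proportions.
    scale = pan / (red + green + blue + eps)   # eps guards against divide-by-zero
    return red * scale, green * scale, blue * scale

# Invented bands, already resampled onto the PAN grid.
pan = np.array([[120.0, 200.0], [80.0, 150.0]])
red = np.array([[40.0, 60.0], [30.0, 50.0]])
green = np.array([[35.0, 55.0], [25.0, 45.0]])
blue = np.array([[25.0, 45.0], [20.0, 40.0]])

r, g, b = brovey_fuse(pan, red, green, blue)
print(r / g)   # band ratios are unchanged by the fusion
print(r)       # overall brightness now follows the PAN band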

APPLICATIONS
The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, RADAR, SONAR, seismology, and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room-matching equalization of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, computer-generated animations in movies, medical imaging such as CAT scans and MRI, MP3 compression, image manipulation, high-fidelity loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

GEOGRAPHICAL INFORMATION SYSTEM:-

Introduction:-
A geographic information system (GIS), or geographical information system, is any system that captures, stores, analyzes, manages, and presents data that are linked to location. In the simplest terms, GIS is the merging of cartography and database technology. GIS is used in cartography, remote sensing, land surveying, utility management, photogrammetry, geography, urban planning, emergency management, navigation, and localized search engines. Therefore, in a general sense, the term describes any information system that integrates, stores, edits, analyzes, shares, and displays geographic information. In a more generic sense, GIS applications are tools that allow users to create interactive queries (user-created searches), analyze spatial information, edit data and maps, and present the results of all these operations. Geographic information science is the science underlying geographic concepts, applications and systems, taught in degree and certificate programs at many universities.

GIS techniques and technology


Modern GIS technologies use digital information, for which various digitized data creation methods are used. The most common method of data creation is digitization, where a hard-copy map or survey plan is transferred into a digital medium through the use of a computer-aided design (CAD) program with geo-referencing capabilities.

Relating information from different sources


Location may be annotated by x, y, and z coordinates of longitude, latitude, and elevation, or by other geocode systems like ZIP codes or highway mile markers. Any variable that can be located spatially can be fed into a GIS. A GIS can also convert existing digital information, which may not yet be in map form, into forms it can recognize and use. For example, digital satellite images generated through remote sensing can be analyzed to produce a map-like layer of digital information about vegetative cover.

Data representation
GIS data represents real objects (such as roads, land use, elevation) with digital data. Real objects can be divided into two abstractions: discrete objects (a house) and continuous fields (such as rainfall amount, or elevation). Traditionally, there are two broad methods used to store data in a GIS for both abstractions: raster and vector. A new hybrid method of storing data is point clouds, which combine three-dimensional points with RGB information at each point, returning a "3D color image".

Raster
A raster data type is, in essence, any type of digital image represented in grids. Anyone who is familiar with digital photography will recognize the pixel as the smallest individual unit of an image. A combination of these pixels creates an image, distinct from the commonly used scalable vector graphics which are the basis of the vector model. While a digital image is concerned with its output as a representation of reality (a photograph or art transferred to a computer), the raster data type will reflect an abstraction of reality. Aerial photos are one commonly used form of raster data, with one purpose: to display a detailed image on a map or for the purposes of digitization. Other raster data sets contain information regarding elevation (a digital elevation model) or the reflectance of a particular wavelength of light (e.g., Landsat imagery).

Digital elevation model, map (image), and vector data.

The raster data type consists of rows and columns of cells, with each cell storing a single value. Raster data can be images (raster images), with each pixel (or cell) containing a color value. Additional values recorded for each cell may be a discrete value, such as land use; a continuous value, such as temperature; or a null value if no data is available. While a raster cell stores a single value, it can be extended by using raster bands to represent RGB (red, green, blue) colors, color maps (a mapping between a thematic code and an RGB value), or an extended attribute table with one row for each unique cell value. The resolution of a raster data set is its cell width in ground units. Raster data is stored in various formats, from a standard file-based structure of TIFF, JPEG, etc. to binary large objects (BLOBs) stored directly in a relational database management system (RDBMS), similar to other vector-based feature classes. Database storage, when properly indexed, typically allows quicker retrieval of the raster data but can require storage of millions of significantly sized records.
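A single-band raster with discrete land-use codes, a null (no-data) value, a thematic color map, and a ground-unit cell size can be sketched as follows; the codes, colors and 30 m resolution are invented for illustration.

import numpy as np

NODATA = 255                       # null value: no data available for the cell
landuse = np.array([[1, 1, 2],
                    [1, 2, 2],
                    [3, 3, NODATA]], dtype=np.uint8)   # one discrete value per cell

# Thematic color map: land-use code -> RGB triple, as used with a single band.
colormap = {1: (34, 139, 34),      # forest
            2: (218, 165, 32),     # cropland
            3: (30, 144, 255)}     # water

CELL_SIZE_M = 30.0                 # resolution: cell width in ground units
valid = landuse != NODATA
print("valid cells:", int(valid.sum()))
print("mapped area (square metres):", valid.sum() * CELL_SIZE_M ** 2)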

Vector
In a GIS, geographical features are often expressed as vectors, by considering those features as geometrical shapes. Different geographical features are expressed by different types of geometry:

Points

A simple vector map, using each of the vector elements: points for wells, lines for rivers, and a polygon for the lake.

Zero-dimensional points are used for geographical features that can best be expressed by a single point reference, in other words, by simple location. Examples include wells, peaks, features of interest, and trailheads. Points convey the least amount of information of these file types. Points can also be used to represent areas when displayed at a small scale. For example, cities on a map of the world might be represented by points rather than polygons. No measurements are possible with point features.

Lines or polylines

One-dimensional lines or polylines are used for linear features such as rivers, roads, railroads, trails, and topographic lines. Again, as with point features, linear features displayed at a small scale will be represented as lines rather than as polygons. Distance can be measured along line features.

Polygons

Two-dimensional polygons are used for geographical features that cover a particular area of the earth's surface. Such features may include lakes, park boundaries, buildings, city boundaries, or land uses. Polygons convey the most information of the three geometry types. Polygon features can measure perimeter and area. Each of these geometries is linked to a row in a database that describes their attributes. For example, a database that describes lakes may contain a lake's depth, water quality, and pollution level. This information can be used to make a map describing a particular attribute of the dataset. For example, lakes could be coloured depending on their level of pollution. Different geometries can also be compared. For example, the GIS could be used to identify all wells (point geometry) that are within one kilometer of a lake (polygon geometry) that has a high level of pollution.
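As a hedged illustration of such a spatial query, the sketch below uses the open-source Shapely library; the lake outline, well positions and 1 km threshold are invented for illustration only.

    from shapely.geometry import Point, Polygon

    # Invented lake outline (metres) and well positions, for illustration only
    lake = Polygon([(0, 0), (0, 500), (400, 500), (400, 0)])
    wells = {"W1": Point(450, 250), "W2": Point(2000, 2000), "W3": Point(900, 400)}

    # Wells lying within one kilometre of the (polluted) lake
    zone = lake.buffer(1000)                       # 1 km buffer around the polygon
    nearby = [wid for wid, pt in wells.items() if zone.contains(pt)]
    print(nearby)                                  # ['W1', 'W3']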

Advantages and disadvantages of using a raster or vector data model

Raster datasets record a value for all points in the area covered, which may require more storage space than representing data in a vector format that can store data only where needed. Raster data also allows easy implementation of overlay operations, which are more difficult with vector data.

Vector data can be displayed as vector graphics used on traditional maps, whereas raster data will appear as an image that may have a blocky appearance for object boundaries (depending on the resolution of the raster file). Vector data can be easier to register, scale, and re-project, which can simplify combining vector layers from different sources. Vector data is more compatible with relational database environments, where it can be part of a relational table as a normal column and processed using a multitude of operators. Vector file sizes are usually smaller than raster data, which can be 10 to 100 times larger than vector data (depending on resolution). Vector data is also simpler to update and maintain, whereas a raster image will have to be completely reproduced (for example, when a new road is added). Vector data allows much more analysis capability, especially for "networks" such as roads, power, rail, telecommunications, etc. (examples: best route, largest port, airfields connected to two-lane highways). Raster data will not have all the characteristics of the features it displays.

Non-spatial data

Additional non-spatial data can also be stored along with the spatial data represented by the coordinates of a vector geometry or the position of a raster cell. In vector data, the additional data contains attributes of the feature. For example, a forest inventory polygon may also have an identifier value and information about tree species. In raster data the cell value can store attribute information, but it can also be used as an identifier that can relate to records in another table.

Data capture

Data capture, entering information into the system, consumes much of the time of GIS practitioners. There are a variety of methods used to enter data into a GIS where it is stored in a digital format. The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Currently, analog aerial photos are scanned before being entered into a soft-copy system, but as high-quality digital cameras become cheaper this step will be skipped.

Raster-to-vector translation

Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion. More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false colour rendering and a variety of other techniques including the use of two-dimensional Fourier transforms.

Projections, coordinate systems and registration

Projection is a fundamental component of map making. A projection is a mathematical means of transferring information from a model of the Earth, which represents a three-dimensional curved surface, to a two-dimensional medium (paper or a computer screen). Different projections are used for different types of maps because each projection particularly suits specific uses. For example, a projection that accurately represents the shapes of the continents will distort their relative sizes. See Map projection for more information. Since much of the information in a GIS comes from existing maps, a GIS uses the processing power of the computer to transform digital information, gathered from sources with different projections and/or different coordinate systems, to a common projection and coordinate system. For images, this process is called rectification.
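As a hedged sketch of such a coordinate transformation, the snippet below uses the pyproj library to move WGS84 geographic coordinates into a projected UTM system; the EPSG codes and the sample point are illustrative assumptions, not values taken from this report.

    from pyproj import Transformer

    # WGS84 latitude/longitude -> UTM zone 43N (EPSG codes assumed for illustration)
    transformer = Transformer.from_crs("EPSG:4326", "EPSG:32643", always_xy=True)
    lon, lat = 74.64, 26.45                        # an arbitrary sample point
    easting, northing = transformer.transform(lon, lat)
    print(easting, northing)                       # metres in the projected system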

GLOBAL POSITIONING SYSTEM:-

1. INTRODUCTION:-

Using the Global Positioning System (GPS, a system used to establish a position at any point on the globe) the following two values can be determined anywhere on Earth:

1. One's exact location (longitude, latitude and height co-ordinates), accurate to within a range of 20 m to approx. 1 mm.
2. The precise time (Universal Time Coordinated, UTC), accurate to within a range of 60 ns to approx. 5 ns.

Speed and direction of travel (course) can be derived from these co-ordinates as well as the time. The coordinates and time values are determined by 28 satellites orbiting the Earth. GPS receivers are used for positioning, locating, navigating, surveying and determining the time, and are employed both by private individuals (e.g. for leisure activities such as trekking, balloon flights and cross-country skiing) and companies (surveying, determining the time, navigation, vehicle monitoring etc.).

GPS (the full description is: NAVigation System with Timing And Ranging Global Positioning System, NAVSTAR GPS) was developed by the U.S. Department of Defense (DoD) and can be used both by civilians and military personnel. The civil signal SPS (Standard Positioning Service) can be used freely by the public, whilst the military signal PPS (Precise Positioning Service) can only be used by authorized government agencies. The first satellite was placed in orbit on 22 February 1978, and there are currently 28 operational satellites orbiting the Earth at a height of 20,180 km on 6 different orbital planes. Their orbits are inclined at 55° to the equator, ensuring that at least 4 satellites are in radio communication with any point on the planet. Each satellite orbits the Earth in approximately 12 hours and has four atomic clocks on board.

During the development of the GPS system, particular emphasis was placed on the following three aspects:
1. It had to provide users with the capability of determining position, speed and time, whether in motion or at rest.
2. It had to have a continuous, global, 3-dimensional positioning capability with a high degree of accuracy, irrespective of the weather.
3. It had to offer potential for civilian use.

2. GENERATING GPS TRANSIT TIME:-

Twenty-eight satellites inclined at 55° to the equator orbit the Earth every 11 hours and 58 minutes at a height of 20,180 km on 6 different orbital planes (Figure 1). Each one of these satellites has up to four atomic clocks on board. Atomic clocks are currently the most precise instruments known, losing a maximum of one second every 30,000 to 1,000,000 years. In order to make them even more accurate, they are regularly adjusted or synchronized from various control points on Earth. Each satellite transmits its exact position and its precise on-board clock time to Earth at a frequency of 1575.42 MHz. These signals are transmitted at the speed of light (300,000 km/s) and therefore require approx. 67.3 ms to reach a position on the Earth's surface located directly below the satellite. The signals require a further 3.33 µs for each excess kilometer of travel. If you wish to establish your position on land (or at sea or in the air), all you require is an accurate clock. By comparing the arrival time of the satellite signal with the on-board clock time at the moment the signal was emitted, it is possible to determine the transit time of that signal (Figure 2).

Figure 1: GPS satellites orbiting the Earth on 6 orbital planes.

Figure 2: Determining the transit time

The distance S to the satellite can be determined from the known transit time Δτ:

distance = transit time × speed of light
S = Δτ · c

Measuring signal transit time and knowing the distance to a satellite is still not enough to calculate one's own position in 3-D space. To achieve this, four independent transit time measurements are required. It is for this reason that signal communication with four different satellites is needed to calculate one's exact position.
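A minimal Python sketch of the S = Δτ · c relation above; the transit time used is the approximate value quoted earlier for a satellite directly overhead.

    C = 299_792_458  # speed of light in m/s

    def satellite_distance(transit_time_s):
        """Distance to the satellite from the measured signal transit time."""
        return transit_time_s * C

    # A signal from a satellite directly overhead takes approx. 67.3 ms
    print(satellite_distance(67.3e-3) / 1000)  # approx. 20,176 km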

3. DETERMINING A POSITION:-

3.1 Determining a position on a plane

Imagine that you are wandering across a vast plateau and would like to know where you are. Two satellites are orbiting far above you transmitting their own on board clock times and positions. By using the signal transit time to both satellites, you can draw two circles with the radii S1 and S2 around the satellites. Each radius corresponds to the distance calculated to the satellite. All possible distances to the satellite are located on the circumference of the circle. If the position above the satellites is excluded, the location of the receiver is at the exact point where the two circles intersect beneath the satellites (Figure 3). Two satellites are sufficient to determine a position on the X/Y plane.

Figure 3: The position of the receiver at the intersection between two circles.

In reality, a position has to be determined in three-dimensional space, rather than on a plane. As the difference between a plane and three-dimensional space consists of an extra dimension (height, Z), an additional third satellite must be available to determine the true position. If the distance to the three satellites is known, all possible positions are located on the surfaces of three spheres whose radii correspond to the distances calculated. The position sought is at the point where all three surfaces of the spheres intersect.

Figure 4: The position is determined at the point where all three spheres intersect.

3.2 The effect and correction of time error


We have been assuming up until now that it has been possible to measure signal transit time precisely. However, this is not the case. For the receiver to measure time precisely, a highly accurate, synchronized clock is needed. If the transit time is out by just 1 µs, this produces a positional error of 300 m. As the clocks on board all three satellites are synchronized, the transit time in the case of all three measurements is inaccurate by the same amount. Mathematics is the only thing that can help us now. Recall that if N variables are unknown, we need N independent equations. If the time measurement is accompanied by a constant unknown error, we will have four unknown variables in 3-D space:

Longitude (X)
Latitude (Y)
Height (Z)
Time error (Δt)

It therefore follows that in three-dimensional space four satellites are needed to determine a position.

3.3 Determining a position in 3-D space


In order to determine these four unknown variables, four independent equations are needed. The four transit times required are supplied by the four different satellites (sat. 1 to sat. 4). The 28 GPS satellites are distributed around the globe in such a way that at least 4 of them are always visible from any point on Earth (Figure 5). Despite receiver time errors, a position can be calculated to within approx. 5 to 10 m.
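To make the four-equations-for-four-unknowns idea concrete, here is a minimal Newton-iteration sketch on the pseudorange equations; the satellite positions and measured ranges below are invented for illustration, and this is a sketch rather than the algorithm of any particular receiver.

    import numpy as np

    # Invented satellite positions (m) and pseudoranges (m), for illustration only
    sats = np.array([[15600e3,  7540e3, 20140e3],
                     [18760e3,  2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3,   610e3, 18390e3]])
    pr = np.array([21110e3, 22010e3, 21970e3, 22710e3])

    pos, bias = np.zeros(3), 0.0          # receiver position and clock error (as c*dt)
    for _ in range(10):                   # Newton iterations on the 4x4 system
        d = np.linalg.norm(sats - pos, axis=1)
        residual = pr - (d + bias)
        J = np.hstack([(pos - sats) / d[:, None], np.ones((4, 1))])
        delta = np.linalg.solve(J, residual)
        pos, bias = pos + delta[:3], bias + delta[3]
    print(pos, bias)                      # position (m) and clock error times c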

Figure 5: Four satellites are required to determine a position in 3-D space

4. BASIC MODULE OF GPS:

GPS modules have to evaluate weak antenna signals from at least four satellites in order to determine a correct three-dimensional position. A time signal is also often emitted in addition to longitude, latitude and height. This time signal is synchronized with UTC (Universal Time Coordinated). From the position determined and the exact time, additional physical variables, such as speed and acceleration, can also be calculated. The GPS module issues information on the constellation, satellite health, the number of visible satellites, etc.

Figure 6 shows a typical block diagram of a GPS module. The signals received (1575.42 MHz) are preamplified and transformed to a lower intermediate frequency. The reference oscillator provides the necessary carrier wave for frequency conversion, along with the necessary clock frequency for the processor and correlator. The analogue intermediate frequency is converted into a digital signal by means of a 2-bit ADC. Signal transit time from the satellites to the GPS receiver is ascertained by correlating PRN pulse sequences. The correct satellite PRN sequence must be used to determine this time; otherwise there is no correlation maximum. Data is recovered by mixing it with the correct PRN sequence. At the same time, the useful signal is amplified above the interference level. Up to 16 satellite signals are processed simultaneously. The control and generation of PRN sequences and the recovery of data are carried out by a signal processor. Calculating and saving the position, including the variables derived from this, is carried out by a processor with a memory facility.
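To illustrate the correlation step, here is a toy sketch: a real C/A code is a 1023-chip Gold code, which is replaced here by a random ±1 sequence, and the delay and noise level are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    prn = rng.choice([-1, 1], size=1023)       # stand-in for a satellite PRN code

    delay = 317                                 # unknown transit delay, in chips
    received = np.roll(prn, delay) + 0.5 * rng.standard_normal(1023)

    # Correlate against every shift of the local PRN copy; the peak gives the delay
    corr = [np.dot(received, np.roll(prn, k)) for k in range(1023)]
    print(int(np.argmax(corr)))                 # -> 317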

5. DATA FORMAT AND HARDWARE INTERFACE:-

GPS receivers require different signals in order to function (Figure 9). These variables are broadcast after position and time have been successfully calculated and determined. To ensure portability between different types of appliances, there are either international standards for data exchange (NMEA and RTCM) or defined (proprietary) formats and protocols provided by the manufacturer.

Figure 9: Block diagram of a GPS receiver with interfaces
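As a hedged sketch of the NMEA side of this interface, the snippet below extracts latitude and longitude from a GGA sentence; the sentence is a widely quoted sample from NMEA documentation, not output from an actual receiver.

    # A widely quoted sample NMEA 0183 GGA sentence (illustrative values)
    s = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
    f = s.split(",")

    lat = float(f[2][:2]) + float(f[2][2:]) / 60    # ddmm.mmmm -> decimal degrees
    lon = float(f[4][:3]) + float(f[4][3:]) / 60    # dddmm.mmmm -> decimal degrees
    lat = -lat if f[3] == "S" else lat
    lon = -lon if f[5] == "W" else lon
    print(round(lat, 4), round(lon, 4))             # 48.1173 11.5167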

ABSTRACT OF THIS PROJECT


1. Introduction
2. WSN vs ad hoc networks
3. Features of sensor networks
4. Communication network
5. Hardware design issues
6. Applications

1. INTRODUCTION :A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous devices using sensors to cooperatively monitor physical or environmental conditions, Such as temperature,sound, vibration,pressure,motion or pollutants, at different locations. Originally developed as a military application for battlefield surveillance, also covering areas such as environment and habitat monitoring, traffic control, vehicle and vessel monitoring, fire detection, object tracking, smart building, home automation, etc are but few examples. Wireless sensor networks gather data from places where it is difficult for humans to reach and once they are deployed, they work on their own and serve the data for which they are deployed. When the environment changes,sensor network should change too. For an example, it is meaningless, if the sensor network is collecting data of rainfall in the months of January-March in India. However, the same network could be utilized to gather temperature data for the same period. Or at least we should stop retrieving. 2. WIRELESS SENSOR NETWORK VS AD HOC SENSOR:A mobile ad hoc network (MANET), sometimes called a mobile mesh network, is a self configuring network of mobile devices connected by wireless links. Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. The difference between wireless sensor networks and ad-hoc networks are outlined below: The number of sensor nodes in a sensor network can be several orders of magnitude higher than the nodes in an ad hoc network. Sensor nodes are densely deployed. Sensor nodes are prone to failures. The topology of a sensor network changes very frequently. Sensor nodes mainly use broadcast communication paradigm whereas most ad hoc networks are based on point-to-point communication. Sensor nodes are limited in power, computational capacities, and memory. Sensor nodes may not have global identification (ID) because of the large amount of overheads and large number of sensors. Sensor networks are deployed with a specific sensing application in mind whereas ad hoc networks are mostly constructed for communication purpose.

3. CHARACTERISTIC FEATURES OF SENSOR NETWORKS:-

Our wireless sensor network consists of spatially distributed autonomous devices equipped with sensors, RF transceivers, microcontrollers and batteries. Hereafter we describe the desirable properties of this kind of wireless sensor network.

(a) Self-organization: The network should auto-detect newly arrived nodes or removed/stolen/broken nodes and adapt the routing tree accordingly. After deployment, no human interaction should be required during the network lifetime (limited by battery life).

(b) Scalability: Adding a huge number of nodes should still allow acceptable performance in terms of latency and packet drops.

Multi-hop transmission: When the base station is not within communication range, nodes should be able to send their data to neighboring nodes that will forward the messages towards the base station.

(c) Latency: The network should forward messages towards the sink with limited latency. This is not the highest priority for our application, though.

(d) Static/mobile network: In this case, we are considering a wireless sensor network which has a static topology. However, as mentioned before, nodes may be added or removed at different time instants and the topology must adapt automatically.

(e) Size: Sensor nodes for environmental applications should be as small as possible to be discreet. Indeed, people might be tempted to steal them or to take them home out of pure curiosity.

(f) Price: Since we have a large number of nodes and because our application is agriculture oriented, our nodes must be as cheap as possible. This is of course not the case at this level of research, because we are lacking economies of scale. However, we should keep in mind the financial constraint when choosing each component of the system.

(g) Robustness to the physical environment: Nodes must be robust enough to work long term in a potentially hostile environment: high humidity during the monsoon, extreme heat during daytime, strong electromagnetic fields during thunderstorms, sun rays, the presence of animals and human curiosity all make it extremely difficult to provide a reliable system.

(h) Power consumption: One main challenge is to design low-power hardware components and to develop a software platform that minimizes power consumption. This can be achieved by limiting the time during which the radio chip and the microcontroller are turned on. An efficient MAC layer protocol must be considered for this purpose.

(i) Low data rate: For this application (sending of sensor data), we do not need high transmission rates. A few kbps will be sufficient. This also enables lower power consumption and a lower bit error rate while transmitting.

4. COMMUNICATION NETWORKS:-

The study of communication networks can encompass several years at the college or university level. To understand and be able to implement sensor networks, however, several primary concepts are sufficient.

(1) Network Topology

The basic issue in communication networks is the transmission of messages to achieve a prescribed message throughput (Quantity of Service) and Quality of Service (QoS). QoS can be specified in terms of message delay, message due dates, bit error rates, packet loss, economic cost of transmission, transmission power, etc. Depending on QoS, the installation environment, economic considerations, and the application, one of several basic network topologies may be used. A communication network is composed of nodes, each of which has computing power and can transmit and receive messages over communication links, wireless or cabled. The basic network topologies are shown in the figure and include fully connected, mesh, star, ring, tree, and bus. A single network may consist of several interconnected subnets of different topologies. Networks are further classified as Local Area Networks (LAN), e.g. inside one building, or Wide Area Networks (WAN), e.g. between buildings.

Fully connected networks suffer from problems of NP-complexity [Garey 1979]; as additional nodes are added, the number of links increases quadratically. Therefore, for large networks, the routing problem is computationally intractable even with the availability of large amounts of computing power.

Mesh networks are regularly distributed networks that generally allow transmission only to a node's nearest neighbors. The nodes in these networks are generally identical, so that mesh nets are also referred to as peer-to-peer (see below) nets. Mesh nets can be good models for large-scale networks of wireless sensors that are distributed over a geographic region, e.g. personnel or vehicle security surveillance systems. Note that the regular structure reflects the communications topology; the actual geographic distribution of the nodes need not be a regular mesh. Since there are generally multiple routing paths between nodes, these nets are robust to failure of individual nodes or links. An advantage of mesh nets is that, although all nodes may be identical and have the same computing and transmission capabilities, certain nodes can be designated as group leaders that take on additional functions. If a group leader is disabled, another node can then take over these duties.

All nodes of the star topology are connected to a single hub node. The hub requires greater message handling, routing, and decision-making capabilities than the other nodes. If a communication link is cut, it only affects one node. However, if the hub is incapacitated, the network is destroyed. In the ring topology all nodes perform the same function and there is no leader node. Messages generally travel around the ring in a single direction. However, if the ring is cut, all communication is lost. The self-healing ring network (SHR) shown has two rings and is more fault tolerant. In the bus topology, messages are broadcast on the bus to all nodes. Each node checks the destination address in the message header, and processes the messages addressed to it. The bus topology is passive in that each node simply listens for messages and is not responsible for retransmitting any messages.

(2) Communication Protocols and Routing

Headers. Each message generally has a header identifying its source node, destination node, length of the data field, and other information. This is used by the nodes in proper routing of the message. In encoded messages, parity bits may be included. In packet routing networks, each message is broken into packets of fixed length. The packets are transmitted separately through the network and then reassembled at the destination. The fixed packet length makes for easier routing and satisfaction of QoS. Generally, voice communications use circuit switching, while data transmissions use packet routing. When a node desires to transmit a message, handshaking protocols with the destination node are used to improve reliability. The source and destination might transmit alternately as follows: request to send, ready to receive, send message, message received. Handshaking is used to guarantee QoS and to retransmit messages that were not properly received.

Switching. Most computer networks use a store-and-forward switching technique to control the flow of information [Duato 1996]. There, each time a packet reaches a node, it is completely buffered in local memory and transmitted as a whole. More sophisticated switching techniques include wormhole switching, which splits the message into smaller units known as flow control units or flits. The header flit determines the route. As the header is routed, the remaining flits follow it in pipeline fashion. This technique currently achieves the lowest message latency. Another popular switching scheme is virtual cut-through. Here, when the header arrives at a node, it is routed without waiting for the rest of the packet. Packets are buffered either in software buffers in memory or in hardware buffers, and various sorts of buffers are used, including edge buffers, central buffers, etc.

Multiple Access Protocols. When multiple nodes desire to transmit, protocols are needed to avoid collisions and lost data. In the ALOHA scheme, first used in the 1970s at the University of Hawaii, a node simply transmits a message when it desires. If it receives an acknowledgement, all is well. If not, the node waits a random time and re-transmits the message. In Frequency Division Multiple Access (FDMA), different nodes have different carrier frequencies. Since frequency resources are divided, this decreases the bandwidth available for each node. FDMA also requires additional hardware and intelligence at each node. In Code Division Multiple Access (CDMA), a unique code is used by each node to encode its messages. This increases the complexity of the transmitter and the receiver. In Time Division Multiple Access (TDMA), the RF link is divided on a time axis, with each node being given a predetermined time slot it can use for communication. This decreases the sweep rate, but a major advantage is that TDMA can be implemented in software. All nodes require accurate, synchronized clocks for TDMA.

Routing. Since a distributed network has multiple nodes and services many messages, and each node is a shared resource, many decisions must be made. There may be multiple paths from the source to the destination. Therefore, message routing is an important topic. The main performance measures affected by the routing scheme are throughput (quantity of service) and average packet delay (quality of service). Routing schemes should also avoid both deadlock and livelock. Fixed routing schemes often use routing tables that dictate the next node to be routed to, given the current message location and the destination node. Routing tables can be very large for large networks, and cannot take into account real-time effects such as failed links, nodes with backed-up queues, or congested links. Adaptive routing schemes depend on the current network status and can take into account various performance measures, including cost of transmission over a given link, congestion of a given link, reliability of a path, and time of transmission. They can also account for link or node failures.

Deadlock and Livelock. Large-scale communication networks contain cycles (circular paths) of nodes. Moreover, each node is a shared resource that can handle multiple messages flowing along different paths. Therefore, communication nets are susceptible to deadlock, wherein all nodes in a specific cycle have full buffers and are waiting for each other. Then, no node can transmit because no node can get free buffer space, so all transmission in that cycle comes to a halt. Livelock, on the other hand, is the condition wherein a message is continually transmitted around the network and never reaches its destination. Livelock is a deficiency of some routing schemes that route the message to alternate links when the desired links are congested, without taking into account that the message should be routed closer to its final destination. Many routing schemes are available for routing with deadlock and livelock avoidance [e.g. Duato 1996].
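Referring back to the TDMA scheme described above, here is a toy sketch of slot timing under the stated assumption of synchronized clocks; the slot length and node count are invented for illustration.

    SLOT_MS = 50                     # illustrative slot length
    NUM_NODES = 8                    # illustrative number of nodes
    FRAME_MS = SLOT_MS * NUM_NODES

    def next_slot_start(node_id, now_ms):
        """Next time (ms) at which node_id's predetermined slot begins."""
        frame_start = (now_ms // FRAME_MS) * FRAME_MS
        start = frame_start + node_id * SLOT_MS
        return start if start >= now_ms else start + FRAME_MS

    print(next_slot_start(2, 430))   # -> 500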

Flow Control. In queuing networks, each node has an associated queue or buffer that can stack messages. In such networks, flow control and resource assignment are important. The objectives of flow control are to protect the network from problems related to overload and speed mismatches, and to maintain QoS, efficiency, fairness, and freedom from deadlock. If a given node A has high priority, its messages might be preferentially routed in every case, so that competing nodes are choked off as the traffic of A increases. Fair routing schemes avoid this. There are several techniques for flow control: in buffer management, certain portions of the buffer space are assigned for certain purposes. In choke packet schemes, any node sensing congestion sends choke packets to other nodes telling them to reduce their transmissions. Isarithmic schemes have a fixed number of permits for the network: a message can be sent only if a permit is available. In window or kanban schemes, the receiver grants credits to the sender only if it has free buffer space.

5. HARDWARE DESIGN ISSUES:-

In a generic sensor node, we can identify a power module, a communication block, a processing unit with internal and/or external memory, and a module for sensing and actuation.

(A) Power:-

Using stored energy or harvesting energy from the outside world are the two options for the power module. Energy storage may be achieved with the use of batteries or alternative devices such as fuel cells or miniaturized heat engines, whereas energy-scavenging opportunities [D37] are provided by solar power, vibrations, acoustic noise, and piezoelectric effects [D38]. Primary (non-rechargeable) batteries are often chosen, predominantly AA, AAA and coin-type cells. Alkaline batteries offer a high energy density at a cheap price, offset by a non-flat discharge, a large physical size with respect to a typical sensor node, and a shelf life of only 5 years. Voltage regulation could in principle be employed, but its high inefficiency and large quiescent current consumption call for the use of components that can deal with large variations in the supply voltage [A5]. Lithium cells are very compact and boast a flat discharge curve. Secondary (rechargeable) batteries are typically not desirable, as they offer a lower energy density and a higher cost, not to mention the fact that in most applications recharging is simply not practical. Fuel cells [D39] are rechargeable electrochemical energy-conversion devices where electricity and heat are produced as long as hydrogen is supplied to react with oxygen. Pollution is minimal, as water is the main byproduct of the reaction. The potential of fuel cells for energy storage and power delivery is much higher than that of traditional battery technologies, but the fact that they require hydrogen complicates their application. Using renewable energy and scavenging techniques is an interesting alternative.
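A back-of-the-envelope sketch of why duty cycling matters for the power module; all currents, the duty cycle and the battery capacity below are illustrative assumptions, not measured values from any platform.

    I_ACTIVE_MA = 20.0     # radio and MCU on (assumed)
    I_SLEEP_MA = 0.02      # deep sleep (assumed)
    DUTY = 0.01            # active 1% of the time
    CAPACITY_MAH = 2500.0  # roughly two AA alkaline cells

    i_avg = DUTY * I_ACTIVE_MA + (1 - DUTY) * I_SLEEP_MA
    lifetime_days = CAPACITY_MAH / i_avg / 24
    print(f"average current {i_avg:.2f} mA -> about {lifetime_days:.0f} days")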

(B) Communication:Most sensor networks use radio communication, even if alternative solutions are offered by laser and infrared. Nearly all radio-based platforms use COTS (Commercial Off-The-Shelf) components. Popular choices include the TR1000 from RFM (used in the MICA motes) and the CC1000 from Chipcon (chosen for the MICA2 platform). More recent solutions use industry standards like IEEE 802.15.4 (MICAz and Telos motes with CC2420 from Chipcon) or pseudo-standards like Bluetooth. Typically, the transmit power ranges between 25 dBm and 10 dBm, while the receiver sensitivity can be as good as 110 dBm Ultra Wide Band (UWB) is of great interest for sensor networks since it meets some of their main requirements. UWB is a particular carrier-free spread spectrum technique where the RF signal is spread over a spectrum as large as several GHz. This implies that UWB signals look like noise to conventional radios. Such signals are produced using baseband pulses (for instance, Gaussian monopulses) whose length ranges from 100 ps to 1 ns, and baseband transmission is generally carried out by means of pulse position modulation (PPM). Modulation and demodulation are indeed extremely cheap. UWB provides built-in ranging capabilities (a wideband signal allows a good time resolution and therefore a good location accuracy) [D40], allows a very low power consumption, and performs well in the presence of multipath fading. Radios with relatively low bit-rates (up to 100 kbps) are advantageous in terms of power consumption. In most sensor networks, high data rates are not needed, even though they allow shorter transmission times thus permitting lower duty cycles and alleviating channel access contention. It is also desirable for a radio to quickly switch from a sleep mode to an operational mode. Optical transceiver such as lasers offer a strong power advantage, mainly due to their high directionality and the fact that only baseband processing is required. Also, security is intrinsically guaranteed (intercepted signals are altered). However, the need for a line of sight and precise localization makes this option impractical for most applications (C) Processing and Computing:Although low-power FPGAs might become a viable option in the near future [D41], microcontrollers (MCUs) are now the primary choice for processing in sensor nodes. The key metric in the selection of an MCU is power consumption. Sleep mode deserve special attention, as in many applications low duty cycles are essential for lifetime extension. Just as in the case of the radio module, a fast wake-up time is important. Most CPUs used in lower-end sensor nodes have clock speeds of a few MHz. The memory requirements depend on the application and the network

topology: data storage is not critical if data are often relayed to a base station. Berkeley motes, UCLAs Medusa MK-2 and ETHZs BTnodes use low-cost Atmel AVR 8-bit RISC microcontrollers which consume about 1500 pJ/instruction. More sophisticated platforms, such as the Intel iMote and Rockwell WINS nodes, use Intel StrongArm/XScale 32-bit processors. (D) Sensing:The high sampling rates of modern digital sensors are usually not needed in sensor networks. The power efficiency of sensors and their turn-on and turn-off time are much more important. Additional issues are the physical size of the sensing hardware, fabrication, and assembly compatibility with other components of the system. Using a microcontroller with an onchip analog comparator is another energy-saving technique which allows the node to avoid sampling values falling outside a certain range [D43]. The ADC which complements analog sensors is particularly critical, as its resolution has a direct impact on energy consumption. Fortunately, typical sensor network applications do not have stringent resolution requirements. 6.APPLICATIONS:The applications for WSNs are varied, typically involving some kind of monitoring, tracking, or controlling. Specific applications include habitat monitoring, object tracking, nuclear reactor control, fire detection, and traffic monitoring. In a typical application, a WSN is scattered in a region where it is meant to collect data through its sensor nodes. (A) Area monitoring Area monitoring is a common application of WSNs. In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored. For example, a large quantity of sensor nodes could be deployed over a battlefield to detect enemy intrusion instead of using landmines. When the sensors detect the event being monitored (heat, pressure, sound, light, electro-magnetic field, vibration, etc.), the event needs to be reported to one of the base stations, which can take appropriate action (e.g., send a message on the internet or to a satellite). Depending on the exact application, different objective functions will require different data-propagation strategies, depending on things such as need for real-time response, redundancy of the data (which can be tackled via data aggregation and information fusion] techniques), need for security, etc. (B) Environmental monitoring A number of WSNs have been deployed for environmental monitoring.] Many of these have been short lived, often due to the prototype nature of the projects. Examples of longer-lived deployments are monitoring the

state of permafrost in the Swiss Alps: The PermaSense PermaSense Online Data Viewer and glacier monitoring.

Project,

(C) Machine Health Monitoring or Condition-Based Maintenance

Wireless sensor networks have been developed for machinery condition-based maintenance (CBM), as they offer significant cost savings and enable new functionalities. In wired systems, the installation of enough sensors is often limited by the cost of wiring, which runs between $10 and $1000 per foot. Previously inaccessible locations, rotating machinery, hazardous or restricted areas, and mobile assets can now be reached with wireless sensors. Often, companies use manual techniques to calibrate, measure, and maintain equipment. This labor-intensive method not only increases the cost of maintenance but also makes the system prone to human errors. Especially in US Navy shipboard systems, reduced manning levels make it imperative to install automated maintenance monitoring systems. Wireless sensor networks play an important role in providing this capability.

(D) Industrial Monitoring

(1) Water/Wastewater Monitoring

There are many opportunities for using wireless sensor networks within the water/wastewater industries. Facilities not wired for power or data transmission can be monitored using industrial wireless I/O devices and sensors powered using solar panels or battery packs. As part of the American Recovery and Reinvestment Act (ARRA), funding is available for some water and wastewater projects in most states.

(2) Landfill Ground Well Level Monitoring and Pump Counter

Wireless sensor networks can be used to measure and monitor the water levels within all ground wells in a landfill site and monitor leachate accumulation and removal. A wireless device and submersible pressure transmitter monitor the leachate level. The sensor information is wirelessly transmitted to a central data logging system to store the level data, perform calculations, or notify personnel when a service vehicle is needed at a specific well. It is typical for leachate removal pumps to be installed with a totalizing counter mounted at the top of the well to monitor the pump cycles and to calculate the total volume of leachate removed from the well. For most current installations, this counter is read manually. Instead of manually collecting the pump count data, wireless devices can send data from the pumps back to a central control location to save time and eliminate errors. The control system uses this count information to determine when the pump is in operation, to calculate leachate extraction volume, and to schedule maintenance on the pump.

(E) Flare Stack Monitoring

Landfill managers need to accurately monitor methane gas production, removal, venting, and burning. Knowledge of both methane flow and temperature at the flare stack can define when methane is released into the environment instead of being combusted. To accurately determine methane production levels and flow, a pressure transducer can detect both pressure and vacuum present within the methane production system. Thermocouples connected to wireless I/O devices create the wireless sensor network that detects the heat of an active flame, verifying that methane is burning. Logically, if the meter is indicating a methane flow and the temperature at the flare stack is high, then the methane is burning correctly. If the meter indicates methane flow and the temperature is low, methane is releasing into the environment.

(F) Water Tower Level Monitoring

Water towers are used to add water and create water pressure for small communities or neighborhoods during peak use times to ensure water pressure is available to all users. Maintaining the water levels in these towers is important and requires constant monitoring and control. A wireless sensor network that includes submersible pressure sensors and float switches monitors the water levels in the tower and wirelessly transmits this data back to a control location. When tower water levels fall, pumps are turned on to move more water from the reservoir to the tower.

(G) Vehicle Detection

Wireless sensor networks can use a range of sensors to detect the presence of vehicles ranging from motorcycles to train cars.

Fleet monitoring (outdoor and indoor location): It is possible to put a mote with a GPS module on board each vehicle of a fleet. The mote reports its coordinates so that its location is tracked in real time. The motes can be equipped with temperature sensors to control any disruption of the cold chain, which helps to ensure safety in the food and pharmaceutical industries as well as for some chemical shipments. Using GSM cells helps to get the position in scenarios where there is no GPS coverage, such as inside buildings, garages and tunnels. This alternative method consists of taking the information sent by the mobile phone cells and looking up their location in a previously saved database.

(H) Agriculture

Using wireless sensor networks within the agricultural industry is increasingly common. Gravity-fed water systems can be monitored using pressure transmitters to monitor water tank levels, pumps can be controlled using wireless I/O devices, and water use can be measured and wirelessly transmitted back to a central control center for billing. Irrigation automation enables more efficient water use and reduces waste.

(1) Windrow Composting

Composting is the aerobic decomposition of biodegradable organic matter to produce compost, a nutrient-rich mulch of organic soil produced using food, wood, manure, and/or other organic material. One of the primary methods of composting involves using windrows. To ensure efficient and effective composting, the temperatures of the windrows must be measured and logged constantly. With accurate temperature measurements, facility managers can determine the optimum time to turn the windrows for quicker compost production. Manually collecting data is time consuming, cannot be done continually, and may expose the person collecting the data to harmful pathogens. Automatically collecting the data and wirelessly transmitting it back to a centralized location allows composting temperatures to be continually recorded and logged, improving efficiency, reducing the time needed to complete a composting cycle, and minimizing human exposure and potential risk. An industrial wireless I/O device mounted on a stake with two thermocouples, each at a different depth, can automatically monitor the temperature at two depths within a compost windrow or stack. Temperature sensor readings are wirelessly transmitted back to the gateway or host system for data collection, analysis, and logging. Because the temperatures are measured and recorded continuously, the composting rows can be turned as soon as the temperature reaches the ideal point. Continuously monitoring the temperature may also provide an early warning of potential fire hazards by notifying personnel when temperatures exceed recommended ranges.

(2) Greenhouse Monitoring

Wireless sensor networks are also used to control the temperature and humidity levels inside commercial greenhouses. When the temperature and humidity drop below specific levels, the greenhouse manager must be notified via e-mail or cell phone text message, or host systems can trigger misting systems, open vents, turn on fans, or control a wide variety of system responses. Because some wireless sensor networks are easy to install, they are also easy to move as the needs of the application change.

WIRELESS SENSOR NETWORK - OBJECT TRACKING


1. Wireless Sensor Networks:

Wireless Sensor Networks today find their main usage in environmental monitoring problems. The aim of this project was to put these networks to use in a slightly different context: that of tracking objects. In this project, the sensing of moving objects was done by measuring radio signal strengths rather than by using sensors such as magnetometers, accelerometers, etc. In order to achieve this, we had to find, via experimental techniques, an empirical relation between a received signal strength metric (RSSI) and distance.

Figure 1: Locating objects in a Wireless Sensor Network

The motivation for doing this project was to allow a nearly zero-configuration network to be set up easily in a limited geographical environment where only restricted access is to be allowed to vehicles, people, etc. A typical example would be tracking moving vehicles inside a wildlife sanctuary, where it is pertinent to ensure that these vehicles stay restricted only to certain paths.

Why Wireless Sensor Networks?

Wireless Sensor Networks offer distinct advantages over other alternative approaches to solving such an object tracking problem. Some of these are mentioned below:
- Nearly zero configurability
- Low cost of sensor nodes compared to GPRS modems
- Minimal energy consumption
- Easy expansion and reduction of coverage area

2 Hardware & Software Used


The network was set up using Berkeley motes from Crossbow. The programming was done on MPR-410 Mica2 motes. These motes have an ATmega128L microcontroller and a CC1000 radio transceiver, and can be integrated with a variety of sensors using a 51-pin connector to an external sensor board. The motes are powered by two AA batteries.

Figure 2: The MPR-410 network mote

These motes can be interfaced with a computer using a programming board that connects via a serial port. This interfacing is necessary for programming as well as for transferring data over UART.

Figure 3: The MIB-510 programming board

3 Design Strategy
The sensor network setup consists of a number of nodes that have all been programmed with the same code. These nodes are capable of initially establishing routing in the network, and once this is done, they go into a listening mode during which they relay all packets from the transmitter being tracked to the base station. The transmitter is assumed to be hostile and does not transmit any information in the packet, only its own ID. These raw packets are taken, and a time stamp and a received signal strength parameter value are appended to their data payload by all listening nodes. The packets are then routed through the network. It is also ensured that during the routing period, time synchronization is achieved in the network. All the data is collated at a base station, which then passes all the data on to a processing server over UART.

3.1 The routing phase

To begin with, all the sensor nodes are booted up. The last node to be booted is the base station. The base station on booting broadcasts a route-finding message onto the network.

This message carries the node ID of the base along with a hop count field as well as the base station's time stamp. The hop count is initially set by the base station to 1. All nodes within a single hop distance of the base receive this message and set their Parent to be the base station and their Hop Distance to be 1. These nodes also reset their clocks to synchronize with the base station. All these nodes then re-broadcast the routing message, writing in their own node ID instead of the base's, incrementing the hop count by 1 and also updating the time stamp of the message. In this manner the route-finding messages are flooded through the network, and all nodes soon know who their parent node is and also at what hop distance they reside from the base. All message transmission in the listening period of a node is directed towards its parent node.

3.1.1 Avoiding collisions: When a packet flooding strategy like the one described above is used to determine routing, it is very likely that due to the large number of broadcast packets in circulation, some nodes may not receive the packet in one go due to interference. Hence it becomes essential to re-broadcast packets for a certain period of time to ensure with high probability that every node has received a routing packet at least once. To ensure this, the base station keeps sending out these broadcast messages for a fixed period of time (around 10 seconds) with a random back-off (of the order of 1 second). Every node will thus receive much more than just a single routing packet, and it must make a decision about which of these packets it will further transmit, as otherwise there will be excessive packet flooding in the network and functionality may get severely affected. To counter this, a simple heuristic was adopted: a node re-broadcasts only those packets which lead to an update of its Parent and/or Hop Distance.

3.1.2 Updating Parent and Hop Distance: Each node receives multiple routing packets during the initial working phase of the network. All of these cannot be allowed to change its Parent and/or Hop Distance. The simplest strategy to follow is that both values are changed only if the new Hop Distance is less than the old one. This simple check also ensures that at least one path to the base is available to every node in the network.

3.1.3 Time Synchronisation: The routing messages that flood the network also carry time stamps. Whenever a node receives a message and decides to change its Parent and Hop Distance, it also updates its clock. In such a manner time synchronisation may be achieved in the network. There are, however, a couple of issues with this strategy. The first problem is that of the clocks of adjacent nodes in the network being slightly out of phase (by an amount δ). This will happen due to the delay introduced by message radio transmission, reception and processing. This delay, although not the same in every case, will roughly be of the same order of magnitude. Therefore, in an n-hop network, the worst case delay would be nδ. This delay due to phase shifting is almost unavoidable, unless some correction factors are introduced in the processing of received data. The strategy we use to counter this phase difference is that although we increment the node clocks every 128 - 1024 milliseconds, the basic unit of time is 1 second. Thus, some of this phase difference is simply ironed over. However, if n were to become fairly large, this might pose a problem. The second problem is that of clock skew.
This is basically the difference in clock times of two nodes that might occur even though they may have started at the same time value. In the case of the hardware used, clock skew was of the order of 1 millisecond in every 50,000 milliseconds. Clock skew can be completely taken care of by periodically putting the system back into the routing phase, thus resynchronising clocks.
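Pulling together the rules of sections 3.1.1 to 3.1.3, here is a hedged sketch of the per-node update logic; the class and method names are invented for illustration, and the actual on-mote implementation is not reproduced here.

    class Node:
        def __init__(self, node_id):
            self.node_id = node_id
            self.parent = None
            self.hop_distance = float("inf")
            self.clock = 0

        def on_routing_packet(self, sender_id, sender_hops, base_time):
            """Adopt the sender as Parent only if it offers a shorter path."""
            if sender_hops + 1 < self.hop_distance:
                self.parent = sender_id
                self.hop_distance = sender_hops + 1
                self.clock = base_time      # resynchronize to the routing packet
                return True                 # re-broadcast the updated packet
            return False                    # drop it, limiting the flooding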

3.1.4 Adding and deleting nodes: The addition and deletion of nodes in the network can be done as long as rerouting is done. Hence, to allow for easy addition and deletion of nodes, the routing stage needs to be initiated repeatedly.

3.2 The listening phase


After the reception of a routing packet at a node, a short timer is started that signals the end of the routing period and the beginning of the listening period. In this period, the aim of each node is to listen for messages sent out by the transmitter being tracked and then try to route these packets all the way to the base station. Before routing, the nodes append three pieces of information to the received message:
- Node ID
- Time Stamp
- Received Signal Strength (RSSI value)

All the appended data is essential for the front-end processing in order to determine the location of the transmitter. A node ID to coordinate mapping is maintained at the processing server. Hence when the messages reach the server, it knows what signal strength is being received from where. This is the data needed for location estimation. The time stamp is necessary to ensure that when the location estimation is being done, signal strengths measured at the same time points are used, as otherwise irrelevant results may be obtained.

3.2.1 Avoiding collisions: All nodes getting the transmitter's signal will try to forward the message to their parent. To ensure no collisions occur, firstly, random back-offs are used. Secondly, all message forwards await an acknowledgement from the parent; this ensures that messages reach the parent, if not in the first go then in subsequent tries.
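A hedged sketch of the append step described above; the byte layout is an invented toy format, not the actual packet structure used on the motes.

    import struct

    def augment(payload, node_id, timestamp_s, rssi):
        """Append node ID, time stamp and RSSI to a relayed packet (toy layout)."""
        # '<HIb': little-endian uint16 node ID, uint32 time stamp, int8 RSSI
        return payload + struct.pack("<HIb", node_id, timestamp_s, rssi)

    pkt = augment(b"\x2a", node_id=7, timestamp_s=1042, rssi=-71)
    print(pkt.hex())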

4 Experimentation
Experiments were carried out with the intention of either arriving at certain empirical results or at optimal values of certain parameters to enhance network functionality. In this section, we describe some of the important experiments that were carried out.

4.1 Signal strength and Distances


It is essential to have knowledge of a relation between received signal strength and the distance of the emitting node, in order to make estimates of location. Hence, experiments were carried out in different field locations to establish the nature of this relation. A signal strength measure is provided by the sensor nodes in the form of an RSSI (Received Signal Strength Indication) value. This value has a known linear relationship with signal strength. Our aim is to correlate this signal strength indicator with distance. In order to do so, we conducted experiments in different environmental settings to establish such a relation. The experiments were conducted by placing a node, programmed to emit a fixed number of messages, at certain measured distances. These messages were then counted at the base station and their RSSI values on reception noted. These data readings were then used to do a quadratic least-squares curve fit to determine an empirical relation between distance and RSSI values. The experiment was performed in three environments:
1. Building corridors
2. Forest-type environment
3. Open ground

The major conclusions to be drawn from the experiments were:
- Each environment offered a slightly different relation for the RSSI to distance conversion, but the data obtained showed a quadratic relation by and large.
- Often small perturbations in the environment, such as people walking through, could render the readings irrelevant.
- In indoor conditions, there was no smooth variation of RSSI with distance.
- Other parameters, such as bit error rate, could be used more reliably than RSSI.

Figure 3: RSSI to Signal Strength relationship as specified in the CC1000 manual:

RSSI (dBm) = -51.3 × VRSSI - 49.2
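A one-line helper implementing the manufacturer's formula above:

    def rssi_dbm(v_rssi):
        """CC1000 RSSI pin voltage (V) to received power (dBm), per the formula."""
        return -51.3 * v_rssi - 49.2

    print(rssi_dbm(1.0))   # -> -100.5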

4.1.1 Obtaining empirical results


The experimental data was obtained as a set of RSSI values for each distance measurement. This data showed a roughly quadratic variation. To obtain an empirical conversion formula, a least-squares quadratic curve fit was done using MATLAB.
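An equivalent fit can be sketched in Python with NumPy; the distance/RSSI pairs below are invented placeholders, since the report does not list the raw readings.

    import numpy as np

    dist = np.array([1, 2, 4, 8, 12, 16])            # metres (placeholder data)
    rssi = np.array([-55, -62, -71, -80, -86, -90])  # mean RSSI (placeholder data)

    # Least-squares quadratic fit, mirroring the MATLAB curve fitting step
    a, b, c = np.polyfit(rssi, dist, deg=2)
    distance_of = lambda r: a * r**2 + b * r + c     # RSSI -> distance estimate
    print(round(float(distance_of(-75)), 2))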

4.2 Collision rates


The entire design strategy, as far as route determination as well as packet routing is concerned, uses a lot of non-intelligent random back-offs to prevent collisions of data packets. It was therefore necessary to establish the order of magnitude of these random back-offs in order to ensure, firstly, that time spent in communication is minimised and, secondly, that the packet loss or collision rate is minimised. In order to determine these values, a small single-hop network was set up using one base station, three listening nodes and a single transmitter. The nodes simply relayed all messages to the base exactly once without waiting for acknowledgements. The main conclusions drawn were:
- Simultaneous transmission from three nodes led to almost 100% packet loss.
- With differences of the order of 200+ ms there was nearly 0% packet loss.
- With a decrease in the time interval between transmissions, there was an increase in packet loss, but this increase was not very dramatic. Even with small intervals of about 20 ms, packet loss was below 10%.

5 User End Processing


At the user end, the server keeps receiving the signal strength values from the base station. These readings are recovered from the UART packet and correlated to get distance estimates. These are then used, along with a database that lists the locations of all the sensor nodes, to get an estimate of the location of the object being tracked.

Figure 4: Location estimation technique

Presently we use a simplistic scheme to determine the location of the object. This is well described by the figure above. We only consider the top three strongest signal values and use them to estimate the location. This sorting of readings and selection of the top three strongest signal measurements is done at the base station itself and is transparent to the server. This simplistic approach is, however, not a good idea, since it gives no result in a lot of cases due to the nature of the solution being sought and the unreliable relation between RSSI and distance. The idea is to have a Wireless Sensor Network up and running, this network being in constant communication with a computer on the network. The user end processing is then fed to a web-based interface from where the position of the transmitter can be tracked on screen.
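For contrast, here is a hedged sketch of one simple alternative estimator, a weighted centroid of the three strongest nodes; this is not the circle-intersection scheme of the figure, and all coordinates and readings below are invented for illustration.

    import numpy as np

    # Node ID -> known coordinates, the mapping kept at the processing server
    node_pos = {1: np.array([0.0, 0.0]),
                2: np.array([30.0, 0.0]),
                3: np.array([15.0, 25.0])}

    def locate(top3):
        """top3: list of (node_id, estimated_distance_m) from the RSSI fit."""
        w = np.array([1.0 / max(d, 0.1) for _, d in top3])   # closer -> heavier
        pts = np.array([node_pos[nid] for nid, _ in top3])
        return (w[:, None] * pts).sum(axis=0) / w.sum()

    print(locate([(1, 5.0), (2, 12.0), (3, 9.0)]))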

6 Extending Further
This project has a lot of scope for further improvement. A major amount of time was spent in familiarisation with the software tools and programming languages used, and hence complicated protocols for communication could not be implemented. Some further areas of improvement could be:
- Implementing a better multi-hop communication scheme that minimizes collisions
- A store-and-forward approach to collate data
- Decision making about when and which nodes to put to sleep
- Better algorithms for location estimation
- Use of more reliable parameters for distance measurement than RSSI
- Incorporating GPS to determine the position of nodes, instead of manually feeding coordinates
