REMOTE SENSING
[Figure: the remote sensing process — energy interaction with matter; data collection by sensors; results as analog data (visual classification) or digital data (digital classification); output as a map in raster or vector form, combined with ancillary data in a GIS.]
[Figure: energy interactions at the target — incident energy is reflected, transmitted, emitted, or absorbed; the sensor records the reflected and emitted energy, the data recorder stores it, and the users extract the required information.]
Multi-station images involve successive overlapping pictures along a flight path, using an aircraft or a spacecraft, for better perception of three-dimensional features and an improved signal-to-noise ratio.
Multi-band images exploit the fact that each type of feature tends to exhibit a unique tonal signature. When the brightness values seen in a series of images taken in different wavelength bands are suitably combined, specific terrestrial features can be identified unambiguously.
[Figure: data analysis workflow — analysis of customer request; film processor or computer system; analysis results.]
Oceanography
Hydrology
Geology
Geography
Forestry
Agriculture
INFORMATION REQUIREMENT
Data analysis and
interpretation
SOME APPLICATIONS
If accurate and/or up-to-date maps of an area are not available, a remote sensing image can be treated as a rudimentary map, for use as a base map on which information obtained from other sources can be portrayed.
Remote sensing images can be used to delineate the areal extent or pattern of features, for determination of their relative extent or measurement of their respective areas.
Agriculture and forestry
- Discrimination of vegetation types
- Measurement of crop species by acreage
- Measurement of timber acreage and volume by species
- Determination of range readiness and biomass
- Determination of vegetation vigour
- Determination of vegetation stress
- Determination of soil condition
- Determination of soil association
- Assessment of forest fire

Land use mapping
- Classification of land use
- Cartographic mapping and map updating
- Categorization of land suitability
- Distinguishing urban and rural areas
- Regional planning
- Management of transport networks
- Mapping of land-water boundaries

Geology
- Recognition of rock types
- Mapping of major geological units
- Revising geological maps
- Delineation of unconsolidated rocks and soils
- Mapping igneous intrusions
- Mapping recent volcanic surface deposits
- Mapping of landforms
- Determination of regional structures
- Mapping of lineaments, etc.

Water resources
- Determination of water boundaries
- Mapping of floods and flood plains
- Determination of areal extent of snow and snow boundaries
- Measurement of glacial features
- Measurement of sediment and turbidity patterns
- Mapping of ice
- Inventory of lakes
- Delineation of irrigated fields

Oceanography and marine resources
- Detection of living marine organisms
- Determination of turbidity patterns and circulation
- Mapping of shoreline changes
- Mapping of shoals and shallow areas

Environment
- Monitoring of surface mining and reclamation
- Mapping and monitoring of water pollution
- Detection of air pollution and its effects
- Determination of effects of natural disasters
- Monitoring environmental effects of human activities
ELECTROMAGNETIC RADIATION
Remote sensing relies on the measurement of
Electromagnetic (EM) energy.
EM energy can take several different forms.
One of the most important sources of EM energy is the Sun, which provides energy at all wavelengths.
Some of these wavelengths are important to us, such as the wavelengths in the visible region, which provide light, and the ultraviolet wavelengths, whose energy can be harmful to human skin.
Electromagnetic Energy
Electromagnetic energy can be modeled by waves or
by energy bearing particles called photons.
In the wave model, electromagnetic energy is
considered to propagate through space in the form of
sinusoidal waves.
These waves are characterized by an electric field (E) and a magnetic field (M), perpendicular to each other; for this reason the term electromagnetic energy is used.
[Figure: a sinusoidal electromagnetic wave — wavelength measured along the distance axis, with the electric field (E) and magnetic field (M) perpendicular to each other, propagating at the velocity of light c.]
Electromagnetic Energy
The wavelength of electromagnetic waves,
particularly important for understanding remote
sensing, is defined as the distance between successive
wave crests.
The frequency of the electromagnetic energy is the
number of cycles of a wave passing a fixed point over a
specific period of time.
Electromagnetic Energy
The photon model is considered when quantifying the amount of energy to be measured by a multi-spectral sensor.
The amount of energy for a specific wavelength is given by
Q = hν
where Q is the energy of a photon (J), ν is the frequency (Hz), and h is Planck's constant (6.6262 × 10⁻³⁴ J s).
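A quick numerical sketch of this relation, using Q = hν = hc/λ (the band wavelengths below are illustrative values, not taken from the text):

```python
# Photon energy Q = h * nu, or equivalently Q = h * c / lambda.
h = 6.6262e-34   # Planck's constant (J s), value as given in the text
c = 2.998e8      # speed of light (m/s)

def photon_energy(wavelength_m):
    """Energy (J) of a single photon of the given wavelength."""
    nu = c / wavelength_m   # frequency (Hz)
    return h * nu           # Q = h * nu

# Green light (~0.55 um) versus thermal infrared (~11 um):
q_green = photon_energy(0.55e-6)
q_tir = photon_energy(11e-6)
# Longer wavelengths carry less energy per photon, which is one reason
# thermal sensors must view larger ground areas to collect enough energy.
print(q_green, q_tir, q_green / q_tir)
```

The ratio equals λ₂/λ₁, so a green photon carries about 20 times the energy of an 11 μm thermal photon.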
Electromagnetic Spectrum
and its Characteristics
All matter above absolute zero (0 K) radiates EM energy due to molecular agitation.
The Sun and the Earth radiate energy in the form of waves.
Matter capable of absorbing and re-emitting all EM energy is known as a blackbody.
For a blackbody, both the emissivity (ε) and the absorptance (α) are equal to 1.
The amount of energy radiated by an object depends on its absolute temperature and emissivity, and it is a function of the wavelength.
If T increases, peak moves towards shorter wavelengths and the total area under the curve increases
[Figure: blackbody radiation curves for T = 1273 K (1000 °C), 1073 K (800 °C), 873 K (600 °C), and 673 K (400 °C), plotted against wavelength from 1.0 to 8.0 μm.]
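The behaviour described above can be checked numerically with Wien's displacement law and the Stefan-Boltzmann law (a sketch using standard constant values; the temperatures are those from the figure):

```python
# As T rises, the radiation peak shifts to shorter wavelengths (Wien's
# displacement law) and the total emitted energy grows as T^4
# (Stefan-Boltzmann law).
WIEN_B = 2898.0    # Wien's displacement constant (um K)
SIGMA = 5.6697e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def peak_wavelength_um(T):
    return WIEN_B / T                    # lambda_max = b / T

def total_exitance(T, emissivity=1.0):
    return emissivity * SIGMA * T**4     # M = e * sigma * T^4

for T in (673, 873, 1073, 1273):
    print(T, round(peak_wavelength_um(T), 2), round(total_exitance(T)))
```

For the figure's coolest curve (673 K) the peak falls near 4.3 μm; at 1273 K it moves to about 2.3 μm while the total exitance grows many times over.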
ULTRAVIOLET PORTION
The ultraviolet portion of the spectrum has the shortest wavelengths that are of practical use for remote sensing.
This radiation is beyond the violet portion of the visible wavelengths.
Some of the Earth's surface materials, primarily rocks and minerals, emit visible light when illuminated with ultraviolet radiation.
Microwave Region
The microwave range covers wavelengths from 1 mm to 1 m.
The visible region of the spectrum, commonly called light, occupies a relatively small portion of the EM spectrum.
[Figure: energy pathways — incident energy from the Sun passes through the atmosphere, where part is scattered; at the ground surface it is reflected, absorbed/transmitted, or emitted, and the sensor records the reflected and emitted energy.]
ATMOSPHERIC SCATTERING
Atmospheric scattering occurs when the particles
or gaseous molecules present in the atmosphere
cause the electromagnetic waves to be
redirected from their original path.
RAYLEIGH SCATTERING
Rayleigh scattering predominates where electromagnetic radiation interacts with particles that are smaller than the wavelength of the incoming light.
Examples are tiny specks of dust, and nitrogen (N2) and oxygen (O2) molecules.
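Rayleigh scattering strength varies approximately as 1/λ⁴, which is why shorter (blue) wavelengths are scattered far more than longer (red) ones and the clear sky appears blue. A minimal sketch of the relative effect (band centres assumed for illustration):

```python
# Relative Rayleigh scattering strength, proportional to 1/lambda^4.
def rayleigh_relative(wavelength_um, reference_um=1.0):
    """Scattering strength relative to a reference wavelength."""
    return (reference_um / wavelength_um) ** 4

blue, red = 0.45, 0.65   # approximate band centres (um), illustrative
ratio = rayleigh_relative(blue) / rayleigh_relative(red)
print(round(ratio, 1))   # blue light scattered ~4.4x more than red
```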
MIE SCATTERING
Mie scattering occurs when the wavelength of the
incoming radiation is similar in size to that of the
atmospheric particles.
Mie scattering is caused by aerosols which are the
mixture of gases, water vapour, and dust.
It is generally restricted to the lower atmosphere
where larger particles are more abundant, and
dominates under overcast cloud conditions.
Mie scattering influences the entire spectral region from the near-ultraviolet up to and including the near-infrared.
NON-SELECTIVE SCATTERING
Non-selective scattering occurs when the
particle size is much larger than the radiation
wavelength.
Typical particles responsible for this effect are
water droplets and larger dust particles.
Non-selective scattering is independent of
wavelength, with all wavelengths scattered about
equally.
ENERGY INTERACTIONS
WITH THE EARTH'S SURFACE
When electromagnetic energy is incident on any given feature of the Earth's surface, three fundamental energy interactions with the feature are possible.
[Figure: reflection geometry — for an ideal specular reflector the angle of incidence equals the angle of reflection, while a near-perfect diffuse reflector scatters energy in all directions.]
Diffuse reflections contain spectral information on the colour of the reflecting surface, whereas specular reflections do not.
Hence, in remote sensing, measuring the diffuse reflectance properties of terrain features is important.
[Figure: typical spectral reflectance curve of green vegetation from 0.4 to 2.8 μm, showing chlorophyll absorption bands in the visible region, high near-infrared reflectance, and declining reflectance in the middle IR.]
The spectral reflectance curve of green vegetation is distinctive and quite variable with wavelength.
VEGETATION
The curve shows low reflectance in the blue and red regions of the visible spectrum.
This low reflectance corresponds to two chlorophyll absorption bands: the chlorophyll present in a green leaf absorbs most of the incident energy in these regions.
These absorption bands are centered around 0.45 μm and 0.65 μm; hence a reflectance peak occurs in the green region, where chlorophyll absorption is minimal.
Thus, when a plant is under stress, the chlorophyll content is lower, reflectance is higher in this part of the spectrum, particularly in the red band, and the plant appears yellow or chlorotic.
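The red/NIR contrast described above underlies the widely used Normalized Difference Vegetation Index (NDVI), an index not introduced in the text itself; the reflectance values below are illustrative, not measured data:

```python
# NDVI = (NIR - red) / (NIR + red): high for healthy vegetation (strong
# NIR reflectance, strong chlorophyll absorption of red), lower when
# stress reduces chlorophyll and red reflectance rises.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)    # strong red absorption
stressed = ndvi(nir=0.40, red=0.20)   # less chlorophyll -> more red reflected
print(round(healthy, 2), round(stressed, 2))
```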
Bare Soil
Spectral reflectance curves of most soils are not very complex in appearance.
One of the most outstanding reflectance characteristics of dry soil is, generally, an increasing level of reflectance with increasing wavelength, particularly in the visible and near-infrared regions.
The energy-matter interactions are perhaps less complex, as the whole of the incident energy is either absorbed or reflected, none being transmitted through the material.
[Figure: spectral reflectance curves of silt, sand, and muck soils over 0.5 to 2.5 μm.]
Bare Soil
Although the shapes of the curves are similar, a number of interrelated soil properties must be considered when discussing the reasons for the differences in the amplitude of the curves.
The moisture content, the amount of organic matter, the amount of iron oxide, the relative percentages of sand, silt, and clay, and the roughness characteristics of the soil surface all significantly influence the spectral reflectance of soil.
Soil texture refers to the proportions of sand, silt, and clay.
Clay has particles smaller than 0.002 mm in diameter; silt has particles 0.002 to 0.05 mm in diameter; and sand has particles 0.05 to 2.0 mm in diameter.
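The particle-size limits just quoted can be expressed as a small lookup (a sketch only; `classify_particle` is a hypothetical helper, with the silt range taken as 0.002-0.05 mm):

```python
# Texture class of a single soil particle from its diameter (mm),
# using the clay/silt/sand limits given in the text.
def classify_particle(diameter_mm):
    if diameter_mm < 0.002:
        return "clay"
    if diameter_mm < 0.05:
        return "silt"
    if diameter_mm <= 2.0:
        return "sand"
    return "gravel"   # coarser than the sand fraction

print(classify_particle(0.001), classify_particle(0.01), classify_particle(0.5))
```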
Bare Soil
[Figure: spectral reflectance of bare soil at different moisture contents (0-4%, 5-12%, and 22-23%) over 0.5 to 1.7 μm.]
Bare Soil
[Figure: spectral reflectance of a soil before and after iron removal, over 0.5 to 2.5 μm.]
Bare Soil
It has been observed that soils under different climatic
conditions may not show the same relationship between
colour and organic matter content.
Well-drained soils (coarse particles) formed under high temperatures may have a high organic matter content and yet appear brown instead of black, unlike soils in cooler zones.
Thus the climatic region and the drainage conditions
must be taken into account when considering the
relationship between organic matter and spectral
reflectance.
Bare Soil
Iron oxide present in soils has a significant influence on the spectral reflectance characteristics of soil, as it can cause a significant decrease in reflectance, at least in the visible bands.
An excellent inverse relationship exists between reflectance in the visible region and the percentage of iron oxide. It is found that removal of iron oxide causes a marked increase in reflectance from 0.5 to 1.1 μm, while beyond 1.1 μm the effect is insignificant.
It is also seen that removal of organic matter causes a similar marked increase in reflectance over the same range of wavelengths.
Iron Oxide
[Figure: inverse relationship between reflectance in the 0.5 to 0.64 μm region and iron oxide content (%).]
Bare Soil
On the basis of the above, the following conclusions can be drawn regarding the spectral characteristics of soil.
An increase in moisture content causes a decrease in reflectance throughout the reflective portion of the spectrum.
Soil texture affects reflectance: reflectance increases as particle size decreases.
Water
Water reflects electromagnetic energy in the visible to near-infrared range.
Beyond 1200 nm, virtually all energy is absorbed. Curves for different types of water are given in the figure.
High reflectance is given by turbid (silt-laden) water, whereas water containing plants with chlorophyll has a reflection peak in the green wavelengths.
Compared to vegetation and soils, water has low reflectance: vegetation may reflect up to 50%, and soils up to 30-40%, while water reflects at most 10% of the incoming radiation.
[Figure: spectral reflectance curves for different types of water between 400 and 2400 nm.]
Spatial Resolution
Spatial resolution is one of the important characteristics of a remote sensing system; it determines the ability of a sensor to discern details in spatial data.
It refers to the size of the smallest feature that can be identified.
Spatial resolution is analogous to the sharpness of the image in conventional photography.
In digital images produced by electro-optical scanning systems, spatial resolution is described by the Instantaneous Field of View (IFOV), the solid angle through which the detector senses energy, usually expressed as an angle in milliradians.
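For a small IFOV, the ground-projected cell size is roughly altitude × IFOV. A minimal sketch, with the IFOV and altitude values assumed for illustration (not taken from the text):

```python
# Ground-projected size of the IFOV: D = H * beta for small angles,
# where H is the platform altitude and beta the IFOV in radians.
def ground_resolution_m(ifov_mrad, altitude_km):
    ifov_rad = ifov_mrad / 1000.0            # milliradians -> radians
    return altitude_km * 1000.0 * ifov_rad   # D = H * beta

# Assumed example: a 0.086 mrad IFOV viewed from 705 km altitude.
print(round(ground_resolution_m(0.086, 705), 1))   # ~60.6 m ground cell
```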
Spectral Resolution
It refers to those wavelengths to which a sensor is
sensitive.
It describes the ability of a sensor to define fine
wavelength intervals.
The two components which are considered in spectral
resolution are
(i) the number of wavelength bands or channels used
(ii) the width of each wave band.
[Figure: data of the same scene captured in different spectral bands (Band-1, Band-2, Band-3, Band-4).]
RADIOMETRIC RESOLUTION
The smallest difference in radiant energy that can
be detected is expressed as radiometric
resolution.
Radiometric characteristics of an image describe
the actual information contained in the image.
[Figure: the same scene rendered as a 2-bit image and as an 8-bit image.]
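The contrast between a 2-bit and an 8-bit image can be sketched numerically as quantization of the same signal into 4 versus 256 levels (`quantize` is a hypothetical helper written for illustration):

```python
# Radiometric resolution as quantization: map a normalized radiance in
# [0, 1] to an integer digital number with 2**bits levels.
def quantize(value, bits, max_value=1.0):
    levels = 2 ** bits
    dn = int(value / max_value * (levels - 1) + 0.5)  # round to nearest level
    return dn

signal = 0.37                                     # normalized radiance
print(quantize(signal, 2), quantize(signal, 8))   # coarse vs. fine levels
```

With 2 bits the signal collapses onto one of only 4 levels; with 8 bits, 256 levels preserve much finer differences in radiant energy.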
Temporal Resolution
Temporal resolution refers to the frequency of
data collection.
In remote sensing, to capture changes in environmental phenomena, data may have to be collected daily, monthly, seasonally, or yearly.
Temporal Resolution
The absolute temporal resolution of a remote sensing system, i.e., the time taken to image the same area at the same viewing angle a second time, is equal to the satellite's revisit period, which may be several days.
Spectral characteristics of features may change over time, and these changes can be detected by collecting and comparing multi-temporal imagery.
SENSORS
&
PLATFORM
(Note: included for general knowledge.)
BROAD CLASSIFICATIONS
GEOSTATIONARY SATELLITES
If a satellite is positioned in the equatorial plane at an altitude of about 36,000 km, its orbital period matches the Earth's rotation, so it remains stationary with respect to a point on the Earth's surface; such satellites are called geostationary.
LANDSAT
The first satellite designed to monitor the Earth's surface, LANDSAT-1, initially named ERTS-1 (Earth Resources Technology Satellite), was launched by NASA, USA, on July 23, 1972.
The LANDSAT mission was transferred to NOAA (National Oceanic and Atmospheric Administration) in 1983, and later commercialized in 1985 for providing data to the user community.
In order to optimize illumination conditions, all LANDSAT satellites are designed to have the same equatorial crossing time.
The special features of these LANDSAT satellites are a combination of sensors with spectral bands tailored to Earth observation, functional spatial resolution, and good areal coverage.
SENSORS

Orbital characteristics of the LANDSAT satellites:

| Satellite | Date of launch | Altitude (km) | Orbital inclination (°) | Orbital period (min) | Time of EQ crossing | Repeat cycle (days) |
| LANDSAT-1 | 23.7.72 | 900 | 99.1 | 103 | 9:42 A.M. | 18 |
| LANDSAT-2 | 22.1.75 | 900 | 99.1 | 103 | 9:42 A.M. | 18 |
| LANDSAT-3 | 5.3.78 | 900 | 99.1 | 103 | 9:42 A.M. | 18 |
| LANDSAT-4 | 16.7.82 | 705 | 98.2 | 99 | 9:45 A.M. | 16 |
| LANDSAT-5 | 1.1.84 | 705 | 98.2 | 98.9 | 9:45 A.M. | 16 |
| LANDSAT-6 | 5.10.93 | 705 | 98.2 | 98.9 | 10:00 A.M. | 16 |
| LANDSAT-7 | 15.4.99 | 705 | 98.2 | 98 | 10:00 A.M. | 16 |

Sensor payloads:

| Satellite | Sensors | Swath width (km) | Resolution (m) | No. of bands |
| LANDSAT-1 | MSS / RBV | 185 / 185 | 80 / 80 | 4 / 3 |
| LANDSAT-2 | MSS / RBV | 185 / 185 | 80 / 80 | 4 / 3 |
| LANDSAT-3 | MSS / RBV | 185 / 185 | 80 (120 for thermal band) / 40 | 5 / 1 |
| LANDSAT-4 | MSS / TM | 185 / 185 | 75 / 30 (120 for Band 6) | 4 / 7 |
| LANDSAT-5 | MSS / TM | 185 / 185 | 75 / 30 (120 for Band 6) | 4 / 7 |
| LANDSAT-6 | ETM (PAN / TM bands) | 185 | 13 × 15 / 30 (120 for Band 6) | 1 / 7 |
| LANDSAT-7 | ETM+ (TM bands / thermal / PAN) | 185 | 30 / 60 / 15 | 7 / 1 / 1 |

Spectral ranges:

| Sensor | Spectral range (μm) | Application |
| RBV | 0.475-0.575; 0.580-0.680; 0.690-0.830; 0.505-0.750 | |
| MSS | 0.5-0.6; 0.6-0.7; 0.7-0.8; 0.8-1.1; 10.4-12.6 (thermal) | |
| TM / ETM / ETM+ | 0.45-0.52; 0.52-0.60; 0.63-0.69; 0.76-0.90; 1.55-1.75; 10.4-12.5; 2.08-2.35 | 2.08-2.35: hydrothermal mapping |
| ETM+ PAN | 0.50-0.90 | All TM bands plus PAN |
SPOT Satellites
SPOT (Système Pour l'Observation de la Terre) is a series of imaging satellites designed and launched by CNES (Centre National d'Études Spatiales) of France, in collaboration with Sweden and Belgium, to provide Earth observation data.
SPOT satellites ushered in a new era in remote sensing by introducing linear-array sensors with a pushbroom scanning facility.
The SPOT sensors have pointable optics that enable off-nadir viewing, permitting full-scene stereoscopic imaging of the same area from two different satellite orbital paths.
This is of tremendous value for terrain interpretation, mapping, and visual terrain simulation.
SPOT Satellites
Another advantage of pointable optics is the steerability of the sensor by 27° on either side of the nadir.
This allows the sensor to view an area a number of times within the revisit period of 26 days, and to monitor specific locations exhibiting dynamic phenomena.
It also increases the chance of obtaining cloud-free scenes.
Within the equatorial region, the revisit interval reduces to 3 days, making it possible to view an area 7 times during the revisit period of the satellite.
At latitudes close to 40° to 45°, an area can be imaged 11 times during the same revisit period.
The SPOT satellites have equator crossing times around 10:30 A.M. local solar time.
So far, five SPOT satellites have been launched.
| Altitude | 822 km |
| Inclination | 98° |
| Orbital period | 101 minutes |
| Equator crossing time | 10:30 A.M. local solar time |
| Repeat cycle | 26 days |
| Swath width | 117 km (60 km per HRV, 3 km overlap) |
Sensor characteristics
HRV (High-Resolution Visible)

| Mode | Band | Spectral range (μm) | Resolution |
| XS (multispectral) | XS1 | 0.50 - 0.59 | 20 m |
| | XS2 | 0.61 - 0.68 | 20 m |
| | XS3 | 0.79 - 0.89 | 20 m |
| Panchromatic | PAN | 0.51 - 0.73 | 10 m |
Orbital characteristics

| Altitude | 830 km |
| Inclination | 98° |
| Orbital period | 101 mins |
| Equator crossing time | 10:30 A.M. local solar time |
| Repeat cycle | 26 days |
| Swath width | — |

HRVIR sensor characteristics:

| Mode | Band | Spectral range (μm) | Resolution |
| Multispectral | B1 | 0.50 - 0.59 | 20 m |
| | B2 | 0.61 - 0.68 | 20 m |
| | B3 | 0.79 - 0.89 | 20 m |
| | MIR | 1.58 - 1.75 | 20 m |
| Monospectral | — | 0.61 - 0.68 | 10 m |
VEGETATION instrument

| Band | Spectral range (μm) | Resolution |
| B0 | 0.43 - 0.47 | 1165 m |
| B2 | 0.61 - 0.68 | 1165 m |
Sensor characteristics

| Altitude | 832 km |
| Inclination | 98° |
| Orbital period | 101 mins |
| Equator crossing time | 10:30 A.M. LST |
| Repeat cycle | 26 days |
| Swath width | — |

| Mode | Band | Spectral range (μm) | Resolution |
| Multispectral | B1 | 0.50 - 0.59 | 10 m |
| | B2 | 0.61 - 0.68 | 10 m |
| | B3 | 0.79 - 0.89 | 10 m |
| | SWIR | 1.58 - 1.75 | 20 m |
| Panchromatic | PAN | 0.51 - 0.73 | 5 m, 2.5 m |
IRS Satellites
The IRS satellite systems are under the umbrella of the National Natural Resources Management System (NNRMS), and are coordinated at the national level by the Planning Committee of NNRMS (PC-NNRMS).
The launch of IRS-1A on 17 March 1988, India's first civil remote sensing satellite, marked the beginning of a successful journey in the international space programme.
| | IRS-1A & 1B | IRS-1C | IRS-1D |
| Orbit | Near-polar, Sun-synchronous | Near-polar, Sun-synchronous | Sun-synchronous |
| Altitude | 914 km | 817 km | 821 km |
| Inclination | 99.028° | 98.69° | 98.62° |
| Orbital period | 103.192 minutes | 101.35 minutes | 100.56 minutes |
| Equatorial crossing time | — | — | — |
| Repeat cycle | 22 days | 24 days | 25 days |
| Swath width | 148 km (LISS-I); 74 km (LISS-II) | 70 km (PAN); 810 km (WiFS) | 70 km (PAN); 812 km (WiFS) |
| Resolution | 72.50 m (LISS-I); 36.25 m (LISS-II) | 5.8 m (PAN); 188.3 m (WiFS) | 5.8 m (PAN); 188 m (WiFS) |
LISS spectral bands

| Band | LISS (IRS-1A/1B) | IRS-1C | IRS-1D | Applications |
| B1 | 0.45 - 0.52 μm | — | — | Coastal environment; chlorophyll absorption |
| B2 | 0.52 - 0.59 μm | 0.52 - 0.59 μm | 0.52 - 0.59 μm | Green vegetation; soil/rock discrimination |
| B3 | 0.62 - 0.68 μm | 0.62 - 0.68 μm | 0.62 - 0.68 μm | Chlorophyll absorption for plant species |
| B4 | 0.77 - 0.86 μm | 0.77 - 0.86 μm | 0.77 - 0.86 μm | Delineation of land and water |
| B5 | — | 1.55 - 1.70 μm | 1.55 - 1.70 μm | Vegetation mapping |
| WiFS | — | 0.62 - 0.68 μm; 0.77 - 0.86 μm | 0.62 - 0.68 μm; 0.77 - 0.86 μm | |
| PAN | — | (panchromatic) | (panchromatic) | |
IRS P SATELLITES
In order to strengthen India's own capability to launch space vehicles, a series of satellites known as IRS-P was initiated.
IRS-P3 and IRS-P4 were launched from Sriharikota, India, using the Polar Satellite Launch Vehicle (PSLV).
The IRS-P3 satellite was placed in a near-polar, Sun-synchronous orbit at an altitude of 817 km with an equatorial crossing time of 10:30 A.M.
It carried an X-ray astronomy payload and two remote sensing sensors, namely WiFS and MOS (Modular Opto-electronic Scanner).
The IRS-P3 WiFS is similar to the IRS-1C WiFS except for the inclusion of an additional band in the Middle Infrared (MIR) for dynamic vegetation studies, while MOS caters to oceanographic applications.
| | MOS-A | MOS-B | MOS-C | WiFS |
| Resolution (m) | 1598 × 1395 | 523 × 523 | 523 × 644 | 188 × 188 (B3 & B4); 188 × 246 (B5) |
| Swath (km) | 195 | 200 | 192 | 770 |
| Repetivity (days) | 24 | 24 | 24 | 24 |
| Spectral band (μm) | 0.755 - 0.768 | 0.408 - 1.01 | 1.5 - 1.7 | — |
IRS P4
IRS-P4, also known as Oceansat, was launched on
May 26, 1999 at 11:52 A.M. from Sriharikota using a
PSLV vehicle.
It carried two payloads, Ocean Colour Monitor
(OCM) and Multi-frequency Scanning Microwave
Radiometer (MSMR).
IRS P6
IRS-P6, also known as ResourceSat-1, was launched on October 17, 2003 from Sriharikota, India, on a PSLV rocket.
The main objective of ResourceSat-1 is not only to provide continued remote sensing data for integrated land and water management and agriculture-related applications, but also to provide additional capabilities such as real-time availability of data to ground stations anywhere in the world through its advanced onboard solid-state recorder.
ResourceSat-1 carries three sensors similar to those of IRS-1C and IRS-1D, but with some additional features.
Specifications

| Sensor | Spectral bands | Spatial resolution / swath | Radiometric resolution |
| LISS-IV | B2 0.52 - 0.59 μm; B3 0.62 - 0.68 μm; B4 0.77 - 0.86 μm | 5.8 m (at nadir) | 10 bits |
| AWiFS | B2 0.52 - 0.59 μm; B3 0.62 - 0.68 μm; B4 0.77 - 0.86 μm; B5 1.55 - 1.70 μm | 370 km swath per module, 740 km combined | 10 bits |
| Spectral region | Sensor bands |
| Visible blue | TM Band 1 |
| Visible green | TM Band 2; MSS Band 1; SPOT XS/X1 Band 1; IRS LISS Band 1 |
| Visible red | TM Band 3; MSS Bands 2 to 3; SPOT XS/X1 Band 2; IRS LISS Band 2 |
| Near infrared | TM Band 4; MSS Bands 3 to 4; SPOT XS/X1 Band 3; IRS LISS Band 3 |
| Shortwave infrared | TM Bands 5 and 7; SPOT X1 Band 4; IRS LISS Band 4 |
| Thermal infrared | TM Band 6 |
CARTOSAT 1
With the availability of IRS-1C/1D data, it became possible to support cartographic and town planning applications up to 1:10,000 scale, with stereo pairs of the imagery providing height information to an accuracy of approximately 10 m.
This provided the necessary impetus to develop high-resolution sensors dedicated to cartographic and mapping applications.
Specifications

| Spatial resolution | 2.5 m (along track) |
| Spectral resolution | (a) No. of bands: 1 (panchromatic); (b) Band width: 500 nm to 850 nm |
| Radiometric resolution | 10 bits |
| Swath (km) | 30 (stereo); 26.855 (Fore + Aft combined in mono mode) |
IKONOS
Space Imaging Inc. made history on September 24, 1999, when it launched IKONOS, the world's first commercial high-resolution imaging satellite, using the Athena II rocket.
IKONOS is derived from the Greek word for image.
IKONOS characteristics

| Launch vehicle | Athena II |
| Altitude | — |
| Inclination | 98.1° |
| Speed | 4 miles/sec (17,500 miles/hr), about 7 km/sec (26,000 km/hr) |
| Descending nodal crossing time | 10:30 A.M. |
| Orbit time | 98 minutes |
| Orbit type | Near-polar, Sun-synchronous |
| Resolution | 1 m panchromatic; 4 m multispectral |
| Swath width | 13 km at nadir |
| Scene sizes | — |
| Revisit frequency | 3 days at 1 m resolution; 1.5 days at 1.5 m resolution; 1 day at 2 m resolution |
[Figure: sample IKONOS images.]
Quick Bird
The QuickBird satellite was launched in October 2001 by DigitalGlobe Inc.
It acquires black-and-white images with a resolution of 61 cm and colour images (4 bands) with a resolution of 2.44 m, covering a surface area of 16.5 km × 16.5 km.
QuickBird provides panchromatic, multispectral, and colour image products that enable superior image classification and analysis, based on 4 spectral bands at 11-bit dynamic range.
The imagery products are available at different processing levels (basic, standard, and ortho), serving the needs of different user groups.
| Orbit | Sun-synchronous |
| Orbital altitude | 450 km |
| Swath width | 22 km |
| Dynamic range | 11 bits |
| Image area | 16.5 × 16.5 sq km |
| Revisit time | 1 - 3.5 days |
| Launch date | 18 Oct. 2001 |

| Sensor type | Resolution | Channels | Spectral range |
| PAN | 0.61 m | 1 | 0.45 - 0.90 μm |
| MSI | 2.44 m | 4 | — |

[Figure: sample QuickBird multispectral image.]
IMAGE INTERPRETATION
An image is a detailed photographic documentation or record
of features on the ground at the time of data acquisition.
Such acquired images are examined by an analyst in a
systematic manner with the help of some supporting
information collected from maps, field visit reports, or
previously interpreted images of the same area.
The interpretation of the information is carried out on the
basis of certain physical characteristics of the object(s) and
phenomena appearing in the image.
The success of an image interpretation is dependent upon the
experience of the analyst, the type of object or phenomena
being interpreted, and the quality of the image.
ELEMENTS OF
PHOTOINTERPRETATION
An object or phenomena on an image can be
identified, recognized, and delineated on the
basis of some of the typical characteristics of
the image.
These characteristics, which allow for a
systematic and logical approach in carrying out
image interpretation, are known as the
elements of photo interpretation.
These image characteristics are described
below.
ELEMENTS
SIZE
SHAPE
TONE
TEXTURE
PATTERN
SHADOW
ASSOCIATION
SITE
SIZE
Size of objects in an image is a function of scale.
It is important to assess the size of a target relative to
other objects in a scene, as well as its absolute size, to
aid in the interpretation.
A quick approximation of target size can speed up the interpretation.
For example, if an interpreter has identified an area with a
number of buildings, then large buildings such as
factories or warehouses would suggest commercial
property, whereas small buildings would indicate
residential use.
SHAPE
Shape normally refers to the general form or
outline of individual features.
It is a very distinctive clue for interpretation.
Normally, man-made features tend to have defined edges leading to regular shapes, while natural objects have irregular shapes.
Roads, canals, and buildings are man-made objects and have regular shapes, while forest areas and water bodies tend to have irregular shapes.
TONE
Tone of an object or phenomena refers to the
relative brightness or colour in an image.
It is one of the fundamental elements for
distinguishing between different objects or
phenomena, and is a qualitative measure.
Generally, tone is designated as dark, medium or
light.
PATTERN
Pattern refers to the spatial arrangement of
visibly discernable objects.
Typically, the repetition of similar tones and
texture produces a distinctive and recognizable
pattern.
For example, houses laid out in orderly manner in
urban area or the trees in an orchard create
different patterns
[Figure: repeating patterns of roads and houses in an urban area.]
TEXTURE
Texture is referred to as the frequency of tonal
changes in particular areas of an image.
Texture is a qualitative characteristic, and is
normally categorized as rough or smooth.
An area of dry sand will appear on an image
having smooth texture, since the variation of tone
for long stretches is not present.
Similarly, an area covered with forest having
variety of tree species with varying canopy size,
shape, and density will appear with a rough
texture, as tone will be changing very rapidly.
[Figure: examples of rough and smooth image texture.]
SHADOW
Shadow is an important characteristic of image
interpretation.
It gives an idea of the profile and relative height of
an object, hence making identification easier.
In mountainous areas, shadow is an important
characteristic of images as it enhances the ground
topography, and hence helpful for identifying
variations in geological landforms.
ASSOCIATION
Association is another important characteristic, as it considers the interrelation of an object or phenomenon with other objects in its close proximity.
For example, white irregular patches adjacent to
river indicate presence of dry sand banks of the
river.
A regular grid of lines having small regular shaped
box like objects is an indication of urban areas.
SITE
Site refers to the locational characteristics of objects, such as topography, soil, vegetation, and cultural features.
The elements can be ordered within the interpretation procedure with respect to their degree of complexity.
HIERARCHY
[Figure: hierarchy of the interpretation elements ordered by degree of complexity, from primary (tone/colour) through secondary and tertiary levels (e.g., height, shadow) to the higher-order elements site and association.]
COMPARISON: VISUAL vs. DIGITAL
[Figure: processing flow — digital data feed either visual analysis or image classification (unsupervised or supervised, supported by ancillary data); the classified output undergoes post-processing operations and data merging, followed by assessment of accuracy and a final report.]
STATISTICAL EVALUATION OF REMOTE SENSING DATA
Image Quality Assessment and Statistical Evaluation
Many remote sensing datasets contain high-quality,
accurate data.
Unfortunately, sometimes error (or noise) is introduced
into the remote sensor data by:
the environment (e.g., atmospheric scattering),
random or systematic malfunction of the remote
sensing system (e.g., an uncalibrated detector creates
striping), or
improper airborne or ground processing of the remote sensor data prior to actual data analysis (e.g., inaccurate analog-to-digital conversion).
Viewing Individual Pixels
Viewing individual pixel brightness values in a remotely sensed image is one of the most useful methods for assessing the quality and information content of the data.
Virtually all digital image processing systems allow the analyst to:
use a mouse-controlled cursor (crosshair) to identify a geographic location in the image (at a particular row and column or geographic x,y coordinate) and display its brightness value in n bands,
display the individual brightness values of an individual band in a matrix (raster) format.
[Figures: cursor and raster display of brightness values; individual pixel display of brightness values; raster display of brightness values; two- and three-dimensional evaluation of pixel brightness values within a geographic area.]
UNIVARIATE STATISTICS
(1) Histogram
(2) Cumulative histogram
(3) Minimum value
(4) Maximum value
(5) Mean
(6) Standard Deviation
(7) Skewness
(8) Kurtosis
Histogram
A histogram represents the distribution of
information within a defined range.
Here, for each value, the number of times that the value occurs is counted.
This number of occurrences is known as the frequency.
When these frequencies are plotted, with the
values on the x-axis and its associated frequency
on the y-axis, then the resulting plot is known as a
histogram.
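The counting described above can be sketched in a few lines for a tiny sample of brightness values (illustrative data, not from the text):

```python
# Build a histogram: for each brightness value, count its frequency.
from collections import Counter

pixels = [10, 12, 10, 11, 13, 10, 12, 12, 11, 10]  # sample brightness values
histogram = Counter(pixels)                        # value -> frequency

for value in sorted(histogram):
    print(value, histogram[value])
```

Plotting the values on the x-axis against these counts on the y-axis produces the histogram.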
Cumulative Histogram
Some contrast enhancement techniques, such as histogram matching or histogram equalization, require the cumulative histogram.
It is the sum of all the frequencies up to a given value, cum(k) = Σ (i = 1 to k) f(BVi), and is generally expressed as a fraction or as a percentage of all the frequencies in the image.
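The running sum can be sketched the same way (illustrative pixel values, expressed here as fractions of all pixels):

```python
# Cumulative histogram: running sum of frequencies up to each brightness
# value, normalized by the total pixel count.
pixels = [10, 12, 10, 11, 13, 10, 12, 12, 11, 10]
n = len(pixels)

cumulative = {}
running = 0
for v in sorted(set(pixels)):
    running += pixels.count(v)   # frequency of this brightness value
    cumulative[v] = running / n  # cumulative fraction up to v

print(cumulative)
```

The cumulative fraction rises monotonically and reaches 1.0 at the maximum brightness value, which is exactly the mapping used by histogram equalization.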
STANDARD DEVIATION
The standard deviation is the root mean square
(RMS) deviation of the values from their arithmetic
mean.
Standard deviation is the most common measure
of statistical dispersion, measuring how the values
in a data set are distributed.
If the data points are all close to the mean, then
the standard deviation is close to zero.
If many data points are far from the mean, then the
standard deviation is far from zero.
If all the data values are equal, then the standard
deviation is zero.
A large standard deviation indicates that the data
points are far from the mean and a small standard
deviation indicates that they are clustered closely
around the mean.
σ = sqrt[ (1 / (N - 1)) Σ (i = 1 to N) (BVi - μ)² ]
where N is the number of pixels, BVi the individual brightness values, and μ their mean.
SKEWNESS
It is a measure of the asymmetry of the probability
distribution of a real-valued random variable.
A distribution has positive skew (right-skewed), if
the right (higher value) tail is longer and negative
skew (left-skewed) if the left (lower value) tail is
longer.
The skewness can be calculated using the formula
skewness = (1 / N) Σ (P = 1 to N) [ (BVP - μ) / σ ]³
KURTOSIS
A histogram may be symmetric but may have a peak that is
very sharp or one that is subdued when compared with a
perfectly normal distribution.
A perfectly normal distribution (histogram) has zero kurtosis.
The greater the positive kurtosis value, the sharper the peak in
the distribution when compared with a normal histogram.
Conversely, a negative kurtosis value suggests that the peak
in the histogram is less sharp than that of a normal
distribution.
Kurtosis
So, it is a measure of the peakedness of the
probability distribution of a real-valued random
variable.
Higher kurtosis means more of the variance is due
to infrequent extreme deviations, as opposed to
moderately sized deviations.
Kurtosis = (1/N) Σ (i=1 to N) [ (BVᵢ − μ) / σ ]⁴ − 3
Types of Kurtosis
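A minimal sketch of the moment-based measures above, using NumPy on a hypothetical set of brightness values. Note that the sample standard deviation uses the N − 1 denominator while the skewness and kurtosis formulas use 1/N, following the formulas in this section:

```python
import numpy as np

# Hypothetical brightness values with one outlier (long right tail)
bv = np.array([10., 12., 12., 13., 15., 60.])

mean = bv.mean()
std = bv.std(ddof=1)          # N-1 denominator, as in the standard deviation formula
z = (bv - mean) / std
skewness = np.mean(z**3)      # > 0 here: right-skewed distribution
kurtosis = np.mean(z**4) - 3  # excess kurtosis; 0 for a perfectly normal distribution

print(round(mean, 2), round(std, 2), round(skewness, 2), round(kurtosis, 2))
```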
MULTIVARIATE STATISTICS
Scatter plots
Covariance
Correlation.
SCATTERPLOT
A scatterplot is a useful summary for a set of bivariate data, usually drawn before working out a
linear correlation or fitting a regression line.
It gives a good visual picture of the relationship
between the two variables, and aids the interpretation
of the correlation coefficient or regression model.
Each unit contributes one point to the scatterplot, on
which points are plotted but not joined.
The resulting pattern indicates the type and strength
of the relationship between the two variables.
[Figure: example scatterplots showing positive, negative, and non-associated relationships between two variables]
COVARIANCE MATRIX
A covariance matrix is a detailed form of the error
estimate of the data and is also known as the
variance-covariance matrix.
         Band 1   Band 2   Band 3   Band 4
Band 1   σ₁²      Cov12    Cov13    Cov14
Band 2   Cov21    σ₂²      Cov23    Cov24
Band 3   Cov31    Cov32    σ₃²      Cov34
Band 4   Cov41    Cov42    Cov43    σ₄²
where
Covij = SPij / (n − 1)
SPij = Σ (k=1 to n) x_ki · x_kj − [ Σ (k=1 to n) x_ki ] [ Σ (k=1 to n) x_kj ] / n
CORRELATION
It is also known as correlation coefficient, and it
indicates the strength and direction of a linear
relationship between two random variables.
In statistics, correlation refers to the departure of two
variables from independence.
There are several coefficients, measuring the degree
of correlation, adapted to the nature of data.
The best known is the Pearson Product Moment
correlation coefficient, which is obtained by dividing
the covariance of the two variables by the product of
their standard deviations.
r_ij = Cov_ij / (σᵢ σⱼ)
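As a sketch, the variance-covariance matrix and the Pearson coefficient for two hypothetical bands can be computed as:

```python
import numpy as np

# Hypothetical brightness values for two bands
band1 = np.array([62., 54., 39., 43.])
band2 = np.array([71., 63., 24., 31.])

cov = np.cov(band1, band2)   # 2x2 variance-covariance matrix (n-1 denominator)
r = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])   # Cov12 / (sigma1 * sigma2)

print(cov)
print(round(r, 3))           # matches np.corrcoef(band1, band2)[0, 1]
```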
RECTIFICATION
AND
RESTORATION
Atmospheric correction
While discussing the initial statistics, it was found that
remote sensing data in the visible bands are affected by
atmospheric effects due to scattering of electromagnetic
energy.
This causes a degradation in the image quality.
Because of this scattering, even a black object
appears to have some brightness, which is an error.
This error has to be removed.
PHYSICAL MODELING
These models permit observed reflectance to be
adjusted close to the true value that might be
observed under clear atmospheric conditions.
These models provide a rigorous and accurate
solution with flexibility to model a large variety of
atmospheric conditions.
However, these also have many significant
disadvantages.
Generally, these are complex models requiring the
use of intricate computer programs.
REGRESSION METHOD
The third category of correction method, while retaining the
concept of examining the brightness of object within each
scene, also accounts for the interrelationship between
separate bands.
In this approach, a visible band is plotted against an infrared
band, and a best-fit (least-squares) straight line is computed
using standard regression methods.
The offset a on the x-axis as shown in Fig. represents an
estimate of the atmospheric correction to be applied for that
particular band.
This method is known as the regression method. The
advantage of using regression method is that it can be used
for local areas also.
[Figure: infrared band plotted against visible band; the best-fit line's offset a on the visible-band axis estimates the atmospheric correction]
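A sketch of the regression method under the assumption of a simple additive haze offset (all DN values hypothetical): a least-squares straight line is fitted and its intercept is taken as the atmospheric correction for the visible band:

```python
import numpy as np

# Hypothetical pixel samples from an infrared band and a hazy visible band.
# The visible band carries an additive atmospheric offset of about 12 DN.
ir = np.array([5., 20., 40., 60., 80.])
vis = 0.6 * ir + 12 + np.array([0.3, -0.2, 0.1, -0.4, 0.2])

# Least-squares straight line: the intercept estimates the haze offset
slope, offset = np.polyfit(ir, vis, 1)
vis_corrected = vis - offset   # subtract the estimated atmospheric offset

print(round(offset, 2))
```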
GEOMETRIC RECTIFICATION
Geometric rectification is the process by which the
geometry of an image is made planimetric.
It requires the use of GCPs whose image
coordinates in terms of rows and pixel number are
known along with the map or ground coordinates.
It is an accurate process since each pixel is
referenced not only by its image coordinates but
also by a ground coordinate system in a standard
map projection, and hence known as image to
map rectification.
GEOMETRIC CORRECTION
To remove systematic errors, it requires an in-depth
knowledge of a large number of parameters related
to satellite ephemeris data and these errors are
corrected at the Master Data Centre.
Generally, non-systematic errors are corrected by
the user by selecting a suitable number of GCPs.
There are two geometric correction procedures,
i.e., geometric rectification and geometric
registration, which are generally applied.
IMAGE TO MAP RECTIFICATION
In order to rectify a remote sensing image to a map,
the residual error at each GCP is computed as:
RMSerror = √[ (x' − x0)² + (y' − y0)² ]
where (x0,y0) and (x', y') are the original and computed
image coordinates of the GCP respectively.
By computing RMSerror for all the GCPs, it is possible to
identify the GCP which exhibits the greatest error and
also the sum total of RMSerror of all GCPs.
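The RMS error computation can be sketched as follows, for a hypothetical list of GCPs with original and computed image coordinates:

```python
import math

# Hypothetical GCPs: ((x0, y0), (x', y')) = (original, computed) coordinates
gcps = [((100.0, 200.0), (101.0, 199.0)),
        ((350.0, 420.0), (350.5, 421.5)),
        ((512.0, 640.0), (515.0, 636.0))]

# RMS error of each GCP: sqrt((x' - x0)^2 + (y' - y0)^2)
errors = [math.hypot(x1 - x0, y1 - y0) for (x0, y0), (x1, y1) in gcps]

# Identify the GCP exhibiting the greatest error, and the total error
worst = max(range(len(errors)), key=lambda i: errors[i])

print([round(e, 2) for e in errors])
print(worst)                  # index of the worst GCP
print(round(sum(errors), 2))  # sum total over all GCPs
```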
INTENSITY INTERPOLATION
After the spatial interpolation is carried out, the next
step is to extract or identify the brightness values of
new image from the old image.
This process is known as resampling.
The resampling process calculates the new pixel
values from the original digital pixel values in the
uncorrected image.
Nearest Neighbour
Nearest neighbour resampling uses the digital
value from the pixel in the original image which is
nearest to the new pixel location in the corrected
image.
This is one of the simplest methods and does not
alter the original values, but may result in some
pixel values being duplicated while others are lost.
This method also tends to result in a disjointed or
blocky image appearance.
INTENSITY INTERPOLATION
BILINEAR INTERPOLATION
Bilinear interpolation resampling takes a weighted
average of four pixels (BVwt) in the original image
nearest to the new pixel location.
BVwt = [ Σ (k=1 to 4) Zₖ / Dₖ² ] / [ Σ (k=1 to 4) 1 / Dₖ² ]
where Zₖ are the values of the four surrounding pixels and Dₖ their distances from the new pixel location.
The average process alters
the original pixel
values, and creates entirely new digital values in
the output image.
CUBIC CONVOLUTION
Cubic convolution resampling goes even further to
calculate a distance weighted average of a block
of sixteen pixels from the original image which
surrounds the new output pixel location.
As in bilinear interpolation, this method also results
in a completely new pixel value.
However, both the methods produce images which
are sharper in appearance and avoid the blocky
appearance of the nearest neighbour method.
Method              Technique                                   Relative computing time   Advantages                                      Disadvantages
Nearest neighbour   Transfers the brightness value of           1                         Simple to compute; image array                  Image disjointed or blocky
                    the nearest pixel                                                     values unaltered
Bilinear            Proximity-weighted average of the           10                        Smoother image, avoids blockiness               Alters the original pixel values
                    four nearest pixels
Cubic convolution   Proximity-weighted average of the           20                        Very smooth image                               Complex to compute; gray values altered
                    sixteen nearest pixels
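A sketch contrasting the two simplest resampling rules at one non-integer location; bilinear is written here in the common fractional-offset form rather than the inverse-distance (1/D²) weighting shown above, and the image values are hypothetical:

```python
import numpy as np

# Hypothetical 2x2 original image
img = np.array([[10., 20.],
                [30., 40.]])

def nearest(img, r, c):
    # Transfer the value of the closest original pixel (values unaltered)
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    # Weighted average of the four surrounding pixels (creates new values)
    r0, c0 = int(r), int(c)
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0] +
            (1 - dr) * dc * img[r0, c0 + 1] +
            dr * (1 - dc) * img[r0 + 1, c0] +
            dr * dc * img[r0 + 1, c0 + 1])

print(nearest(img, 0.4, 0.4))   # 10.0 - an unchanged original value
print(bilinear(img, 0.4, 0.4))  # 22.0 - an entirely new value
```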
IMAGE
ENHANCEMENT
Contrast enhancement methods can be categorized as follows:
i. Linear
ii. Non-linear
In linear enhancement techniques, the mathematical implementation is based on a uniform slope, while in non-linear methods the enhancement is done using a varying slope.
MIN-MAX STRETCHING.
It is one of the simplest methods of improving the
contrast.
Here the lower and upper values of the histogram of the
original image are made to fit between the full ranges of
the brightness scale.
The lower value of the original histogram is assigned a
zero brightness value while the upper value of the
original histogram is assigned a value of 255.
All other intermediate values of the histogram are
reassigned new brightness values by linear
interpolation, expressed as:
xnew = [ (xin − xmin) / (xmax − xmin) ] × xr
Where
xnew = the new brightness value after stretching,
xmin = the minimum brightness value of the original
data,
xmax = the maximum brightness value of the original
data,
xin = the original input brightness value,
xr = the range of the brightness scale (0–255 for 8-bit data).
[Figure: (a) original histogram with values between xmin and xmax; (b) linear transformation mapping xmin → 0 and xmax → 255; original image shown for comparison]
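A minimal NumPy sketch of min-max stretching a band to the full 8-bit range (values hypothetical):

```python
import numpy as np

# Hypothetical low-contrast band
band = np.array([[60., 70., 80.],
                 [90., 100., 110.]])

xmin, xmax = band.min(), band.max()
# x_new = (x_in - x_min) / (x_max - x_min) * x_r, with x_r = 255
stretched = (band - xmin) / (xmax - xmin) * 255.0
stretched = stretched.round().astype(np.uint8)

print(stretched.min(), stretched.max())   # 0 255
```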
PERCENTILE STRETCHING
Sometimes, when Min-Max stretching is performed,
the tail ends of the histogram are long resulting in
no improvement in image quality.
Under such circumstances, it is better to clip a
certain percentage, i.e., 1%, 2%, 5% of the data
from the two tail ends of the histogram of the
original image.
The method is known as percentile stretching.
The diagram below illustrates the principle of
percentile stretching for an image whose histogram
has long tails.
[Figure: (a) histogram of an image with long tails; (b) transformation for a 5% stretch, clipping 5% of the pixels from each tail before stretching linearly to 0–255; original image and 5%-stretched image shown for comparison]
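Percentile stretching can be sketched with np.percentile, clipping 5% from each tail of a hypothetical band before the linear stretch:

```python
import numpy as np

# Hypothetical band with long-tailed values
rng = np.random.default_rng(0)
band = rng.normal(100, 10, size=(50, 50))

# Clip points: 5% of the data from each tail of the histogram
lo, hi = np.percentile(band, [5, 95])

# Linear stretch of the clipped range to 0-255
stretched = np.clip((band - lo) / (hi - lo), 0, 1) * 255.0
stretched = stretched.astype(np.uint8)

print(stretched.min(), stretched.max())   # 0 255 after clipping
```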
IMAGE
TRANSFORMATIONS
IMAGE SUBTRACTION
In Image subtraction, two geometrically registered
images acquired at two different dates are subtracted
pixel by pixel to yield a new image called as the
difference image.
In the new image, areas where there has been little
or no change are represented in mid-grey tones, while
areas where significant change has taken place are
shown in brighter or darker tones, depending on the
direction of change in reflectance between the two
images.
This type of image transformation can be useful for
mapping growth of cities, deforestation, crop acreage,
etc. Hence image subtraction is also known as change
detection.
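A minimal sketch of the difference image, assuming two co-registered single-band arrays (hypothetical values) and offsetting the difference to mid-grey (127) so that unchanged areas appear grey:

```python
import numpy as np

# Hypothetical co-registered 8-bit bands from two dates
date1 = np.array([[100, 100], [50, 200]], dtype=np.int16)
date2 = np.array([[100, 140], [20, 200]], dtype=np.int16)

# Pixel-by-pixel subtraction, shifted to mid-grey and clipped to 0-255
diff = np.clip((date2 - date1) + 127, 0, 255).astype(np.uint8)

print(diff)   # 127 = no change, >127 = brighter, <127 = darker
```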
IMAGE ADDITION
Image addition is basically an averaging operation,
in order to reduce the overall effect of noise.
It is very commonly used in spatial filtering to
enhance features.
Image multiplication of two real images is rarely
performed in remote sensing.
IMAGE DIVISION
Image division or spectral ratioing is one of the most
common transforms applied to image data.
It serves to highlight subtle variations in the spectral
responses of various surface covers.
Ratioing the data from two different spectral bands
yields useful information regarding the objects.
Healthy green vegetation reflects strongly in the
near-infrared portion of the spectrum while
absorbing strongly in the visible red.
Other types of surface, such as soil and water, show
near equal reflectances in both the near-infrared and
red portions.
Vegetation Indices
The most unique feature of the vegetation reflectance curve is the low
red reflectance and high NIR reflectance.
Cover                NIR         Red    NIR/Red
Water                Low         Low    ~1.0
Soil                 High        High   ~1
Healthy Vegetation   Very High   Low    Very High
[Figure: spectral reflectance curves for vegetation, dry bare soil, and clear water from 0.4 to 2.6 μm, spanning the visible, near-infrared, and mid-infrared regions]
[Example: NIR and Red brightness values and their per-pixel NIR/Red ratios, e.g. 62/71 ≈ 0.87, 49/63 ≈ 0.78, 19/24 ≈ 0.79, 27/31 ≈ 0.87]
NDVI = (NIR − RED) / (NIR + RED)
SAVI (Soil-Adjusted Vegetation Index)
[Figure: reflectance curves for particle radii r = 0.075 mm, 0.2 mm, and 0.5 mm, plotted over 0.35–2.15 μm]
NBR = (TMband 2 − TMband 5) / (TMband 2 + TMband 5)
where −1 ≤ NBR ≤ 1.
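NDVI can be computed per pixel in one NumPy expression (the reflectance values here are hypothetical, with vegetation-like pixels in the top row):

```python
import numpy as np

# Hypothetical NIR and Red reflectance bands
nir = np.array([[0.45, 0.50], [0.10, 0.05]])
red = np.array([[0.08, 0.07], [0.09, 0.04]])

# NDVI = (NIR - RED) / (NIR + RED), bounded in [-1, 1]
ndvi = (nir - red) / (nir + red)

print(ndvi.round(2))   # high values indicate healthy vegetation
```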
IMAGE CLASSIFICATION
&
ANALYSIS
IMAGE CLASSIFICATION
AND ANALYSIS
Common classification procedures can be
broken down into two broad subdivisions
based on the method used:
i. supervised classification
ii. unsupervised classification.
SUPERVISED CLASSIFICATION
In a supervised classification, the analyst identifies in the
imagery, homogeneous representative samples of the
different surface cover types (information classes) of interest.
These samples are referred to as training areas.
The selection of appropriate training areas is based on the
analyst's familiarity with the geographical area and knowledge
of the actual surface cover types present in the image.
Thus, the analyst is supervising the categorization of a set of
specific classes.
UNSUPERVISED CLASSIFICATION
In essence, it is reverse of the supervised classification process.
Spectral classes are grouped, first, based solely on the numerical
information in the data, and are then matched by the analyst to
information classes (if possible).
Programs called clustering algorithms are used to determine the
natural groupings or structures in the data.
Usually, the analyst specifies how many groups or clusters are to be
looked for in the data.
In addition to specifying the desired number of classes, the analyst
may also specify parameters related to the separation distance
amongst the clusters and the variation within each cluster.
UNSUPERVISED CLASSIFICATION
The final result of this iterative clustering process may include
some clusters that the analyst would like to combine, or
clusters that should be broken down further; each of these
requires a further iteration of the clustering algorithm.
Thus, unsupervised classification is not completely without
human intervention. However, it does not start with a
pre-determined set of classes as in a supervised classification.
SUPERVISED CLASSIFICATION
SELECTION OF APPROPRIATE
CLASSIFICATION ALGORITHM
Various supervised classification methods have been used to assign
an unknown pixel to any one of the classes.
The choice of a particular classifier or decision rule depends on the
nature of the input data and the desired output.
Parametric classification algorithms assume that the observed
measurement vectors Xc obtained for each class in each spectral
band during the training phase of the supervised classification are
Gaussian in nature (i.e., they are normally distributed).
Nonparametric classification algorithms make no such assumption.
Among the most frequently used classification algorithms are the
Minimum Distance, Parallelepiped, and Maximum Likelihood
classifier.
The minimum distance classifier, for example, assigns a pixel to the class c whose mean is closest in Euclidean distance:
Dist = √[ (BVᵢⱼₖ − μ_ck)² + (BVᵢⱼₗ − μ_cl)² ]
These decision boundaries form an n-dimensional parallelepiped in feature space.
If the pixel value lies between the low and the high threshold for
a class in all n bands evaluated, it is assigned to that class,
otherwise it is assigned to an unclassified category.
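A sketch of the minimum distance rule in two bands, with hypothetical class means assumed to come from a training phase:

```python
import numpy as np

# Hypothetical training-phase class means in two bands
class_means = np.array([[30., 20.],    # e.g. water
                        [90., 85.],    # e.g. vegetation
                        [60., 55.]])   # e.g. soil

# Pixels to classify (values in the same two bands)
pixels = np.array([[32., 18.],
                   [88., 90.],
                   [59., 50.]])

# Euclidean distance from every pixel to every class mean
dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)

# Assign each pixel to the nearest class mean
labels = dists.argmin(axis=1)
print(labels)   # expected: [0 1 2]
```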
Error Matrix
The standard form for reporting site-specific accuracy is the
error matrix, also known as the confusion matrix or the
contingency table.
An error matrix not only identifies the overall error for each
category, but also the misclassification for each category.
An error matrix consists of an n by n array,
where n is the number of classes or categories on
the reference map.
Here the rows of the matrix represent the true classes or
information on the reference map, while column of the matrix
represent the classes as identified on the classified map.
Error Matrix
Class
Classified Image
Urban
Crop
Range
Water
Forest
Barren
Total
Crop
Row Marginal
Reference Image
Urban
Range
Water
Forest
Barren
Total
Column Marginal
Each row shows errors of omission while each column shows errors of
commission
Correctly identified pixels
Total sum of correctly identified pixels
Error Matrix
The values in the last column gives the total number of true
points per class used for assessing the accuracy.
Similarly, the total at the bottom of each column gives
information regarding the number of points/pixels per class
in the classified map.
The diagonal elements of the error matrix indicate the
number of points/pixels correctly identified both the
reference and classified maps.
The sum total of all these diagonal elements is entered at the
right hand side bottom most element, i.e., total number of
points/ pixels correctly classified both in the reference and
classified maps.
ERROR MATRIX
The off-diagonal elements of the error matrix provide
information on errors of omission and commission.
For each class, the error of omission is computed by
taking the sum of all the non-diagonal elements along
its row and dividing it by the row total of that class.
ACCURACY INDICES

Measure            Abbreviation   Explanation                               Formula                        Base reference
Overall accuracy   OA             Percent of samples correctly classified   OA = (1/N) Σ (i=1 to k) nᵢᵢ    Story and Congalton (1986)
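Overall accuracy and the per-class omission/commission errors can be sketched from a small hypothetical error matrix (rows = reference classes, columns = classified classes, as in this deck's convention):

```python
import numpy as np

# Hypothetical 3x3 error matrix: rows = reference, columns = classified
em = np.array([[50, 3, 2],
               [4, 60, 6],
               [1, 5, 70]])

n = em.sum()
overall_accuracy = np.trace(em) / n             # OA = (1/N) * sum(n_ii)
omission = 1 - np.diag(em) / em.sum(axis=1)     # per reference class (rows)
commission = 1 - np.diag(em) / em.sum(axis=0)   # per classified class (columns)

print(round(overall_accuracy, 3))
print(omission.round(3))
print(commission.round(3))
```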
Error Matrix (example)

Actual class vs. predicted class for six classes:

Class        Correct (diagonal)   Row total   Column total
Heather      826                  858         920
Water        878                  878         878
Forest 1     720                  910         741
Forest 2     878                  934         1067
Bare soil    560                  621         589
Pasture      219                  220         226

Total correctly classified (sum of the diagonal) = 4081
THANK
YOU