
Remote Sensing Systems

Elements Involved in Remote Sensing

1. Energy Source or Illumination (A)
2. Radiation and the Atmosphere (B)
3. Interaction with the Object (C)
4. Recording of Energy by the Sensor (D)
5. Transmission, Reception and Processing (E)
6. Interpretation and Analysis (F)
7. Application (G)
Contents

• Remote Sensing Systems
• Remote Sensing Sensors
• Remote Sensing Platforms
• Methods of Recording of Electromagnetic Energy
• Photographic Remote Sensing
• Digital Image
• Importance of Digital Images
• Characteristics of Digital Image
• Characteristics of Satellite
• Earth Observation Satellites
• Data Reception, Transmission, and Processing
Remote Sensing Systems

The measurements of electromagnetic energy are made by sensors that are attached to a static or moving platform.

The sensor-platform combination, also known as a remote sensing system, determines the characteristics of the resulting data.
Remote Sensing Sensors
A sensor is a device that measures and records electromagnetic energy. Depending on the source of energy, sensors can be divided into two groups: passive sensors and active sensors.

Passive sensors depend on an external source of energy, usually the sun and sometimes the Earth itself:
- Gamma-ray spectrometer
- Aerial camera
- Video camera
- Multi-spectral scanner
- Thermal scanner
- Microwave radiometer

Active sensors have their own source of energy:
- Laser scanner
- Imaging radar
- Radar altimeter
- Side scan sonar
Remote Sensing Platforms

1. Ground-based sensors and sensors carried by aircraft generally obtain images at heights of 500 m to 20 km.
2. Sensors carried by spacecraft and satellites operate at distances of 250-1,000 km.
3. Sensors carried by very high altitude satellites operate at 36,000 km above the Earth.
Methods of Recording of Electromagnetic Energy

Electromagnetic energy may be detected either photographically or electronically. The photographic process uses chemical reactions on the surface of light-sensitive film to detect and record energy variations.

It is important to distinguish between the terms image and photograph in remote sensing.

An image refers to any pictorial representation, regardless of what wavelengths or remote sensing device has been used to detect and record the electromagnetic energy.

A photograph refers specifically to images that have been detected as well as recorded on photographic film. Photos are normally recorded over the wavelength range from 0.3 µm to 0.9 µm - the visible and reflected infrared.
Methods of Recording of Electromagnetic Energy
Based on these definitions, we can say that all photographs are images, but not all images are photographs. Therefore, unless we are talking specifically about an image recorded photographically, we use the term image.

A photograph could also be represented and displayed in a digital format by subdividing the image into small equal-sized and shaped areas, called picture elements or pixels, and representing the brightness of each area with a numeric value or digital number (DN).
Methods of Recording of Electromagnetic Energy

Indeed, that is exactly what has been done to the photo to the left. In fact, using the definitions we have just discussed, this is actually a digital image of the original photograph!

The photograph was scanned and subdivided into pixels, with each pixel assigned a digital number representing its relative brightness. The computer displays each digital value as a different brightness level. Sensors that record electromagnetic energy electronically record the energy as an array of numbers in digital format right from the start.
Characteristics of Images
We see color because our eyes detect the entire visible range of wavelengths and our brains process the information into separate colors. Can you imagine what the world would look like if we could only see very narrow ranges of wavelengths or colors? That is how many sensors work.

The information from a narrow wavelength range is gathered and stored in a channel, also sometimes referred to as a band. We can combine and display channels of information digitally using the three primary colors (blue, green, and red). The data from each channel is represented as one of the primary colors and, depending on the relative brightness (i.e. the digital value) of each pixel in each channel, the primary colors combine in different proportions to represent different colors.
Characteristics of Images

When we display a single channel or range of wavelengths, we are actually displaying that channel through all three primary colors. Because the brightness level of each pixel is the same for each primary color, they combine to form a black and white image, showing various shades of gray from black to white.

When we display more than one channel, each as a different primary color, the brightness levels may be different for each channel/primary color combination and they will combine to form a color image.
Photographic Remote Sensing

The representation of a scene is recorded by a camera onto photographic film. Electromagnetic radiation passes through the lens at the front of the camera and is focused on the recording medium (film).

• It is the oldest, yet most commonly applied, remote sensing technique.
• The science and technique of making measurements from photographs is called photogrammetry.
• Nowadays almost all topographic maps are based on aerial photographs.

The main types are:
• Panchromatic (UV and visible)
• Black and white infrared photography (IR: 0.3-0.9 µm)
• Natural color photography
• Multiband photography
Panchromatic photographs

• The simplest film is one that records variations in electromagnetic radiation within the visible range of the spectrum (0.4-0.7 µm) in black and white and shades of gray.
• The resultant image is a panchromatic photograph but is often referred to as a black and white photograph.

Advantage: Panchromatic film is relatively cheap and does not require sophisticated processing.

Disadvantage: While many features can be identified on black and white photographs, such as roads, bridges and buildings, many other objects are not so distinguishable.

Black and white infrared photography

• The sensitivity of this film, unlike a panchromatic one, extends into the near infrared.

Advantages:
• Use of this film extends the range over which the spectral reflectivity characteristics of different objects can be examined.
• Infrared images are ideal for delineating land-water interfaces and for vegetation surveys.

Disadvantage: Lack of penetration of water means that objects just below the surface will not be detected.
[Figure: comparison of a near infrared band image and a blue band image]


Natural color photography

The film is individually sensitive to the reflected energy at the blue, green and red wavelengths of the spectrum.

Advantages:
• Surfaces that are indistinguishable in a black and white image can often be easily differentiated on a color image.
• Relatively easy to interpret.

Disadvantage: More complex and expensive processing of the film.

Multiband photography

Multiband photography involves simultaneously obtaining images of the same scene at different wavelengths, e.g. at the blue, green, red and photographic infrared parts of the spectrum.

Advantage: The scene may initially be examined separately in the blue, green, red and infrared parts of the spectrum and as a natural or false color composite.

Disadvantage: The camera system is much more complex.
Digital Image
• A digital image is a regular grid array of squares (or rectangles).
• Each square is referred to as a 'pixel', a word formed from the term 'picture element'.
• Each square is assigned a digital number (DN) which is related to some parameter (such as reflectance or emittance) measured by the sensor of a remote sensing system.
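To make the pixel/DN idea concrete, here is a minimal sketch (my addition, not from the original slides; the values reuse the Band 1 grid shown later in the deck):

    import numpy as np

    # A toy 3x3 digital image: each entry is a digital number (DN)
    # in the 8-bit range 0-255, representing relative brightness.
    image = np.array([[35, 200, 25],
                      [150, 0, 255],
                      [255, 0, 0]], dtype=np.uint8)

    print(image.shape)    # (3, 3): rows x columns of pixels
    print(image[0, 1])    # 200: the DN of the pixel in row 0, column 1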
Importance of Digital Images

• Film cannot record electromagnetic radiation at wavelengths longer than 1 µm, so no data can be obtained in the thermal and microwave bands by camera systems.
• It is not feasible to recover film from a satellite, nor is it practical to launch a new, expensive satellite each time.
• Once a satellite is launched, it is not possible to replace its components.
• Data that are obtained digitally can be transmitted easily without any degradation.
• Digital data are in a form that can be readily processed on computers.
Spectral Divisions of Visible Light
If the visible portion of the light spectrum is
divided into thirds, the predominant colors are
red, green and blue. These three colors are
considered the primary colors of the visible light
spectrum.
Primary colors can be arranged in a circle,
commonly referred to as a color wheel. Red,
green and blue (RGB) form a triangle on the color
wheel. In between the primary colors are the
secondary colors: cyan, magenta and yellow
(CMY), which form another triangle.
Color Combination

Color Additive Process

Red + Green = Yellow
Blue + Green = Cyan
Red + Blue = Magenta
Red + Green + Blue = White

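A tiny sketch (my addition, not from the slides) verifying the additive combinations above with 8-bit RGB triplets:

    import numpy as np

    red   = np.array([255, 0, 0], dtype=np.uint16)
    green = np.array([0, 255, 0], dtype=np.uint16)
    blue  = np.array([0, 0, 255], dtype=np.uint16)

    # Additive mixing: sum full-intensity primaries, clipped to 8 bits
    print(np.clip(red + green, 0, 255))          # [255 255   0] -> yellow
    print(np.clip(blue + green, 0, 255))         # [  0 255 255] -> cyan
    print(np.clip(red + blue, 0, 255))           # [255   0 255] -> magenta
    print(np.clip(red + green + blue, 0, 255))   # [255 255 255] -> white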
Color Subtractive Process

Yellow filters blue
Cyan filters red
Magenta filters green

M − B = R
M − R = B
C − B = G
C − G = B
Y − G = R
Y − R = G
Color Composite

Band 1: Blue       Band 2: Green      Band 3: Red
 35 200  25         75 200 156        255  75 255
150   0 255          0   0 255        100   0 255
255   0   0          0 255   0          0   0 255

[Figure: resulting image - the color composite formed from the three bands]
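Assuming numpy-style arrays (a sketch, my addition rather than part of the slides), the composite can be built by stacking the three band grids above along a color axis:

    import numpy as np

    # The three 3x3 bands from the slide, as DN arrays
    blue  = np.array([[35, 200, 25], [150, 0, 255], [255, 0, 0]], dtype=np.uint8)
    green = np.array([[75, 200, 156], [0, 0, 255], [0, 255, 0]], dtype=np.uint8)
    red   = np.array([[255, 75, 255], [100, 0, 255], [0, 0, 255]], dtype=np.uint8)

    # Stack into an (rows, cols, 3) RGB composite: each pixel's displayed
    # color comes from its DN in each of the three channels.
    composite = np.dstack([red, green, blue])
    print(composite.shape)    # (3, 3, 3)
    print(composite[2, 2])    # [255 0 0]: the bottom-right pixel displays pure red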
Color Composite

Digital number range: 0-255 for each of the RED, GREEN and BLUE display channels.

[Figure: example color composites - varying the (R, G, B) digital numbers, e.g. (255, 0, 0) for red or (0, 50, 0) for dark green, produces different displayed colors]
Characteristics of Digital Image

The quality of image data is primarily determined by the characteristics of the sensor-platform system. The image characteristics can be defined by the following resolutions:

• Spatial resolution
• Spectral resolution
• Radiometric resolution
• Temporal resolution
Spatial Resolution
• A qualitative measure of the amount of detail that can be observed on an image. The size of a pixel sets the limit on the spatial resolution.
• A measure of the size of the pixel is given by the Instantaneous Field Of View (IFOV), which depends on the altitude and the viewing angle of the sensor.
• The IFOV is defined as the angle which corresponds to the sampling unit. Information within an IFOV is represented by a pixel in the image plane.
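As a rough illustration of how the IFOV translates into pixel size on the ground (the altitude and IFOV values below are assumptions for the example, not from the slides):

    altitude_m = 705_000          # assumed satellite altitude (705 km)
    ifov_rad = 42.5e-6            # assumed IFOV of 42.5 microradians

    # Small-angle approximation: ground cell size ~= H * IFOV at nadir
    ground_cell_m = altitude_m * ifov_rad
    print(f"{ground_cell_m:.1f} m")   # ~30.0 m ground sampling at nadir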
Spatial Resolution

Meteosat-8: 1 km (PAN), 3 km (all other bands)
NOAA-17: 1 km × 1 km (at nadir), 6 km × 2 km (at edge)
Landsat-7: 15 m (PAN), 30 m (bands 1-5, 7), 60 m (band 6)
SPOT-1, 2, 3: 10 m (PAN), 20 m (all other bands)
Temporal resolution

• The temporal resolution of a remote sensing system is a measure of how often data are obtained for the same area.
• It is the minimum time between two successive image acquisitions over the same location on Earth, also referred to as the revisit time.
• Temporal resolution varies from less than 1 hour for some systems to approximately 26 days for others.

Meteosat-8: 15 minutes
NOAA-17: 2-14 times per day, depending on latitude
Landsat-7: 16 days
SPOT-1, 2, 3: 26 days
[Figure: 16-day repeat cycle]
Spectral resolution

• Spectral resolution is related to the widths of the spectral wavelength bands that the sensor is sensitive to.
• A system which measures a large number of bands, each encompassing a narrow range of the electromagnetic spectrum, is said to have a high spectral resolution.

Meteosat-8: 12 bands (1 PAN, 11 multispectral)
NOAA-17: 6 bands (1 PAN, 5 multispectral)
Landsat-7: 8 bands (1 PAN, 7 multispectral)
SPOT-1, 2, 3: 4 bands (1 PAN, 3 multispectral)
Radiometric resolution
• A film or sensor's sensitivity to the magnitude of the EM energy determines the radiometric resolution.
• The finer the radiometric resolution of a sensor, the more sensitive it is to detecting small differences in reflected or emitted energy.
• Radiometric resolution is measured in 'bits'. A 1-bit system (2¹ = 2) measures only two grey levels and produces the simplest type of image: it records only black and white.
• An 8-bit system (2⁸ = 256) records 256 grey levels, in which conventional black is recorded by a DN of zero and white by a DN of 255.
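A small sketch (my addition; the DN values are invented) relating bit depth to grey levels and requantizing an 8-bit image to a coarser radiometric resolution:

    import numpy as np

    for bits in (1, 2, 8, 10):
        print(bits, "bits ->", 2 ** bits, "grey levels")

    # Requantize an 8-bit image (0-255) to 2 bits (4 grey levels)
    dn8 = np.array([0, 60, 130, 255], dtype=np.uint8)
    dn2 = dn8 // 64            # integer division maps 0-255 onto levels 0-3
    print(dn2)                 # [0 0 2 3]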
[Figure: radiometric resolution for a 1-, 2-, 8- and 10-bit system]
Characteristics of Satellite
• Spaceborne remote sensing is carried out using sensors that are mounted on satellites, the space shuttle or a space station.
• The monitoring capabilities of the sensor are to a large extent determined by the parameters of the satellite's orbit.
• The path followed by a satellite is referred to as its orbit.
Characteristics of Orbit

Orbital altitude: the distance from the satellite to the surface of the Earth.
Orbital inclination angle: the angle between the orbital plane and the equatorial plane.
Orbital period: the time required to complete one full orbit.
Repeat cycle: the time between two successive identical orbits.
[Figure: characteristics of Landsat's orbit]
Types of Orbit
Polar orbit: an orbit with an inclination angle between 80° and 100°.

Sun-synchronous orbit: a near-polar orbit chosen in such a way that the satellite passes overhead at the same local solar time.

Geostationary orbit: an orbit in which the satellite is placed above the equator (inclination 0°) at an altitude of approximately 36,000 km. At this distance the orbital period of the satellite is equal to the rotational period of the Earth.
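As a hedged cross-check of the 36,000 km figure (standard orbital mechanics, not derived in the slides), Kepler's third law T = 2π√(a³/μ) applied to one Earth rotation gives the geostationary altitude:

    import math

    MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6          # mean Earth radius, m
    T = 86164.1                # sidereal day, s (one Earth rotation)

    # Kepler's third law solved for the semi-major axis: a = (mu T^2 / 4 pi^2)^(1/3)
    a = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
    altitude_km = (a - R_EARTH) / 1000
    print(f"{altitude_km:.0f} km")   # ~35,800 km, i.e. roughly 36,000 km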
Swath
As a satellite revolves around the Earth, the sensor "sees" a certain portion of the Earth's surface. The area imaged on the surface is referred to as the swath.

NOAA-17: 2800 km
Landsat-7: 185 km
SPOT-5: 60 km
IKONOS: 11 km
Earth Observation Satellites

• Low-resolution systems (spatial resolution: 1 km - 5 km)
• Medium-resolution systems (spatial resolution: 10 m - 100 m)
• High-resolution systems (spatial resolution: < 10 m)

Low-resolution system

NOAA-17
Orbit: 812 km, 98.7° inclination, sun-synchronous
Swath width: 2800 km (FOV = 110°)
Off-nadir viewing: ±50°
Revisit time: 2-14 times per day, depending on latitude
Spatial resolution: 1 km × 1 km (at nadir), 6 km × 2 km (at edge)
Medium-resolution systems

Landsat-7
Orbit: 705 km, 98.2° inclination, sun-synchronous
Swath width: 185 km (FOV = 15°)
Revisit time: 16 days
Spatial resolution: 15 m (PAN), 30 m (bands 1-5, 7), 60 m (band 6)

SPOT-1, 2, 3, 4
Orbit: 832 km, 98.7° inclination, sun-synchronous
Swath width: 60 km
Revisit time: 26 days
Spatial resolution: 10 m (PAN), 20 m (multispectral)
High-resolution systems

SPOT-5
Orbit: 822 km, 98.7° inclination, sun-synchronous
Swath width: 60 km
Revisit time: 2-3 days
Spatial resolution: 5 m (PAN), 10 m (multispectral)

IKONOS
Orbit: 681 km, 98.2° inclination, sun-synchronous
Swath width: 11 km
Revisit time: 1-3 days
Spatial resolution: 1 m (PAN), 4 m (multispectral)
[Figure: Quickbird satellite image of Banda Aceh before the tsunami]

[Figure: Quickbird satellite image of Banda Aceh after the tsunami]
Data Reception, Transmission, and Processing
Data acquired from satellite platforms need to be electronically transmitted to Earth. There are three main options for transmitting data acquired by satellites to the surface:

1. The data can be directly transmitted to Earth if a Ground Receiving Station (GRS) is in the line of sight of the satellite (A).
2. If this is not the case, the data can be recorded on board the satellite (B) for transmission to a GRS at a later time.
3. Data can also be relayed to the GRS through the Tracking and Data Relay Satellite System (TDRSS) (C), which consists of a series of communications satellites in geo-synchronous orbit.
Digital Image Processing

Contents

Why Image Processing?
Methods of Information Extraction
Visual Image Interpretation
- Elements of Visual Interpretation
Digital Image Processing
- Pre-processing (radiometric and geometric distortions)
- Image enhancement (contrast stretching)
- Image transformation (spectral or band ratioing)
- Image classification and analysis (density slicing, multispectral classification)
Why Image Processing?
- To extract information from images.

[Figure: types of information extraction by remote sensing]
Methods of Information Extraction

• Information extraction by humans, also known as image interpretation or visual image interpretation.
• Information extraction by computer, also known as image processing or digital image processing.
Visual Image Interpretation

Visual image interpretation in satellite remote sensing can be made using a single scene of a satellite image.

A pair of stereoscopic aerial photographs can be used to provide stereoscopic vision using, for example, a mirror stereoscope.
Elements of Visual Interpretation

Eight elements are mostly used in image interpretation:
• Size
• Shape
• Shadow
• Tone
• Color
• Texture
• Pattern
• Association
Digital Image Processing
Digital image processing involves manipulation and interpretation of digital images with the aid of a computer. The main steps in digital image processing are:

• Pre-processing
• Image enhancement
• Image transformation
• Image classification and analysis
Pre-processing

Pre-processing, sometimes referred to as image restoration and rectification, is intended to correct for sensor- and platform-specific radiometric and geometric distortions of the data.
Radiometric Correction

Cosmetic correction: correcting the data for sensor irregularities and unwanted sensor or atmospheric noise (a minimal repair sketch follows below). Typical problems requiring cosmetic correction are:
• Periodic line dropout
• Line striping
• Random noise or spikes

Atmospheric correction: converting the data so that they accurately represent the reflected or emitted radiation measured by the sensor.
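A minimal sketch of the line-dropout repair mentioned above (one common approach: replace the dead scan line with the mean of its neighbours; the array values are invented):

    import numpy as np

    def repair_line_dropout(image: np.ndarray, bad_row: int) -> np.ndarray:
        """Replace a dropped scan line with the mean of the adjacent lines."""
        fixed = image.astype(np.float32).copy()
        fixed[bad_row] = (fixed[bad_row - 1] + fixed[bad_row + 1]) / 2.0
        return fixed.astype(image.dtype)

    # Toy 4x4 image whose third scan line (row index 2) dropped to zero
    img = np.array([[10, 12, 14, 16],
                    [20, 22, 24, 26],
                    [0,  0,  0,  0],
                    [40, 42, 44, 46]], dtype=np.uint8)
    print(repair_line_dropout(img, bad_row=2))   # row 2 becomes [30 32 34 36]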
[Figure: line dropout and the corrected image (cosmetic correction)]

[Figure: spike noise and line striping (cosmetic correction)]
Geometric Corrections

All remote sensing imagery is inherently subject to geometric distortions. The distortions may be due to several factors, including:

• The perspective of the sensor's optics
• The motion of the scanning system
• The platform altitude and velocity
• The terrain relief
• The curvature and rotation of the Earth
Systematic or predictable distortions
Systematic distortions are well understood and easily corrected by applying formulas derived by modeling the sources of distortion mathematically.

Unsystematic or random distortions
Other unsystematic or random errors cannot be modeled and corrected mathematically.
[Figure: distortion due to Earth rotation]
Unsystematic or random distortions

Unsystematic distortion is corrected by a geometric registration process, which involves:

• Identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (GCPs).
• Matching them to their true positions in ground coordinates (e.g. latitude, longitude).
• The matching is done by a geometric transformation (a least-squares sketch follows below).
• The resultant image is called a geo-referenced image.
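A hedged sketch of the geometric transformation step, assuming a first-order (affine) polynomial fitted to the GCPs by least squares; all coordinates below are invented for illustration:

    import numpy as np

    # Hypothetical GCPs: image coordinates (col, row) and their
    # matching ground coordinates (x, y) in a map projection.
    img_pts = np.array([[10, 15], [200, 30], [50, 180], [220, 200]], dtype=float)
    map_pts = np.array([[500010, 899985], [500200, 899970],
                        [500050, 899820], [500220, 899800]], dtype=float)

    # First-order (affine) transform: x = a0 + a1*col + a2*row, same form for y.
    A = np.column_stack([np.ones(len(img_pts)), img_pts])      # design matrix
    coeffs_x, *_ = np.linalg.lstsq(A, map_pts[:, 0], rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(A, map_pts[:, 1], rcond=None)

    # Map any pixel (col, row) to ground coordinates with the fitted transform.
    col, row = 100, 100
    print(A @ coeffs_x)   # fitted x for each GCP (compare with map_pts[:, 0])
    print(coeffs_x[0] + coeffs_x[1] * col + coeffs_x[2] * row)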
[Figures: finding ground control points]
Geo-coding
In order to complete the entire rectification process, each pixel in the corrected image has to be assigned a new DN. A procedure called resampling is used to determine the DN to place in the new pixel locations of the corrected output image. The resultant image is called a geo-coded image.

[Figure: process of geo-coding (resampling)]
Geo-coding
The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image. There are three common methods for resampling:

1. Nearest neighbour
2. Bilinear interpolation
3. Cubic convolution

Nearest neighbour resampling uses the digital value of the pixel in the original image which is nearest to the new pixel location in the corrected image; a minimal sketch follows below.
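A minimal nearest-neighbour resampling sketch (assuming the fractional source positions for each output pixel have already been computed from the inverse geometric transform; the values are invented):

    import numpy as np

    def nearest_neighbour(src: np.ndarray, src_coords) -> np.ndarray:
        """Sample src at fractional (row, col) positions by rounding
        to the nearest original pixel centre."""
        rows = np.clip(np.rint(src_coords[..., 0]).astype(int), 0, src.shape[0] - 1)
        cols = np.clip(np.rint(src_coords[..., 1]).astype(int), 0, src.shape[1] - 1)
        return src[rows, cols]

    src = np.arange(16, dtype=np.uint8).reshape(4, 4) * 10
    # Fractional source positions for two output pixels (from an inverse transform)
    coords = np.array([[0.2, 0.8], [2.6, 1.4]])
    print(nearest_neighbour(src, coords))   # DNs of the nearest source pixels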
Image Enhancements

Image enhancement is done solely to improve the appearance of the imagery, to assist in visual or digital image interpretation and analysis. Two major types of enhancement are:

• Contrast stretching: to increase the tonal distinction between various features in a scene.
• Spatial filtering: to enhance (or suppress) specific spatial patterns in an image.
Contrast enhancement

Contrast enhancement involves changing the original values to increase the contrast between targets and their backgrounds.

The key to understanding contrast enhancements is to understand the concept of an image histogram. A histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph.
Linear stretching
The simplest type of enhancement is a linear contrast stretch. This involves identifying lower and upper bounds from the histogram (usually the minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill the full range:

DN_st = 255 × (DN − DN_min) / (DN_max − DN_min)

where DN_min and DN_max are the minimum and maximum DNs in the original image, and DN_st is the DN in the stretched image.
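The stretch formula in code (a sketch, my addition; the DN array is invented):

    import numpy as np

    def linear_stretch(dn: np.ndarray) -> np.ndarray:
        """Stretch the image's DN range [min, max] linearly onto [0, 255]."""
        dn = dn.astype(np.float32)
        stretched = 255 * (dn - dn.min()) / (dn.max() - dn.min())
        return stretched.astype(np.uint8)

    dn = np.array([60, 90, 120, 158], dtype=np.uint8)   # low-contrast DNs
    print(linear_stretch(dn))                           # [  0  78 156 255]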

Histogram equalization

A uniform distribution of the input range of values across the full range may not always be an appropriate enhancement, particularly if the input range is not uniformly distributed. This stretch assigns more display values (range) to the frequently occurring portions of the histogram.
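And a sketch of histogram equalization via the cumulative distribution function, one standard formulation (the slides do not specify an algorithm; the DN array is invented):

    import numpy as np

    def equalize(dn: np.ndarray) -> np.ndarray:
        """Map DNs through the image's cumulative histogram so that
        frequently occurring DN ranges receive more display levels."""
        hist = np.bincount(dn.ravel(), minlength=256)
        cdf = hist.cumsum() / dn.size          # cumulative distribution, 0..1
        lut = np.round(255 * cdf).astype(np.uint8)
        return lut[dn]

    dn = np.array([10, 10, 10, 10, 10, 10, 200, 255], dtype=np.uint8)
    print(equalize(dn))   # the crowded DN 10 is pushed up; sparse DNs spread out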
[Figure: satellite image (without contrast enhancement) and its histogram]

[Figure: linear stretched image and histogram equalization stretched image]
Image Transformations

Image transformations usually involve processing of data from multiple spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication, division) are performed to combine and transform the original bands into "new" images which better display or highlight certain features in the scene.

• Spectral or band ratioing
• Principal components analysis
• Hue, Intensity, Saturation (HIS) images
• Synergistic images
• Non-image datasets
Spectral or band ratioing
Various mathematical combinations of satellite bands have been found to be sensitive indicators of the presence and condition of green vegetation. These band combinations are thus referred to as vegetation indices. Two such indices are the simple Vegetation Index (VI) and the Normalized Difference Vegetation Index (NDVI), calculated by the following two equations respectively:

VI = near infrared / visible red

NDVI = (near infrared − visible red) / (near infrared + visible red)

NDVI is sometimes simply called NVI (normalized vegetation index). NDVI (or NVI) is an indicator of the intensity of biomass: the larger the NDVI, the denser the vegetation.
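A minimal NDVI sketch (my addition; the reflectance values are invented), guarding against division by zero:

    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """NDVI = (NIR - red) / (NIR + red), in the range [-1, 1]."""
        nir = nir.astype(np.float32)
        red = red.astype(np.float32)
        denom = nir + red
        return np.where(denom == 0, 0.0, (nir - red) / denom)

    nir = np.array([[0.50, 0.40], [0.30, 0.05]])   # dense to sparse vegetation
    red = np.array([[0.08, 0.10], [0.15, 0.04]])
    print(ndvi(nir, red))   # higher values indicate denser green vegetation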
Image classification and analysis

• There is a relationship between land cover and measured reflection values.
• In order to extract information from the image data, this relationship must be found.
• The process of finding the relationship is called classification. Image classification and analysis operations are used to digitally identify and classify pixels in the data.
• Classification can be done using a single band, in a process called density slicing, or using many bands (multispectral classification).
Density slicing

• In theory, it is possible to base a classification on a single spectral band of a remote sensing image, by means of single-band classification or density slicing.
• Density slicing is a technique whereby the DNs distributed along the horizontal axis of an image histogram are divided into a series of user-specified intervals or slices.
• The number of slices and the boundaries between the slices depend on the different land covers in the area.
• All the DNs falling within a given interval in the input image are then displayed using a single class name in the output map. A minimal sketch follows below.
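A hedged density-slicing sketch (the slice boundaries and class names are invented; in practice they depend on the land covers in the area, as noted above):

    import numpy as np

    dn = np.array([[12, 45, 80], [130, 200, 250], [60, 95, 180]], dtype=np.uint8)

    # User-specified slice boundaries along the histogram's DN axis
    boundaries = [50, 150]                 # slices: <50, 50-149, >=150
    labels = np.digitize(dn, boundaries)   # 0, 1, 2 = class index per pixel
    class_names = np.array(["water", "vegetation", "bare soil"])
    print(class_names[labels])             # class name assigned to each pixel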
[Figure: dry season SPOT image (Band 3: infrared) of the confluence of the Padma and the Meghna]

[Figure: density sliced image showing the separation of land and water]
Multispectral Classification
Multispectral classification is performed on multi-channel data sets. It is the process by which pixels which have similar characteristics are identified.

Two generic approaches used in multispectral classification are:

1. Supervised classification
2. Unsupervised classification
