
Whether a pixel in a remote sensing image is pure or mixed depends upon the spatial resolution of the sensor.

[Figure: a grid of pixels labelled Pure or Mixed, illustrating how pure and mixed pixels occur at a given spatial resolution.]

Various classification techniques may be applied to perform hard and soft classification.

Hard classification allocates each pixel of the image to a single class, assuming that all pixels in remote sensing imagery are pure.

However, the images are often dominated by mixed pixels: they do not represent one particular land cover, but contain two or more Land Cover (LC) classes in a single pixel area.

Although the chance of two or more classes contributing to a mixed pixel is high with a coarse spatial resolution, the number of such pixels is small. On the other hand, with improved spatial resolution, the number of classes within a pixel is reduced, but the number of mixed pixels increases.

Furthermore, at improved spatial resolution, masking due to shadow also results in a loss of information.

A mixed pixel displays a composite spectral response that may be dissimilar to the spectral response of each of its component LC classes; the pixel may therefore not be allocated to any of its component LC classes.

Conventional image classification techniques may thus result in a considerable loss of the information present in a pixel. These techniques therefore tend to over- or under-estimate the actual areal extents of the LC classes on the ground, thereby degrading the classification accuracy of an image contaminated by mixed pixels.

IDENTIFICATION

Spectral analysis methods usually compare pixel spectra with a

reference spectrum (often called a target). Target spectra can be

derived from a variety of sources, including spectral libraries,

regions of interest within a spectral image, or individual pixels

within a spectral image.

Whole pixel analysis methods attempt to determine whether one

or more target materials are abundant within each pixel in a

multispectral or hyperspectral image on the basis of the spectral

similarity between the pixel and target spectra.

Whole pixel analysis can be performed with standard supervised classifiers such as Minimum Distance or Maximum Likelihood, as well as with tools developed specifically for hyperspectral imagery such as:

Spectral Angle Mapper

Spectral Feature Fitting

Each spectrum can be viewed as a vector in an n-dimensional space, where n is the number of bands in the image. In such a plot, pixel spectra and target spectra plot as points.

The Spectral Angle Mapper (Yuhas et al., 1992) computes a spectral angle between each pixel spectrum and each target spectrum.
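A minimal sketch of the spectral angle computation in Python/NumPy (not from the slides; the four-band spectra below are made up for illustration):

```python
import numpy as np

def spectral_angle(pixel_spectrum, target_spectrum):
    """Spectral angle (radians) between a pixel spectrum and a target spectrum.

    Both inputs are 1-D arrays of length n_bands. Smaller angles mean greater
    spectral similarity; the angle depends only on the directions of the two
    vectors, so it is insensitive to overall brightness differences.
    """
    p = np.asarray(pixel_spectrum, dtype=float)
    t = np.asarray(target_spectrum, dtype=float)
    cos_angle = np.dot(p, t) / (np.linalg.norm(p) * np.linalg.norm(t))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

# Example: assign the pixel to the target with the smallest spectral angle.
targets = {"vegetation": np.array([0.05, 0.08, 0.45, 0.50]),
           "soil":       np.array([0.20, 0.25, 0.30, 0.35])}
pixel = np.array([0.07, 0.10, 0.40, 0.45])
best = min(targets, key=lambda name: spectral_angle(pixel, targets[name]))
print(best)
```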

In Spectral Feature Fitting the user specifies a range of

wavelengths within which a unique absorption feature

exists for the chosen target. The pixel spectra are then

compared to the target spectrum using two measurements:

1. The depth of the feature in the pixel is compared to the depth of the feature in the target, and

2. The shape of the feature in the pixel is compared to the shape of the feature in the target (using a least-squares technique).
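A simplified sketch of these two comparisons in Python/NumPy (it assumes the spectra have already been continuum-removed over the chosen wavelength range; a full Spectral Feature Fitting implementation would also perform the continuum removal itself):

```python
import numpy as np

def spectral_feature_fit(pixel_feature, target_feature):
    """Compare an absorption feature in a pixel to the same feature in a target.

    Both inputs are continuum-removed spectra over the chosen wavelength range
    (values near 1 outside the absorption, dipping below 1 inside it). Returns
    (scale, rms): `scale` relates the feature depth in the pixel to the depth
    in the target, and `rms` is the least-squares residual describing how well
    the feature shapes agree.
    """
    p = 1.0 - np.asarray(pixel_feature, dtype=float)   # depth profile of the pixel feature
    t = 1.0 - np.asarray(target_feature, dtype=float)  # depth profile of the target feature
    scale = np.dot(t, p) / np.dot(t, t)                # least-squares scaling of target onto pixel
    rms = np.sqrt(np.mean((p - scale * t) ** 2))       # shape mismatch after scaling
    return scale, rms
```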

Spectral unmixing is also known as spectral mixture modelling or spectral mixture analysis.

The set of spectrally unique surface materials existing within a scene is often referred to as the spectral endmembers.

The reflectance spectrum of any pixel is the result of a linear combination of the spectra of all endmembers inside that pixel.

Unmixing simply solves a set of n linear equations for

each pixel, where n is the number of bands in the image.
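A minimal linear-unmixing sketch in Python/NumPy (not from the slides): the endmember spectra are made up, and a sum-to-one row is appended as a soft constraint; a full implementation would also enforce non-negative fractions.

```python
import numpy as np

def unmix_pixel(pixel_spectrum, endmembers):
    """Estimate endmember fractions for one pixel by linear unmixing.

    `endmembers` is an (n_bands, n_endmembers) matrix whose columns are the
    endmember spectra; `pixel_spectrum` has length n_bands. An extra equation
    encourages the fractions to sum to 1.
    """
    E = np.asarray(endmembers, dtype=float)
    y = np.asarray(pixel_spectrum, dtype=float)
    E_aug = np.vstack([E, np.ones(E.shape[1])])  # append sum-to-one constraint row
    y_aug = np.append(y, 1.0)
    fractions, *_ = np.linalg.lstsq(E_aug, y_aug, rcond=None)
    return fractions

# Example with two hypothetical endmembers over four bands.
endmembers = np.array([[0.05, 0.20],
                       [0.08, 0.25],
                       [0.45, 0.30],
                       [0.50, 0.35]])
mixed_pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 1]
print(unmix_pixel(mixed_pixel, endmembers))   # approximately [0.6, 0.4]
```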

Matched Filtering

Often called partial unmixing.

There is no need to find the spectra of all endmembers in the scene to get an accurate analysis.

Originally developed to compute abundances of

targets that are relatively rare in the scene.

Matched Filtering filters the input image for good

matches to the chosen target spectrum by maximizing

the response of the target spectrum within the data

and suppressing the response of everything else.
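One common formulation of the matched filter, sketched in Python/NumPy (whether this exact variant is the one meant in the slides is an assumption; the image statistics serve as the background to be suppressed):

```python
import numpy as np

def matched_filter_scores(image_pixels, target_spectrum):
    """Matched-filter score for each pixel (one common formulation).

    `image_pixels` is an (n_pixels, n_bands) array; the mean and covariance of
    the image are used as the background model. Scores near 1 indicate a strong
    match to the target; scores near 0 indicate background.
    """
    X = np.asarray(image_pixels, dtype=float)
    t = np.asarray(target_spectrum, dtype=float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = t - mu
    w = cov_inv @ d / (d @ cov_inv @ d)   # filter vector: unit response to the target
    return (X - mu) @ w                   # suppressed response for everything else
```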

Soft Classification

Each pixel may represent multiple and partial class memberships.

It is an alternative to hard classification because of its ability to deal with mixed pixels.

A membership function allocates to each pixel a real value between 0 and 1, i.e. a membership grade.

Sub-pixel scale information is typically represented in

the output of a soft classification by the strength of

membership a pixel displays to each class.

It is used to reflect the relative proportion of the classes

in the area represented by the pixel.

Soft classifiers

The most common soft classifiers are:

Maximum likelihood classification

Fuzzy set theory based approaches: Fuzzy c-means, Possibilistic c-means and Noise clustering

Artificial neural networks

Decision Trees

These classifiers decompose each mixed pixel into its various LC class components, thus generating soft class outputs.

In soft classification the output is not a single classified image. Instead, a number of images are obtained as the classified output. The pixel value in each image (generally referred to as a fraction image) depicts the proportion of an individual LC class.

However, these proportions do not actually represent the spatial distribution of the LC classes on the ground.

MLC is one of the most widely used hard classifiers.

In a standard MLC each pixel is allocated to the class with which

it has the highest posterior probability of class membership.

MLC has been adapted for the derivation of sub-pixel information.

This is possible because a by-product of a conventional MLC is the set of posterior probabilities of each class for each pixel. These probabilities provide a measure of class membership, and can therefore be used as an indicator of sub-pixel proportions.

Many authors use the term Fuzzy MLC to discriminate it from the (hard) MLC.

Conceptually, there is not a direct link between the proportional coverage of a class and its posterior probability. In fact, posterior probabilities are an indicator of the uncertainty in making a particular class allocation. However, many authors have found that in practice useful sub-pixel information can be derived from this approach.

In MLC, a pixel with DN vector $X$ is allocated to the class $m$ for which $p_m \geq p_i$ for all $i$,

where

$X$ is the vector of DN values of the unclassified pixel,

$p_i$ is the likelihood of the $i$-th LC class ($i = 1$ to $c$), and

$p_m$ is the largest of these likelihoods. Each likelihood is given by

$p_i = -\tfrac{1}{2}\ln\lvert N_i \rvert - \tfrac{1}{2}(X - \bar{x}_i)^{T} N_i^{-1} (X - \bar{x}_i)$

where $N_i$ and $\bar{x}_i$ are the variance-covariance matrix and the mean vector of class $i$.

The a posteriori probability of class $m$ can then be given by:

$p_a(m) = \dfrac{p_m}{\sum_{j=1}^{c} p_j}$

These a posteriori probabilities form the softened output. For example, suppose the a posteriori probabilities of class membership for a pixel containing three LC classes (soil, water and vegetation) are obtained as 0.75, 0.03 and 0.22 respectively. The MLC in its hard form will assign the pixel to soil, its probability of occurrence being maximum in that pixel. A softened output, on the other hand, will show the probabilities of each of the LC classes considered in the pixel.
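A minimal sketch (Python/NumPy, not from the slides) of how such a softened MLC output could be produced; the class statistics would come from training data, and equal priors are assumed for illustration:

```python
import numpy as np

def softened_mlc(x, class_means, class_covs, priors=None):
    """Posterior probabilities of each LC class for one pixel vector x.

    The Gaussian likelihood of each class is evaluated and then normalised so
    the posteriors sum to 1. Hard MLC would simply take the argmax of the
    returned vector.
    """
    c = len(class_means)
    priors = np.full(c, 1.0 / c) if priors is None else np.asarray(priors)
    likelihoods = np.empty(c)
    for i, (mean, cov) in enumerate(zip(class_means, class_covs)):
        diff = x - mean
        mahal = diff @ np.linalg.inv(cov) @ diff          # Mahalanobis distance term
        norm = np.sqrt(np.linalg.det(2 * np.pi * cov))    # Gaussian normalisation constant
        likelihoods[i] = np.exp(-0.5 * mahal) / norm
    posteriors = priors * likelihoods
    return posteriors / posteriors.sum()
```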

Fuzzy c-means (FCM) is an iterative clustering method that may be employed to partition the pixels of a satellite image into different class membership values.

Each pixel in the satellite image is related to every information class by a function, known as the membership function. The value of the membership function, known simply as the membership, varies between 0 and 1.

A membership value close to 1 implies that the pixel is highly representative of that particular information class, while a membership value close to 0 implies that the pixel has little or no similarity with the information class.

The net effect of such a function is to produce a fuzzy c-partition of the given data (or of the satellite image in the case of remote sensing).

The FCM objective function is:

$J_{fcm}(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i)$

where $D(x_k, v_i) = \lVert x_k - v_i \rVert_A^{2} = (x_k - v_i)^{T} A (x_k - v_i)$

subject to the constraints:

$\sum_{i=1}^{c} \mu_{ki} = 1$ for all $k$;

$\sum_{k=1}^{N} \mu_{ki} > 0$ for all $i$;

$0 \leq \mu_{ki} \leq 1$ for all $k, i$

Where

$U = [\mu_{ki}]_{N \times c}$ is the membership matrix,

$V = (v_1, \ldots, v_c)$ is the collection of the vectors of the information class centers $v_i$,

$\mu_{ki}$ is the class membership value of pixel $k$ in class $i$,

$c$ and $N$ are the total number of information classes and pixels respectively,

$A$ is a weight matrix, and

$m$ is a weighting exponent (or fuzzifier), $1 \leq m < \infty$.

From the objective function of the FCM, the membership values can be calculated as:

$\mu_{ki} = \dfrac{1}{\sum_{j=1}^{c} \left( \dfrac{D(x_k, v_i)}{D(x_k, v_j)} \right)^{1/(m-1)}}$

where $D(x_k, v_i)$ and $D(x_k, v_j)$ are the distances of pixel $x_k$ from the class centers $v_i$ and $v_j$ respectively.

The center of each information class can be computed as:

$v_i = \dfrac{\sum_{k=1}^{N} \mu_{ki}^{m} \, x_k}{\sum_{k=1}^{N} \mu_{ki}^{m}}$
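A compact sketch of the resulting FCM iteration in Python/NumPy (a minimal version assuming Euclidean distance, i.e. the weight matrix A is the identity; parameter values such as m = 2 and the iteration count are illustrative):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means sketch.

    X is an (N, n_bands) array of pixel vectors; returns the (N, c) membership
    matrix U and the (c, n_bands) class centres V.
    """
    N = X.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.random((N, c))
    U /= U.sum(axis=1, keepdims=True)               # memberships of each pixel sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]    # class centre update
        # Squared distances D(x_k, v_i); a small epsilon avoids division by zero.
        D = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = (1.0 / D) ** (1.0 / (m - 1.0))        # D_ki^{-1/(m-1)}
        U = inv / inv.sum(axis=1, keepdims=True)    # membership update from the objective
    return U, V

# Usage: U, V = fcm(pixels_as_rows, c=3); each row of U holds the fraction-like
# memberships of one pixel in the three information classes.
```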

The objective function for PCM can be given by:

$J_{pcm}(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i) + \sum_{i=1}^{c} \eta_i \sum_{k=1}^{N} (1 - \mu_{ki})^{m}$

subject to the constraints:

$\max_i \mu_{ki} > 0$ for all $k$;

$0 < \sum_{k=1}^{N} \mu_{ki} \leq N$ for all $i$

where $\eta_i$ is a scale (bandwidth) parameter for class $i$, defined below.

PCM emphasizes (or assigns a high membership value to) representative feature points and de-emphasizes (or assigns a low membership value to) unrepresentative feature points present in the data.

From the objective function of the PCM, the membership values can be calculated as:

$\mu_{ki} = \dfrac{1}{1 + \left( \dfrac{D(x_k, v_i)}{\eta_i} \right)^{1/(m-1)}}$

where

$\eta_i = K \, \dfrac{\sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i)}{\sum_{k=1}^{N} \mu_{ki}^{m}}$

and $K$ is a constant (typically $K = 1$).
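A small sketch of these two PCM updates in Python/NumPy (assumptions: D already holds the squared distances to the class centres, and m = 2, K = 1 by default):

```python
import numpy as np

def pcm_memberships(D, eta, m=2.0):
    """Possibilistic c-means membership update.

    D is the (N, c) matrix of squared distances between pixels and class
    centres; eta is the length-c vector of bandwidth parameters eta_i. Unlike
    FCM, the memberships of a pixel are not forced to sum to 1, so an outlier
    can receive a low membership in every class.
    """
    return 1.0 / (1.0 + (D / eta) ** (1.0 / (m - 1.0)))

def pcm_eta(D, U, m=2.0, K=1.0):
    """Estimate eta_i from the current memberships, as in the formula above."""
    Um = U ** m
    return K * (Um * D).sum(axis=0) / Um.sum(axis=0)
```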

In FCM, noisy points (i.e. outliers) are forced into the information classes because the memberships of every pixel must sum to one.

In Noise Clustering (NC), such noise points (or outliers) can be segregated from the core information classes (or clusters), so they do not degrade the quality of the clustering analysis.

The main concept of the NC algorithm is the introduction of a

single noise information class (c+1) that will contain all noise

data points.

The NC objective function can be given by:

$J_{nc}(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i) + \sum_{k=1}^{N} \delta^{2} \, \mu_{k,c+1}^{m}$

where $\delta$ is the noise distance and $\mu_{k,c+1}$ is the membership of pixel $k$ in the noise class.

The membership values are then:

$\mu_{ki} = \dfrac{1}{\sum_{j=1}^{c} \left( \dfrac{D(x_k, v_i)}{D(x_k, v_j)} \right)^{1/(m-1)} + \left( \dfrac{D(x_k, v_i)}{\delta^{2}} \right)^{1/(m-1)}}, \quad 1 \leq i \leq c$

$\mu_{k,c+1} = \dfrac{1}{\sum_{j=1}^{c} \left( \dfrac{\delta^{2}}{D(x_k, v_j)} \right)^{1/(m-1)} + 1}$
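A hedged sketch of this NC membership update in Python/NumPy, returning the noise-class column alongside the c class memberships (the value of the noise distance delta is an assumption the user must supply):

```python
import numpy as np

def nc_memberships(D, delta, m=2.0):
    """Noise-clustering membership update.

    D is the (N, c) matrix of squared distances to the c class centres and
    `delta` is the noise distance. Returns an (N, c+1) matrix whose last
    column is the membership in the single noise class, so each row still sums
    to 1 but outliers load mainly on the noise column.
    """
    p = 1.0 / (m - 1.0)
    inv = (1.0 / D) ** p                                  # D_kj^{-1/(m-1)}
    denom = D ** p * (inv.sum(axis=1, keepdims=True) + delta ** (-2.0 * p))
    U = 1.0 / denom                                       # memberships in the c real classes
    u_noise = 1.0 - U.sum(axis=1, keepdims=True)          # remainder goes to the noise class
    return np.hstack([U, u_noise])
```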

An ANN is a form of artificial intelligence that imitates some

functions of the human brain.

An ANN consists of a series of layers, each containing a set of

processing units (i.e. neurones).

All neurones on a given layer are linked by weighted connections to all neurones on the previous and subsequent layers.

During the training phase, the ANN learns about the regularities

present in the training data, and based on these regularities,

constructs rules that can be extended to the unknown data.
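As a minimal illustration (not from the slides), a soft classification with a small multilayer perceptron; scikit-learn and the tiny two-band training set are assumptions made purely for the example:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # assumed library, not named in the slides

# Training pixels (band values) and their known LC class labels.
X_train = np.array([[0.05, 0.45], [0.07, 0.40], [0.20, 0.30], [0.25, 0.28]])
y_train = np.array(["vegetation", "vegetation", "soil", "soil"])

# One hidden layer of neurones; the weights on the connections are learned in training.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

# Soft output: per-class membership-like probabilities for an unseen pixel.
print(net.classes_)
print(net.predict_proba(np.array([[0.12, 0.36]])))
```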

Advantages of ANN

It is a non-parametric classifier, i.e. it does not require any assumption about the

statistical distribution of the data.

High computation rate, achieved by their massive parallelism, resulting from a dense

arrangement of interconnections (weights) and simple processors (neurones), which

permits real-time processing of very large datasets.

Disadvantages of ANN

ANN are semantically poor. It is difficult to gain any understanding about how the

result was achieved.

The training of an ANN can be computationally demanding and slow.

ANN are perceived to be difficult to apply successfully. It is difficult to select the type

of network architecture, the initial values of parameters such as learning rate and

momentum, the number of iterations required to train the network and the choice of

initial weights.

Decision Trees (DT) can be used as both hard and soft classifiers.

Advantage:

Ability to handle non-parametric training data, i.e. DT are not based on any

assumption on training data distribution.

DT can reveal nonlinear and hierarchical relationships between input variables

and use these to predict class membership.

DT yields a set of rules which are easy to interpret and suitable for deriving a

physical understanding of the classification process.

Good computational efficiency.

DT, unlike ANN, do not need an extensive design and training.

Disadvantage:

The use of hyperplane decision boundaries parallel to the feature axes may restrict their use to problems in which the classes are clearly distinguishable along individual features.
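A brief illustration (not from the slides) of a DT used as both a hard and a soft classifier; scikit-learn and the tiny two-band training set are assumptions made for the example:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text  # assumed library

X_train = np.array([[0.05, 0.45], [0.07, 0.40], [0.20, 0.30], [0.25, 0.28]])
y_train = np.array(["vegetation", "vegetation", "soil", "soil"])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# Hard output: a single class per pixel. Soft output: class proportions in the leaf.
print(tree.predict([[0.12, 0.36]]))
print(tree.predict_proba([[0.12, 0.36]]))

# The learned rules are easy to read, one of the advantages listed above.
print(export_text(tree, feature_names=["band_1", "band_2"]))
```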

Super-resolution Mapping:

Although soft classification is informative and meaningful, it fails to account for the actual spatial distribution of class proportions within the pixel.

Super-resolution mapping (or sub-pixel mapping) is a

step forward.

Super-resolution mapping considers the spatial

distribution within and between pixels in order to

produce maps at sub-pixel scale.

Several techniques for super-resolution mapping have been developed:

Markov random fields

Hopfield neural networks

Linear optimization

Pixel-swapping solution (based on geostatistics)
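As an illustration of the pixel-swapping idea, a simplified binary sketch in Python/NumPy (not the exact published algorithm; the scale factor, iteration count and 3x3 neighbourhood are assumptions made for the example):

```python
import numpy as np

def pixel_swap(fractions, scale=5, n_iter=50, seed=0):
    """Simplified binary pixel-swapping sketch for super-resolution mapping.

    `fractions` is a coarse (H, W) fraction image for one class produced by a
    soft classifier; `scale` is the number of sub-pixels per pixel side. The
    class proportion inside each coarse pixel stays fixed while sub-pixel
    labels are swapped to increase clustering with neighbouring sub-pixels.
    """
    rng = np.random.default_rng(seed)
    H, W = fractions.shape
    fine = np.zeros((H * scale, W * scale), dtype=int)
    # Initial random allocation that honours each coarse pixel's class fraction.
    for i in range(H):
        for j in range(W):
            n_on = int(round(fractions[i, j] * scale * scale))
            block = np.zeros(scale * scale, dtype=int)
            block[rng.choice(scale * scale, n_on, replace=False)] = 1
            fine[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = block.reshape(scale, scale)

    for _ in range(n_iter):
        # Attractiveness = number of class-1 neighbours in the 3x3 window.
        padded = np.pad(fine, 1)
        attract = sum(padded[1+di:1+di+fine.shape[0], 1+dj:1+dj+fine.shape[1]]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0))
        for i in range(H):
            for j in range(W):
                block = fine[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
                a = attract[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
                ones, zeros = np.argwhere(block == 1), np.argwhere(block == 0)
                if len(ones) == 0 or len(zeros) == 0:
                    continue
                worst_one = tuple(ones[np.argmin(a[tuple(ones.T)])])
                best_zero = tuple(zeros[np.argmax(a[tuple(zeros.T)])])
                # Swap the two labels only if it increases within-class clustering.
                if a[best_zero] > a[worst_one]:
                    block[worst_one], block[best_zero] = 0, 1
    return fine
```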
