
# Application of bioinspired techniques to telecommunication

Overview:
- Why bioinspired? Source of inspiration
- Algorithms: LMS, PSO
- Applications in the field of telecommunication: channel equalizer model

## Channel equalizer using LMS & PSO algorithms

Overview of a Digital Communication System:

## Figure: Block Diagram of a Digital Communication System
(Data Source → Encoder → Modulator → Physical Channel with AWGN → Demodulator → Equalizer → Decision Device → Decoder; s_k is the transmitted symbol, r_k the received signal, and s'_{k−d} the delayed output estimate.)

Overview of a Digital Communication System:

## Figure: Baseband Model of a Digital Communication System
(Data Source → Physical Channel with AWGN → Equalizer → Decision Device → Decoder; s_k is the transmitted symbol, r_k the received signal, and s'_{k−d} the output estimate.)

Digital Channel Equalizers:
- Located at the front end of the receiver (in series with the channel).
- Act as the inverse system of the channel model (the transfer function of the equalizer is the inverse of the transfer function of the channel).
- Used to reduce Inter-Symbol Interference (ISI), inter-user interference in the form of Co-Channel Interference (CCI), and Adjacent Channel Interference (ACI), in the presence of Additive White Gaussian Noise (AWGN).
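To make ISI concrete, here is a minimal sketch of how a dispersive channel smears neighbouring symbols together. The 3-tap channel values and the random seed are illustrative assumptions, not taken from the slides:

```python
import numpy as np

# Hypothetical 3-tap dispersive channel (illustrative values)
h = np.array([1.0, 0.5, 0.25])

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=8)   # random binary input (+1, -1)

# Each received sample is a weighted sum of the current and past symbols,
# so neighbouring symbols leak into one another: inter-symbol interference.
received = np.convolve(symbols, h)[:len(symbols)]

print("sent:    ", symbols)
print("received:", received)
```

An ideal equalizer would apply the inverse of H(z) so that `received` maps back onto `symbols` before the decision device.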

## Figure: Block Diagram for Channel Equalization
(Random binary input x(k) → Channel H(z) → summed with channel noise → Equalizer → output y(k); the desired signal d(k) is x(k) delayed by z^(−m); the error e(k) = d(k) − y(k) drives the adaptive algorithm.)

The LMS Algorithm:
- The standard LMS algorithm for updating channel weights uses gradient descent to estimate a time-varying signal: it finds a minimum, if one exists, by taking steps in the direction of the negative gradient.
- An adaptive channel equalizer is basically an adaptive filter whose weights are updated by such an algorithm.
- The equalization problem can be viewed as an optimization problem, i.e., a squared-error minimization problem.
- The LMS algorithm approaches the minimum of the cost function, minimizing the error by taking steps proportional to the negative gradient.

## LMS Implementation Using FIR Filter: Weight Update Rule

LMS equation to compute the FIR coefficients:

c(n+1) = c(n) + μ · e(n) · x(n)

where:
- μ = step size
- d(n) = desired signal
- e(n) = error signal, e(n) = d(n) − y(n)
- c(n+1) = updated coefficient vector
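The weight-update rule above can be sketched as a short adaptive loop. The function name, tap count, step size, and the toy channel (a pure gain of 0.5, so the ideal equalizer is a gain of 2) are illustrative assumptions:

```python
import numpy as np

def lms_equalize(x, d, num_taps=8, mu=0.01):
    """Adapt FIR equalizer taps with the LMS rule c(n+1) = c(n) + mu*e(n)*x(n)."""
    c = np.zeros(num_taps)                    # equalizer coefficients, start at zero
    errors = []
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]  # most recent sample first
        y = c @ xn                            # equalizer output y(n)
        e = d[n] - y                          # error against desired signal d(n)
        c = c + mu * e * xn                   # gradient-descent step
        errors.append(e * e)
    return c, errors

# Toy run: channel is a pure gain of 0.5, so the optimal first tap is 2.
rng = np.random.default_rng(1)
d = rng.choice([-1.0, 1.0], size=2000)        # desired = transmitted symbols
x = 0.5 * d                                   # received = attenuated symbols
c, errors = lms_equalize(x, d, num_taps=4, mu=0.1)
print("taps:", c, "final squared error:", errors[-1])
```

In this noiseless toy case the squared error decays toward zero; with a dispersive channel and noise the error instead settles at a noise floor, which is the behaviour shown in the mean-square-error plot.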

## Standard LMS Algorithm for Channel Equalization

Figure: Random binary input x(k) ∈ {+1, −1} passes through a 3-tap channel (weights a0, a1, a2 acting on x(k), x(k−1), x(k−2)) and is corrupted by additive noise; the desired signal is the input delayed by z^(−m).

The weights of the equalizer are adapted so that the transmitted message is reconstructed. The order of the channel equalizer is higher than that of the channel filter (almost twice that of the latter).

Figure: An 8-tap FIR equalizer with weights h0(k)–h7(k) and a z^(−1) delay line produces the output y(k); the error e(k) = d(k) − y(k) drives the LMS algorithm, which updates the weights.

## Figure: Plot of mean square error versus number of iterations

Drawbacks of the LMS Algorithm:
- Derivative-based, so the parameters may fall into local minima during training.
- Does not perform satisfactorily under high-noise conditions.
- Does not perform satisfactorily as the order of the channel increases.
- Once close to the optimal solution, the weights normally rattle around it rather than converging.
- Does not perform satisfactorily for nonlinear channels.

The above disadvantages motivated us to adopt another adaptive algorithm, using PSO.

Particle Swarm Optimization:
- An optimization heuristic inspired by the social behavior of bird flocking or bee swarming, proposed by Eberhart and Kennedy in 1995.
- Based on the attraction of particles to the best found solutions.
- Each candidate solution is called a PARTICLE and represents one individual of a population.
- The population is a set of vectors and is called a SWARM.
- The particles change their components and move (fly) in the search space.
- They evaluate their current position using the function to be optimized, called the FITNESS FUNCTION.

Particle Swarm Optimization (continued):
- A swarm of particles flies through the parameter space, searching for the optimum.
- Each particle i is characterized by a position vector x_i(t) and a velocity vector v_i(t).

## Search space (D-dimensional):

X_i = [x_i1, …, x_iD]^T = i-th particle of the swarm
V_i = [v_i1, …, v_iD]^T = velocity of the i-th particle
P_i = [p_i1, …, p_iD]^T = best previous position of the i-th particle

PSO Algorithm:
Each particle has:
- Individual knowledge, pbest: its own best-so-far position.
- Social knowledge, gbest: the pbest of its best neighbor.

Velocity update:
v_i(t+1) = w · v_i(t) + c1 · rand · (pbest_i(t) − x_i(t)) + c2 · rand · (gbest(t) − x_i(t))

Position update:
x_i(t+1) = x_i(t) + v_i(t+1)
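The velocity and position updates can be sketched as one function. The inertia weight w and acceleration constants c1, c2 below are common illustrative choices, not values given in the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: velocity update followed by position update."""
    r1 = rng.random(x.shape)          # fresh random factors each step
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new

# Toy example: 3 particles in a 2-D search space.
x = rng.uniform(-1, 1, size=(3, 2))
v = np.zeros((3, 2))
pbest = x.copy()                      # best-so-far positions start at x
gbest = x[0].copy()                   # assume particle 0 is currently best
x, v = pso_step(x, v, pbest, gbest)
```

Note that a particle sitting exactly at both its pbest and the gbest (here particle 0, with zero velocity) does not move, while the others are pulled toward the best found solution.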

## Flow chart depicting the general PSO algorithm:

1. Start.
2. Initialize particles with random position and velocity vectors.
3. For each particle's position x, evaluate the fitness.
4. If fitness(x) is better than fitness(pBest), then set pBest = x.
5. Set the best of the pBests as gBest.
6. Update each particle's velocity and position.
7. Loop to step 3 until all particles exhaust their iterations.
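The flow-chart steps above can be sketched as a complete loop. The fitness function (squared distance from the origin), swarm size, iteration count, and PSO constants are all illustrative assumptions standing in for the equalizer's mean-square-error fitness:

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    """Hypothetical fitness: squared distance from the origin (lower is better)."""
    return np.sum(x * x, axis=-1)

# Step 2: initialize particles with random position and velocity vectors.
n_particles, dim = 20, 2
x = rng.uniform(-5, 5, size=(n_particles, dim))
v = rng.uniform(-1, 1, size=(n_particles, dim))
pbest = x.copy()
pbest_fit = fitness(pbest)
gbest = pbest[np.argmin(pbest_fit)].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(100):                            # step 7: loop until iterations exhaust
    fit = fitness(x)                            # step 3: evaluate fitness
    improved = fit < pbest_fit                  # step 4: update pBest where improved
    pbest[improved] = x[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[np.argmin(pbest_fit)].copy()  # step 5: best of pBests is gBest
    r1 = rng.random(x.shape)                    # step 6: velocity and position update
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v

print("gbest:", gbest, "fitness:", fitness(gbest))
```

For the channel-equalization problem, `fitness` would instead be the mean square error e(k) = d(k) − y(k) of an equalizer whose weight vector is the particle's position.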

Advantages of PSO over LMS:
- Simple and fast; can be coded in a few lines.
- Requires minimal storage.
- PSO has memory: every particle remembers its best solution (local best) as well as the group's best solution (global best).
- Its population is maintained fixed throughout the execution of the algorithm, so there is no need to apply selection operators to the population.

References:

- C. A. Belfiore and J. H. Park, Jr., "Decision Feedback Equalization," Proc. IEEE, vol. 67, pp. 1143-1156, Aug. 1979.
- B. Widrow and S. D. Stearns, Adaptive Signal Processing, Pearson Education, Inc., 1985, p. 22.
- J. C. Patra, R. N. Pal, R. Baliarsingh and G. Panda, "Nonlinear channel equalization for QAM signal constellation using Artificial Neural Network," IEEE Trans. on Systems, Man and Cybernetics, Part B, vol. 29, no. 2, pp. 262-272, April 1999.
- Faten Ben Arfia and Mohamed Ben Messaoud, "Nonlinear adaptive filters based on Particle Swarm Optimization," Leonardo Journal of Sciences, ISSN 1583-0233, Issue 14, January-June 2009.
- Ali T. Al-Awami, Azzedine Zerguine, Lahouari Cheded, Abdelmalek Zidouri and Waleed Saif, "A new modified particle swarm optimization algorithm for adaptive equalization."