
I. Introduction
A communication system can be modeled by the block diagram below.

Figure 1: Block diagram of a communication system

We will attack the following four basic problems:

The Communication Problem: What is the message?

The Radar Problem: Is there a target, yes or no?

The Estimation Problem: What is the value of an unknown parameter θ?

The Coding Problem: How to overcome channel errors?

The Communication Track

- Digital Communication I
- Analog Communication
- Networking and protocols
- Communication Project
- Coding techniques
- Optical Communication
- Wireless Communication
- Information Theory
- Antenna and RF Design
- etc.

Topics Covered in the Course

- Introduction to the digital communication system
- The one-dimensional detection problem
- The multi-dimensional detection problem
- Random processes, the Riesz-Fischer theorem, Gram-Schmidt orthogonalization
- The matched filter and the error probability
- The band-limited channel
- Noncoherent detection
- The radar problem
- Channel coding
- Convolutional codes

The Communication Problem

A digital communication system transmits information by sending ones and zeros: when a "one" is sent, hypothesis H1 holds; when a "zero" is sent, hypothesis H0 holds.

Examples: Morse code (dot/dash), compact disc (pit/no pit), laser communication (pulse of light: yes/no).

Note: the a priori probabilities of transmission are known:

P(one was transmitted) = P(H1) = p1
P(zero was transmitted) = P(H0) = p0 = 1 - p1

The Communication Problem

Cost function: Pe = P(H1)P(error | H1) + P(H0)P(error | H0), where Pe is the probability of error.

The communication problem: given P(H1), P(H0), and P(received signal | Hi), i = 0, 1, design an optimal receiver in the sense of minimum probability of error.

Related problems:
- Optimal signal design under constraints (bandwidth, energy, power)
- The M-ary detection problem
- Detection with unknown parameters

The Radar Problem

In a radar system we try to decide whether a target is present (H1) or there is no target (H0). Note: it is difficult to assign realistic costs or a priori probabilities.

We have the following conditional probabilities:

PF, the probability of false alarm: P(Decide H1 | H0)
Declare that a target exists when there is no target.

PM, the probability of a miss: P(Decide H0 | H1)
Declare that there is no target when there is a target.

The Radar Problem

We want to make PF as small as possible and PD, the probability of detection (PD = 1 - PM), as large as possible. For most problems of practical importance these are conflicting objectives.

The Estimation Problem

Given a received signal r(t) = s(t, θ) + n(t) with unknown parameter vector θ, one has to estimate the unknown parameter based on the observed signal. The purpose of this course is to investigate various estimation rules and their implementation.

Examples:
- Delay: r(t) = s(t - D) + n(t), where D is unknown
- Phase estimation: r(t) = A cos(ωt + φ) + n(t), where φ is unknown

Estimation Problem Formulation

Given a received signal r(t) = s(t, θ) + n(t), find an estimator θ̂:
- Minimum mean squared error estimator
- Maximum likelihood estimator
- Bounds: Cramér-Rao

Cost function: E[(θ - θ̂)(θ - θ̂)ᵀ]

The estimation problem: find an estimator θ̂ that minimizes E[(θ - θ̂)(θ - θ̂)ᵀ].
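To make the maximum likelihood idea concrete, here is a minimal sketch for the phase estimation example above. All concrete values (amplitude, frequency, noise level, grid resolution) are illustrative assumptions; for white Gaussian noise, maximizing the likelihood is equivalent to minimizing the squared error between r and the candidate signal.

```python
import numpy as np

# Model: r[n] = A*cos(w*n + phi) + noise, with phi unknown (values assumed)
rng = np.random.default_rng(0)
A, w, phi_true = 1.0, 0.3, 0.7
n = np.arange(200)
r = A * np.cos(w * n + phi_true) + 0.5 * rng.standard_normal(n.size)

# Under white Gaussian noise, the ML estimate of phi minimizes the
# squared error, so a simple grid search over phi suffices here.
phis = np.linspace(-np.pi, np.pi, 2000)
errors = [np.sum((r - A * np.cos(w * n + p)) ** 2) for p in phis]
phi_ml = phis[np.argmin(errors)]
print(f"true phase: {phi_true:.3f}, ML estimate: {phi_ml:.3f}")
```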

A Simple Model of a Communication System

The source emits a symbol sequence s = (s1, s2, ..., si, ...). The transmitted waveform is

x(t) = Σi ai s(t - iT)

and the received signal is

r(t) = x(t) * h(t) + n(t)
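A minimal discrete-time sketch of this chain (the rectangular pulse shape, channel impulse response, and noise level are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 8                                  # samples per symbol interval
a = rng.choice([-1.0, 1.0], size=20)   # symbol amplitudes a_i

# x(t) = sum_i a_i s(t - iT): place symbols T apart, then shape with s(t)
x = np.zeros(a.size * T)
x[::T] = a
x = np.convolve(x, np.ones(T))         # rectangular pulse s(t) (assumed)

h = np.array([1.0, 0.5])               # assumed channel impulse response h(t)
n = 0.1 * rng.standard_normal(x.size + h.size - 1)
r = np.convolve(x, h) + n              # r(t) = x(t) * h(t) + n(t)
```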

The Source

Binary Memoryless Source (BMS), A = {0, 1}: every T seconds the source emits a random variable which is independent of all preceding and succeeding outputs.

s = (s1, s2, ..., si, ...), si ∈ A = {0, 1}, p(si = a | s \ si) = p(si = a)

Example: s = (101011011011...)

Discrete Memoryless Source (DMS):

s = (s1, s2, ..., si, ...), si ∈ A = {a1, a2, ..., aq}, p(si | s \ si) = p(si)

Example: A = {-3, -1, 1, 3}, s = (3, -1, 3, 1, 1, -3, 1, -1, -3, 3, ...)

Entropy of Source

The average amount of information per source output is defined as

H(A) = -Σ_{i=1}^{q} pi log(pi)

H(A) is called the entropy of the DMS.

- If the log is to base 2, the entropy is given in bits.
- If the log is to base e, the entropy is given in nats.

The source with equiprobable output symbols, pi = Prob(sn = ai) = 1/|A|, has the greatest entropy:

H(A) = log|A|
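A small sketch of the entropy computation (the distributions are illustrative; the second one reappears in the coding example below):

```python
import numpy as np

def entropy(p, base=2.0):
    """H = -sum_i p_i log(p_i); terms with p_i = 0 contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

print(entropy([0.25, 0.25, 0.25, 0.25]))   # equiprobable: log2(4) = 2 bits
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits
```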

Where Does It Come From?

Assume a source with 4 letters (A, B, C, D):

p(A) = 1/2, p(B) = 1/4, p(C) = 1/8, p(D) = 1/8

A simple code:

A = 00, B = 01, C = 10, D = 11

The average length is 2 bits per letter.

Can We Do Better?

What about this code?

A = 1, B = 01, C = 001, D = 000

Example: ABAACAD -> 101110011000

The average length is

E(L) = length(A)p(A) + ... + length(D)p(D)
     = 1·(1/2) + 2·(1/4) + 3·(1/8) + 3·(1/8) = 1.75 = -Σ pi log2(pi)
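A quick numerical check of this example, using the code and probabilities from the slide:

```python
code = {"A": "1", "B": "01", "C": "001", "D": "000"}
prob = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Average codeword length matches the source entropy of 1.75 bits
avg_len = sum(prob[x] * len(code[x]) for x in code)
print(avg_len)                              # 1.75

print("".join(code[x] for x in "ABAACAD"))  # 101110011000
```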

Entropy

0 ≤ H(A) ≤ log(|A|), with equality on the right iff pi = Prob(sn = ai) = 1/|A|.

[Figure: the binary entropy function plotted against Prob(a = 0)]

Binary memoryless source:

H(p) = -p log(p) - (1-p) log(1-p) ≤ 1 [bits/source letter]

log(|A|) is the maximum number of bits required to represent a source output. H(A) is a concave (convex-∩) function of the probabilities.

The Transmitter

A transmitter is a mapping from the message set A to a set of waveforms:

ai → si(t), 0 ≤ t ≤ T

The waveforms si(t), i = 1, ..., Q, have finite duration T. For a DMS with uniform letter probabilities, the transmitter rate is

R = H(A)/T = log2(|A|)/T [bits/sec]

Example: Pulse Amplitude Modulation

[Figure: example of a pulse amplitude modulated waveform]

FSK Transmitter

[Figure: FSK transmitter mapping bit 0 to pulse p0(t) and bit 1 to pulse p1(t), each a tone at its own frequency]

For the bit sequence (1, 0, 1, 1, 0, ...) the transmitted waveform is

x(t) = p1(t) + p0(t - T) + p1(t - 2T) + p1(t - 3T) + p0(t - 4T) + ...
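A minimal sketch of this construction (sample rate, symbol duration, and the two tone frequencies are assumptions for illustration):

```python
import numpy as np

fs, T = 8000, 0.01                  # sample rate and symbol duration (assumed)
f0, f1 = 500.0, 1000.0              # tone frequencies for bits 0 and 1 (assumed)
t = np.arange(int(fs * T)) / fs

p0 = np.cos(2 * np.pi * f0 * t)     # p0(t), pulse for bit 0
p1 = np.cos(2 * np.pi * f1 * t)     # p1(t), pulse for bit 1

# x(t) = p1(t) + p0(t-T) + p1(t-2T) + p1(t-3T) + p0(t-4T)
bits = [1, 0, 1, 1, 0]
x = np.concatenate([p1 if b else p0 for b in bits])
```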

Transmitter Constraints

Energy constraint:

Ei = ∫₀ᵀ si²(t) dt ≤ Emax

Average energy constraint:

Ē = Σ_{i=1}^{Q} pi ∫₀ᵀ si²(t) dt ≤ Emax

Peak power constraint:

Pi = lim_{T→∞} (1/T) ∫₀ᵀ si²(t) dt ≤ Pmax

Bandwidth constraint:

|Si(f)| = |F(si(t))| ≤ Amax for f ∉ [f1, f2]
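A numerical check of the energy constraint for a sampled waveform (the waveform, symbol duration, and energy budget are illustrative assumptions):

```python
import numpy as np

fs = 1000.0                           # sample rate (assumed)
t = np.arange(0, 0.1, 1 / fs)         # one symbol interval, T = 0.1 s (assumed)
s_i = np.sin(2 * np.pi * 50 * t)      # an example waveform s_i(t)

E_i = np.sum(s_i ** 2) / fs           # Riemann sum for the energy integral
E_max = 0.1                           # an assumed energy budget
print(E_i, E_i <= E_max)              # about 0.05, within the budget
```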

The Continuous Channel

In this course we will deal with linear channels. The channel can be described by its impulse response h(t) or by the Fourier transform F(h(t)) = H(w).

s(t) -> [h(t) / H(w)] -> y(t) = s(t) * h(t)

y(t) = ∫ h(τ) s(t - τ) dτ = F⁻¹[H(w) S(w)]
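A discrete-time illustration of this equivalence, checking that time-domain convolution matches multiplication of transforms (the signal and channel taps are made up; the FFT length is chosen so the circular convolution equals the linear one):

```python
import numpy as np

s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])  # example input signal s
h = np.array([1.0, 0.6, 0.2])              # assumed channel impulse response h

y_time = np.convolve(s, h)                 # y = s * h

N = s.size + h.size - 1
y_freq = np.fft.ifft(np.fft.fft(s, N) * np.fft.fft(h, N)).real  # F^-1[H(w)S(w)]

print(np.allclose(y_time, y_freq))         # True
```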

Example of a Discrete Channel

The input takes values in the set A = {a1, a2, ..., aQ}.

The output takes values in the set B = {b1, b2, ..., bT}.

The transition probability of the channel is

P(b | a) = p       if b = a - 1
         = 1 - p   if b = a + 1
         = 0       otherwise

Binary Symmetric Channel

The inputs and the outputs take values in the set A = B = {0, 1}. A bit is received correctly with probability 1 - p and flipped with probability p.
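A minimal BSC simulation sketch (the crossover probability is an arbitrary choice):

```python
import numpy as np

def bsc(bits, p, rng):
    """Pass bits through a binary symmetric channel: flip each with prob p."""
    flips = rng.random(bits.size) < p
    return bits ^ flips

rng = np.random.default_rng(3)
tx = rng.integers(0, 2, size=100_000)
rx = bsc(tx, p=0.1, rng=rng)
print(np.mean(tx != rx))   # empirical crossover rate, close to 0.1
```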

The Noise

We will assume a random process, denoted by n(t), which is added to the received signal and is statistically independent of the channel and of the transmitted signal.

Examples:
- White Gaussian noise
- Colored Gaussian noise

The Receiver

The receiver estimates the transmitted information based on the received signal

r(t) = y(t) + n(t) = s(t) * h(t) + n(t)

Cost function: the probability of error. If N messages were transmitted and the receiver made Ne errors, the probability of error is given by

Pe = lim_{N→∞} Ne/N
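In simulation, Pe is estimated exactly this way, by Monte Carlo counting of errors. A sketch for antipodal signaling over an AWGN channel with a simple sign detector (the signal levels and noise variance are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1_000_000
bits = rng.integers(0, 2, size=N)
s = 2.0 * bits - 1.0                  # map bit 0 -> -1, bit 1 -> +1
r = s + rng.standard_normal(N)        # AWGN with unit variance (assumed)
decisions = (r > 0).astype(int)       # sign detector

Pe_hat = np.mean(decisions != bits)   # Ne / N
print(Pe_hat)                         # about 0.159 for this SNR
```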

Main Parameters

In the design of a communication system we are interested in the following parameters:

- Pe: the error probability
- R: the source rate in bits/sec
- S/N: the signal-to-noise ratio
- BW: the channel bandwidth constraint

The General Decision Problem

We define a cost function W(i, k) ≥ 0, where W(i, k) is the cost incurred whenever the receiver decides i when the transmitted symbol is k:

E[W] = Σ_{k=1}^{Q} Σ_{i=1}^{Q} p(i, k) W(D(r = i), k)
     = Σ_{k=1}^{Q} pk Σ_{i=1}^{Q} p(i | k) W(D(r = i), k)

where

p(i, k) = Prob(the received signal is i and k was transmitted)
p(i | k) = Prob(the received signal is i | k was transmitted)
pk = the a priori probability that k was transmitted
D(r = i) = the decision rule when r = i

Example: Discrete Source / Discrete Channel

Choose the cost function

W(i, k) = 1 - δ_{i,k}, where δ_{i,k} = 1 if i = k and 0 otherwise.

Then

E[W] = Σ_{k=1}^{Q} Σ_{i=1}^{Q} p(i, k) W(D(r = i), k)
     = Σ_{k=1}^{Q} pk Σ_{i=1}^{Q} p(i | k) W(D(r = i), k)
     = Σ_{k=1}^{Q} pk Σ_{i: D(r=i) ≠ k} p(i | k) = Pe

The Optimal Receiver

The objective of the optimal receiver is to assign to each received symbol a decision that minimizes the probability of error.

The Binary Channel: The Decision Rule

The input takes values in the set A = {0, 1}, with P(a = 0) = q and P(a = 1) = 1 - q.

The output takes values in the set B = {x, y}.

[Figure: binary channel with crossover probability p; each input reaches its "own" output with probability 1 - p and the other output with probability p]

What is the decision rule that minimizes the probability of error?

The Binary Channel: The Decision Rule

D(x) is the decision rule when x is received:

- If D(x) = 0, the resulting error probability term is (1 - q)p.
- If D(x) = 1, the resulting error probability term is q(1 - p).

[Figure: the same binary channel, with the decision D(x) = 0 marked at output x]

What is the decision rule that minimizes the error?

How to Make the Decision

Let us minimize the probability of error when x is received: decide D(x) = 1 when

q(1 - p) < (1 - q)p

and D(x) = 0 otherwise. Equivalently,

P(r = x | 0) / P(r = x | 1) = (1 - p)/p < (1 - q)/q = p1/p0  =>  D(x) = 1

The threshold (1 - q)/q = p1/p0 is the ratio of a priori probabilities.
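A sketch of this rule in code, comparing prior times likelihood for the two hypotheses (the values of q and p are illustrative; output "y" is handled the same way, as on the next slide):

```python
def map_decision(received, q, p):
    """Minimum-error decision for the binary channel.
    Inputs 0 and 1 have priors q and 1 - q; crossover probability is p."""
    # Likelihoods of the received symbol under each input hypothesis
    if received == "x":
        like0, like1 = 1 - p, p
    else:  # received == "y"
        like0, like1 = p, 1 - p
    # Decide the hypothesis with the larger (prior * likelihood) product
    return 0 if q * like0 >= (1 - q) * like1 else 1

print(map_decision("x", q=0.6, p=0.1))  # -> 0
print(map_decision("y", q=0.6, p=0.1))  # -> 1
```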

The Decision Rule for y

The same type of decision rule applies: decide D(y) = 1 when

q p < (1 - q)(1 - p)

and D(y) = 0 otherwise. Equivalently,

P(r = y | 0) / P(r = y | 1) = p/(1 - p) < (1 - q)/q = p1/p0  =>  D(y) = 1

The Probability of Error

The error probability depends on the decision rule. Note:

W(a, b) = 0 if a = b, 1 if a ≠ b

Pe = Σ_{k=0}^{1} [ p(x, k) W(D(x), k) + p(y, k) W(D(y), k) ]
   = p0 [ p(x | 0) W(D(x), 0) + p(y | 0) W(D(y), 0) ] + p1 [ p(x | 1) W(D(x), 1) + p(y | 1) W(D(y), 1) ]
   = [ p0 p(x | 0) W(D(x), 0) + p1 p(x | 1) W(D(x), 1) ] + [ p0 p(y | 0) W(D(y), 0) + p1 p(y | 1) W(D(y), 1) ]
