Definition 1 (Optimality)
Estimate m such that the error probability is minimized.
Digital Communications. Chap 04 Ver. 2018.11.06 Po-Ning Chen 4 / 218
Models for analysis
Variance of n_i:

$$
\begin{aligned}
E[\lvert n_i\rvert^2] &= E\Big[\int_0^T n(t)\phi_i^*(t)\,dt \cdot \int_0^T n^*(\tau)\phi_i(\tau)\,d\tau\Big]\\
&= \int_0^T\!\!\int_0^T E[n(t)n^*(\tau)]\,\phi_i^*(t)\,\phi_i(\tau)\,dt\,d\tau\\
&= \int_0^T\!\!\int_0^T \frac{N_0}{2}\,\delta(t-\tau)\,\phi_i^*(t)\phi_i(\tau)\,dt\,d\tau\\
&= \frac{N_0}{2}\int_0^T \phi_i^*(\tau)\phi_i(\tau)\,d\tau\\
&= \frac{N_0}{2}
\end{aligned}
$$
$$
\Rightarrow\ r(t) - \tilde n(t) = \sum_{i=1}^{N} r_i\,\phi_i(t) = \sum_{i=1}^{N} s_{m,i}\,\phi_i(t) + \sum_{i=1}^{N} n_i\,\phi_i(t)
$$
Independence and orthogonality
$$
r = [\,r_1\ \cdots\ r_N\,]^\intercal,\qquad
s_m = [\,s_{m,1}\ \cdots\ s_{m,N}\,]^\intercal,\qquad
n = [\,n_1\ \cdots\ n_N\,]^\intercal
\;\Rightarrow\; r = s_m + n
$$

where the components of n are zero-mean, independent and identically distributed (i.i.d.) Gaussian with variance N_0/2.
where

$$
E[n n^{\mathsf H}] =
\begin{bmatrix}
\sigma^2 & 0 & \cdots & 0\\
0 & \sigma^2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \sigma^2
\end{bmatrix}
$$
$$
f(\tilde r) = \frac{1}{2\pi E[n_x^2]}\, e^{-\frac{\lVert \tilde r-\tilde s\rVert^2}{E[\lvert\tilde n\rvert^2]}}
= \frac{1}{\pi E[\lvert\tilde n\rvert^2]}\, e^{-\frac{\lVert \tilde r-\tilde s\rVert^2}{E[\lvert\tilde n\rvert^2]}}
= f(\tilde r)
$$
$$
= \sum_{m=1}^{M} \int_{D_m} f(r)\,\Pr\{s_m \text{ sent}\mid r \text{ received}\}\,dr
$$
$$
\Pr\{s_m \mid r\} = f(r\mid s_m)\,\frac{\Pr\{s_m\}}{f(r)}
$$
Dm = {r ∈ RN ∶ g (r ) = m} .
$$
P_{b,i} = \sum_{\ell\in\{0,1\}}\ \sum_{s_m:\,b_i=\ell} \Pr\{s_m \text{ sent}\} \int_{B_{i,(1-\ell)}} f(r\mid s_m)\,dr
$$
Pb ≤ Pe ≤ kPb
Proof:

$$
\begin{aligned}
P_b &= \frac{1}{k}\sum_{i=1}^{k}\Pr[b_i \neq \hat b_i]
     = 1 - \frac{1}{k}\sum_{i=1}^{k}\Pr[b_i = \hat b_i]\\
    &\le 1 - \frac{1}{k}\sum_{i=1}^{k}\Pr[(b_1,\ldots,b_k)=(\hat b_1,\ldots,\hat b_k)]\\
    &= \Pr[(b_1,\ldots,b_k)\neq(\hat b_1,\ldots,\hat b_k)] = P_e\\
    &\le \sum_{i=1}^{k}\Pr[b_i \neq \hat b_i] = k\Big(\frac{1}{k}\sum_{i=1}^{k}\Pr[b_i\neq\hat b_i]\Big) = kP_b. \qquad\Box
\end{aligned}
$$
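The sandwich bound P_b ≤ P_e ≤ kP_b can be sanity-checked with a quick Monte Carlo sketch. The memoryless bit-error model below (each of the k bits flips independently with probability 0.05) is an illustrative assumption, not the channel of this chapter:

```python
import random

def simulate(k=4, trials=200_000, flip=0.05, seed=1):
    """Estimate Pb (bit error rate) and Pe (symbol error rate)
    when each of the k bits of a symbol flips independently."""
    random.seed(seed)
    bit_err, sym_err = 0, 0
    for _ in range(trials):
        errs = [random.random() < flip for _ in range(k)]
        bit_err += sum(errs)          # count erroneous bits
        sym_err += any(errs)          # symbol error iff any bit is wrong
    return bit_err / (k * trials), sym_err / trials

Pb, Pe = simulate()
# The sandwich bound Pb <= Pe <= k*Pb must hold:
assert Pb <= Pe <= 4 * Pb
```

Here P_e ≈ 1 − (1 − 0.05)^4 ≈ 0.185 while P_b ≈ 0.05, comfortably inside the bound.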
Example
Example 1
Consider two equiprobable signals s_1 = [0 0]^⊺ and s_2 = [1 1]^⊺ sent through an additive noisy channel with noise n = [n_1 n_2]^⊺ having joint pdf

$$
f(n) = \begin{cases} \exp(-n_1-n_2), & n_1, n_2 \ge 0\\ 0, & \text{otherwise.}\end{cases}
$$

$$
P_{e\mid 2} = \int_{D_1} f(r\mid s_2)\,dr = 0
\;\Rightarrow\; P_e = \Pr\{s_1\}P_{e\mid 1} + \Pr\{s_2\}P_{e\mid 2} = \frac{1}{2}e^{-2}
$$
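Since f(n) factors into two unit-mean exponential densities, the example can be replayed numerically. The ML region used below (decide s_2 iff both received components are at least 1, where f(r|s_2) dominates) follows from the pdfs above; the trial count and seed are arbitrary:

```python
import math, random

random.seed(0)
trials = 400_000
errors = 0
for _ in range(trials):
    s2_sent = random.random() < 0.5
    # n1, n2 i.i.d. Exp(1):  f(n) = exp(-n1 - n2) for n1, n2 >= 0
    n1, n2 = random.expovariate(1.0), random.expovariate(1.0)
    r1, r2 = (1 + n1, 1 + n2) if s2_sent else (n1, n2)
    # ML decision: choose s2 iff both components >= 1
    decide_s2 = (r1 >= 1.0 and r2 >= 1.0)
    errors += (decide_s2 != s2_sent)

Pe_hat = errors / trials
print(Pe_hat, 0.5 * math.exp(-2))   # estimate vs. (1/2) e^{-2} ~ 0.0677
```

Errors occur only when s_1 is sent and both noise components exceed 1, which happens with probability e^{-1}·e^{-1} = e^{-2}.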
Z-channel

[Diagram: s_2 → s_2 with probability 1; s_1 → s_1 with probability 1 − e^{−2}; s_1 → s_2 with probability e^{−2}]

$$
\begin{cases} P_{e\mid 1} = e^{-2}\\ P_{e\mid 2} = 0\end{cases}
$$
Proof:

$$
\begin{aligned}
g_{\text{opt}}(r) &= \arg\max_{1\le m\le M} \Pr\{s_m\mid r\} = \arg\max_{1\le m\le M} \Pr\{s_m\}\,f(r\mid s_m)\\
&= \arg\max_{1\le m\le M} \Pr\{s_m\}\,f(r_1\mid s_m)\,f(r_2\mid r_1)\\
&= \arg\max_{1\le m\le M} \Pr\{s_m\}\,f(r_1\mid s_m)
\end{aligned}
$$
s_m → Channel → r → Preprocessing G → ρ → Detector → m̂

Assume G(r) = ρ could be a many-to-one mapping. Then

$$
\begin{aligned}
g_{\text{opt}}(r,\rho) &= \arg\max_{1\le m\le M}\Pr\{s_m\mid r,\rho\}
= \arg\max_{1\le m\le M}\Pr\{s_m\}\,f(r,\rho\mid s_m)\\
&= \arg\max_{1\le m\le M}\Pr\{s_m\}\,f(r\mid s_m)\,f(\rho\mid r)\\
&= \arg\max_{1\le m\le M}\Pr\{s_m\}\,f(r\mid s_m)\quad\text{independent of }\rho
\end{aligned}
$$
with

$$
E[n_i^2] = E\Big[\Big|\int_0^T n(t)\phi_i(t)\,dt\Big|^2\Big] = \frac{N_0}{2}
$$
The joint probability density function (pdf) of n is given by

$$
f(n) = \Big(\frac{1}{\sqrt{2\pi(N_0/2)}}\Big)^{N}\exp\Big(-\frac{\lVert n\rVert^2}{2(N_0/2)}\Big)
$$
$$
\hat m = \arg\max_{1\le m\le M}\Big[\frac{N_0}{2}\log(P_m) - \frac{1}{2}E_m + r^\intercal s_m\Big]
= \arg\max_{1\le m\le M}\big[\eta_m + r^\intercal s_m\big]
$$

where η_m = (N_0/2) log(P_m) − (1/2)E_m is the bias term.
$$
\hat m = \arg\max_{1\le m\le M}\Big[\frac{N_0}{2}\log(P_m) - \frac{1}{2}\lVert r-s_m\rVert^2\Big]
= \arg\min_{1\le m\le M}\lVert r-s_m\rVert^2 = \arg\min_{1\le m\le M}\lVert r-s_m\rVert
$$

(the second equality holds for equiprobable signals), and

$$
\hat m = \arg\max_{1\le m\le M} r^\intercal s_m.
$$
$$
\begin{aligned}
\hat m &= \arg\min_{1\le m\le M}\lVert r-s_m\rVert^2
= \arg\min_{1\le m\le M}\big(\lVert r\rVert^2 - 2r^\intercal s_m + \lVert s_m\rVert^2\big)\\
&= \arg\max_{1\le m\le M}\Big(r^\intercal s_m - \frac{1}{2}\lVert s_m\rVert^2\Big)\\
&= \arg\max_{1\le m\le M}\Big(\int_0^T r(t)s_m(t)\,dt - \frac{1}{2}\int_0^T \lvert s_m(t)\rvert^2\,dt\Big)
\end{aligned}
$$
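The correlation-metric form of the minimum-distance rule above translates directly into code. A minimal sketch, where the QPSK-like constellation and the received vector are made-up illustrative values:

```python
import numpy as np

def detect(r, signals):
    """Correlation-metric detector: argmax_m ( r . s_m - 0.5*||s_m||^2 ),
    equivalent to minimum Euclidean distance for equiprobable signals."""
    metrics = [r @ s - 0.5 * (s @ s) for s in signals]
    return int(np.argmax(metrics))

signals = [np.array([1.0, 1.0]), np.array([-1.0, 1.0]),
           np.array([-1.0, -1.0]), np.array([1.0, -1.0])]   # QPSK-like points
r = np.array([0.9, 1.2])       # hypothetical noisy received vector
print(detect(r, signals))      # picks the closest signal: index 0
```

Because every signal here has the same energy, the bias term −½‖s_m‖² is constant and the rule reduces to pure correlation maximization.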
$$
\begin{aligned}
D_1 &= \{r\in\mathbb{R} : \eta_1 + r\cdot s_1 > \eta_2 + r\cdot s_2\}\\
&= \Big\{r\in\mathbb{R} : \frac{N_0}{2}\log(p) - \frac{E_b}{2} + r\sqrt{E_b} > \frac{N_0}{2}\log(1-p) - \frac{E_b}{2} - r\sqrt{E_b}\Big\}\\
&= \Big\{r\in\mathbb{R} : r > \frac{N_0}{4\sqrt{E_b}}\log\frac{1-p}{p}\Big\}
\end{aligned}
$$
and

$$
P_e = p\,Q\Big(\frac{\sqrt{E_b}-r_{\text{th}}}{\sqrt{N_0/2}}\Big) + (1-p)\,Q\Big(\frac{\sqrt{E_b}+r_{\text{th}}}{\sqrt{N_0/2}}\Big)
= Q\Big(\sqrt{\frac{2E_b}{N_0}}\Big)
$$

(the last equality holds for p = 1/2, where r_th = 0).
D1 = {r ∈ R2 ∶ ∥r − s 1 ∥ < ∥r − s 2 ∥}
$$
\int_{D_1} f(r\mid s_2)\,dr \quad\text{and}\quad \int_{D_2} f(r\mid s_1)\,dr.
$$
The derivation from Slide 4-42 to this page remains valid even if s_1(t) and s_2(t) are not orthogonal! So it applies equally well to binary antipodal signals.
Example 2 (Binary antipodal)
In this case we have s_1 = √E_b and s_2 = −√E_b, so

$$
d_{12}^2 = \big|2\sqrt{E_b}\big|^2 = 4E_b
$$

Hence

$$
P_e = Q\Bigg(\sqrt{\frac{d_{12}^2}{2N_0}}\Bigg) = Q\Bigg(\sqrt{\frac{2E_b}{N_0}}\Bigg)
$$

We see that BPSK is 3 dB (specifically, 10 log10(2) = 3.010) better than BFSK in error performance.

The term E_b/N_0 is commonly referred to as the signal-to-noise ratio per information bit.
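The 3 dB claim can be checked by evaluating the two Q-function expressions: BPSK at a given E_b/N_0 matches coherent orthogonal BFSK at exactly twice that E_b/N_0 (a 10 log10(2) ≈ 3.01 dB shift). A small sketch using the stdlib complementary error function:

```python
import math

def Q(x):
    """Gaussian tail probability via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_bpsk(ebn0):     # Pe = Q( sqrt(2 Eb/N0) )
    return Q(math.sqrt(2.0 * ebn0))

def pe_bfsk(ebn0):     # Pe = Q( sqrt(Eb/N0) ), coherent orthogonal BFSK
    return Q(math.sqrt(ebn0))

# BPSK at Eb/N0 equals BFSK at 2*Eb/N0, i.e. a 3.01 dB advantage:
for ebn0 in (1.0, 2.0, 5.0):
    assert math.isclose(pe_bpsk(ebn0), pe_bfsk(2.0 * ebn0))
```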
[Figure: P_e versus E_b/N_0 (dB)]
$$
r^\intercal s_m = \int_0^T r(t)s_m(t)\,dt
$$
Then

$$
= \frac{2}{N_0}\int_{-\infty}^{\infty}\big|S(f)\,e^{\imath 2\pi f T}\big|^2\,df.
$$
The Cauchy–Schwarz inequality holds with equality iff
$$
P_e = \frac{1}{M}\sum_{m=1}^{M}\int_{D_m^c} f(r\mid s_m)\,dr
$$

$$
\begin{aligned}
D_m^c &= \{r : f(r\mid s_m) \le f(r\mid s_k) \text{ for some } k\neq m\}\\
&= \{r : f(r\mid s_m)\le f(r\mid s_1) \text{ or } \cdots \text{ or } f(r\mid s_m)\le f(r\mid s_{m-1})\\
&\qquad\ \text{or } f(r\mid s_m)\le f(r\mid s_{m+1}) \text{ or } \cdots \text{ or } f(r\mid s_m)\le f(r\mid s_M)\}
\end{aligned}
$$

$$
P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M\\ m'\neq m}} \Pr\{E_{m\to m'}\mid s_m\}\quad\text{(Union bound)}
$$
Let

$$
S_1 = \sum_{i=1}^{N}\Pr(A_i),\qquad\vdots\qquad
S_k = \sum_{i_1<i_2<\cdots<i_k}\Pr(A_{i_1}\cap\cdots\cap A_{i_k}),\qquad\vdots
$$
where dm,m′ = ∥s m − s m′ ∥.
$$
\left.
\begin{aligned}
L_2(x) &= \frac{e^{-x^2/2}}{\sqrt{2\pi}\,x}\Big(1-\frac{1}{x^2}\Big)\\
L_4(x) &= \frac{e^{-x^2/2}}{\sqrt{2\pi}\,x}\Big(1-\frac{1}{x^2}+\frac{1\cdot 3}{x^4}-\frac{1\cdot 3\cdot 5}{x^6}\Big)\\
L(x) &= \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\Big(\frac{x}{1+x^2}\Big)
\end{aligned}
\right\}
\le Q(x) \le
\left\{
\begin{aligned}
U_1(x) &= \frac{e^{-x^2/2}}{\sqrt{2\pi}\,x}\\
U_3(x) &= \frac{e^{-x^2/2}}{\sqrt{2\pi}\,x}\Big(1-\frac{1}{x^2}+\frac{1\cdot 3}{x^4}\Big)\\
U(x) &= \frac{1}{2}\,e^{-x^2/2}
\end{aligned}
\right.
$$
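These upper and lower bounds on Q(x) are easy to verify numerically against the exact value computed from the complementary error function; the sample points below are arbitrary:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def U1(x):            # simple upper bound
    return math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * x)

def L2(x):            # two-term lower bound
    return U1(x) * (1.0 - 1.0 / x**2)

def U(x):             # Chernoff-style upper bound
    return 0.5 * math.exp(-x * x / 2)

def L_rat(x):         # rational lower bound (1/sqrt(2pi)) e^{-x^2/2} x/(1+x^2)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * x / (1 + x * x)

for x in (1.5, 2.0, 3.0, 5.0):
    assert L2(x) <= Q(x) <= U1(x)
    assert L_rat(x) <= Q(x) <= U(x)
```

For large x the asymptotic pair L2/U1 becomes extremely tight, while U(x) stays loose but is the easiest to manipulate analytically (it drives bounds 2, 4 and 5 below).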
Four union bounds

For simplicity, we employ Q(x) ≤ U(x) = ½e^{−x²/2} and obtain

$$
P_e \le \underbrace{\frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M\\ m'\neq m}} Q\Bigg(\sqrt{\frac{d_{m,m'}^2}{2N_0}}\Bigg)}_{\text{bound 1}}
\le \underbrace{\frac{1}{2M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M\\ m'\neq m}} \exp\Big(-\frac{d_{m,m'}^2}{4N_0}\Big)}_{\text{bound 2}}
$$
Define

$$
d_{\min} = \min_{m\neq m'} d_{m,m'} = \min_{m\neq m'}\lVert s_m - s_{m'}\rVert
$$

Then we have

$$
Q\Bigg(\sqrt{\frac{d_{m,m'}^2}{2N_0}}\Bigg) \le Q\Bigg(\sqrt{\frac{d_{\min}^2}{2N_0}}\Bigg)
\quad\text{and}\quad
\exp\Big(-\frac{d_{m,m'}^2}{4N_0}\Big) \le \exp\Big(-\frac{d_{\min}^2}{4N_0}\Big)
$$

and hence

$$
P_e \le \underbrace{(M-1)\,Q\Bigg(\sqrt{\frac{d_{\min}^2}{2N_0}}\Bigg)}_{\text{bound 3: minimum distance bound}}
\le \underbrace{\frac{M-1}{2}\exp\Big(-\frac{d_{\min}^2}{4N_0}\Big)}_{\text{bound 4: minimum distance bound}}.
$$
5th union bound: distance enumerator function

Then,

$$
\text{bound 2} = \underbrace{\frac{1}{2M}\,T\big(e^{-1/(4N_0)}\big)}_{\text{bound 5}}.
$$

$$
T(X) = 48X^{d_{\min}^2} + 36X^{2d_{\min}^2} + 32X^{4d_{\min}^2} + 48X^{5d_{\min}^2} + 16X^{8d_{\min}^2} + 16X^{9d_{\min}^2} + 24X^{10d_{\min}^2} + 16X^{13d_{\min}^2} + 4X^{18d_{\min}^2}
$$

$$
P_e \le \frac{1}{2M}\,T\big(e^{-1/(4N_0)}\big) = \frac{1}{32}\,T\big(e^{-1/(4N_0)}\big).
$$
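The distance enumerator bound is just a finite sum, so it is cheap to evaluate. The sketch below takes the multiplicities and the squared distances (in units of d²_min) from the slide's T(X) for the 16QAM constellation; the value of d²_min/N_0 passed in is an arbitrary illustration:

```python
import math

# Distance enumerator of the slide's 16QAM constellation:
# {squared distance / dmin^2 : multiplicity}
enum = {1: 48, 2: 36, 4: 32, 5: 48, 8: 16, 9: 16, 10: 24, 13: 16, 18: 4}

def bound5(dmin2_over_N0, M=16):
    """Evaluate bound 5 = (1/2M) * T(exp(-1/(4 N0))),
    with distances measured in units of dmin^2."""
    return sum(mult * math.exp(-k * dmin2_over_N0 / 4.0)
               for k, mult in enum.items()) / (2 * M)

# Sanity check: multiplicities must cover all M(M-1) ordered pairs.
assert sum(enum.values()) == 16 * 15
print(bound5(8.0))
```

As expected for an error bound, the value decreases monotonically as d²_min/N_0 grows.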
Example 5 (16QAM)

For 16QAM, N_min = M; hence,

$$
P_e \ge \frac{N_{\min}}{M}\,Q\Bigg(\sqrt{\frac{d_{\min}^2}{2N_0}}\Bigg) = Q\Bigg(\sqrt{\frac{d_{\min}^2}{2N_0}}\Bigg).
$$

$$
P_e \ge Q\Bigg(\sqrt{\frac{4E_{b\text{avg}}}{5N_0}}\Bigg)
$$
[Figure: P_e versus E_bavg/N_0 (dB)]
16QAM approximations
Suppose we take only the first terms of bound 1 and bound 2. Then approximations (instead of bounds) are obtained.
[Figure: P_e versus E_bavg/N_0 (dB)]
4.3 Optimal detection and error
probability for bandlimited signaling
$$
\begin{aligned}
P_e &= \frac{1}{M}\sum_{m=1}^{M}\Pr\{\text{error}\mid m \text{ sent}\}\\
&= \frac{1}{M}\Big[(M-2)\cdot 2Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big) + 2\cdot Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)\Big]\\
&= \frac{2(M-1)}{M}\,Q\Bigg(\sqrt{\frac{d_{\min}^2}{2N_0}}\Bigg)
\qquad\Big(\text{Note } E_{b\text{avg}} = \frac{1}{12}\,\frac{M^2-1}{\log_2(M)}\,d_{\min}^2.\Big)\\
&= \frac{2(M-1)}{M}\,Q\Bigg(\sqrt{\frac{6\log_2(M)}{M^2-1}\,\frac{E_{b\text{avg}}}{N_0}}\Bigg)
\end{aligned}
$$

$$
\frac{6\log_2(M)}{M^2-1}\,\frac{E_{b\text{avg}}}{N_0} \approx \frac{6\log_2(2M)}{(2M)^2-1}\,\frac{E_{b\text{avg}}^{\text{(new)}}}{N_0}
\;\Rightarrow\; E_{b\text{avg}}^{\text{(new)}} \approx 4E_{b\text{avg}}
$$
√
By symmetry we can assume s 0 = ( E, 0) was
transmitted.
The received signal vector r is
√ ⊺
r = ( E + n1 , n2 )
$$
f(n_1,n_2) = \Big(\frac{1}{\sqrt{2\pi(N_0/2)}}\Big)^{2}\exp\Big(-\frac{n_1^2+n_2^2}{2(N_0/2)}\Big)
$$

Thus we have

$$
f\big(r=(r_1,r_2)\mid s_0\big) = \frac{1}{\pi N_0}\exp\Big(-\frac{(r_1-\sqrt{E})^2+r_2^2}{N_0}\Big)
$$
Define V = √(r_1² + r_2²) and Θ = arctan(r_2/r_1). Then

$$
f(v,\theta\mid s_0) = \frac{v}{\pi N_0}\exp\Big(-\frac{v^2+E-2\sqrt{E}\,v\cos\theta}{N_0}\Big)
$$
$$
\begin{aligned}
\Pr\{\text{error}\mid s_0\}
&= 1-\iint_{D_0} f(v,\theta\mid s_0)\,dv\,d\theta\\
&= 1-\int_{-\pi/M}^{\pi/M}\underbrace{\int_0^{\infty}\frac{v}{\pi N_0}\exp\Big(-\frac{v^2+E-2\sqrt{E}\,v\cos\theta}{N_0}\Big)dv}_{f(\theta\mid s_0)}\,d\theta
\end{aligned}
$$

For M = 4,

$$
P_e = 1-\Bigg[1-Q\Bigg(\sqrt{\frac{2E_b}{N_0}}\Bigg)\Bigg]^2
$$
When M > 4, there is no simple Q-function expression for P_e! However, we can obtain a good approximation.

Same as PAM: the larger M is, the worse the symbol performance!!!

At small M, increasing by 1 bit only requires an additional 4 dB (such as M = 4 → 8). The difference from M = 2 to 4 is very limited!

The true winner will be more "clear" from the BER vs. E_b/N_0 plot.
$$
S_{\text{PAM}_i} = \Big\{\pm\frac{1}{2}d_{\min},\ \pm\frac{3}{2}d_{\min},\ \cdots,\ \pm\frac{M_i-1}{2}d_{\min}\Big\}
$$

$$
S_{\text{QAM}} = \{(x,y) : x\in S_{\text{PAM}_1} \text{ and } y\in S_{\text{PAM}_2}\}
$$

From Slide 4-75, we have

$$
E[\lvert x\rvert^2] = \frac{M_1^2-1}{12}\,d_{\min}^2
\quad\text{and}\quad
E[\lvert y\rvert^2] = \frac{M_2^2-1}{12}\,d_{\min}^2
$$

Thus for M-ary QAM we have

$$
E_{b\text{avg}} = \frac{E[\lvert x\rvert^2]+E[\lvert y\rvert^2]}{\log_2(M)} = \frac{(M_1^2-1)+(M_2^2-1)}{12\log_2 M}\,d_{\min}^2
$$
Hence

$$
P_{e,M_i\text{-PAM}} = 2\Big(1-\frac{1}{M_i}\Big)Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big) \le 2Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)
$$

we have

$$
P_{e,M\text{-QAM}} \le P_{e,M_1\text{-PAM}} + P_{e,M_2\text{-PAM}} \le 4Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)
= 4Q\Bigg(\sqrt{\Big(\frac{6\log_2 M}{M_1^2+M_2^2-2}\Big)\frac{E_{b\text{avg}}}{N_0}}\Bigg)
$$
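The final QAM bound is a one-liner to evaluate. A minimal sketch; the 10 dB operating point chosen for the example is arbitrary:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qam_pe_bound(M1, M2, ebavg_n0):
    """Union-type bound  Pe <= 4 Q( sqrt( 6 log2(M)/(M1^2+M2^2-2) * Ebavg/N0 ) )
    for rectangular M1 x M2 QAM (M = M1*M2)."""
    M = M1 * M2
    arg = 6.0 * math.log2(M) / (M1**2 + M2**2 - 2) * ebavg_n0
    return 4.0 * Q(math.sqrt(arg))

# 16QAM (M1 = M2 = 4) at Ebavg/N0 = 10 dB:
print(qam_pe_bound(4, 4, 10**(10 / 10)))
```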
M                                   4      8      16     32     64
10 log10(3/[2(M − 1) sin²(π/M)])    0      1.65   4.20   7.02   9.95
4PSK = 4QAM
16PSK is 4 dB poorer than 16QAM
32PSK performs 7 dB poorer than 32QAM
64PSK performs 10 dB poorer than 64QAM
4.3-4 Demodulation and detection
For the bandpass signals, we use two basis functions

$$
\phi_1(t) = \sqrt{\frac{2}{E_g}}\,g(t)\cos(2\pi f_c t),
\qquad
\phi_2(t) = -\sqrt{\frac{2}{E_g}}\,g(t)\sin(2\pi f_c t)
$$

for 0 ≤ t < T.
Note:
We usually “transform” the bandpass signal to its
lowpass equivalent signal, and then “vectorize” the
lowpass equivalent signal.
This section shows that we can actually “vectorize” the
bandpass signal directly.
Transmission of PAM signals
$$
S_{\text{PAM}} = \Big\{\pm\frac{1}{2}d_{\min},\ \pm\frac{3}{2}d_{\min},\ \cdots,\ \pm\frac{M-1}{2}d_{\min}\Big\}
$$

where

$$
d_{\min} = \sqrt{\frac{12\log_2 M}{M^2-1}\,E_{b\text{avg}}}
$$
Hence the (bandpass) M-ary PAM waveforms are
Define

$$
r = \langle r(t),\phi_1(t)\rangle = \int_0^T r(t)\phi_1^*(t)\,dt
$$

$$
\hat m = \arg\max_{1\le m\le M}\Big[r\cdot s_m + \frac{N_0}{2}\log P_m - \frac{1}{2}\lvert s_m\rvert^2\Big]
$$

where P_m = Pr{s_m}.
$$
S_{\text{PAM}}(t) = \big\{\mathrm{Re}\big[s_{m,\ell}(t)e^{\imath 2\pi f_c t}\big] : s_{m,\ell}(t)\in S_{\text{PAM},\ell}(t)\big\}
$$

$$
S_{\text{PSK}}(t) = \Big\{s_m(t)=\sqrt{E}\Big[\cos\Big(\frac{2\pi k}{M}\Big)\phi_1(t)+\sin\Big(\frac{2\pi k}{M}\Big)\phi_2(t)\Big] : k\in\mathbb{Z}_M\Big\}
$$

where Z_M = {0, 1, 2, ..., M − 1} and φ_1(t), φ_2(t) are defined in Slide 4-96.

$$
S_{\text{PSK}}(t) = \big\{\mathrm{Re}\big[s_{k,\ell}(t)e^{\imath 2\pi f_c t}\big] : s_{k,\ell}(t)\in S_{\text{PSK},\ell}(t)\big\}
$$
Set for 1 ≤ n ≤ N,

$$
r_{n,\ell} = \langle r_\ell(t),\phi_{n,\ell}(t)\rangle = \int_0^T r_\ell(t)\phi_{n,\ell}^*(t)\,dt
$$

$$
n_{n,\ell} = \langle n_\ell(t),\phi_{n,\ell}(t)\rangle = \int_0^T n_\ell(t)\phi_{n,\ell}^*(t)\,dt
$$

Hence we have r_ℓ = s_{m,ℓ} + n_ℓ and

$$
\hat m = \arg\max_{1\le m\le M}\Big\{\mathrm{Re}\big\{r_\ell^\dagger s_{m,\ell}\big\} + N_0\log P_m - \frac{1}{2}\lVert s_{m,\ell}\rVert^2\Big\}
$$
Since n_ℓ is complex (cf. Slide 4-11), the multiplicative constant a on σ² (in the equation below) is equal to 1:

$$
\begin{aligned}
\hat m &= \arg\max_{1\le m\le M} P_m f(r_\ell\mid s_{m,\ell})
= \arg\max_{1\le m\le M} P_m\exp\Bigg(-\frac{\lVert r_\ell-s_{m,\ell}\rVert^2}{a\sigma^2}\Bigg)\\
&= \arg\max_{1\le m\le M}\Big(\mathrm{Re}\big\{r_\ell^\dagger s_{m,\ell}\big\} + \frac{1}{2}a\sigma^2\log P_m - \frac{1}{2}\lVert s_{m,\ell}\rVert^2\Big)
\end{aligned}
$$

where σ² = 2N_0 (for baseband noise) and

$$
E\big[n_\ell n_\ell^{\mathsf H}\big] = \begin{bmatrix}\sigma^2 & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & \sigma^2\end{bmatrix}.
$$

The same derivation can be done for the passband signal with a = 2 and σ² = N_0/2 (cf. Slide 4-27). This coincides with the filtered white noise derivation on Slide 2-72.
simulation?
- This is equivalent to

$$
r_\ell = \Big\langle r_\ell(t), \frac{g(t)}{\lVert g(t)\rVert}\Big\rangle
= \Big\langle s_{m,\ell}(t), \frac{g(t)}{\lVert g(t)\rVert}\Big\rangle + \Big\langle n_\ell(t), \frac{g(t)}{\lVert g(t)\rVert}\Big\rangle
= s_{m,\ell} + n_\ell = \pm\sqrt{2E_b} + n_\ell
$$

Equivalently, since only the real part contains information,

$$
\mathrm{Re}\Big\{\frac{1}{\sqrt 2}\,r_\ell\Big\} = \pm\sqrt{E_b} + \frac{1}{\sqrt 2}\,n_{x,\ell},
\quad\text{as } n_\ell = n_{x,\ell}+\imath\, n_{y,\ell}
$$

where E[(n_{x,ℓ}/√2)²] = ½E[n²_{x,ℓ}] = ½(½E[|n_ℓ|²]) = ½(½(2N_0)) = N_0/2.

- In most cases, we will use r = ±√E_b + n directly in both analysis and simulation (in our technical papers), rather than the baseband equivalent system r_ℓ = ±√(2E_b) + n_ℓ.
with E[n_x²] = E[n_y²] = N_0/2, where n_x and n_y are the passband projection noises.

- Rotating by 45 degrees does not change the noise statistics and yields

$$
r_x + \imath\, r_y = \pm\sqrt{\frac{E}{2}} \pm \imath\sqrt{\frac{E}{2}} + (n_x+\imath\, n_y) = \pm\sqrt{E_b} \pm \imath\sqrt{E_b} + (n_x+\imath\, n_y).
$$
$$
\hat m = \arg\max_{1\le m\le M} r^\intercal s_m
$$

It means

$$
\Pr\{\text{Correct}\mid s_1\} = \Pr\big\{\sqrt{E}+n_1 > n_2,\ \cdots,\ \sqrt{E}+n_1 > n_M\big\}.
$$

By symmetry, we have Pr{Correct|s_1} = Pr{Correct|s_2} = ⋯ = Pr{Correct|s_M}; hence

$$
\Pr\{\text{Correct}\} = \Pr\big\{\sqrt{E}+n_1 > n_2,\ \cdots,\ \sqrt{E}+n_1 > n_M\big\}.
$$
$$
\begin{aligned}
P_e &= 1-P_c\\
&= 1-\int_{-\infty}^{\infty}\Bigg[1-Q\Bigg(\frac{n_1+\sqrt{E}}{\sqrt{N_0/2}}\Bigg)\Bigg]^{M-1} f(n_1)\,dn_1\\
&= \int_{-\infty}^{\infty}\Bigg(1-\Bigg[1-Q\Bigg(\frac{n_1+\sqrt{E}}{\sqrt{N_0/2}}\Bigg)\Bigg]^{M-1}\Bigg)\frac{1}{\sqrt{\pi N_0}}\,e^{-\frac{n_1^2}{N_0}}\,dn_1\\
&= \int_{-\infty}^{\infty}\Big(1-\big[1-Q(x)\big]^{M-1}\Big)\frac{1}{\sqrt{2\pi}}\,e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx
\end{aligned}
$$

where x = (n_1 + √E)/√(N_0/2), γ_b = E_b/N_0, and k = log_2(M).
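The final single-integral form lends itself to simple numerical evaluation with the trapezoidal rule; the integration limits and step count below are arbitrary but generous. For M = 2 the integral must reduce to the coherent BFSK result Q(√γ_b), which gives a built-in consistency check:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_orthogonal(M, gamma_b, lo=-10.0, hi=20.0, steps=4000):
    """Pe = \\int (1 - (1-Q(x))^{M-1}) phi(x - sqrt(2 k gamma_b)) dx,
    with k = log2(M), via the trapezoidal rule."""
    k = math.log2(M)
    mu = math.sqrt(2.0 * k * gamma_b)
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        phi = math.exp(-0.5 * (x - mu)**2) / math.sqrt(2.0 * math.pi)
        val = (1.0 - (1.0 - Q(x))**(M - 1)) * phi
        total += val * (0.5 if i in (0, steps) else 1.0)
    return total * h

# Consistency check: M = 2 coincides with coherent BFSK Pe = Q(sqrt(gamma_b)).
print(pe_orthogonal(2, 4.0), Q(math.sqrt(4.0)))
```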
We then have

$$
P_b = \frac{E[e]}{k} = \frac{1}{k}\sum_{e=1}^{k} e\binom{k}{e}\frac{P_e}{M-1}
= \frac{1}{2}\cdot\frac{2^k}{2^k-1}\,P_e \approx \frac{1}{2}P_e
$$
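The combinatorial identity behind P_b = (2^{k−1}/(2^k−1)) P_e can be checked exactly, since Σ_e e·C(k,e) = k·2^{k−1}:

```python
import math

def pb_over_pe(k):
    """(1/k) * sum_{e=1}^{k} e * C(k, e) / (2^k - 1);
    should equal 2^(k-1) / (2^k - 1)."""
    s = sum(e * math.comb(k, e) for e in range(1, k + 1)) / (2**k - 1)
    return s / k

for k in range(1, 11):
    assert math.isclose(pb_over_pe(k), 2**(k - 1) / (2**k - 1))
```

For k = 1 the ratio is exactly 1, and it decreases toward 1/2 as k grows, which is the P_b ≈ P_e/2 approximation.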
Different from PAM/PSK: the larger M is, the better the performance!!!

For example, to achieve P_b = 10^{−5}, one needs γ_b = 12 dB for M = 2, but it only requires γ_b = 6 dB for M = 64: a 6 dB saving in transmission power!!!
$$
\lim_{M\to\infty}P_e = \lim_{M\to\infty}P_b = 0?
$$

$$
\Rightarrow\; P_e \le \frac{1}{\sqrt{2\pi}}\,\min_{x_0>0}\Bigg(\int_{-\infty}^{x_0} e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx + M\int_{x_0}^{\infty} e^{-x^2/2}\,e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx\Bigg)
$$

$$
\begin{aligned}
&\frac{\partial}{\partial x_0}\Bigg(\int_{-\infty}^{x_0} e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx + M\int_{x_0}^{\infty} e^{-x^2/2}\,e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx\Bigg)\\
&= e^{-\frac{(x_0-\sqrt{2k\gamma_b})^2}{2}} - M e^{-x_0^2/2}\,e^{-\frac{(x_0-\sqrt{2k\gamma_b})^2}{2}}
\begin{cases} >0, & x_0 > x_0^*\\ <0, & 0<x_0<x_0^*\end{cases}
\end{aligned}
$$

where x_0^* = √(2k log(2)).
$$
R > C = W\log_2\Big(1+\frac{RE_b}{N_0 W}\Big) = W\log_2\Big(1+\frac{R}{W}\gamma_b\Big)
\;\Leftrightarrow\;
\gamma_b < \frac{2^{R/W}-1}{R/W}
$$

For M-ary (orthogonal) FSK, W = M/(2T) and R = log_2(M)/T. Hence, R/W = 2 log_2(M)/M = 2k/2^k.

$$
\text{If } \gamma_b < \inf_{k\ge 1}\frac{2^{2k/2^k}-1}{2k/2^k} = \log(2),\ \text{then } P_e \text{ is bounded away from zero.}
$$
$$
E_{\text{Simplex}} = \lVert s - E[s]\rVert^2 = \frac{M-1}{M}\,E_{\text{FSK}}
$$

Thus we can reduce the transmission power without affecting the constellation structure; hence, the performance curve will be shifted left by 10 log10[M/(M − 1)] dB.

M                     2      4      8      16     32
10 log10[M/(M − 1)]   3.01   1.25   0.58   0.28   0.14
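The tabulated dB savings follow directly from the energy ratio; a one-line reproduction:

```python
import math

# Energy saving of simplex over orthogonal signaling: 10*log10(M/(M-1)) dB
savings = {M: round(10 * math.log10(M / (M - 1)), 2) for M in (2, 4, 8, 16, 32)}
print(savings)   # {2: 3.01, 4: 1.25, 8: 0.58, 16: 0.28, 32: 0.14}
```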
$$
(\text{Bandpass})\quad S_{\text{BO}} = \big\{[\pm\sqrt{E},0,\cdots,0]^\intercal,\ \cdots,\ [0,\cdots,0,\pm\sqrt{E}]^\intercal\big\}
$$

where M = 2N. For convenience, we index the symbols by m = −N, ..., −1, 1, ..., N, where for s_m = [s_1, ..., s_N]^⊺ we have

$$
s_{\lvert m\rvert} = \mathrm{sgn}(m)\cdot\sqrt{E}
\quad\text{and}\quad s_i = 0 \text{ for } i\neq \lvert m\rvert \text{ and } 1\le i\le N.
$$
$$
\begin{aligned}
P_e &= 1-P_c\\
&= 1-\int_{-\sqrt{E}}^{\infty}\frac{1}{\sqrt{\pi N_0}}\Bigg[1-2Q\Bigg(\frac{n_1+\sqrt{E}}{\sqrt{N_0/2}}\Bigg)\Bigg]^{M/2-1} e^{-\frac{n_1^2}{N_0}}\,dn_1\\
&= \frac{1}{1-Q(\sqrt{2k\gamma_b})}\int_0^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx
 - \int_0^{\infty}\big[1-2Q(x)\big]^{M/2-1}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx\\
&= \int_0^{\infty}\Bigg(\frac{1}{1-Q(\sqrt{2k\gamma_b})}-\big[1-2Q(x)\big]^{M/2-1}\Bigg)\frac{1}{\sqrt{2\pi}}\,e^{-\frac{(x-\sqrt{2k\gamma_b})^2}{2}}\,dx
\end{aligned}
$$

where x = (n_1 + √E)/√(N_0/2), E = kE_b, and γ_b = E_b/N_0.
2
≤ 12η
∫−∞ ∣X (f )∣ df
∞
Since rate R = (1/T) × log_2 M, we have for M-ary signaling

$$
\frac{R}{W} = \frac{\log_2(M)}{T}\cdot\frac{2T}{N} = \frac{2\log_2(M)}{N}
$$

where

log_2(M) is the number of bits transmitted at a time
N is usually (see SSB PAM and DSB PAM as counterexamples) the dimensionality of the constellation

Thus R/W can be regarded as bits/dimension (it is actually measured in bits per second per Hz).

$$
\Big(\frac{R}{W}\Big)_{\text{FSK}} = \frac{2\log_2(M)}{M} \le 1
$$
Personal comments:
Hence,
for SSB PAM, N = 1.
for QAM/PSK/DSB PAM as well as DPSK, N = 2.
for orthogonal signals, N = M.
for bi-orthogonal signals, N = M/2.
Theorem 6
Given max power constraint P over bandwidth W , the
maximal number of bits per channel use, which can be sent
over the channel reliably, is
$$
C = \frac{1}{2}\log_2\Big(1+\frac{P}{WN_0}\Big)\ \text{bits/channel use}
$$

$$
\frac{R}{W} < \log_2\Big(1+\frac{P}{N_0 W}\Big).
$$
With

$$
E_b = \frac{E}{\log_2 M} = \frac{PT}{\log_2(M)} = \frac{P}{R},
$$

we have

$$
\frac{R}{W} < \log_2\Big(1+\frac{E_b}{N_0}\,\frac{R}{W}\Big).
$$

We then obtain, as previously did,

$$
\frac{E_b}{N_0} > \frac{2^{R/W}-1}{R/W}
$$
Then

$$
D_1 = \Bigg\{r : \int_0^{\infty} e^{-\frac{(r-\theta\sqrt{E_b})^2}{N_0}} f(\theta)\,d\theta > \int_0^{\infty} e^{-\frac{(r+\theta\sqrt{E_b})^2}{N_0}} f(\theta)\,d\theta\Bigg\}
$$

$$
\Leftrightarrow\;\int_0^{\infty}\Big[e^{-\frac{(r-\theta\sqrt{E_b})^2}{N_0}} - e^{-\frac{(r+\theta\sqrt{E_b})^2}{N_0}}\Big] f(\theta)\,d\theta > 0
$$

$$
\Leftrightarrow\;\int_0^{\infty} e^{-\frac{r^2+\theta^2 E_b}{N_0}}\underbrace{\Big[e^{\frac{2\theta\sqrt{E_b}\,r}{N_0}} - e^{-\frac{2\theta\sqrt{E_b}\,r}{N_0}}\Big]}_{>0\ \text{iff}\ r>0} f(\theta)\,d\theta > 0
\;\Leftrightarrow\; r > 0,
$$

we have

$$
D_1 = \{r : r>0\} \;\Rightarrow\; D_1^c = \{r : r\le 0\}
$$
$$
\begin{aligned}
r(t) &= s_m(t-t_d) + n(t)\\
&= \mathrm{Re}\big\{s_{m,\ell}(t-t_d)\,e^{\imath 2\pi f_c(t-t_d)}\big\} + n(t)\\
&= \mathrm{Re}\big\{[s_{m,\ell}(t-t_d)\exp\{\imath\underbrace{(-2\pi f_c t_d)}_{=\phi}\} + n_\ell(t)]\,e^{\imath 2\pi f_c t}\big\}
\end{aligned}
$$
$$
= \arg\max_{1\le m\le M} P_m\, e^{-\frac{E_m}{N_0}}\, I_0\Bigg(\frac{\lvert r_\ell^\dagger s_{m,\ell}\rvert}{N_0}\Bigg)
$$

where I_0(x) = (1/2π)∫_0^{2π} e^{x cos φ} dφ is the modified Bessel function of the first kind and order zero.

$$
\hat m = \arg\max_{1\le m\le M}\lvert r_\ell^\dagger s_{m,\ell}\rvert
\quad\text{if}\quad
\begin{cases}\langle s_{m,\ell}(t-t_d),\phi_{i,\ell}(t)\rangle \approx \langle s_{m,\ell}(t),\phi_{i,\ell}(t)\rangle\\ \phi \text{ uniform over } [0,2\pi)\end{cases}
$$

$$
r_\ell = s_{m,\ell}\,e^{\imath\phi} + n_\ell
$$
or equivalently

$$
\begin{aligned}
&= \frac{2E_s}{T}\int_0^T e^{\imath 2\pi(m-1)\Delta f\,t}\, e^{-\imath 2\pi(m'-1)\Delta f\,t}\,dt
= \frac{2E_s}{T}\int_0^T e^{\imath 2\pi(m-m')\Delta f\,t}\,dt\\
&= \frac{2E_s}{T}\Bigg(\frac{e^{\imath 2\pi(m-m')\Delta f T}-1}{\imath 2\pi(m-m')\Delta f}\Bigg)
= 2E_s\, e^{\imath\pi(m-m')\Delta f T}\,\mathrm{sinc}\big[(m-m')\Delta f T\big]
\end{aligned}
$$

Hence, if Δf = k/T,

$$
= \begin{cases}
\Big|e^{\imath\phi}(2E_s) + \int_0^T n_\ell(t)s_{m,\ell}^*(t)\,dt\Big|, & m'=m\\[4pt]
\Big|\int_0^T n_\ell(t)s_{m',\ell}^*(t)\,dt\Big|, & m'\neq m
\end{cases}
$$
Coherent detection of FSK
$$
r_\ell(t) = s_{m,\ell}(t) + n_\ell(t)
$$

$$
\begin{aligned}
\hat m &= \arg\max_{1\le m'\le M}\mathrm{Re}\big[r_\ell^\dagger s_{m',\ell}\big]\\
&= \arg\max_{1\le m'\le M}\mathrm{Re}\Big[\int_0^T s_{m,\ell}(t)s_{m',\ell}^*(t)\,dt + \int_0^T n_\ell(t)s_{m',\ell}^*(t)\,dt\Big]
\end{aligned}
$$

Hence, with g(t) = √(2E_s/T)[u_{−1}(t) − u_{−1}(t − T)],

$$
\mathrm{Re}\Big[\int_0^T s_{m,\ell}(t)s_{m',\ell}^*(t)\,dt\Big]
= 2E_s\cos\big(\pi(m-m')\Delta f T\big)\,\mathrm{sinc}\big((m-m')\Delta f T\big)
= 2E_s\,\mathrm{sinc}\big(2(m-m')\Delta f T\big)
$$

Here we only need Δf = k/(2T) to get E{Re[r_ℓ† s_{m',ℓ}]} = 0 for m' ≠ m, similar to Slide 3-35.
4.5-3 Error probability of
orthogonal signaling with
noncoherent detection
$$
\lvert s_{m,\ell}^\dagger r_\ell\rvert = \begin{cases}
\big|2E_s e^{\imath\phi} + s_{1,\ell}^\dagger n_\ell\big|, & m=1\\
\big|s_{m,\ell}^\dagger n_\ell\big|, & 2\le m\le M
\end{cases}
$$
Recall n` is a complex Gaussian random vector with
Thus

$$
f_{R_1}(r_1) = \frac{r_1}{\sigma^2}\,I_0\Big(\frac{s r_1}{\sigma^2}\Big)\,e^{-\frac{r_1^2+s^2}{2\sigma^2}},\quad r_1>0
$$

where σ² = 2E_s N_0 and s = 2E_s.

$$
f_{R_m}(r_m) = \frac{r_m}{\sigma^2}\,e^{-\frac{r_m^2}{2\sigma^2}},\quad r_m>0
$$

$$
\begin{aligned}
&\int_0^{\infty}\frac{r_1}{\sigma^2}\,I_0\Big(\frac{s r_1}{\sigma^2}\Big)\,e^{-\frac{(n+1)r_1^2+s^2}{2\sigma^2}}\,dr_1\\
&= \int_0^{\infty}\frac{r'}{\sigma^2(n+1)}\,I_0\Big(\frac{s' r'}{\sigma^2}\Big)\,e^{-\frac{r'^2+(n+1)s'^2}{2\sigma^2}}\,dr'\\
&= \frac{1}{n+1}\,e^{-\frac{n s'^2}{2\sigma^2}}\underbrace{\int_0^{\infty}\frac{r'}{\sigma^2}\,I_0\Big(\frac{s' r'}{\sigma^2}\Big)\,e^{-\frac{r'^2+s'^2}{2\sigma^2}}\,dr'}_{=1}\\
&= \frac{1}{n+1}\,e^{-\frac{n s'^2}{2\sigma^2}} = \frac{1}{n+1}\,e^{-\frac{n s^2}{2\sigma^2(n+1)}}
\end{aligned}
$$
$$
P_c = \sum_{n=0}^{M-1}\frac{(-1)^n}{n+1}\binom{M-1}{n}\,e^{-\big(\frac{n}{n+1}\big)\frac{E_s}{N_0}}
= 1+\sum_{n=1}^{M-1}\frac{(-1)^n}{n+1}\binom{M-1}{n}\,e^{-\big(\frac{n}{n+1}\big)\frac{E_s}{N_0}}
$$

Thus

$$
P_e = 1-P_c = \sum_{n=1}^{M-1}\frac{(-1)^{n+1}}{n+1}\binom{M-1}{n}\,e^{-\big(\frac{n}{n+1}\big)\frac{E_b\log_2 M}{N_0}}
$$
BFSK

For M = 2, the above shows

$$
P_e = \frac{1}{2}\,e^{-\frac{E_b}{2N_0}} > P_{e,\text{coherent}} = Q\Bigg(\sqrt{\frac{E_b}{N_0}}\Bigg)
$$
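The noncoherent-vs-coherent BFSK gap can be confirmed numerically; strictness for every positive SNR follows from the Chernoff bound Q(x) ≤ ½e^{−x²/2}. The sampled dB range below is arbitrary:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_noncoherent_bfsk(ebn0):   # Pe = (1/2) exp(-Eb/(2 N0))
    return 0.5 * math.exp(-ebn0 / 2.0)

def pe_coherent_bfsk(ebn0):      # Pe = Q( sqrt(Eb/N0) )
    return Q(math.sqrt(ebn0))

# Noncoherent detection is always (slightly) worse:
for db in range(0, 15):
    g = 10**(db / 10)
    assert pe_noncoherent_bfsk(g) > pe_coherent_bfsk(g)
```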
4.5-5 Differential PSK
The received signals given s_ℓ^(k−1) and s_{m,ℓ}^(k) are

$$
\vec r_\ell = \begin{bmatrix} r_\ell^{(k-1)}\\ r_\ell^{(k)}\end{bmatrix}
= e^{\imath\phi}\begin{bmatrix} s_\ell^{(k-1)}\\ s_{m,\ell}^{(k)}\end{bmatrix}
+ \begin{bmatrix} n_\ell^{(k-1)}\\ n_\ell^{(k)}\end{bmatrix}
= e^{\imath\phi}\,\vec s_{m,\ell} + \vec n_\ell
$$

$$
\Rightarrow\;\vec s_{m,\ell}^{\,\dagger}\vec r_\ell
= \Big[\sqrt{2E_s}\,e^{-\imath\phi_0}\;\;\sqrt{2E_s}\,e^{-\imath(\theta_m+\phi_0)}\Big]\begin{bmatrix} r_\ell^{(k-1)}\\ r_\ell^{(k)}\end{bmatrix}
= \sqrt{2E_s}\,e^{-\imath\phi_0}\big(r_\ell^{(k-1)} + r_\ell^{(k)}\,e^{-\imath\theta_m}\big)
$$
(k−1) (k)
r
(k−1) ⎡s (k−1) ⎤ n
(k−1)
⎢ ⎥
R⃗r ` = R [ ` (k) ] = e ı φ R ⎢ ` (k) ⎥ + R [ ` (k) ] = e ı φ R⃗
s m,` + R⃗
n`
r` ⎢s ⎥ n`
⎣ m,` ⎦
√
2 1 −1
where R = 2
[ ].
1 1
This gives
√
0 2(2Es )
s 1,` = [√
R⃗ ] s 2,` = [
and R⃗ ] and set Es′ = 2Es .
2(2Es ) 0
Notably, the distribution of “rotated” additive noises remains the same.
The decision rule on Slide 4-176 becomes

$$
\hat m = \arg\max_{1\le m\le 2}\lvert\vec s_{m,\ell}^{\,\dagger}\vec r_\ell\rvert = \arg\max_{1\le m\le 2}\lvert(R\vec s_{m,\ell})^\dagger R\vec r_\ell\rvert
$$

$$
P_{e,\text{BDPSK}} = \frac{1}{2}\,e^{-\frac{E_s'}{2N_0}} = \frac{1}{2}\,e^{-\frac{2E_s}{2N_0}} = \frac{1}{2}\,e^{-\frac{E_s}{N_0}} = \frac{1}{2}\,e^{-\frac{E_b}{N_0}}
$$

For coherent detection of BPSK, we have

$$
P_{e,\text{BPSK}} = Q\Bigg(\sqrt{\frac{2E_b}{N_0}}\Bigg) < \frac{1}{2}\,e^{-\frac{E_b}{N_0}}.
$$
The ML decision is

$$
\hat m = \arg\max_{1\le m\le M}\mathrm{Re}\Big\{\big(r_\ell^{(k-1)}\big)^* r_\ell^{(k)}\, e^{-\imath\theta_m}\Big\}
$$

$$
\Longrightarrow\;\big(r_\ell^{(k-1)}\big)^* r_\ell^{(k)} = 2E_s\,e^{\imath\theta_m}
= \begin{cases}
2E_s & 00\\
2E_s\,\imath & 01\\
-2E_s & 11\\
-2E_s\,\imath & 10
\end{cases}
$$
where

$$
\begin{cases}
A = B = 0\\[4pt]
2C = \begin{cases} 1-\imath & \text{for the 1st bit}\\ 1+\imath & \text{for the 2nd bit}\end{cases}\\[8pt]
X = r_\ell^{(k)} = \sqrt{2E_s}\,e^{\imath(\phi+\phi_0)}\,e^{\imath\theta_m} + n_\ell^{(k)}\\[4pt]
Y = r_\ell^{(k-1)} = \sqrt{2E_s}\,e^{\imath(\phi+\phi_0)} + n_\ell^{(k-1)}
\end{cases}
$$
$$
f(r_1,\ldots,r_K\mid s_1,\ldots,s_K) = \frac{1}{(\pi N_0)^{K/2}}\exp\Big\{-\frac{1}{N_0}\sum_{k=1}^{K}(r_k-s_k)^2\Big\}
$$
Viterbi algorithm
Of the above two paths, discard the one with larger Euclidean
distance.
The remaining path is called survivor at t = 2T .
We therefore have two survivor paths after observing r2 .
Digital Communications. Chap 04 Ver. 2018.11.06 Po-Ning Chen 190 / 218
Suppose the two survivor paths are (0, 0) and (0, 1).
$$
\arg\max_{S^{(0)},\cdots,S^{(K)}}\prod_{k=1}^{K}\Big[f\big(r_k\,\big|\, s_k = O(S^{(k-1)},S^{(k)})\big)\Pr\big\{S^{(k)}\mid S^{(k-1)}\big\}\Big]
$$
Example (NRZI).
$$
\begin{cases}
\Pr\{S^{(k)}=S_0\mid S^{(k-1)}=S_0\} = \Pr\{S^{(k)}=S_1\mid S^{(k-1)}=S_1\} = \Pr\{I_k=0\}\\
\Pr\{S^{(k)}=S_0\mid S^{(k-1)}=S_1\} = \Pr\{S^{(k)}=S_1\mid S^{(k-1)}=S_0\} = \Pr\{I_k=1\}
\end{cases}
$$
1: for k = 2 to K do
2:   for all S^(k) ∈ S do
3:     for all S^(k−1) ∈ S do
4:       Compute B(S^(k−1), S^(k) | r_k).
5:     end for
6:     Compute ϕ_k(S^(k)) based on ϕ_{k−1}(S^(k−1)) and B(S^(k−1), S^(k) | r_k).
7:     Record P_k(S^(k)).
8:   end for
9: end for
The exhaustive search

$$
\arg\max_{S^{(1)},\cdots,S^{(K)}}
$$

has exponential complexity O(|S|^K). The Viterbi algorithm has linear complexity O(K|S|²).
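The pseudocode above (branch metric B, accumulated metric ϕ_k, survivor record P_k) can be sketched as a small Python function. The 2-state NRZI-style trellis with antipodal outputs ±1 and the noisy received samples below are illustrative assumptions, not the slide's exact example:

```python
import math

def viterbi(rs, states, preds, out, start):
    """Minimum-Euclidean-distance Viterbi search over a trellis.
    preds[s]      : predecessor states of s
    out[(sp, s)]  : channel symbol emitted on transition sp -> s
    start         : set of allowed initial states"""
    # phi[s] = best accumulated squared distance of a path ending in s
    phi = {s: (0.0 if s in start else math.inf) for s in states}
    paths = {s: [s] for s in states}          # survivor path per state
    for r in rs:
        new_phi, new_paths = {}, {}
        for s in states:
            best, best_sp = math.inf, None
            for sp in preds[s]:
                if phi[sp] == math.inf:
                    continue
                cand = phi[sp] + (r - out[(sp, s)])**2   # branch metric B
                if cand < best:
                    best, best_sp = cand, sp             # keep the survivor
            new_phi[s] = best
            new_paths[s] = paths[best_sp] + [s] if best_sp is not None else []
        phi, paths = new_phi, new_paths
    return paths[min(states, key=lambda s: phi[s])]

# NRZI-like 2-state trellis: bit 0 stays (output -1), bit 1 toggles (output +1)
states = ["S0", "S1"]
preds = {"S0": ["S0", "S1"], "S1": ["S0", "S1"]}
out = {("S0", "S0"): -1.0, ("S1", "S1"): -1.0,
       ("S0", "S1"): +1.0, ("S1", "S0"): +1.0}
rs = [0.9, -1.1, 1.2]            # hypothetical noisy received samples
print(viterbi(rs, states, preds, out, {"S0"}))   # -> ['S0', 'S1', 'S1', 'S0']
```

Each time step does |S|² branch-metric computations and keeps one survivor per state, which is exactly the O(K|S|²) behavior claimed above.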