Lecture 7: MIMO Communications
where $A^*$ is the transpose of the matrix $A$ with each element replaced by its complex conjugate (the conjugate transpose), and $A^T$ is just the transpose of $A$.
• Note that in general the covariance matrix $K$ of the complex random vector $\mathbf{x}$ by itself is not enough to specify the full second-order statistics of $\mathbf{x}$. Indeed, since $K$ is Hermitian, i.e., $K = K^*$, the diagonal elements are real and the elements in the lower and upper triangles are complex conjugates of each other.
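As a quick numerical check of these definitions, here is a minimal NumPy sketch (not part of the original slides; the mixing matrix and sample size are arbitrary) that estimates $K = E[\mathbf{x}\mathbf{x}^*]$ from samples and verifies the Hermitian structure described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many realizations of a complex random vector x = A @ w, where w has
# i.i.d. complex Gaussian entries and A is an arbitrary mixing matrix.
n, num_samples = 3, 100_000
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w = (rng.standard_normal((n, num_samples))
     + 1j * rng.standard_normal((n, num_samples))) / np.sqrt(2)
x = A @ w

# Sample covariance K = E[x x^*], using the conjugate transpose (A^* above).
K = (x @ x.conj().T) / num_samples

print(np.allclose(K, K.conj().T))       # True: K is Hermitian (up to numerics)
print(np.allclose(np.diag(K).imag, 0))  # True: the diagonal is real
```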
Circular Complex Gaussian Vectors
• In wireless communication, we are almost exclusively interested in
complex random vectors that have the circular symmetry property:
$$e^{j\theta}\mathbf{x} \stackrel{d}{=} \mathbf{x} \quad \text{for every phase rotation } \theta, \quad (10.1)$$

i.e., rotating the vector by any fixed phase leaves its distribution unchanged. A zero-mean circularly symmetric complex Gaussian vector is then completely specified by its covariance matrix, written $\mathbf{x} \sim \mathcal{CN}(\mathbf{0}, \mathbf{K})$ (10.2), and its differential entropy is $H(\mathbf{x}) = \log_2 \det(\pi e \mathbf{K})$ (10.5).
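To make the circular symmetry property concrete, the following sketch (illustrative NumPy setup and sample sizes, not from the slides) shows that a circularly symmetric Gaussian vector has vanishing pseudo-covariance $E[\mathbf{x}\mathbf{x}^T]$, so that $\mathbf{K}$ alone does specify its second-order statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
n, num_samples = 2, 200_000

# Circularly symmetric CN(0, I): independent real and imaginary parts,
# each with variance 1/2 per component.
x = (rng.standard_normal((n, num_samples))
     + 1j * rng.standard_normal((n, num_samples))) / np.sqrt(2)

K = (x @ x.conj().T) / num_samples  # covariance E[x x^*]        -> identity
P = (x @ x.T) / num_samples         # pseudo-covariance E[x x^T] -> zero

print(np.round(K, 2))  # ~ identity matrix
print(np.round(P, 2))  # ~ zero matrix: the signature of circular symmetry
```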
• The definition of mutual information gives $I(X;Y) = H(Y) - H(Y|X)$; since $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, conditioning on the input yields $H(Y|X) = H(N)$, the entropy of the noise. Since the noise $\mathbf{n}$ has fixed entropy independent of the channel input, maximizing the mutual information is equivalent to maximizing the entropy of $\mathbf{y}$.
• The entropy of $\mathbf{y}$ depends on its covariance matrix, which for the narrowband MIMO model is given by

$$\mathbf{R}_y = \mathbf{H}\mathbf{R}_x\mathbf{H}^H + \sigma_n^2\,\mathbf{I}_{M_r}, \quad (10.6)$$

so the mutual information of the MIMO channel is

$$I(X;Y) = B \log_2 \det\!\left[\mathbf{I}_{M_r} + \frac{1}{\sigma_n^2}\,\mathbf{H}\mathbf{R}_x\mathbf{H}^H\right], \quad (10.7)$$

and the capacity is the maximum over input covariance matrices satisfying the power constraint:

$$C = \max_{\mathbf{R}_x:\,\operatorname{Tr}(\mathbf{R}_x)=P} B \log_2 \det\!\left[\mathbf{I}_{M_r} + \frac{1}{\sigma_n^2}\,\mathbf{H}\mathbf{R}_x\mathbf{H}^H\right]. \quad (10.8)$$

Using the singular value decomposition of $\mathbf{H}$, this reduces to a sum over parallel eigenchannels:

$$C = \max_{P_i:\,\sum_i P_i \le P} \sum_i B \log_2\!\left(1 + \sigma_i^2 \rho_i\right) = \max_{P_i:\,\sum_i P_i \le P} \sum_i B \log_2\!\left(1 + \frac{\gamma_i P_i}{P}\right), \quad (10.10)$$

where $\rho_i = P_i/\sigma_n^2$ and $\gamma_i = \sigma_i^2 P/\sigma_n^2$ is the SNR associated with the $i$th channel at full power.
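The equivalence between the log-det form (10.7) and the parallel-channel sum in (10.10) can be checked numerically. The sketch below assumes unit bandwidth ($B = 1$) and arbitrary example values for $P$ and $\sigma_n^2$, with equal power per antenna:

```python
import numpy as np

rng = np.random.default_rng(2)
Mt, Mr = 4, 4
P, sigma2 = 1.0, 0.1   # total power and noise variance (assumed example values)
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

# Mutual information with equal power per antenna, Rx = (P/Mt) I  (cf. (10.7)).
Rx = (P / Mt) * np.eye(Mt)
M = np.eye(Mr) + H @ Rx @ H.conj().T / sigma2
I_logdet = np.linalg.slogdet(M)[1] / np.log(2)

# Same quantity via the SVD parallel-channel view (cf. (10.10)):
# sum_i log2(1 + sigma_i^2 * P_i / sigma_n^2) with P_i = P/Mt.
s = np.linalg.svd(H, compute_uv=False)
I_parallel = np.sum(np.log2(1 + s**2 * (P / Mt) / sigma2))

print(I_logdet, I_parallel)  # the two forms agree: same quantity, two views
```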
• Solving the optimization leads to a water-filling power allocation for the
MIMO channel:
$$\frac{P_i}{P} = \begin{cases} \dfrac{1}{\gamma_0} - \dfrac{1}{\gamma_i}, & \gamma_i \ge \gamma_0, \\[2pt] 0, & \gamma_i < \gamma_0, \end{cases} \quad (10.11)$$

where the cutoff $\gamma_0$ is chosen so that the total power constraint $\sum_i P_i = P$ is met (10.12). The resulting capacity is

$$C = \sum_{i:\,\gamma_i \ge \gamma_0} B \log_2\!\left(\frac{\gamma_i}{\gamma_0}\right). \quad (10.14)$$
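A possible implementation of the water-filling allocation (10.11)–(10.14) is sketched below; the function name `waterfill` and the example eigenchannel SNRs are illustrative, not from the slides:

```python
import numpy as np

def waterfill(gammas, total=1.0):
    """Water-filling fractions P_i/P over channels with full-power SNRs gamma_i
    (cf. (10.11)): P_i/P = 1/gamma_0 - 1/gamma_i for gamma_i >= gamma_0, else 0.
    Returns fractions aligned with gammas sorted in decreasing order."""
    g = np.sort(np.asarray(gammas, dtype=float))[::-1]
    # Try using the k strongest channels; the cutoff follows from sum_i P_i/P = total.
    for k in range(len(g), 0, -1):
        inv_gamma0 = (total + np.sum(1.0 / g[:k])) / k  # candidate 1/gamma_0
        if inv_gamma0 - 1.0 / g[k - 1] > 0:  # weakest used channel gets positive power
            p = np.maximum(inv_gamma0 - 1.0 / g, 0.0)
            return p, 1.0 / inv_gamma0       # fractions and cutoff gamma_0
    raise ValueError("no feasible allocation")

gammas = np.array([20.0, 8.0, 1.5, 0.3])     # example eigenchannel SNRs (assumed)
p, gamma0 = waterfill(gammas)
C = np.sum(np.log2(gammas[gammas >= gamma0] / gamma0))  # capacity (10.14), B = 1
print(p, gamma0, C)
```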
• Note that for fixed $M_r$, under the ZMSW model the law of large numbers implies that

$$\lim_{M_t \to \infty} \frac{1}{M_t}\,\mathbf{H}\mathbf{H}^H = \mathbf{I}_{M_r}. \quad (10.15)$$
• Substituting this into (10.13) yields that the mutual information in the asymptotic limit of large $M_t$ becomes a constant equal to $C = M_r B \log_2(1 + \rho)$.
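The law-of-large-numbers statement (10.15) and the resulting constant capacity are easy to visualize numerically; this sketch assumes the ZMSW model with arbitrary example values of $M_r$ and $\rho$, and $B = 1$:

```python
import numpy as np

rng = np.random.default_rng(3)
Mr, rho = 2, 10.0   # receive antennas and SNR (assumed example values)

for Mt in (4, 64, 1024):
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
    W = H @ H.conj().T / Mt   # (1/Mt) H H^H -> I_{Mr} as Mt grows  (10.15)
    C = np.linalg.slogdet(np.eye(Mr) + (rho / Mt) * (H @ H.conj().T))[1] / np.log(2)
    print(Mt, np.round(W, 2), C)

# C approaches the constant Mr * log2(1 + rho) for large Mt:
print(Mr * np.log2(1 + rho))
```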
Fading Channels

Channel Known at Transmitter: Water-Filling
When the channel is known at the transmitter, the water-filling optimization is carried out for each realization of $\mathbf{H}$, and the ergodic capacity averages the result:

$$C = \mathbb{E}_{\mathbf{H}}\!\left[\max_{\mathbf{R}_x:\,\operatorname{Tr}(\mathbf{R}_x)=P} B \log_2 \det\!\left(\mathbf{I}_{M_r} + \frac{1}{\sigma_n^2}\,\mathbf{H}\mathbf{R}_x\mathbf{H}^H\right)\right], \quad (10.16)$$

with a variant in which power may also be allocated across channel realizations under an average power constraint (10.17), where the expectation is with respect to the distribution of the channel matrix $\mathbf{H}$, whose entries for the ZMSW model are i.i.d. zero-mean circularly symmetric with unit variance.
• As in the case of scalar channels, the optimum input covariance matrix that maximizes ergodic capacity for the ZMSW model is the scaled identity matrix $\frac{\rho}{M_t}\mathbf{I}_{M_t}$. Thus the ergodic capacity is given by:
$$C = \mathbb{E}_{\mathbf{H}}\!\left[B \log_2 \det\!\left(\mathbf{I}_{M_r} + \frac{\rho}{M_t}\,\mathbf{H}\mathbf{H}^H\right)\right]. \quad (10.19)$$
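A hedged Monte Carlo sketch of the ergodic capacity (10.19) under the ZMSW model; the parameter values and trial count are illustrative, and $B = 1$ is assumed:

```python
import numpy as np

rng = np.random.default_rng(4)
Mt, Mr, rho, trials = 4, 4, 10.0, 20_000   # assumed example parameters

# Monte Carlo average of log2 det(I + (rho/Mt) H H^H) over ZMSW realizations,
# with the capacity-achieving input covariance (rho/Mt) I.
C = 0.0
for _ in range(trials):
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
    C += np.linalg.slogdet(np.eye(Mr) + (rho / Mt) * (H @ H.conj().T))[1] / np.log(2)

print(C / trials)   # bits/s/Hz; grows roughly linearly in min(Mt, Mr)
```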
• Maximum-likelihood decoding of a space-time codeword reduces to a minimum-distance rule,

$$\hat{\mathbf{X}} = \arg\min_{\mathbf{X}} \left\|\mathbf{Y} - \mathbf{H}\mathbf{X}\right\|_F^2, \quad (10.25)$$

where $\|\mathbf{A}\|_F$ denotes the Frobenius norm of the matrix $\mathbf{A}$ and the minimization is taken over all possible space-time input matrices $\mathbf{X}$ transmitted over $T$ symbol times.
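The minimum-Frobenius-norm rule (10.25) can be illustrated with a toy codebook; the sketch below uses an arbitrary BPSK codebook rather than a designed space-time code, purely to show the decoding metric:

```python
import numpy as np

rng = np.random.default_rng(5)
Mt, Mr, T = 2, 2, 2
sigma_n = 0.3   # noise standard deviation (assumed example value)

# Toy codebook of Mt x T space-time matrices with BPSK entries; in practice this
# would be a designed space-time code (e.g. Alamouti), but the metric is the same.
codebook = [np.array([[a, b], [c, d]], dtype=complex)
            for a in (-1, 1) for b in (-1, 1) for c in (-1, 1) for d in (-1, 1)]

H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
X = codebook[5]   # transmitted codeword (arbitrary choice)
N = sigma_n * (rng.standard_normal((Mr, T)) + 1j * rng.standard_normal((Mr, T))) / np.sqrt(2)
Y = H @ X + N

# ML decoding (10.25): pick the codeword minimizing ||Y - H X||_F.
X_hat = min(codebook, key=lambda Xc: np.linalg.norm(Y - H @ Xc, 'fro'))
print(np.allclose(X_hat, X))   # True at moderate noise levels
```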
• The pairwise error probability of mistaking a transmit matrix $\mathbf{X}$ for another matrix $\hat{\mathbf{X}}$, denoted $p(\mathbf{X} \to \hat{\mathbf{X}})$, depends only on the distance between the two matrices after transmission through the channel and on the noise power, i.e.
$$p(\mathbf{X} \to \hat{\mathbf{X}} \mid \mathbf{H}) = Q\!\left(\sqrt{\frac{\|\mathbf{H}(\mathbf{X} - \hat{\mathbf{X}})\|_F^2}{2\sigma_n^2}}\right), \quad (10.26)$$

which the Chernoff bound $Q(x) \le e^{-x^2/2}$ turns into

$$p(\mathbf{X} \to \hat{\mathbf{X}} \mid \mathbf{H}) \le \exp\!\left(-\frac{\|\mathbf{H}\mathbf{D}_x\|_F^2}{4\sigma_n^2}\right), \quad (10.27)$$

where $\mathbf{D}_x = \mathbf{X} - \hat{\mathbf{X}}$ denotes the codeword difference matrix (10.28).
• Let $\bar{\mathbf{H}} = \operatorname{vec}(\mathbf{H}^T)^T$, where $\operatorname{vec}(\mathbf{A})$ is defined as the vector that results from stacking the columns of the matrix $\mathbf{A}$ on top of each other.
• Substituting (10.29) into (10.27) and taking the expectation relative to all
possible channel realizations yields
$$p(\mathbf{X} \to \hat{\mathbf{X}}) \le \left[\det\!\left(\mathbf{I}_{M_t M_r} + \frac{\rho}{4}\,\mathbf{I}_{M_r} \otimes \mathbf{D}_x\mathbf{D}_x^H\right)\right]^{-1} = \left[\det\!\left(\mathbf{I}_{M_t} + \frac{\rho}{4}\,\mathbf{D}_x\mathbf{D}_x^H\right)\right]^{-M_r}, \quad (10.30)$$

where $\rho = P/\sigma_n^2$. At high SNR this bound behaves as

$$p(\mathbf{X} \to \hat{\mathbf{X}}) \lesssim \left(\prod_{i=1}^{N}\lambda_i\right)^{-M_r}\left(\frac{\rho}{4}\right)^{-N M_r}, \quad (10.33)$$

where $\lambda_1, \ldots, \lambda_N$ are the nonzero eigenvalues of $\mathbf{D}_x\mathbf{D}_x^H$ and $N$ is its rank: the code achieves diversity order $N M_r$, and the coding gain is governed by $\prod_i \lambda_i$ (the rank and determinant design criteria).
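The rank and determinant criteria implied by (10.30)–(10.33) can be evaluated for a candidate codeword pair. The helper `pep_bound` below is an illustrative implementation of the averaged Chernoff bound under the assumed notation above, with example difference matrices:

```python
import numpy as np

def pep_bound(X, X_hat, Mr, rho):
    """Averaged Chernoff bound det(I + (rho/4) Dx Dx^H)^(-Mr) (cf. (10.30)).
    Also returns N = rank(Dx Dx^H), which sets the diversity order N * Mr."""
    Dx = X - X_hat
    lam = np.linalg.eigvalsh(Dx @ Dx.conj().T)
    lam = lam[lam > 1e-12]                  # nonzero eigenvalues
    N = len(lam)                            # rank of Dx Dx^H
    return np.prod(1.0 / (1.0 + rho / 4.0 * lam)) ** Mr, N

# Example codeword pair whose difference matrix has full rank N = Mt = 2,
# hence full diversity 2 * Mr; matrices chosen for illustration only.
X     = np.array([[1, -1], [1,  1]], dtype=complex)
X_hat = np.array([[-1, -1], [1, -1]], dtype=complex)

for rho in (1.0, 10.0, 100.0):
    bound, N = pep_bound(X, X_hat, Mr=2, rho=rho)
    print(rho, N, bound)   # bound decays roughly as rho^(-N * Mr)
```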