
Wireless Communications

Lecture 7
MIMO Communications

Prof. Chun-Hung Liu


Dept. of Electrical and Computer Engineering
National Chiao Tung University
Fall 2014



Outline
•  MIMO Communications (Chapter 10 in Goldsmith’s Book)
•  Narrowband MIMO Model
•  Parallel Decomposition of the MIMO Channel
•  MIMO Channel Capacity
•  MIMO Diversity Gain: Beamforming
•  Diversity-Multiplexing Trade-off
•  Space-Time Modulation and Coding



Circular Complex Gaussian Vectors
•  A complex random vector is of the form x = xR + jxI where xR and xI
are real random vectors.
•  Complex Gaussian random vectors are ones in which [xR , xI ]t is a
real Gaussian random vector.
•  The distribution is completely specified by the mean and covariance
matrix of the real vector [xR , xI ]t .
•  Define the mean, covariance matrix, and pseudo-covariance matrix of x as
μ = E[x],   K = E[(x − μ)(x − μ)*],   J = E[(x − μ)(x − μ)^t],
where A* is the transpose of the matrix A with each element replaced by its
complex conjugate, and A^t is just the transpose of A.
•  Note that in general the covariance matrix K of the complex random
vector x by itself is not enough to specify the full second-order statistics
of x. Indeed, since K is Hermitian, i.e., K = K*, the diagonal elements are
real and the elements in the lower and upper triangles are complex
conjugates of each other.
Circular Complex Gaussian Vectors
•  In wireless communication, we are almost exclusively interested in
complex random vectors that have the circular symmetry property: e^{jθ}x has
the same distribution as x for any θ.
•  For a circular symmetric complex random vector x,
E[x] = E[e^{jθ}x] = e^{jθ}E[x]
for any θ; hence the mean μ = 0. Moreover,
E[xx^t] = E[(e^{jθ}x)(e^{jθ}x)^t] = e^{j2θ}E[xx^t]
for any θ; hence the pseudo-covariance matrix J is also zero.


•  Thus, the covariance matrix K fully specifies the first and second
order statistics of a circular symmetric random vector.



Circular Complex Gaussian Vectors
•  And if the complex random vector is also Gaussian, K in fact specifies
its entire statistics.
•  A circular symmetric Gaussian random vector with covariance matrix
K is denoted as CN(0, K).
•  Some special cases:
•  A complex Gaussian random variable w = wR + jwI with i.i.d.
zero-mean Gaussian real and imaginary components is circular
symmetric. In fact, a circular symmetric Gaussian random
variable must have i.i.d. zero-mean real and imaginary
components.
•  A collection of n i.i.d. CN (0, 1) random variables forms a standard
circular symmetric Gaussian random vector w and is denoted
by CN (0, I). The density function of w can be explicitly written as
f(w) = (1/π^n) exp(−‖w‖²).
•  Uw has the same distribution as w for any complex orthogonal
matrix U (such a matrix is called a unitary matrix and is
characterized by the property UU* = I).
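•  A quick numerical check of these properties (an illustrative NumPy sketch, not part of the lecture): it draws standard circular symmetric Gaussian vectors, estimates K and J, and verifies that rotation by e^{jθ} and multiplication by a unitary U leave the second-order statistics unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_samples = 4, 200_000

# Standard circular symmetric Gaussian CN(0, I): i.i.d. real and imaginary
# parts, each zero mean with variance 1/2 so that E[|w_i|^2] = 1.
w = (rng.standard_normal((num_samples, n)) +
     1j * rng.standard_normal((num_samples, n))) / np.sqrt(2)

K = w.T @ w.conj() / num_samples      # sample covariance  E[w w*]  ~ I
J = w.T @ w / num_samples             # sample pseudo-covariance E[w w^t] ~ 0

# Rotation invariance: e^{j theta} w has the same covariance as w
w_rot = np.exp(1j * 0.7) * w
K_rot = w_rot.T @ w_rot.conj() / num_samples

# Unitary invariance: U w is distributed like w for any unitary U
U, _ = np.linalg.qr(rng.standard_normal((n, n)) +
                    1j * rng.standard_normal((n, n)))
w_u = w @ U.T                          # each sample w_i is mapped to U w_i
K_u = w_u.T @ w_u.conj() / num_samples

print(np.allclose(K, np.eye(n), atol=0.02))         # True
print(np.allclose(J, np.zeros((n, n)), atol=0.02))  # True
print(np.allclose(K_rot, np.eye(n), atol=0.02))     # True
print(np.allclose(K_u, np.eye(n), atol=0.02))       # True
```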
Narrowband MIMO Model
•  Here we consider a narrowband MIMO channel. A narrowband point-to-
point communication system with M_t transmit and M_r receive antennas is
shown in the following figure.



Narrowband MIMO Model
•  Or simply as y = Hx + n, where H is the M_r × M_t matrix of channel gains
from the transmit antennas to the receive antennas.
•  We assume a channel bandwidth of B and complex Gaussian noise with
zero mean and covariance matrix σ_n² I_{M_r}, where typically σ_n² = N₀B.
•  For simplicity, given a transmit power constraint P we will assume an
equivalent model with a noise power of unity and transmit power
ρ = P/σ_n², where ρ can be interpreted as the average SNR per receive
antenna under unity channel gain.
•  This power constraint implies that the input symbols satisfy
Σ_{i=1}^{M_t} E[|x_i|²] = Tr(R_x) ≤ ρ,   (10.1)
where Tr(R_x) is the trace of the input covariance matrix R_x = E[xx^H].
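•  As a sanity check of this normalization (an illustrative sketch, not from the lecture): with unit noise power, unit-variance channel gains, and Tr(R_x) = ρ split equally across the transmit antennas, the average signal power (and hence SNR) per receive antenna comes out close to ρ.

```python
import numpy as np

rng = np.random.default_rng(0)
Mt, Mr, rho = 4, 4, 10.0
num_trials = 20_000

avg_signal_power = 0.0
for _ in range(num_trials):
    # Unit-variance complex Gaussian channel gains
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
    # Equal power per transmit antenna: R_x = (rho/Mt) I, so Tr(R_x) = rho
    x = np.sqrt(rho / Mt) * (rng.standard_normal(Mt) + 1j * rng.standard_normal(Mt)) / np.sqrt(2)
    avg_signal_power += np.linalg.norm(H @ x) ** 2 / Mr

# With unit noise power, the average SNR per receive antenna is close to rho
print(avg_signal_power / num_trials)   # approximately 10
```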



Parallel Decomposition of the MIMO Channel
•  When both the transmitter and receiver have multiple antennas, there is
another mechanism for performance gain called (spatial) multiplexing
gain.
•  The multiplexing gain of a MIMO system results from the fact that a
MIMO channel can be decomposed into a number R_H of parallel
independent channels.
•  By multiplexing independent data onto these independent channels, we
get an R_H-fold increase in data rate in comparison to a system with just
one antenna at the transmitter and receiver.
•  Consider a MIMO channel with an M_r × M_t channel gain matrix H
known to both the transmitter and the receiver.
•  Let R_H denote the rank of H. From matrix theory, for any matrix H we
can obtain its singular value decomposition (SVD) as
H = U Σ V^H,   (10.2)
where U (M_r × M_r) and V (M_t × M_t) are unitary matrices and Σ is an
M_r × M_t diagonal matrix of the singular values {σ_i} of H.


Parallel Decomposition of the MIMO Channel
•  Since R_H cannot exceed the number of columns or rows of H, R_H ≤ min(M_t, M_r).
If H is full rank, which is sometimes referred to as a rich scattering
environment, then R_H = min(M_t, M_r).
•  The parallel decomposition of the channel is obtained by defining a
transformation on the channel input and output x and y through transmit
precoding and receiver shaping, as shown in the following figure:

•  The transmit precoding x = Vx̃ and receiver shaping ỹ = U^H y transform the MIMO
channel into R_H parallel single-input single-output (SISO) channels with
input x̃ and output ỹ, since from the SVD we have that
ỹ = U^H(HVx̃ + n) = U^H(UΣV^H Vx̃ + n) = Σx̃ + ñ,
where ñ = U^H n has the same statistics as n.




Parallel Decomposition of the MIMO Channel
•  This parallel decomposition is shown in the following figure:



MIMO Channel Capacity
•  The capacity of a MIMO channel is an extension of the mutual
information formula for a SISO channel given in Lecture 3 to a matrix
channel.
•  Specifically, the capacity is given in terms of the mutual information
between the channel input vector x and output vector y as
C = max_{p(x)} I(X; Y) = max_{p(x)} [H(Y) − H(Y|X)].   (10.5)

•  The definition of entropy yields that H(Y|X) = H(N), the entropy in the
noise. Since this noise n has fixed entropy independent of the channel
input, maximizing mutual information is equivalent to maximizing the
entropy in y.
•  The entropy of y depends on its covariance matrix R_y, which
for the narrowband MIMO model is given by
R_y = E[yy^H] = H R_x H^H + I_{M_r},   (10.6)



MIMO Channel Capacity
where Rx is the covariance of the MIMO channel input.

•  The mutual information can be shown to be
I(X; Y) = B log₂ det(I_{M_r} + H R_x H^H).   (10.7)
•  The MIMO capacity is achieved by maximizing the mutual information
over all input covariance matrices satisfying the power constraint:
C = max_{R_x: Tr(R_x)=ρ} B log₂ det(I_{M_r} + H R_x H^H),   (10.8)
where det(A) denotes the determinant of the matrix A.

•  Now let us consider the case of Channel Known at Transmitter



MIMO Channel Capacity
•  Substituting the matrix SVD of H into C and using properties of unitary
matrices, we get the MIMO capacity with CSIT and CSIR as
C = max_{ρ_i: Σ_i ρ_i ≤ ρ} Σ_{i=1}^{R_H} B log₂(1 + σ_i² ρ_i).   (10.9)
Since ρ = P/σ_n², the capacity (10.9) can also be expressed in terms of the
power allocation P_i to the ith parallel channel as
C = max_{P_i: Σ_i P_i ≤ P} Σ_{i=1}^{R_H} B log₂(1 + γ_i P_i / P),   (10.10)
where ρ_i = P_i/σ_n² and γ_i = σ_i² P/σ_n² is the SNR associated with the ith
channel at full power.
•  Solving the optimization leads to a water-filling power allocation for the
MIMO channel:
P_i / P = 1/γ₀ − 1/γ_i  for γ_i ≥ γ₀,  and  P_i / P = 0  otherwise,   (10.11)
where γ₀ is the cutoff value determined by the power constraint.



MIMO Channel Capacity
•  The resulting capacity is then
C = Σ_{i: γ_i ≥ γ₀} B log₂(γ_i / γ₀).   (10.12)
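•  A minimal implementation of the water-filling allocation (10.11) and the resulting capacity (10.12) is sketched below; the function name and the iterative search for the cutoff γ₀ are illustrative choices, not taken from the lecture.

```python
import numpy as np

def waterfill_capacity(H, rho, B=1.0):
    """Water-filling over the MIMO eigenchannels (CSIT and CSIR).

    rho is the total transmit SNR (noise power normalized to one);
    gamma_i = sigma_i^2 * rho is the full-power SNR of eigenchannel i.
    """
    sigma = np.linalg.svd(H, compute_uv=False)
    gamma = np.sort((sigma ** 2) * rho)[::-1]    # per-channel SNR, descending

    # Find the cutoff gamma_0: keep the k strongest channels such that all
    # allocated power fractions are non-negative.
    for k in range(len(gamma), 0, -1):
        # With k active channels, sum_i (1/gamma_0 - 1/gamma_i) = 1
        gamma0 = k / (1.0 + np.sum(1.0 / gamma[:k]))
        if gamma[k - 1] >= gamma0:
            break

    frac = np.maximum(1.0 / gamma0 - 1.0 / gamma, 0.0)     # P_i / P, (10.11)
    capacity = B * np.sum(np.log2(gamma[:k] / gamma0))      # (10.12)
    return capacity, frac

# Example: 4x4 i.i.d. Rayleigh channel at 10 dB SNR
rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
C, power_fractions = waterfill_capacity(H, rho=10.0)
print(C, power_fractions)
```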

•  Now consider the case of Channel Unknown at Transmitter (Uniform
Power Allocation).
•  Suppose now that the receiver knows the channel but the transmitter
does not. Without channel information, the transmitter cannot optimize
its power allocation or input covariance structure across antennas.
•  If the distribution of H follows the ZMSW (zero-mean, spatially white)
channel gain model, there is no bias in terms of the mean or covariance of H.
•  Thus, it seems intuitive that the best strategy should be to allocate equal
power to each transmit antenna, resulting in an input covariance matrix
equal to the scaled identity matrix: R_x = (ρ/M_t) I_{M_t}.


MIMO Channel Capacity
•  It is shown that under these assumptions this input covariance matrix
indeed maximizes the mutual information of the channel.
•  For an M_t-transmit, M_r-receive antenna system, this yields mutual
information given by
I(X; Y) = B log₂ det(I_{M_r} + (ρ/M_t) H H^H).   (10.13)

•  Using the SVD of H, we can express this as
I(X; Y) = Σ_{i=1}^{R_H} B log₂(1 + (ρ/M_t) σ_i²),
where the σ_i are the singular values of H and R_H ≤ min(M_t, M_r) is the number
of nonzero singular values of H.
•  The mutual information of the MIMO channel (10.13) depends on the
specific realization of the matrix H, in particular its singular values {σ_i}.



MIMO Channel Capacity
•  In capacity with outage the transmitter fixes a transmission rate C, and
the outage probability associated with C is the probability that the
transmitted data will not be received correctly or, equivalently, the
probability that the channel H has mutual information less than C.
•  This probability is given by
P_out = p( B log₂ det(I_{M_r} + (ρ/M_t) H H^H) < C ).   (10.14)

•  Note that for fixed M_r, under the ZMSW model the law of large numbers
implies that
lim_{M_t→∞} (1/M_t) H H^H = I_{M_r}.   (10.15)
•  Substituting this into (10.13) yields that the mutual information in the
asymptotic limit of large M_t becomes a constant equal to C = M_r B log₂(1 + ρ).



MIMO Channel Capacity
•  We can make two important observations from the results in (10.14)
and (10.15):
•  As SNR grows large, capacity also grows linearly with M = min{Mt , Mr }
for any Mt and Mr .
•  At very low SNRs transmit antennas are not beneficial: Capacity only
scales with the number of receive antennas independent of the number of
transmit antennas.

•  Fading Channels
•  Channel Known at Transmitter: Water-Filling. With a short-term power
constraint applied to each channel realization, the capacity is
C = E_H [ max_{R_x: Tr(R_x)=ρ} B log₂ det(I_{M_r} + H R_x H^H) ].   (10.16)



MIMO Channel Capacity
•  A less restrictive constraint is a long-term power constraint, where we
can use different powers for different channel realizations subject to the
average power constraint over all channel realizations.
•  The ergodic capacity in this case is
C = max_{R_x(H): E_H[Tr(R_x(H))] = ρ} E_H [ B log₂ det(I_{M_r} + H R_x(H) H^H) ].   (10.17)

•  Channel Unknown at Transmitter: Ergodic Capacity and Capacity
with Outage
•  Consider now a time-varying channel with random matrix H
known at the receiver but not the transmitter. The transmitter
assumes a ZMSW distribution for H.
•  The two relevant capacity definitions in this case are ergodic
capacity and capacity with outage.
•  Ergodic capacity defines the maximum rate, averaged over all
channel realizations, that can be transmitted over the channel
for a transmission strategy based only on the distribution of H.
MIMO Channel Capacity
•  This leads to the transmitter optimization problem - i.e., finding the
optimum input covariance matrix to maximize ergodic capacity subject
to the transmit power constraint.
•  Mathematically, the problem is to characterize the optimum R_x to
maximize
C = max_{R_x: Tr(R_x)=ρ} E_H [ B log₂ det(I_{M_r} + H R_x H^H) ],   (10.18)
where the expectation is with respect to the distribution of the channel matrix
H, which for the ZMSW model has i.i.d. zero-mean, circularly symmetric,
unit-variance entries.
•  As in the case of scalar channels, the optimum input covariance matrix
that maximizes ergodic capacity for the ZMSW model is the scaled
identity matrix R_x = (ρ/M_t) I_{M_t}. Thus the ergodic capacity is given by
C = E_H [ B log₂ det(I_{M_r} + (ρ/M_t) H H^H) ].   (10.19)
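•  The ergodic capacity (10.19) can be estimated by Monte Carlo averaging over channel realizations; a minimal sketch (assuming NumPy, with arbitrary trial counts) for a 4x4 ZMSW channel:

```python
import numpy as np

def ergodic_capacity_mc(Mt, Mr, rho, B=1.0, num_trials=20_000, seed=0):
    """Monte Carlo estimate of (10.19): ergodic capacity with equal-power
    transmission over an i.i.d. ZMSW Rayleigh-fading MIMO channel."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_trials):
        H = (rng.standard_normal((Mr, Mt)) +
             1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
        total += B * np.log2(np.linalg.det(
            np.eye(Mr) + (rho / Mt) * H @ H.conj().T).real)
    return total / num_trials

# Ergodic capacity of a 4x4 system at 0, 10, 20 dB SNR (bps/Hz, B = 1)
for snr_db in (0, 10, 20):
    print(snr_db, ergodic_capacity_mc(4, 4, 10 ** (snr_db / 10)))
```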



MIMO Channel Capacity
•  The ergodic capacity of a 4x4 MIMO system with i.i.d. complex
Gaussian channel gains is shown in Figure 10.4.



MIMO Channel Capacity
•  Capacity with outage is defined similarly to the definition for static channels
described previously, although now capacity with outage applies to a
slowly varying channel where the channel matrix H is constant over a
relatively long transmission time, then changes to a new value.
•  As in the static channel case, the channel realization and corresponding
channel capacity are not known at the transmitter, yet the transmitter must
still fix a transmission rate to send data over the channel.
•  For any choice of this rate C, there will be an outage probability associated
with C, which defines the probability that the transmitted data will not be
received correctly.
•  The outage capacity can sometimes be improved by not allocating power
to one or more of the transmit antennas, especially when the outage
probability is high. This is because outage capacity depends on the tail of
the probability distribution.
•  With fewer antennas, less averaging takes place and the spread of the tail
increases.
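•  A Monte Carlo estimate of the outage probability (10.14) under equal-power transmission is sketched below; the function name, target rates, and trial count are illustrative assumptions, not from the lecture.

```python
import numpy as np

def outage_probability(rate, Mt, Mr, rho, B=1.0, num_trials=20_000, seed=1):
    """Monte Carlo estimate of (10.14): probability that the equal-power
    mutual information of an i.i.d. Rayleigh MIMO channel falls below `rate`."""
    rng = np.random.default_rng(seed)
    outages = 0
    for _ in range(num_trials):
        H = (rng.standard_normal((Mr, Mt)) +
             1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
        mi = B * np.log2(np.linalg.det(
            np.eye(Mr) + (rho / Mt) * H @ H.conj().T).real)
        if mi < rate:
            outages += 1
    return outages / num_trials

# Outage probability of a 4x4 system at 10 dB SNR for a few target rates
for rate in (4.0, 8.0, 12.0):
    print(rate, outage_probability(rate, 4, 4, rho=10.0))
```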



MIMO Channel Capacity
•  The capacity with outage of a 4 × 4 MIMO system with i.i.d. complex
Gaussian channel gains is shown in Figure 10.5





MIMO Diversity Gain: Beamforming
•  The multiple antennas at the transmitter and receiver can be used to
obtain diversity gain instead of capacity gain.
•  In this setting, the same symbol, weighted by a complex scale factor,
is sent over each transmit antenna, so that the input covariance
matrix has unit rank.
•  This scheme is also referred to as MIMO beamforming.
•  A beamforming strategy corresponds to the precoding and receiver
matrices described in previous being just column vectors: V = v and
U = u, as shown in Figure 10.7.
•  In the figure, the transmit symbol x is sent over the ith antenna with
weight vi . On the receive side, the signal received on the ith antenna
is weighted by ui .
•  Both transmit and receive weight vectors are normalized so that
‖u‖ = ‖v‖ = 1.
•  The resulting received signal is given by
y = u^H H v x + u^H n,   (10.20)
where, since ‖u‖ = 1, the effective noise u^H n has the same statistics as each
n_i if n = (n₁, · · · , n_{M_r}) has i.i.d. elements.
MIMO Diversity Gain: Beamforming

•  Beamforming provides diversity gain by coherent combining of the
multiple signal paths. Channel knowledge at the receiver is typically
assumed, since this is required for coherent combining.
MIMO Diversity Gain: Beamforming
•  The diversity gain then depends on whether or not the channel is known
at the transmitter.
•  When the channel matrix H is known, the received SNR is optimized by
choosing u and v as the principal left and right singular vectors of the
channel matrix H.
•  The corresponding received SNR can be shown to equal γ = ρ λ_max,
where λ_max is the largest eigenvalue of the Wishart matrix W = HH^H.
•  The resulting capacity is C = B log₂(1 + λ_max ρ), corresponding to the
capacity of a SISO channel with channel power gain λ_max.
•  When the channel is not known at the transmitter, the transmit antenna
weights are all equal, v = (1/√M_t)[1, · · · , 1]^t, and u is chosen to maximize
the received SNR, which yields γ = ρ‖Hv‖².
•  Clearly the lack of transmitter CSI will result in a lower SNR and
capacity than with optimal transmit weighting.
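•  The sketch below (illustrative, for one random 4x4 channel) computes the beamforming SNR with and without transmitter CSI, confirming that γ = ρλ_max with the principal singular vectors and that equal transmit weights can only do worse.

```python
import numpy as np

rng = np.random.default_rng(3)
Mt, Mr, rho = 4, 4, 10.0

H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

# Channel known at transmitter: u and v are the principal left/right
# singular vectors of H, and the received SNR is rho * lambda_max(HH^H).
U, s, Vh = np.linalg.svd(H)
u_opt, v_opt = U[:, 0], Vh[0, :].conj()
snr_csit = rho * np.abs(u_opt.conj() @ H @ v_opt) ** 2
print(np.isclose(snr_csit, rho * s[0] ** 2))      # True: lambda_max = s_max^2

# Channel unknown at transmitter: equal transmit weights, MRC at the receiver
v_eq = np.ones(Mt) / np.sqrt(Mt)
h_eff = H @ v_eq
snr_no_csit = rho * np.linalg.norm(h_eff) ** 2    # with u = h_eff / ||h_eff||
print(snr_csit >= snr_no_csit)                    # True: CSIT can only help

C_csit = np.log2(1 + snr_csit)                    # B = 1
C_no_csit = np.log2(1 + snr_no_csit)
print(C_csit, C_no_csit)
```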



Diversity-Multiplexing Tradeoffs
•  So far, we have seen that there are two mechanisms for utilizing
multiple antennas to improve wireless system performance.
•  One option is to obtain capacity gain by decomposing the MIMO channel
into parallel channels and multiplexing different data streams onto these
channels. This capacity gain is also referred to as a multiplexing gain.
•  It is not necessary to use the antennas purely for multiplexing or diversity.
•  Some of the space-time dimensions can be used for diversity gain, and the
remaining dimensions used for multiplexing gain.
•  This gives rise to a fundamental design question in MIMO systems: should
the antennas be used for diversity gain, multiplexing gain, or both?
•  The diversity/multiplexing tradeoff or, more generally, the tradeoff
between data rate, probability of error, and complexity for MIMO systems
has been extensively studied in the literature, from both a theoretical
perspective and in terms of practical space-time code designs.
•  This work has primarily focused on block fading channels with receiver
CSI only since when both transmitter and receiver know the channel the
tradeoff is relatively straightforward.
Diversity-Multiplexing Tradeoffs
•  Antenna subsets can first be grouped for diversity gain and then the
multiplexing gain corresponds to the new channel with reduced
dimension due to the grouping.
•  For finite blocklengths it is not possible to achieve full diversity and full
multiplexing gain simultaneously, in which case there is a tradeoff
between these gains.
•  A transmission scheme is said to achieve multiplexing gain r and
diversity gain d if the data rate per unit hertz R(SNR) (bps/Hz) and
probability of error P_e(SNR), as functions of SNR, satisfy
lim_{SNR→∞} R(SNR) / log₂ SNR = r   (10.21)
and
lim_{SNR→∞} log P_e(SNR) / log SNR = −d.   (10.22)



Diversity-Multiplexing Tradeoffs
•  For each r, the optimal diversity gain d_opt(r) is the maximum diversity
gain that can be achieved by any scheme.
•  It is shown that if the fading block length exceeds the total number of
antennas at the transmitter and receiver, then
d_opt(r) = (M_t − r)(M_r − r),   0 ≤ r ≤ min(M_t, M_r).   (10.23)
•  The function (10.23) is plotted in Fig. 10.8.
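•  A small helper that evaluates the optimal tradeoff curve (10.23), i.e., the piecewise-linear function connecting the points (k, (M_t − k)(M_r − k)); this is an illustrative sketch only.

```python
import numpy as np

def dmt_curve(Mt, Mr, num_points=200):
    """Optimal diversity-multiplexing tradeoff d_opt(r) of (10.23): the
    piecewise-linear curve connecting the points (k, (Mt-k)(Mr-k))."""
    k = np.arange(min(Mt, Mr) + 1)
    d_corner = (Mt - k) * (Mr - k)
    r = np.linspace(0, min(Mt, Mr), num_points)
    d = np.interp(r, k, d_corner)      # linear interpolation between corners
    return r, d

r, d = dmt_curve(4, 4)
print(d[0], d[-1])     # d_opt(0) = Mt*Mr = 16, d_opt(min(Mt, Mr)) = 0
```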



Space-Time Modulation and Coding
•  Since a MIMO channel has input-output relationship y = Hx + n , the
symbol transmitted over the channel each symbol time is a vector rather
than a scalar, as in traditional modulation for the SISO channel.
•  Moreover, when the signal design extends over both space (via the
multiple antennas) and time (via multiple symbol times), it is typically
referred to as a space-time code.
•  Most space-time codes are designed for quasi-static channels where the
channel is constant over a block of T symbol times, and the channel is
assumed unknown at the transmitter.
•  Let X = [x₁, . . . , x_T] denote the M_t × T channel input matrix with ith
column x_i equal to the vector channel input over the ith transmission
time.
•  Let Y = [y₁, . . . , y_T] denote the M_r × T channel output matrix with ith
column y_i equal to the vector channel output over the ith transmission
time.
•  Let N = [n₁, . . . , n_T] denote the M_r × T noise matrix with ith column n_i
equal to the receiver noise vector on the ith transmission time.
ML Detection and Pairwise Error Probability
•  With this matrix representation, the input-output relationship over the
block of T symbol times becomes
Y = HX + N.   (10.24)

•  Assume a space-time code where the receiver has knowledge of the
channel matrix H. Under ML detection and given received matrix Y, the
ML transmit matrix X̂ satisfies
X̂ = arg min_X ‖Y − HX‖²_F,   (10.25)
where ‖A‖_F denotes the Frobenius norm of the matrix A and the minimization
is taken over all possible space-time input matrices X in the codebook.
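•  A toy illustration of the ML rule (10.25) over a small codebook; the codebook, dimensions, and noise level here are arbitrary assumptions made for the example, not part of the lecture.

```python
import numpy as np

def ml_detect(Y, H, codebook):
    """ML space-time detection (10.25): pick the codeword X minimizing
    the Frobenius norm ||Y - H X||_F over the codebook."""
    costs = [np.linalg.norm(Y - H @ X) for X in codebook]   # Frobenius norm
    return codebook[int(np.argmin(costs))]

# Toy example: Mt = 2, T = 2, small codebook of random +-1 matrices
rng = np.random.default_rng(4)
Mt, Mr, T = 2, 2, 2
codebook = [(rng.integers(0, 2, (Mt, T)) * 2 - 1).astype(complex) for _ in range(8)]

H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
X_true = codebook[3]
N = 0.1 * (rng.standard_normal((Mr, T)) + 1j * rng.standard_normal((Mr, T)))
Y = H @ X_true + N

X_hat = ml_detect(Y, H, codebook)
print(np.array_equal(X_hat, X_true))   # likely True at this low noise level
```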
•  The pairwise error probability for mistaking a transmit matrix X for
another matrix X̂, denoted as p(X → X̂), depends only on the distance
between the two matrices after transmission through the channel and the
noise power, i.e.,



ML Detection and Pairwise Error Probability

p(X → X̂ | H) = Q( √( ‖H(X − X̂)‖²_F / (2σ_n²) ) ).   (10.26)

•  Let D_X = X − X̂ denote the difference matrix between X and X̂. Applying
the Chernoff bound to (10.26) yields
p(X → X̂ | H) ≤ exp( −‖H D_X‖²_F / (4σ_n²) ).   (10.27)

•  Let h_i denote the ith row of H, i = 1, . . . , M_r. Then
‖H D_X‖²_F = Σ_{i=1}^{M_r} h_i D_X D_X^H h_i^H.   (10.28)

•  Let H̃ = vec(H^T)^T, where vec(A) is defined as the vector that results from
stacking the columns of matrix A on top of each other to form a vector.



ML Detection and Pairwise Error Probability
•  So H̃ is a row vector of length M_tM_r. Also define 𝒟_X = I_{M_r} ⊗ D_X, where ⊗
denotes the Kronecker product. With these definitions,
‖H D_X‖²_F = ‖H̃ 𝒟_X‖² = H̃ 𝒟_X 𝒟_X^H H̃^H.   (10.29)

•  Substituting (10.29) into (10.27) and taking the expectation relative to all
possible channel realizations yields
p(X → X̂) ≤ [ det( I_{M_tM_r} + (1/(4σ_n²)) E[H̃^H H̃] 𝒟_X 𝒟_X^H ) ]^{−1}.   (10.30)

•  Suppose that the channel matrix H is random and spatially white, so
that its entries are i.i.d. zero-mean, unit-variance complex Gaussian
random variables. Then E[H̃^H H̃] = I_{M_tM_r}, and (10.30) reduces to
p(X → X̂) ≤ [ det( I_{M_t} + (P/(4σ_n²)) Δ ) ]^{−M_r},   (10.31)
where Δ = (1/P) D_X D_X^H.
ML Detection and Pairwise Error Probability
•  Writing the determinant in (10.31) in terms of the N nonzero eigenvalues
λ_k(Δ), k = 1, . . . , N, of Δ gives
p(X → X̂) ≤ ∏_{k=1}^{N} [ 1 / (1 + γ λ_k(Δ)/4) ]^{M_r},   (10.32)
where γ = P/σ_n². At high SNR this bound is approximately
p(X → X̂) ≤ ( ∏_{k=1}^{N} λ_k(Δ) )^{−M_r} (γ/4)^{−N M_r}.   (10.33)

•  Rank and Determinant Criterion: The pairwise error probability in
(10.33) indicates that the probability of error decreases as γ^{−d} for d = N M_r.
•  Thus, N M_r is the diversity gain of the space-time code. The maximum
diversity gain possible through coherent combining of M_t transmit and M_r
receive antennas is M_t M_r.
ML Detection and Pairwise Error Probability
•  To obtain this maximum diversity gain, the space-time code must be
designed such that the M_t × M_t difference matrix Δ between any two
code words has full rank equal to M_t. This design criterion is referred
to as the rank criterion.
•  The coding gain associated with the pairwise error probability in
(10.33) depends on the first term, ( ∏_{k=1}^{N} λ_k(Δ) )^{−M_r}.
•  A high coding gain is achieved by maximizing the minimum of the
determinant of Δ over all input matrix pairs X and X̂. This criterion is
referred to as the determinant criterion.
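•  As an illustration of the rank and determinant criteria, the sketch below uses the Alamouti code with BPSK symbols as an example codebook (an assumption for this example, not a code discussed on this slide): it checks that every pairwise difference matrix has full rank M_t = 2 and computes the minimum determinant, which governs the coding gain.

```python
import numpy as np
from itertools import product

def alamouti(s1, s2):
    """Alamouti space-time block code matrix (Mt = 2 rows, T = 2 columns)."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

# Enumerate all codewords for BPSK symbols s1, s2 in {-1, +1}
symbols = [-1.0, 1.0]
codewords = [alamouti(s1, s2) for s1, s2 in product(symbols, repeat=2)]

min_rank, min_det = np.inf, np.inf
for i, X in enumerate(codewords):
    for Xh in codewords[i + 1:]:
        D = X - Xh                     # difference matrix D_X
        Delta = D @ D.conj().T         # proportional to the Mt x Mt matrix Delta
        min_rank = min(min_rank, np.linalg.matrix_rank(Delta))
        min_det = min(min_det, np.linalg.det(Delta).real)

print(min_rank)   # 2: full rank -> full diversity gain N*Mr = Mt*Mr
print(min_det)    # > 0: minimum determinant, sets the coding gain
```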



Spatial Multiplexing and BLAST Architectures
•  In order to get full diversity order an encoded bit stream must be
transmitted over all Mt transmit antennas. This can be done through a
serial encoding, illustrated in Figure 10.9.



Spatial Multiplexing and BLAST Architectures
•  A simpler method to achieve spatial multiplexing, pioneered at Bell
Laboratories as one of the Bell Labs Layered Space Time (BLAST)
architectures for MIMO channels, is parallel encoding, illustrated in
Figure 10.10.
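•  As a rough illustration of parallel encoding (not a full BLAST receiver): independent symbol streams are mapped one per transmit antenna and, assuming the receiver knows H, separated here with a simple zero-forcing equalizer. Actual BLAST receivers typically add ordering and successive interference cancellation, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(5)
Mt, Mr, T = 4, 4, 100

# Independent QPSK streams, one per transmit antenna (parallel encoding)
bits = rng.integers(0, 2, (2, Mt, T))
X = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
N = 0.01 * (rng.standard_normal((Mr, T)) + 1j * rng.standard_normal((Mr, T)))
Y = H @ X + N

# Zero-forcing separation of the streams (receiver knows H)
X_zf = np.linalg.pinv(H) @ Y
X_hat = (np.sign(X_zf.real) + 1j * np.sign(X_zf.imag)) / np.sqrt(2)

print(np.mean(np.abs(X_hat - X) > 1e-6))   # symbol error rate, typically 0 here
```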





