Channel capacity
The AWGN channel is represented by a series of outputs $Y_i$ at discrete time event index $i$. $Y_i$ is the sum of the input $X_i$ and noise, $Z_i$, where $Z_i$ is independent and identically distributed and drawn from a zero-mean normal distribution with variance $N$ (the noise). The $Z_i$ are further assumed to not be correlated with the $X_i$.

$$Z_i \sim \mathcal{N}(0, N)$$

$$Y_i = X_i + Z_i$$
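As a minimal simulation sketch (parameter values are illustrative, not from the article), the channel model $Y_i = X_i + Z_i$ can be sampled directly; because the input and the noise are uncorrelated, the output variance comes out close to $P + N$:

```python
import numpy as np

rng = np.random.default_rng(0)

P, N = 2.0, 1.0          # input power constraint and noise variance (illustrative)
n = 100_000

X = rng.normal(0.0, np.sqrt(P), size=n)   # Gaussian input at the power constraint
Z = rng.normal(0.0, np.sqrt(N), size=n)   # i.i.d. zero-mean Gaussian noise
Y = X + Z                                 # AWGN channel output

print(np.var(Y))         # close to P + N = 3, since X and Z are uncorrelated
```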
The capacity of the channel is infinite unless the noise $N$ is nonzero and the $X_i$ are sufficiently constrained. The most common constraint on the input is the so-called "power" constraint, requiring that for a codeword $(x_1, x_2, \dots, x_n)$ transmitted through the channel, we have:

$$\frac{1}{n}\sum_{i=1}^{n} x_i^2 \le P,$$

where $P$ represents the maximum channel power. Therefore, the channel capacity for the power-constrained channel is given by:

$$C = \max_{f(x)\,:\,E\left[X^2\right] \le P} I(X;Y)$$

where $f(x)$ is the distribution of $X$. Expand $I(X;Y)$:

$$I(X;Y) = h(Y) - h(Y \mid X) = h(Y) - h(X + Z \mid X) = h(Y) - h(Z \mid X)$$

Because $X$ and $Z$ are independent, $h(Z \mid X) = h(Z)$, and evaluating the differential entropy of the Gaussian noise gives $h(Z) = \frac{1}{2}\log\left(2\pi e N\right)$, so

$$I(X;Y) = h(Y) - \frac{1}{2}\log\left(2\pi e N\right)$$

Because $X$ and $Z$ are independent and their sum gives $Y$:

$$E\left[Y^2\right] = E\left[(X+Z)^2\right] = E\left[X^2\right] + 2E[X]E[Z] + E\left[Z^2\right] \le P + N$$

From this bound, we infer from a property of the differential entropy that

$$h(Y) \le \frac{1}{2}\log\left(2\pi e (P+N)\right)$$

Therefore the channel capacity is given by the highest achievable bound on the mutual information:

$$I(X;Y) \le \frac{1}{2}\log\left(2\pi e (P+N)\right) - \frac{1}{2}\log\left(2\pi e N\right)$$

where $I(X;Y)$ is maximized when $X \sim \mathcal{N}(0, P)$, from which

$$C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$$
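The resulting closed form $C = \tfrac{1}{2}\log_2\left(1 + P/N\right)$ is straightforward to evaluate numerically; a small helper (the function name is my own, using base-2 logarithms so the result is in bits per channel use):

```python
import math

def awgn_capacity(P: float, N: float) -> float:
    """Capacity of the power-constrained AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1 + P / N)

print(awgn_capacity(1.0, 1.0))  # P/N = 1 (0 dB)    -> 0.5
print(awgn_capacity(3.0, 1.0))  # P/N = 3 (~4.8 dB) -> 1.0
```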
A rate $R$ is said to be achievable if there is a sequence of $\left(2^{nR}, n\right)$ codes so that the maximum probability of error tends to zero as $n$ approaches infinity. The capacity $C$ is the highest achievable rate.

Consider a codeword of length $n$ sent through the AWGN channel with noise level $N$. When received, the codeword vector variance is now $N$, and its mean is the codeword sent. The vector is very likely to be contained in a sphere of radius $\sqrt{n(N+\varepsilon)}$ around the codeword sent. If we decode by mapping every message received onto the codeword at the center of this sphere, then an error occurs only when the received vector is outside of this sphere, which is very unlikely.

Each codeword vector has an associated sphere of received codeword vectors which are decoded to it, and each such sphere must map uniquely onto a codeword. Because these spheres therefore must not intersect, we are faced with the problem of sphere packing. How many distinct codewords can we pack into our $n$-bit codeword vector? The received vectors have a maximum energy of $n(P+N)$ and therefore must occupy a sphere of radius $\sqrt{n(P+N)}$. Each codeword sphere has radius $\sqrt{nN}$. The volume of an $n$-dimensional sphere is directly proportional to $r^n$, so the maximum number of uniquely decodable spheres that can be packed into our sphere is:

$$\frac{c_n \left(n(P+N)\right)^{n/2}}{c_n \left(nN\right)^{n/2}} = 2^{\frac{n}{2}\log\left(1+\frac{P}{N}\right)}$$

By this argument, the rate $R$ can be no more than $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$.
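The sphere-packing count can be checked numerically: the constant $c_n$ cancels in the volume ratio, leaving $(1 + P/N)^{n/2}$ decoding spheres, i.e. a rate of $\tfrac{1}{2}\log_2(1 + P/N)$ bits per channel use (parameters below are illustrative):

```python
import math

P, N = 3.0, 1.0   # signal power and noise variance (illustrative)
n = 20            # block length

r_received = math.sqrt(n * (P + N))  # radius of the sphere of all received vectors
r_decode   = math.sqrt(n * N)        # radius of each decoding sphere

# c_n cancels in the volume ratio, leaving the maximum number of
# non-intersecting decoding spheres:
M = (r_received / r_decode) ** n     # = (1 + P/N) ** (n/2)
rate = math.log2(M) / n              # bits per channel use

print(M, rate)   # ~2**20 spheres, rate ~ 0.5 * log2(1 + P/N) = 1.0
```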
Achievability
In this section, we show achievability of the upper bound on the rate from the last section.

A codebook, known to both encoder and decoder, is generated by selecting codewords of length $n$, i.i.d. Gaussian with variance $P - \varepsilon$ and mean zero. For large $n$, the empirical variance of the codebook will be very close to the variance of its distribution, thereby avoiding violation of the power constraint probabilistically.

Received messages are decoded to a message in the codebook which is uniquely jointly typical. If there is no such message or if the power constraint is violated, a decoding error is declared.

Let $X^n(i)$ denote the codeword for message $i$, while $Y^n$ is, as before, the received vector. Define the following three events:

1. Event $U$: the power of the received message is larger than $P$.
2. Event $V$: the transmitted and received codewords are not jointly typical.
3. Event $E_j$: $\left(X^n(j), Y^n\right)$ is in $A_\varepsilon^{(n)}$, the typical set, where $j \ne i$, which is to say that the incorrect codeword is jointly typical with the received vector.

An error therefore occurs if $U$, $V$, or any of the $E_j$ occur. By the law of large numbers, $P(U)$ goes to zero as $n$ approaches infinity, and by the joint Asymptotic Equipartition Property the same applies to $P(V)$. Therefore, for a sufficiently large $n$, both $P(U)$ and $P(V)$ are each less than $\varepsilon$. Since $X^n(i)$ and $X^n(j)$ are independent for $i \ne j$, we also have that $X^n(j)$ and $Y^n$ are independent, so by the joint AEP,

$$P(E_j) = 2^{-n\left(I(X;Y) - 3\varepsilon\right)}$$

This allows us to bound $P_e^{(n)}$, the probability of error, as follows (for any $\varepsilon > 0$):

$$P_e^{(n)} \le P(U) + P(V) + \sum_{j \ne i} P(E_j) \le \varepsilon + \varepsilon + \left(2^{nR} - 1\right) 2^{-n\left(I(X;Y) - 3\varepsilon\right)} \le 3\varepsilon$$

for sufficiently large $n$ whenever $R < I(X;Y) - 3\varepsilon$. Therefore, the encoding scheme described above can achieve rates arbitrarily close to the channel capacity.
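The random-coding argument can be sketched in a quick Monte Carlo experiment (all parameters are illustrative, and minimum-distance decoding stands in for joint-typicality decoding): at a rate well below capacity, a randomly drawn Gaussian codebook decodes essentially without error.

```python
import numpy as np

rng = np.random.default_rng(0)

P, N = 1.0, 1.0               # signal power and noise variance (illustrative)
n = 200                       # block length
M = 2 ** 8                    # 256 messages -> rate R = 8/200 = 0.04 bits/use
C = 0.5 * np.log2(1 + P / N)  # capacity = 0.5 bits/use, so R << C

# random Gaussian codebook, known to both encoder and decoder
codebook = rng.normal(0.0, np.sqrt(P), size=(M, n))

trials, errors = 200, 0
for _ in range(trials):
    i = rng.integers(M)                                    # pick a message
    y = codebook[i] + rng.normal(0.0, np.sqrt(N), size=n)  # send through AWGN
    j = int(np.argmin(((codebook - y) ** 2).sum(axis=1)))  # min-distance decode
    errors += (j != i)

print(f"R = {8/n:.3f}, C = {C:.3f}, error rate = {errors/trials:.3f}")
```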
Coding theorem converse

Here we show that rates above the capacity $C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$ are not achievable.

Suppose that the power constraint is satisfied for a codebook, and further suppose that the messages follow a uniform distribution. Let $W$ be the input messages and $W'$ the output messages. Thus the information flows as:

$$W \longrightarrow X^{(n)}(W) \longrightarrow Y^{(n)} \longrightarrow W'$$

Making use of Fano's inequality gives $H(W \mid W') \le 1 + nR\,P_e^{(n)} = n\varepsilon_n$, where $\varepsilon_n \to 0$ as $P_e^{(n)} \to 0$. Let $X_i$ be the encoded message of codeword index $i$. Then:

$$nR = H(W) = I(W; W') + H(W \mid W') \le I\left(X^{(n)}; Y^{(n)}\right) + n\varepsilon_n \le \sum_{i=1}^{n} I(X_i; Y_i) + n\varepsilon_n$$

Let $P_i$ be the average power of the codeword at index $i$:

$$P_i = \frac{1}{2^{nR}} \sum_{w} x_i^2(w)$$

where the sum is over all input messages $w$. $X_i$ and $Z_i$ are independent, thus the expectation of the power of $Y_i$ is, for noise level $N$:

$$E\left(Y_i^2\right) = P_i + N$$

And, if $Y_i$ is normally distributed, we have that

$$I(X_i; Y_i) \le \frac{1}{2}\log\left(2\pi e (P_i + N)\right) - \frac{1}{2}\log\left(2\pi e N\right)$$

Therefore,

$$nR \le \sum_{i=1}^{n} \frac{1}{2}\log\left(1 + \frac{P_i}{N}\right) + n\varepsilon_n$$

We may apply Jensen's inequality to $\log(1 + x)$, a concave function of $x$, to get:

$$\frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\log\left(1 + \frac{P_i}{N}\right) \le \frac{1}{2}\log\left(1 + \frac{1}{n}\sum_{i=1}^{n}\frac{P_i}{N}\right)$$

Because each codeword individually satisfies the power constraint, the average also satisfies the power constraint. Therefore

$$\frac{1}{n}\sum_{i=1}^{n} \frac{P_i}{N} \le \frac{P}{N},$$

which simplifies the inequality above to $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$. It follows that $R \le \frac{1}{2}\log\left(1 + \frac{P}{N}\right) + \varepsilon_n$, so as $\varepsilon_n \to 0$, $R$ must be less than a value arbitrarily close to the capacity derived earlier.
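The averaging step relies on the concavity of $\log(1 + x)$ (Jensen's inequality): the average of the per-index bounds never exceeds the bound evaluated at the average power. A quick numeric check with made-up per-index powers:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1.0
P_i = rng.uniform(0.5, 2.0, size=50)        # made-up per-index codeword powers

lhs = np.mean(0.5 * np.log2(1 + P_i / N))   # average of the per-index bounds
rhs = 0.5 * np.log2(1 + np.mean(P_i) / N)   # bound at the average power

print(lhs <= rhs)   # True, by Jensen's inequality for a concave function
```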
For a channel observed through a bandpass filter, the capacity per unit time takes the Shannon–Hartley form

$$C = B \log_2\left(1 + \mathrm{SNR}\right)$$

where $f_0$ = the center frequency of the filter, $B$ = the filter bandwidth, and $\mathrm{SNR}$ = the signal-to-noise power ratio in linear terms.
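As a quick numeric sketch of the band-limited case under the Shannon–Hartley formula $C = B\log_2(1 + \mathrm{SNR})$ (the bandwidth and SNR values below are illustrative, not from the article):

```python
import math

B = 1e6                      # filter bandwidth in Hz (illustrative value)
snr_db = 20.0                # signal-to-noise ratio in dB (illustrative value)
snr = 10 ** (snr_db / 10)    # convert dB to the linear power ratio

# Shannon-Hartley capacity of a band-limited channel, in bits per second
C = B * math.log2(1 + snr)
print(f"C = {C / 1e6:.2f} Mbit/s")   # C = 6.66 Mbit/s
```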
License
Creative Commons Attribution-Share Alike 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)