
EEE 6207 : Broadband Wireless Communication

Instructor : Dr. Md. Forkan Uddin

Reduction of PAPR in OFDM Systems


using Encoder-Decoder Neural Networks

101806-2228
Shahruk Hossain
MSc, EEE, BUET

13.04.2019

Code available at: https://github.com/shahruk10/PAPRnet


OFDM System : Transmitter

[Block diagram: Bit Stream, e.g. 01001...]

● Stream of 0s and 1s
OFDM System : Transmitter
[Block diagram: Bit Stream → Serial to Parallel]

● Stream of 0s and 1s
● (N x M) bits chunked into groups via buffer
OFDM System : Transmitter

[Block diagram: Bit Stream → Serial to Parallel → M-QAM modulation]

● Stream of 0s and 1s
● (N x M) bits chunked into groups via buffer
● Chunks mapped to QAM symbols (r, θ)
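
A minimal NumPy sketch of these two steps (the helper name and Gray mapping are illustrative, not from the project code):

import numpy as np

def bits_to_qam4(bitStream):
    # serial-to-parallel: chunk the bit stream into groups of 2 bits (4-QAM)
    pairs = np.asarray(bitStream).reshape(-1, 2)
    # map each bit pair to one quadrant of the 4-QAM constellation
    return ((2 * pairs[:, 0] - 1) + 1j * (2 * pairs[:, 1] - 1)) / np.sqrt(2)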
OFDM System : Transmitter

[Block diagram: Bit Stream → Serial to Parallel → M-QAM modulation → N-IFFT]

● QAM symbols (r, θ) taken in IFFT window of length N


● r treated as magnitude of frequency component
● θ treated as phase of frequency component
● IFFT returns N time domain samples of OFDM Signal
● Samples represent an OFDM symbol
● Each OFDM symbol encodes (N x M) bits
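
As a minimal sketch of this step (placeholder symbols, NumPy assumed):

import numpy as np

N = 512  # IFFT window length / number of sub-carriers

# placeholder unit-magnitude QAM symbols; each sets the magnitude (r)
# and phase (θ) of one sub-carrier
qamSymbols = np.exp(2j * np.pi * np.random.rand(N))

# the IFFT superimposes all N sub-carriers into N time-domain samples,
# which together form one OFDM symbol
ofdmSymbol = np.fft.ifft(qamSymbols, N)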
OFDM System : Transmitter
[Figure: Magnitude spectra (r) and phase spectra (θ) → N-IFFT → time-domain OFDM signal]
OFDM System : Transmitter

[Block diagram: Bit Stream → Serial to Parallel → M-QAM modulation → N-IFFT → Add Ncp Cyclic Prefix]

● Some samples from the end of each OFDM symbol are prepended to it (cyclic prefix)
OFDM System : Transmitter

[Block diagram: Bit Stream → Serial to Parallel → M-QAM modulation → N-IFFT → Add Ncp Cyclic Prefix → Parallel to Serial → Transmitter Circuitry]

● Ncp samples from the end of each OFDM symbol are prepended to it (cyclic prefix)
● Samples are serialized out
● Samples converted to analog signal
● Power Amplifiers boost the signal
● Signal is transmitted
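
A minimal sketch of the cyclic prefix step (function name illustrative):

import numpy as np

def addCyclicPrefix(ofdmSymbol, Ncp):
    # prepend the last Ncp samples of the symbol to itself
    return np.concatenate([ofdmSymbol[-Ncp:], ofdmSymbol])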
Problem : High PAPR
[Figure: Time-domain OFDM signal showing occasional high peaks]

● OFDM signal = sum of many sub-carriers
● Where sub-carrier phases align, they sum to high peaks
● Peak power >> average power
Problem : High PAPR
● PAPR can be as high as 12 ~ 25 dB
● PAPR = 10 dB means the peak power is 10 times the average power
● Power peaks drive power amplifiers into their non-linear region
● Leads to Intermodulation Distortion
● PAPR increases with number of sub-carriers (N)
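
For reference, a minimal sketch of the PAPR computation (function name illustrative):

import numpy as np

def paprDB(ofdmSymbol):
    # PAPR = peak instantaneous power / average power, expressed in dB
    power = np.abs(ofdmSymbol) ** 2
    return 10 * np.log10(np.max(power) / np.mean(power))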
High PAPR : Solutions
● Clipping Peaks **
● Coding Techniques **
● Tone Reservation
● Partial Transmit Sequence (PTS)
● Selective Mapping (SLM) **
● Constellation Extension (CE)
● Encoder-Decoder Neural Network **

** techniques implemented for this project
High PAPR : Solutions
● Solutions were implemented in Python
● An AWGN channel with Rician fading was used when calculating BER
● N = 512, upsampling factor L = 4
● Modulation type was 4-QAM
● Cyclic prefix length was 25% of N = 128 samples
● Data for transmission was a test image (shown on the original slide)
● Code available at : https://github.com/shahruk10/PAPRnet
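
These settings correspond to a parameter block along these lines (names illustrative):

params = dict(
    N=512,    # number of sub-carriers / IFFT length
    L=4,      # upsampling factor
    Ncp=128,  # cyclic prefix length (25% of N)
    M=4,      # 4-QAM: 2 bits per modulation symbol
)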
Clipping
● Clip sample magnitudes above a threshold in the OFDM signal
● Threshold chosen as a percentage of the maximum magnitude
● Aggressive clipping = increase in Bit Error Rate (BER)
Clipping

import numpy as np

def clipOFDM(self, ofdmChunks):
    # clip samples whose magnitude exceeds a fraction of the chunk's peak
    for cdx, chunk in enumerate(ofdmChunks):
        clippingVal = self.clippingPercent * np.max(np.abs(chunk))
        for vdx, val in enumerate(chunk):
            if np.abs(val) > clippingVal:
                # cap the magnitude at the threshold; phase stays the same
                # (equivalent to the repo's polar2rect helper)
                ofdmChunks[cdx, vdx] = clippingVal * np.exp(1j * np.angle(val))

    return ofdmChunks
Convolutional Encoding
● Select a coding scheme that minimizes PAPR
● Adds redundancy and error-correcting ability
● Implemented code generators of different code rates
● Shown to reduce PAPR
● Requires extensive calculation to find good codes
Convolutional Encoding

[Figure: Shift-register diagrams of the implemented convolutional encoders]
(a) coding rate = ½  (b) coding rate = ⅓  (c) coding rate = ¼
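
A minimal sketch of a rate-½ shift-register encoder of this kind (the generator taps g1, g2 and constraint length K below are illustrative, not necessarily the generators used in the project):

import numpy as np

def convEncodeRateHalf(bits, g1=0b1011, g2=0b1111, K=4):
    # each input bit shifts into the register; two coded bits are emitted
    # as parities over the two sets of generator taps
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | int(b)) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return np.array(out, dtype=np.uint8)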


Selective Mapping
● Involves finding an alternate, phase-shifted version of the OFDM signal with lower PAPR
● Phase Rotating Vectors (PRVs) are randomly generated using values from: [ 1, -1, -j, j ]
● Modulation symbols are multiplied by the PRVs
● New OFDM Signal = IFFT { Rotated Symbols }
● Signal with the lowest PAPR is selected

[Block diagram: M-QAM modulation → SLM (multiply with different PRVs) → N-IFFT → Check PAPR]
Selective Mapping
● Receiver needs to know which PRV was used in order to reverse the changes
● Decrease in PAPR depends on the number of PRV candidates tried
● Distortionless technique (no degradation in BER)

[Block diagram: M-QAM modulation → SLM (multiply with different PRVs) → N-IFFT → Check PAPR]
Selective Mapping
def applySLM(self, ofdmChunks):
    np.random.seed(self.seed)
    self.SLMPhaseRotationOptions = [1, -1, 1j, -1j]
    self.SLMPhaseVectorIdx = np.zeros(ofdmChunks.shape[0])
    phaseVecCandidates = np.random.choice(self.SLMPhaseRotationOptions,
                                          (self.SLMCandidates, self.N))
    slmChunks = np.zeros_like(ofdmChunks)

    for cdx, chunk in enumerate(ofdmChunks):
        # multiplying by each phaseVec candidate and finding min obtained papr
        minPAPR = np.inf
        minPdx = None
        for pdx, phaseVec in enumerate(phaseVecCandidates):
            mapped = np.multiply(chunk, phaseVec)
            mappedTimeDomainSq = np.power(np.abs(np.fft.ifft(mapped, self.N, axis=-1)), 2)
            PAPR = np.divide(np.max(mappedTimeDomainSq), np.mean(mappedTimeDomainSq))
            if PAPR < minPAPR:
                minPAPR = PAPR
                minPdx = pdx

        # storing mapped vec with min papr
        slmChunks[cdx] = np.multiply(chunk, phaseVecCandidates[minPdx])
        self.SLMPhaseVectorIdx[cdx] = minPdx

    return slmChunks
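
For completeness, a hypothetical receiver-side counterpart (not from the repo) that undoes the rotation, given the selected PRV index received as side information:

import numpy as np

def reverseSLM(slmChunks, phaseVecCandidates, phaseVectorIdx):
    # PRV entries are unit magnitude, so multiplying by the complex
    # conjugate of the PRV inverts the rotation exactly
    out = np.empty_like(slmChunks)
    for cdx, chunk in enumerate(slmChunks):
        prv = phaseVecCandidates[int(phaseVectorIdx[cdx])]
        out[cdx] = chunk * np.conj(prv)
    return out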
Encoder-Decoder Neural Network : PAPRnet
● Why not learn an optimum mapping from data?
● Model is trained to minimize PAPR while also minimizing BER
● Akin to data-driven SLM or CE
● Trained on typical data to be transmitted
● Implemented in Keras / Python
● Needs training data for noisy real-world conditions as well

[Diagram: Tx modulation symbols (r, θ) → Encoder Model → alternate symbols (z, φ) with min PAPR → Decoder Model → Rx modulation symbols (r, θ)]
Encoder-Decoder Neural Network : PAPRnet
Encoder Model: Input (2, N) → Dense (2, 2N) → Batch Normalization → Dense (2, 2N) → Batch Normalization → Dense (2, N)

Decoder Model: Input (2, N) → Dense (2, 2N) → Batch Normalization → Dense (2, 2N) → Batch Normalization → Dense (2, N)

● During training, the encoder output is also passed through an N-IFFT to compute the PAPR and its gradient
● The decoder output is compared against the Tx modulation symbols (r, θ) via an MSE loss

[Diagram: Tx modulation symbols (r, θ) → Encoder Model → alternate symbols (z, φ) with min PAPR → Decoder Model → Rx modulation symbols (r, θ)]
Encoder-Decoder Neural Network : PAPRnet

(a) Transmitting Side:
Bit Stream → Serial to Parallel → M-QAM modulation → Encoder → N-IFFT → Add Cyclic Prefix → Parallel to Serial → Transmitter Circuitry

(b) Receiving Side:
Receiver Circuitry → Serial to Parallel → Strip Cyclic Prefix → N-FFT → Decoder → M-QAM Demodulation → Parallel to Serial → Bit Stream


Encoder-Decoder Neural Network : PAPRnet
from keras.layers import Input, Dense, BatchNormalization, Lambda
from keras.models import Model

def PAPRnetEncoder(N):
    enc_in = Input((2, N), name="encoder_input")

    h1 = Dense(N * 2, activation='relu', name='Dense1')(enc_in)
    h1 = BatchNormalization(name='DenseBN1')(h1)

    h2 = Dense(N * 2, activation='relu', name='Dense2')(h1)
    h2 = BatchNormalization(name='DenseBN2')(h2)

    h3 = Dense(N, activation='relu', name='Dense3')(h2)
    h3 = BatchNormalization(name='DenseBN3')(h3)

    enc_out = Lambda(lambda x: x, name='encoder_output')(h3)
    return Model(inputs=[enc_in], outputs=[enc_out], name="PAPRnet_Encoder")

def PAPRnetDecoder(N):
    dec_in = Input((2, N), name="decoder_input")

    h4 = Dense(N * 2, activation='relu', name='Dense4')(dec_in)
    h4 = BatchNormalization(name='DenseBN4')(h4)

    h5 = Dense(N * 2, activation='relu', name='Dense5')(h4)
    h5 = BatchNormalization(name='DenseBN5')(h5)

    dec_out = Dense(N, activation='linear', name='decoder_output')(h5)
    return Model(inputs=[dec_in], outputs=[dec_out], name="PAPRnet_Decoder")
Encoder-Decoder Neural Network : PAPRnet
# FloatToComplex, IFFT, and ComplexToFloat are custom Keras layers
# defined in the linked repo
def PAPRnetAutoEncoder(N, enc, dec):
    # auto encoder
    enc_in = enc.input
    enc_out = enc(enc_in)
    dec_out = dec(enc_out)

    # taking ifft of encoder output - used to minimize PAPR
    cmplx = FloatToComplex(enc_out, name='EncoderOut-FloatToComplex')
    ifft = IFFT(cmplx, name='%d-IFFT' % N)
    ifft = ComplexToFloat(ifft, name='%d-IFFT-ComplexToFloat' % N)

    return Model(inputs=[enc_in], outputs=[dec_out, ifft])
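
A hypothetical training setup under these assumptions (the PAPR loss below is a sketch, not necessarily the loss used in the repo; it treats the (2, N) IFFT output as stacked real and imaginary parts):

import numpy as np
import keras.backend as K

def paprLoss(_, ifftOut):
    # instantaneous power per time sample: re^2 + im^2 across the 2-axis
    power = K.sum(K.square(ifftOut), axis=1)
    # minimize the mean ratio of peak power to average power
    return K.mean(K.max(power, axis=-1) / K.mean(power, axis=-1))

N = 512
enc, dec = PAPRnetEncoder(N), PAPRnetDecoder(N)
autoencoder = PAPRnetAutoEncoder(N, enc, dec)

# reconstruct the input symbols (MSE) while penalizing PAPR of the
# encoded signal; the loss weights are illustrative
autoencoder.compile(optimizer='adam', loss=['mse', paprLoss],
                    loss_weights=[1.0, 0.1])

symbols = np.random.randn(1024, 2, N).astype('float32')  # placeholder training data
dummy = np.zeros_like(symbols)                           # unused target for paprLoss
autoencoder.fit(symbols, [symbols, dummy], epochs=10, batch_size=32)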
