
Welcome to LTE Encyclopedia, your one-stop resource for all information related to the 3GPP LTE standard.

3GPP Long Term Evolution (LTE) is the latest standard in the mobile network technology tree that produced the GSM/EDGE and UMTS/HSxPA network technologies. It is a project of the 3rd Generation Partnership Project (3GPP), operating under a name trademarked by one of the associations within the partnership, the European Telecommunications Standards Institute.

LTE Systems Overview

Long Term Evolution (LTE) is the latest step in moving forward from the cellular 3G services (e.g. GSM to UMTS to HSPA to LTE, or CDMA to LTE). LTE is based on standards developed by the 3rd Generation Partnership Project (3GPP). LTE may also be referred to more formally as Evolved UMTS Terrestrial Radio Access (E-UTRA) and Evolved UMTS Terrestrial Radio Access Network (E-UTRAN). Even though 3GPP created the standards for the GSM/UMTS family, the LTE standards are largely new, with exceptions where reuse made sense. The following are the main objectives for LTE:

- Increased downlink and uplink peak data rates
- Scalable bandwidth
- Improved spectral efficiency
- All-IP network
- A standards-based interface that can support a multitude of user types

LTE networks are intended to bridge the functional data exchange gap between very high data rate fixed wireless Local Area Networks (LANs) and very high mobility cellular networks.

Overview of the LTE Standard


The original study item on Long Term Evolution (LTE) of the 3GPP Radio Access Technology was initiated with the aim of ensuring that the 3GPP RAT remains competitive in the future (the next 10 years). The focus of the study was on enhancement of the radio access technology (UTRA) and optimization and simplification of the radio access network (UTRAN). The key driving factors for LTE are:

- Efficient spectrum utilization
- Flexible spectrum allocation
- Reduced cost for the operator
- Improved system capacity and coverage
- Higher data rate with reduced latency

Targets for LTE

Some specific targets set for LTE are listed below [3GPP TR 25.913]:

- Increased peak data rate: 100 Mbps for DL with 20 MHz (2 RX antennas at the UE), 50 Mbps for UL with 20 MHz
- Improved spectral efficiency: 5 bps/Hz for DL and 2.5 bps/Hz for UL
- Improved cell edge performance (in terms of bit rate)
- Reduced latency

Overall Network Architecture

The E-UTRAN uses a simplified single-node architecture consisting of the eNBs (E-UTRAN Node B). The eNB communicates with the Evolved Packet Core (EPC) using the S1 interface, specifically with the MME (Mobility Management Entity) and the UPE (User Plane Entity), identified as the S-GW (Serving Gateway), using S1-C and S1-U for the control plane and user plane respectively. The MME and the UPE are preferably implemented as separate network nodes so as to facilitate independent scaling of the control and user plane. The eNB also communicates with other eNBs using the X2 interface (X2-C and X2-U for the control and user plane respectively). Please refer to LTE Network Infrastructure and Elements for a detailed overview of the individual network elements.

Overall Architecture [3GPP TS 36.300]

LTE supports an option of Multicast/Broadcast over a Single Frequency Network (MBSFN), where a common signal is transmitted from multiple cells with appropriate time synchronization. The eNB, being the only entity of the E-UTRAN, supports all the functions of a typical radio network such as radio bearer control, mobility management, admission control and scheduling. The Access Stratum resides completely at the eNB.

Functional Split between E-UTRAN and EPC [3GPP TS 36.300]

LTE Physical layer

The LTE physical layer is based on the Orthogonal Frequency Division Multiplexing (OFDM) scheme to meet the targets of high data rate and improved spectral efficiency. The spectral resources are allocated and used as a combination of both time units (slots) and frequency units (subcarriers). MIMO options with 2 or 4 antennas are supported, and multi-user MIMO is supported in both UL and DL. The modulation schemes supported in the downlink and uplink are QPSK, 16QAM and 64QAM.

Downlink (DL) Physical Channel

The downlink transmission uses OFDM with a cyclic prefix. Some of the reasons for using OFDM are given below:

- Multi-carrier modulation (MCM) helps counter frequency-selective fading, as the channel appears to have a nearly flat frequency response over each narrowband subcarrier.
- The frequency range of a resource block and the number of resource blocks can be changed (or adapted to the channel condition), allowing flexible spectrum allocation.
- Higher peak data rates can be achieved by using multiple resource blocks rather than by reducing the symbol duration or using still higher-order modulation, thereby reducing receiver complexity.
- The multiple orthogonal subcarriers inherently provide high spectral efficiency.

The cyclic prefix (CP) is a partial repetition of the symbol sequence, copied from the end to the beginning. This makes the time-domain input sequence appear periodic over a duration, so that a DFT representation is possible for any frequency-domain processing. Also, if the CP duration is chosen to be larger than the channel delay spread, it helps reduce inter-symbol interference. The following pilot signals are defined for the downlink physical layer:

- Reference signal: The reference signal consists of known symbols transmitted at well-defined OFDM symbol positions in the slot. This assists the receiver at the user terminal in estimating the channel impulse response so that channel distortion in the received signal can be compensated for. One reference signal is transmitted per downlink antenna port, and an exclusive symbol position is assigned to each antenna port (when one antenna port transmits a reference signal, the other ports are silent).
- Synchronization signal: Primary and secondary synchronization signals are transmitted at fixed subframe positions (the first and sixth subframes) in a frame and assist in the cell search and synchronization process at the user terminal. Each cell is assigned a unique primary synchronization signal.
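As a rough numerical sketch of the cyclic prefix idea described above (illustrative only; the FFT size and CP length are arbitrary example values, not LTE numerology):

```python
# Minimal sketch of cyclic prefix insertion for one OFDM symbol.
import numpy as np

n_fft, n_cp = 64, 16                     # hypothetical FFT size and CP length
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)

time_symbol = np.fft.ifft(qpsk)                                  # subcarrier symbols -> time domain
cp_symbol = np.concatenate([time_symbol[-n_cp:], time_symbol])   # copy the tail to the front

print(len(cp_symbol))  # 80 samples: 16 CP samples + 64 useful samples
# As long as the channel delay spread is shorter than the CP, each received symbol sees a
# circular convolution of the channel, which is why per-subcarrier equalization works.
```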

Uplink (UL) Physical Channel


The uplink transmission uses the SC-FDMA (Single Carrier FDMA) scheme. SC-FDMA is realized as a two-stage process: the first stage transforms the input signal to the frequency domain (represented by DFT coefficients) and the second stage converts these DFT coefficients to an OFDM signal using the OFDM scheme. Because of this association with OFDM, SC-FDMA is also called DFT-Spread OFDM. The reasons for this choice (in addition to those applicable to OFDM for the downlink) are given below: The two-stage process allows selection of an appropriate frequency range for the subcarriers while mapping the set of DFT coefficients to the resource blocks. A unique frequency can be allocated to different users at any given time so that there is no co-channel interference between users in the same cell. Also, channels with significant co-channel interference can be avoided.

The transformation is equivalent to a shift in the center frequency of the single-carrier input signal. The subcarriers do not combine in random phases to cause large variations in the instantaneous power of the modulated signal. This means a lower PAPR (Peak to Average Power Ratio).

The PAPR (Peak to Average Power Ratio) of SC-FDMA is lower than that of conventional OFDMA, so the RF power amplifier (PA) can be operated at a point nearer to its recommended operating point. This increases the efficiency of the PA, thereby reducing the power consumption at the user terminal.
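The PAPR benefit can be illustrated numerically. The sketch below is my own illustration, with arbitrary sizes and a random QPSK block rather than LTE parameters; it compares a directly mapped OFDMA symbol with a DFT-spread (SC-FDMA-style) symbol:

```python
# Compare the PAPR of a plain OFDMA symbol with a DFT-spread (SC-FDMA-like) symbol.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_fft = 60, 512  # hypothetical: 60 occupied subcarriers, 512-point IFFT

def papr_db(x):
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

qpsk = (rng.choice([-1, 1], n_data) + 1j * rng.choice([-1, 1], n_data)) / np.sqrt(2)

ofdma = np.zeros(n_fft, complex)
ofdma[:n_data] = qpsk                                 # map QPSK symbols straight onto subcarriers

scfdma = np.zeros(n_fft, complex)
scfdma[:n_data] = np.fft.fft(qpsk) / np.sqrt(n_data)  # DFT-precode first, then map

print("OFDMA PAPR:   %.1f dB" % papr_db(np.fft.ifft(ofdma)))
print("SC-FDMA PAPR: %.1f dB" % papr_db(np.fft.ifft(scfdma)))
# The DFT-spread waveform typically shows a PAPR several dB lower, which is the
# property that lets the UE power amplifier run closer to its optimum operating point.
```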

The following pilot signals are defined for the uplink physical layer:

- Demodulation Reference Signal: This signal, sent by the user terminal along with the uplink transmission, assists the network in estimating the channel impulse response for the uplink bursts so as to effectively demodulate the uplink channel.
- Sounding Reference Signal: This signal, sent by the user terminal, assists the network in estimating the overall channel conditions and in allocating appropriate frequency resources for uplink transmission.

RLC & MAC Layer

Up-to-date versions of the RLC [3GPP TS 36.322] and MAC [3GPP TS 36.321] specifications are available and the majority of procedures are specified. Hybrid ARQ is specified at the MAC layer in addition to ARQ at the RLC layer.

Radio Resource Management

All the following functions are assigned to the eNodeB(s) in the E-UTRAN:

- Radio bearer control
- Radio admission control
- Connection mobility management
- Dynamic resource allocation
- Inter-cell interference coordination
- Load balancing
- Inter-RAT RRM functions

Cell Reselection Procedures in LTE

Contents

1 Introduction
2 LTE Initial Access
3 Initial synchronization
3.1 Primary Synchronization Signal (PSS)
3.2 Secondary Synchronization Signal (SSS)
4 LTE Cell selection and reselection criteria

Introduction
Cell reselection is a complex process in LTE. The following extract from [1] provides a very good understanding of the overall procedure.

LTE Initial Access


Like all mobile communication systems, in LTE a terminal must perform certain steps before it can receive or transmit data. These steps can be categorized into cell search and cell selection, derivation of system information, and random access. The complete procedure is known as LTE Initial Access and is shown in the Figure below. After the initial access procedure, the terminal is able to receive and transmit its user data.

Initial synchronization

Successful execution of the cell search and selection procedure, as well as acquiring initial system information, is essential for the UE before taking further steps to communicate with the network. For this reason, it is important to take a closer look at this fundamental physical layer procedure. This section focuses on the cell-search scheme defined for LTE and the next chapter describes reception of the essential system information. As in 3G (WCDMA), LTE uses a hierarchical cell-search procedure in which an LTE radio cell is identified by a cell identity, which is comparable to the scrambling code that is used to separate base stations and cells in WCDMA. To avoid the need for expensive and complicated network and cell planning, the number of 504 physical layer cell identities is sufficiently large. With a hierarchical cell search scheme, these identities are divided into 168 unique physical layer cell identity groups, each group consisting of three physical layer identities. To remember this hierarchical principle, consider the example of first names and surnames. According to statistics, the most common English surname is Smith, which corresponds to physical layer cell identity group 0. The second most common surname is Johnson, which represents physical layer cell identity group 1. This example can be extended to the last group, which would be Rose. The most common male first names are James, John, or Robert and the most common female names are Mary, Patricia, and Linda. Each first name represents one of the three physical layer identities. This information is transmitted using two different signals generated by Layer 1. The two signals, carrying the physical layer identity and the physical layer cell identity group, are the primary and the secondary synchronization signals respectively. This means that the complete cell search procedure consists of two steps to identify the cell's identity, as shown graphically in the Figure below:
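As a small illustration of this two-level structure (a sketch of the relationship described above, not text from the article), the physical cell identity can be reconstructed from the group carried by the SSS and the identity within the group carried by the PSS:

```python
# Physical cell identity = 3 * (cell identity group from SSS) + (identity from PSS).
# 168 groups x 3 identities = 504 physical layer cell identities.
def physical_cell_id(group_id: int, identity: int) -> int:
    assert 0 <= group_id <= 167, "group carried by the SSS"
    assert 0 <= identity <= 2, "identity within the group, carried by the PSS"
    return 3 * group_id + identity

print(physical_cell_id(0, 0))    # 0   ("Smith, James" in the analogy above)
print(physical_cell_id(167, 2))  # 503, the largest of the 504 identities
```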

Primary Synchronization Signal (PSS)


The UE first looks for the primary synchronization signal (PSS), which is transmitted in the last OFDM symbol of the first time slot of the first subframe (subframe 0) in a radio frame. This enables the UE to acquire the slot boundary independently of the cyclic prefix selected for this cell. Based on the downlink frame structure (Type 1, FDD), the primary synchronization signal is transmitted twice per radio frame, so it is repeated in subframe 5 (in time slot 11). This enables the UE to get time synchronized on a 5 ms basis, which was selected to simplify the required inter-frequency and inter-RAT measurements. LTE must accommodate handover to and from other radio access technologies, such as GSM/GPRS/EDGE, WCDMA/HSPA or CDMA2000 1xRTT/1xEV-DO.

Secondary Synchronization Signal (SSS)


After the mobile has found the 5 ms timing, the second step is to obtain the radio frame timing and the cell's group identity. This information can be found from the SSS. In the time domain, the SSS is transmitted in the symbol before the PSS. The SSS also has 5 ms periodicity, which means it is transmitted in the first and sixth subframes (subframes 0 and 5) as shown in the Figure below. Like the PSS, the SSS is transmitted on 62 of the 72 reserved subcarriers around the DC subcarrier.

LTE Cell selection and reselection criteria

The previous section described how initial cell selection works and the difference between LTE FDD and TD-LTE. However, only when specific criteria are fulfilled is the UE allowed to camp on that cell. These criteria for cell selection as well as cell reselection for LTE are specified in [3]. This is further illustrated by a description of the two procedures. In the initial cell selection procedure, as described in the previous sections, no knowledge about RF channels carrying an E-UTRA signal is available at the UE. In that case the UE scans the supported E-UTRA frequency bands to find a suitable cell. Only the cell with the strongest signal per carrier will be selected by the UE. The second procedure relies on information about carrier frequencies and optionally cell parameters received and stored from previously detected cells. If no suitable cell is found using the stored information, the UE starts with the initial cell selection procedure. Srxlev is the criterion defined to decide if the cell is still suitable. This criterion is fulfilled when the cell selection receive level Srxlev > 0, where Srxlev is computed based on the equation below:

Srxlev = Qrxlevmeas - (Qrxlevmin + Qrxlevminoffset) - Pcompensation

Qrxlevmeas is the measured receive level value for this cell, i.e. the Reference Signal Received Power (RSRP). This measured value is the linear average over the power of the resource elements that carry the cell-specific reference signals within the considered measurement bandwidth. Consequently, it depends on the configured signal bandwidth. If receiver diversity is configured at the UE, the reported value is the linear average of the power values of all diversity branches.

Qrxlevmin is the minimum required receive level in this cell, given in dBm. This value is signalled as QrxLevMin by higher layers as part of System Information Block Type 1 (SIB Type 1). Qrxlevmin is calculated from the value provided within the information element (-70 to -22), multiplied by a factor of 2, in dBm.

Qrxlevminoffset is an offset to Qrxlevmin that is only taken into account as a result of a periodic search for a higher priority PLMN while camped normally in a Visitor PLMN (VPLMN). This offset is based on the information element provided within SIB Type 1, taking integer values (1 to 8) also multiplied by a factor of 2, in dB. This gives a wider range while keeping the number of bits used to transmit this information small. The offset is defined to avoid ping-pong between different PLMNs. If it is not available, Qrxlevminoffset is assumed to be 0 dB.

Pcompensation is a maximum function, Pcompensation = max(PEMAX - PUMAX, 0): whichever value is higher, PEMAX - PUMAX or 0, is used for Pcompensation. PEMAX [dBm] is the maximum power a UE is allowed to use in this cell, whereas PUMAX [dBm] is the maximum transmit power of the UE according to the power class it belongs to. At the moment only one power class is defined for LTE, which corresponds to Power Class 3 in WCDMA and specifies +23 dBm. PEMAX is defined by higher layers and corresponds to the parameter P-MAX defined in [2]. Based on this relationship, PEMAX can take values between -30 and +33 dBm. Pcompensation is only taken into account when calculating Srxlev if PEMAX > +23 dBm. The P-MAX information element (IE) is part of SIB Type 1 as well as of the "RadioResourceConfigCommon" IE, which is part of SIB Type 2. As explained above, all parameters except Qrxlevmeas are provided via system information. In a real network a UE will receive several cells, perhaps from different network operators. The UE only knows after reading SIB Type 1 whether this cell belongs to its operator's network (PLMN Identity). First the UE will look for the strongest cell per carrier, then for the PLMN identity by decoding SIB Type 1 to decide if this PLMN is a suitable identity. Afterwards it will compute the S criterion and decide whether the cell is suitable or not.
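A minimal sketch of this suitability check (function and variable names are mine, and the example values are placeholders rather than figures from a real SIB):

```python
# Evaluate Srxlev = Qrxlevmeas - (Qrxlevmin + Qrxlevminoffset) - Pcompensation.
def srxlev_db(q_rxlevmeas_dbm: float, q_rxlevmin_dbm: float,
              q_rxlevminoffset_db: float = 0.0,
              p_emax_dbm: float = 23.0, p_umax_dbm: float = 23.0) -> float:
    p_compensation = max(p_emax_dbm - p_umax_dbm, 0.0)  # only positive when PEMAX exceeds the UE power class
    return q_rxlevmeas_dbm - (q_rxlevmin_dbm + q_rxlevminoffset_db) - p_compensation

# Example: measured RSRP of -100 dBm against a signalled minimum of -124 dBm.
s = srxlev_db(q_rxlevmeas_dbm=-100.0, q_rxlevmin_dbm=-124.0)
print(s, "-> suitable" if s > 0 else "-> not suitable")   # 24.0 -> suitable
```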

The Figure above shows one possible scenario in a real network. Assume that the UE belongs to network operator 1. There are two other carriers also operating an LTE network, but of course at different frequencies. The terminal receives all base stations, but at different power levels. Based on the above definition, the UE will select the strongest cell for each carrier. Using this, the UE will start with network operator 3 and figure out, after decoding SIB Type 1, that the PLMN saved on the USIM does not match the transmitted one. From this information it will stop its attempt and proceed to the next strongest signal, which is operator 2. Again the PLMN does not correspond, so the UE will continue with signal 3 (green), where the PLMN does match. The UE continues to use the information in SIB Type 1 and Type 2 to compute the cell selection criteria. In this example, the parameters transferred and belonging to eNB1 do not fulfill S > 0, so the UE moves on to demodulating and decoding the information provided by eNB2. Here S > 0 is fulfilled and the UE starts camping on this cell.

LTE Network Infrastructure and Elements


Contents
1. Introduction
2. E-UTRAN and eNode Bs
2.1. History from UMTS
2.2. eNode Bs: The Single E-UTRAN Node
2.3. The X2 Interface
2.4. eNode B Functionalities
3. Evolved Packet Core (EPC) and its Components
3.1. MME (Mobility Management Entity)
3.2. HSS (Home Subscriber Server)
3.3. The Serving GW (Serving Gateway)
3.4. The PDN GW (Packet Data Network Gateway)
3.5. The PCRF (Policy and Charging Rules Function) Server
4. References
5. External Links

1. Introduction

The following extract from [1] provides a very good understanding of the overall LTE Network Infrastructure and elements. The Figure below describes the LTE and UMTS overall network architecture, not only including the Evolved Packet Core (EPC) and Evolved UMTS Terrestrial Radio Access Network (E-UTRAN), but also other components, in order to show the relationship between them. For simplification, the picture only shows the signalling interfaces. In some cases, both user data and signalling are supported by an interface (like the S1, S2 or 3G PS Gi interfaces) but, in some other cases, the interfaces are dedicated to the control plane and only support signalling (like the S6 and S7 interfaces).

The new blocks specific to the Evolved UMTS evolution, also known as the Evolved Packet System (EPS), are the Evolved Packet Core (or EPC) and the Evolved UTRAN (or E-UTRAN). Other blocks from the classical UMTS architecture are also displayed, such as the UTRAN (the UMTS Access Network) and the PS and CS Core Networks, respectively connected to the public (or any private) IP and telephone networks. The IMS (IP Multimedia Subsystem) is located on top of the Packet Core blocks and provides access to both public and private IP networks, and to the public telephone network via Media Gateway network entities. The HSS, managing user subscription information, is shown as a central node, providing services to all Core Network blocks of the 3G and evolved 3G architecture. Note: the picture does not represent the nodes involved in the support of the charging function. Discussed below are the individual sub-components:

2. E-UTRAN and eNode Bs

2.1. History from UMTS


From the first releases of the UMTS standard, the UTRAN architecture was initially very much aligned with 2G/GSM Access Network concepts. The general architecture follows the good old 2G/GSM star model, meaning that a single controller (the RNC) may possibly control a large number of radio Base Stations (the Node B) over the Iub interface; the typical number in commercial networks is several hundred. In addition, an inter-RNC Iur interface was defined to allow UTRAN call anchoring at the RNC level and macro-diversity between different Node Bs controlled by different RNCs. Macro-diversity was a consequence of CDMA-based UTRAN physical layers, as a means to reduce radio interference and preserve network capacity. The initial UTRAN architecture resulted in a simplified Node B implementation, and a relatively complex, sensitive, high-capacity and feature-rich RNC design. In this model, the RNC had to support resource and traffic management features as well as a significant part of the radio protocols.

2.2 eNode Bs: The Single E-UTRAN Node


Compared with UTRAN, the E-UTRAN OFDM-based structure is quite simple. It is composed of only one network element: the eNodeB (for evolved Node B). The 3G RNC (Radio Network Controller), inherited from the 2G BSC (Base Station Controller), has disappeared from E-UTRAN, and the eNodeB is directly connected to the Core Network using the S1 interface. As a consequence, the features supported by the RNC have been distributed between the eNodeB and the Core Network MME or Serving Gateway entities.

2.3 The X2 Interface


A new interface (X2) has been defined between eNodeBs, working in a meshed way (meaning that all eNodeBs may possibly be linked together). The main purpose of this interface is to minimize packet loss due to user mobility. As the terminal moves across the access network, unsent or unacknowledged packets stored in the old eNodeB queues can be forwarded or tunnelled to the new eNodeB thanks to the X2 interface. From a high-level perspective, the new E-UTRAN architecture is actually moving towards the structure of WLAN networks and WiFi or WiMAX base stations.

2.4 eNode B Functionalities


From a functional point of view, eNodeBs, like WLAN access points, support all Layer 1 and Layer 2 features associated with the E-UTRAN OFDM physical interface, and they are directly connected to network routers. There is no longer an intermediate controlling node (as the 2G BSC or 3G RNC was). This has the advantage of a simpler network architecture (fewer nodes of different types, which means simplified network operation) and allows better performance over the radio interface. As described in Chapter 4, the termination of Layer 2 protocols in the eNodeB rather than in the RNC helps to decrease data-transmission latency by saving the delay incurred by the transmission of packet repetitions over the Iub interface. From a functional perspective, the eNodeB supports a set of legacy features, all related to physical layer procedures for transmission and reception over the radio interface:

- Modulation and demodulation
- Channel coding and decoding


Besides these, the eNodeB includes additional features, which come from the fact that there are no longer Base Station Controllers in the E-UTRAN architecture. Those features, which are further described in Chapter 4, include the following:

- Radio Resource Control: this relates to the allocation, modification and release of resources for transmission over the radio interface between the user terminal and the eNodeB.
- Radio Mobility Management: this refers to measurement processing and handover decisions.
- Radio interface full Layer 2 protocol: in the sense of the OSI Data Link layer, the Layer 2 purpose is to ensure the transfer of data between network entities. This implies detection and possibly correction of errors that may occur in the physical layer.

3. Evolved Packet Core (EPC) and its Components

The EPC (Evolved Packet Core) is composed of several functional entities:

- The MME (Mobility Management Entity)
- The HSS (Home Subscriber Server)
- The Serving Gateway (Serving GW)
- The PDN Gateway (Packet Data Network Gateway)
- The PCRF (Policy and Charging Rules Function) Server
The following sub-sections discuss each of these in detail:

3.1 MME (Mobility Management Entity)


The MME is in charge of all the control plane functions related to subscriber and session management. From that perspective, the MME supports the following:

- Security procedures: this relates to end-user authentication as well as initiation and negotiation of ciphering and integrity protection algorithms.
- Terminal-to-network session handling: this relates to all the signalling procedures used to set up Packet Data context and negotiate associated parameters such as the Quality of Service.
- Idle terminal location management: this relates to the tracking area update process used so that the network is able to reach terminals in case of incoming sessions.

The MME is linked through the S6 interface to the HSS, which supports the database containing all the user subscription information.

3.2 HSS (Home Subscriber Server)


The HSS (Home Subscriber Server) is the concatenation of the HLR (Home Location Register) and the AuC (Authentication Center), two functions that were already present in pre-IMS 2G/GSM and 3G/UMTS networks. The HLR part of the HSS is in charge of storing and updating, when necessary, the database containing all the user subscription information, including (the list is not exhaustive):

- User identification and addressing: this corresponds to the IMSI (International Mobile Subscriber Identity) and the MSISDN (Mobile Subscriber ISDN Number), or mobile telephone number.
- User profile information: this includes service subscription states and user-subscribed Quality of Service information (such as the maximum allowed bit rate or the allowed traffic class).

The AuC part of the HSS is in charge of generating security information from user identity keys. This security information is provided to the HLR and further communicated to other entities in the network. Security information is mainly used for:

- Mutual network-terminal authentication.
- Radio path ciphering and integrity protection, to ensure that data and signalling transmitted between the network and the terminal are neither eavesdropped upon nor altered.

3.3 The Serving GW (Serving Gateway)


From a functional perspective, the Serving GW is the termination point of the packet data interface towards the E-UTRAN. When terminals move between eNodeBs in the E-UTRAN, the Serving GW serves as a local mobility anchor, meaning that packets are routed through this point for intra-E-UTRAN mobility and for mobility with other 3GPP technologies, such as 2G/GSM and 3G/UMTS.

3.4 The PDN GW (Packet Data Network Gateway)


Similarly to the Serving GW, the PDN gateway is the termination point of the packet data interface towards the Packet Data Network. As an anchor point for sessions towards the external Packet Data Networks, the PDN GW also supports Policy Enforcement features (which apply operator-defined rules for resource allocation and usage) as well as packet filtering (like deep packet inspection for virus signature detection) and evolved charging support (like per URL charging).

3.5 The PCRF (Policy and Charging Rules Function) Server


The PCRF server manages the service policy and sends QoS setting information for each user session and accounting rule information. The PCRF Server combines functionalities for the following two UMTS nodes:

- The Policy Decision Function (PDF)
- The Charging Rules Function (CRF)

The PDF is the network entity where the policy decisions are made. As the IMS session is being set up, SIP signalling containing media requirements is exchanged between the terminal and the P-CSCF. At some point in the session establishment process, the PDF receives those requirements from the P-CSCF and makes decisions based on network operator rules, such as:

- Allowing or rejecting the media request.
- Using a new or existing PDP context for an incoming media request.
- Checking the allocation of new resources against the maximum authorized.

The CRF's role is to provide operator-defined charging rules applicable to each service data flow. The CRF selects the relevant charging rules based on information provided by the P-CSCF, such as Application Identifier, Type of Stream (audio, video, etc.), Application Data Rate, etc.

LTE Radio Link Budgeting and RF Planning

Contents

1. Introduction
2. LTE Radio Link Budgeting
2.1. Typical Parameter Values
2.2. Uplink Budget
2.3. Downlink Budget
2.4. Propagation (Path Loss) Models
2.5. Mapping of Path Losses to Cell Sizes
2.6. Comparison to Other Radio Access Technologies
3. References

1. Introduction

The initial planning of any Radio Access Network begins with a Radio Link Budget. As the name suggests, a link budget is simply the accounting of all of the gains and losses from the transmitter, through the medium (free space, cable, waveguide, fiber, etc.) to the receiver in a telecommunication system. In this page, we will briefly discuss link budget calculations for LTE.

2. LTE Radio Link Budgeting

2.1. Typical Parameter Values


The link budget calculations estimate the maximum allowed signal attenuation between the mobile and the base station antenna. The maximum path loss allows the maximum cell range to be estimated with a suitable propagation model. The cell range gives the number of base station sites required to cover the target geographical area. The following table shows typical (practical) parameter values used for an LTE radio link budget.

Parameter | Typical value
a. Base station maximum transmission power. A typical value for a macro cell base station is 20-69 W at the antenna connector. | 43-48 dBm
b. Base station antenna gain | Manufacturer dependent
c. Cable loss between the base station antenna connector and the antenna. The cable loss value depends on the cable length, cable thickness and frequency band. Many installations today use RF heads where the power amplifiers are close to the antenna, making the cable loss very small. | 1-6 dB
d. Base station EIRP | Calculated as a + b - c
e. UE RF noise figure. Depends on the frequency band, duplex separation and the allocated bandwidth. | 6-11 dB
f. Terminal noise, calculated as k (Boltzmann constant) x T (290 K) x bandwidth. The bandwidth depends on the bit rate, which defines the number of resource blocks. We assume a 50 resource block (9 MHz) transmission for 1 Mbps downlink. | -104.5 dBm for 50 resource blocks (9 MHz)
g. Receiver noise floor | Calculated as e + f
h. Signal-to-noise ratio from link simulations or measurements. The value depends on the modulation and coding schemes, which in turn depend on the data rate and the number of resource blocks allocated. | -9 to -7 dB
i. Receiver sensitivity | Calculated as g + h
j. Interference margin, accounting for the increase in the terminal noise level caused by other cells. If we assume a minimum G-factor of -4 dB, that corresponds to an interference margin of 10*log10(1 + 10^(4/10)) = 5.5 dB. | 3-8 dB
k. Control channel overhead, including the overhead from reference signals, PBCH, PDCCH and PHICH. | 10-25% = 0.4-1.0 dB
l. UE antenna gain | Manufacturer dependent
m. Body loss | Device dependent
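A brief sketch of how the calculated rows above combine (my own illustration; the formulas are the ones quoted in rows f and j, and the example numbers are typical values from the table):

```python
# Thermal noise, interference margin and receiver sensitivity for the LTE link budget.
import math

def thermal_noise_dbm(bandwidth_hz: float, temp_k: float = 290.0) -> float:
    k_boltzmann = 1.38e-23                      # J/K
    return 10 * math.log10(k_boltzmann * temp_k * bandwidth_hz / 1e-3)

def interference_margin_db(g_factor_db: float) -> float:
    return 10 * math.log10(1 + 10 ** (-g_factor_db / 10))

bw_hz = 50 * 180e3                              # 50 resource blocks of 180 kHz = 9 MHz
print(round(thermal_noise_dbm(bw_hz), 1))       # about -104.5 dBm (row f)
print(round(interference_margin_db(-4.0), 1))   # about 5.5 dB for a -4 dB G-factor (row j)

noise_figure_db, sinr_db = 7.0, -9.0            # example values for rows e and h
noise_floor = thermal_noise_dbm(bw_hz) + noise_figure_db   # row g = e + f
sensitivity = noise_floor + sinr_db                        # row i = g + h
print(round(noise_floor, 1), round(sensitivity, 1))        # about -97.5 and -106.5 dBm
```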

2.2. Uplink Budget


The table below shows an example LTE link budget for the uplink from [1], assuming a 64 kbps data rate and a two resource block allocation (giving a 360 kHz transmission bandwidth). The UE terminal power is assumed to be 24 dBm (without any body loss for a data connection). It is assumed that the eNode B receiver has a noise figure of 2.0 dB, and the required Signal to Interference and Noise Ratio (SINR) has been taken from link level simulations performed in [1]. An interference margin of 2.0 dB is assumed. A cable loss of 2 dB is considered, which is compensated by assuming a masthead amplifier (MHA) that introduces a gain of 2.0 dB. An RX antenna gain of 18.0 dBi is assumed, considering a 3-sector macro cell (with 65-degree antennas). In conclusion, the maximum allowed path loss becomes 163.4 dB.
Uplink link budget for 64 kbps with a dual-antenna receiver base station

Data rate: 64 kbps

Transmitter (UE)
a. Max. TX power: 24.0 dBm
b. TX antenna gain: 0.0 dBi
c. Body loss: 0.0 dB
d. EIRP: 24.0 dBm = a + b - c

Receiver (eNode B)
e. eNode B noise figure: 2.0 dB
f. Thermal noise: -118.4 dBm = k (Boltzmann) x T (290 K) x B (360 kHz)
g. Receiver noise floor: -116.4 dBm = e + f
h. SINR: -7.0 dB (from simulations performed in [1])
i. Receiver sensitivity: -123.4 dBm = g + h
j. Interference margin: 2.0 dB
k. Cable loss: 2.0 dB
l. RX antenna gain: 18.0 dBi
m. MHA gain: 2.0 dB

Maximum path loss: 163.4 dB = d - i - j - k + l + m
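A compact sketch of the path loss arithmetic above (my own helper function; the inputs are the values from this table and from the downlink table in the next section):

```python
# Reproduce the maximum allowed path loss of the uplink and downlink budgets.
import math

def thermal_noise_dbm(bandwidth_hz: float) -> float:
    return 10 * math.log10(1.38e-23 * 290.0 * bandwidth_hz / 1e-3)

def max_path_loss_db(eirp_dbm, noise_figure_db, bandwidth_hz, sinr_db,
                     interference_margin_db, other_losses_db, other_gains_db):
    sensitivity = thermal_noise_dbm(bandwidth_hz) + noise_figure_db + sinr_db
    return eirp_dbm - sensitivity - interference_margin_db - other_losses_db + other_gains_db

# Uplink: 64 kbps, 2 resource blocks (360 kHz), 2 dB cable loss, 18 dBi RX gain + 2 dB MHA gain
ul = max_path_loss_db(24.0, 2.0, 360e3, -7.0, 2.0, other_losses_db=2.0, other_gains_db=18.0 + 2.0)
# Downlink: 1 Mbps, 50 resource blocks (9 MHz), 1 dB control channel overhead, no RX gain or body loss
dl = max_path_loss_db(62.0, 7.0, 9e6, -10.0, 3.0, other_losses_db=1.0, other_gains_db=0.0)
print(round(ul, 1), round(dl, 1))   # roughly 163.4 and 165.5 dB
```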

2.3. Downlink Budget


The table below shows an example LTE link budget for the downlink from [1], assuming a 1 Mbps data rate (with antenna diversity) and a 10 MHz bandwidth. The eNode B power is assumed to be 46 dBm, a value typical among most manufacturers. Again the SINR value is taken from link level simulations performed in [1]. A 3 dB interference margin and a 1 dB control channel overhead are assumed, and the maximum allowed path loss becomes 165.5 dB.
Downlink link budget for 1 Mbps with a dual-antenna receiver terminal

Data rate: 1 Mbps

Transmitter (eNode B)
a. TX power: 46.0 dBm
b. TX antenna gain: 18.0 dBi
c. Cable loss: 2.0 dB
d. EIRP: 62.0 dBm = a + b - c

Receiver (UE)
e. UE noise figure: 7.0 dB
f. Thermal noise: -104.5 dBm = k (Boltzmann) x T (290 K) x B (9 MHz)
g. Receiver noise floor: -97.5 dBm = e + f
h. SINR: -10.0 dB (from simulations performed in [1])
i. Receiver sensitivity: -107.5 dBm = g + h
j. Interference margin: 3.0 dB
k. Control channel overhead: 1.0 dB
l. RX antenna gain: 0.0 dBi
m. Body loss: 0.0 dB

Maximum path loss: 165.5 dB = d - i - j - k + l - m

2.4. Propagation (Path Loss) Models


A propagation model describes the average signal propagation, and it converts the maximum allowed propagation loss to the maximum cell range. It depends on:

- Environment: urban, rural, dense urban, suburban, open, forest, sea
- Distance
- Frequency
- Atmospheric conditions
- Indoor/outdoor

Common examples include the free space, Walfisch-Ikegami, Okumura-Hata, Longley-Rice, Lee and Young models. The most commonly used model in urban environments is the Okumura-Hata model, described below.

For urban areas:

L_urban = 69.55 + 26.16 log10(f) - 13.82 log10(h_B) - a(h_M) + (44.9 - 6.55 log10(h_B)) log10(d)

For small and medium-sized cities:

a(h_M) = (1.1 log10(f) - 0.7) h_M - (1.56 log10(f) - 0.8)

For large cities (f >= 400 MHz):

a(h_M) = 3.2 (log10(11.75 h_M))^2 - 4.97

where f is the carrier frequency in MHz, h_B is the base station antenna height in metres, h_M is the mobile antenna height in metres, d is the distance between base station and mobile in km, and a(h_M) is the mobile antenna height correction factor in dB.
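A short sketch implementing the urban formula above and inverting it to a cell range (illustrative only; the example inputs follow the urban indoor column of the assumption table in the next section, evaluated at 900 MHz):

```python
# Okumura-Hata urban path loss and the corresponding cell range for a given link budget.
import math

def hata_urban_db(f_mhz, h_base_m, h_mobile_m, d_km, large_city=False):
    if large_city and f_mhz >= 400:
        a_hm = 3.2 * math.log10(11.75 * h_mobile_m) ** 2 - 4.97
    else:
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m)
            - a_hm + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

def cell_range_km(max_loss_db, f_mhz, h_base_m, h_mobile_m, extra_losses_db=0.0):
    # Invert the model: solve hata_urban_db(d) + extra_losses = max_loss_db for d.
    slope = 44.9 - 6.55 * math.log10(h_base_m)
    loss_at_1km = hata_urban_db(f_mhz, h_base_m, h_mobile_m, 1.0)
    return 10 ** ((max_loss_db - extra_losses_db - loss_at_1km) / slope)

# 164 dB budget, 900 MHz, 30 m base station / 1.5 m mobile antennas,
# 20 dB indoor loss and 8.8 dB slow fading margin (urban indoor assumptions below).
print(round(cell_range_km(164.0, 900, 30, 1.5, extra_losses_db=20 + 8.8), 2))
```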

2.5. Mapping of Path Losses to Cell Sizes


For a path loss of 164 dB, based on the assumptions shown in the table below, the following cell ranges can be attained with LTE. The cell range is shown for the 900, 1800, 2100 and 2500 MHz frequency bands.
Assumptions

Okumura-Hata parameter | Urban indoor | Suburban indoor | Rural indoor | Rural outdoor fixed
Base station antenna height (m) | 30 | 50 | 80 | 80
Mobile antenna height (m) | 1.5 | 1.5 | 1.5 | 5
Mobile antenna gain (dBi) | 0.0 | 0.0 | 0.0 | 5.0
Slow fading standard deviation (dB) | 8.0 | 8.0 | 8.0 | 8.0
Location probability (%) | 95 | 95 | 95 | 95
Correction factor (dB) | 0 | -5 | -15 | -15
Indoor loss (dB) | 20 | 15 | 0 | 0
Slow fading margin (dB) | 8.8 | 8.8 | 8.8 | 8.8
Cell size (km) | | | |

2.6. Comparison to Other Radio Access Technologies


In comparison to other Radio Access Technologies such as GSM or UMTS, LTE does not provide a significant increase in cell size or maximum path loss; however, the data rate (and the services) provided is much superior. In contrast to HSPA link budgets, the LTE link budgets show up to roughly 2 dB higher values, which is mainly a result of the low interference margins that can be achieved with orthogonal modulation. For a detailed comparison please refer to LTE Link Budget Comparison.

LTE Link Budget Comparison


Contents
1. Introduction
2. Uplink Budget Comparison
3. Downlink Budget Comparison
4. References

1. Introduction

The tables below show a link budget comparison between LTE, GSM and UMTS HSPA.

2. Uplink Budget Comparison

The following table, based on [1] and [2], compares the uplink budget for LTE, HSPA and GSM.

RAN technology | GSM | HSPA | LTE
Data rate (kbps) | 12.2 | 64 | 64

Transmitter (UE)
a. Max. TX power (dBm) | 33 | 23 | 23
b. TX antenna gain (dBi) | 0 | 0 | 0
c. Body loss (dB) | 3 | 0 | 0
d. EIRP (dBm) | 30 | 23 | 23

Receiver (BTS / Node B / eNode B)
e. Node B noise figure (dB) | - | 2 | 2
f. Thermal noise (dBm) | - | -108.2 | -118.4
g. Receiver noise floor (dBm) | - | -106.2 | -116.4
h. SINR (dB) | - | -17.3 | -7
i. Receiver sensitivity (dBm) | -114 | -123.4 | -123.4
j. Interference margin (dB) | 0 | 3 | 1
k. Cable loss (dB) | 0 | 0 | 0
l. RX antenna gain (dBi) | 18 | 18 | 18
m. Fast fade margin (dB) | 0 | 1.8 | 0
n. Soft handover gain (dB) | 0 | 2 | 0

Maximum path loss (dB) | 162 | 161.6 | 163.4

The uplink link budget has some differences in comparison to HSPA: specifically the smaller interference margin, no macro-diversity gain (soft handover gain) and no fast fading margin. As can be seen from the table above, the link budget was calculated for 64 kbps uplink, which cannot be classified as a high enough data rate for a true broadband service. To guarantee higher data rates for LTE, a low frequency deployment may be required, in addition to additional sites, active antenna solutions or local area solutions.

3. Downlink Budget Comparison

The following table, based on [1] and [2], compares the downlink budget for LTE, HSPA and GSM.

RAN technology | GSM | HSPA | LTE
Data rate (kbps) | 12.2 | 1024 | 1024

Transmitter (BTS / Node B / eNode B)
a. Max. TX power (dBm) | 44.5 | 46 | 46
b. TX antenna gain (dBi) | 18 | 18 | 18
c. Cable loss (dB) | 2 | 2 | 2
d. EIRP (dBm) | 60.5 | 62 | 62

Receiver (UE)
e. UE noise figure (dB) | - | 7 | 7
f. Thermal noise (dBm) | -119.7 | -108.2 | -104.5
g. Receiver noise floor (dBm) | - | -101.2 | -97.5
h. SINR (dB) | - | -5.2 | -9
i. Receiver sensitivity (dBm) | -104 | -106.4 | -106.4
j. Interference margin (dB) | 0 | 4 | 4
k. Control channel overhead (%) | 0 | 20 | 20
l. RX antenna gain (dBi) | 0 | 0 | 0
m. Body loss (dB) | 3 | 0 | 0

Maximum path loss (dB) | 161.5 | 163.4 | 163.5

The LTE link budget in the downlink has several similarities with HSPA and the maximum path loss is similar. The link budgets show that LTE can be deployed using existing GSM and HSPA sites, assuming that the same frequency is used for LTE as for GSM and HSPA. LTE itself does not provide any major boost in coverage. That is because the transmission power levels and the RF noise figures are similar to those of the GSM and HSPA technologies, and the link performance at low data rates is not much different in LTE than in HSPA.
