
Ethernet Frame Format

VLAN Frame Format


UMTS Physical Channels and their Functions

RNC-to-Node-B synchronization requirements in the downlink direction (RNC to UEs)
The link between the RNC and the Node-Bs (Iub interface) has specific synchronization requirements for packets sent through a dedicated channel. We focus on the downlink direction (RNC to UEs), since it is the most restrictive one. Two synchronization requirements that apply to downlink communication are described below.

First, the RNC already builds the radio frames that the Node-B will send over the air interface, i.e., it assigns each radio frame a connection frame number. This results in a synchronization requirement: packets have to arrive at the Node-Bs at a given instant, with a maximum deviation defined by a specific receiving-window value. Packets arriving earlier or later than this maximum deviation trigger a timing-adjustment process that keeps the DCH data stream synchronized in the downlink direction. Packets arriving later than a fixed maximum deviation are discarded, since the reserved resources are no longer available. As a result, Iub interface synchronization is of major importance to guarantee an efficient use of the resources available at the air interface.

Second, the soft- and softer-handover mechanisms require the radio frames received via different Node-Bs/sectors to arrive at the UE within a certain time interval, to allow for selection/combination of the received frames. If this synchronization requirement is not fulfilled, the quality of service experienced by the connection is degraded, and the resources used in the wired and wireless parts are wasted.
Packet Scheduling in a Congested IP Network

1. Strict Priority Scheduling (SPS)
2. Weighted Fair Queuing Scheduling (WFQS)
3. Earliest-Deadline-First Scheduling (EDF)

All the standard scheduling mechanisms mentioned above suffer from the same major drawback: they do not take advantage of the relaxed delay requirements of low-priority packets. All packets have to fulfill exactly the same delay deadline, and thus it is no longer possible to delay some of them without risking their on-time arrival at the Node-B. Therefore, these solutions do not perform well in our scenario: under congestion, packets with low priority will very likely reach their destination with a delay larger than their deadline and will be dropped, resulting in bandwidth wasted both on the bottleneck links and on the air interface.

Solution: Differentiated Packet Scheduling Design for UTRAN

The goal is to design a scheduling mechanism that takes into account both the Iub interface synchronization requirements and the application QoS needs, in order to guarantee an efficient use of the scarce UTRAN resources.

Packets arriving at the RNC to be delivered to a Node-B are classified according to their QoS class and buffered in the corresponding queue. A deadline is assigned to each packet based on its arrival time and traffic category. The RNC then schedules the transmission of the packets to the Node-Bs based on the deadlines of the first packet of each queue, using the Earliest-Deadline-First (EDF) method. If a multiplexing scheme is used, the scheduling is applied to the multiplexed packets, taking into account the deadline of the first packet inserted in the container of each traffic class. The traffic class is set by the 3-bit VLAN priority value (ranging from 0 to 7).
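The classify-then-EDF behavior described above can be sketched as follows. The per-class delay budgets keyed by VLAN priority are illustrative assumptions, not values from the text, and the drop-on-expiry check reflects the idea of not wasting Iub bandwidth on packets that would miss their receiving window:

```python
import heapq
import itertools

# Hypothetical per-class delay budgets (ms) indexed by 3-bit VLAN priority;
# real values would come from the receiving-window configuration.
DELAY_BUDGET_MS = {7: 10, 6: 10, 5: 20, 4: 20, 3: 40, 2: 40, 1: 80, 0: 80}

class EdfScheduler:
    """EDF sketch: each packet gets deadline = arrival + class budget;
    transmission always picks the packet whose deadline expires first."""

    def __init__(self):
        self._heap = []                  # entries: (deadline, seq, packet)
        self._seq = itertools.count()    # tie-breaker keeps FIFO order per deadline

    def enqueue(self, packet, arrival_ms, vlan_priority):
        deadline = arrival_ms + DELAY_BUDGET_MS[vlan_priority]
        heapq.heappush(self._heap, (deadline, next(self._seq), packet))

    def dequeue(self, now_ms):
        """Return the next packet to transmit, or None if nothing is viable.
        Packets whose deadline already passed are dropped, not transmitted."""
        while self._heap:
            deadline, _, packet = heapq.heappop(self._heap)
            if deadline >= now_ms:
                return packet
            # deadline missed: drop instead of wasting Iub bandwidth
        return None
```

A high-priority packet (shorter budget) is served before a simultaneously arrived low-priority one, and stale packets are discarded at dequeue time.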

The proposed scheduling performed at the RNC aims at fulfilling the following objectives:

Maximize the efficiency of the usage of the potentially low-capacity links between the RNC
and the Node-Bs, by sending through these links only the packets that will arrive on time
(i.e., before the deadline associated with the receiving window).
Maximize the efficiency of the air interface usage, by ensuring that all packets that leave the
RNC, and therefore have a connection frame number assigned on the air interface, are not
dropped but reach the Node-Bs with a delay lower than their deadline.
Provide packets with a delay appropriate to their requirements (e.g., VoIP packets can be
scheduled with a lower delay than Best Effort data packets).

Additionally, the intermediate routers do not need to support any kind of differentiated
scheduling in the downlink direction since it has been already performed at the RNC.

Effect of Packet Collision/Delay in the uplink direction (NodeB to RNC)

High Block Error Ratio (BLER)
Higher target for the Signal to Interference Ratio (SIR)
The UE has to increase its transmission power, which further raises the interference level in the cell area
Reduction in system capacity due to the high interference level

Result: low data throughput and 2G-to-3G switching failures

In Wideband Code Division Multiple Access (WCDMA) systems all users share the same time and frequency resources. This leads to the problem that users located far from the Base Station (NodeB) suffer strong interference from users closer to the NodeB, the near-far effect. The current solution is power control, which guarantees that the received power levels from all User Equipments (UEs) are equal at the NodeB. Hence, the power control algorithm aims to reduce transmission power and interference level, and to maximize system capacity. In WCDMA Frequency Division Duplex (FDD), a feedback control loop is implemented on the uplink in the form of an Inner Loop Power Control (ILPC) and an Outer Loop Power Control (OLPC). The ILPC controls the transmission power of the UE with the aim of keeping a target Signal to Interference Ratio (SIR) defined by the OLPC. This mechanism involves two network components, the NodeB and the UE. The former measures the SIR and sends up/down commands to the UE by means of the downlink channel; the UE adjusts its transmission power accordingly. This procedure is executed every 0.667 ms, hence fast enough to compensate for fast fading. The OLPC is locked on the Quality of Service (QoS), in terms of Block Error Ratio (BLER), requested by the application for which the radio connection is established. The involved network components are the Serving RNC (SRNC) and the NodeB. The NodeB receives data from the UE and forwards it to the SRNC, which measures the BLER of the data and determines a new target SIR for the ILPC. Iterations of this algorithm are triggered every 10-100 ms.
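The interplay of the two loops can be sketched as a pair of update rules. The 1 dB ILPC step and 0.5 dB OLPC step below are illustrative assumptions (the standard allows several step sizes), and the single-variable model ignores fading and measurement noise:

```python
# Toy model of the WCDMA uplink power-control loops described above.
# Step sizes are illustrative assumptions, not standardized values.

def ilpc_step(measured_sir_db, target_sir_db, ue_tx_power_dbm, step_db=1.0):
    """Inner loop (every 0.667 ms): the NodeB compares the measured SIR
    with the target and sends a one-bit up/down command; the UE moves
    its transmission power by one step accordingly."""
    if measured_sir_db < target_sir_db:
        return ue_tx_power_dbm + step_db   # "up" command
    return ue_tx_power_dbm - step_db       # "down" command

def olpc_step(measured_bler, target_bler, target_sir_db, step_db=0.5):
    """Outer loop (every 10-100 ms): the SRNC compares the measured BLER
    with the QoS target and shifts the SIR target used by the inner loop."""
    if measured_bler > target_bler:
        return target_sir_db + step_db     # too many block errors: raise SIR target
    return target_sir_db - step_db         # quality margin: lower target, save power
```

The asymmetric cadence (fast ILPC, slow OLPC) is the point: the inner loop tracks fast fading while the outer loop slowly re-tunes the quality operating point.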
Improving Resilient Packet Ring (RPR) Performance

RPR works on a concept of dual counter-rotating rings called ringlets. These ringlets are set up by creating RPR stations at nodes where traffic is supposed to drop, per flow (a flow is the ingress and egress of data traffic). RPR uses Media Access Control (MAC) protocol messages to direct the traffic, which can use either ringlet of the ring. The nodes also negotiate for bandwidth among themselves using fairness algorithms, avoiding congestion and failed spans. The avoidance of failed spans is accomplished by one of two techniques known as steering and wrapping. Under steering, if a node or span is broken, all nodes are notified of a topology change and they reroute their traffic. In wrapping, the traffic is looped back at the last node prior to the break and routed to the destination station. All traffic on the ring is assigned a Class of Service (CoS), and the standard specifies three classes. Class A (or High) traffic is a pure committed information rate (CIR) service and is designed to support applications requiring low latency and jitter, such as voice and video. Class B (or Medium) traffic is a mix of both a CIR and an excess information rate (EIR, which is subject to fairness queuing). Class C (or Low) is best-effort traffic, utilizing whatever bandwidth is available.

Similar to the SDH topology, RPR is a reciprocal dual-ring topology, with each optical span working at the same rate. The difference is that both rings of RPR can transmit data. These two rings are referred to as Ringlet0 and Ringlet1, respectively. Each RPR station uses a 48-bit MAC address, as used in Ethernet, as its address ID. From the perspective of the link layer of the RPR station, the two pairs of physical optical transmission/reception ports form only one link-layer interface. From the perspective of the network layer, only one IP address needs to be allocated. The link between two adjacent RPR stations is referred to as a span, and multiple contiguous spans and the stations on them constitute a domain. From the perspective of a station, its packet-switching structure has changed immensely in comparison with the traditional packet-switching structure. This structure is similar to the ring road of a city, where the stations on the ring are directly connected, with barely any traffic lights needed, and hence higher efficiency. One RPR station has one MAC entity and two physical layer entities. The physical layer entities are associated with the links. Referred to as the access point, the MAC entity includes one MAC control entity and two MAC service link entities. Each access point is associated with a loop. By direction, the physical layer entities are divided into an east physical layer and a west physical layer; east and west are based on the assumption that the station is to the north of the RPR ring. The Tx interface of the east physical layer and the Rx interface of the west physical layer are connected via the MAC entity into Ringlet0 of RPR. Similarly, the Rx interface of the east physical layer and the Tx interface of the west physical layer are connected into Ringlet1 of RPR.

Data Operation

In agreement with the ring, the stations are designed with ADM data switching for various data operations. The common basic data operations are: Insert: the process in which the station equipment inserts packets forwarded from other interfaces into the data stream of the RPR ring; Copy: the process in which the station equipment receives data from the data stream of the RPR ring and passes it to the upper layer for processing; Transit: the process in which the data stream passing a station is forwarded to the next station; Strip: the process in which data passing a station is stopped from further forwarding. The transit data operation is similar to that of SDH ADM equipment, in that transit data streams are not processed by the upper-layer equipment, which greatly enhances the processing performance of the equipment. Such ADM switching of packets can easily support various high-speed link interfaces. The stations use one, or any combination, of these basic data operations to implement unicast, multicast and broadcast traffic.

At the source station, the insert operation is performed to load the data onto Ringlet0 or Ringlet1. The destination station performs the copy and strip operations. The stations in between only perform the transit operation. It is worth noting that RPR performs strip at the destination station for unicast traffic, which differs from traditional ring network technology, where strip is performed at the source station. Stripping at the destination station effectively enhances bandwidth utilization, so that spatial reuse of bandwidth becomes more effective. For multicast and broadcast traffic there are multiple destination stations, so a data transmission mechanism different from that of unicast must be used: either Ringlet0 broadcast or dual-ring broadcast.
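The per-station decision just described can be sketched as a small dispatch function. The frame shape (a dict with `src`/`dst`) is an illustrative assumption, not the IEEE 802.17 frame structure, and the broadcast branch models source-strip after a full revolution:

```python
# Sketch of the unicast/broadcast data path at one RPR station on one
# ringlet: the destination copies and strips a unicast frame; every
# other station transits it; broadcast frames are copied at each station
# and stripped when they return to their source.

def handle_frame(station_mac, frame):
    """Return (action, deliver): action is what the ring does with the
    frame ("strip" or "transit"); deliver says whether the payload is
    copied to the upper layer at this station."""
    if frame["dst"] == station_mac:
        return "strip", True       # unicast destination: copy, then stop forwarding
    if frame["dst"] == "broadcast":
        # Broadcast: copy locally and keep the frame circulating; the
        # source station strips it after one full revolution.
        at_source = frame["src"] == station_mac
        return ("strip" if at_source else "transit"), not at_source
    return "transit", False        # intermediate station: pass through untouched
```

Destination-stripping is what enables spatial reuse: the span beyond the destination stays free for other flows.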

Frame Format

Except for the ring control byte, which reflects the RPR-specific features, the other fields are very similar to those of the Ethernet frame format. Usually, the Maximum Transmission Unit (MTU) of an RPR frame is 1616 bytes, and that of an oversized frame is 9216 bytes. The ring control byte carries many control items, for example ring selection information, the fair bandwidth allocation option, frame type, service class, fault switching method, broadcast flag, etc. It provides various functions, including active performance monitoring and fault monitoring, to ensure rich, flexible and efficient ring operations that can meet the networks' high requirements for ring network technology.
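The size limits and a few of the control items above can be captured in a small sketch. The field names here are our own, chosen to mirror the listed control items, and are not the exact IEEE 802.17 field names:

```python
from dataclasses import dataclass

# Values stated in the text above; field names are illustrative.
RPR_MTU = 1616          # bytes, regular RPR frame
RPR_JUMBO_MTU = 9216    # bytes, oversized frame

@dataclass
class RprFrame:
    ringlet: int         # ring selection: 0 (Ringlet0) or 1 (Ringlet1)
    service_class: str   # "A", "B" or "C"
    broadcast: bool      # broadcast flag from the ring control byte
    payload: bytes

    def fits(self, jumbo=False):
        """Check the payload against the regular or oversized MTU."""
        limit = RPR_JUMBO_MTU if jumbo else RPR_MTU
        return len(self.payload) <= limit
```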

Queuing Technique

When RPR processes transit traffic, there are two queuing and forwarding methods: store-and-forward and direct-through. The store-and-forward method is easy to implement, while the direct-through method offers higher efficiency. The store-and-forward mode is the basis that must be supported; even when the direct-through method is used, the store-and-forward method may still be needed, for example when the direct-through queue is temporarily blocked. According to the ADM switching method of the RPR service, the RPR MAC has insert buffering queues and transit buffering queues. One RPR station has three insert buffer queues, Queue A, Queue B and Queue C, which correspond to data service classes A, B and C, for which different scheduling priorities are provided. RPR divides the traffic to be inserted into these three classes: Class A, Class B and Class C. Class A is for low-delay, strict-jitter traffic of high priority, with the lowest end-to-end delay and jitter provided and a Committed Information Rate (CIR). Class B is for Committed Information Rate (CIR) and Excess Information Rate (EIR) traffic of medium priority, where certain bandwidth, end-to-end delay and jitter must be ensured for the CIR part, but not for the EIR part. Class C is for best-effort common traffic of low priority, with no bandwidth definition.
Each service channel (on each ring) of the RPR MAC can have one or two transit queues: the PTQ (Primary Transit Queue) and the STQ (Secondary Transit Queue). Transit traffic of Class A passes through the PTQ, and transit traffic of Classes B and C passes through the STQ. In other words, in double-transit-queue RPR the loop uses separate buffering queues for traffic of high and low priority, and uses strict-priority queuing for switching: the decision mechanism of the RPR ring MAC always processes high-priority traffic first, so low-priority traffic does not affect the real-time switching of high-priority traffic. A transit queue is similar to a lane on a city ring road: a single queue is equivalent to a single lane, where all vehicles run; double queues are equivalent to two lanes, where cars run in the fast lane and trucks in the slow lane. Obviously, double queues are technically superior to a single queue. Queue scheduling of Class A traffic is not affected by that of Classes B and C, so high-priority traffic with low delay is ensured. However, RPR still keeps the single-queue mode as an option, out of consideration of reduced cost. In the single-queue mode, traffic of Classes A, B and C is not separated for queuing, so the hardware is much easier to implement, at much lower cost. The single-queue mode can be used, to reduce cost, in networks where only simple data services are provided and performance is not critical. However, for IP MANs and backbone networks, which carry multiple services including high-quality services, the double-transit-queue mode must be used. For large education networks and enterprise networks, which usually also carry IP voice and video services requiring high performance, the double-transit-queue mode is also recommended.
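The double-transit-queue behavior above amounts to strict priority between two FIFOs. A minimal sketch, a toy model rather than a full 802.17 MAC (which also arbitrates against the insert queues):

```python
from collections import deque

class TransitPath:
    """Double-transit-queue sketch: Class A transit frames go to the PTQ,
    Classes B and C to the STQ; the MAC serves the PTQ first, so
    low-priority traffic cannot delay Class A."""

    def __init__(self):
        self.ptq = deque()   # Primary Transit Queue (Class A)
        self.stq = deque()   # Secondary Transit Queue (Classes B and C)

    def enqueue(self, frame, service_class):
        (self.ptq if service_class == "A" else self.stq).append(frame)

    def dequeue(self):
        """Strict priority: drain the PTQ before touching the STQ."""
        if self.ptq:
            return self.ptq.popleft()
        if self.stq:
            return self.stq.popleft()
        return None
```

With one frame of each class enqueued, Class A always leaves first regardless of arrival order, which is exactly the "fast lane" guarantee the text describes.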

Fair Algorithm

RPR allows the stations to share the available bandwidth resources. When data traffic is low, RPR can meet the loading needs of all the stations. When traffic becomes heavy, link overload or traffic congestion may occur, as the bandwidth demands of the traffic cannot be fully satisfied. In such a circumstance, some stations may occupy excessive bandwidth by relying on their advantage in position (being near) or time (being first), affecting other stations. To ensure that all the stations share the bandwidth fairly in the event of congestion or overload, RPR provides a special fair algorithm for fair bandwidth sharing and allocation. The RPR fair algorithm is a distributed fair algorithm, where the stations transfer the required information via control messages, including the allowed rate, the recommended rate and the strategy indication. The fair algorithm includes traffic measurement, strategy processing and the multiple stages of that processing, for the ultimate achievement of fair allocation. The bandwidth fairness and congestion control mechanisms are functions of the MAC control sublayer of the RPR data link layer. The RPR fair algorithm is applicable to services that must contend for bandwidth, that is, EIR services and best-effort services. The fair algorithm protocol implemented in the fair control unit has the following functions: it detects and eliminates congestion; transmits and receives the fair control messages between the RPR stations; provides access control for ring bandwidth based on the service classes, and uses an even or weighted fair algorithm to control the utilization of the entire ring bandwidth; provides separate bandwidth fair operations for Ringlet0 and Ringlet1, and allocates all the bandwidth between any two stations on the ring to the users as global resources; lets each station control the rate at which it forwards packets onto the ring, based on the service class and the utilization of the ring bandwidth, so that every station gets a fair allocation; and sends the fair control frames on the sub-ring opposite to the direction of the associated data stream. RPR supports both even and weighted fairness arrangements, where the traffic inserted at each node is not necessarily equal.
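The "even or weighted" sharing idea can be illustrated with a weighted max-min (water-filling) allocation over a single congested link. This is a deliberate simplification of the distributed RPR fairness protocol; the weights, demands and single-link model are assumptions for illustration:

```python
def weighted_fair_rates(capacity, demands, weights):
    """Weighted max-min sketch: stations demanding less than their
    weighted share keep their full demand; the leftover capacity is
    redistributed among the remaining stations until none fits."""
    rates = {s: 0.0 for s in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[s] for s in active)
        share = remaining / total_w          # per-unit-weight fair share
        satisfied = {s for s in active
                     if demands[s] <= share * weights[s] + 1e-9}
        if not satisfied:
            # No one fits under their share: everyone gets exactly it.
            for s in active:
                rates[s] = share * weights[s]
            return rates
        for s in satisfied:                  # grant small demands in full
            rates[s] = demands[s]
            remaining -= demands[s]
        active -= satisfied                  # re-split the leftover
    return rates
```

With equal weights this degenerates to even sharing; unequal weights model the weighted fairness arrangement the text mentions, where stations are entitled to insert unequal traffic.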

Failure Self-healing

RPR uses the SDH ring structure and inherits one of its major features, the powerful failure self-healing capability, which implements failure protection switching within 50 ms. The following diagram illustrates the protection in the event of a failure on the link. Inside the stations at both ends of the failed link, Ringlet0 and Ringlet1 are connected to form a new ring network. For the traffic being transmitted on the ring, there are two protection modes: Wrap and Steering (also known as source routing). The following diagram illustrates these two protection modes.

Topology Discovery

RPR supports automatic topology discovery. The protection information or topology information packets contain the topology information, which is broadcast on the ring network. The possible topology structures are the full loop-back structure and the chain structure (when some links fail). Automatic discovery is helpful for protection in the event of link failure, and it also provides good support for network expansion by enabling station-level plug-and-play. In other words, a station can be added to or removed from the ring network without manual configuration of data.

Management Protection

The RPR frame structure contains many optional parameters for performance management, fault management and configuration management, which lays a good foundation for RPR's Operation, Administration and Maintenance (OAM). RPR implements fault monitoring, location and isolation at the RPR layer through special control frames.
