Calibration need
• The accuracy of statistical models can be significantly enhanced
for a specific market by careful calibration. The first calibration
should be performed in the greenfield phase, prior to RF design
and site deployment.
• After this, a continuous process is required to maintain and
enhance the quality of the model portfolio.
• Model recalibration is particularly needed when:
• new markets containing new types of clutter have been entered;
• average cell sizes within markets change drastically due to
densification, new frequencies or new technologies (e.g. GERAN,
HSDPA, HSUPA, LTE, WiMAX);
• the clutter has changed significantly over the years;
• terrain and clutter data in the RF planning tools have been updated.
• Enhancements are usually implemented to diversify the set of
models, e.g. to take into account:
• strong topographic differences (e.g. separate hilly and flat models);
• seasonal changes, particularly for rural models over agricultural clutter;
• specific propagation environments on certain sites or clusters.
Calibration process
• Most statistical macrocell models used in the industry are derived
from Hata or Walfisch–Ikegami. For these types of models, several
tuning algorithms have been developed; they are introduced in the
next slides.
• Prior to surveying a cell, the following (minimum) steps should be
taken:
1. Ensure adequate clearance of the transmitting antenna. Macrocell
models assume that the transmitting antenna is located above the
clutter.
2. Measure the antenna height and tilt accurately. In the case of an
omnidirectional antenna, ensure that the antenna is truly vertical (0° tilt).
3. Measure the cable loss, antenna gain and calibrate the signal at the
mobile unit (aim for an overall gain of 0 dB).
4. Verify the site coordinates with a GPS and a map.
5. Calibrate the transmitter in a minimum of 1 dB steps.
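Step 3 above can be sanity-checked with a minimal link-budget sketch; the function name and the gain/loss values are illustrative assumptions, not from the source:

```python
# Minimal sketch of the step-3 check: the survey chain should net out to an
# overall gain of 0 dB so that measured levels map directly to path loss.
# Function name and example values are illustrative assumptions.
def overall_gain_db(antenna_gain_dbi, cable_loss_db, rx_offset_db):
    """Overall measurement gain: antenna gain minus cable loss plus a
    receiver calibration offset, all in dB."""
    return antenna_gain_dbi - cable_loss_db + rx_offset_db

print(overall_gain_db(11.0, 3.0, -8.0))  # -> 0.0
```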
• Drive test
• Drive-test routes should encircle the site and represent all azimuths.
• Drive-test results are used in calibration to derive an appropriate
propagation model.
• Drive tests should ideally meet the following criteria:
1. Data should cover all the areas surrounding the cell in order to cover
all the clutter classes in the serving area.
2. If possible, line-of-sight routes should be avoided or measured
separately, so that clutter and diffraction losses can be properly
studied. Additionally, local clutter losses at the mobile can be better
studied if the survey routes encircle the base station; driving radially
puts greater emphasis on path loss.
3. A high number of data samples should be collected for statistical
confidence. What matters is not necessarily a high number of drive
tests but a high number of bins within the distances of interest from
the base station. Usually, all roads between 350 and 2000 m from
the base station should be tested.
4. Another factor to take into account is the sampling rate, which can
be distance-dependent or time-dependent. In urban areas, distance-
dependent sampling is preferable, since it avoids collecting a large
number of samples while the vehicle is stationary.
5. Avoid collecting measurements in areas where the signal level is at or
below the noise floor, typically −110 dBm. If this is not possible, many
network planning tools provide a filtering facility that deals with this
problem.
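Criteria 3–5 can be sketched as a simple sample filter; the tuple layout below is an illustrative assumption, while the −110 dBm noise floor and the 350–2000 m window come from the text:

```python
# Drop drive-test samples below the noise floor or outside the useful
# distance window, as described in the criteria above. The (distance_m,
# level_dbm) tuple layout is an illustrative assumption.
NOISE_FLOOR_DBM = -110.0
MIN_DIST_M, MAX_DIST_M = 350.0, 2000.0

def filter_samples(samples):
    """Keep samples above the noise floor and within the distance window."""
    return [(d, p) for d, p in samples
            if p > NOISE_FLOOR_DBM and MIN_DIST_M <= d <= MAX_DIST_M]

samples = [(120.0, -60.0), (500.0, -85.0), (1500.0, -112.0), (1800.0, -95.0)]
print(filter_samples(samples))  # -> [(500.0, -85.0), (1800.0, -95.0)]
```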
[Figure: route planning around the site under test]
[Figure: prediction and drive-test results (measurement results alongside prediction results)]
• Additionally, a visual filtering of the drive-test data, based on
photos of the site or maps, should be undertaken in a
Geographical Information System (GIS) tool. The goal is the
elimination of:
• samples that are not taken at the correct terrain height; this
happens when the drive-test vehicle moves on high bridges or within
tunnels and road canyons; these samples falsify the overall result;
• samples taken in bins located outside the 3 dB points of the
antenna's horizontal main lobe; this issue does not arise if an
omnidirectional antenna has been used, because there is no
horizontal main lobe;
• samples taken in areas where the clutter information is not
up to date;
• samples taken in (or behind) constructions that generate a significant
dielectric effect within the path; examples are some steel
suspension bridges or several parallel railway tracks causing Faraday
or resonance effects within the electromagnetic field.
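The second filtering rule above (discarding bins outside the 3 dB points of the horizontal main lobe) can be sketched as an azimuth check; the 65° half-power beamwidth is an illustrative assumption:

```python
# Check whether a bin's bearing from the site lies within the antenna's
# horizontal 3 dB beamwidth. The 65-degree HPBW is an illustrative value.
def within_main_lobe(boresight_deg, bearing_deg, hpbw_deg=65.0):
    """True if the bearing is within +/- HPBW/2 of the antenna boresight."""
    diff = (bearing_deg - boresight_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(diff) <= hpbw_deg / 2.0

print(within_main_lobe(120.0, 140.0))  # -> True  (20 deg off boresight)
print(within_main_lobe(120.0, 170.0))  # -> False (50 deg off boresight)
```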
Tuning Algorithms
• The most common tuning algorithms are based on iterative
approaches intended to minimise the standard deviation between
the predictions of the Hata-type model and the drive-test data.
The figure below shows the level-to-distance relationship of a
suburban drive test in a major city at 2.160 GHz.
• The most important outputs of the statistical tools are the average
error and the standard deviation of the entire path as well as the
average error per distance.
• The average error x_e indicates whether the model is under-
performing (positive value) or over-predicting (negative value)
compared with the measurement data; it is calculated as:

x_e = (1/N_s) · Σ_{i=1}^{N_s} (x_predicted − x_measured)_i

where
• N_s is the number of samples,
• x_predicted is the predicted power level and
• x_measured is the measured power level.
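The average error x_e, together with the standard deviation of the error around it, can be sketched in a few lines; the sample values are illustrative:

```python
import math

# Average error x_e = (1/N_s) * sum_i (x_predicted - x_measured)_i and the
# (population) standard deviation of the error around x_e.
def error_stats(predicted, measured):
    errors = [p - m for p, m in zip(predicted, measured)]
    n = len(errors)
    x_e = sum(errors) / n
    std = math.sqrt(sum((e - x_e) ** 2 for e in errors) / n)
    return x_e, std

x_e, std = error_stats([-80.0, -90.0, -100.0], [-82.0, -88.0, -101.0])
print(round(x_e, 2), round(std, 2))  # -> 0.33 1.7
```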
• The average error per distance is derived in the same way as the
total average error, but over sections of bins at the same distance
from the antenna location. The standard deviation (STD) estimates
the variability of the prediction model around the average value.
• In the case of a single-slope model, the path loss L for the general
model is given by:

L = C1 + C2·lg(d) + C3·Ld + C4·lg(he) + C5·lg(he)·lg(d) + C6·Lc + C7·d

where
• C1–C7 are weighting factors (described below),
• d is the distance between transmitter and receiver,
• Ld is the loss due to diffraction,
• he is the effective mobile height above ground and
• Lc is the clutter loss.
• The constants C1–C7 can be summarised as:
• C1 – constant describing the intercept
• C2 – slope factor
• C3 – diffraction weight
• C4 – effective height weight
• C5 – distance/height weight
• C6 – clutter weight
• C7 – distance weight.
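The general single-slope expression can be evaluated directly for one bin; the coefficient values below are Hata-like placeholders, not calibrated values:

```python
import math

# Evaluate L = C1 + C2*lg(d) + C3*Ld + C4*lg(he) + C5*lg(he)*lg(d)
#            + C6*Lc + C7*d  for one bin. Coefficients are placeholders.
def path_loss(d_m, L_d, h_e, L_c, C):
    lg_d = math.log10(d_m)
    lg_h = math.log10(h_e)
    return (C[0] + C[1] * lg_d + C[2] * L_d + C[3] * lg_h
            + C[4] * lg_h * lg_d + C[5] * L_c + C[6] * d_m)

C = [137.0, 35.2, 0.8, -13.8, -6.55, 1.0, 0.0]      # illustrative values
print(round(path_loss(1000.0, 0.0, 1.5, 3.0, C), 1))  # -> 239.7
```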
• The objective of the calibration exercise is to achieve a minimum
STD and a minimum average error between measured and predicted
data for each clutter group. Usually, this minimisation proceeds as
in the following example:
1) Filter the section to be calibrated (minimum dB, distance from site,
number of knife edges, minimum bin size).
2) Initialise the model with parameter offset values.
3) Change C1 and C2 so that a minimum in the standard deviation
is reached. Then repeat the procedure for C3. The diffraction
weight should not increase above 1.0.
4) Once a new minimum is reached, start varying C2 in smaller steps.
After this, start varying C3 in smaller steps. Keeping the new values
for C2 and C3 constant, repeat the procedure for C4.
5) Now vary C2, C3 and C4 in very small steps until a new minimum is
reached. C3 and C4 are usually not correlated, so a re-tuning of C3
is normally not required. Keeping the new values for C2, C3 and C4
constant, repeat the procedure for C5.
6) C6 is a global clutter weight factor that is multiplied with the
clutter correction factors. This allows C6 to be left constant
while all clutter factors are optimised with respect to:
– a statistical optimum (lowest standard deviation, average error of 0)
for each clutter;
– an engineering approach; this should consider that the clutter
correction factors relative to each other are in plausible limits (e.g.
‘dense urban’ should not have a more optimistic correction factor
than ‘open’ or ‘suburban’ etc.).
7) C7 is usually assumed to be equal to zero.
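Steps 3)–5) amount to a coordinate descent: vary one coefficient at a time over progressively smaller steps and keep the value with the lowest STD. A minimal sketch with a two-coefficient model and synthetic data (all values illustrative):

```python
import math

# Coordinate-descent sketch of steps 3)-5): scan one coefficient over a
# grid, keep the value minimising the STD of the prediction error, then
# shrink the step. Model, data and numeric values are illustrative.
def predict(C, d):
    return C[0] + C[1] * math.log10(d)

def std_of_error(C, data):
    errs = [predict(C, d) - m for d, m in data]
    mean = sum(errs) / len(errs)
    return math.sqrt(sum((e - mean) ** 2 for e in errs) / len(errs))

def tune_coefficient(C, idx, data, step, span=20):
    best = min((C[idx] + k * step for k in range(-span, span + 1)),
               key=lambda v: std_of_error(C[:idx] + [v] + C[idx + 1:], data))
    C[idx] = best

# Synthetic "measurements" generated from a known model (C1=133, C2=38):
data = [(d, 133.0 + 38.0 * math.log10(d)) for d in (400, 700, 1000, 1500, 2000)]
C = [137.0, 35.2]                       # starting values
for step in (1.0, 0.1, 0.01):           # coarse-to-fine steps
    tune_coefficient(C, 1, data, step)  # C1 (intercept) does not affect STD
print([round(c, 2) for c in C])         # -> [137.0, 38.0]
```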
• A different and quicker approach than iterative tuning is the
determinant-based method.
• The goal here is to find the absolute minimum of the deviation
between the predicted and the surveyed samples.
• In general, neither the automated tuning algorithm, nor
Measurement Based Prediction (MBP), nor the determinant
approach will always deliver plausible results from an
engineering point of view.
• For this reason, a validation of the coverage produced by the
model is a prerequisite before implementing the model.
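One possible reading of the determinant-based method (our assumption; the source does not detail it) is that the least-squares normal equations are solved in closed form via their determinant, rather than iteratively. A two-coefficient sketch:

```python
import math

# Closed-form least-squares fit of L = C1 + C2*lg(d): the normal equations
# are solved directly, with 'det' the determinant of their 2x2 system.
# This concrete formulation is an assumption, not taken from the source.
def fit_closed_form(data):
    xs = [math.log10(d) for d, _ in data]
    ys = [m for _, m in data]
    n = len(data)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx           # determinant of the normal equations
    C2 = (n * sxy - sx * sy) / det
    C1 = (sy - C2 * sx) / n
    return C1, C2

data = [(d, 133.0 + 38.0 * math.log10(d)) for d in (400, 700, 1000, 1500, 2000)]
C1, C2 = fit_closed_form(data)
print(round(C1, 1), round(C2, 1))  # -> 133.0 38.0
```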
• For a dual-slope model, the same general expression is used, but
the slope factor C2 switches at a breakpoint distance D:

L = C1 + C2·lg(d) + C3·Ld + C4·lg(he) + C5·lg(he)·lg(d) + C6·Lc + C7·d

C2 = C2a for d < D
C2 = C2b for d ≥ D
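The piecewise slope above can be sketched directly; the breakpoint and coefficient values are illustrative assumptions:

```python
import math

# Dual-slope path loss: slope C2a below the breakpoint distance D, C2b
# beyond it. All numeric values are illustrative.
def dual_slope_loss(d, C1=137.0, C2a=25.0, C2b=38.0, D=500.0):
    C2 = C2a if d < D else C2b
    return C1 + C2 * math.log10(d)

print(round(dual_slope_loss(300.0), 1))   # below the breakpoint
print(round(dual_slope_loss(1500.0), 1))  # beyond the breakpoint
```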
Thank you