B) Systematic errors
These arise from some physical phenomenon or some tendency on the part of the observer, the
instrument used, the environment (natural conditions) during observations or a combination of these
factors.
Systematic errors can be eliminated if the laws governing the contributory factors are known. A systematic error follows a pattern that will be duplicated if the measurement is made under the same conditions. It is referred to as a constant error if its magnitude and sign remain the same throughout the measurement process.
Sources of systematic error include:
- Incorrect calibration of instruments
- Imperfect measurement techniques
- Failure to make the necessary corrections because of simplification of the model used in the adjustment
- Bias on the part of the observer
- Constructional faults in the instrument, e.g. misaligned parts
Systematic errors can be eliminated before measurement by calibrating the instrument, during measurement by adopting special observational techniques, or after measurement by correcting for them computationally.
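As a minimal sketch of the third option (correcting computationally after measurement), the snippet below applies a scale correction for a tape whose calibrated length differs from its nominal length. The calibration values are hypothetical, chosen only for illustration.

```python
# A minimal sketch of removing a known systematic (scale) error after
# measurement. The tape calibration values here are hypothetical.
NOMINAL_TAPE_LENGTH = 30.000   # metres, nominal length of the tape
CALIBRATED_LENGTH = 30.006     # metres, true length found by calibration

def correct_for_tape_scale(measured):
    """Scale correction: every nominal 30 m recorded is really 30.006 m."""
    return measured * (CALIBRATED_LENGTH / NOMINAL_TAPE_LENGTH)

print(round(correct_for_tape_scale(150.000), 3))  # 150.03
```

Because the error follows a known law (a constant scale factor), it can be removed exactly, which is precisely what distinguishes systematic errors from random ones.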
C) Random errors
These are errors remaining after blunders have been detected and removed and the corrections have
been made for all systematic errors.
These errors follow no known functional relationship, and they are of the greatest concern in the statistical analysis of measurements. They can be quantified only by the use of repeated measurements, and they are unpredictable with regard to both size and sign.
The characteristics of random errors include:
- A positive error will occur as frequently as a negative error
- Small errors occur more frequently than large ones
- Very large errors rarely occur
Examples include:
- Extremely small disturbances and fluctuations in the measuring system
- Human judgment, e.g. interpolation beyond the smallest division of a scale
- Uncertainty in the definition of the quantity being measured (e.g. a length measurement whose end points are not well defined)
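The symmetry of random errors can be illustrated with a quick simulation: normally distributed errors come out positive about as often as negative, and their mean tends to zero as the number of repetitions grows. The standard deviation below is an assumed value.

```python
import random

# Simulated random errors: symmetric about zero, so positive and negative
# errors occur about equally often and the mean error tends to zero.
random.seed(42)
errors = [random.gauss(0.0, 0.005) for _ in range(10_000)]  # assumed 5 mm std dev

positives = sum(1 for e in errors if e > 0)
negatives = sum(1 for e in errors if e < 0)
mean_error = sum(errors) / len(errors)
print(positives, negatives, round(mean_error, 5))
```

This is why averaging repeated measurements reduces the influence of random errors, while it does nothing against a systematic bias.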
Definitions
Observation:
This is a recorded value resulting from a measurement.
True Value:
This is the theoretically correct or exact value of a quantity. From measurements, however, the true value can never be determined, because no matter how much care is exercised in measurement, small random errors will always be present.
Error:
The difference between any measured quantity and the true value for that quantity. Since the true value of a measured quantity can never be determined, errors are likewise indeterminate, and hence they are strictly theoretical quantities. Errors may be estimated by comparing measured quantities or computed values with those obtained by an independent method known to have a higher accuracy.
Let τ denote the true value of a quantity and x its observed value; then the error ε in x is defined as

ε = x − τ

NB: the true value of a measured quantity can never be known.
s = √(Σv² / r)

where s is the standard deviation, Σv² is the sum of the squares of the residuals and r is the degrees of freedom.
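The formula above can be sketched directly in code. The observations are the four distance measurements used later in Example 2; with one unknown (the distance itself), n0 = 1.

```python
import math

# s = sqrt(sum(v^2) / r), with r = n - n0 degrees of freedom.
observations = [32.51, 32.48, 32.52, 32.53]  # repeated measures of one distance
n, n0 = len(observations), 1                 # one unknown: the distance itself
r = n - n0

mean = sum(observations) / n
residuals = [mean - x for x in observations]
s = math.sqrt(sum(v * v for v in residuals) / r)
print(round(s, 4))
```

Note that the divisor is the redundancy r, not n, which is what makes s an unbiased-style estimate from residuals rather than from true errors.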
Weight:
The relative worth of an observation compared to other observations. Weights control the sizes of the corrections applied to measurements in an adjustment. The more precise an observation, the smaller its variance and the higher its weight; the correction it receives is then smaller than that received by a less precise measurement. Weights are inversely proportional to variances.
[Figure: points P1 and P2 joined by a measured distance S12 with bearing α12; the coordinate differences follow as x2 − x1 = S12 sin α12 and y2 − y1 = S12 cos α12.]
There is always a minimum number of independent variables that uniquely determine the chosen
model.
For example, to determine just the shape of a plane triangle, we need to measure a minimum of 2 angles, i.e. n0 = 2. If in addition we need to determine the size of this triangle (i.e. both size and shape), then besides 2 distinct angles a length must be measured, so n0 = 3.
NB: unless the observations supply at least n0 independent values, the system will show a deficiency and cannot be solved (for example, a case of 3 unknowns in 2 equations).
If we have a total of n observations (with n > n0), then the difference r = n − n0 is referred to as the redundancy. When redundancy exists, an adjustment is possible. The redundancy is also referred to as the degrees of freedom.
One should be careful, since sometimes redundancy can appear to exist when in fact it does not. For example, consider a plane triangle with angles α, β and γ, and let our interest be the determination of its shape.
1. We need observe only 2 angles (any 2 of α, β and γ), i.e. n0 = 2. If we observe α, β and γ, then n = 3 and r = n − n0 = 3 − 2 = 1.
2. We could instead observe only one angle, say α, four times, each time obtaining α1, α2, α3, α4. Although n = 4, we cannot uniquely determine the shape of this triangle; the system is deficient (the observations are not distinct).
For us to obtain a unique solution from all subsets, we apply the least squares principle or criterion, which states that (in matrix/vector form)

φ = VᵀV = minimum

(Here we assume for now that all observations are of equal reliability.)

The function above can also be expressed as

φ = v1² + v2² + v3² + ... + vn² = Σ vi² = minimum,

and the minimum is obtained by setting ∂φ/∂x = 0.
Note that the least squares principle does not require prior knowledge about the distribution of the observations.
Concept of weight
When introducing observations of different accuracies into a survey calculation, it is necessary to control the influence of these observations on the final results. The principle applied is that observations of higher accuracy should contribute more to the solution than those of lower accuracy.
To achieve this, weighted observations are introduced. The weight of an observation is defined as

wi = σ0² / σi²

where σ0² is an arbitrarily chosen reference variance and σi² is the variance of observation i.
Weights for a set of observations are combined in an n × n weight matrix W of the form

W = diag(w1, w2, w3, ..., wn) = diag(σ0²/σ1², σ0²/σ2², ..., σ0²/σn²)

i.e. a diagonal matrix whose off-diagonal elements are all zero (for uncorrelated observations).
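Building W from the observation standard deviations can be sketched as below. The standard deviations are those used in the weighted-mean example later in these notes (σ = 2, 4, 4 with σ0² = 16).

```python
# Building the diagonal weight matrix W with w_i = sigma_0^2 / sigma_i^2.
sigma0_sq = 16.0
sigmas = [2.0, 4.0, 4.0]                # per-observation standard deviations

n = len(sigmas)
W = [[0.0] * n for _ in range(n)]       # start from an all-zero matrix
for i, s in enumerate(sigmas):
    W[i][i] = sigma0_sq / (s * s)       # only the diagonal is populated

print([row[i] for i, row in enumerate(W)])  # [4.0, 1.0, 1.0]
```

The choice of σ0² only scales all weights by a common factor, so it does not change the adjusted result.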
Then, by definition, the residuals of the observations may be expressed in this manner:

v1 = x̄ − x1
v2 = x̄ − x2
v3 = x̄ − x3
:
vn = x̄ − xn

where x̄ is the most probable value. These residuals can be assumed to behave in a manner similar to the true errors and may be used interchangeably.
The residuals are normally distributed according to the function

f(v) = (1 / (σ√(2π))) e^(−v²/2σ²)    (1)

The individual probabilities for the occurrence of the residuals v1, v2, ..., vn are obtained by multiplying the respective ordinates by a small increment Δv. The probability of occurrence of the errors may thus be expressed as

P1 = f(v1) Δv = (Δv / (σ√(2π))) e^(−v1²/2σ²)
P2 = f(v2) Δv = (Δv / (σ√(2π))) e^(−v2²/2σ²)
:
Pn = f(vn) Δv = (Δv / (σ√(2π))) e^(−vn²/2σ²)    (2)

The probability of the simultaneous occurrence of all residuals is simply the product of the individual probabilities, i.e.

P = P1 · P2 · ... · Pn    (3)

P = (Δv / (σ√(2π)))ⁿ e^(−(v1² + v2² + ... + vn²)/2σ²)    (4)

From eqn (4) we note that P (the probability of the simultaneous occurrence of all residuals) can be maximized if we minimize the quantity in the exponent, the sum of the squares of the residuals

φ = v1² + v2² + v3² + ... + vn²
From eqn 4, we realize that to maximize P (probability of occurrence of the residuals) we have to
minimize the sum of the squares of the residuals. This is the fundamental principle of least squares.
Expressed in symbols,

Σv² = v1² + v2² + v3² + ... + vn² = minimum    (5)

From equation (5), the most probable value of a quantity obtained from repeated measurements of equal weight is the value that renders the sum of squares of the residuals a minimum. The minimum of a function is obtained by taking the first derivative with respect to the variable and equating it to zero. With vi = x̄ − xi,

Σv² = (x̄ − x1)² + (x̄ − x2)² + ... + (x̄ − xn)²    (6)

d(Σv²)/dx̄ = 2(x̄ − x1) + 2(x̄ − x2) + ... + 2(x̄ − xn) = 0

n x̄ = x1 + x2 + ... + xn

x̄ = (x1 + x2 + ... + xn) / n    (7)
The quantity in eqn (7) is the arithmetic mean, and therefore we may conclude that the MPV of a quantity that has been observed repeatedly and independently is the arithmetic mean.
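This conclusion can be checked numerically: the arithmetic mean makes the sum of squared residuals no larger than any nearby trial value. The observations below are hypothetical.

```python
# Numerical check that the value minimizing the sum of squared residuals
# is the arithmetic mean. The observations are hypothetical.
observations = [10.02, 10.05, 10.01]

def phi(x):
    """Sum of squared residuals for trial value x."""
    return sum((x - l) ** 2 for l in observations)

mean = sum(observations) / len(observations)
# phi is never smaller at nearby trial values than at the mean:
assert all(phi(mean) <= phi(mean + d) for d in (-0.01, -0.001, 0.001, 0.01))
print(round(mean, 4))  # 10.0267
```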
Least squares adjustment (Weighted case)
The residuals about the MPV x̄ can be expressed in the form:

v1 = x̄ − x1, v2 = x̄ − x2, ..., vn = x̄ − xn

With the weights wi = σ0²/σi², the function to be minimized is

φ = w1v1² + w2v2² + w3v3² + ... + wnvn²    (8)

= w1(x̄ − x1)² + w2(x̄ − x2)² + ... + wn(x̄ − xn)²    (10)

dφ/dx̄ = 2w1(x̄ − x1) + 2w2(x̄ − x2) + ... + 2wn(x̄ − xn) = 0    (11)

(w1 + w2 + ... + wn) x̄ = w1x1 + w2x2 + ... + wnxn

x̄ = (w1x1 + w2x2 + ... + wnxn) / (w1 + w2 + ... + wn)    (12)

The most probable value (MPV) for the weighted case is as shown in eqn (12): the most probable value of a quantity obtained from repeated measurements of varying weights is the value that renders the sum of the weights multiplied by the squares of their respective residuals a minimum.
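The weighted MPV of eqn (12) is a one-line computation. The observations and weights below are hypothetical values for illustration.

```python
# Eqn (12): the weighted MPV is sum(w_i * x_i) / sum(w_i).
# Observations and weights below are hypothetical.
observations = [60.255, 60.252, 60.249]
weights = [2.0, 1.0, 3.0]

x_bar = (sum(w * x for w, x in zip(weights, observations))
         / sum(weights))
print(round(x_bar, 4))  # 60.2515
```

With equal weights this reduces to the ordinary arithmetic mean of eqn (7).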
The least squares solution is found by minimizing the function

φ = VᵀV = minimum    (1)

In a situation where different observations carry different weights, eqn (1) may be modified to account for the different weights. It becomes:

φ = VᵀWV = minimum    (2)

V is an n × 1 vector of residuals, while W is an n × n weight matrix, which is diagonal when the observations are uncorrelated.
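For a linear model the matrix criterion VᵀWV = minimum leads to the normal equations, x̂ = (AᵀWA)⁻¹AᵀWL. The sketch below uses hypothetical values: the same distance measured three times, so A is a column of ones and x̂ reduces to the weighted mean.

```python
import numpy as np

# Minimizing V^T W V for the linear model A x = L + V gives
#   x_hat = (A^T W A)^(-1) A^T W L.
A = np.array([[1.0], [1.0], [1.0]])     # design matrix (same quantity thrice)
L = np.array([15.12, 15.14, 15.13])     # hypothetical observations
W = np.diag([4.0, 1.0, 1.0])            # first measurement is most precise

N = A.T @ W @ A                         # normal equations matrix
x_hat = np.linalg.solve(N, A.T @ W @ L)
V = A @ x_hat - L                       # residuals
print(round(float(x_hat[0]), 4))        # 15.125
```

Note how the heavily weighted first observation pulls the estimate towards 15.12, away from the plain mean 15.13.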
Example 1
Consider two measurements of the distance P1P2: l1 = 15.12 and l2 = 15.14.

φ = v1² + v2² = (x − 15.12)² + (x − 15.14)²

Our unknown is x. Taking the partial derivative of φ with respect to the unknown x and equating it to zero:

dφ/dx = 2(x − 15.12) + 2(x − 15.14) = 0

x = 15.13

NB: this value is equal to the arithmetic mean, since the observations are uncorrelated and of equal precision.
Example 2
A distance is measured 4 times, with the following results. What is the least squares estimate of the distance?

l1 = 32.51, l2 = 32.48, l3 = 32.52, l4 = 32.53

Solution: r = n − n0 = 4 − 1 = 3

v1 = x − 32.51
v2 = x − 32.48
v3 = x − 32.52
v4 = x − 32.53

φ = v1² + v2² + v3² + v4²
= (x − 32.51)² + (x − 32.48)² + (x − 32.52)² + (x − 32.53)²

dφ/dx = 8x − 260.08 = 0

x = 32.51
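A quick check of this result: the root of the derivative 8x − 260.08 = 0 coincides with the arithmetic mean of the four observations.

```python
# Check of Example 2: the stationary point of phi, where
# d(phi)/dx = 8x - 260.08 = 0, is the arithmetic mean.
observations = [32.51, 32.48, 32.52, 32.53]

x = 260.08 / 8                          # root of the derivative
mean = sum(observations) / len(observations)
print(round(x, 2), round(mean, 2))      # 32.51 32.51
```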
Example 3
The equation of a straight line in a plane is y − ax − b = 0. In order to estimate the slope a and the intercept b, the coordinates of 3 points were measured:

Point   x     y
1       2     3.2
2       4     4.0
3       6     5.0

Assuming that the x coordinates are error-free constants and the y values are observations, use the method of least squares to calculate estimates of a and b.

Solution: n = 3, n0 = 2, r = n − n0 = 3 − 2 = 1, which implies an adjustment is possible.

The parameters being sought are (a, b). We now write the 3 equations involving the observations:

y1 + v1 − a x1 − b = 0  →  v1 = 2a + b − 3.2
y2 + v2 − a x2 − b = 0  →  v2 = 4a + b − 4.0
y3 + v3 − a x3 − b = 0  →  v3 = 6a + b − 5.0

φ = v1² + v2² + v3² = (2a + b − 3.2)² + (4a + b − 4.0)² + (6a + b − 5.0)²

∂φ/∂a = (8a + 4b − 12.8) + (32a + 8b − 32.0) + (72a + 12b − 60.0) = 0

112a + 24b − 104.8 = 0    (1)

∂φ/∂b = 2(2a + b − 3.2) + 2(4a + b − 4.0) + 2(6a + b − 5.0) = 0

12a + 3b − 12.2 = 0    (2)

Solving (1) and (2) simultaneously gives

a = 0.450, b = +2.267

The residuals are then

v1 = 2(0.450) + 2.267 − 3.2 = −0.033
v2 = 4(0.450) + 2.267 − 4.0 = +0.067
v3 = 6(0.450) + 2.267 − 5.0 = −0.033

Computed (adjusted) observations ŷi = yi + vi:

ŷ1 = 3.2 − 0.033 = 3.167
ŷ2 = 4.0 + 0.067 = 4.067
ŷ3 = 5.0 − 0.033 = 4.967
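The line fit of Example 3 can be verified with numpy's least squares solver, which solves exactly the same normal equations.

```python
import numpy as np

# Check of Example 3 with numpy's least squares solver: fit y = a x + b.
x = np.array([2.0, 4.0, 6.0])
y = np.array([3.2, 4.0, 5.0])
A = np.column_stack([x, np.ones_like(x)])   # columns for a and b

(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
v = A @ np.array([a, b]) - y                # residuals v_i = a x_i + b - y_i
print(round(a, 3), round(b, 3))             # 0.45 2.267
```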
Example 4
Consider the following set of linear equations:

x + y − 3.0 = v1
2x − y − 1.5 = v2
x − y − 0.2 = v3

Find the least squares solution for x and y.

φ = Σvi² = (x + y − 3.0)² + (2x − y − 1.5)² + (x − y − 0.2)²

To minimise the equation above, we differentiate with respect to x and y and equate the derivatives to zero, i.e.

∂φ/∂x = 2(x + y − 3.0) + 4(2x − y − 1.5) + 2(x − y − 0.2) = 0
∂φ/∂y = 2(x + y − 3.0) − 2(2x − y − 1.5) − 2(x − y − 0.2) = 0

Solving the two equations gives x = 1.514 and y = 1.442, with residuals

v1 = −0.044, v1² = 0.001936
v2 = +0.086, v2² = 0.007396
v3 = −0.128, v3² = 0.016384

Σv² = 0.025716
Least squares yields the smallest possible value of the sum of the squared residuals, and hence the MPVs for the estimates of x and y.
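Expanding the two partial derivatives in Example 4 gives the normal equations 12x − 4y = 12.4 and −4x + 6y = 2.6, which the sketch below solves directly by Cramer's rule.

```python
# Check of Example 4 by solving the two normal equations directly:
#   12x - 4y = 12.4   and   -4x + 6y = 2.6
det = 12 * 6 - (-4) * (-4)                  # determinant = 56
x = (12.4 * 6 - (-4) * 2.6) / det           # Cramer's rule
y = (12 * 2.6 - (-4) * 12.4) / det

v1 = x + y - 3.0
v2 = 2 * x - y - 1.5
v3 = x - y - 0.2
print(round(x, 3), round(y, 3))             # 1.514 1.443
print(round(v1**2 + v2**2 + v3**2, 6))
```

The exact sum of squares comes out as 0.025714; the text's 0.025716 reflects residuals rounded to three decimals before squaring.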
Example 5
Given a set of observations x = {x1, x2, x3, ..., xn} with corresponding weights w1, w2, w3, ..., wn, determine the weighted arithmetic mean.

Solution:

x̄ = (w1x1 + w2x2 + ... + wnxn) / (w1 + w2 + ... + wn) = Σwixi / Σwi

We want to show that the choice of σ0², as used in determining the weights, does not affect the final result. We are given 3 measurements and their standard deviations:

Observation      σ    Weight with σ0² = 1    Weight with σ0² = 16
x1 = 100.01      2    1/4                    16/4 = 4
x2 = 100.03      4    1/16                   16/16 = 1
x3 = 100.04      4    1/16                   16/16 = 1

With σ0² = 16: x̄ = (4 × 100.01 + 1 × 100.03 + 1 × 100.04) / (4 + 1 + 1) = 600.11/6 = 100.018
With σ0² = 1: the weights are all 1/16 of those above, and the common factor cancels in the quotient, giving the same x̄ = 100.018.
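The invariance claimed in Example 5 can be sketched in a few lines: scaling every weight by the same σ0² cancels between numerator and denominator of the weighted mean.

```python
# Check of Example 5: scaling all weights by the same sigma_0^2 leaves
# the weighted mean unchanged.
obs = [100.01, 100.03, 100.04]
sigma_sq = [4.0, 16.0, 16.0]                # variances of the observations

def weighted_mean(sigma0_sq):
    w = [sigma0_sq / s for s in sigma_sq]   # weights for this choice
    return sum(wi * xi for wi, xi in zip(w, obs)) / sum(w)

print(round(weighted_mean(1.0), 4))         # 100.0183
print(round(weighted_mean(16.0), 4))        # 100.0183
```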