
APPLICATIONS OF ADJUSTMENT OF OBSERVATIONS

Need for Adjustment


In the practice of surveying, the measurement of angles, distances and differences in elevation is
fundamental to the final reporting of X, Y, Z coordinates based on some reference datum. A
measurement is subject to variations caused by changes in physical and geometrical conditions, and the
reliability of a measurement depends on how much of this variation has been removed.
If there is a quantity whose value is required, redundant observations are carried out so as to guard
against blunders. Consequently, the use of redundant observations calls for an adjustment, which
enables the unique optimum value(s), close to the true value(s), to be obtained.
Thus, the need for adjustment comes about because of the redundant measurements which are
employed so as to:
i) Guard against blunders
ii) Give a means of obtaining the optimum value(s) for the unknown(s)
iii) Enable accuracy estimation to be carried out.
All measurements have errors. Accounting for the existence of errors in measurements consists of:
- Performing statistical analysis of the measurements to assess the magnitudes of the errors and studying their distributions to determine whether or not they are within acceptable tolerance
- If the measurements are acceptable, performing an adjustment so that they conform to exact geometric conditions
In summary:
- No measurement is exact
- Every measurement contains errors
- The true value of a measurement is never known
- The exact sizes of the errors present are always unknown
Types of errors
A) Gross errors (mistakes)
These are due to carelessness of the observer. Mistakes and blunders must be removed before the data
are used. They include:
- Improper logic (misunderstanding what one is doing)
- Misreading values
- Errors in transcription of data
- Confusion of units
- Arithmetic mistakes


Observational procedures that help detect and eliminate mistakes include:
- Careful checking of all entries of observations
- Repeated readings of the same quantity
- Verifying recorded data
- Use of common sense
- Independent repeated checks
- Mathematical checking for consistency

B) Systematic errors
These arise from some physical phenomenon, some tendency on the part of the observer, the
instrument used, the environment (natural conditions) during observation, or a combination of these
factors.
Systematic errors can be eliminated if the laws governing these contributory factors are known. A
systematic error follows a pattern which will be duplicated if the measurement is made under the same
conditions.
It is referred to as a constant error if its magnitude and sign remain the same throughout the
measurement process.
They include:
- Incorrect calibration of instruments
- Imperfect measurement techniques
- Failure to make the necessary corrections because of simplification of the model used in adjustment
- Bias on the part of the observer
- Constructional faults in the instrument, e.g. misaligned parts
Systematic errors have to be eliminated: before measurement, by calibrating the instrument; during
measurement, by adopting special observational techniques; and after measurement, by correcting for
them computationally.
C) Random errors
These are the errors remaining after blunders have been detected and removed and corrections have
been made for all systematic errors. They follow no known functional relationship and are of the
greatest concern in the statistical analysis of measurements. They can be quantified only by use of
repeated measurements, and they are unpredictable with regard to both size and sign.
The characteristics of random errors include:
- A positive error will occur as frequently as a negative error
- Small errors will occur more frequently than large errors
- The chances of very large errors occurring are remote
Examples include:
- Extremely small disturbances and fluctuations in the measuring system
- Human judgment, e.g. interpolation beyond the smallest division of a scale
- Uncertainty in the definition of the quantity being measured (e.g. a length measurement whose end points are not well defined)
Definitions
Observation:
A recorded value resulting from a measurement.
True value:
The theoretically correct or exact value of a quantity. From measurements, however, the true value can
never be determined, because no matter how much care is exercised small random errors will always
be present.
Error:
The difference between a measured quantity and the true value of that quantity. Since the true value
of a measured quantity can never be determined, errors are likewise indeterminate and hence they are
strictly theoretical quantities. Errors may be estimated by comparing measured quantities or computed
values with those obtained by an independent method known to have a higher accuracy.
Let μ denote the true value of a quantity and x its observed value; then the error ε in x is defined as

ε = μ − x

NB: since the true value of a measured quantity can never be known, the most probable value x̄, which
is a good estimate of μ, is used in its place. Hence

v = x̄ − x

where v is the residual. The residual expresses the variation in the measurement; this variation is the
error in the measurement, and every survey measurement will have it.


Most probable value:
That value for a measured or indirectly determined quantity which, based upon the observations, has
the highest probability of occurrence. The most probable value (m.p.v.) is determined through least
squares adjustment, which is based on the mathematical laws of probability.
The most probable value for a quantity that has been directly and independently measured several
times with observations of equal weight is simply the arithmetic mean:

m.p.v. = Σx / n

where Σx is the sum of the individual measurements and n is the number of observations.


Residual:
The difference between the most probable value of a quantity and an individual measurement of that
quantity, v = x̄ − x. It is the value that is dealt with in adjustment computations.
Degrees of freedom:
The number of redundant observations (those in excess of the number actually needed to calculate the
unknowns). Redundant observations reveal discrepancies in the observed values and make possible the
practice of least squares adjustment for obtaining the most probable values.
Sample:
A subset of data selected from a population.
Population:
All possible measurements that can be made of a particular quantity.
Standard deviation:
A quantity used to express the precision of a group of measurements (also called the root mean square
error). For a number of direct, equally weighted observations of a quantity, the standard deviation is

s = ±√( Σv² / r )

where s is the standard deviation, Σv² is the sum of the squares of the residuals and r = n − 1 is the
number of degrees of freedom.
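As a quick numerical illustration, here is a minimal sketch in plain Python (the function name and sample values are ours, not from the notes):

```python
import math

def std_deviation(observations):
    # s = sqrt(sum(v^2) / (n - 1)) for equally weighted, repeated
    # observations of a single quantity
    n = len(observations)
    mean = sum(observations) / n            # most probable value (m.p.v.)
    residuals = [mean - x for x in observations]
    return math.sqrt(sum(v * v for v in residuals) / (n - 1))

# Four repeated distance measurements (metres)
print(std_deviation([32.51, 32.48, 32.52, 32.53]))   # about 0.022 m
```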

Weight:
The relative worth of an observation compared to any other observation. Weights control the sizes of
the corrections applied to measurements in an adjustment. The more precise an observation is, the
smaller its variance and the higher its weight; the correction it receives is then smaller than that
received by a less precise measurement. Weights are inversely proportional to variances.

Introduction to least squares

Adjustment is understood to be a method of deriving estimates for stochastic (random) variables and
their distribution parameters from observed samples. It is meaningful only if more data than the
minimum required are available.
Different methods of adjustment exist, but the most common one is the least squares method; Carl
Friedrich Gauss was the first person to apply least squares adjustment. Before making any observations
(measuring), we first establish a functional model relating the observations to the desired parameters.
E.g. if we require to compute the 2D coordinates of a point P2 from a point P1 using an observed
horizontal distance s12, we set

[Figure: points P1 and P2 joined by the line S12 in the X-Y plane]

s12 = √( (x2 − x1)² + (y2 − y1)² )

and, with α12 the bearing of the line,

x2 − x1 = s12 sin α12
y2 − y1 = s12 cos α12
There is always a minimum number n0 of independent variables that uniquely determines the chosen
model.
For example, to determine just the shape of a plane triangle, we need to measure a minimum of 2
angles, i.e. n0 = 2. If in addition we need to determine the size of this triangle (i.e. the size and the
shape), then in addition to 2 distinct angles a length must be measured; n0 = 3.
NB: unless the observations supply at least n0 independent values, the system will always show some
deficiency and cannot be solved (for example, a case of 3 unknowns in 2 equations).
If we have a total of n observations (and n > n0), then the difference r = n − n0 is referred to as the
redundancy. When redundancy exists, an adjustment is possible. The redundancy is also referred to as
the degrees of freedom.


One should be careful, since sometimes redundancy can appear to exist when in fact it does not. For
example, consider a plane triangle with angles α, β and γ, and let our interest be the determination of
its shape.
1. We require to observe only 2 angles (any 2 of α, β and γ), i.e. n0 = 2. If we observe α, β and γ, then
n = 3 and r = n − n0 = 3 − 2 = 1.
2. We could instead observe only one angle, say α, four times, each time getting α1, α2, α3, α4.
Although n = 4, we cannot uniquely determine the shape of this triangle; there is a deficiency (the
observations are not distinct).

Principle of least squares

When redundancy exists, any subset of n0 observations will yield a result different from that of another
subset, due to the random variability of the observational sampling.
Example: will {α, β} give a shape different from {β, γ}, and different from {α, γ}?
Solution: since we desire to obtain a unique result from all subsets, we have to introduce an additional
criterion. The original set of observations is normally denoted by the vector l. This set is inconsistent
with the functional model.
The adjustment produces another set of observations l̂. This set is now consistent with the model, i.e.
whichever subset is used, the result of the model is the same.
The estimates l̂ generally differ only slightly from the original observations l, so that

l̂ = l + v

For us to obtain a unique solution from all subsets, we set the least squares principle or criterion, which
states that (in matrix/vector form)

vᵀv = minimum

(Here we assume for now that all observations are of equal reliability.)
The function above can also be expressed as:

v1² + v2² + v3² + … + vn² = Σ vi² = minimum

(This is how the name least squares comes about.) The minimum is found by differentiating with
respect to the unknown and equating to zero:

φ = Σ vi² ,   dφ/dx = 0


Note that the least squares principle does not require prior knowledge of the distribution of the
observations.

Concept of weight
When introducing observations of different accuracies into a survey calculation, it is necessary to
control the influence of these observations on the final results. The principle applied is that
observations of higher accuracy should contribute more to the solution than those of lower accuracy.
To achieve this, weighted observations are introduced. The weight is defined as

wi = c / σi²

where wi is the weight, c is an arbitrary constant and σi is the standard deviation of the observation.
Weights reflect the relative quality of the observations; for example, allocating weights 1 and 4 to two
observations is no different from allocating 3 and 12 to the same observations.
Traditionally c is chosen to be the reference variance σ0², and the weight definition becomes

wi = σ0² / σi²

If σ0² = 1, then wi = 1 / σi².

Weights for a set of observations are combined in a weight matrix W. For uncorrelated observations,
W is diagonal, with the weights on the main diagonal and zeros elsewhere:

W = diag( w1, w2, w3, …, wn ) = diag( σ0²/σ1², σ0²/σ2², …, σ0²/σn² )
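A minimal sketch of building W, assuming uncorrelated observations and NumPy available (the function name and sample σ values are illustrative):

```python
import numpy as np

def weight_matrix(sigmas, sigma0_sq=1.0):
    # Diagonal weight matrix with w_i = sigma0^2 / sigma_i^2,
    # valid for uncorrelated observations
    sigmas = np.asarray(sigmas, dtype=float)
    return np.diag(sigma0_sq / sigmas**2)

print(weight_matrix([2.0, 4.0, 4.0]))         # weights 0.25, 0.0625, 0.0625
print(weight_matrix([2.0, 4.0, 4.0], 16.0))   # weights 4, 1, 1
```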

Parameter estimation by least squares

For practical purposes, only a finite number of observations can be performed in order to obtain the
most probable value (MPV) estimate of the true value.
The MPV can be considered as the population mean or the sample mean. We therefore define the most
probable value as that quantity which has the maximum probability of occurrence.
Consider n independent and equally weighted observations x1, x2, …, xn of the same quantity X, which
has an MPV of x̄ (m.p.v. = x̄).

Then by definition the residuals of the observations may be expressed in this manner:

v1 = x̄ − x1
v2 = x̄ − x2
v3 = x̄ − x3
:
vn = x̄ − xn

These residuals can be assumed to behave in a manner similar to the true errors and may be used
interchangeably, i.e. v ≈ ε.
The residuals are normally distributed according to the function

f(v) = ( 1 / (σ√(2π)) ) e^( −v² / (2σ²) )     … (1)

We may write eqn (1) in the form given in eqn (2):

f(v) = K e^( −v² / (2σ²) )     … (2)

where the quantity K = 1 / (σ√(2π)).

The individual probabilities for the occurrence of the residuals v1, v2, …, vn are obtained by
multiplying the respective ordinates by a small increment Δv.

The probability of occurrence of the residuals may be expressed as

P1 = f(v1)·Δv = K e^( −v1² / (2σ²) ) Δv
P2 = f(v2)·Δv = K e^( −v2² / (2σ²) ) Δv
:
Pn = f(vn)·Δv = K e^( −vn² / (2σ²) ) Δv
The probability of the simultaneous occurrence of all the residuals is simply the product of the
individual probabilities, i.e.

P = P1 · P2 ·…· Pn

P = [ K e^( −v1²/(2σ²) ) Δv ] · [ K e^( −v2²/(2σ²) ) Δv ] ·…· [ K e^( −vn²/(2σ²) ) Δv ]     … (3)

Simplifying eqn (3):

P = Kⁿ (Δv)ⁿ e^( −( v1² + v2² + … + vn² ) / (2σ²) )     … (4)

From eqn (4) we note that P (the probability of the simultaneous occurrence of all the residuals) can be
maximized if we minimize the quantity in the exponent, the sum of the squares of the residuals:

Σv² = v1² + v2² + v3² + … + vn²


From eqn (4), we realize that to maximize P (the probability of occurrence of the residuals) we have to
minimize the sum of the squares of the residuals. This is the fundamental principle of least squares,
expressed as

Σ vi² = v1² + v2² + v3² + … + vn² = minimum     … (5)

From eqn (5), the most probable value of a measured quantity obtained from repeated measurements
of equal weight is the value that renders the sum of the squares of the residuals a minimum.
The minimum value of a function is obtained by taking its first derivative with respect to the variable
and equating it to zero. Substituting vi = x̄ − xi,

Σ vi² = (x̄ − x1)² + (x̄ − x2)² + (x̄ − x3)² + … + (x̄ − xn)²     … (6)

Differentiating eqn (6) with respect to x̄:

d(Σv²)/dx̄ = 2(x̄ − x1) + 2(x̄ − x2) + … + 2(x̄ − xn) = 0

n·x̄ = x1 + x2 + … + xn

x̄ = ( x1 + x2 + … + xn ) / n     … (7)

The quantity in eqn (7) is the arithmetic mean, and therefore we may conclude that the MPV of a
quantity that has been observed independently with equal weights is the arithmetic mean.
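This conclusion is easy to verify numerically; a small sketch in plain Python (illustrative values):

```python
def sum_sq_residuals(x_hat, observations):
    # phi(x_hat) = sum of (x_hat - x_i)^2 over all observations
    return sum((x_hat - x) ** 2 for x in observations)

obs = [15.12, 15.14]
mean = sum(obs) / len(obs)                  # 15.13
for candidate in (15.12, 15.13, 15.14):
    print(candidate, sum_sq_residuals(candidate, obs))
# phi is smallest at 15.13, the arithmetic mean
```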
Least squares adjustment (weighted case)
As before, the residuals can be expressed in the form:

v1 = x̄ − x1 ,  v2 = x̄ − x2 , … ,  vn = x̄ − xn

It is known that wi = 1/σi² (taking σ0² = 1). We can therefore reformulate the total probability P as

P = K e^( −( w1 v1² + w2 v2² + w3 v3² + … + wn vn² ) / 2 )     … (8)

where K collects the constant factors.


To maximize P, we need to minimize

w1 v1² + w2 v2² + w3 v3² + … + wn vn²     … (9)

Σ w v² = w1 v1² + w2 v2² + w3 v3² + … + wn vn²
Σ w v² = w1 (x̄ − x1)² + w2 (x̄ − x2)² + … + wn (x̄ − xn)²     … (10)

Differentiating eqn (10) with respect to x̄ and equating to zero, we have:

d(Σwv²)/dx̄ = 2 w1 (x̄ − x1) + 2 w2 (x̄ − x2) + … + 2 wn (x̄ − xn) = 0     … (11)

w1 x̄ + w2 x̄ + … + wn x̄ = w1 x1 + w2 x2 + … + wn xn

x̄ = ( w1 x1 + w2 x2 + … + wn xn ) / ( w1 + w2 + … + wn )     … (12)

The most probable value (MPV) for the weighted case is as shown in eqn (12). The most probable value
for a quantity obtained from repeated measurements having varying weights is that value which
renders the sum of the weights multiplied by their respective squared residuals a minimum.
In matrix form, the least squares solution is found by minimizing the function

φ = VᵀV = minimum     … (1)

In a situation where different observations carry different weights, eqn (1) may be modified to account
for the different weights. It becomes:

φ = VᵀWV = minimum     … (2)

V is an n × 1 vector of residuals, while W is an n × n weight matrix, which is diagonal when the
observations are uncorrelated.
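A minimal NumPy sketch of evaluating the weighted objective φ = VᵀWV (the residuals and weights shown are made-up illustrative numbers):

```python
import numpy as np

V = np.array([[0.02], [-0.01], [0.03]])   # n x 1 residual vector
W = np.diag([4.0, 1.0, 1.0])              # n x n diagonal weight matrix

phi = (V.T @ W @ V).item()                # phi = V'WV, the quantity minimised
print(phi)                                # 4*0.0004 + 0.0001 + 0.0009 = 0.0026
```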


Example 1
Consider the measurement of the distance P1P2. It was measured twice: l1 = 15.12 m and l2 = 15.14 m.
Find the least squares solution.
Solution: n = 2, n0 = 1, r = n − n0 = 1. An adjustment is possible. Find the estimate x̂ for the distance:

x̂ = l1 + v1 = l2 + v2
v1 = x̂ − l1 = x̂ − 15.12
v2 = x̂ − l2 = x̂ − 15.14

Establish the function

φ = v1² + v2² = (x̂ − 15.12)² + (x̂ − 15.14)²

Our unknown is x̂. Taking the partial derivative of φ with respect to the unknown x̂ and equating it to
zero:

dφ/dx̂ = 2(x̂ − 15.12) + 2(x̂ − 15.14) = 0
4x̂ − 60.52 = 0
x̂ = 15.13 m

NB: this value is equal to the arithmetic mean, since the observations are uncorrelated and of equal
precision.
Example 2
A distance is measured 4 times with the following results: l1 = 32.51, l2 = 32.48, l3 = 32.52, l4 = 32.53.
What is the least squares estimate of the distance?
Solution: r = 4 − 1 = 3.

x̂ = li + vi , hence
v1 = x̂ − 32.51
v2 = x̂ − 32.48
v3 = x̂ − 32.52
v4 = x̂ − 32.53

φ = v1² + v2² + v3² + v4²
φ = (x̂ − 32.51)² + (x̂ − 32.48)² + (x̂ − 32.52)² + (x̂ − 32.53)²

dφ/dx̂ = 2(x̂ − 32.51) + 2(x̂ − 32.48) + 2(x̂ − 32.52) + 2(x̂ − 32.53) = 0

8x̂ − 260.08 = 0

x̂ = 32.51

v1 = 32.51 − 32.51 = 0.00
v2 = 32.51 − 32.48 = +0.03
v3 = 32.51 − 32.52 = −0.01
v4 = 32.51 − 32.53 = −0.02
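The same computation in a few lines of plain Python, using this example's observations:

```python
obs = [32.51, 32.48, 32.52, 32.53]

x_hat = sum(obs) / len(obs)               # least squares estimate = mean
residuals = [x_hat - l for l in obs]      # v_i = x_hat - l_i

print(round(x_hat, 2))                    # 32.51
print([round(v, 2) for v in residuals])   # [0.0, 0.03, -0.01, -0.02]
```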

Example 3
The equation of a straight line in a plane is y − ax − b = 0. In order to estimate the slope a and the
intercept b, the coordinates of 3 points were measured:

Point    x      y
1        2.0    3.2
2        4.0    4.0
3        6.0    5.0

Assuming that the x coordinates are error-free constants and the y coordinates are observations, use
the method of least squares to calculate estimates of a and b.
Solution: n = 3, n0 = 2, r = n − n0 = 3 − 2 = 1, which implies an adjustment is possible.
The parameters sought are (a, b). We now write the 3 equations involving the observations:

y1 + v1 − a·x1 − b = 0   →   v1 = 2a + b − 3.2
y2 + v2 − a·x2 − b = 0   →   v2 = 4a + b − 4.0
y3 + v3 − a·x3 − b = 0   →   v3 = 6a + b − 5.0

φ = v1² + v2² + v3² = (2a + b − 3.2)² + (4a + b − 4.0)² + (6a + b − 5.0)²

∂φ/∂a = 2·2·(2a + b − 3.2) + 2·4·(4a + b − 4.0) + 2·6·(6a + b − 5.0) = 0

= 8a + 4b − 12.8 + 32a + 8b − 32.0 + 72a + 12b − 60 = 0

112a + 24b − 104.8 = 0     … (1)

∂φ/∂b = 2·(2a + b − 3.2) + 2·(4a + b − 4.0) + 2·(6a + b − 5.0) = 0

12a + 3b − 12.2 = 0     … (2)

Solving (1) and (2) simultaneously gives 2a = 0.90, i.e. a = 0.450, and b = +2.27.

Residuals:

v1 = 2(0.450) + 2.27 − 3.2 = −0.033
v2 = 4(0.450) + 2.27 − 4.0 = +0.067
v3 = 6(0.450) + 2.27 − 5.0 = −0.033

Computed (adjusted) observations:

ŷ1 = y1 + v1 = 3.2 − 0.033 = 3.167
ŷ2 = y2 + v2 = 4.0 + 0.067 = 4.067
ŷ3 = y3 + v3 = 5.0 − 0.033 = 4.967
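For comparison, the same fit can be done with NumPy's general least squares routine; a sketch using this example's values (the design-matrix formulation is standard practice, not from the notes):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0])             # error-free x coordinates
y = np.array([3.2, 4.0, 5.0])             # observed y values

A = np.column_stack([x, np.ones_like(x)]) # design matrix for y = a*x + b
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(a, b)                               # 0.450, 2.267
print(A @ np.array([a, b]))               # adjusted y: [3.167, 4.067, 4.967]
```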

Example 4
Consider the following set of linear equations:

x + y − 3.0 = v1
2x − y − 1.5 = v2
x − y − 0.2 = v3

Find the least squares solution for x and y.

φ(x, y) = Σv² = (x + y − 3.0)² + (2x − y − 1.5)² + (x − y − 0.2)²

To minimise the function above, we differentiate with respect to x and y and equate the derivatives to
zero, i.e.

∂φ/∂x = 2(x + y − 3.0) + 2·2(2x − y − 1.5) + 2(x − y − 0.2) = 0

∂φ/∂y = 2(x + y − 3.0) − 2(2x − y − 1.5) − 2(x − y − 0.2) = 0

Simplifying this set of equations:

6x − 2y − 6.2 = 0
−2x + 3y − 1.3 = 0

The solution of these equations gives x = 1.514 and y = 1.442. If we substitute these least squares
solutions into the original equations, we get the residuals:

v1 = −0.044 ,  v1² = 0.001936
v2 = +0.086 ,  v2² = 0.007396
v3 = −0.128 ,  v3² = 0.016384

Σv² = 0.025716

Least squares provides the smallest value of the sum of the squared residuals and the MPVs for the
estimates of x and y.
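A quick NumPy check of this example (a sketch; the small differences from the hand computation come from rounding x and y to three decimals there):

```python
import numpy as np

# x + y = 3.0 ; 2x - y = 1.5 ; x - y = 0.2, written as A [x, y]' = c
A = np.array([[1.0, 1.0],
              [2.0, -1.0],
              [1.0, -1.0]])
c = np.array([3.0, 1.5, 0.2])

sol, *_ = np.linalg.lstsq(A, c, rcond=None)
v = A @ sol - c                    # residual of each equation

print(sol)                         # [1.5143, 1.4429]
print(v, v @ v)                    # about [-0.043, 0.086, -0.129], 0.0257
```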

Example 5
Given a set of observations l = {l1, l2, l3, …, ln} with corresponding weights w1, w2, w3, …, wn,
determine the weighted arithmetic mean.
Solution:

x̄ = ( w1 l1 + w2 l2 + … + wn ln ) / ( w1 + w2 + … + wn )

We want to show that the choice of σ0², as used in determining the weights, does not affect the final
result. We are given 3 measurements and their standard deviations:

Observation      σi    Weight with σ0² = 1    Weight with σ0² = 16
l1 = 100.01      2     1/4                    16/4 = 4
l2 = 100.03      4     1/16                   16/16 = 1
l3 = 100.04      4     1/16                   16/16 = 1

The weighted mean with σ0² = 1:

x̄ = ( 100.01/4 + 100.03/16 + 100.04/16 ) / ( 1/4 + 1/16 + 1/16 ) = 37.507 / 0.375 = 100.018

and with σ0² = 16:

x̄ = ( 100.01·4 + 100.03·1 + 100.04·1 ) / ( 4 + 1 + 1 ) = 600.11 / 6 = 100.018

The two choices of σ0² give identical results.
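The cancellation of σ0² is easy to confirm in code; a plain-Python sketch with this example's numbers:

```python
def weighted_mean(values, sigmas, sigma0_sq=1.0):
    # Weighted arithmetic mean with w_i = sigma0^2 / sigma_i^2
    weights = [sigma0_sq / s**2 for s in sigmas]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

obs = [100.01, 100.03, 100.04]
sig = [2.0, 4.0, 4.0]

print(weighted_mean(obs, sig, 1.0))    # 100.018...
print(weighted_mean(obs, sig, 16.0))   # identical: sigma0^2 cancels out
```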

