Part I
by
Urho A. Uotila
Table of contents
Propagation of Variances and Covariances
Definitions
Linear Functions
Nonlinear Functions
Example: Effects of Covariances
Propagation of Variances and Covariances Through a Chain of Triangles
Weights
Variance of Weighted Mean
Example of Weighted Mean
Commonly Used Weights in Geodesy and Surveying
Example of Weighted Mean
Examples of Weighted Mean in Leveling
Least Squares Adjustments
Observation Equations
Linear Model
Variance-Covariance Matrix for Parameters
A Posteriori Variance of Unit Weight
Numerical Example for a Linear Model
Example of Station Adjustment
Nonlinear Model
Observation Equations for Distances and Angles on a Plane
Numerical Example of Adjustment
Method of Condition Equations (Method of Correlates)
Linear and Nonlinear Models
Example: Adjustment of a Leveling Net
Example: Adjustment of a Traverse
Effect of Changes of Weights of Observations
Sequential Solutions with Observation Equations
Addition of New Observations to the Normal Equation
Propagation of Variances and Covariances
We have the following definitions:

The variance of a random variable x_i is given as:

σ²_{x_i} = var(x_i) = E[(x_i − E[x_i])²] = E[x_i²] − (E[x_i])²   (1)

The covariance of two random variables x_i and x_j is:

σ_{x_i x_j} = cov(x_i, x_j) = E[(x_i − E[x_i])(x_j − E[x_j])]
            = E[x_i x_j] − E[x_i]E[x_j] − E[x_j]E[x_i] + E[x_i]E[x_j]
            = E[x_i x_j] − E[x_i]E[x_j]   (2)

For a random vector and its expectation,

X_{n×1} = [x_1, x_2, ⋯, x_n]ᵀ ,   E[X] = [E[x_1], E[x_2], ⋯, E[x_n]]ᵀ   (3)

the variance-covariance matrix of X is Σ_X:

Σ_X = E[(X − E[X])(X − E[X])ᵀ]

    = E{ [x_1 − E[x_1]
          x_2 − E[x_2]
          ⋮
          x_n − E[x_n]] · [x_1 − E[x_1]   x_2 − E[x_2]   ⋯   x_n − E[x_n]] }

    = [ σ²_{x_1}     σ_{x_1x_2}   ⋯   σ_{x_1x_n}
        σ_{x_2x_1}   σ²_{x_2}     ⋯   σ_{x_2x_n}
        ⋮            ⋮            ⋱   ⋮
        σ_{x_nx_1}   σ_{x_nx_2}   ⋯   σ²_{x_n}   ] ,   σ_{ij} = σ_{ji}   (4)
Consider two linear functions of the random variables x_1 and x_2:

y_1 = a_1x_1 + a_2x_2 + c_1
y_2 = b_1x_1 − b_2x_2 + c_2 ,   with   Σ_X = [ σ²_{x_1}    σ_{x_1x_2}
                                              σ_{x_2x_1}  σ²_{x_2}  ]

E[y_1] = E[a_1x_1 + a_2x_2 + c_1] = a_1E[x_1] + a_2E[x_2] + c_1
E[y_2] = E[b_1x_1 − b_2x_2 + c_2] = b_1E[x_1] − b_2E[x_2] + c_2

y_1 − E[y_1] = a_1x_1 + a_2x_2 + c_1 − (a_1E[x_1] + a_2E[x_2] + c_1) = a_1(x_1 − E[x_1]) + a_2(x_2 − E[x_2])   (5)

Similarly,

y_2 − E[y_2] = b_1x_1 − b_2x_2 + c_2 − (b_1E[x_1] − b_2E[x_2] + c_2) = b_1(x_1 − E[x_1]) − b_2(x_2 − E[x_2])   (6)

Again, squaring (5) and taking expectations:

(y_1 − E[y_1])² = a_1²(x_1 − E[x_1])² + a_2²(x_2 − E[x_2])² + 2a_1a_2(x_1 − E[x_1])(x_2 − E[x_2])

∴ σ²_{y_1} = a_1²σ²_{x_1} + a_2²σ²_{x_2} + 2a_1a_2σ_{x_1x_2} = [a_1  a_2] [ σ²_{x_1}    σ_{x_1x_2}   [ a_1
                                                                           σ_{x_2x_1}  σ²_{x_2}  ]   a_2 ] = AΣ_XAᵀ   (7)

Similarly,

(y_2 − E[y_2])² = b_1²(x_1 − E[x_1])² + b_2²(x_2 − E[x_2])² − 2b_1b_2(x_1 − E[x_1])(x_2 − E[x_2])

∴ σ²_{y_2} = b_1²σ²_{x_1} + b_2²σ²_{x_2} − 2b_1b_2σ_{x_1x_2} = [b_1  −b_2] [ σ²_{x_1}    σ_{x_1x_2}   [ b_1
                                                                            σ_{x_2x_1}  σ²_{x_2}  ]  −b_2 ] = BΣ_XBᵀ   (8)

Finally, for the covariance:

σ_{y_1y_2} = a_1b_1σ²_{x_1} − a_2b_2σ²_{x_2} − (a_1b_2 − a_2b_1)σ_{x_1x_2} = [a_1  a_2] [ σ²_{x_1}    σ_{x_1x_2}   [ b_1
                                                                                          σ_{x_2x_1}  σ²_{x_2}  ]  −b_2 ] = AΣ_XBᵀ   (9)
More generally:

E[ Σ_{i=1}^n a_ix_i ] = Σ_{i=1}^n a_iE[x_i]   (10)

var( Σ_{i=1}^n a_ix_i ) = Σ_{i=1}^n a_i² var(x_i) + Σ_{i=1}^n Σ_{j=1}^n a_ia_j cov(x_i, x_j) ,  i ≠ j   (11)

For y_1 = Σ_{i=1}^n a_ix_i and y_2 = Σ_{i=1}^n b_ix_i:

cov(y_1, y_2) = Σ_{i=1}^n a_ib_i var(x_i) + Σ_{i=1}^n Σ_{j=1}^n a_ib_j cov(x_i, x_j) ,  i ≠ j   (12)

In matrix form, for the two linear functions above:

Σ_Y = [ a_1   a_2    [ σ²_{x_1}    σ_{x_1x_2}   [ a_1   b_1
        b_1  −b_2 ]    σ_{x_2x_1}  σ²_{x_2}  ]    a_2  −b_2 ] = GΣ_XGᵀ

    = [ a_1²σ²_{x_1} + a_2²σ²_{x_2} + 2a_1a_2σ_{x_1x_2}            a_1b_1σ²_{x_1} − a_2b_2σ²_{x_2} − (a_1b_2 − a_2b_1)σ_{x_1x_2}
        a_1b_1σ²_{x_1} − a_2b_2σ²_{x_2} − (a_1b_2 − a_2b_1)σ_{x_1x_2}    b_1²σ²_{x_1} + b_2²σ²_{x_2} − 2b_1b_2σ_{x_1x_2} ]
Another example:

y_1 = 2x_1 + x_2 − 2x_3 + 3
y_2 = 3x_1 − x_2 − 5

given

Σ_X = [ σ²_{x_1}    σ_{x_1x_2}  σ_{x_1x_3}     [  4.5   1.2  −1.3
        σ_{x_2x_1}  σ²_{x_2}    σ_{x_2x_3}  =     1.2   3.2  −2.1
        σ_{x_3x_1}  σ_{x_3x_2}  σ²_{x_3}  ]      −1.3  −2.1   6.3 ]

then derive

Σ_Y = [ σ²_{y_1}    σ_{y_1y_2}
        σ_{y_2y_1}  σ²_{y_2}  ]

In matrix form Y = GX + C with

Y = [ y_1      X = [ x_1      G = [ 2   1  −2      C = [  3
      y_2 ]          x_2            3  −1   0 ]         −5 ]
                     x_3 ]

Σ_Y = GΣ_XGᵀ = [ 70.0  28.6
                 28.6  36.5 ]

ρ_{y_1y_2} = σ_{y_1y_2}/(σ_{y_1} × σ_{y_2}) = 28.6/(√70.0 × √36.5) = 0.57
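This example can be checked numerically. The following is a small sketch (in plain Python, not part of the original notes) that propagates Σ_X through the linear map Y = GX + C via Σ_Y = GΣ_XGᵀ; the constant vector C drops out of the variance propagation.

```python
# Sketch: variance-covariance propagation Σ_Y = G Σ_X Gᵀ for the linear
# functions y1 = 2x1 + x2 − 2x3 + 3 and y2 = 3x1 − x2 − 5 of the example.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def transpose(a):
    return [list(col) for col in zip(*a)]

G = [[2, 1, -2],
     [3, -1, 0]]
Sigma_X = [[4.5, 1.2, -1.3],
           [1.2, 3.2, -2.1],
           [-1.3, -2.1, 6.3]]

Sigma_Y = matmul(matmul(G, Sigma_X), transpose(G))
rho = Sigma_Y[0][1] / (Sigma_Y[0][0] ** 0.5 * Sigma_Y[1][1] ** 0.5)
print(Sigma_Y)          # ≈ [[70.0, 28.6], [28.6, 36.5]]
print(round(rho, 2))    # 0.57
```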
(b) Let us consider a nonlinear function y = f(x). Expanding f(x) in a Taylor series about the point a:

f(x) ≈ f(a) + [∂f(x)/∂x](x − a) + (1/2!)[∂²f(x)/∂x²](x − a)² + (1/3!)[∂³f(x)/∂x³](x − a)³ + ⋯

with matrices: Y_{n×1} = F(X_{u×1})

Linearizing the function, we get the usual formulas for the standard error of a single observation and of the mean (values from a numerical example; the first figure is in the original units, the second in meters):

σ = ±√( [vv]/(n − 1) ) = ±2.55 = ±0.0767 m

σ_x̄ = ±√( [vv]/(n(n − 1)) ) = ±1.14 = ±0.0343 m

σ²_x̄ = [vv]/(n(n − 1)) = 1.30 = 0.00118 m²

S = area of sector = (1/2)r²α = (1/2)r²(α°/ρ°)

where ρ° = 180°/π is the constant used to convert the angle into radians; the angle in radians is a unitless quantity.
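As a sketch of how the linearized (first-order Taylor) propagation works for such a nonlinear function, the following example propagates assumed standard errors of r and α into the sector area S = ½r²α. None of the numbers below appear in the notes; they are invented for illustration only.

```python
import math

# Hedged sketch (numbers invented): linearized variance propagation for the
# nonlinear function S = 0.5 * r**2 * alpha (area of a circular sector).

r, sigma_r = 100.0, 0.02                   # radius in m and its std. error
alpha_deg, sigma_alpha_sec = 30.0, 5.0     # angle and its std. error in arcseconds

alpha = math.radians(alpha_deg)                      # the angle must be in radians
sigma_alpha = math.radians(sigma_alpha_sec / 3600.0)

# Jacobian of S with respect to (r, alpha):
dS_dr = r * alpha          # ∂S/∂r
dS_dalpha = 0.5 * r ** 2   # ∂S/∂α

# Uncorrelated observations: Σ_X is diagonal, so G Σ_X Gᵀ reduces to a sum.
var_S = dS_dr ** 2 * sigma_r ** 2 + dS_dalpha ** 2 * sigma_alpha ** 2
print(round(math.sqrt(var_S), 4))   # standard error of the sector area in m²
```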
If the variances and covariances of all the observed quantities are known, then the variance of the closing base x can be found in the following manner. Through the chain of triangles the closing base is computed from the starting base C by the law of sines:

x = C × ( ∏_{i=1}^n sin α_i ) / ( ∏_{i=1}^n sin β_i )

The variables are: C, α_1, α_2, ⋯, α_n, β_1, β_2, ⋯, β_n.

We differentiate partially with respect to all the variables:

∂x/∂C = ( ∏ sin α_i ) / ( ∏ sin β_i ) = x/C

∂x/∂α_1 = x × cot α_1 ,  ∂x/∂α_2 = x × cot α_2 , ⋯ ,  ∂x/∂α_n = x × cot α_n

∂x/∂β_1 = −x × cot β_1 ,  ∂x/∂β_2 = −x × cot β_2 , ⋯ ,  ∂x/∂β_n = −x × cot β_n

We know previously that for Y = F(X):

Σ_Y = GΣ_XGᵀ   where   G = ∂F(X)/∂X

In our case,

G = [ x/C   x cot α_1   x cot α_2  ⋯  x cot α_n   −x cot β_1   −x cot β_2  ⋯  −x cot β_n ]
The variance-covariance matrix of the observed quantities is given to be:

Σ_X = Σ_{Cαβ} =

[ σ²_C       σ_{Cα_1}     ⋯   σ_{Cα_n}     σ_{Cβ_1}     ⋯   σ_{Cβ_n}
  σ_{α_1C}   σ²_{α_1}     ⋯   σ_{α_1α_n}   σ_{α_1β_1}   ⋯   σ_{α_1β_n}
  ⋮          ⋮            ⋱   ⋮            ⋮            ⋱   ⋮
  σ_{α_nC}   σ_{α_nα_1}   ⋯   σ²_{α_n}     σ_{α_nβ_1}   ⋯   σ_{α_nβ_n}
  σ_{β_1C}   σ_{β_1α_1}   ⋯   σ_{β_1α_n}   σ²_{β_1}     ⋯   σ_{β_1β_n}
  ⋮          ⋮            ⋱   ⋮            ⋮            ⋱   ⋮
  σ_{β_nC}   σ_{β_nα_1}   ⋯   σ_{β_nα_n}   σ_{β_nβ_1}   ⋯   σ²_{β_n}   ]

If we assume that the base is uncorrelated with the angles and that angles in different triangles are uncorrelated,

σ_{Cα_i} = σ_{Cβ_i} = 0   and   σ_{α_iα_j} = σ_{β_iβ_j} = σ_{α_iβ_j} = 0  (i ≠ j),

only the variances and the covariances σ_{α_iβ_i} within each triangle remain.
In each triangle the three observed angles α′, β′, γ′ are adjusted by distributing the triangle closure w = α′ + β′ + γ′ − 180° equally:

α = α′ − w/3 ,  β = β′ − w/3 ,

i.e.

[ α     [  2/3  −1/3  −1/3   [ α′     [ 60°
  β ] =   −1/3   2/3  −1/3 ]   β′   +   60° ]
                               γ′ ]

so that, by propagation,

σ²_α = (4/9)σ²_{α′} + (1/9)σ²_{β′} + (1/9)σ²_{γ′} ,   σ_{αβ} = −(2/9)σ²_{α′} − (2/9)σ²_{β′} + (1/9)σ²_{γ′}

If we assume: σ²_{α′} = σ²_{β′} = σ²_{γ′} = σ²_A, then

Σ_{αβ} = [ σ²_α    σ_{αβ}     [  (2/3)σ²_A   −(1/3)σ²_A
           σ_{βα}  σ²_β  ] =    −(1/3)σ²_A    (2/3)σ²_A ]

σ²_α = (2/3)σ²_A ;   σ²_β = (2/3)σ²_A ;   σ_{αβ} = −(1/3)σ²_A

These variances and covariances are for the two adjusted angles in a triangle. The assumption is made that the observed angles are of equal variance and are not correlated with each other.
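The 2/3 and −1/3 factors can be verified exactly. A minimal check (not part of the notes), using exact fractions and Σ_αβ = GΣ_XGᵀ with Σ_X = σ²_A·I:

```python
from fractions import Fraction as F

# The adjusted angles α = α' − w/3, β = β' − w/3, with w = α' + β' + γ' − 180°,
# are the linear map G·(α', β', γ')ᵀ + const:
G = [[F(2, 3), F(-1, 3), F(-1, 3)],
     [F(-1, 3), F(2, 3), F(-1, 3)]]

# For uncorrelated observed angles of equal variance σ_A², Σ_X = σ_A²·I, so
# Σ_αβ = G Σ_X Gᵀ = σ_A²·G·Gᵀ (computed here with σ_A² = 1):
Sigma = [[sum(G[r][k] * G[s][k] for k in range(3)) for s in range(2)] for r in range(2)]
# Σ_αβ = σ_A² · [[2/3, −1/3], [−1/3, 2/3]]
print(Sigma)
```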
Substituting these variances and covariances into the propagation formula Σ_Y = GΣ_XGᵀ we get:

σ²_x = (x²/C²) × σ²_C + x²[ (2/3)σ²_A Σ cot²α_i + (2/3)σ²_A Σ cot²β_i + (2/3)σ²_A Σ cot α_i cot β_i ]

σ²_x = (x²/C²) × σ²_C + (2/3)x²σ²_A Σ( cot²α_i + cot²β_i + cot α_i cot β_i )

This is the variance of the closing base x in the case that all three angles have been measured with equal accuracy in each triangle and the measured angles are not correlated with each other.

Dividing both sides by x² we get a ratio:

σ²_x/x² = σ²_C/C² + (2/3)σ²_A Σ( cot²α_i + cot²β_i + cot α_i cot β_i )

But cot α = d(log sin α)/dα.

If σ_A is in seconds, then the 1″ difference in log sin α replaces cot α and the 1″ difference in log sin β replaces cot β. We then have:

σ²_x/x² = σ²_C/C² + (2/3)σ²_A Σ( δα_i² + δβ_i² + δα_iδβ_i )

where δα_i = 1″ difference in log sin α_i ;  δβ_i = 1″ difference in log sin β_i.

This is the basic formula for the so-called strength of figure in triangulation, which is as follows:

R = [(D − C)/D] Σ( δ²_α + δ_αδ_β + δ²_β )

where D = the number of new directions observed,
C = the number of geometric conditions that must be satisfied in the figure.
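The tabulated factors that follow can be reproduced from this formula. The sketch below (not part of the notes) computes δ²_A + δ_Aδ_B + δ²_B, taking δ as the change of log₁₀ sin θ for a 1″ change of θ, expressed in units of the sixth decimal place of the logarithm — an assumption that reproduces the tabulated values (e.g. 428 for the angle pair 10°/10°).

```python
import math

# Strength-of-figure factor δ_A² + δ_A·δ_B + δ_B² for two distance angles.
# δ(θ) = d(log10 sin θ)/dθ per 1 arcsecond, in units of 1e-6 (assumption).

SEC = math.radians(1.0 / 3600.0)   # one arcsecond in radians

def delta(angle_deg):
    """1-arcsecond difference of log10(sin θ), in units of the 6th decimal."""
    theta = math.radians(angle_deg)
    return 1e6 * SEC * math.log10(math.e) / math.tan(theta)

def factor(a_deg, b_deg):
    da, db = delta(a_deg), delta(b_deg)
    return da * da + da * db + db * db

print(round(factor(10, 10)))   # 428
print(round(factor(30, 30)))   # 40
print(round(factor(90, 90)))   # 0
```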
Factors for determining strength of figure: tabulated values of δ²_A + δ_Aδ_B + δ²_B. Each row corresponds to one distance angle A; the columns give the other distance angle B (the column headings did not survive extraction and are restored here as 10° to 90°):

  A°   10  12  14  16  18  20  22  24  26  28  30  35  40  45  50  55  60  65  70  75  80  85  90
 10°  428 359
 12°  359 295 253
 14°  315 253 214 187
 16°  284 225 187 162 143
 18°  262 204 168 143 126 113
 20°  245 189 153 130 113 100  91
 22°  232 177 142 119 103  91  81  74
 24°  221 167 134 111  95  83  74  67  61
 26°  213 160 126 104  89  77  68  61  56  51
 28°  206 153 120  99  83  72  63  57  51  47  43
 30°  199 148 115  94  79  68  59  53  48  43  40  33
 35°  188 137 106  85  71  60  52  46  41  37  33  27  23
 40°  179 129  99  79  65  54  47  41  36  32  29  23  19  16
 45°  172 124  93  74  60  50  43  37  32  28  25  20  16  13  11
 50°  167 119  89  70  57  47  39  34  29  26  23  18  14  11   9   8
 55°  162 115  86  67  54  44  37  32  27  24  21  16  12  10   8   7   5
 60°  159 112  83  64  51  43  35  30  25  22  19  14  11   9   7   5   4   4
 65°  155 109  80  62  49  40  33  28  24  21  18  13  10   7   6   5   4   3   2
 70°  152 106  78  60  48  38  32  27  23  19  17  12   9   7   5   4   3   2   2   1
 75°  150 104  76  58  46  37  30  25  21  18  16  11   8   6   4   3   2   2   1   1   1
 80°  147 102  74  57  45  36  29  24  20  17  15  10   7   5   4   3   2   1   1   1   0   0
 85°  145 100  73  55  43  34  28  23  19  16  14  10   7   5   3   2   2   1   1   0   0   0   0
 90°  143  98  71  54  42  33  27  22  19  16  13   9   6   4   3   2   1   1   1   0   0   0   0
 95°  140  96  70  53  41  32  26  22  18  15  13   9   6   4   3   2   1   1   0   0   0   0
100°  138  95  68  51  40  31  25  21  17  14  12   8   6   4   3   2   1   1   0   0   0
105°  136  93  67  50  39  30  25  20  17  14  12   8   5   4   2   2   1   1   0   0
110°  134  91  65  49  38  30  24  19  16  13  11   7   5   3   2   2   1   1   1
115°  132  89  64  48  37  29  23  19  15  13  11   7   5   3   2   2   1   1
120°  129  88  62  46  36  28  22  18  15  12  10   7   5   3   2   2   1
125°  127  86  61  45  35  27  22  18  14  12  10   7   5   4   3   2
130°  125  84  59  44  34  26  21  17  14  12  10   7   5   4   3
135°  122  82  58  43  33  26  21  17  14  12  10   7   5   4
140°  119  80  56  42  32  26  20  17  14  12  10   8   6
145°  116  77  55  41  32  25  21  17  15  13  11   9
150°  112  75  54  40  32  26  21  18  16  15  13
152°  111  75  53  40  32  26  22  19  17  16
154°  110  74  53  41  33  27  23  21  19
156°  108  74  54  42  34  28  25  22
158°  107  74  54  43  35  30  27
160°  107  74  56  45  38  33
162°  107  76  59  48  42
164°  109  79  63  54
166°  113  86  71
168°  122  98
170°  143
The weights are chosen inversely proportional to the variances, e.g.

p_1/p_2 = σ²_{l_2}/σ²_{l_1} = 3/4 ;   p_1/p_3 = σ²_{l_3}/σ²_{l_1} = 3/2 ;   or   p_1σ²_{l_1} = p_2σ²_{l_2} = p_3σ²_{l_3} = ⋯ = p_nσ²_{l_n} = σ²_0

∴ p_i = σ²_0/σ²_{l_i} :   p_1 = σ²_0/σ²_{l_1} = 3 ,   p_2 = σ²_0/σ²_{l_2} = 4 ,   p_3 = σ²_0/σ²_{l_3} = 2.
Let the estimate be a linear combination of the observations,

x̂ = a_1x_1 + a_2x_2 + ⋯ + a_nx_n

and E[x̂] = a_1E[x_1] + a_2E[x_2] + ⋯ + a_nE[x_n] = Σ a_iE[x_i]

where E[x_i] = E[x] = µ.

If Σ_{i=1}^n a_i = 1, then E[x̂] = Σ_{i=1}^n a_iE[x_i] = E[x_i] = E[x].

How should the a_i be chosen to obtain in some sense a best unbiased estimate? A possible procedure is to choose the a_i such that the estimate x̂ has a minimum variance.

Now, for uncorrelated observations,

var(x̂) = var( Σ a_ix_i ) = Σ a_i² var(x_i) = Σ a_i²σ_i²

Let us minimise var(x̂) subject to the condition Σ_{i=1}^n a_i = 1. Let

F = Σ a_i²σ_i² + k( Σ a_i − 1 ) = a_1²σ_1² + ka_1 + a_2²σ_2² + ka_2 + ⋯ + a_n²σ_n² + ka_n − k

Setting ∂F/∂a_i = 2a_iσ_i² + k = 0 gives a_i = −k/(2σ_i²).

From the condition Σ a_i = 1 we get: Σ −k/(2σ_j²) = 1, or k = −2/Σ(1/σ_j²).

Inserting this k into the formula for a_i, we get:

a_i = (1/σ_i²) / Σ(1/σ_j²)

If p_i = 1/σ_i², then a_i = p_i/Σp_j and x̂ = Σp_ix_i/Σp_j  (i = 1, 2, ⋯, n ;  j = 1, 2, ⋯, n).

This is a minimum variance solution.

We could go through the same derivation, instead taking k = C × σ²_0. Then we would get:

p_i = σ²_0/σ_i² , so that p_i = 1 when σ_i² = σ²_0, and again x̂ = Σp_ix_i/Σp_i.
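That the inverse-variance weights a_i = (1/σ_i²)/Σ(1/σ_j²) do better than other normalized weights can be seen numerically. A small sketch (the three variances below are invented for illustration):

```python
# Compare the minimum-variance weights a_i = (1/σ_i²)/Σ(1/σ_j²) with equal
# weights, for uncorrelated observations with assumed variances σ_i².

sigmas2 = [0.5, 2.0, 4.0]                     # assumed variances σ_i²

def var_of_mean(a, s2):
    """var(Σ a_i x_i) for uncorrelated x_i with variances s2."""
    return sum(ai * ai * si for ai, si in zip(a, s2))

inv = [1.0 / s for s in sigmas2]
a_opt = [w / sum(inv) for w in inv]           # minimum-variance weights
a_eq = [1.0 / len(sigmas2)] * len(sigmas2)    # equal weights

assert abs(sum(a_opt) - 1.0) < 1e-12          # both satisfy Σ a_i = 1
print(var_of_mean(a_opt, sigmas2), var_of_mean(a_eq, sigmas2))
```

The optimal variance equals 1/Σ(1/σ_j²), which is always at most the variance obtained with any other normalized weights.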
The variance of the weighted mean X_P = Σp_il_i/Σp follows by propagation:

σ²_{X_P} = (p_1/Σp)²σ²_{l_1} + (p_2/Σp)²σ²_{l_2} + ⋯ + (p_n/Σp)²σ²_{l_n} + 2(p_1p_2/(Σp)²)σ_{l_1l_2} + ⋯

         = (p_1²/(Σp)²)σ²_{l_1} + (p_2²/(Σp)²)σ²_{l_2} + ⋯ + (p_n²/(Σp)²)σ²_{l_n} + 2(p_1p_2/(Σp)²)σ_{l_1l_2} + ⋯

If we assume that there is no correlation, and that

p_1 = σ²_0/σ²_{l_1} ,  p_2 = σ²_0/σ²_{l_2} , ⋯ ,  p_n = σ²_0/σ²_{l_n} ,  we get

σ²_{X_P} = (σ⁴_0/σ⁴_{l_1})σ²_{l_1}/(Σp)² + (σ⁴_0/σ⁴_{l_2})σ²_{l_2}/(Σp)² + ⋯ + (σ⁴_0/σ⁴_{l_n})σ²_{l_n}/(Σp)²
         = ( σ²_0p_1 + σ²_0p_2 + ⋯ + σ²_0p_n )/(Σp)² = σ²_0 Σp/(Σp)² = σ²_0/Σp

∴ σ_{X_P} = σ_0/√(Σp)   or   σ̂_{X_P} = σ̂_0/√(Σp)
Observation   distance (ft)   σ (ft)    σ² (ft²)   p = 1/σ² (1/ft²)
    1         7829.614        ±0.020    0.00040      2500
    2         7829.657        ±0.014    0.00020      5000
    3         7829.668        ±0.020    0.00040      2500
    4         7829.628        ±0.010    0.00010     10000

X̄ = (2500 × 7829.614 + 5000 × 7829.657 + 2500 × 7829.668 + 10000 × 7829.628)/(2500 + 5000 + 2500 + 10000)

X̄ = 156592770.000/20000 = 7829.6385

σ²_{X̄} = 1/Σp = 1/20000 = 0.00005 ,   σ_{X̄} = ±0.00707 ft

Alternatively, if σ²_0 = 0.00040 (assumed value):

p_1 = σ²_0/σ²_{l_1} = 0.00040/0.00040 = 1 ,  p_2 = σ²_0/σ²_{l_2} = 0.00040/0.00020 = 2 ,  p_3 = σ²_0/σ²_{l_3} = 0.00040/0.00040 = 1 ,
p_4 = σ²_0/σ²_{l_4} = 0.00040/0.00010 = 4

X̄ = (1 × 7829.614 + 2 × 7829.657 + 1 × 7829.668 + 4 × 7829.628)/(1 + 2 + 1 + 4) = 7829.6385

Or, if we take out a common part, 7829.600, we get the observed values as:

1. 7829.600 + 0.014
2. 7829.600 + 0.057
3. 7829.600 + 0.068
4. 7829.600 + 0.028

X̄ = [1 × (7829.600 + 0.014) + 2 × (7829.600 + 0.057) + 1 × (7829.600 + 0.068) + 4 × (7829.600 + 0.028)]/(1 + 2 + 1 + 4)

or

X̄ = (8 × 7829.600)/8 + (1 × 0.014 + 2 × 0.057 + 1 × 0.068 + 4 × 0.028)/8

X̄ = 7829.600 + 0.3080/8 = 7829.600 + 0.0385 = 7829.6385

σ̂²_{X̄} = σ²_0/Σp = 0.00040/8 = 0.00005 ,   σ_{X̄} = ±0.00707 ft
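The computation above, as a short sketch in plain Python (using the rounded σ² values of the table, so the weights come out as 2500, 5000, 2500, 10000):

```python
# Weighted mean of the four measured distances and its standard error,
# X̄ = Σ p_i l_i / Σ p and σ_X̄ = sqrt(1/Σp), with p_i = 1/σ_i².

dists = [7829.614, 7829.657, 7829.668, 7829.628]     # feet
sigma2 = [0.00040, 0.00020, 0.00040, 0.00010]        # σ² from the table (ft²)

p = [1.0 / s2 for s2 in sigma2]                      # weights 2500, 5000, 2500, 10000
X = sum(pi * d for pi, d in zip(p, dists)) / sum(p)
sigma_X = (1.0 / sum(p)) ** 0.5

print(round(X, 4), round(sigma_X, 5))   # 7829.6385 0.00707
```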
S_I = ΔS_1 + ΔS_2 + ΔS_3 + ⋯ + ΔS_{n_I} , where ΔS_i is the length of the i-th section.

If the ΔS's are tape lengths, then S_I = n_I × ΔS, and if the variances of the ΔS's are all equal to σ_t² (the variance of one tape length), then the variance of S_I grows in proportion to n_I, so the weights can be taken inversely proportional to the lengths,

and X̄ = (8 × 30.0 + 4 × 30.3)/(8 + 4) = 30.10

Which one is the correct choice, a or b?
For the height of a point P leveled from three benchmarks A, B, C along lines of lengths S_A, S_B, S_C, the weighted mean (with weights 1/S) is:

H_P = [ (1/S_A)(H_A + Δh_A) + (1/S_B)(H_B + Δh_B) + (1/S_C)(H_C + Δh_C) ] / ( 1/S_A + 1/S_B + 1/S_C )

Δh_A = H_P − H_A ;   Δh_B = H_P − H_B ;   Δh_C = H_P − H_C

(Figure: point P connected by leveling lines of lengths S_A, S_B, S_C to benchmarks A, B, C.)
The difference between the estimate F̄ and the observed value F is a residual, and we can write:

V_{n×1} = F̄_{n×1} − F_{n×1} = AX̄ − F .  These are called observation equations.

For the expectation of the weighted sum of squares of the residuals one obtains

E[ Vᵀ Σ_F⁻¹ V ] = trace{ Σ_F⁻¹ E[(F − F⁰)(F − F⁰)ᵀ] } − trace{ Aᵀ Σ_F⁻¹ A E[(X̄ − X)(X̄ − X)ᵀ] }
               = trace{ Σ_F⁻¹ Σ_F } − trace{ Σ_X⁻¹ Σ_X } = trace I_{n×n} − trace I_{u×u} = n − u

which is why the a posteriori variance of unit weight is computed with n − u degrees of freedom.
v_3 = x̂_1 + x̂_2 + x̂_3 + x̂_4 − f_3          v_9 = x̂_3 + x̂_4 − f_9
v_4 = x̂_1 + x̂_4 − f_4                        v_10 = x̂_1 + x̂_2 + x̂_3 + x̂_4 − f_10
v_5 = x̂_2 + x̂_4 − f_5                        v_11 = x̂_2 + x̂_3 + x̂_4 − f_11
v_6 = x̂_2 + x̂_3 + x̂_4 − f_6                  v_12 = x̂_3 + x̂_4 − f_12

We write the observation equations in matrix form:

[ v_1  ]   [ 1 0 0 1 ]            [ 101511 mm ]
[ v_2  ]   [ 1 1 0 1 ]            [ 304220    ]
[ v_3  ]   [ 1 1 1 1 ]            [ 657119    ]
[ v_4  ]   [ 1 0 0 1 ]   [ x̂_1 ]  [ 101520    ]
[ v_5  ]   [ 0 1 0 1 ]   [ x̂_2 ]  [ 202718    ]
[ v_6  ] = [ 0 1 1 1 ] · [ x̂_3 ] − [ 555622    ]
[ v_7  ]   [ 1 1 0 1 ]   [ x̂_4 ]  [ 304230    ]
[ v_8  ]   [ 0 1 0 1 ]            [ 202715    ]
[ v_9  ]   [ 0 0 1 1 ]            [ 352915    ]
[ v_10 ]   [ 1 1 1 1 ]            [ 657111    ]
[ v_11 ]   [ 0 1 1 1 ]            [ 555620    ]
[ v_12 ]   [ 0 0 1 1 ]            [ 352914    ]

V_{12×1} = A_{12×4} X̂_{4×1} − F_{12×1}
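This linear system can be solved directly from the normal equations X̂ = (AᵀA)⁻¹AᵀF. The sketch below (not part of the notes; unit weights are assumed, since no weight matrix is given here) uses a plain Gaussian elimination and checks that the residuals satisfy AᵀV = 0:

```python
# Least squares solution of V = A·X̂ − F for the 12 observation equations above.

A = [[1,0,0,1],[1,1,0,1],[1,1,1,1],[1,0,0,1],[0,1,0,1],[0,1,1,1],
     [1,1,0,1],[0,1,0,1],[0,0,1,1],[1,1,1,1],[0,1,1,1],[0,0,1,1]]
F = [101511,304220,657119,101520,202718,555622,
     304230,202715,352915,657111,555620,352914]     # observed values in mm

n, u = len(A), len(A[0])
N = [[sum(A[k][i]*A[k][j] for k in range(n)) for j in range(u)] for i in range(u)]
U = [sum(A[k][i]*F[k] for k in range(n)) for i in range(u)]

def solve(M, b):
    """Solve M x = b by Gauss-Jordan elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    m = len(M)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][m] / M[i][i] for i in range(m)]

X = solve(N, U)                                     # normal equations N X̂ = AᵀF
V = [sum(a*x for a, x in zip(row, X)) - f for row, f in zip(A, F)]
# the least squares residuals must satisfy Aᵀ V = 0:
check = [sum(A[k][i]*V[k] for k in range(n)) for i in range(u)]
print([round(x, 1) for x in X], max(abs(c) for c in check))
```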
or

l_1ᵇ + v_1 = x̂_1ᵃ
l_2ᵇ + v_2 = x̂_2ᵃ
l_3ᵇ + v_3 = x̂_3ᵃ
l_4ᵇ + v_4 = x̂_1ᵃ + x̂_2ᵃ
l_5ᵇ + v_5 = x̂_2ᵃ + x̂_3ᵃ
l_6ᵇ + v_6 = 360° − x̂_1ᵃ − x̂_2ᵃ − x̂_3ᵃ

(superscript b = observed value, superscript a = adjusted value)

We can take approximate values for the parameters in order to have numbers which are easier to handle. Then we compute corrections to those approximate values:

x_1⁰ = 44°29′30″    x_2⁰ = 85°10′10″    x_3⁰ = 55°05′40″

l_4ᵇ + v_4 = (x_1⁰ + δx_1) + (x_2⁰ + δx_2)        l_5ᵇ + v_5 = (x_2⁰ + δx_2) + (x_3⁰ + δx_3)
l_6ᵇ + v_6 = 360° − (x_1⁰ + δx_1) − (x_2⁰ + δx_2) − (x_3⁰ + δx_3)

With l_i⁰ denoting the values computed from the approximate parameters, e.g.

v_5 = δx_2 + δx_3 + l_5⁰ − l_5ᵇ
v_6 = −δx_1 − δx_2 − δx_3 + l_6⁰ − l_6ᵇ
For the adjusted parameters:

σ̂²_{x̂_2} = σ̂²_0 × 0.50 = 0.22667 × 0.5 = 0.113335    σ̂_{x̂_2} = ±0.34″

σ̂²_{x̂_3} = σ̂²_0 × 0.50 = 0.22667 × 0.5 = 0.113335    σ̂_{x̂_3} = ±0.34″
∂F/∂X_a |_{X_a = X_0} = A ,   X_0 = approximate values of parameters

therefore,

VᵀPV = XᵀAᵀPAX + 2LᵀPAX + LᵀPL ⟹ minimum

Partially differentiating with respect to X:

(1/2) × ∂(VᵀPV)/∂X = AᵀPAX + AᵀPL ⟹ = 0

Let us say:

AᵀPA = N
AᵀPL = U

∴ NX + U = 0 (normal equations)

X = −N⁻¹U ,   X = −(AᵀPA)⁻¹AᵀPL

We already know:

VᵀPV = (AX + L)ᵀP(AX + L) = XᵀAᵀPAX + XᵀAᵀPL + LᵀPAX + LᵀPL

∴ VᵀPV = Xᵀ(AᵀPAX + AᵀPL) + LᵀPAX + LᵀPL ,  and as AᵀPAX + AᵀPL = 0,

VᵀPV = LᵀPAX + LᵀPL = UᵀX + LᵀPL ,   i.e.   VᵀPV = LᵀPL + XᵀU

We calculate the a posteriori variance-covariance matrices. We know that Σ_Y = GΣ_{L_b}Gᵀ. Since X = X_a − X_0 with X_0 constant,

Σ_X = Σ_{X_a} = σ²_0(AᵀPA)⁻¹

L̂_a = L_b + V = L_b + AX + L = L_b + AX + L_0 − L_b = AX + L_0   (L_0 constant)

Σ_{L̂_a} = GΣ_XGᵀ with G = A:

Σ_{L̂_a} = A σ²_0(AᵀPA)⁻¹ Aᵀ = σ²_0 A N⁻¹ Aᵀ
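A tiny numerical illustration of these formulas (not from the notes; the weights and misclosures below are invented): three direct observations of a single parameter, solved through the normal equations, with the identity VᵀPV = LᵀPL + XᵀU checked explicitly.

```python
# Normal-equation solution X = −(AᵀPA)⁻¹AᵀPL for a 3-observation, 1-parameter
# system, plus a check of VᵀPV = LᵀPL + XᵀU.

A = [[1.0], [1.0], [1.0]]            # three direct observations of one parameter
P = [1.0, 2.0, 4.0]                  # weights (diagonal P)
L = [0.03, -0.02, 0.01]              # misclosures L = L0 − Lb

N = sum(p * a[0] * a[0] for p, a in zip(P, A))       # AᵀPA (a scalar here)
U = sum(p * a[0] * l for p, a, l in zip(P, A, L))    # AᵀPL
X = -U / N

V = [a[0] * X + l for a, l in zip(A, L)]             # V = AX + L
vtpv = sum(p * v * v for p, v in zip(P, V))
ltpl = sum(p * l * l for p, l in zip(P, L))

assert abs(vtpv - (ltpl + X * U)) < 1e-12            # VᵀPV = LᵀPL + XᵀU
print(X, vtpv)
```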
1. Distances

Mathematical model:

S_ij^a = √[ (x_i^a − x_j^a)² + (y_i^a − y_j^a)² ]

Partial derivatives:

∂S_ij^a/∂x_i^a = (1/2) × 2(x_i^a − x_j^a)(+1) / √[(x_i^a − x_j^a)² + (y_i^a − y_j^a)²] = (x_i^a − x_j^a)/S_ij^a = sin t_ji^a

∂S_ij^a/∂y_i^a = (1/2) × 2(y_i^a − y_j^a)(+1) / √[(x_i^a − x_j^a)² + (y_i^a − y_j^a)²] = (y_i^a − y_j^a)/S_ij^a = cos t_ji^a

∂S_ij^a/∂x_j^a = (1/2) × 2(x_i^a − x_j^a)(−1) / √[(x_i^a − x_j^a)² + (y_i^a − y_j^a)²] = −(x_i^a − x_j^a)/S_ij^a = −sin t_ji^a

∂S_ij^a/∂y_j^a = (1/2) × 2(y_i^a − y_j^a)(−1) / √[(x_i^a − x_j^a)² + (y_i^a − y_j^a)²] = −(y_i^a − y_j^a)/S_ij^a = −cos t_ji^a

where t_ji = grid azimuth of the line ij from point j to point i.

Approximate coordinates:

x_i⁰, y_i⁰, x_j⁰, y_j⁰, x_k⁰, y_k⁰

S_ij⁰ = √[ (x_i⁰ − x_j⁰)² + (y_i⁰ − y_j⁰)² ]

S_ijᵇ = observed value of the distance between points i and j.

Observation equations:

v_Sij = [(x_i⁰ − x_j⁰)/S_ij⁰] δx_i + [(y_i⁰ − y_j⁰)/S_ij⁰] δy_i − [(x_i⁰ − x_j⁰)/S_ij⁰] δx_j − [(y_i⁰ − y_j⁰)/S_ij⁰] δy_j + S_ij⁰ − S_ijᵇ

or

v_Sij = sin t_ji⁰ δx_i + cos t_ji⁰ δy_i − sin t_ji⁰ δx_j − cos t_ji⁰ δy_j + S_ij⁰ − S_ijᵇ

where t_ji⁰ = tan⁻¹[ (x_i⁰ − x_j⁰)/(y_i⁰ − y_j⁰) ]
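The distance partials are easy to check with a finite difference. A short sketch (not part of the notes; the coordinates are borrowed from points P1 and P2 of the later numerical example):

```python
import math

# Check that ∂S/∂x_i equals sin t_ji = (x_i − x_j)/S by central differencing.

xi, yi = 842.281, 925.523
xj, yj = 1337.544, 996.249

def S(xi, yi, xj, yj):
    return math.hypot(xi - xj, yi - yj)

h = 1e-6
numeric = (S(xi + h, yi, xj, yj) - S(xi - h, yi, xj, yj)) / (2 * h)
analytic = (xi - xj) / S(xi, yi, xj, yj)    # = sin t_ji

print(round(numeric, 8), round(analytic, 8))
assert abs(numeric - analytic) < 1e-6
```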
2. Angles

Mathematical model:

α_ijk = t_jk − t_ji

t_jk = tan⁻¹[(x_k − x_j)/(y_k − y_j)] = cot⁻¹[(y_k − y_j)/(x_k − x_j)]
t_ji = tan⁻¹[(x_i − x_j)/(y_i − y_j)] = cot⁻¹[(y_i − y_j)/(x_i − x_j)]

where t_jk = grid azimuth from point j to point k.

α_ijk⁰ = tan⁻¹[(x_k⁰ − x_j⁰)/(y_k⁰ − y_j⁰)] − tan⁻¹[(x_i⁰ − x_j⁰)/(y_i⁰ − y_j⁰)]

We have to fix the coordinates of one point and an azimuth, or the coordinates of one point and one coordinate of a second point.

If the azimuth t_ji is fixed, then:

α_ijk^a = t_jk^a − t_ji(fixed) = tan⁻¹[(x_k^a − x_j^a)/(y_k^a − y_j^a)] − t_ji(fixed)

Then the partial derivatives have to be derived for this mathematical model. When coordinates are fixed, there will be no δx and δy for those coordinates.

If azimuths are observed, then the mathematical model is

t_ji^a = tan⁻¹[(x_i^a − x_j^a)/(y_i^a − y_j^a)]

and the corresponding partial derivatives are:

∂t_ji/∂x_i = (y_i − y_j)/S_ij² = cos t_ji/S_ij          ∂t_ji/∂y_i = −(x_i − x_j)/S_ij² = −sin t_ji/S_ij

∂t_ji/∂x_j = −(y_i − y_j)/S_ij² = −cos t_ji/S_ij        ∂t_ji/∂y_j = (x_i − x_j)/S_ij² = sin t_ji/S_ij

The observation equation for an azimuth is:

v″_tji = ρ″[(y_i⁰ − y_j⁰)/S_ij⁰²] δx_i − ρ″[(x_i⁰ − x_j⁰)/S_ij⁰²] δy_i − ρ″[(y_i⁰ − y_j⁰)/S_ij⁰²] δx_j + ρ″[(x_i⁰ − x_j⁰)/S_ij⁰²] δy_j + tan⁻¹[(x_i⁰ − x_j⁰)/(y_i⁰ − y_j⁰)] − t_jiᵇ″
GIVEN:

point      x (m)       y (m)
P1          842.281     925.523
P2         1337.544     996.249
P3         1831.727     723.962
P4          840.408     658.345

(Figure: point P surrounded by the four given points P1, P2, P3, P4.)

OBSERVED:

l_b    observed value    quantity     σ̂
1      244.512 m         P1–P         ±0.012 m
2      321.570 m         P2–P         ±0.016 m
3      773.154 m         P3–P         ±0.038 m
4      279.992 m         P4–P         ±0.014 m
5      123°38′01.4″      ∠P1 P P2     ±2.0″

Compute coordinates for point P and related data.
(1) Mathematical model

L_a = F(X_a) ,   X_a = [ x_P^a ; y_P^a ] = coordinates of the point P

l_1^a = √[ (x_{P1} − x_P^a)² + (y_{P1} − y_P^a)² ]
l_2^a = √[ (x_{P2} − x_P^a)² + (y_{P2} − y_P^a)² ]
l_3^a = √[ (x_{P3} − x_P^a)² + (y_{P3} − y_P^a)² ]
l_4^a = √[ (x_{P4} − x_P^a)² + (y_{P4} − y_P^a)² ]
l_5^a = t_{PP2}^a − t_{PP1}^a ,  with

t_{PP1} = tan⁻¹[ (x_{P1} − x_P^a)/(y_{P1} − y_P^a) ] = grid azimuth from P to P1
t_{PP2} = tan⁻¹[ (x_{P2} − x_P^a)/(y_{P2} − y_P^a) ] = grid azimuth from P to P2
(2) Approximate values of parameters

X_0 = [ x_{P0} ; y_{P0} ] = [ 1065.200 ; 825.200 ]

(3) Linearized equations (by using Taylor's series)

V = AX + L ,   A = ∂F/∂X_a |_{X_a = X_0} ,   L = L_0 − L_b ,   L_0 = F(X_0)

X_a = X + X_0 = [ δx + x_{P0} ; δy + y_{P0} ] ,   X = [ δx ; δy ]

The partial derivatives are, e.g.,

∂l_1^a/∂x_P^a = −(x_{P1} − x_{P0})/l_1⁰

and similarly

∂l_1^a/∂y_P^a = −(y_{P1} − y_{P0})/l_1⁰ ;   ∂l_2^a/∂x_P^a = −(x_{P2} − x_{P0})/l_2⁰ ;   ∂l_2^a/∂y_P^a = −(y_{P2} − y_{P0})/l_2⁰ ;   etc.
For the observed angle (in seconds):

v_5″ = ρ″[ −(y_{P2} − y_{P0})/l_2⁰² + (y_{P1} − y_{P0})/l_1⁰² ] δx + ρ″[ (x_{P2} − x_{P0})/l_2⁰² − (x_{P1} − x_{P0})/l_1⁰² ] δy + l_5⁰″ − l_5ᵇ″
VᵀPV = LᵀPL + XᵀU = 0.8436

σ̂²_0 = 0.8436/(5 − 2) = 0.2812 ,   σ̂_0 = ±0.5303

V = AX + L = [ −0.00197 m          L̂_a = F(X_a) = [ 244.510 m
              −0.00550 m                           321.564 m
              −0.02726 m                           773.127 m
              −0.00597 m                           279.986 m
              +0.011″  ]                           123°38′01.41″ ]
Let us say ∂F/∂L_a |_{L_a = L_b} = B and F(L_b) = W, so that the linearized condition equations read BV + W = 0.
by Urho A. Uotila
The Ohio State University
The basic formulas are in the usual observation equation system as follows:

(1) L_a = F(X_a)
(2) L_a = L_b + V
(3) X_a = X_0 + X
(4) ∂F/∂X_a = A
(5) L_0 = F(X_0)
(6) L = L_0 − L_b
(7) P = σ²_0 Σ_{L_b}⁻¹
(8) X = −(AᵀPA)⁻¹AᵀPL
(9) V = AX + L = (P⁻¹ − AN⁻¹Aᵀ)PL = Q_V PL
(10) VᵀPV = LᵀPL + XᵀAᵀPL
(11) σ̂²_0 = VᵀPV/(n − u)
(12) Σ_{X_a} = σ²_0(AᵀPA)⁻¹

For two sets of observations:

(17) V_1 = A_1X + L_1
(18) V_2 = A_2X + L_2
(20) σ̂²_0 = VᵀPV/(n_1 + n_2 − u)
(21) Σ_{X_a} = σ²_0(A_1ᵀP_1A_1 + A_2ᵀP_2A_2)⁻¹
(22) Σ_{L_a} = [ A_1 ; A_2 ] Σ_{X_a} [ A_1ᵀ  A_2ᵀ ]
2. Computing the effect of new, additional observations on the parameters and related quantities.

A sequential solution can be developed from Eqs. (16)-(21). By a sequential solution we mean here that we have a solution for the first system as follows:

L_a1 = F_1(X_a), using observations L_b1:

(23) X* = −(A_1ᵀP_1A_1)⁻¹A_1ᵀP_1L_1 = −N_1⁻¹A_1ᵀP_1L_1
(24) Σ*_{X_a} = σ²_0 N_1⁻¹
(25) (VᵀPV)* = L_1ᵀP_1L_1 + X*ᵀA_1ᵀP_1L_1

The first term is equal to X*. The two last terms can be considered as ΔX in the expression:

(27) X = X* + ΔX ,  where

ΔX = N_1⁻¹A_2ᵀ(P_2⁻¹ + A_2N_1⁻¹A_2ᵀ)⁻¹A_2N_1⁻¹(A_1ᵀP_1L_1 + A_2ᵀP_2L_2) − N_1⁻¹A_2ᵀP_2L_2
   = −N_1⁻¹A_2ᵀ(P_2⁻¹ + A_2N_1⁻¹A_2ᵀ)⁻¹A_2X* + N_1⁻¹A_2ᵀ(P_2⁻¹ + A_2N_1⁻¹A_2ᵀ)⁻¹A_2N_1⁻¹A_2ᵀP_2L_2 − N_1⁻¹A_2ᵀP_2L_2

Using Eq.(20) and (28) from the "Useful Matrix Equalities" (U.M.E.), the last two terms of the above expression can be written as:

N_1⁻¹A_2ᵀ(P_2⁻¹ + A_2N_1⁻¹A_2ᵀ)⁻¹A_2N_1⁻¹A_2ᵀP_2L_2 − N_1⁻¹A_2ᵀP_2L_2
= N_1⁻¹A_2ᵀ(P_2⁻¹ + A_2N_1⁻¹A_2ᵀ)⁻¹[ A_2N_1⁻¹A_2ᵀ − (P_2⁻¹ + A_2N_1⁻¹A_2ᵀ) ]P_2L_2
= −N_1⁻¹A_2ᵀ(P_2⁻¹ + A_2N_1⁻¹A_2ᵀ)⁻¹L_2

Combining the expressions we get finally:

(28) X = X* − N_1⁻¹A_2ᵀ(A_2N_1⁻¹A_2ᵀ + P_2⁻¹)⁻¹(A_2X* + L_2)

From Eq.(28) or (21) it can be derived directly:

(29) Σ_{X_a} = σ²_0N_1⁻¹ − σ²_0N_1⁻¹A_2ᵀ(A_2N_1⁻¹A_2ᵀ + P_2⁻¹)⁻¹A_2N_1⁻¹
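Eq.(28) can be checked numerically: the sequential update must give exactly the batch solution. The following sketch (not from the notes) uses a tiny invented 2-parameter system with unit weights (P_1 = P_2 = I, so P_2⁻¹ = I):

```python
# Sequential update Eq.(28) vs. the batch least squares solution.

def matmul(a, b):
    return [[sum(x * y for x, y in zip(r, c)) for c in zip(*b)] for r in a]

def T(a):
    return [list(c) for c in zip(*a)]

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # first set (3 obs, 2 params)
L1 = [[0.02], [-0.01], [0.03]]
A2 = [[1.0, -1.0], [2.0, 1.0]]              # second, added set (2 obs)
L2 = [[-0.02], [0.01]]

N1 = matmul(T(A1), A1)
X_star = [[-v[0]] for v in matmul(inv2(N1), matmul(T(A1), L1))]   # X* = −N1⁻¹A1ᵀL1

# sequential update, Eq.(28), with P2⁻¹ = I:
Q = inv2([[r[0] + p[0], r[1] + p[1]] for r, p in
          zip(matmul(matmul(A2, inv2(N1)), T(A2)), [[1.0, 0.0], [0.0, 1.0]])])
mis = [[matmul(A2, X_star)[i][0] + L2[i][0]] for i in range(2)]    # A2X* + L2
dX = matmul(matmul(matmul(inv2(N1), T(A2)), Q), mis)
X_seq = [[X_star[i][0] - dX[i][0]] for i in range(2)]

# batch solution with both observation sets:
N = [[N1[i][j] + matmul(T(A2), A2)[i][j] for j in range(2)] for i in range(2)]
U = [[matmul(T(A1), L1)[i][0] + matmul(T(A2), L2)[i][0]] for i in range(2)]
X_batch = [[-v[0]] for v in matmul(inv2(N), U)]

assert all(abs(X_seq[i][0] - X_batch[i][0]) < 1e-12 for i in range(2))
print(X_seq)
```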
(17) V_1 = A_1X + L_1
(18) V_2 = A_2X + L_2
(19) VᵀPV = L_1ᵀP_1L_1 + L_2ᵀP_2L_2 + Xᵀ(A_1ᵀP_1L_1 + A_2ᵀP_2L_2)
(20) σ̂²_0 = VᵀPV/(n_1 + n_2 − u)
(21) Σ_{X_a} = σ²_0(A_1ᵀP_1A_1 + A_2ᵀP_2A_2)⁻¹
(22) Σ_{L_a} = [ A_1 ; A_2 ] Σ_{X_a} [ A_1ᵀ  A_2ᵀ ]
As said before, we now want to remove the second set of observations, L_b2, and derive

(40) X_1 = −(A_1ᵀP_1A_1)⁻¹A_1ᵀP_1L_1

Using Eq.(50) for ΔX and Eq.(44) for A_1ᵀP_1L_1, solving from Eq.(18): L_2 − V_2 = −A_2X = A_2N⁻¹AᵀPL, and from Eq.(48): A_2N⁻¹A_2ᵀ = P_2⁻¹ − Q_{V2V2}, we get finally:

(57) ΔVᵀPV = −V_2ᵀQ_{V2V2}⁻¹V_2

Using Eq.(20) from the U.M.E. and Eq.(43) we get:

(58) N_1⁻¹ = N⁻¹ − N⁻¹A_2ᵀ(A_2N⁻¹A_2ᵀ − P_2⁻¹)⁻¹A_2N⁻¹
 or  N_1⁻¹ = N⁻¹ + N⁻¹A_2ᵀQ_{V2V2}⁻¹A_2N⁻¹

and we get for the variance-covariance matrix of X_1:

(59) Σ_{X_1a} = σ²_0N_1⁻¹

General formulas for the removal of a set of observations, L_bi, which are not correlated with the remaining observations, can be formulated as follows:

(60) N_i⁻¹ = N_{i−1}⁻¹ − N_{i−1}⁻¹A_iᵀ(A_iN_{i−1}⁻¹A_iᵀ − P_i⁻¹)⁻¹A_iN_{i−1}⁻¹
(61) ΔX_i = −N_{i−1}⁻¹A_iᵀ(A_iN_{i−1}⁻¹A_iᵀ − P_i⁻¹)⁻¹(A_iX_{i−1} + L_i)
(62) Σ_{X_ai} = σ²_0N_i⁻¹
(63) X_i = X_{i−1} + ΔX_i
(64) (VᵀPV)_i = (VᵀPV)_{i−1} − ΔX_iᵀ(AᵀPL)_{i−1} − X_iᵀA_iᵀP_iL_i − L_iᵀP_iL_i

It should be noted once more that the same numerical values must be used for σ²_0 and X_0 throughout the sequential procedures.
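The downdating formula Eq.(60) can also be verified numerically: removing a set of observations from N⁻¹ must reproduce the inverse of N_1 = A_1ᵀP_1A_1. A sketch with an invented system and unit weights (so P_2⁻¹ = I):

```python
# Check of Eq.(60): N1⁻¹ = N⁻¹ − N⁻¹A2ᵀ(A2N⁻¹A2ᵀ − P2⁻¹)⁻¹A2N⁻¹ with P2 = I.

def matmul(a, b):
    return [[sum(x * y for x, y in zip(r, c)) for c in zip(*b)] for r in a]

def T(a):
    return [list(c) for c in zip(*a)]

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # observations kept
A2 = [[1.0, -1.0], [2.0, 1.0]]              # observations to be removed

N1 = matmul(T(A1), A1)
N = [[N1[i][j] + matmul(T(A2), A2)[i][j] for j in range(2)] for i in range(2)]
Ninv = inv2(N)

M = matmul(matmul(A2, Ninv), T(A2))                      # A2 N⁻¹ A2ᵀ
M = inv2([[M[0][0] - 1.0, M[0][1]], [M[1][0], M[1][1] - 1.0]])   # (… − P2⁻¹)⁻¹
corr = matmul(matmul(matmul(matmul(Ninv, T(A2)), M), A2), Ninv)
N1inv_down = [[Ninv[i][j] - corr[i][j] for j in range(2)] for i in range(2)]

N1inv = inv2(N1)
assert all(abs(N1inv_down[i][j] - N1inv[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(N1inv_down)
```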
5. Least squares solution in the case of observation equations with weighted parameters.

If some of the parameters have been observed, the corresponding observation equations are V_2 = A_2X + L_2 with

(66) L_2 = X_0 − L_b2

where L_b2 has the values of the parameters for which the weights are given. If the values of L_b2 are equal to the X_0 values, then of course L_2 = 0. Using Eq.(16), we have the solution:

(67) X = −(A_1ᵀP_1A_1 + A_2ᵀP_XA_2)⁻¹(A_1ᵀP_1L_1 + A_2ᵀP_XL_2)
The A2 -matrix is a special type of matrix. It has as many rows as there are observed
parameters and as many columns as there are parameters. Each row has only one
element different from zero and its value is +1. The +1 appears once in each column
corresponding to those parameters, which will have a weight and are considered as an
observed quantity. The dimensions of A_2ᵀP_XA_2 must be the same as the dimensions of A_1ᵀP_1A_1. In the case that P_X is a diagonal matrix, we could use the notation:

A_2ᵀP_XA_2 = P̄_X

The matrix P̄_X has the same dimensions as A_1ᵀP_1A_1 and has non-zero elements along the diagonal corresponding to the parameters for which the weights were given; the values of those elements are the weights themselves. Under these circumstances the expression

A_1ᵀP_1A_1 + A_2ᵀP_XA_2 = A_1ᵀP_1A_1 + P̄_X

would mean that the weights are added directly to the corresponding diagonal elements of the normal equation matrix, N. If we further assume that the values of the parameters, L_b2, for which the weights are given, have the same values as the approximate values, X_0, then L_2 will have all elements equal to zero. Under these conditions, equations (67), (68), and (69) can be written as follows:
(70) X = −(A_1ᵀP_1A_1 + P̄_X)⁻¹A_1ᵀP_1L_1

(71) VᵀPV = V_1ᵀP_1V_1 + V_XᵀP̄_XV_X = L_1ᵀP_1L_1 + XᵀA_1ᵀP_1L_1
It should be pointed out that Eq.(70) can be utilized very effectively in the case where one wishes to constrain some parameters. In that case we give a relatively high weight to the parameter, which means in practice that a weight is added to the corresponding diagonal element of the normal equation matrix. A more rigorous way
As can be seen, this system is becoming quite complicated and therefore might not be the preferable approach.
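The high-weight constraining described above is easy to demonstrate. A sketch (all numbers invented): adding a large weight p̄ to a parameter's diagonal element of the normal matrix pins that parameter near its approximate value (L_2 = 0 is assumed, so only the diagonal changes).

```python
# Constraining a parameter by a large diagonal weight, as in Eq.(70).

def solve2(N, U):
    """x = −N⁻¹U for a 2x2 normal matrix N."""
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    x0 = -(N[1][1] * U[0] - N[0][1] * U[1]) / det
    x1 = -(-N[1][0] * U[0] + N[0][0] * U[1]) / det
    return [x0, x1]

N = [[4.0, 1.0], [1.0, 3.0]]      # A1ᵀP1A1 of some small system (made up)
U = [0.8, -0.5]                   # A1ᵀP1L1 (made up)

free = solve2(N, U)

# give parameter 2 a very large weight p̄ (its observed value = X0, so L2 = 0):
p_bar = 1e8
constrained = solve2([[N[0][0], N[0][1]], [N[1][0], N[1][1] + p_bar]], U)

print(free, constrained)   # the constrained correction x2 is pushed to ~0
```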
Consider the matrix N partitioned into submatrices:

N = [ N_11  N_12
      N_21  N_22 ]

N_12 and N_21 do not have to be square matrices.

Let us have a Q-matrix which is the inverse of the N-matrix. Its submatrices have the same sizes as the corresponding submatrices of N:

N⁻¹ = Q = [ Q_11  Q_12
            Q_21  Q_22 ]

We further assume that the determinants |N|, |N_11| and |N_22| are not equal to zero. Either |N_11| or |N_22| could be equal to zero in some of the equalities derived below, but not both simultaneously, and |N| must not be equal to zero. If |N_22| is not equal to zero, then |N| = |N_22| × |N_11 − N_12N_22⁻¹N_21|. On the other hand, if |N_11| is not equal to zero, then |N| = |N_11| × |N_22 − N_21N_11⁻¹N_12|.

We have the relations:

NQ = [ N_11  N_12   [ Q_11  Q_12       = I    and    QN = [ Q_11  Q_12   [ N_11  N_12       = I ,
       N_21  N_22 ]   Q_21  Q_22 ]                          Q_21  Q_22 ]   N_21  N_22 ]
from which we get:
(1) N11Q11 + N12Q21 = I
(2) N11Q12 + N12Q22 = 0
(3) N 21Q11 + N 22Q21 = 0 from NQ = I
(4) N 21Q12 + N 22Q22 = I
(5) Q11 N11 + Q12 N 21 = I
(6) Q11 N12 + Q12 N 22 = 0
(7) Q21 N11 + Q22 N 21 = 0 from QN = I
(8) Q21 N12 + Q22 N 22 = I
We take Eq.(3) and multiply both sides by N_22⁻¹ and we get:

N_22⁻¹N_21Q_11 + Q_21 = 0 ,   i.e.   Q_21 = −N_22⁻¹N_21Q_11

By inserting this Q_21 into Eq.(1) we get:

(N_11 − N_12N_22⁻¹N_21)Q_11 = I

from this we get:

(9) Q_11 = (N_11 − N_12N_22⁻¹N_21)⁻¹

Using a similar technique we get from Eq.(2) and Eq.(4), or from Eq.(7) and (8):

(10) Q_22 = (N_22 − N_21N_11⁻¹N_12)⁻¹

Collecting the equivalent forms of the submatrices:

Q_11 = (N_11 − N_12N_22⁻¹N_21)⁻¹ = N_11⁻¹ + N_11⁻¹N_12Q_22N_21N_11⁻¹
Q_22 = (N_22 − N_21N_11⁻¹N_12)⁻¹ = N_22⁻¹ + N_22⁻¹N_21Q_11N_12N_22⁻¹
Q_12 = −(N_11 − N_12N_22⁻¹N_21)⁻¹N_12N_22⁻¹ = −Q_11N_12N_22⁻¹ = −N_11⁻¹N_12(N_22 − N_21N_11⁻¹N_12)⁻¹ = −N_11⁻¹N_12Q_22
Q_21 = −N_22⁻¹N_21(N_11 − N_12N_22⁻¹N_21)⁻¹ = −N_22⁻¹N_21Q_11 = −(N_22 − N_21N_11⁻¹N_12)⁻¹N_21N_11⁻¹ = −Q_22N_21N_11⁻¹

In the case that N_22 = 0, we have Q_22 = −(N_21N_11⁻¹N_12)⁻¹,
Q_11 = N_11⁻¹ + N_11⁻¹N_12Q_22N_21N_11⁻¹ ,  Q_12 = −N_11⁻¹N_12Q_22  and  Q_21 = −Q_22N_21N_11⁻¹.
If we have a matrix equation

[ N_11  N_12   [ X_1     = − [ U_1
  N_21  N_22 ]   X_2 ]         U_2 ]

we get:

[ X_1     = − [ N_11  N_12  ⁻¹ [ U_1     = − [ Q_11  Q_12   [ U_1
  X_2 ]         N_21  N_22 ]     U_2 ]         Q_21  Q_22 ]   U_2 ]

or

X_1 = −Q_11U_1 − Q_12U_2
X_2 = −Q_21U_1 − Q_22U_2

In the solution we can use the above expressions for the Q's and solve for X_1 or X_2 or for both, using only submatrices. We can also write, for example,

X_1 = −Q_11U_1 − Q_12U_2 = −Q_11U_1 + Q_11N_12N_22⁻¹U_2 = Q_11(−U_1 + N_12N_22⁻¹U_2) ,  etc.
Using the above derivations we can also write (note: it is required that the determinants of N, N_11, N_22 are not equal to zero):

(17) (N_11 − N_12N_22⁻¹N_21)⁻¹ = N_11⁻¹ + N_11⁻¹N_12(N_22 − N_21N_11⁻¹N_12)⁻¹N_21N_11⁻¹
(18) (N_11⁻¹ − N_12N_22⁻¹N_21)⁻¹ = N_11 + N_11N_12(N_22 − N_21N_11N_12)⁻¹N_21N_11
(19) (N_11⁻¹ − N_12N_22N_21)⁻¹ = N_11 + N_11N_12(N_22⁻¹ − N_21N_11N_12)⁻¹N_21N_11
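A minimal check of Eqs.(9)-(12) (not part of the original notes): with 1×1 blocks every submatrix is a scalar, and exact fractions confirm that the blockwise inverse satisfies NQ = I.

```python
from fractions import Fraction as F

# Blockwise inverse from the Schur complements, for N = [[4, 1], [2, 3]]
# treated as four 1x1 blocks.

n11, n12, n21, n22 = F(4), F(1), F(2), F(3)

q11 = 1 / (n11 - n12 * (1 / n22) * n21)       # Eq.(9)
q22 = 1 / (n22 - n21 * (1 / n11) * n12)       # Eq.(10)
q12 = -(1 / n11) * n12 * q22                  # Eq.(11)
q21 = -(1 / n22) * n21 * q11                  # Eq.(12)

# check N·Q = I blockwise:
assert n11 * q11 + n12 * q21 == 1 and n11 * q12 + n12 * q22 == 0
assert n21 * q11 + n22 * q21 == 0 and n21 * q12 + n22 * q22 == 1
print(q11, q12, q21, q22)
```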