CHAPTER SEVENTEEN - APPENDIX VI
THE LEAST SQUARES ADJUSTMENT WITH INDIRECT OBSERVATIONS

17.1. The method of the least squares
17.2. Adjustment with the method of the least squares
17.2.1. Linear functions
17.2.2. Non-linear functions
17.2.a. Observations of equal weight
17.2.b. Observations of different weight
17.3. Estimate of the variance factor
17.4. Test on the variance factor
17.5. Estimate of the accuracy of the unknowns
17.6. Operative procedure
17.7. The ellipse of error
17.8. Remarkable observation equations in surveying
17.9. The equation of the distance
17.10. The bearing and the direction equations
17.11. The equation of the angle
17.12. Three-dimensional adjustment
17.13. Error ellipsoid
G. Fangi Photogrammetry Notes L.S.Adjst. Ch. XVII - pg. 2
Contents
In practice, the "surveyor" makes a number of measurements larger than is strictly necessary; these measurements are slightly discordant because of the observation errors. The problem of determining unknown quantities that satisfy, at the same time, the slightly inconsistent observations is not solvable from the algebraic point of view. In order to restore the congruency of the physical model, other unknown quantities are added: the corrections of the observations. The resulting system is however indeterminate, because the unknowns are more numerous than the equations. To get the solution of such a system, we impose the condition of minimum on the sum of the squares of the corrections, a condition known as the least squares criterion. At first the unknown parameters are estimated in this manner, then the corrections of the observations are determined. The criterion is presented here in the version with indirect observations only, since it is the most used, because it is easier to programme.
At first all the observations are regarded as being of equal accuracy, or of unit weight; then we examine the case where the observations are characterised by different accuracies, or different weights.
The system of equations must be linear. We first consider the case of observation equations that are already linear, then we pass to the case where the observation equations are non-linear; it is then necessary to proceed to their linearisation.
The theory of the error ellipse is presented.
The most common observation equations in surveying are given in linearised form: the equation of the distance, the equation of the bearing, the equation of the direction and the equation of the angle.
Finally the three-dimensional adjustment is treated, followed by the error ellipsoid.
THE METHOD OF THE LEAST SQUARES
The method of the least squares has been developed to solve overdetermined problems; it is based on the principle of maximum likelihood estimation. We set two fundamental hypotheses:
1. the observations have only stochastic errors, which follow the Gauss normal distribution; and
2. the observations are independent from one another.
Let us consider the following example. Of the quantity \mu we make the n independent observations l_1, l_2, \ldots, l_n, and \nu_1, \nu_2, \ldots, \nu_n are the observation errors:

\nu_1 = l_1 - \mu
\nu_2 = l_2 - \mu
\ldots
\nu_n = l_n - \mu

The problem is assessing the most probable value for \mu on the basis of the n observations (l_1, l_2, \ldots, l_n), which is equivalent to finding the most probable values for (\nu_1, \nu_2, \ldots, \nu_n). The latter are n stochastic variables, each of which follows the normal distribution N(0, \sigma_i), characterised by the probability density function

f(\nu_i) = \frac{1}{\sigma_i \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\nu_i}{\sigma_i}\right)^2}
Since the variables are independent from one another, the corresponding joint probability density function is given by the product of the separate functions:

f(\nu_1, \nu_2, \ldots, \nu_n) = \frac{1}{\sigma_1 \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\nu_1}{\sigma_1}\right)^2} \cdot \ldots \cdot \frac{1}{\sigma_n \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\nu_n}{\sigma_n}\right)^2} = \left(\frac{1}{\sqrt{2\pi}}\right)^n \frac{1}{\sigma_1} \cdots \frac{1}{\sigma_n}\; e^{-\frac{1}{2}\sum_{i=1}^{n}\left(\frac{\nu_i}{\sigma_i}\right)^2}
The function f(\nu_1, \nu_2, \ldots, \nu_n) reaches its maximum when its exponent has minimum value:

\frac{1}{2}\sum_{i=1}^{n} \frac{\nu_i^2}{\sigma_i^2} \Rightarrow \text{minimum}
in other words, when the function sum of the squares of the residuals is minimum (least squares). Let \sigma_0 be an arbitrary constant, which does not modify the result of the criterion of the least squares:

\sigma_0^2 \sum_{i=1}^{n} \frac{\nu_i^2}{\sigma_i^2} \Rightarrow \text{minimum}

The \sigma_0 factor is essentially a scale factor. Having defined the quantity

p_i = \frac{\sigma_0^2}{\sigma_i^2}

called the weight of the observation, the condition of the least squares becomes:

\sum_{i=1}^{n} p_i \nu_i^2 = V^T P V \Rightarrow \text{minimum}

having indicated with V the vector of the residuals and with P the (n×n) weight matrix.
The weight of a single observation is inversely proportional to the variance of the observation. An observation having variance \sigma_0^2 has weight equal to one, and therefore \sigma_0^2 is also called the variance of unit weight.
Adjustment with the criterion of the Least Squares (Least Squares Adjustment)
There are two methods for the solution of overdetermined problems with the criterion of the
least squares:
- condition equations (Standard Problem 1)
- indirect observations (Standard Problem 2)
Of the two, here we briefly summarise only the second, since it is by far more employed than the first one.
Let a quantity X (vector (u×1), the independent variable) be related to the quantity L (the dependent variable) by the relationship

A X = L

also called the physical model. Of the quantity L we know some random realisations (vector (n×1)), called observations, with which we want to estimate the most probable value of X. In the case that the observations are as many as the unknowns, the problem has as many equations as unknowns, and the solution is unique, assuming that the system has full rank. Normally, on the contrary, as in the large majority of problems of a geodetic-photogrammetric nature, there are more observations than unknowns: the difference
r = n - u
is called redundancy.
A) We assume first that all the observations have equal quality or, as we say, have unit weight.
1st case: A linear

The system of linear equations

\sum_{k=1}^{u} a_{i,k} X_k = l_i \quad \text{for } i = 1, \ldots, n

in the case that n = u is isodetermined and the solution is unique. In the case that n > u, it becomes an impossible system, in the sense that it is impossible to find a set X with u components capable of satisfying at the same time n equations that are independent from one another.
The matrix A contains the (linear) coefficients of the unknowns X and is called the matrix of the observations or design matrix. In order to re-establish the congruence of the problem, we then add to the observations the vector V of the residuals:

\sum_{k=1}^{u} a_{i,k} X_k = l_i + v_i
and in vectorial notation, underlining the stochastic quantities:

A X = l + v
(n,u)(u,1) = (n,1) + (n,1)

To the original u unknowns we add the n unknowns constituted by the observation residuals v. The observation system becomes in this way undetermined, having more unknowns (n+u) than observations (n). In order to make the system solvable, we add the u equations given by the u conditions of minimum of the sum of the squares of the residuals (the least squares criterion):

\sum_{k=1}^{u} a_{i,k} X_k = l_i + v_i \quad \text{for } i = 1, \ldots, n
Putting g = \sum_i^n v_i^2, the function sum of the squares of the residuals, we have that to the least value of g corresponds the maximum value of the probability density function f of the u-dimensional variable X:

f(x) = (2\pi)^{-\frac{n}{2}} (\det \Sigma)^{-\frac{1}{2}}\, e^{-\frac{1}{2} g}

The condition of minimum gives:

\frac{\partial g}{\partial x_k} = \sum_{i=1}^{n} 2 v_i \frac{\partial v_i}{\partial x_k} = \sum_{i=1}^{n} 2 \left( \sum_{j=1}^{u} a_{ij} x_j - l_i \right) a_{ik} = 0 \quad \text{for } k = 1, \ldots, u

A X - V = l
B X + 0 \cdot V = 0
The matrix B contains the partial derivatives, with respect to the unknowns X, of the function sum of the squares of the residuals.
The system is partitioned: first the last group of equations is solved. With vectorial notation:
V^T V = (A\hat{X} - l)^T (A\hat{X} - l) = \hat{X}^T A^T A \hat{X} - 2 \hat{X}^T A^T l + l^T l \Rightarrow \text{min}

We impose the condition of minimum:

\frac{\partial (V^T V)}{\partial x} = 2 \hat{X}^T A^T A - 2 l^T A = 0

2 A^T A \hat{X} = 2 A^T l

from which, by transposition, we get the so-called normal system:

A^T A \hat{X} = A^T l

Observe that the normalisation of the observation system corresponds to the pre-multiplication of the first and of the second member of the observation system by A^T.
The obtained system is called the normal system because it has as many equations as unknowns. The vector A^T l is called the normalised known term.
The solution of the normal system is:

\hat{X} = (A^T A)^{-1} A^T l

The matrix N = A^T A is called the normal matrix or matrix of the normal system. It is a square symmetric positive definite matrix (the terms of the main diagonal are greater than zero) and it admits an inverse.
Therefore it is possible to go back to the first group of equations to estimate the residuals V:

\hat{v} = A \hat{X} - l

The root mean square of the observation residuals with respect to the redundancy r is called the standard deviation of unit weight, or variance factor, or a posteriori sigma naught:
\hat{\sigma}_0 = \pm\sqrt{ \frac{\sum_i^n v_i^2}{n - u} }

The inverse N^{-1} of the normal matrix N contains terms that are proportional to the variances and covariances of the unknowns; the proportionality coefficient is just the a posteriori sigma naught squared. The matrix \Sigma_{XX} of the variances and covariances of the unknowns is called the variance-covariance matrix:

\hat{\Sigma}_{XX} = \hat{\sigma}_0^2 \, N^{-1}

and it contains all the stochastic information relative to the solution.
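The whole linear pipeline above (normal system, solution, residuals, a posteriori sigma naught, variance-covariance matrix) can be sketched in a few lines of NumPy. The data are invented for illustration: four direct observations of a single quantity, so the design matrix is a column of ones and the estimate reduces to the mean.

```python
import numpy as np

# Invented example: n = 4 direct observations of one unknown (u = 1).
A = np.array([[1.0], [1.0], [1.0], [1.0]])    # design matrix (n x u)
l = np.array([10.02, 9.98, 10.01, 9.99])      # observations

N = A.T @ A                                    # normal matrix  N = A^T A
x_hat = np.linalg.solve(N, A.T @ l)            # normal system  N x = A^T l
v_hat = A @ x_hat - l                          # residuals  v = A x - l
n, u = A.shape
sigma0_hat = np.sqrt(v_hat @ v_hat / (n - u))  # a posteriori sigma naught
Sigma_xx = sigma0_hat**2 * np.linalg.inv(N)    # variance-covariance of x
```

With a column of ones the normal system collapses to the arithmetic mean, which gives a quick sanity check on the implementation.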
2nd case: the deterministic or physical model is given by non-linear functions of the independent variables, and the observation equations are:

l_i = f_i(x_1, \ldots, x_u) \quad \text{for } i = 1, \ldots, n

The problem is solved by linearisation of the observation equations by Taylor expansion around approximate values x^0 of the unknowns:

X = X^0 + dX \quad \text{or} \quad x_k = x_k^0 + dx_k \quad \text{for } k = 1, \ldots, u

l_i = f_i(x_1, \ldots, x_u) = f_i(x_1^0, \ldots, x_u^0) + \sum_{k=1}^{u} \left( \frac{\partial f_i}{\partial x_k} \right)_0 dx_k

As known term we regard the difference between the observed values and the values computed at the provisional values; this quantity is called the discrepancy:

l_i - f_i(x_1^0, \ldots, x_u^0)

and the design matrix A contains the partial derivatives of the functions with respect to the unknowns, computed at the approximate values:

\sum_{k=1}^{u} \left( \frac{\partial f_i}{\partial x_k} \right)_0 dx_k = l_i - f_i(x_1^0, \ldots, x_u^0)
The solution is completely similar to the linear case, with these differences:
- the unknowns are not the independent variables X but their variations, or updates, dX;
- it is necessary to know approximate values for the unknowns, close enough that the values of the derivatives are almost equal to those computed with the exact values;
- since the values of the coefficients of the unknowns are not the exact ones (they are only close to them), it is necessary to iterate, substituting at every iteration the updated values of the unknown parameters, obtained by adding the computed increments of the unknowns to the provisional values. At every j-th iteration, the updated values of the unknowns are taken as approximate values for the following iteration:

X_j = X_{j-1} + dX_j

At each iteration the updates of the unknowns should decrease, on average, in absolute value. In this case we have convergence of the iterative computation procedure; in the opposite case the computation diverges. We end the iterative procedure when:
- the updates of the unknowns are no longer meaningful, or
- the procedure diverges and the number of iterations is greater than the a priori maximum limit.
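The iterative scheme above can be sketched as a small Gauss-Newton loop. The observation model l_i = exp(x·t_i), the data and the starting value are invented for illustration, not taken from the notes.

```python
import numpy as np

# Invented non-linear model: l_i = exp(x * t_i), one unknown x.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
l = np.exp(0.5 * t)                      # synthetic, error-free observations

x = 1.0                                  # approximate value x0
for _ in range(20):
    f = np.exp(x * t)                    # model at the current approximation
    A = (t * f).reshape(-1, 1)           # design matrix: df/dx at x
    dl = l - f                           # discrepancies (observed - computed)
    dx = np.linalg.solve(A.T @ A, A.T @ dl)[0]
    x += dx                              # update the unknown
    if abs(dx) < 1e-12:                  # update no longer meaningful
        break
```

Because the synthetic data are error-free, the loop converges to the value 0.5 used to generate them.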
B) Now we take into consideration the case in which the observations are of different accuracy or, as we say, of different weight.
The weight of an observation corresponds to the equivalent number of fictitious observations of unit weight required to get the same accuracy as the observation under consideration. In other words, it is as if the i-th observation were the mean of p_i observations, each of which has s.d. \sigma_0. It is

\sigma_i = \frac{\sigma_0}{\sqrt{p_i}}

from which we get:

p_i = \frac{\sigma_0^2}{\sigma_i^2}

The coefficient of proportionality \sigma_0^2 is known as the a priori variance of unit weight, and its square root \sigma_0 is called the error, or standard deviation, of unit weight, or a priori sigma naught. Then one observation of weight p is equivalent to p observations of unit weight. The matrix of the weights is defined:

P_{ll} = \sigma_0^2 \, \Sigma_{ll}^{-1}

Usually we neglect the correlation among the observations, which are independent from each other; we then consider the diagonal matrix
\Sigma_{ll}^{-1} = \operatorname{diag}\left[ \frac{1}{\sigma_{l_1}^2}, \frac{1}{\sigma_{l_2}^2}, \ldots, \frac{1}{\sigma_{l_n}^2} \right]

and the P matrix also results in a diagonal one:

P_{ll} = \sigma_0^2 \cdot \operatorname{diag}\left[ \frac{1}{\sigma_{l_1}^2}, \frac{1}{\sigma_{l_2}^2}, \ldots, \frac{1}{\sigma_{l_n}^2} \right]
The criterion of the least squares becomes:

V^T P_{ll} V = \text{min}

and its solution is:

\hat{X} = (A^T P_{ll} A)^{-1} A^T P_{ll} \, l

The normal matrix is N = A^T P_{ll} A.
The matrix of the cofactors is Q = N^{-1} = (A^T P_{ll} A)^{-1}.
The normalised known term is A^T P_{ll} \, l.
An unbiased estimator of the a priori \sigma_0, or variance factor, or s.d. of unit weight, is:

\hat{\sigma}_0^2 = \frac{V^T P V}{n - u}

which, in the case of a diagonal weight matrix, becomes:

\hat{\sigma}_0 = \pm\sqrt{ \frac{\sum_i^n p_i v_i^2}{n - u} }
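A minimal sketch of the weighted solution above, with invented numbers: two measurements of the same quantity with different standard deviations, so the estimate reduces to the weighted mean.

```python
import numpy as np

# Invented example: two measurements of one quantity with different s.d.
A = np.array([[1.0], [1.0]])
l = np.array([5.00, 5.06])
sigma = np.array([0.01, 0.03])           # a priori s.d. of each observation
sigma0 = 0.01                            # a priori sigma naught
P = np.diag(sigma0**2 / sigma**2)        # P = sigma0^2 * diag(1 / sigma_i^2)

N = A.T @ P @ A                          # normal matrix  A^T P A
x_hat = np.linalg.solve(N, A.T @ P @ l)  # weighted least squares estimate
```

The weights here are 1 and 1/9, so the estimate is (9·5.00 + 5.06)/10 = 5.006, closer to the more accurate measurement.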
Testing on sigma zero
A value for \sigma_0 (the a priori sigma zero) is given a priori, with the aim of attributing the weights to the single observations. At the end of the adjustment, by means of the estimated residuals, a value for \sigma_0 (the a posteriori sigma zero) can be estimated. In order to verify whether there is agreement between the initial hypothesis and the final results, the two quantities can be compared via the Fisher test; in this way we can assess whether they are equal or different (in the statistical sense).
Once a confidence level \alpha is fixed, we formulate two hypotheses:

the null hypothesis: H_0: E\left\{ \frac{\hat{\sigma}_0^2}{\sigma_0^2} \right\} = 1

the alternative hypothesis: H_1: E\left\{ \frac{\hat{\sigma}_0^2}{\sigma_0^2} \right\} \neq 1

The ratio \hat{\sigma}_0^2 / \sigma_0^2 must be compared with the critical value F_{r,\infty,\alpha/2} taken from the Fisher distribution tables, where r = n - u are the degrees of freedom of \hat{\sigma}_0^2 and \infty those of \sigma_0^2. Note that as numerator we put the larger variance, because the tables give critical values only for F > 1.

For \hat{\sigma}_0^2 / \sigma_0^2 \le F_{r,\infty,\alpha/2} the null hypothesis holds: the two variances are equal, their difference being casual, with confidence level \alpha. In the opposite case the alternative hypothesis is true.
If the test is negative, i.e. the alternative hypothesis is true (in the sense that the two variances are significantly different from each other), the causes can be:
1) the presence of blunders in the observations;
2) the initial hypothesis on \sigma_0 was not correct;
3) the mathematical model established as the basis of the observation equations does not correspond to the physical reality of the observed phenomenon.
For this reason the test on the variance factor is called the general test or multi-dimensional test. In other words, if the alternative hypothesis is true, we know that there is something incorrect in our adjustment, but we do not know what it is.
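The decision rule above can be sketched as follows. The critical value F_{r,∞,α/2} would normally be taken from the Fisher tables (or computed, e.g., with scipy.stats.f.ppf); here it is simply a parameter, and all numeric values are invented.

```python
# Sketch of the global (variance-factor) test; f_critical is assumed to be
# looked up elsewhere for the chosen confidence level and degrees of freedom.
def variance_factor_test(sigma0_post_sq, sigma0_prior_sq, f_critical):
    """Return True if the null hypothesis (variances equal) is accepted."""
    # Put the larger variance in the numerator: tables only cover F > 1.
    ratio = max(sigma0_post_sq, sigma0_prior_sq) / \
        min(sigma0_post_sq, sigma0_prior_sq)
    return ratio <= f_critical

# e.g. sigma0_hat^2 = 1.8, sigma0^2 = 1.0, invented critical value 2.1
accepted = variance_factor_test(1.8, 1.0, 2.1)
```

Note that the test is symmetric in the two variances, since the larger one is always placed in the numerator.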
Estimate of the variance-covariance matrix for the estimated quantities
Recalling the law of propagation of the variances, the fact that the variance of the observations is \Sigma_{ll} = \sigma_0^2 P^{-1}, and noting that the matrices P and N are symmetric, we can derive, for the estimate of the unknowns, with U = (A^T P A)^{-1} A^T P:

\Sigma_{xx} = U \Sigma_{ll} U^T = \sigma_0^2 (A^T P A)^{-1} A^T P \cdot P^{-1} \cdot P A (A^T P A)^{-1} = \sigma_0^2 (A^T P A)^{-1} = \sigma_0^2 N^{-1}
If we substitute for the a priori variance factor its a posteriori estimate, we get the estimates of the variances and covariances of the unknowns:

\hat{\Sigma}_{XX} = \begin{bmatrix} \sigma_{x_1}^2 & \sigma_{x_1 x_2} & \ldots & \sigma_{x_1 x_u} \\ \sigma_{x_2 x_1} & \sigma_{x_2}^2 & \ldots & \sigma_{x_2 x_u} \\ \ldots & \ldots & \ldots & \ldots \\ \sigma_{x_u x_1} & \sigma_{x_u x_2} & \ldots & \sigma_{x_u}^2 \end{bmatrix} = \hat{\sigma}_0^2 \, N^{-1}
Note that:
- in the matrix N, and therefore in its inverse N^{-1}, no stochastic elements are present. The coefficients of the observation matrix express the functional relationships among the different unknowns, in other words the way they are correlated in the functional model. In the case of a network or an aerial triangulation adjustment, these coefficients derive exclusively from the network geometry. These coefficients are known even before the measures are carried out, provided the approximate positions of the network points are known. The variance factor acts only as a scale parameter.
On the basis of these elements it is possible to know a priori the accuracy of the unknowns, or better the ratios between the accuracies of the unknowns, even before the measures are taken, apart from a scale coefficient that is the same for all the unknowns.
With regard to these considerations, it is possible to make the preliminary design of the network, or plan the observations, by adding or subtracting measures and varying the configuration of the network until the accuracy satisfies the requirements of the project.
OPERATIVE PROCEDURE
1 - Choice of the independent parameters and of the dependent parameters
2 - Choice of the approximate values for the unknown parameters
3 - Writing of the observation equations
4 - Possible linearisation of the observation equations
5 - Computation of the coefficients of the design matrix
6 - Weighting of the observations
7 - Normalisation of the observations
8 - Normalisation of the known term
9 - Inversion of the normal matrix
10- Solution
11- Updating of the parameters
12- Steps 5 to 11 are iterated until the updates are no longer meaningful
13- Computation of the observation residuals
14- Estimate of the variance factor of the unit weight
15- Computation of the variance and covariance matrix of the unknowns
16- Testing on the variance factor of the unit weight
- a posteriori sigma naught versus a priori sigma naught
Flow diagram for the least squares solution
Error ellipse
The error ellipses are used to indicate the accuracy of the position of points in the plane. Let

\Sigma_{xx} = \begin{bmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{bmatrix}

be the variance-covariance matrix of the plane co-ordinates X, Y of a point P. We define as the standard error ellipse the ellipse having its centre in the point P(X,Y) and major and minor semi-axes:
\sigma_{max}^2 = \frac{ \sigma_x^2 + \sigma_y^2 + \sqrt{ (\sigma_x^2 - \sigma_y^2)^2 + 4\sigma_{xy}^2 } }{2}

\sigma_{min}^2 = \frac{ \sigma_x^2 + \sigma_y^2 - \sqrt{ (\sigma_x^2 - \sigma_y^2)^2 + 4\sigma_{xy}^2 } }{2} \quad (17.19)

and

\tan(2\theta) = \frac{ 2\sigma_{xy} }{ \sigma_x^2 - \sigma_y^2 }

\theta is the angle formed by the major axis with the X axis. Statistically speaking, the probability that a point falls inside the standard error ellipse is about 0.39.
Fig. 17.2 - Probability of a two-dimensional stochastic variable
The locus of the points with equal density of probability is:
V^T \Sigma_{XX}^{-1} V = X_0^2 \quad (17.20)

with X_0 equal to a constant, the variation of which makes the locus of points vary. The matrix \Sigma_{XX} is symmetric and positive definite; it has eigenvalues \lambda_1 and \lambda_2 with the corresponding eigenvectors U_1 and U_2. For a known property of the eigenvalues, the inverse matrix \Sigma_{XX}^{-1} will have eigenvalues 1/\lambda_1 and 1/\lambda_2 with the same eigenvectors U_1 and U_2. The spectral decomposition leads to:

\Sigma_{XX}^{-1} = U D U^T \quad (17.21)

where U is an orthogonal matrix whose column vectors are the eigenvectors U_1 and U_2, while D is a diagonal matrix whose diagonal elements are the eigenvalues 1/\lambda_1 and 1/\lambda_2.
Substituting (17.21) in (17.20) we get:

V^T U D U^T V = X_0^2 \quad (17.22)

With the transformation Q = U^T V, where Q^T = [q_1 \; q_2], the (17.22) becomes:

Q^T D Q = X_0^2

and explicitly:

[q_1 \; q_2] \begin{bmatrix} 1/\lambda_1 & 0 \\ 0 & 1/\lambda_2 \end{bmatrix} \begin{bmatrix} q_1 \\ q_2 \end{bmatrix} = X_0^2

which, developed, becomes:

\frac{q_1^2}{\lambda_1 X_0^2} + \frac{q_2^2}{\lambda_2 X_0^2} = 1

That is the equation of an ellipse in the new variables Q^T = [q_1 \; q_2]. We define as the standard ellipse the one where X_0 = 1:

V^T \Sigma_{XX}^{-1} V = 1

or

\frac{q_1^2}{\lambda_1} + \frac{q_2^2}{\lambda_2} = 1
The centre of the ellipse has coordinates \mu_X and \mu_Y, while the lengths of the major and minor semi-axes are equal to the square roots of the eigenvalues of \Sigma_{XX}:

\sqrt{\lambda_1} = \sigma_{max} \quad \text{and} \quad \sqrt{\lambda_2} = \sigma_{min}

which are just given by the expressions (17.19). The slope of the axes of the ellipse with respect to the axes X and Y indicates correlation between the variables X and Y. The directions of the axes are given by the eigenvectors of the matrix \Sigma_{XX}. The tangent lines to the ellipse parallel to the axes X and Y intercept on the Y and X axes the s.d. \sigma_y and \sigma_x respectively.
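The two routes above, the closed-form expressions (17.19) and the eigen-decomposition, can be checked against each other numerically. The covariance values below are invented for illustration.

```python
import numpy as np

# Invented 2x2 variance-covariance matrix of the plane coordinates.
sx2, sy2, sxy = 4.0, 1.0, 1.0            # sigma_x^2, sigma_y^2, sigma_xy
Sigma = np.array([[sx2, sxy], [sxy, sy2]])

# Closed-form semi-axes and orientation (expressions 17.19).
half_sum = 0.5 * (sx2 + sy2)
half_diff = 0.5 * np.sqrt((sx2 - sy2)**2 + 4 * sxy**2)
sigma_max = np.sqrt(half_sum + half_diff)      # major semi-axis
sigma_min = np.sqrt(half_sum - half_diff)      # minor semi-axis
theta = 0.5 * np.arctan2(2 * sxy, sx2 - sy2)   # orientation of major axis

# Eigen-decomposition route: semi-axes are sqrt of eigenvalues of Sigma.
lam = np.linalg.eigvalsh(Sigma)                # eigenvalues, ascending
```

Both routes must give the same semi-axes, which the assertions below verify.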
The equation of the distance
Of particular interest in geodesy is the observation equation of the distance. Between two points A and B in the X, Y plane, we measure the distance d_{AB}, already regarded as reduced to the plane.
The relationship linking the co-ordinates of the points A and B and the distance is:

d_{AB} = \sqrt{ (X_B - X_A)^2 + (Y_B - Y_A)^2 }

Its linearisation around approximate values of the co-ordinates of A and B, having put

X_A = X_A^0 + dX_A, \; Y_A = Y_A^0 + dY_A, \; X_B = X_B^0 + dX_B, \; Y_B = Y_B^0 + dY_B \quad \text{and}

d_{AB}^0 = \sqrt{ (X_B^0 - X_A^0)^2 + (Y_B^0 - Y_A^0)^2 }

is
\left( \frac{\partial d_{AB}}{\partial X_A} \right)_0 dX_A + \left( \frac{\partial d_{AB}}{\partial Y_A} \right)_0 dY_A + \left( \frac{\partial d_{AB}}{\partial X_B} \right)_0 dX_B + \left( \frac{\partial d_{AB}}{\partial Y_B} \right)_0 dY_B = (d_{AB} - d_{AB}^0) + v
The discrepancy is the difference between the observed distance d_{AB} and its provisional value d_{AB}^0. By substitution of the expressions of the derivatives, we come to:

-\frac{X_B^0 - X_A^0}{d_{AB}^0}\, dX_A - \frac{Y_B^0 - Y_A^0}{d_{AB}^0}\, dY_A + \frac{X_B^0 - X_A^0}{d_{AB}^0}\, dX_B + \frac{Y_B^0 - Y_A^0}{d_{AB}^0}\, dY_B = (d_{AB} - d_{AB}^0) + v

The coefficients are dimensionless; the unknowns are four at most. In the case that one of the two points is known, the unknowns related to it are null. Finally, the residuals are in the same measurement unit as the discrepancies. For every observed distance, we write one observation equation.
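A sketch of how one linearised distance equation could be assembled as a row of the design matrix plus its known term; the function name, coordinates and measured distance are invented for illustration.

```python
import math

def distance_row(XA, YA, XB, YB, d_obs):
    """Coefficients [dXA, dYA, dXB, dYB] and known term for one distance."""
    d0 = math.hypot(XB - XA, YB - YA)          # provisional distance
    cx, cy = (XB - XA) / d0, (YB - YA) / d0    # dimensionless coefficients
    return [-cx, -cy, cx, cy], d_obs - d0      # row of A, discrepancy

# Invented example: A=(0,0), B=(30,40), observed distance 50.02.
row, w = distance_row(0.0, 0.0, 30.0, 40.0, 50.02)
```

With these numbers the provisional distance is exactly 50, so the discrepancy is the 0.02 excess of the observation.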
Equation of the bearing and equation of the direction
Between two points A and B in the X, Y plane, we measure the bearing \vartheta_{AB}, already regarded as reduced to the plane.
The relationship that links the co-ordinates of the points A and B and the bearing is:

\vartheta_{AB} = \arctan \frac{X_B - X_A}{Y_B - Y_A}

Its linearisation around approximate values of the co-ordinates of A and B, having put as usual X_A = X_A^0 + dX_A, Y_A = Y_A^0 + dY_A, X_B = X_B^0 + dX_B, Y_B = Y_B^0 + dY_B and

d_{AB}^0 = \sqrt{ (X_B^0 - X_A^0)^2 + (Y_B^0 - Y_A^0)^2 }, \quad \vartheta_{AB}^0 = \arctan \frac{X_B^0 - X_A^0}{Y_B^0 - Y_A^0}

is
\left( \frac{\partial \vartheta_{AB}}{\partial X_A} \right)_0 dX_A + \left( \frac{\partial \vartheta_{AB}}{\partial Y_A} \right)_0 dY_A + \left( \frac{\partial \vartheta_{AB}}{\partial X_B} \right)_0 dX_B + \left( \frac{\partial \vartheta_{AB}}{\partial Y_B} \right)_0 dY_B = (\vartheta_{AB} - \vartheta_{AB}^0) + v
The discrepancy is the difference between the observed bearing \vartheta_{AB} and its approximate value \vartheta_{AB}^0. By substituting the expressions of the derivatives, we come to:

-\frac{Y_B^0 - Y_A^0}{(d_{AB}^0)^2}\, dX_A + \frac{X_B^0 - X_A^0}{(d_{AB}^0)^2}\, dY_A + \frac{Y_B^0 - Y_A^0}{(d_{AB}^0)^2}\, dX_B - \frac{X_B^0 - X_A^0}{(d_{AB}^0)^2}\, dY_B = (\vartheta_{AB} - \vartheta_{AB}^0) + v
Normally the bearing is not measured, since the direction of the Y axis is unknown; instead of it we measure the direction l_{AB}, which is linked to the bearing by:

l_{AB} + \vartheta_{A0} = \vartheta_{AB}

The equation of the bearing then becomes the so-called equation of the direction:

-\frac{Y_B^0 - Y_A^0}{(d_{AB}^0)^2}\, dX_A + \frac{X_B^0 - X_A^0}{(d_{AB}^0)^2}\, dY_A + \frac{Y_B^0 - Y_A^0}{(d_{AB}^0)^2}\, dX_B - \frac{X_B^0 - X_A^0}{(d_{AB}^0)^2}\, dY_B - d\vartheta_{A0} = (l_{AB} + \vartheta_{A0}^0 - \vartheta_{AB}^0) + v

To the traditional unknown co-ordinates another one is added: the so-called bearing of the origin \vartheta_{A0}. The angles, bearings and directions must be expressed in radians.
The coefficients of the co-ordinates are dimensionally the inverse of distances; the unknowns are at most five. In the case that one of the two points is known, its corresponding unknowns are null. Finally, the residuals are in the same measurement unit as the discrepancies.
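A sketch of the direction equation as a row of coefficients, the fifth one belonging to the bearing of the origin. The provisional bearing is computed with atan2(ΔX, ΔY), matching the arctan(ΔX/ΔY) definition above; the function name and all numbers are illustrative.

```python
import math

def direction_row(XA, YA, XB, YB, l_obs, theta0_origin):
    """Coefficients [dXA, dYA, dXB, dYB, d(theta_A0)] and known term."""
    d2 = (XB - XA)**2 + (YB - YA)**2            # (d0)^2
    ax = -(YB - YA) / d2                         # coefficient of dXA
    ay = (XB - XA) / d2                          # coefficient of dYA
    coeffs = [ax, ay, -ax, -ay, -1.0]            # last: origin unknown
    theta0 = math.atan2(XB - XA, YB - YA)        # provisional bearing
    known = l_obs + theta0_origin - theta0       # discrepancy (radians)
    return coeffs, known

# Invented example: B due east of A, direction observed as pi/2.
coeffs, known = direction_row(0.0, 0.0, 100.0, 0.0, math.pi / 2, 0.0)
```

For a point due east the bearing is π/2, so with a π/2 direction observation and zero origin bearing the known term vanishes.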
Equation of the angle
The observation of one direction only is useless: we observe at least two directions, for example, as in the figure, l_{AB} and l_{AC}, from which we derive the included angle \alpha = l_{AC} - l_{AB}. We can take advantage of that in order to eliminate the direction unknown \vartheta_{A0}, since it appears with a positive sign in one equation and a negative sign in the other. The derived equation, containing at most six unknowns, is called the equation of the angle:
\left( \frac{Y_B^0 - Y_A^0}{(d_{AB}^0)^2} - \frac{Y_C^0 - Y_A^0}{(d_{AC}^0)^2} \right) dX_A + \left( \frac{X_C^0 - X_A^0}{(d_{AC}^0)^2} - \frac{X_B^0 - X_A^0}{(d_{AB}^0)^2} \right) dY_A - \frac{Y_B^0 - Y_A^0}{(d_{AB}^0)^2}\, dX_B + \frac{X_B^0 - X_A^0}{(d_{AB}^0)^2}\, dY_B + \frac{Y_C^0 - Y_A^0}{(d_{AC}^0)^2}\, dX_C - \frac{X_C^0 - X_A^0}{(d_{AC}^0)^2}\, dY_C = (l_{AC} - l_{AB}) - (\vartheta_{AC}^0 - \vartheta_{AB}^0) + v

The unknowns of the known points are set equal to zero. Instead of two equations we write only one, reducing at the same time the number of unknowns by one.
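The angle equation can be obtained mechanically as the difference of the two bearing rows (to C and to B), which cancels the origin unknown. A sketch with invented function names and coordinates:

```python
def bearing_coeffs(XA, YA, XT, YT):
    """[dXA, dYA, dXT, dYT] coefficients of the linearised bearing A->T."""
    d2 = (XT - XA)**2 + (YT - YA)**2
    ax, ay = -(YT - YA) / d2, (XT - XA) / d2
    return ax, ay, -ax, -ay

def angle_row(XA, YA, XB, YB, XC, YC):
    """Coefficients [dXA, dYA, dXB, dYB, dXC, dYC] for the angle B-A-C."""
    bB = bearing_coeffs(XA, YA, XB, YB)   # bearing A -> B
    bC = bearing_coeffs(XA, YA, XC, YC)   # bearing A -> C
    # angle = bearing(AC) - bearing(AB): subtract rows, origin term cancels
    return [bC[0] - bB[0], bC[1] - bB[1], -bB[2], -bB[3], bC[2], bC[3]]

# Invented example: a right angle at A between B (east) and C (north).
row = angle_row(0.0, 0.0, 100.0, 0.0, 0.0, 100.0)
```

The resulting row matches the six coefficients of the angle equation above, term by term.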
17.12 – THREE-DIMENSIONAL ADJUSTMENT
The bearing and the angle equations are equal to the ones written above (17.22) and (17.23). For the equation of the distance, we add the term relative to the heights:

d_{SP} = \sqrt{ (X_P - X_S)^2 + (Y_P - Y_S)^2 + (Z_P - Z_S)^2 }

Setting X_S = X_S^0 + dX_S, Y_S = Y_S^0 + dY_S, Z_S = Z_S^0 + dZ_S, X_P = X_P^0 + dX_P, Y_P = Y_P^0 + dY_P, Z_P = Z_P^0 + dZ_P, we get:

-\frac{X_P^0 - X_S^0}{d_{SP}^0}\, dX_S - \frac{Y_P^0 - Y_S^0}{d_{SP}^0}\, dY_S - \frac{Z_P^0 - Z_S^0}{d_{SP}^0}\, dZ_S + \frac{X_P^0 - X_S^0}{d_{SP}^0}\, dX_P + \frac{Y_P^0 - Y_S^0}{d_{SP}^0}\, dY_P + \frac{Z_P^0 - Z_S^0}{d_{SP}^0}\, dZ_P = (d_{SP} - d_{SP}^0) + v \quad (17.24)

Note that the measured distance d_{SP} is not the distance d measured directly between the centre of the instrument and the centre of the signal, but the one reduced to the points S and P on the ground.

Fig. 17.6 – How the distances from the centres of the instruments are reduced to the ground

We apply the Carnot theorem (law of cosines) to the triangle SPM and we get, from the inclined distance d, the distance d_{SP} reduced to the ground, with \varphi_{SP} the zenith angle, h_S the instrumental height in S and \Delta m_P the height of the target in P:

d_{SP}^2 = d^2 + (h_S - \Delta m_P)^2 + 2\, d\, (h_S - \Delta m_P) \cos \varphi_{SP}
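A numeric sketch of the reduction above. The sign of the cosine term follows the formula as reconstructed here (zenith angle of the line of sight, with h_S and Δm_P measured upward), and the distances, heights and zenith angle are invented, so treat this as an illustration rather than a definitive implementation.

```python
import math

def reduce_distance(d, h_S, dm_P, phi):
    """Ground distance S-P from the instrument-to-target distance d."""
    dh = h_S - dm_P                          # height offset h_S - Delta m_P
    return math.sqrt(d * d + dh * dh + 2.0 * d * dh * math.cos(phi))

# Invented example: equal instrument and target heights (offset cancels).
d_SP = reduce_distance(100.0, 1.60, 1.60, math.radians(92.0))
```

When the instrumental height equals the target height the offset vanishes and the reduced distance coincides with the measured one, which gives a simple consistency check.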
Zenith angle
As is known, the height Z_P of a point aimed at from the station S of elevation Z_S is given by:

Z_P = Z_S + d_{SP} \cot \varphi_{SP} + \frac{1-k}{2R}\, d_{SP}^2

Solving for the zenith angle \varphi_{SP}, and taking into account the instrumental and target heights, we get:

\varphi_{SP} = \operatorname{arccot}\left[ \frac{ (Z_P - Z_S - h_S + \Delta m_P)\left(1 - \frac{Z_m}{R}\right) }{ \sqrt{(X_P - X_S)^2 + (Y_P - Y_S)^2} } - \frac{1-k}{2R} \sqrt{(X_P - X_S)^2 + (Y_P - Y_S)^2} \right]

with k the refraction coefficient, Z_m = (Z_S + Z_P)/2 the mean height, R the radius of the local sphere, h_S the instrumental height in S and \Delta m_P the height of the target in P.
Its linearisation leads to the zenith-angle observation equation (17.25), whose coefficients are the partial derivatives of this expression with respect to the six coordinates of S and P, computed at the approximate values, with the positions:

dZ_{SP}^0 = Z_P^0 - Z_S^0

d_{SP}^0 = \sqrt{ (X_P^0 - X_S^0)^2 + (Y_P^0 - Y_S^0)^2 }

\varphi_{SP}^0 = \operatorname{arccot}\left[ \frac{ (dZ_{SP}^0 - h_S + \Delta m_P)\left(1 - \frac{Z_m}{R}\right) }{ d_{SP}^0 } - \frac{1-k}{2R}\, d_{SP}^0 \right]
(see¹). When we want to neglect the term Z_m/R, the (17.25) becomes:

\left[ \frac{ dZ_{SP}^0 - h_S + \Delta m_P + \frac{1-k}{2R}(d_{SP}^0)^2 }{ (d_{SP}^0)^2 + (dZ_{SP}^0 - h_S + \Delta m_P)^2 } \right] \left[ \frac{X_P^0 - X_S^0}{d_{SP}^0}(dX_P - dX_S) + \frac{Y_P^0 - Y_S^0}{d_{SP}^0}(dY_P - dY_S) \right] + \frac{ d_{SP}^0 }{ (d_{SP}^0)^2 + (dZ_{SP}^0 - h_S + \Delta m_P)^2 }(dZ_S - dZ_P) = (\varphi_{SP} - \varphi_{SP}^0) + v

with the position:

\varphi_{SP}^0 = \operatorname{arccot}\left[ \frac{ dZ_{SP}^0 - h_S + \Delta m_P }{ d_{SP}^0 } - \frac{1-k}{2R}\, d_{SP}^0 \right]

¹ C. Monti, L. Mussio - Esempio di compensazione plano-altimetrica. Bollettino SIFET n. 2, 1983
The weight of the observation is that of the measurement of the zenith angles; we can suppose it to be constant for all the observations. The unknowns are the variations of the coordinates dX, dY, dZ for the station S and for the aimed point P (6 in total). With this equation we take into account the existing correlation between the three coordinates. Such a correlation is weak in the case of geodetic networks, where the zenith angles are close to the right angle. The correlations are on the contrary strong, and it is then useful to consider them, in the case of local and technical networks, where the sight directions are often very inclined. The redundancy is also increased: for example, in the case of intersection from two known stations to an unknown point, separating the planimetric adjustment from the altimetric one, the redundancy in planimetry is 0, whilst the altimetric one is 1. With the three-dimensional adjustment, the computation of the three coordinates is carried out simultaneously and the redundancy is equal to 1 (two angle equations and two zenith equations). The estimate of the variance matrix of the unknowns is then more correct.
17.13 - Error ellipsoid
The equivalent of the error ellipse, for a three-dimensional, normally distributed stochastic variable, is the error ellipsoid. It is characterised by three axes, directed along the principal directions. For each adjusted vertex we get the variance-covariance matrix:

\hat{\Sigma}_{XX} = \begin{bmatrix} \sigma_X^2 & \sigma_{XY} & \sigma_{XZ} \\ \sigma_{XY} & \sigma_Y^2 & \sigma_{YZ} \\ \sigma_{XZ} & \sigma_{YZ} & \sigma_Z^2 \end{bmatrix}

If, as we suppose, the observed functions follow the normal distribution, the probability density function is:

f(X,Y,Z) = \frac{1}{ \sqrt{ (2\pi)^3 \det \Sigma_{XX} } }\; e^{ -\frac{1}{2} V^T \Sigma_{XX}^{-1} V }

with

V^T = [\, (X - \mu_X), \; (Y - \mu_Y), \; (Z - \mu_Z) \,]

\mu_X, \mu_Y and \mu_Z being the mean values of the three variables X, Y and Z.
The treatment of the error ellipsoid is similar to that of the error ellipse (see 17.7). We define as the standard error ellipsoid the ellipsoid having its centre in the point P(\mu_x, \mu_y, \mu_z), with semi-axes equal to the square roots of the eigenvalues of \Sigma_{xx} and the directions of the axes given by the eigenvectors of \Sigma_{xx}:

\frac{q_1^2}{\lambda_1} + \frac{q_2^2}{\lambda_2} + \frac{q_3^2}{\lambda_3} = 1
Fig. 17.7- Error Ellipsoids
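A sketch of the standard error ellipsoid axes obtained by eigen-decomposition of an invented 3×3 variance-covariance matrix:

```python
import numpy as np

# Invented 3x3 variance-covariance matrix of an adjusted vertex.
Sigma = np.array([[4.0, 0.5, 0.2],
                  [0.5, 2.0, 0.1],
                  [0.2, 0.1, 1.0]])

lam, U = np.linalg.eigh(Sigma)     # eigenvalues (ascending), eigenvectors
semi_axes = np.sqrt(lam)           # ellipsoid semi-axes
# the columns of U give the directions of the principal axes
```

The product of the eigenvalues equals the determinant of the matrix, a quick check that the decomposition is consistent.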