Transcript of libro pag. 211-329.pdf

Proof of Eq. (8-21) is left to the reader [Hint: \(\sum_{i=1}^{n} X_i = n\bar{X}\)]. Taking expectations of Eq. (8-21) in accordance with Eq. (8-16), we get

\[ E\left[\sum_{i=1}^{n}(X_i-\bar{X})^2\right] = \sum_{i=1}^{n}E[X_i^2] - nE[\bar{X}^2] \qquad (8\text{-}22) \]

Noting from Eq. (5-29) that E[X_i²] = σ² + μ², and from Eqs. (8-19), (8-20), and (5-29) that E[X̄²] = σ²/n + μ², we get

\[ E\left[\sum_{i=1}^{n}(X_i-\bar{X})^2\right] = n(\sigma^2+\mu^2) - n\left(\frac{\sigma^2}{n}+\mu^2\right) = (n-1)\sigma^2 \qquad (8\text{-}23) \]

It therefore follows that

\[ E[S^2] = \frac{1}{n-1}\,E\left[\sum_{i=1}^{n}(X_i-\bar{X})^2\right] = \sigma^2 \qquad (8\text{-}24) \]

which shows that S² is an unbiased estimator of σ². This is the reason the divisor n − 1 is used in the sample variance. If the sum of the squares of the deviations were divided by n rather than by n − 1, the resulting estimator would be biased.

If the population from which the sample is drawn is normally distributed, it can be shown that S² is also a consistent estimator of σ², and that the distribution of S² is related to the chi-square distribution; specifically,

\[ Y = \frac{(n-1)S^2}{\sigma^2} \qquad (8\text{-}25) \]

has a chi-square distribution with n − 1 degrees of freedom.
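The unbiasedness result of Eq. (8-24) can be verified numerically by exhaustive enumeration: when sampling with replacement from a finite population, the average of S² over all equally likely samples equals the population variance σ² exactly. A minimal sketch in Python (the population values are arbitrary illustrative numbers, not from the text):

```python
from itertools import product
from statistics import mean, pvariance, variance

# Arbitrary illustrative population; sigma2 is its true (population) variance.
population = [1.0, 2.0, 4.0, 7.0]
sigma2 = pvariance(population)      # divisor n
n = 3                               # sample size

# Enumerate every equally likely sample of size n drawn with replacement.
samples = list(product(population, repeat=n))

# E[S^2] with the n-1 divisor (Eq. 8-24): exactly sigma^2.
e_s2_unbiased = mean(variance(s) for s in samples)

# The same expectation with divisor n is biased low by the factor (n-1)/n.
e_s2_biased = mean(pvariance(s) for s in samples)

print(e_s2_unbiased, sigma2)                    # equal, up to rounding
print(e_s2_biased, (n - 1) / n * sigma2)        # equal, up to rounding
```

The second printed pair shows the bias that dividing by n would introduce.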

    8.7. CONFIDENCE INTERVAL FOR THE MEAN

If a random sample of size n is drawn from a normally distributed population with mean μ and variance σ², the sample mean X̄ has a normal distribution with mean μ and variance σ²/n. Thus, according to Eq. (5-23), the quantity

\[ Z = \frac{\bar{X}-\mu}{\sigma/\sqrt{n}} \qquad (8\text{-}26) \]

has a standard normal distribution with zero mean and unit variance, and it follows that

    Introductory Statistical Analysis 211


\[ P(-z < Z < z) = 2N(z) - 1 \qquad (8\text{-}27) \]

where N(z) is the value of the standard normal distribution function, obtainable from Table I of Appendix B. Rearranging the inequality inside the brackets of Eq. (8-27), we get

\[ P\left(\bar{X} - \frac{z\sigma}{\sqrt{n}} < \mu < \bar{X} + \frac{z\sigma}{\sqrt{n}}\right) = 2N(z) - 1 \qquad (8\text{-}28) \]

which reads as follows: the probability that μ lies between X̄ − zσ/√n and X̄ + zσ/√n is 2N(z) − 1. When a specific numerical value x̄ is provided for X̄, the foregoing probability statement becomes a confidence statement. The values x̄ − zσ/√n and x̄ + zσ/√n are known as confidence limits, the interval between them is known as a confidence interval, and 2N(z) − 1 is known as the degree of confidence, or confidence level, often stated as a percentage. The construction of a confidence interval for a particular distribution parameter, such as μ, is known as interval estimation.
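The interval of Eq. (8-28) is straightforward to compute directly. A minimal sketch in Python, using the numbers of Example 8-5 below; the stdlib `statistics.NormalDist` supplies the percentile z in place of Table I:

```python
from math import sqrt
from statistics import NormalDist

x_bar, sigma, n = 537.615, 0.033, 20    # sample mean, known sigma, sample size
conf = 0.95

# 2N(z) - 1 = conf  =>  N(z) = (1 + conf)/2; z from the standard normal CDF.
z = NormalDist().inv_cdf((1 + conf) / 2)        # about 1.96

half_width = z * sigma / sqrt(n)
lower, upper = x_bar - half_width, x_bar + half_width
print(round(lower, 3), round(upper, 3))         # 537.601 537.629
```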

EXAMPLE 8-5

In Example 8-3, the sample mean of 20 independent measurements of a distance was calculated to be 537.615 m. If the standard deviation of each measurement (i.e., the standard deviation of the population) is known to be 0.033 m, construct a 0.95 (95%) confidence interval for the population mean, μ.

Solution

For 2N(z) − 1 = 0.95, N(z) = 0.975. From Table I of Appendix B, z = 1.96. Therefore, the confidence limits are

\[ \bar{x} - \frac{z\sigma}{\sqrt{n}} = 537.615 - \frac{(1.96)(0.033)}{\sqrt{20}} = 537.601\ \text{m} \]

and

\[ \bar{x} + \frac{z\sigma}{\sqrt{n}} = 537.615 + \frac{(1.96)(0.033)}{\sqrt{20}} = 537.629\ \text{m} \]

Thus, we can say with 95% confidence that μ lies in the interval 537.601 m to 537.629 m.

In Example 8-5 the standard deviation σ of the population was known. More often, however, σ is unknown and must be estimated, usually by the

    212 Analysis and Adjustments of Survey Measurements


sample standard deviation S. Thus, instead of using Eq. (8-26), we must use

\[ T = \frac{\bar{X}-\mu}{S/\sqrt{n}} \qquad (8\text{-}29) \]

It is easily shown from Eqs. (8-7), (8-25), (8-26), and (8-29) that T has a t distribution with n − 1 degrees of freedom. Thus, instead of Eq. (8-28), we have

\[ P\left(\bar{X} - \frac{tS}{\sqrt{n}} < \mu < \bar{X} + \frac{tS}{\sqrt{n}}\right) = 2F(t) - 1 \qquad (8\text{-}30) \]

When specific numerical values x̄ and s are provided for X̄ and S, we obtain a confidence interval with confidence limits x̄ − ts/√n and x̄ + ts/√n and degree of confidence 2F(t) − 1.

EXAMPLE 8-6

With reference to Example 8-3, in which the sample mean of 20 independent measurements of a distance is calculated to be 537.615 m, and to Example 8-4, in which the sample standard deviation of the same 20 measurements is calculated to be 0.035 m, construct a 0.95 (95%) confidence interval for the population mean, μ.

Solution

For 2F(t) − 1 = 0.95, F(t) = 0.975. Degrees of freedom = n − 1 = 20 − 1 = 19. From Table III, Appendix B, t = t₀.₉₇₅,₁₉ = 2.09. Therefore the confidence limits are

\[ \bar{x} - \frac{ts}{\sqrt{n}} = 537.615 - \frac{(2.09)(0.035)}{\sqrt{20}} = 537.599\ \text{m} \]

and

\[ \bar{x} + \frac{ts}{\sqrt{n}} = 537.615 + \frac{(2.09)(0.035)}{\sqrt{20}} = 537.631\ \text{m} \]

Thus, we can say with 95% confidence that μ lies in the interval 537.599 m to 537.631 m.
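The t-based interval of Example 8-6 can be reproduced in a few lines. Since the Python stdlib has no t distribution, the tabled percentile t₀.₉₇₅,₁₉ = 2.09 is taken from Table III exactly as in the text:

```python
from math import sqrt

x_bar, s, n = 537.615, 0.035, 20
t = 2.09                      # t_{0.975, 19} from Table III of Appendix B

half_width = t * s / sqrt(n)  # Eq. (8-30) half-width, with s replacing sigma
lower, upper = x_bar - half_width, x_bar + half_width
print(round(lower, 3), round(upper, 3))   # 537.599 537.631
```

Note the interval is slightly wider than the known-σ interval of Example 8-5, reflecting the extra uncertainty from estimating σ.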

In establishing a confidence interval for the mean of a distribution it has been assumed that the random sample is drawn from a normal distribution. If the population distribution is not normal, but the sample size is large, X̄ will have a distribution that is approximately normal, and Eqs. (8-28) and (8-30) are still valid for all practical purposes.

Finally, as the sample size increases, we see that the values of t approach the corresponding values of z. Indeed, for a sample size of 30 or larger, the t


    distribution can be approximated very well by the standard normal distribution.

    8.8. CONFIDENCE INTERVAL FOR THE VARIANCE

When a random sample of size n is drawn from a normal population, the relationship given by Eq. (8-25), that (n−1)S²/σ² has a chi-square distribution with n − 1 degrees of freedom, can be used to construct a confidence interval for the population variance, σ². Thus,

\[ P\left(\chi^2_{a,n-1} < \frac{(n-1)S^2}{\sigma^2} < \chi^2_{b,n-1}\right) = b - a \qquad (8\text{-}31) \]

where χ²_{a,n−1} and χ²_{b,n−1} are the ath and bth percentiles, respectively, of the chi-square distribution with n − 1 degrees of freedom.

From Eq. (8-31) it follows that

\[ P\left(\frac{(n-1)S^2}{\chi^2_{b,n-1}} < \sigma^2 < \frac{(n-1)S^2}{\chi^2_{a,n-1}}\right) = b - a \qquad (8\text{-}32) \]

and when a specific numerical value s² is provided for S², we obtain a confidence interval with limits (n−1)s²/χ²_{b,n−1} and (n−1)s²/χ²_{a,n−1}, and degree of confidence b − a.

In constructing an appropriate confidence interval for σ², it is customary to make the two percentiles complementary, i.e., a + b = 1.

If a confidence interval for the standard deviation σ is desired, positive square roots of the confidence limits for σ² are taken.

EXAMPLE 8-7

With reference once more to Example 8-4, in which a sample variance of 0.00121 m² is calculated from 20 independent measurements of a distance, construct a 0.95 confidence interval for σ² and the corresponding confidence interval for σ.

Solution

For b − a = 0.95 and a + b = 1 we get a = 0.025 and b = 0.975. Degrees of freedom = n − 1 = 19. From Table II of Appendix B, χ²₀.₀₂₅,₁₉ = 8.91 and χ²₀.₉₇₅,₁₉ = 32.9. Therefore the confidence limits are

\[ \frac{(n-1)s^2}{\chi^2_{0.975,19}} = \frac{(19)(0.00121)}{32.9} = 0.00070\ \text{m}^2 \]

and

\[ \frac{(n-1)s^2}{\chi^2_{0.025,19}} = \frac{(19)(0.00121)}{8.91} = 0.00258\ \text{m}^2 \]

Thus, we can say with 95% confidence that σ² lies in the interval 0.00070 m² to 0.00258 m². The corresponding 95% confidence interval for σ is 0.026 m to 0.051 m.
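The variance interval of Eq. (8-32) is reproduced below with the chi-square percentiles read from Table II, as in Example 8-7; square roots give the interval for σ:

```python
from math import sqrt

s2, n = 0.00121, 20                 # sample variance (m^2), sample size
chi2_lo, chi2_hi = 8.91, 32.9       # chi^2_{0.025,19}, chi^2_{0.975,19}

# Eq. (8-32): confidence limits for sigma^2.
var_lower = (n - 1) * s2 / chi2_hi
var_upper = (n - 1) * s2 / chi2_lo
print(round(var_lower, 5), round(var_upper, 5))              # 0.0007 0.00258
print(round(sqrt(var_lower), 3), round(sqrt(var_upper), 3))  # 0.026 0.051
```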

    8.9. STATISTICAL TESTING

It is often desirable to ascertain from a sample whether or not a population has a particular probability distribution. The usual course of action is to make a statement about the probability distribution of the population, and then test to see if the sample drawn from the population is consistent with the statement.

The statement that is made about the probability distribution of the population is called a statistical hypothesis. If the hypothesis specifies the probability distribution completely, it is known as a simple hypothesis; otherwise, it is known as a composite hypothesis.

For every hypothesis H₀ there is a complementary alternative H₁. H₀ and H₁ are often called the null hypothesis and alternative hypothesis, respectively.

A hypothesis is tested by drawing a sample from the population in question, computing the value of a specific sample statistic, and then making the decision to accept or reject the hypothesis on the basis of the value of the statistic. The statistic used for making the test is called the test statistic.

Testing of a statistical hypothesis H₀ is not infallible, since it is based upon a sample drawn from a population rather than upon the entire population itself. Four possible outcomes can occur:

1. H₀ is accepted, when H₀ is true.
2. H₀ is rejected, when H₀ is true.
3. H₀ is accepted, when H₀ is false.
4. H₀ is rejected, when H₀ is false.

If outcome (1) or outcome (4) occurs, no error is made, in that the correct course of action has been taken. Outcome (2) is known as a Type I error; outcome (3) is known as a Type II error.

The size of the Type I error, designated α, is defined as the probability of rejecting H₀ when H₀ is true, i.e.,

\[ \alpha = P[\text{Reject } H_0 \text{ when } H_0 \text{ is true}]. \qquad (8\text{-}33) \]

When α is fixed at some level for H₀ and is expressed as a percentage, it is known as the significance level of the test. Although the choice of significance level is arbitrary, common practice indicates a significance level of 5% as "significant" and 1% as "highly significant".
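The meaning of α can be illustrated by simulation: draw many samples from a population for which H₀ is in fact true, apply the test, and count the rejections. A hedged sketch (known-σ z-test at the 5% level with z = 1.96; the population mean, σ, and sample size are arbitrary illustrative choices):

```python
import random
from math import sqrt
from statistics import mean

random.seed(7)
mu0, sigma, n = 100.0, 2.0, 25      # H0 is true: the population mean really is mu0
z = 1.96                            # 5% significance level, sigma known
c = z * sigma / sqrt(n)             # half-width of the acceptance region

trials = 20_000
rejections = 0
for _ in range(trials):
    x_bar = mean(random.gauss(mu0, sigma) for _ in range(n))
    if not (mu0 - c < x_bar < mu0 + c):   # x_bar falls in a rejection region
        rejections += 1

print(rejections / trials)   # close to alpha = 0.05
```

Roughly 5% of the samples reject H₀ even though it is true, which is exactly the Type I error rate the test was designed to have.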

    8.10. TEST OF THE MEAN OF A PROBABILITY DISTRIBUTION

Under certain conditions we may expect the mean μ of a probability distribution to have a specific value μ₀. The hypothesis that μ = μ₀ can be tested by drawing a sample of size n and using the sample mean X̄ as the test statistic. Specifically, we have

H₀: μ = μ₀
H₁: μ ≠ μ₀

X̄ is assumed to be normally distributed, or at least approximately normally distributed. Under the hypothesis that μ = μ₀, the following probability statement can be derived from Eq. (8-27), assuming σ is known:

\[ P(\mu_0 - c < \bar{X} < \mu_0 + c) = 2N(z) - 1, \qquad (8\text{-}34) \]

where c = zσ/√n.

If σ is unknown, the following probability statement can be derived from Eq. (8-30):

\[ P(\mu_0 - c < \bar{X} < \mu_0 + c) = 2F(t) - 1, \qquad (8\text{-}35) \]

where c = ts/√n.

H₀ is accepted if x̄, the specific value of X̄ calculated from the sample, lies between μ₀ − c and μ₀ + c; otherwise, H₀ is rejected. The regions of acceptance and rejection are shown in Fig. 8-4. If α is the probability that H₀ is rejected when it is true, then 1 − α must be the probability that H₀ is accepted when it is true. It follows, then, that

\[ 1 - \alpha = 2N(z) - 1 \quad \text{for } \sigma \text{ known} \qquad (8\text{-}36a) \]

\[ 1 - \alpha = 2F(t) - 1 \quad \text{for } \sigma \text{ unknown} \qquad (8\text{-}36b) \]

    Fig. 8-4.

Solving for N(z) or F(t), we get

\[ N(z) = 1 - \frac{\alpha}{2} \qquad (8\text{-}37a) \]

or

\[ F(t) = 1 - \frac{\alpha}{2} \qquad (8\text{-}37b) \]

Thus, the value of z or t is obtained from the significance level of the test, α. Specifically, Table I of Appendix B is used to evaluate z; Table III of Appendix B is used to evaluate t. [Note: In Table III, t = t_{p,n−1}, where p = F(t).]

EXAMPLE 8-8

An angle is measured 10 times. Each measurement is independent and made with the same precision, i.e., the 10 measurements constitute a random sample of size 10. The sample mean and sample standard deviation are calculated from the measurements: x̄ = 42°12'14.6", s = 3.7". Test at the 5% level of significance the hypothesis that μ, the population mean of the measurements, is 42°12'16.0" against the alternative that μ is not 42°12'16.0".

Solution

μ₀ = 42°12'16.0". For a 5% level of significance, α = 0.05. Thus,

\[ F(t) = 1 - \frac{\alpha}{2} = 0.975 \]

Degrees of freedom = n − 1 = 9. From Table III of Appendix B, t = t₀.₉₇₅,₉ = 2.26. Thus,

\[ c = \frac{ts}{\sqrt{n}} = \frac{(2.26)(3.7'')}{\sqrt{10}} = 2.6'' \]

and so

μ₀ − c = 42°12'13.4"  and  μ₀ + c = 42°12'18.6"

Since x̄ = 42°12'14.6" lies between μ₀ − c and μ₀ + c, the hypothesis that μ = 42°12'16.0" is accepted at the 5% level of significance.
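Example 8-8 can be reproduced in code; expressing the angles in arc-seconds past 42°12' keeps the arithmetic simple, and t = 2.26 is the tabled value used in the text:

```python
from math import sqrt

# Values from Example 8-8, angles in arc-seconds past 42°12'.
x_bar = 14.6          # sample mean (seconds)
mu0   = 16.0          # hypothesized mean (seconds)
s, n  = 3.7, 10
t     = 2.26          # t_{0.975, 9} from Table III of Appendix B

c = t * s / sqrt(n)                     # half-width of acceptance region
accept = mu0 - c < x_bar < mu0 + c      # acceptance rule for the test of the mean
print(round(c, 1), accept)              # 2.6 True
```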

8.11. TEST OF THE VARIANCE OF A PROBABILITY DISTRIBUTION

Under the assumption that a population is normally distributed, we can test the null hypothesis H₀ that the population variance is σ₀², against the alternative that it is not σ₀², using the sample variance S² as the test statistic.

Noting that (n−1)S²/σ₀² is distributed as chi-square with n − 1 degrees of freedom, we can make the following probability statement:

\[ P\left(\chi^2_{a,n-1} < \frac{(n-1)S^2}{\sigma_0^2} < \chi^2_{b,n-1}\right) = b - a, \qquad (8\text{-}38) \]

from which we get

\[ P\left(\frac{\chi^2_{a,n-1}\,\sigma_0^2}{n-1} < S^2 < \frac{\chi^2_{b,n-1}\,\sigma_0^2}{n-1}\right) = b - a, \qquad (8\text{-}39) \]

H₀ is accepted if s², the specific value of S² calculated from the sample, lies between χ²_{a,n−1}σ₀²/(n−1) and χ²_{b,n−1}σ₀²/(n−1); otherwise, H₀ is rejected. The regions of acceptance and rejection are shown in Fig. 8-5.

Again, 1 − α is the probability that H₀ is accepted when it is true. Thus,

\[ 1 - \alpha = b - a, \qquad (8\text{-}40) \]


    Fig. 8-5.

and for a + b = 1 (complementary percentiles), we obtain

\[ a = \frac{\alpha}{2} \qquad (8\text{-}41) \]

and

\[ b = 1 - \frac{\alpha}{2}, \qquad (8\text{-}42) \]

This test procedure can, of course, be used to test the standard deviation as well as the variance.

EXAMPLE 8-9

Referring to the data in Example 8-8, test at the 5% level of significance the hypothesis that σ, the population standard deviation of the measurements, is 2.0" against the alternative that σ is not 2.0".

Solution

σ₀ = 2.0". From Example 8-8, the sample standard deviation is s = 3.7". Now α = 0.05. Thus a = α/2 = 0.025 and b = 1 − (α/2) = 0.975. Degrees of freedom = n − 1 = 9. From Table II, Appendix B, χ²₀.₀₂₅,₉ = 2.70 and χ²₀.₉₇₅,₉ = 19.0. Thus,

\[ \frac{\chi^2_{0.025,9}\,\sigma_0^2}{n-1} = \frac{(2.70)(2.0'')^2}{9} = 1.20 \text{ (seconds of arc)}^2 \]

and

\[ \frac{\chi^2_{0.975,9}\,\sigma_0^2}{n-1} = \frac{(19.0)(2.0'')^2}{9} = 8.44 \text{ (seconds of arc)}^2 \]

Now s² = (3.7)² = 13.7 (seconds of arc)². Since s² does not lie between 1.20 and 8.44, the hypothesis that σ = 2.0" is rejected at the 5% level of significance.
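The variance test of Example 8-9 follows the same pattern, with the chi-square percentiles taken from Table II:

```python
s, n = 3.7, 10                      # sample std dev (arc-seconds), sample size
sigma0 = 2.0                        # hypothesized population std dev
chi2_lo, chi2_hi = 2.70, 19.0       # chi^2_{0.025,9}, chi^2_{0.975,9}

# Eq. (8-39): acceptance region for the sample variance s^2.
lower = chi2_lo * sigma0**2 / (n - 1)     # 1.20
upper = chi2_hi * sigma0**2 / (n - 1)     # 8.44
s2 = s**2                                 # 13.69
accept = lower < s2 < upper
print(round(lower, 2), round(upper, 2), accept)   # 1.2 8.44 False
```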

    8.12. BIVARIATE NORMAL DISTRIBUTION

The probability distribution of two jointly distributed random variables was discussed in general terms in Chapter 5. We shall now look at a particular joint distribution of two random variables: the bivariate normal distribution. This distribution is very useful when dealing with planimetric (x, y) positions in surveying.

The joint density function of two random variables X and Y which have a bivariate normal distribution is

\[ f(x,y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left(\frac{x-\mu_x}{\sigma_x}\right)^2 - 2\rho\left(\frac{x-\mu_x}{\sigma_x}\right)\left(\frac{y-\mu_y}{\sigma_y}\right) + \left(\frac{y-\mu_y}{\sigma_y}\right)^2 \right] \right\} \qquad (8\text{-}43) \]

in which μ_x and σ_x are the mean and standard deviation, respectively, of X; μ_y and σ_y are the mean and standard deviation, respectively, of Y; and ρ is the correlation coefficient of X and Y as defined by Eq. (5-55). This density function has the form of a bell-shaped surface over the x, y coordinate plane, centered at x = μ_x, y = μ_y, as shown in Fig. 8-6. The marginal density functions for X and Y are, respectively,

\[ f_x(x) = \frac{1}{\sigma_x\sqrt{2\pi}} \exp\left\{ -\frac{1}{2}\left(\frac{x-\mu_x}{\sigma_x}\right)^2 \right\} \qquad (8\text{-}44) \]


    Fig. 8-6.

and

\[ f_y(y) = \frac{1}{\sigma_y\sqrt{2\pi}} \exp\left\{ -\frac{1}{2}\left(\frac{y-\mu_y}{\sigma_y}\right)^2 \right\}, \qquad (8\text{-}45) \]

    which are the usual density functions for individual normally distributed random variables.
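Equations (8-43) through (8-45) can be checked numerically: the joint density of Eq. (8-43) integrates to 1, and integrating it over y recovers the marginal density of Eq. (8-44). A sketch with arbitrary illustrative parameters:

```python
from math import exp, pi, sqrt

mx, my, sx, sy, rho = 4.0, 5.0, 1.0, 0.5, 0.5   # illustrative parameters

def f(x, y):
    """Bivariate normal density, Eq. (8-43)."""
    u, v = (x - mx) / sx, (y - my) / sy
    q = (u * u - 2 * rho * u * v + v * v) / (1 - rho ** 2)
    return exp(-q / 2) / (2 * pi * sx * sy * sqrt(1 - rho ** 2))

# Riemann sum over a grid wide enough (about ±6 sigma) to capture nearly all mass.
h = 0.02
xs = [mx - 6 * sx + i * h for i in range(int(12 * sx / h) + 1)]
ys = [my - 6 * sy + j * h for j in range(int(12 * sy / h) + 1)]

total = sum(f(x, y) for x in xs for y in ys) * h * h
print(round(total, 3))     # approximately 1.0

# Marginal density of X at x = mx, by integrating the joint density over y.
marginal = sum(f(mx, y) for y in ys) * h
exact = 1 / (sx * sqrt(2 * pi))    # Eq. (8-44) evaluated at x = mx
print(abs(marginal - exact) < 1e-3)   # True
```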

The two marginal density functions are also shown in Fig. 8-6.

A plane that is parallel to the x, y coordinate plane will cut the bivariate density surface in an ellipse (see Fig. 8-6). The equation of this ellipse is obtained by setting f(x, y) in Eq. (8-43) equal to the height K of the intersecting plane above the x, y plane, and simplifying. The result is

\[ \left(\frac{x-\mu_x}{\sigma_x}\right)^2 - 2\rho\left(\frac{x-\mu_x}{\sigma_x}\right)\left(\frac{y-\mu_y}{\sigma_y}\right) + \left(\frac{y-\mu_y}{\sigma_y}\right)^2 = (1-\rho^2)c^2, \qquad (8\text{-}46) \]

where

\[ c^2 = \ln \frac{1}{4\pi^2 K^2 \sigma_x^2 \sigma_y^2 (1-\rho^2)}, \]

a constant.


EXAMPLE 8-10

The parameters of a bivariate normal distribution are μ_x = 4, μ_y = 5, σ_x = 1, σ_y = 0.5, and ρ = 0.5. A plane intersects the density function at K = 0.1 above the x, y coordinate plane. Evaluate and plot the ellipse of intersection.

Solution

\[ c^2 = \ln \frac{1}{4\pi^2(0.1)^2(1)^2(0.5)^2(1-0.25)} = 2.60 \]

\[ (1-\rho^2)c^2 = (1-0.25)(2.60) = 1.95 \]

Thus, the equation of the ellipse of intersection is

\[ \left(\frac{x-4}{1}\right)^2 - 2(0.5)\left(\frac{x-4}{1}\right)\left(\frac{y-5}{0.5}\right) + \left(\frac{y-5}{0.5}\right)^2 = 1.95 \]

Simplifying, we get

\[ (x-4)^2 - 2(x-4)(y-5) + 4(y-5)^2 = 1.95. \]

Letting u = x − 4 and v = y − 5, we have

\[ u^2 - 2uv + 4v^2 = 1.95 \]

Solving for u in terms of v, we get

\[ u = v \pm \sqrt{1.95 - 3v^2}. \]

Thus,

\[ x = 4 + (y-5) \pm \sqrt{1.95 - 3(y-5)^2} = y - 1 \pm \sqrt{1.95 - 3(y-5)^2} \]

Values for x and y are listed in Table 8-4, and the ellipse is plotted in Fig. 8-7.

It can be shown through appropriate differentiation of Eq. (8-46) that the extreme points of the ellipse (A, B, C, and D in Fig. 8-7) have the following coordinates:

        x               y
A   μ_x + cσ_x     μ_y + ρcσ_y
B   μ_x − cσ_x     μ_y − ρcσ_y
C   μ_x + ρcσ_x    μ_y + cσ_y
D   μ_x − ρcσ_x    μ_y − cσ_y


Table 8-4

  y         x
 4.2    3.03, 3.37
 4.4    2.47, 4.33
 4.6    2.39, 4.81
 4.8    2.45, 5.15
 5.0    2.60, 5.40
 5.2    2.85, 5.55
 5.4    3.19, 5.61
 5.6    3.67, 5.53
 5.8    4.63, 4.97

Fig. 8-7.
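The entries of Table 8-4 follow directly from the solved quadratic of Example 8-10; a short sketch reproduces two of the rows:

```python
from math import sqrt

def ellipse_x(y):
    """Two x solutions of the Example 8-10 ellipse for a given y."""
    v = y - 5.0
    root = sqrt(1.95 - 3.0 * v * v)   # real only for |y - 5| <= sqrt(0.65)
    return 4.0 + v - root, 4.0 + v + root

x1, x2 = ellipse_x(5.0)
print(round(x1, 2), round(x2, 2))   # 2.6 5.4

x1, x2 = ellipse_x(4.4)
print(round(x1, 2), round(x2, 2))   # 2.47 4.33
```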

If the ellipse is enclosed within an imaginary box, indicated by the broken lines in Fig. 8-7, we see that the half-dimensions of the box are cσ_x and cσ_y. We can also see that ρ acts as a proportioning factor in locating A, B, C, and D.

Points E and G (Fig. 8-7) on the ellipse can be located by setting x − μ_x in Eq. (8-46) equal to zero and solving for y:

\[ y = \mu_y \pm c\sigma_y\sqrt{1-\rho^2}. \]


Similarly, points F and H (Fig. 8-7) are located by setting y − μ_y in Eq. (8-46) equal to zero and solving for x:

\[ x = \mu_x \pm c\sigma_x\sqrt{1-\rho^2}. \]

When σ_x = σ_y = σ and ρ = 0, Eq. (8-46) reduces to

\[ (x-\mu_x)^2 + (y-\mu_y)^2 = \sigma^2 c^2, \qquad (8\text{-}47) \]

which is the equation of a circle with radius σc.

    8.13. ERROR ELLIPSES

In the previous section, the general case of the bivariate normal distribution was considered. This is the usual model which applies to survey measurements. If we wish to focus on the random error components only, we can set μ_x and μ_y equal to zero and get a probability distribution that centers on the origin of the x, y coordinate system.

When μ_x = μ_y = 0, Eq. (8-43) reduces to

\[ f(x,y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \frac{x^2}{\sigma_x^2} - \frac{2\rho xy}{\sigma_x\sigma_y} + \frac{y^2}{\sigma_y^2} \right] \right\} \qquad (8\text{-}48) \]

and Eq. (8-46) becomes

\[ \frac{x^2}{\sigma_x^2} - \frac{2\rho xy}{\sigma_x\sigma_y} + \frac{y^2}{\sigma_y^2} = (1-\rho^2)c^2. \qquad (8\text{-}49) \]

Equation (8-49) represents a family of error ellipses centered on the origin of the x, y coordinate system. When c = 1, Eq. (8-49) is the equation of the standard error ellipse.

The size, shape, and orientation of the standard error ellipse are governed by the distribution parameters σ_x, σ_y, and ρ. Six examples illustrating the effects of different combinations of distribution parameters are shown in Fig. 8-8.

A typical standard error ellipse is shown in Fig. 8-9. Since c = 1, the imaginary box (broken line) that encloses the ellipse has half-dimensions σ_x and σ_y. In general, the principal axes of the ellipse, x' and y', do not coincide


with the coordinate axes x and y; the major axis of the ellipse, x', makes an angle θ with the x-axis.

A positional error is expressed in the x, y coordinate system by the random vector [X Y]ᵀ; the same error is expressed in the x', y' coordinate system by the random vector [X' Y']ᵀ. The orthogonal (rotational) transformation which relates the two vectors is

    Fig. 8-8.


    Fig. 8-9.

\[ \begin{bmatrix} X' \\ Y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} X \\ Y \end{bmatrix}, \qquad (8\text{-}50) \]

where θ is the angle of rotation.

Now the covariance matrices for random vectors [X Y]ᵀ and [X' Y']ᵀ are

\[ \begin{bmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} \sigma_{x'}^2 & 0 \\ 0 & \sigma_{y'}^2 \end{bmatrix}, \]

respectively. The off-diagonal terms in the covariance matrix for [X' Y']ᵀ are zero because X' and Y' are uncorrelated (x' and y' are the principal axes of the ellipse).

Applying the general law of propagation of variances and covariances, Eq. (6-19), to the vector relationship given by Eq. (8-50), we get

\[ \begin{bmatrix} \sigma_{x'}^2 & 0 \\ 0 & \sigma_{y'}^2 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. \qquad (8\text{-}51) \]


     

Solution

\[ \sigma_{xy} = \rho\sigma_x\sigma_y = (0.80)(0.22)(0.14) = 0.0246\ \text{m}^2, \]

\[ \frac{\sigma_x^2 + \sigma_y^2}{2} = \frac{(0.22)^2 + (0.14)^2}{2} = 0.0340\ \text{m}^2, \]

\[ \left[ \frac{(\sigma_x^2 - \sigma_y^2)^2}{4} + \sigma_{xy}^2 \right]^{1/2} = \left[ \frac{\left((0.22)^2 - (0.14)^2\right)^2}{4} + (0.0246)^2 \right]^{1/2} = 0.0285\ \text{m}^2. \]

Thus,

\[ \sigma_{x'}^2 = \frac{\sigma_x^2 + \sigma_y^2}{2} + \left[ \frac{(\sigma_x^2 - \sigma_y^2)^2}{4} + \sigma_{xy}^2 \right]^{1/2} = 0.0340 + 0.0285 = 0.0625\ \text{m}^2 \]

and

\[ \sigma_{y'}^2 = \frac{\sigma_x^2 + \sigma_y^2}{2} - \left[ \frac{(\sigma_x^2 - \sigma_y^2)^2}{4} + \sigma_{xy}^2 \right]^{1/2} = 0.0340 - 0.0285 = 0.0055\ \text{m}^2. \]

Thus the semimajor axis is

\[ \sigma_{x'} = \sqrt{0.0625} = 0.25\ \text{m} \]

and the semiminor axis is

\[ \sigma_{y'} = \sqrt{0.0055} = 0.074\ \text{m}. \]

Now

\[ \tan 2\theta = \frac{2\sigma_{xy}}{\sigma_x^2 - \sigma_y^2} = \frac{2(0.0246)}{(0.22)^2 - (0.14)^2} = 1.711. \]

Since σ_xy and σ_x² − σ_y² are both positive, 2θ lies in the first quadrant. Thus, 2θ = tan⁻¹(1.711) = 59.7°, and the orientation of the error ellipse is θ = 29.8°.
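The computation above generalizes to any 2×2 covariance matrix. A sketch of the closed-form semiaxes and orientation, using the same numeric values as the solution above (σ_x = 0.22 m, σ_y = 0.14 m, ρ = 0.80):

```python
from math import atan2, degrees, sqrt

def error_ellipse(sx2, sy2, sxy):
    """Semimajor/semiminor axes and orientation (deg) of the standard error ellipse."""
    mid = (sx2 + sy2) / 2
    half = sqrt((sx2 - sy2) ** 2 / 4 + sxy ** 2)
    a = sqrt(mid + half)                              # semimajor axis
    b = sqrt(mid - half)                              # semiminor axis
    theta = degrees(atan2(2 * sxy, sx2 - sy2)) / 2    # tan 2θ = 2σ_xy/(σ_x² − σ_y²)
    return a, b, theta

sx, sy, rho = 0.22, 0.14, 0.80
a, b, theta = error_ellipse(sx**2, sy**2, rho * sx * sy)
print(round(a, 3), round(b, 3), round(theta, 1))   # 0.25 0.074 29.8
```

Using `atan2` rather than `atan` places 2θ in the correct quadrant automatically, which is the quadrant check done by hand in the text.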

To determine the probability associated with an error ellipse, it is most convenient to consider independent (uncorrelated) random errors X and Y. (If the errors are correlated, they can always be transformed into uncorrelated errors by rotation through angle θ.)


For uncorrelated random errors, ρ = 0 and Eq. (8-49) reduces to

\[ \frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} = c^2. \qquad (8\text{-}58) \]

Now consider the position of a point defined by the two random errors X and Y. This point will lie on or within the error ellipse if

\[ \frac{X^2}{\sigma_x^2} + \frac{Y^2}{\sigma_y^2} \le c^2. \qquad (8\text{-}59) \]

Since X and Y are two independent normal random variables with zero means, the random variable

\[ U = \frac{X^2}{\sigma_x^2} + \frac{Y^2}{\sigma_y^2} \qquad (8\text{-}60) \]

has a chi-square distribution with two degrees of freedom. The probability density function of U can be easily derived from the general chi-square density function, Eq. (8-3), noting that for two degrees of freedom, n/2 − 1 = 0. Thus, the probability density function of U is

\[ f(u) = \frac{1}{2} e^{-u/2} \quad \text{for } u > 0. \qquad (8\text{-}61) \]

The probability that the position given by values of X and Y lies on or within the error ellipse is

\[ P\left(\frac{X^2}{\sigma_x^2} + \frac{Y^2}{\sigma_y^2} \le c^2\right) = P(U \le c^2) = \int_0^{c^2} \frac{1}{2} e^{-u/2}\,du = 1 - e^{-c^2/2}. \qquad (8\text{-}62) \]

The probability P(U ≤ c²) is represented by the volume under the bivariate normal density surface within the region defined by the error ellipse. P(U ≤ c²) for various values of c is given in Table 8-5. Since for the standard error ellipse c = 1, we see from Table 8-5 that the probability is 0.394 that the position of a point plotted from the two random errors will lie on or within the standard error ellipse.
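Equation (8-62) makes entries like those of Table 8-5 easy to regenerate; for the standard error ellipse (c = 1) the probability is 1 − e^(−1/2) ≈ 0.394. The particular c values below are illustrative choices:

```python
from math import exp

def p_inside(c):
    """P(U <= c^2): probability of lying on or within the c-sigma error ellipse."""
    return 1.0 - exp(-c * c / 2.0)

for c in (1.0, 2.0, 2.447, 3.0):
    print(c, round(p_inside(c), 4))   # c = 1 gives 0.3935
```

Inverting the formula, the 95% ellipse corresponds to c = √(2 ln 20) ≈ 2.447.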


Table 8-5

c        P(U ≤ c²)

     

8-3  Given a random variable T which has a t distribution with 10 degrees of freedom. Determine from Table III of Appendix B the probability that T takes on a value: (a) less than 0.70; (b) greater than 0.70; (c) between 0.26 and 0.70; (d) between −3.17 and 3.17; (e) between −1.81 and 0.26. Note: Since the t distribution is symmetric about zero, P(T ≤ −t) = 1 − P(T ≤ t).

8-4  A distance is measured 25 times. All measurements are independent and have the same precision (i.e., the measurements constitute a random sample of size 25). Following are the measurements:

231.354 m   231.361 m   231.384 m   231.347 m   231.335 m
231.312 m   231.355 m   231.347 m   231.366 m   231.361 m
231.320 m   231.348 m   231.341 m   231.338 m   231.337 m
231.361 m   231.341 m   231.350 m   231.333 m   231.355 m
231.322 m   231.331 m   231.376 m   231.335 m   231.344 m

Evaluate the sample mean, sample median, sample midrange, sample range, sample mean deviation, sample variance, and sample standard deviation.

8-5  The following 20 observations of a pair of angles, α and β, are obtained:

OBSERVATION      α              β
 1    31°14'16.2"   42°08'24.0"
 2    31°14'15.2"   42°08'24.4"
 3    31°14'15.6"   42°08'24.5"
 4    31°14'14.5"   42°08'23.8"
 5    31°14'14.0"   42°08'25.7"
 6    31°14'15.8"   42°08'26.1"
 7    31°14'16.0"   42°08'21.8"
 8    31°14'14.1"   42°08'23.3"
 9    31°14'16.4"   42°08'24.8"
10    31°14'13.6"   42°08'23.2"
11    31°14'16.7"   42°08'25.7"
12    31°14'14.1"   42°08'22.7"
13    31°14'15.3"   42°08'26.2"
14    31°14'15.2"   42°08'25.8"
15    31°14'12.9"   42°08'25.3"
16    31°14'17.9"   42°08'24.0"
17    31°14'16.2"   42°08'27.4"
18    31°14'14.2"   42°08'25.8"
19    31°14'14.1"   42°08'23.7"
20    31°14'14.8"   42°08'24.0"

Compute the sample variances, S_α² and S_β², and the sample covariance, S_αβ. From these sample statistics compute the sample correlation coefficient for α and β,

\[ \hat{\rho}_{\alpha\beta} = \frac{S_{\alpha\beta}}{S_\alpha S_\beta}. \]

8-6  Angles α and β in Problem 8-5 are added to form a new angle γ = α + β. Compute a sample of 20 values for γ directly from the 20 pairs of α and β values in Problem 8-5. Then compute from these data the sample variance s_γ², the sample covariance s_αγ, and the sample correlation coefficient. Check the computed values for s_γ² and s_αγ by applying variance-covariance propagation to the vector function

\[ \begin{bmatrix} \gamma \\ \alpha \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix}, \]

using the sample variances and covariance for α and β, computed in Problem 8-5, as elements of the covariance matrix for [α β]ᵀ.

8-7  A distance is measured 10 times. All measurements are independent and have the same precision. The standard deviation of each measurement is known to be 0.025 m. The following values are obtained: 307.532 m, 307.500 m, 307.474 m, 307.549 m, 307.490 m, 307.527 m, 307.556 m, 307.502 m, 307.489 m, and 307.514 m. Evaluate the sample mean for the distance and the standard deviation of this sample mean. Construct 50% and 95% confidence intervals for the population mean of the distance.

8-8  If in Problem 8-7 the standard deviation of each measurement is not known, construct 50% and 95% confidence intervals for the unknown standard deviation, under the assumption that the measurements are normally distributed. Is there any significant difference between the sample standard deviation and the standard deviation (0.025 m) given in Problem 8-7?

8-9  An angle is measured six times with the following results:

40°10'15.6"   40°10'16.4"
40°10'10.8"   40°10'13.5"
40°10'08.9"   40°10'11.0"

The measurements are assumed to be normally distributed and independent. Construct 90% and 99% confidence intervals for the population mean, variance, and standard deviation.

8-10  An angle is measured 16 times with a theodolite. The measured values are:

 1. 52°35'24"    9. 52°35'24"
 2. 52°35'28"   10. 52°35'29"
 3. 52°35'22"   11. 52°35'35"
 4. 52°35'20"   12. 52°35'31"
 5. 52°35'25"   13. 52°35'29"
 6. 52°35'29"   14. 52°35'26"
 7. 52°35'18"   15. 52°35'30"
 8. 52°35'26"   16. 52°35'31"


    It is suspected that the theodolite was disturbed between the eighth and ninthmeasurements. Construct 90% confidence intervals for the means of the first eight and lasteight measurements. Is there evidence the theodolite was disturbed?

    8-11  An angle is independently measured 10 times with the same precision. Theobserved values are 90°00'05", 90°00'10", 90°00'00", 90°00'07", 89°59'54", 89°59'58",90°00'06", 90°00'03", 89°59'57", 90°00'10".

    (a)Test the hypothesis that the mean of the measurement equals 90°00'00" against the

    alternative that the mean does not equal 90°00'00". Use a 5% level of significance.(b)Test the hypothesis that the standard deviation of the measurement is 4" against the

    alternative that it is not 4". Use a 5% level of significance.8-10  A level rod is observed 15 times with a precise level that is equipped with a

    micrometer. Following are the rod readings (assumed to be a random sample from a

    normal population):

    1412.80 mm 1412.85 mm 1412.87 mm1413.09 mm 1412.50 mm 1412.80 mm1412.86 mm 1412.84 mm 1412.66 mm

    1412.80 mm 1412.84 mm 1412.84 mm

    1412.78 mm 1413.02 mm 1412.72 mm

    Test the following hypotheses at the 5% level of significance:

    (a) H0: μ = 1413.00 mm against H1: μ ≠ 1413.00 mm
    (b) H0: μ = 1412.75 mm against H1: μ ≠ 1412.75 mm
    (c) H0: σ = 0.08 mm against H1: σ ≠ 0.08 mm
    (d) H0: σ = 0.20 mm against H1: σ ≠ 0.20 mm
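As a hedged sketch of the computations the level-rod tests above require (plain Python, no statistics library; the critical values themselves come from t and chi-square tables), the sample mean, sample standard deviation, and the t statistic for hypothesis (a) are:

```python
import math

# Rod readings of the level-rod problem above, in mm.
readings = [1412.80, 1412.85, 1412.87, 1413.09, 1412.50,
            1412.80, 1412.86, 1412.84, 1412.66, 1412.80,
            1412.84, 1412.84, 1412.78, 1413.02, 1412.72]

n = len(readings)
mean = sum(readings) / n                                 # sample mean
s2 = sum((x - mean) ** 2 for x in readings) / (n - 1)    # sample variance S^2
s = math.sqrt(s2)

# t statistic for H0: mu = 1413.00 mm; compare |t| with t(0.025, 14) ~ 2.145.
t_stat = (mean - 1413.00) / (s / math.sqrt(n))
print(round(mean, 3), round(s, 3), round(t_stat, 2))   # 1412.818 0.136 -5.17
```

Since |t| ≈ 5.2 far exceeds the two-sided 5% critical value, hypothesis (a) would be rejected; the same machinery extends to (b) through (d) with the appropriate statistics.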

    8-13 Two independent calibrations of a 50 m steel tape yield two different values for its length: 50.0026 m and 50.0008 m. The standard deviation of a single tape calibration is known to be 0.7 mm.

    (a) Test at the 2% level of significance for any significant difference between the two calibration values.

    (b) Assuming there is no significant difference between the two calibration values, construct a 99% confidence interval for the length of the tape based upon the mean of the two calibration values.

    8-14 Plane coordinates X and Y of a survey station have a bivariate normal distribution. The mean and standard deviation of X are 1700.50 m and 0.20 m, respectively; the mean and standard deviation of Y are 810.65 m and 0.10 m, respectively. The coefficient of correlation between X and Y is 0.60. Evaluate the principal dimensions (semimajor and

    semiminor axes) and orientation of the standard error ellipse associated with this survey

    station position.

    8-15 The following covariance matrix is associated with the random error in the horizontal (x, y) position of a point:

    | 0.090  0.096 |
    | 0.096  0.160 |  m².

     
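A minimal sketch of the ellipse computation this problem asks for (plain Python; the closed-form eigenvalue and orientation formulas for a 2x2 symmetric covariance matrix are standard results, not taken from the text):

```python
import math

# Covariance matrix entries from the problem (m^2).
sxx, sxy, syy = 0.090, 0.096, 0.160

# Eigenvalues of the 2x2 covariance matrix are the squared semi-axes.
half_trace = (sxx + syy) / 2.0
radius = math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
lam1, lam2 = half_trace + radius, half_trace - radius

semi_major = math.sqrt(lam1)   # semimajor axis of the standard error ellipse (m)
semi_minor = math.sqrt(lam2)   # semiminor axis (m)

# Orientation of the major axis from the x axis: tan(2*theta) = 2*sxy/(sxx - syy).
theta = 0.5 * math.degrees(math.atan2(2.0 * sxy, sxx - syy))
print(round(semi_major, 3), round(semi_minor, 3), round(theta, 1))
```

The standard error ellipse therefore has semi-axes of about 0.48 m and 0.15 m, with the major axis roughly 55° from the x axis.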


     

    Under the assumption that the random error has a bivariate normal distribution, evaluate the principal dimensions and orientation of its standard error ellipse. Sketch the standard error ellipse, showing pertinent dimensions.

    8-16 If X and Y have a bivariate normal distribution with μx = μy = 0, σx = σy = σ, and σxy = 0, it can be shown that the radial distance R = √(X² + Y²) has the following density function:

    f(r) = (r/σ²) exp(−r²/2σ²)  for r ≥ 0.
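The density just displayed integrates in closed form, which gives a quick way to evaluate probabilities for it; a numerical sketch (σ = 1 assumed, since the result is scale-free):

```python
import math

# Integrating f(r) = (r/s^2) exp(-r^2/(2 s^2)) from 0 to r0 gives the CDF
# F(r0) = 1 - exp(-r0^2/(2 s^2)).
def rayleigh_cdf(r0, s=1.0):
    return 1.0 - math.exp(-r0 ** 2 / (2.0 * s ** 2))

# Probability that the radial error falls within one sigma (~0.3935,
# the familiar standard-error-circle probability).
p_one_sigma = rayleigh_cdf(1.0)

# Independent check: integrate the density by the trapezoidal rule.
def density(r, s=1.0):
    return (r / s ** 2) * math.exp(-r ** 2 / (2.0 * s ** 2))

steps = 10000
h = 1.0 / steps
trapz = h * (sum(density(i * h) for i in range(1, steps))
             + 0.5 * (density(0.0) + density(1.0)))
```

Both routes agree to well beyond table precision.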

    This is the density function of the Rayleigh distribution. Evaluate P[R ≤ σ] and compare with the corresponding entry in Table 8-5.

    8-17 The computation of a closed traverse results in the following misclosure in the x and y coordinates:

    xc − x0 = 3.3 cm,  yc − y0 = 6.9 cm,

    where x0 and y0 are the given coordinates of the origin of the survey and xc and yc are the computed coordinates. The covariance matrix of the computed coordinates, referenced to the given coordinates, is

    | 4.63  0.87 |
    | 0.87  8.83 |  cm².

     

    Is the misclosure acceptable in the sense that it lies within the 0.95 probability error ellipse?

    8-18 The x, y position of a survey station is computed by the method of least squares.

    The initial (approximate) position of the station is given by x0 = 1040.60 m, y0 = 2143.50 m,

    and the normal equations in the least squares solution are

    | 1.125  0.250 | | Δx |   | 0.40 |
    | 0.250  0.500 | | Δy | = | 0.20 |.

     

    The reference variance is 0.040 m². Assume the solution requires only one iteration.

    Evaluate the least squares position of the survey station and the principal dimensions and orientation angle of its 99% confidence region (an ellipse identical in size, shape, and orientation to the corresponding 0.99 probability error ellipse but centered on the least squares position).

    8-19 The principal axes of a standard error ellipse coincide with the x and y axes

    (Fig. 8-10). The standard deviations σx and σy are as shown in the figure. Let σθ = OP be the standard deviation in any direction θ. In general, P does not lie on the ellipse; instead, its locus is the so-called pedal curve, shown as a broken line in Fig. 8-10.

    (a) Show that


     

    σθ² = σx² cos²θ + σy² sin²θ.

    (Hint: Put the x' axis in the direction of OP.)

    (b) Then show that the equation of the pedal curve in the x, y coordinate system is

    (x² + y²)² − σx²x² − σy²y² = 0.
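Both parts of the problem can be spot-checked numerically; the sketch below uses illustrative values σx = 3, σy = 2 (assumptions for the check, since the problem leaves the standard deviations symbolic):

```python
import math

sx, sy = 3.0, 2.0   # hypothetical standard deviations for illustration

def sigma_dir(theta):
    # Part (a): sigma_theta^2 = sx^2 cos^2(theta) + sy^2 sin^2(theta)
    return math.sqrt(sx ** 2 * math.cos(theta) ** 2
                     + sy ** 2 * math.sin(theta) ** 2)

# Part (b): the point P = (sigma_theta cos(theta), sigma_theta sin(theta))
# must satisfy (x^2 + y^2)^2 - sx^2 x^2 - sy^2 y^2 = 0 for every theta.
for k in range(36):
    th = k * math.pi / 18.0
    s = sigma_dir(th)
    x, y = s * math.cos(th), s * math.sin(th)
    residual = (x * x + y * y) ** 2 - sx * sx * x * x - sy * sy * y * y
    assert abs(residual) < 1e-9   # the pedal-curve equation holds
```

Along the axes the directional standard deviation reduces to σx and σy, as it must.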

    8-20 A steel tape of length l is used to measure the distance between two survey stations. A total of n full tapelengths are required to measure the distance. Random errors E1, E2, ..., En in aligning the tape introduce a positive error V in the observed distance.

    Assume these alignment errors are independent and normally distributed with zero mean and standard deviation σ, and that the effect of each error Ei in the direction of taping can be approximated by Ei²/2l.

    (a) Show that V = (σ²/2l)Y, where Y is distributed as chi-square with n degrees of freedom. (b) Derive expressions for the mean and standard deviation of V. (c) If a 50 m tape is used to measure a distance of 800 m, and σ = 0.50 m, evaluate v such that P[V ≤ v] = 0.95.
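Parts (b) and (c) follow directly from the chi-square moments E[Y] = n and Var[Y] = 2n; a sketch of the arithmetic (the 0.95 percentile of chi-square with 16 degrees of freedom, about 26.296, is taken from standard tables):

```python
import math

# V = (sigma^2/(2 l)) * Y with Y ~ chi-square(n), so
# E[V] = n sigma^2/(2 l) and Var[V] = 2 n (sigma^2/(2 l))^2.
l = 50.0                 # tape length (m)
distance = 800.0
n = int(distance / l)    # 16 full tapelengths
sigma = 0.50             # alignment error standard deviation (m)

scale = sigma ** 2 / (2.0 * l)
mean_V = n * scale                     # expected alignment error in the distance
std_V = scale * math.sqrt(2.0 * n)     # its standard deviation

# P[V <= v] = 0.95 when Y = 26.296 (chi-square table, 16 d.f.):
v95 = scale * 26.296
print(round(mean_V, 3), round(std_V, 4), round(v95, 3))
```

So the expected taping error is 0.040 m, and with 0.95 probability V stays below roughly 0.066 m.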

    Fig. 8-10


     

    General Least Squares Adjustment

    9.1. INTRODUCTION

    In Examples 3-5 and 4-6 a straight line is fitted through three points. In these examples, the y-coordinates were assumed to be the observations, while the x-coordinates were

    considered as constants. Based on the straight-line equation

    y − ax − b = 0    (9-1)

    the condition equations took the form for the adjustment of indirect observations

    v + BΔ = f.    (9-2)

    The elements of the unknown parameter vector Δ are the slope a and the y-intercept b, and the residuals are those associated with the observed y-coordinates.

    Let us now consider the case in which not only the y-coordinate but also the x-coordinate of each point is an observed quantity. The straight-line equation (9-1), written for any point, will then contain two observations, x and y, and two parameters, a and b. In this form, it does not fit the condition equation form of either one of the two techniques of least squares adjustment discussed in Chapter 4. In adjustment of indirect observations, where the form of the condition equations is given by Eq. (9-2), each condition equation contains only one observation. In adjustment of observations only, where the condition equations are of the form

    Av = f,    (9-3)


     

    no parameters are included in the conditions. Consequently, for the case at hand, there is need for a more general least squares technique that can handle combined observations and parameters in the condition equations without the restriction of having only one observation in each equation. Such a technique is the subject of this chapter.

    To permit as much generality as possible, the technique will place no restriction on the structure of the covariance, cofactor, or weight matrices of the observations; i.e., it will accept measurements that may be correlated and/or of unequal precision.

    Before proceeding with the derivation of the general least squares technique, we shall rework the straight-line problem of Examples 3-5 and 4-6, now taking the x-coordinates as well as the y-coordinates as observations.

    EXAMPLE 9-1

    A straight line, y = ax + b, must be fitted through three points. The following data are

    given:

    POINT   x (cm)   y (cm)   σx² (cm²)   σy² (cm²)
      1      2.00     3.20      0.04        0.10
      2      4.00     4.00      0.04        0.08
      3      6.00     5.00      0.04        0.08

    These are precisely the same data as given for Example 4-6, Part (2), with variances for the

    x-coordinates added. All measured coordinates are assumed to be uncorrelated. It is required, under these new conditions, to find least squares estimates for the two parameters a and b.

    Solution

    For any one of the three points, the equation of the straight line is

    F(x, y) = (y + vy) − a(x + vx) − b = 0.

    Since a and vx are both unknown, this equation is nonlinear in the unknowns. It cannot therefore be used directly, as was the case in Example 4-6, but must be linearized as follows:

    (y − a0x − b0) + (∂F/∂x)vx + (∂F/∂y)vy + (∂F/∂a)Δa + (∂F/∂b)Δb = 0

    or

    −a0vx + vy − xΔa − Δb = −(y − a0x − b0),


     

    With the approximations a0 = 0.5 and b0 = 2.00, the constant vector evaluates to

    f = − | 3.20 − 0.5(2.00) − 2.00 |   | −0.20 |
          | 4.00 − 0.5(4.00) − 2.00 | = |  0    |
          | 5.00 − 0.5(6.00) − 2.00 |   |  0    |.

     

    Since both the x and y coordinates are treated as observed variables, and since there are six coordinate values in total, the covariance matrix is 6×6. Furthermore, since all the observations are uncorrelated, the covariance matrix is diagonal, i.e.,

    Σ = diag(σ²x1, σ²y1, σ²x2, σ²y2, σ²x3, σ²y3) = diag(0.04, 0.10, 0.04, 0.08, 0.04, 0.08) cm².

    The observation vector ℓ and the residual vector v are

    ℓ = (x1, y1, x2, y2, x3, y3)ᵗ  and  v = (vx1, vy1, vx2, vy2, vx3, vy3)ᵗ.

     

    The constant term vector f can be written as

    f = | b0 |   | −a0   1    0    0    0    0 | | x1 |
        | b0 | − |  0    0   −a0   1    0    0 | | y1 |
        | b0 |   |  0    0    0    0   −a0   1 | | x2 |
                                                | y2 |
                                                | x3 |
                                                | y3 |,

     

    which, symbolically, is

    f = d − Aℓ,

    where d is obviously the constant vector

    d = (b0, b0, b0)ᵗ.

    Now, the vector of observations ℓ can be transformed into a vector of equivalent observations, ℓc, as follows:

    ℓc = Aℓ.


     

    The same transformation applies to the vector of residuals, v, i.e.,

    vc = Av.

    Thus,

    f = d − ℓc, and the linearized condition equations become

    vc + BΔ = f,

    which is the form of Eq. (4-7) for the technique of adjustment of indirect observations. From the general law of propagation of variances and covariances, expressed by Eq. (6-19), we can obtain the covariance matrix of ℓc:

    Σc = AΣAᵗ = | 0.11  0     0    |
                | 0     0.09  0    |  cm².
                | 0     0     0.09 |

     

    Letting σ0² = 0.09 cm², the weight matrix of the equivalent observations is [see Eq. (4-19)]:

    Wc = σ0²Σc⁻¹ = (0.09) | 1/0.11  0       0      |   | 0.818  0  0 |
                          | 0       1/0.09  0      | = | 0      1  0 |.
                          | 0       0       1/0.09 |   | 0      0  1 |

    and the least squares solution, according to Eqs. (4-28), (4-29), and (4-38), is

    N = BᵗWcB = | 55.272  11.636 |
                | 11.636   2.818 |

    t = BᵗWcf = | 0.3272 |
                | 0.1636 |

    N⁻¹ = |  0.1384  −0.5715 |
          | −0.5715   2.7147 |

    Δ = N⁻¹t = | −0.0482 |
               |  0.2571 |.

     

     

    The corrected parameter values are

    a1 = a0 + Δa = 0.5 − 0.0482 = 0.4518,  b1 = b0 + Δb = 2.0 + 0.2571 = 2.2571.


     

    The corrected values must now be used as new approximations in the solution for a new correction vector. Thus

    A1 = | −0.4518  1   0        0   0        0 |
         |  0       0  −0.4518   1   0        0 |
         |  0       0   0        0  −0.4518   1 |

    B1 = | −2  −1 |
         | −4  −1 |
         | −6  −1 |

    and, as before,

    f1 = | −0.0393 |
         |  0.0643 |
         | −0.0321 |

    Σc1 = A1ΣA1ᵗ = | 0.1082  0       0      |
                   | 0       0.0882  0      |  cm²
                   | 0       0       0.0882 |

    Wc1 = (0.0882)Σc1⁻¹ = | 0.8151  0  0 |
                          | 0       1  0 |
                          | 0       0  1 |

    N1 = B1ᵗWc1B1 = | 55.260  11.630 |
                    | 11.630   2.815 |

    t1 = B1ᵗWc1f1 = | −0.0005 |
                    | −0.0002 |

    N1⁻¹ = |  0.1387  −0.5729 |
           | −0.5729   2.7219 |

    Δ1 = N1⁻¹t1 = |  0.00005 |
                  | −0.00026 |.

       

    Since the values in Δ1 are sufficiently small, the iterative procedure is terminated, and the final estimates of the parameters are

    â = 0.452,  b̂ = 2.257 cm.
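The two iterations of this example are compact enough to replicate with a short script; the following is a sketch in plain Python of the same computations (with the equivalent-observation weights scaled, as in the example, so that the largest weight equals one):

```python
# Linearized condition per point i: -a0*vxi + vyi - x_i*da - db = f_i,
# with f_i = -(y_i - a0*x_i - b0); i.e. Av + B*delta = f,
# A row = [-a0, 1], B row = [-x_i, -1].
points = [(2.00, 3.20), (4.00, 4.00), (6.00, 5.00)]   # observed (x, y), cm
var_x = [0.04, 0.04, 0.04]                            # cm^2
var_y = [0.10, 0.08, 0.08]                            # cm^2

a, b = 0.5, 2.0            # initial approximations used in the text
for _ in range(2):         # the text stops after two iterations
    # Equivalent-observation variances Sc = A Sigma A^t (diagonal here),
    # weights scaled so the largest is 1, as in the example.
    sc = [a * a * vx + vy for vx, vy in zip(var_x, var_y)]
    wc = [min(sc) / s for s in sc]
    f = [-(y - a * x - b) for x, y in points]
    # Normal equations N*delta = t with N = B^t Wc B, t = B^t Wc f:
    n11 = sum(w * x * x for (x, _), w in zip(points, wc))
    n12 = sum(w * x for (x, _), w in zip(points, wc))
    n22 = sum(wc)
    t1 = sum(-x * w * fi for (x, _), w, fi in zip(points, wc, f))
    t2 = sum(-w * fi for w, fi in zip(wc, f))
    det = n11 * n22 - n12 * n12
    da = (n22 * t1 - n12 * t2) / det
    db = (n11 * t2 - n12 * t1) / det
    a, b = a + da, b + db

print(round(a, 3), round(b, 3))   # 0.452 2.257
```

The script reproduces â = 0.452 and b̂ = 2.257 cm to the precision quoted in the text.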

    We have seen in the preceding example that in some problems the condition equations are more readily written in the form Av + BΔ = f than in either of the two simpler forms discussed in Chapter 4. While it is possible in principle to solve any problem by any technique of least squares, it is quite often more convenient, and more efficient, to solve a given problem by a particular technique. Among others, curve-fitting problems and coordinate-transformation problems are best solved by the general technique of this chapter when all the coordinates in the condition

    equations are observed variables.

    9.2. DERIVATION

    Given a mathematical model, the minimum number of observations necessary for its unique determination is n0. When n measurements which are consistent with the model are acquired, such that n > n0, the redundancy is

    r = n − n0.    (9-4)

    This means that among the n observational variables there exist r independent condition equations (see Sections 3.3, 4.1, and 4.4). If we wish to carry unknown parameters into the adjustment, then we must write an additional condition equation for each parameter. Hence, if u parameters are carried, the number of condition equations will be

    c = r + u.    (9-5)

    The lower limit for the value of u is obviously zero, in which case no parameters are carried and there are c = r conditions among the observations only. On the other hand, the upper limit for the value of u is n0, for which c = r + n0 = n, so that we may not write any more conditions than the total number of observations. Thus,

    0 ≤ u ≤ n0    (9-6)
    r ≤ c ≤ n,    (9-7)

    the lower limits of which define the case for adjustment of observations only, and the upper limits define the case for adjustment of indirect observations, the two special cases treated in Chapter 4 (see Section 9.4). In many problems, the number of parameters carried in the adjustment is neither zero nor n0. In such cases, neither one of the two special techniques of Chapter 4 would be directly suitable. Instead, c condition equations exist which, when they are originally linear, take the form

    A(ℓ + v) + BΔ = d,    (9-8)

    where


     

    A is a c×n rectangular coefficient matrix (c ≤ n);
    B is a c×u rectangular coefficient matrix;
    ℓ is the n×1 given observational vector;
    v is the corresponding n×1 vector of residuals;
    Δ is the u×1 vector of parameters;
    d is a c×1 vector of constants.

    The precision of the n given observations may be expressed by either a covariance matrix Σ, a cofactor matrix Q, or a weight matrix W. In the subsequent derivation, the cofactor matrix will be used.

    There are two possible procedures for deriving the least squares estimate of the parameter vector Δ. The first was essentially followed in Example 9-1, in which the condition equations combining observations and parameters are transformed into the form of indirect observation adjustment. For the sake of completeness, this procedure is summarized here. Let

    ℓc = Aℓ    (9-9)

    represent a vector of c equivalent observations, each of which is a linear combination of the n original observations; and let

    vc = Av    (9-10)

    be the corresponding c residuals. If Q is the cofactor matrix of the given observations, then Qc, the cofactor matrix of the equivalent observations, is, according to Eq. (6-27), given by

    Qc = AQAᵗ.    (9-11)

    In terms of the equivalent observations, the condition equations given by Eq. (9-8) become

    vc + BΔ = d − ℓc  or  vc + BΔ = f,  with f = d − ℓc.    (9-12)

    If the weight matrix of the equivalent observations is

    Wc = Qc⁻¹ = (AQAᵗ)⁻¹,    (9-13)


     

    then the normal equation matrices for the conditions expressed by Eq. (9-12) are, according to Eqs. (4-28) and (4-29),

    N = BᵗWcB = Bᵗ(AQAᵗ)⁻¹B    (9-14)

    and

    t = BᵗWcf = Bᵗ(AQAᵗ)⁻¹f,    (9-15)

    and the least squares estimates of the parameters are, from Eq. (4-38),

    Δ = N⁻¹t.    (9-16)

    The reader should recognize that Eq. (9-13) is identical to Eq. (4-53) in Chapter 4, and should understand that the use of the symbols Qc and Wc in Chapter 4 was justified since they, indeed, represent cofactor and weight matrices, respectively, of the set of equivalent observations.

    In the second derivation procedure we shall apply the minimum criterion directly. Recall from Chapter 4 that the minimum criterion for observations with a weight matrix W is

    φ = vᵗWv = minimum.    (9-17)

    Although in Chapter 4 we restricted ourselves to uncorrelated observations for the sake of simplicity, this expression applies to all cases regardless of the structure of the weight matrix. Thus, W may be a full matrix, for which case the observations are correlated with equal or unequal precision; or it may be a diagonal matrix for uncorrelated observations with unequal precision; or it may be a scalar or identity matrix reflecting uncorrelated observations with equal precision.

    The linear, or linearized, condition equations are

    Av + BΔ = f.    (9-18)

    When they are originally linear, then from Eq. (9-8)

    f = d − Aℓ.    (9-19)

    The more common case is to have the conditions nonlinear, with c functions of the general form

    F = F(ℓ, x) = 0    (9-20)

    in which ℓ is the vector of n observations and x the vector of u parameters. Thus, the linearization of Eq. (9-20) leads to Eq. (9-18), with


     

    A = ∂F/∂ℓ,   B = ∂F/∂x,   f = −F(ℓ, x0),    (9-21)

    where the three matrices A, B, and f are evaluated at the numerical values ℓ for the observations and a set of u approximate values x0 for the unknown parameters.

    In a manner similar to the technique of adjustment of observations only (see Section 4.4), a vector k of c Lagrange multipliers is used. The minimum criterion then becomes

    φ' = vᵗWv − 2kᵗ(Av + BΔ − f) = minimum.    (9-22)

    To achieve a minimum, the partial derivatives of φ' with respect to both v and Δ must be equated to zero, or

    ∂φ'/∂v = 2vᵗW − 2kᵗA = 0

    ∂φ'/∂Δ = −2kᵗB = 0,

    which after transposition and rearrangement become

    Wv = Aᵗk    (9-23)

    Bᵗk = 0.    (9-24)

    Solving Eq. (9-23) for v, we get

    v = W⁻¹Aᵗk = QAᵗk,    (9-25)

    and substituting into Eq. (9-18), we get

    (AQAᵗ)k + BΔ = f,

    which, in view of Eq. (9-11), becomes

    Qck = f − BΔ.    (9-26)

    Solving Eq. (9-26) for k yields

    k = Qc⁻¹(f − BΔ) = Wc(f − BΔ).    (9-27)

    Finally, substituting into Eq. (9-24) gives

    BᵗWc(f − BΔ) = 0


     

    or

    (BᵗWcB)Δ = BᵗWcf,    (9-28)

    i.e.,

    NΔ = t,    (9-29)

    where

    N = BᵗWcB = Bᵗ(AQAᵗ)⁻¹B

    and

    t = BᵗWcf = Bᵗ(AQAᵗ)⁻¹f,

    which are precisely the relationships given by Eqs. (9-14) and (9-15). Equation (9-16) can then be used to solve for Δ.

    If the condition equations are nonlinear, the vector estimate of the parameters is

    x̂ = x0 + Δ.    (9-30)

    The vector of residuals, v, is obtained by substituting the righthand side of Eq. (9-27) for k in Eq. (9-25):

    v = QAᵗWc(f − BΔ).    (9-31)

    The vector of adjusted observations, ℓ̂, is obtained by adding v to the observation vector, ℓ, i.e.,

    ℓ̂ = ℓ + v.    (9-32)

    EXAMPLE 9-2

    Using the data of Example 9-1, calculate the adjusted coordinates of the three points.

    Solution

    σ0² = 0.09 cm²  and  Σ = diag(0.04, 0.10, 0.04, 0.08, 0.04, 0.08) cm².

     


     

    Thus

    Q = (1/σ0²)Σ = diag(0.444, 1.111, 0.444, 0.889, 0.444, 0.889).

     

    Values for the final iteration are

    A = | −0.4518  1   0        0   0        0 |
        |  0       0  −0.4518   1   0        0 |
        |  0       0   0        0  −0.4518   1 |

    B = | −2  −1 |      f = | −0.0393 |      Δ ≈ | 0 |
        | −4  −1 |          |  0.0643 |          | 0 |
        | −6  −1 |          | −0.0321 |

    Wc = | 0.8151  0  0 |
         | 0       1  0 |
         | 0       0  1 |.

    Thus

    v = QAᵗWc(f − BΔ) = QAᵗWcf = |  0.01 |
                                 | −0.04 |
                                 | −0.01 |  cm
                                 |  0.06 |
                                 |  0.01 |
                                 | −0.03 |

     

    and

    ℓ̂ = ℓ + v = | 2.00 |   |  0.01 |   | 2.01 |
                | 3.20 |   | −0.04 |   | 3.16 |
                | 4.00 | + | −0.01 | = | 3.99 |  cm,
                | 4.00 |   |  0.06 |   | 4.06 |
                | 6.00 |   |  0.01 |   | 6.01 |
                | 5.00 |   | −0.03 |   | 4.97 |

       

    i.e., the adjusted coordinates of the three points are


     

    POINT x(cm) y(cm)

    1 2.01 3.16

    2 3.99 4.06

    3 6.01 4.97

    The reader can verify that these adjusted positions lie on the line y = 0.452x + 2.26.

    In addition to the estimates themselves, it is equally important to calculate the precision of the estimates. This is discussed in the following section.

    9.3. PRECISION ESTIMATION

    To derive QΔΔ, the cofactor matrix of the parameter estimates Δ, we first substitute Eq. (9-15) into Eq. (9-16) to get

    Δ = N⁻¹BᵗWcf.    (9-33)

    The vector f is then replaced by d − Aℓ, according to Eq. (9-19), to give

    Δ = N⁻¹BᵗWc(d − Aℓ).    (9-34)

    The only random vector in Eq. (9-34) is ℓ. Its cofactor matrix is Q. Thus, the Jacobian of Δ with respect to ℓ is

    JΔℓ = −N⁻¹BᵗWcA    (9-35)

    and from the propagation law expressed by Eq. (6-28), we get

    QΔΔ = JΔℓQJΔℓᵗ
        = (N⁻¹BᵗWcA)Q(N⁻¹BᵗWcA)ᵗ
        = N⁻¹BᵗWc(AQAᵗ)WcBN⁻¹
        = N⁻¹,    (9-36)

    since Wc and N are symmetric and, from Eqs. (9-13) and (9-14), Wc⁻¹ = AQAᵗ and N = BᵗWcB, respectively.

    If the condition equations are originally linear, then Δ is the vector of estimated parameters, and QΔΔ, given by Eq. (9-36), is the corresponding cofactor matrix. If, however, the condition equations are nonlinear, they are linearized by a series expansion


     

    (see Chapter 2), and Δ is a vector of parameter corrections to be added to the approximations x0. It is important, then, to derive the cofactor matrix Qxx for the final vector of estimates x. Equation (9-30) indicates that the final estimate is the sum of x0 and Δ. If x0 is taken as the parameter approximation at the beginning of the last iteration of the least squares solution and Δ is the final correction, then

    Qxx = QΔΔ = N⁻¹,    (9-37)

    because x0 can be regarded as a vector of numerical constants the components of which have been determined by all iterations preceding the last.

    Equation (9-37) shows that the cofactor matrix of the parameters estimated by least squares turns out to be simply the inverse of the normal equations coefficient matrix N. Thus, the precision of the parameter estimates is a byproduct of the calculation of the parameter estimates themselves.

    While Eq. (9-37) gives the cofactor matrix, or relative covariance matrix, of the estimated parameters, it is more often necessary to find the absolute precision of the parameter estimates, i.e., their covariance matrix, Σxx. If the reference variance, σ0², is known beforehand (a priori), then it is a straightforward matter to calculate Σxx:

    Σxx = σ0²Qxx = σ0²N⁻¹.    (9-38)

        (9-38) 

    EXAMPLE 9-3

    With reference to Example 9-1, calculate the covariance matrix for the two parameter

    estimates  â and b̂ .

    Solution

    From Example 9-1, the cofactor matrix of the parameter estimates is

    Qxx = QΔΔ = N⁻¹ = |  0.1387  −0.5729 |
                      | −0.5729   2.7219 |.

    The a priori reference variance σ0² is 0.09 cm². Thus

    Σxx = σ0²N⁻¹ = |  0.0125  −0.0516 |
                   | −0.0516   0.2450 |.
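The inversion and scaling in this example can be checked with a few lines (the N values are those of the final iteration of Example 9-1):

```python
# Invert N (2x2) and scale by the a priori reference variance to get the
# covariance matrix of a-hat and b-hat, as in Eq. (9-38).
n11, n12, n22 = 55.260, 11.630, 2.815   # final N from Example 9-1
sigma0_sq = 0.09                        # a priori reference variance (cm^2)

det = n11 * n22 - n12 * n12
q_aa, q_ab, q_bb = n22 / det, -n12 / det, n11 / det   # Q = N^{-1}

cov_aa = sigma0_sq * q_aa
cov_ab = sigma0_sq * q_ab
cov_bb = sigma0_sq * q_bb
print(round(cov_aa, 4), round(cov_ab, 4), round(cov_bb, 4))
```

This reproduces the entries 0.0125, −0.0516, and 0.2450 quoted above.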

    If, however, σ0² is not known beforehand, an estimate for it, σ̂0², can be calculated a posteriori from the results of the adjustment:

    σ̂0² = vᵗWv/(c − u) = vᵗWv/r,    (9-39)


     

    where

    v is the vector of observational residuals;
    W is the a priori weight matrix of the observations;
    c is the number of condition equations;
    u is the number of parameters;
    r is the redundancy (degrees of freedom), which is equal to n − n0, n0 being the total number of observations required to define the underlying model uniquely.*

    The a posteriori covariance matrix of the parameter estimates is then

    Σ̂xx = σ̂0²Qxx = σ̂0²N⁻¹.    (9-40)

    The a priori value of σ0², if given, can be statistically tested by using σ̂0² in place of S² and r in place of n − 1 in Section 8.11, Chapter 8. When σ̂0² is consistent with σ0², the latter should always be used in calculating covariance matrices such as Σxx. This is because the value of σ̂0² is only one estimate from one data set with limited redundancy, while σ0² is presumed to be far better known. Should σ̂0² turn out to be inconsistent with σ0², then several steps are taken to determine the reason. [This topic is outside the scope of this book; for those interested, see Mikhail (1976).]

    To evaluate the quadratic form vᵗWv, we use the relationship v = QAᵗWc(f − BΔ) of Eq. (9-31). Thus,

    vᵗWv = (f − BΔ)ᵗWcAQᵗWQAᵗWc(f − BΔ),

    which reduces to

    vᵗWv = fᵗWcf − tᵗΔ    (9-41)

    if we note that Q and Wc are symmetric, that W = Q⁻¹, and that Wc⁻¹ = AQAᵗ, N = BᵗWcB, t = BᵗWcf, and Δ = N⁻¹t, from Eqs. (9-13) through (9-16), respectively.

    When the condition equations are nonlinear, the iterative procedure is usually carried out until the final value of Δ is so small as to be essentially zero. Hence, for the nonlinear case,

    Eq. (9-41) reduces to

    vᵗWv = fᵗWcf.    (9-42)
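The identity can be verified numerically with the final-iteration values of Example 9-1 (Δ is essentially zero there, so vᵗWv should equal fᵗWcf); a sketch, with Wc taken here as the exact inverse (AQAᵗ)⁻¹:

```python
# Final-iteration values from Example 9-1.
a = 0.4518
q = [0.444, 1.111, 0.444, 0.889, 0.444, 0.889]   # diagonal of Q (= Sigma/0.09)
f = [-0.0393, 0.0643, -0.0321]

# Qc = A Q A^t is diagonal with entries a^2*qx + qy; Wc = Qc^{-1}.
qc = [a * a * q[2 * i] + q[2 * i + 1] for i in range(3)]
wc = [1.0 / c for c in qc]

k = [w * fi for w, fi in zip(wc, f)]             # k = Wc f  (Eq. 9-27, delta = 0)
v = []
for i in range(3):                               # v = Q A^t k  (Eq. 9-25)
    v.append(q[2 * i] * (-a) * k[i])             # x-residual of point i
    v.append(q[2 * i + 1] * k[i])                # y-residual of point i

vtWv = sum(vi * vi / qi for vi, qi in zip(v, q)) # W = Q^{-1} (diagonal)
ftWcf = sum(w * fi * fi for w, fi in zip(wc, f))
sigma0_hat_sq = ftWcf / (3 - 2)                  # Eq. (9-39): r = c - u = 1
assert abs(vtWv - ftWcf) < 1e-12                 # Eq. (9-42) holds
```

With one degree of freedom, the a posteriori estimate σ̂0² ≈ 0.0066 cm² is consistent in order of magnitude with the a priori value 0.09 cm² only after the scaling conventions of the example are taken into account; the point of the sketch is the exact agreement of the two quadratic forms.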

    *It is interesting to note that when n0 = 1 and W = I, Eq. (9-39) reduces to

    σ̂0² = vᵗv/(n − 1) = Σv²/(n − 1),

    the familiar expression for the sample variance.


     

    Two other cofactor matrices are of interest: Qvv, the cofactor matrix of the residuals; and, more important, Qℓ̂ℓ̂, the cofactor matrix of the adjusted observations.

    To derive Qvv we note from Eqs. (9-27), (9-15), and (9-19) that

    k = Wc(f − BN⁻¹t) = Wcf − WcBN⁻¹BᵗWcf = Wc(I − BN⁻¹BᵗWc)(d − Aℓ),    (9-43)

    and from Eq. (9-25) that

    v = QAᵗk.

    Applying the general law of cofactor propagation to Eq. (9-43), and noting that d is a vector of constants, we obtain

    Qkk = Wc(I − BN⁻¹BᵗWc)(AQAᵗ)(I − BN⁻¹BᵗWc)ᵗWc,

    which reduces to

    Qkk = Wc(I − BN⁻¹BᵗWc)    (9-44)

    since I − BN⁻¹BᵗWc is idempotent.* Applying the general law of cofactor propagation to Eq. (9-25), we obtain

    Qvv = QAᵗQkkAQ = QAᵗWc(I − BN⁻¹BᵗWc)AQ.    (9-45)

    To derive Qℓ̂ℓ̂, we note from Eqs. (9-32), (9-25), and (9-43) that

    ℓ̂ = ℓ + QAᵗWc(I − BN⁻¹BᵗWc)(d − Aℓ) = [I − QAᵗWc(I − BN⁻¹BᵗWc)A]ℓ + c,    (9-46)

    where c = QAᵗWc(I − BN⁻¹BᵗWc)d, a vector of constants. Applying the general law of cofactor propagation to Eq. (9-46), we get

    Qℓ̂ℓ̂ = [I − QAᵗWc(I − BN⁻¹BᵗWc)A]Q[I − QAᵗWc(I − BN⁻¹BᵗWc)A]ᵗ,

    which reduces to

    Qℓ̂ℓ̂ = Q − QAᵗWc(I − BN⁻¹BᵗWc)AQ.    (9-47)

    *An idempotent matrix has the property that it is equal to its square.


     

    It is important to recognize from Eqs. (9-45) and (9-47) that

    Qℓ̂ℓ̂ = Q − Qvv,    (9-48)

    which is the same relationship as given by Eqs. (6-46) and (6-61).

    EXAMPLE 9-4

    With reference to the problem of Examples 9-1 and 9-2, calculate the covariance matrix for the adjusted coordinates.

    Solution

    From Examples 9-1 and 9-3,

    Q = diag(0.444, 1.111, 0.444, 0.889, 0.444, 0.889)

    A = | −0.4518  1   0        0   0        0 |
        |  0       0  −0.4518   1   0        0 |
        |  0       0   0        0  −0.4518   1 |

    B = | −2  −1 |     Wc = | 0.8151  0  0 |     N⁻¹ = |  0.1387  −0.5729 |
        | −4  −1 |          | 0       1  0 |            | −0.5729   2.7219 |.
        | −6  −1 |          | 0       0  1 |

     

    Thus,

    BN⁻¹BᵗWc = |  0.8030   0.3941  −0.1969 |
               |  0.3212   0.3579   0.3217 |
               | −0.1605   0.3217   0.8403 |

    and

    I − BN⁻¹BᵗWc = |  0.1970  −0.3941   0.1969 |
                   | −0.3212   0.6421  −0.3217 |
                   |  0.1605  −0.3217   0.1597 |


     

    and

    QAᵗ = | −0.2006   0        0      |
          |  1.111    0        0      |
          |  0       −0.2006   0      |
          |  0        0.889    0      |
          |  0        0       −0.2006 |
          |  0        0        0.889  |.

     

    Using Eq. (9-45), the cofactor matrix of the residual vector v can be calculated:

    Qvv = QAᵗWc(I − BN⁻¹BᵗWc)AQ

        = |  0.006                                          |
          | −0.036   0.198                 Symmetric        |
          | −0.013   0.071   0.026                          |
          |  0.057  −0.317  −0.114   0.507                  |
          |  0.006  −0.036  −0.013   0.057   0.006          |
          | −0.029   0.159   0.057  −0.254  −0.028   0.126  |.

     

    From Eq. (9-48) we can get the cofactor matrix of the adjusted coordinates, ℓ̂:

    Qℓ̂ℓ̂ = Q − Qvv

        = |  0.438                                          |
          |  0.036   0.913                 Symmetric        |
          |  0.013  −0.071   0.418                          |
          | −0.057   0.317   0.114   0.382                  |
          | −0.006   0.036   0.013  −0.057   0.438          |
          |  0.029  −0.159  −0.057   0.254   0.028   0.763  |.

     

    Finally, since σ0² = 0.09 cm², the covariance matrix of the adjusted coordinates is

    Σℓ̂ℓ̂ = σ0²Qℓ̂ℓ̂ = |  0.0394                                              |
                    |  0.0032   0.0822                  Symmetric          |
                    |  0.0012  −0.0064   0.0376                            |
                    | −0.0051   0.0285   0.0103   0.0344                   |
                    | −0.0005   0.0032   0.0012  −0.0051   0.0394          |
                    |  0.0026  −0.0143  −0.0051   0.0229   0.0025   0.0687 |  cm².

     
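The 6×6 propagation of this example is compact enough to check in plain Python; a sketch (input values rounded as in the text, so results agree to about three decimals):

```python
# Minimal matrix helpers (no external library).
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

a = 0.4518
qdiag = [0.444, 1.111, 0.444, 0.889, 0.444, 0.889]
Q = [[qdiag[i] if i == j else 0.0 for j in range(6)] for i in range(6)]
A = [[-a, 1, 0, 0, 0, 0], [0, 0, -a, 1, 0, 0], [0, 0, 0, 0, -a, 1]]
B = [[-2, -1], [-4, -1], [-6, -1]]
Wc = [[0.8151, 0, 0], [0, 1, 0], [0, 0, 1]]
Ninv = [[0.1387, -0.5729], [-0.5729, 2.7219]]

# P = B N^{-1} B^t Wc and the idempotent factor (I - P):
P = matmul(matmul(matmul(B, Ninv), transpose(B)), Wc)
IP = [[(1.0 if i == j else 0.0) - P[i][j] for j in range(3)] for i in range(3)]

# Qvv = Q A^t Wc (I - P) A Q   (Eq. 9-45)
Qvv = matmul(matmul(matmul(matmul(matmul(Q, transpose(A)), Wc), IP), A), Q)

# Diagonal of Q_ll-hat = Q - Qvv   (Eq. 9-48), to compare with the text.
Qll_diag = [Q[i][i] - Qvv[i][i] for i in range(6)]
print([round(d, 3) for d in Qll_diag])
```

The computed diagonal matches the text's 0.438, 0.913, 0.418, 0.382, 0.438, 0.763 to within rounding of the inputs.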


     

    9.4. SPECIAL CASES 

    In Chapter 4, two specific techniques of least squares adjustment were presented. These techniques are:

    1. Adjustment of indirect observations.
    2. Adjustment of observations only.

    After studying the general case covered in this chapter, it should be clear that the two techniques of Chapter 4 are comparatively simpler. They are, in fact, two special cases of the general technique, as was indicated in Section 9.2.

    Adjustment of Indirect Observations

    With reference to Section 9.2, this special case is achieved when u (the number of parameters carried in the adjustment) is equal to its upper limit, n0 (the minimum number of observations necessary for a unique determination), making c (the number of condition equations) equal to its upper limit, n (the number of observations).

    According to Eq. (9-18), the initial condition equations can be expressed in general linear or linearized form as

    A0v + B0Δ = f0,    (9-49)

    where A0, B0, and f0 are the initial matrix and vector inputs.

    When c = n, the A0 matrix becomes a square n×n matrix. Furthermore, since the condition equations are independent, A0 must be nonsingular, i.e., A0 must have an inverse. Thus, if Eq. (9-49) is premultiplied through by A0⁻¹, and B and f are set equal to A0⁻¹B0 and A0⁻¹f0, respectively, we get

    v + BΔ = f,    (9-50)

    which is identical to Eq. (9-2), the form of the condition equations for adjustment of indirect observations. The distinctive feature of this equation is that the coefficient matrix of v is an identity matrix.

    With A replaced by the identity matrix, we have, from Eqs. (9-11), (9-13), (9-14), and (9-15),

    Qc = IQIᵗ = Q    (9-51)

    Wc = Qc⁻¹ = Q⁻¹ = W    (9-52)

    N = BᵗWcB = BᵗWB    (9-53)


     

    and

    t = BᵗWcf = BᵗWf.    (9-54)

    Equations (9-53) and (9-54) are identical to Eqs. (4-28) and (4-29), respectively, developed in Chapter 4.

    When A = I and Wc = Q⁻¹, as they are in this special case, Eq. (9-31) reduces to

    v = f − BΔ,    (9-55)

    which is identical to Eq. (4-32), an obvious rearrangement of the condition equation.

    If N is calculated using Eq. (9-53), QΔΔ = N⁻¹, as given by Eq. (9-36), is identical to the cofactor matrix for Δ obtained in Eq. (6-43).

Finally, with A = I and W_c = Q⁻¹ = W, Eq. (9-45) reduces to

    Q_vv = Q − B N⁻¹ Bᵗ,    (9-56)

which is identical to Eq. (6-44); and Eq. (9-47) reduces to

    Q_ℓ̂ℓ̂ = B N⁻¹ Bᵗ,    (9-57)

which is identical to Eq. (6-45). Thus, in every respect, the method of least squares adjustment of indirect observations is a special case of the general procedure.

    Adjustment of Observations Only

With reference once more to Section 9.2, this special case is achieved when u is equal to its lower limit, zero, making c equal to its lower limit, r (the redundancy). With no parameters carried in the adjustment, the BΔ term in Eq. (9-18) vanishes, leaving

    Av = f,    (9-58)

which is identical to Eq. (9-3), the form of the condition equations for adjustment of observations only.

To make the BΔ term vanish we can simply set B = 0. If this is done, Eq. (9-27) reduces to

    k = W_c f,    (9-59)

which is identical to Eq. (4-52). Equation (9-25), which is

    v = Q Aᵗ k,

    General Least Squares Adjustment 255


    needs no reduction since it is already identical to Eq. (4-48).

As for the cofactor matrices, when B = 0, Eq. (9-45) reduces to

    Q_vv = Q Aᵗ W_c A Q,    (9-60)

which is identical to Eq. (6-58), and Eq. (9-47) reduces to

    Q_ℓ̂ℓ̂ = Q − Q Aᵗ W_c A Q,    (9-61)

which is identical to Eq. (6-60). Obviously, the relationship expressed by Eq. (9-48),

    Q_ℓ̂ℓ̂ = Q − Q_vv,

is consistent with Eqs. (9-60) and (9-61).

This should make it clear that the technique of adjustment of observations only is also a special case of the general procedure.
In principle, a problem that can be solved by the general method of adjustment can also be solved by using the special procedures, provided appropriate transformations are made. The necessary transformations may be relatively complicated, however, and it is not advocated that they be attempted. Instead, it is suggested that the most appropriate of the three available techniques be selected to solve a specific adjustment problem.

    Selection of the most appropriate technique is based on experience.

    9.5. SUMMARY OF SYMBOLS AND EQUATIONS

    The basic symbols and equations of the least squares method are summarized for quick

    reference when solving adjustment problems.

    Symbols

ℓ is the vector of observations.
ℓ̂ is the vector of adjusted observations.
v is the vector of residuals.
x is the vector of parameters.
x⁰ is the vector of approximate values for x.
Δ is the vector of parameter estimates (linear case) or of parameter corrections (nonlinear case).
x̂ is the vector of parameter estimates (nonlinear case).
d is a vector of constants.
f is the vector of constant terms of the condition equations.


ℓ_c is the vector of equivalent observations.
n is the number of given measurements, and thus the number of elements in ℓ, ℓ̂, and v.
n₀ is the number of observations necessary to specify uniquely the model that underlies the adjustment problem.
r is the redundancy, or the number of statistical degrees of freedom.
u is the number of parameters carried in the adjustment, and thus the number of elements in Δ, x, x⁰, and x̂.
c is the number of independent condition equations, and the number of elements in d, f, and ℓ_c.
A is a c×n coefficient matrix for the observations.
B is a c×u coefficient matrix for the parameters.
Q is the n×n a priori cofactor matrix of ℓ.
W is the n×n a priori weight matrix of ℓ.
Q_c is the c×c cofactor matrix of ℓ_c.
W_c is the c×c weight matrix of ℓ_c.
N is the u×u coefficient matrix of the normal equations.
t is the u×1 vector of "constants" in the normal equations.
k is the c×1 vector of Lagrange multipliers.
Q_ΔΔ, Q_xx, Q_vv, and Q_ℓ̂ℓ̂ are the cofactor matrices of Δ, x, v, and ℓ̂, respectively.
σ₀² is the reference variance (a priori value).
σ̂₀² is the least squares estimate of the reference variance (a posteriori value).
Σ_ΔΔ, Σ_xx, Σ_vv, and Σ_ℓ̂ℓ̂ are the covariance matrices of Δ, x, v, and ℓ̂, respectively.

    Equations, General Case

    r = n − n₀    (9-4)
    c = r + u    (9-5)
    0 ≤ u ≤ n₀    (9-6)
    r ≤ c ≤ n    (9-7)

The general linear or linearized form of the condition equations is

    Av + BΔ = f,    (9-18)

where for the linear case

    f = d − Aℓ,    (9-19)


and for the nonlinear case

    A = ∂F/∂ℓ,  B = ∂F/∂x,  f = −F,    (9-21)

in which

    F = F(ℓ, x⁰).    (9-20)

The least squares solution is

    Q_c = A Q Aᵗ    (9-11)
    W_c = Q_c⁻¹ = (A Q Aᵗ)⁻¹    (9-13)
    N = Bᵗ W_c B = Bᵗ (A Q Aᵗ)⁻¹ B    (9-14)
    t = Bᵗ W_c f = Bᵗ (A Q Aᵗ)⁻¹ f    (9-15)
    Δ = N⁻¹ t    (9-16)
    x̂ = x⁰ + Δ    (9-30)
    v = Q Aᵗ W_c (f − BΔ)    (9-31)
    ℓ̂ = ℓ + v.    (9-32)

The cofactor matrices are

    Q_ΔΔ = N⁻¹    (9-36)
    Q_xx = Q_ΔΔ = N⁻¹    (9-37)
    Q_ℓ̂ℓ̂ = Q − Q Aᵗ W_c (I − B N⁻¹ Bᵗ W_c) A Q    (9-47)
    Q_ℓ̂ℓ̂ = Q − Q_vv.    (9-48)

The estimate of the reference variance is

    σ̂₀² = vᵗWv / r = vᵗWv / (c − u),    (9-39)

where

    vᵗWv = fᵗ W_c f − tᵗ Δ.    (9-41)

The covariance matrices are

    Σ_xx = σ₀² Q_xx = σ₀² N⁻¹, if σ₀² is known a priori;    (9-38)


    Σ̂_xx = σ̂₀² Q_xx = σ̂₀² N⁻¹, if σ₀² is not known a priori;    (9-40)
    Σ_ℓ̂ℓ̂ = σ₀² Q_ℓ̂ℓ̂, if σ₀² is known a priori;
    Σ̂_ℓ̂ℓ̂ = σ̂₀² Q_ℓ̂ℓ̂, if σ₀² is not known a priori.
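The general-case equations above translate directly into code. The function below is a minimal sketch of a single linearized pass (iteration for nonlinear problems is omitted); the function name and any test values are mine, not the book's:

```python
# Sketch of one pass of the general least squares solution,
# Eqs. (9-11)-(9-16), (9-30)-(9-32), and (9-39).
import numpy as np

def general_adjustment(A, B, f, Q, ell, x0):
    """Solve Av + B*Delta = f in the least squares sense."""
    Qc = A @ Q @ A.T                      # (9-11) cofactors of equivalent observations
    Wc = np.linalg.inv(Qc)                # (9-13) weights of equivalent observations
    N = B.T @ Wc @ B                      # (9-14) normal-equation matrix
    t = B.T @ Wc @ f                      # (9-15) normal-equation constants
    delta = np.linalg.solve(N, t)         # (9-16) parameter estimates/corrections
    x_hat = x0 + delta                    # (9-30)
    v = Q @ A.T @ Wc @ (f - B @ delta)    # (9-31) residuals
    ell_hat = ell + v                     # (9-32) adjusted observations
    r = A.shape[0] - B.shape[1]           # redundancy r = c - u
    W = np.linalg.inv(Q)
    s0_sq = float(v @ W @ v) / r          # (9-39) a posteriori reference variance
    return x_hat, v, ell_hat, s0_sq
```

By construction the returned residuals satisfy the condition equations exactly: substituting (9-31) gives A v + B Δ = f.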

    Equations, Special Case-Adjustment of Indirect Observations

    u = n₀ (upper limit)    (9-6)
    c = n (upper limit)    (9-7)

The n condition equations are of the form

    v + BΔ = f.    (9-50)

The least squares solution is

    W_c = Q_c⁻¹ = Q⁻¹ = W    (9-52)
    N = Bᵗ W_c B = Bᵗ W B    (9-53)
    t = Bᵗ W_c f = Bᵗ W f    (9-54)
    Δ = N⁻¹ t    (9-16)
    x̂ = x⁰ + Δ  (nonlinear case)    (9-30)
    v = f − BΔ    (9-55)
    ℓ̂ = ℓ + v.    (9-32)

The cofactor matrices are

    Q_ΔΔ = N⁻¹    (9-36)
    Q_xx = Q_ΔΔ = N⁻¹    (9-37)
    Q_vv = Q − B N⁻¹ Bᵗ    (9-56)
    Q_ℓ̂ℓ̂ = B N⁻¹ Bᵗ.    (9-57)

    Equations, Special Case-Adjustment of Observations Only

    u = 0 (lower limit)    (9-6)
    c = r (lower limit)    (9-7)


    The r condition equations are of the form

    Av = f.    (9-58)

The least squares solution is

    Q_c = A Q Aᵗ    (9-11)
    W_c = Q_c⁻¹    (9-13)
    k = W_c f    (9-59)
    v = Q Aᵗ k    (9-25)
    ℓ̂ = ℓ + v.    (9-32)

    The cofactor matrices are

    Q_vv = Q Aᵗ W_c A Q    (9-60)
    Q_ℓ̂ℓ̂ = Q − Q_vv.    (9-48)

    PROBLEMS

9-1 Figure 9-1 depicts a plane isosceles triangle ABC. The sides and height of the triangle are measured; the observations are ℓ₁, ℓ₂, and ℓ₃, as shown. If the base x is the only parameter to be carried in the adjustment, give the model elements

    Fig. 9-1. 


(n, n₀, r, u, and c) and write the condition equations in the linearized form Av + BΔ = f.

9-2 In Fig. 9-2, distances OA, AB, BC, and CO are observed. The observed values are ℓ₁, ℓ₂, ℓ₃, and ℓ₄, respectively. All angles are held fixed. It is required to find least squares estimates for the coordinates of C (x₀ and y₀). (a) Write suitable condition equations for this problem in the form Av = f. (b) Write the condition equations in the form Av + BΔ = f, carrying x₀ and y₀ as the parameters.

9-3 Two sides and the three angles of the triangle in Fig. 9-3 are measured. The measured values are ℓ₁, ℓ₂, α₁, α₂, and α₃, as shown. Least squares estimates for side x and altitude h are required. Write suitable condition equations for this problem in the form Av + BΔ = f, carrying x and h as the parameters.

9-4 If in Problem 9-1 ℓ₁ = 1000.00 m, ℓ₂ = 1000.10 m, and ℓ₃ = 800.25 m, all uncorrelated and with equal precision, find the least squares estimate for x.

    Fig. 9-2.

    Fig. 9-3.


9-5 If in Problem 9-2 ℓ₁ = 1000.20 m, ℓ₂ = 500.55 m, ℓ₃ = 707.75 m, and ℓ₄ = 1118.60 m, and the covariance matrix for ℓ = [ℓ₁ ℓ₂ ℓ₃ ℓ₄]ᵗ is

        | 200  100    0    0 |
    Σ = | 100  200    0    0 |  cm²
        |   0    0  100   50 |
        |   0    0   50  100 |

find the least squares estimates for x₀ and y₀. Also determine the principal dimensions and orientation of the standard error ellipse for the computed position of C.

9-6 With reference to the triangle in Problem 9-3, α₁ = 40°00'00", α₂ = 95°00'00", α₃ = 45°00'30", ℓ₁ = 1000.00 m, and ℓ₂ = 1550.00 m. The observations are uncorrelated, the standard deviation of each observed angle is 15", and the standard deviation of each observed side is 0.10 m. Find least squares estimates for x and h. Evaluate also the covariance matrix for x and h, and construct 95% confidence intervals for x and h.
9-7 With reference to Fig. 9-4, the following angles are measured:

    ℓ₁ = ANGLE AOB = 10°00'00"
    ℓ₂ = ANGLE BOC = 8°00'00"
    ℓ₃ = ANGLE AOC = 18°00'07"
    ℓ₄ = ANGLE COD = 5°00'00"
    ℓ₅ = ANGLE DOE = 12°00'00"
    ℓ₆ = ANGLE BOE = 25°00'12"

    Fig. 9-4.


The covariance matrix for the observation vector ℓ is

        | 4  2  0  0  0  0 |
        | 2  4  2  0  0  0 |
    Σ = | 0  2  4  2  0  0 |  (seconds of arc)²
        | 0  0  2  4  2  0 |
        | 0  0  0  2  4  2 |
        | 0  0  0  0  2  4 |

Find the least squares estimate for angle COD. Construct also a 99% confidence interval for the angle COD.
9-8 The following data are observed:

    x (m)    y (m)
    1.00     2.95
    2.05     3.05
    4.10     3.60
    4.85     4.30
    8.00     4.80
    9.10     5.25

All observations (x and y) are independent and have the same precision. Find least squares estimates for the parameters of the straight line y = ax + b fitted to the data. Find also the a posteriori estimate for the reference variance and use this value to estimate the standard deviation of each observation and the standard deviations of the estimates for a and b. Test at the 2% level of significance the hypothesis that b = 2.70 m.
9-9 Rework Example 4-8, Chapter 4, using the time as an observed variable as well as the altitude, with standard deviations 1.0 s and 20", respectively.
9-10 Calculate the covariance matrix for the least squares estimates of the maximum altitude and the time at which maximum altitude occurs, as determined in Problem 9-9. Evaluate also the standard deviations and coefficient of correlation for these least squares estimates.
9-11 The angles shown in Fig. 9-5 are measured with a theodolite. The observed values and their weights are

    ANGLE OBSERVED VALUE WEIGHT

    a 70°14'30" 1

     b 62°27'14" 2

    c 103°38'26" 2

d 61°52'04" 1
e 109°10'04" 1

    f 76°56'36" 1


    Fig. 9-5.

    (a) Use the method of least squares to determine adjusted values for these angles. (b)

    Construct 95% confidence intervals for angles b and f.

    9-12 The following data are given for the level net in Fig. 9-6

    LINE  FROM  TO  OBSERVED ELEVATION DIFFERENCE (m)  DISTANCE (km)
    1     A     B   +34.090                             2.90
    2     B     C   +15.608                             6.15
    3     C     A   -49.679                            17.32
    4     A     D   +16.010                            15.84
    5     D     B   +18.125                             9.22
    6     D     C   +33.704                            10.50

    Fig. 9-6.


The elevation of A is fixed at 324.120 m above mean sea level. The variance of each elevation difference is directly proportional to the distance.
(a) Determine least squares estimates for the elevations of B, C, and D and evaluate the cofactor matrix for these estimates. (b) Evaluate the a posteriori reference variance and construct 95% confidence intervals for the elevations of B, C, and D.


Applications in Plane Coordinate Surveys

10.1. INTRODUCTION

Many survey projects are based upon two-dimensional positioning within a plane rectangular coordinate system. This chapter addresses the application of least squares adjustment in such plane coordinate surveys. It includes formulation and linearization of the three basic condition equations (distance, azimuth, and angle) encountered in the adjustment of plane coordinates by the method of indirect observations (also known as the method of variation of coordinates), least squares position adjustment for a typical procedure employed in plane coordinate surveying, and least squares transformation of coordinates from one plane rectangular system to another.

    10.2. THE DISTANCE CONDITION AND ITS LINEARIZATION 

The adjusted distance, Ŝ_ij, between two points i and j is given by

    Ŝ_ij = [(X_j − X_i)² + (Y_j − Y_i)²]^(1/2),    (10-1)

where (X_i, Y_i) and (X_j, Y_j) are the rectangular plane coordinates of i and j, respectively. This is the distance condition.
Linearization of Eq. (10-1) according to Eq. (2-18) is

    Ŝ_ij = S⁰_ij + (∂S_ij/∂X_i) dX_i + (∂S_ij/∂Y_i) dY_i + (∂S_ij/∂X_j) dX_j + (∂S_ij/∂Y_j) dY_j    (10-2)

where

    S⁰_ij = [(X⁰_j − X⁰_i)² + (Y⁰_j − Y⁰_i)²]^(1/2),    (10-3)


noting that (X⁰_i, Y⁰_i) and (X⁰_j, Y⁰_j) are approximate values for the coordinates of i and j, respectively.

Equation (10-2) assumes that all four coordinates are unknown variables for which approximations are necessary. If one of the two points is a control point, the coordinates of this point would be known and would thus be taken as constants. In this case, Eq. (10-2) would include only two partial derivatives, namely, the derivatives with respect to the two coordinates of the unknown point. For example, if point j is a control point, Eq. (10-2) reduces to

    Ŝ_ij = S⁰_ij + (∂S_ij/∂X_i) dX_i + (∂S_ij/∂Y_i) dY_i    (10-4)

    in which

    S⁰_ij = [(X_j − X⁰_i)² + (Y_j − Y⁰_i)²]^(1/2),    (10-5)

noting that (X_j, Y_j) are the known coordinates of the control point j, and (X⁰_i, Y⁰_i) are approximate values for the coordinates of the unknown point i. For the remainder of this section we will consider the general case of having all four coordinates as unknowns.

Now, according to Eq. (9-32), the adjusted distance is

    Ŝ_ij = S_ij + v_ij,    (10-6)

where S_ij is the observed value of the distance and v_ij is the corresponding residual. Thus, it follows from Eqs. (10-2) and (10-6) that

    v_ij = (∂S_ij/∂X_i) dX_i + (∂S_ij/∂Y_i) dY_i + (∂S_ij/∂X_j) dX_j + (∂S_ij/∂Y_j) dY_j + S⁰_ij − S_ij.