
MAPPS/ASPRS 2006 Fall Conference November 6 – 10, 2006 * San Antonio, Texas

EXTRACTION OF ATTITUDE PARAMETERS OF MOVING TARGET FROM A SINGLE SET OF HIGH SPATIAL RESOLUTION SATELLITE IMAGERY

Zhen Xiong, Post Doctoral Fellow Yun Zhang, Associate Professor

Department of Geodesy & Geomatics Engineering, University of New Brunswick, 15 Dineen Drive, PO Box 4400, Fredericton, NB, Canada E3B 5A3

[email protected] [email protected]

ABSTRACT

The attitude parameters of a moving target include its position, speed, and direction. This information is very useful for transportation management, security surveillance, and military applications. This paper presents a new technique for extracting a moving target's attitude parameters from a single set of high spatial resolution optical satellite imagery. In this research, we use the time interval between the panchromatic image and the multi-spectral image to detect moving targets. We focus mainly on refining the satellite geometric sensor model for image registration and on the method of calculating a moving target's ground position from its image position. Our test results show that this method can deliver the position, speed, and direction of moving targets at a given moment. An experiment using a single set of Quickbird images to detect moving targets was conducted and the results are presented.

INTRODUCTION

Moving target detection is a fast-growing research field. Most techniques for moving target detection are based on radar [Liu Guoqing, et al., 2001; Nag, S., et al., 2003; Liu, C.-M. and Jen, C.-W., 1992], SAR [Dias, J.M.B., et al., 2003; Hong-Bo Sun, et al., 2002; Pettersson, M.I., 2004; Soumekh, M., 2002], or video [Munno, C.J., et al., 1993]. The platforms are almost always ground-based [Castellano, G., et al., 1999; Nag, S., et al., 2003; Munno, C.J., et al., 1993; Pettersson, M.I., 2004] or airborne [Liu Guoqing, et al., 2001; Hong-Bo Sun, et al., 2002; Soumekh, M., 2002]; space-borne platforms are seldom used. These techniques mainly serve security surveillance or military applications. Some of them only detect a moving target, without providing any information on the target's position and velocity. Depending on the equipment, different techniques have been adopted for detecting moving targets, such as using a generalized likelihood ratio as a threshold to decide which target is moving [Liu Guoqing, et al., 2001; Dias, J.M.B., et al., 2003; Pettersson, M.I., 2004]; using a filter for digital moving target detection [Nag, S., et al., 2003]; applying a fractional Fourier transform [Hong-Bo Sun, et al., 2002]; or utilizing frequency-domain spatio-temporal filtering of video and the spatio-temporal constraint error of image frame pairs to detect and track moving targets (e.g., personnel) in natural scenes in spite of low image contrast, changes in the target's infra-red image pattern, sensor noise, or background clutter [Munno, C.J., et al., 1993].

In this paper we introduce a new technique to detect moving targets, together with their positions and moving speeds, from a single set of high spatial resolution satellite imagery. Such a set consists of one panchromatic image and the corresponding multi-spectral imagery, as acquired by current high resolution satellites such as SPOT, IKONOS, and Quickbird. Although the time interval between the panchromatic and multi-spectral images is very small, if a target is moving, the two images record its change of position during this interval. So, theoretically, we can determine that the target is moving and even estimate its speed. We developed an algorithm to calculate the position and moving speed of a moving target that is captured by both the panchromatic and multi-spectral images at nearly the same time.


METHODOLOGY

To date, many satellites can acquire both panchromatic (PAN) and multi-spectral (MS) images simultaneously. However, because of the sensor arrangement, these satellites usually do not capture the panchromatic and multi-spectral images at exactly the same time; there is usually a very small time interval between the PAN and MS imagery. Therefore, if a ground target is moving, it is recorded at two different ground positions. That is to say, if we can calculate a target's ground position from its image coordinates, we obtain two different coordinates, from the PAN and MS images respectively, for the same moving target. From these two ground coordinates, we can then calculate the moving speed and moving direction of the target. This is the essence of our moving target detection technique.
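To make this concrete, the final step above can be sketched as a few lines of code. This is a minimal illustration under our own assumptions: the function name is ours, and the two ground positions are assumed to be projected (easting, northing) coordinates in metres, e.g. UTM.

```python
import math

def attitude_parameters(pan_xy, ms_xy, dt):
    """Moving distance (m), speed (km/h), and azimuth (degrees clockwise
    from grid north) of a target imaged at pan_xy on the PAN image and at
    ms_xy on the MS image, dt seconds apart."""
    dx = ms_xy[0] - pan_xy[0]           # easting change (m)
    dy = ms_xy[1] - pan_xy[1]           # northing change (m)
    distance = math.hypot(dx, dy)
    speed_kmh = distance / dt * 3.6     # m/s -> km/h
    azimuth = math.degrees(math.atan2(dx, dy)) % 360.0
    return distance, speed_kmh, azimuth
```

For example, a target displaced 10 m due north over a 0.2 s interval would be reported at 180 km/h with azimuth 0 degrees.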

However, because the time interval between the PAN and MS imagery is very small, less than one second, the position change of a moving target within this interval is also very small. If the error of the position calculation is greater than the position change, we can never detect the moving target correctly, so methods must be used to minimize the errors of the target position calculation. Moreover, because we generally use the satellite geometric sensor model to calculate the target's ground position, the accuracy of the object's ground position is directly related to that model. How to refine the satellite sensor model to improve target position accuracy is therefore our focus.

Satellite Geometric Sensor Model Refinement

Every satellite has a positioning system that can deliver a ground position (on an ellipsoid surface) for each image point. Different satellites have different physical geometric sensor models and different positioning accuracies. To date, some satellite image vendors, such as SPOT, deliver the physical sensor model directly to users, but others, such as IKONOS and Quickbird, do not, for reasons of commercial secrecy; these vendors only release rational polynomial coefficients (RPC) as their geometric sensor model (equation 1). Whether a direct physical sensor model or an RPC model, all satellite geometric sensor models have a definite absolute positioning error. For example, according to our experiments, before refinement with ground control points (GCPs), SPOT 1, 2, and 4 have about 300 meters of absolute positioning error, SPOT 5 has about 50 meters, and IKONOS and Quickbird have about 20 meters. Therefore, if we use such a model to calculate an object's ground position, the sensor model error will be propagated to this position. Once the position error is greater than the object's position change during the time interval between the PAN and MS images, the detection result can be absurd: it may show a static target as moving, or a moving target as static.

x = P1(X, Y, Z) / P2(X, Y, Z)    (1a)

y = P3(X, Y, Z) / P4(X, Y, Z)    (1b)

P(X, Y, Z) = Σ(i=0..m1) Σ(j=0..m2) Σ(k=0..m3) a_ijk X^i Y^j Z^k    (1c)

0 ≤ m1 ≤ 3,  0 ≤ m2 ≤ 3,  0 ≤ m3 ≤ 3,  m1 + m2 + m3 ≤ 3    (1d)

Here (x, y) are the image coordinates, (X, Y, Z) are the ground coordinates, and a_ijk are the polynomial coefficients.
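As an illustration, equation (1) can be evaluated as below. This is a minimal sketch, not the vendor RPC format: the coefficient ordering, the dict layout, and the absence of coordinate normalization (real RPC models normalize (X, Y, Z) and the image coordinates to [-1, 1] with vendor-supplied offsets and scales) are all simplifying assumptions.

```python
import itertools

def poly3(coeffs, X, Y, Z):
    """Evaluate a third-order 3-D polynomial: sum of a_ijk * X**i * Y**j * Z**k
    over the 20 exponent triples with i + j + k <= 3 (equation 1c with
    m1 = m2 = m3 = 3)."""
    terms = [t for t in itertools.product(range(4), repeat=3) if sum(t) <= 3]
    return sum(a * X**i * Y**j * Z**k for a, (i, j, k) in zip(coeffs, terms))

def rpc_ground_to_image(rpc, X, Y, Z):
    """Equations (1a)-(1b): image coordinates as ratios of two polynomials.
    rpc holds four 20-element coefficient lists keyed "P1".."P4"
    (a hypothetical layout for this sketch)."""
    x = poly3(rpc["P1"], X, Y, Z) / poly3(rpc["P2"], X, Y, Z)
    y = poly3(rpc["P3"], X, Y, Z) / poly3(rpc["P4"], X, Y, Z)
    return x, y
```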

For example, if a ground target moves from A to B, it is imaged on the PAN and MS images at P and P' respectively. If the satellite sensor model has an error, then using this model to calculate the object's ground position from P and P', we may obtain a single position C instead of A and B; in this case, the moving target will be detected as a static object. Conversely, the calculation may show a static target as moving. Therefore, before target detection, we must first improve the satellite positioning accuracy.

Many researchers have done much work on this topic. Kaichang Di et al [2003] recognized that there are two methods to improve the geopositioning accuracy of Ikonos Geo products.

(1) The first is to compute new rational polynomial coefficients (RPCs), with the vendor-provided RPC coefficients used as initial values in equations (1). If the rigorous sensor model is not available, high quality control


points cannot be produced by the rigorous sensor model. Consequently, this method requires a large number of GCPs to compute the new RPCs; in fact, more than 39 GCPs are required for third-order rational polynomials.

(2) The second method improves the ground coordinates derived from the vendor-provided RPCs by a polynomial correction whose parameters are determined by the GCPs:

X = a0 + a1·X_RF + a2·Y_RF + a3·Z_RF    (2a)

Y = b0 + b1·X_RF + b2·Y_RF + b3·Z_RF    (2b)

Z = c0 + c1·X_RF + c2·Y_RF + c3·Z_RF    (2c)

where (X, Y, Z) are the ground coordinates after correction, (X_RF, Y_RF, Z_RF) are the ground coordinates derived from the RPCs, and (a_i, b_i, c_i) are the correction coefficients.
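With at least four control points, the coefficients of equation (2) can be estimated by least squares. A minimal sketch (the array layout and function names are ours, not from the papers cited):

```python
import numpy as np

def fit_polynomial_correction(gcp_xyz, rf_xyz):
    """Least-squares estimate of the coefficients of equation (2).
    gcp_xyz: (n, 3) control coordinates (X, Y, Z); rf_xyz: (n, 3)
    RPC-derived coordinates (X_RF, Y_RF, Z_RF); n >= 4.
    Returns a (4, 3) matrix whose columns are (a0..a3), (b0..b3), (c0..c3)."""
    rf = np.asarray(rf_xyz, dtype=float)
    A = np.column_stack([np.ones(len(rf)), rf])   # rows: [1, X_RF, Y_RF, Z_RF]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(gcp_xyz, dtype=float), rcond=None)
    return coeffs

def apply_correction(coeffs, rf_xyz):
    """Apply the fitted equation (2) to RPC-derived coordinates."""
    rf = np.asarray(rf_xyz, dtype=float)
    return np.column_stack([np.ones(len(rf)), rf]) @ coeffs
```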

Grodecki and Dial [2003] proposed an RPC block adjustment in image space. They used denormalized RPC models, p and r, to express the object-space to image-space relationship, plus adjustable functions, Δp and Δr, which are added to the rational functions to capture the discrepancies between the nominal and the measured image-space coordinates. For each image point i on image j, the RPC block adjustment model is defined accordingly [Grodecki and Dial, 2003].

The methods proposed by both Di and Grodecki are essentially polynomial models; some are applied in the image domain and some in the object domain. Such polynomial models can generally correct the satellite sensor model effectively and obtain relatively good results. For IKONOS imagery, Di [2003] improved the ground position accuracy to 1 to 2 meters after sensor model refinement, and Grodecki's experimental results also showed the ground position accuracy improved to 1 to 2 meters [2003].

In this research, we cannot directly use any of the above equations to correct the sensor models, because we do not have any ground control points. On the other hand, absolute position accuracy is not our main concern: for moving target detection, it is the relative position accuracy that matters. We therefore use tie points to refine the sensor models, so as to improve the relative position accuracy.
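This tie-point refinement can be sketched as follows. It is a minimal version under our own assumptions: the inputs are the RPC-derived ground coordinates of the same tie points on both images, each image gets its own first-order correction in the form of equation (2), and the pseudo-control point is simply the mean of the two current positions.

```python
import numpy as np

def refine_with_tie_points(ms_rf, pan_rf, eps=0.01, max_iter=20):
    """Tie-point refinement sketch. ms_rf, pan_rf: (n, 3) RPC-derived ground
    coordinates (X_RF, Y_RF, Z_RF) of the same n tie points from the MS and
    PAN images (n >= 4). Each iteration takes the mean of the two current
    positions as a pseudo-control point, refits a first-order correction in
    the form of equation (2) for each image, and stops once the largest
    coordinate change between iterations drops below eps metres."""
    ms_rf = np.asarray(ms_rf, dtype=float)
    pan_rf = np.asarray(pan_rf, dtype=float)

    def fit_apply(src, dst):
        # least-squares fit of dst = [1, X, Y, Z] @ coeffs, then apply it
        A = np.column_stack([np.ones(len(src)), src])
        coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return A @ coeffs

    ms, pan = ms_rf, pan_rf
    for _ in range(max_iter):
        gcp = 0.5 * (ms + pan)               # pseudo-GCP per tie point
        new_ms = fit_apply(ms_rf, gcp)
        new_pan = fit_apply(pan_rf, gcp)
        shift = max(np.abs(new_ms - ms).max(), np.abs(new_pan - pan).max())
        ms, pan = new_ms, new_pan
        if shift < eps:
            break
    return ms, pan
```

After refinement, the residual MS-PAN disagreement at the tie points is what remains as relative position error.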

During moving target detection we use image coordinates to calculate ground coordinates, and our purpose is to obtain accurate ground coordinates of the moving target, so we use equation (2) to correct the ground coordinates. To calculate the correction coefficients (a_i, b_i, c_i) of equation (2), we need at least 4 ground control points. First, we use the image coordinates of each tie point on the MS and PAN images to calculate its ground coordinates respectively; the method used to calculate ground coordinates from image coordinates is illustrated in figure 2. Because these ground coordinates are derived from the satellite sensor model, they are (X_RF, Y_RF, Z_RF). We then take the mean of the two ground coordinates of each tie point as its ground control point (X, Y, Z), and use the set of (X, Y, Z) and (X_RF, Y_RF, Z_RF) pairs to estimate the correction coefficients (a_i, b_i, c_i). This is an iterative procedure: in each iteration, we calculate the ground coordinates based on the sensor model, correct them with equation (2), and obtain a result (X, Y, Z). After that, we compare every point's ground coordinates with the value from the previous iteration to obtain a relative position deviation between the two iterations. If the biggest position deviation is less than a threshold ε, the iteration stops.

Ground Coordinates Calculation

For many satellites, the geometric sensor model maps from ground to image space; that is, given ground coordinates (X, Y, H), we can use sensor model (1) to calculate the corresponding image position. To detect a moving target in satellite imagery, however, we must calculate the target's ground coordinates from its image coordinates. We use an iterative procedure to calculate every pixel's ground position.

We calculate the object's ground position based on a DEM. First, we use the sensor model to build a coarse linear transformation between image coordinates and ground coordinates:

X = f(I, J)    (3a)

Y = g(I, J)    (3b)

f(I, J) = a1·I + b1·J + c1    (3c)

g(I, J) = a2·I + b2·J + c2    (3d)

From this equation, given the image coordinates (I, J), we get a coarse ground position (X, Y). From (X, Y), through the DEM, we get the height H. We then use sensor model (1) to calculate the image coordinates (I', J'), which gives a difference between the two sets of image coordinates:

ΔI = I – I'    (4a)

ΔJ = J – J'    (4b)


Then we use ΔI, ΔJ and equation (3) to correct the ground coordinates (X, Y), and continue this procedure until ΔI and ΔJ are both less than a threshold, say 0.0001 pixel.
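A minimal sketch of this iteration follows; the sensor model, the DEM lookup, and the affine coefficients of equation (3) are passed in as hypothetical callables and values rather than taken from any real product.

```python
def image_to_ground(I, J, sensor_model, dem, affine, tol=1e-4, max_iter=50):
    """Iteratively solve for the ground position of pixel (I, J).
    sensor_model(X, Y, H) -> (I', J') is the ground-to-image model (eq. 1),
    dem(X, Y) -> H returns terrain height, and affine = (a1, b1, c1, a2, b2, c2)
    gives the coarse image-to-ground transformation of equation (3)."""
    a1, b1, c1, a2, b2, c2 = affine
    X = a1 * I + b1 * J + c1              # eq. (3a): coarse ground X
    Y = a2 * I + b2 * J + c2              # eq. (3b): coarse ground Y
    for _ in range(max_iter):
        H = dem(X, Y)
        Ip, Jp = sensor_model(X, Y, H)
        dI, dJ = I - Ip, J - Jp           # eq. (4): image-space residual
        if abs(dI) < tol and abs(dJ) < tol:
            break
        # push the residual back to ground space through the same affine terms
        X += a1 * dI + b1 * dJ
        Y += a2 * dI + b2 * dJ
    return X, Y
```

The affine only needs to be approximately right: the loop converges as long as it is a reasonable local inverse of the sensor model.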

EXPERIMENT

We use a pair of level 1A Quickbird images, one 0.61 meter resolution panchromatic image and one 2.44 meter resolution multi-spectral image (figure 1), to test our technique. These images were acquired on July 26, 2002 over Gagetown, New Brunswick, Canada. We used only part of the whole scene for our experiment; the detailed data clipping information is given below.

Because the Quickbird level 1A (Basic) panchromatic and multi-spectral images are not registered, we first use 15 tie points to register the two images. Table 1 shows the image coordinates of the 15 tie points on both the MS and PAN images.

Table 2 shows the relative position deviation before sensor model refinement, and Table 3 shows the relative position deviation after refinement.

Table 1. Image coordinates of tie points

NO   MS P   MS L   PAN P   PAN L
1     333    612    1616    1162
2    2730    570   11186     999
3     241   1711    1245    5561
4    2844   1487   11638    4664
5    1333   1778    5608    5827
6    1887    503    7817     730
7      93    506     655     738
8      99    509     681     750
9      91    497     648     702
10     96    500     669     714
11    101    503     689     726
12     94    493     659     687
13     95    490     665     672
14    101    493     687     685
15    106    496     707     696

Figure 1. Quickbird multi-spectral image and panchromatic image

MS Data: Upper Left: P (column): 2865 L (row): 5047 Lower Right P (column): 5937 L (row): 7007

PAN Data: Upper Left: P (column): 11200 L (row): 20000 Lower Right P (column): 24000 L (row): 29000


These tables show that after sensor model refinement based on these tie points, the mean relative position deviation was reduced from 3.47 meters to 1.33 meters (figure 2).

Table 3. Ground coordinates and relative deviation of tie points after sensor model refinement

NO   X_MS(m)    Y_MS(m)     X_PAN(m)   Y_PAN(m)    Deviation(m)
1    694449.9   5079301.6   694451.0   5079301.8   1.6
2    700472.0   5079712.6   700473.2   5079711.9   1.7
3    694301.4   5076470.8   694300.2   5076469.3   2.3
4    700825.4   5077380.2   700824.2   5077381.8   2.2
5    697052.4   5076439.4   697053.0   5076440.4   1.3
6    698349.5   5079776.8   698348.6   5079775.8   1.6
7    693836.4   5079540.6   693835.8   5079540.5   0.9
8    693851.8   5079533.7   693852.4   5079533.7   0.9
9    693830.6   5079563.4   693830.6   5079563.2   0.1
10   693843.5   5079556.3   693844.1   5079556.3   0.9
11   693856.3   5079549.3   693857.0   5079549.2   0.9
12   693837.9   5079574.0   693837.2   5079573.2   1.1
13   693840.1   5079581.8   693840.7   5079582.9   1.4
14   693855.5   5079574.9   693854.9   5079575.4   0.9
15   693868.3   5079567.9   693867.7   5079569.0   1.4

Table 2. Ground coordinates and relative deviation of tie points before sensor model refinement

NO   X_MS(m)    Y_MS(m)     X_PAN(m)   Y_PAN(m)    Deviation(m)
1    694451.0   5079302.1   694449.7   5079302.4   1.7
2    700473.8   5079713.1   700472.6   5079712.5   1.7
3    694302.5   5076471.4   694298.9   5076470.2   5.3
4    700826.9   5077380.9   700823.7   5077382.7   5.0
5    697053.7   5076440.0   697052.0   5076441.4   2.6
6    698351.0   5079777.3   698347.8   5079776.3   4.7
7    693837.4   5079541.1   693834.4   5079540.9   4.3
8    693852.8   5079534.2   693851.0   5079534.1   2.4
9    693831.6   5079563.8   693829.2   5079563.7   3.4
10   693844.5   5079556.8   693842.7   5079556.7   2.4
11   693857.3   5079549.8   693855.6   5079549.7   2.4
12   693838.8   5079574.4   693835.9   5079573.6   4.3
13   693841.1   5079582.2   693839.3   5079583.4   2.7
14   693856.5   5079575.3   693853.5   5079575.8   4.2
15   693869.3   5079568.3   693866.3   5079569.5   4.3


Figure 3 and Table 4 show the moving targets we selected on the panchromatic and multi-spectral images across the testing area (figure 1). As the images show, many targets are quite large; normally we measured a target's image coordinates at its central position. We then use the target's image coordinates to calculate its ground coordinates. From the two sets of ground coordinates, calculated from the panchromatic and multi-spectral images respectively, and the satellite time interval, we calculate the target's moving distance, moving speed, and moving azimuth angle (Table 5).

[Chart: relative deviation (m) of the 15 tie points, before and after sensor model refinement]

Figure 2. Position error before and after sensor model refinement


[Image panels (1)–(16): the selected moving targets]

Figure 3. Moving targets from the MS and PAN images (each picture has a different scale)

Figure 4 shows the position, moving speed, and moving direction of the targets. The arrows show the moving direction; the longer the arrow, the faster the target moves.




Table 4. Image coordinates of moving targets

NO   MS P   MS L   PAN P   PAN L
1     331    706    1599    1531
2     345    723    1657    1597
3     358    719    1718    1596
4     166    583     939    1039
5     127    552     798     927
6     124    560     777     955
7     723   1362    3179    4172
8     670   1322    2955    3993
9     679   1332    2987    4034
10    608   1230    2707    3626
11    614   1212    2741    3576
12    615   1240    2733    3664
13    616   1244    2742    3682
14   1788   1449    7418    4516
15   1939   1383    8035    4247
16   1933   1400    7991    4320
17   1944   1396    8037    4303
18   1904   1395    7897    4294
19   2344   1221    9653    3597
20   2365   1224    9717    3617
21   2722    579   11151    1044
22   2719    599   11137    1127
23   2717    611   11130    1172
24   2701    631   11069    1233



[Image panels (1)–(8): target positions with speed and direction arrows]

Figure 4. Moving speed and direction of targets (each picture has a different scale)



From Table 5 and figure 4 we find that the mean speed is about 100 km/h, but some targets are moving very slowly and some very quickly. We checked several of them.

For example, the speed of target 6 is only 23.8 km/h and its moving direction is across the road (figure 4-(2)). This is very slow compared to the other targets. From figure 3-(4) and figure 4-(2) we can see that target 6 is at the roadside, not in the middle of the lane. Because there is still a 1.33 meter relative position error after sensor model refinement, the estimated moving speed has an error of about 23.9 km/h (see the discussion below). So we suspect this target may be just starting up, slowing down, or simply parked there.

From Table 5 we also find that target 3, in the slow lane, runs at a speed of 68.15 km/h, and target 14 is also slow, at only 71 km/h. From figure 3-(2) and figure 4-(1) we can see that target 3 is in the slow lane while a dark car close to it in the fast lane is overtaking it. From figure 3-(10) and figure 4-(5) we can see that target 14 is a very long vehicle in the slow lane.

From Table 5 we can also find some targets moving very quickly, with speeds above 140 km/h. We checked these targets. Target 9 (figure 3-(6) and figure 4-(3)) is moving at 149 km/h, and target 8 (figure 3-(6) and figure 4-(3)), moving at 113 km/h, follows target 9; so we infer that target 9 has just overtaken target 8. By the same reasoning, target 12 is about to overtake target 13 (figure 3-(8) and figure 4-(4)), target 16 is about to overtake target 17 (figure 3-(12) and figure 4-(6)), and target 18 has just overtaken target 15 (figure 3-(12) and figure 4-(6)).

DISCUSSION

The accuracy of moving target detection depends on the satellite sensor model refinement technique, the image resolution, the accuracy of the target image coordinates, the accuracy of the satellite time interval, and the DEM accuracy.

The accuracy of the satellite time interval and the image resolution are determined by the satellite equipment, so we can treat them as constants.

Table 5. Ground coordinates, speed, and azimuth angle of moving targets

NO   X(m)       Y(m)        H(m)   Speed(km/h)   Azimuth(deg)
1    694447.8   5079066.2   29.5   118.5         133.8
2    694485.7   5079025.9   30.3   109.9         150.1
3    694524.2   5079028.5   30.5    68.1         323.1
4    694021.1   5079358.1   18.0   133.6         126.3
5    693929.8   5079424.8   15.4    93.1         306.8
6    693917.1   5079406.3   15.7    23.8          74.0
7    695493.5   5077424.5   30.5   135.7         317.7
8    695349.1   5077532.0   30.6   113.8         152.4
9    695370.0   5077506.8   30.6   149.7         134.3
10   695185.9   5077759.2   30.3   107.4         150.1
11   695206.4   5077792.3   30.2   145.6         337.5
12   695203.0   5077735.7   30.3   145.4         146.1
13   695209.0   5077724.5   30.3    83.8         164.7
14   698168.0   5077339.5   37.8    71.2          52.9
15   698551.6   5077532.6   43.5   120.4         254.5
16   698525.3   5077484.4   43.6   162.6          74.1
17   698553.9   5077496.8   44.0   127.1          75.3
18   698465.6   5077497.8   42.2   150.2         249.1
19   699557.0   5078000.1   42.4   144.7         243.9
20   699597.5   5077989.3   42.3   143.5          68.1
21   700452.3   5079682.3   27.9    96.3           4.3
22   700445.0   5079628.6   28.1   138.3          11.2
23   700441.4   5079599.5   28.2   100.2          10.3
24   700404.1   5079558.5   28.5    77.5         183.2


Because the time interval between the panchromatic and multi-spectral images is very small, the intersection angle between the PAN and MS rays is also very small, so we cannot calculate the target's ground coordinates from its image coordinates alone. This means a DEM is necessary, and the effect of DEM error on the ground coordinate calculation must be kept within a reasonable range. DEM accuracy is a complicated problem in its own right; many researchers are working on it, but it is not the focus of this research.

So the remaining aspect that can be improved is the accuracy of the target image coordinates. In our experiment, we simply used the target's central image coordinates, measured by mouse clicking with an accuracy of only one pixel. That means there is still potential to improve the accuracy of the target image coordinates, which is a direction for our future work.

To assess the effect of clicking error on moving target detection, we performed a test on point 14. When we use the image coordinates (1788, 1449) and (7418, 4516), the moving distance is 5.29 m and the moving speed is 95.22 km/h; with (1788, 1449) and (7419, 4516), the distance is 4.45 m and the speed 80.05 km/h; with (1789, 1449) and (7418, 4516), the distance is 8.72 m and the speed 156.94 km/h. This result shows that one pixel of clicking error on the PAN image causes about 15 km/h of speed error, while one pixel of clicking error on the MS image causes about 77 km/h of speed error.
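A back-of-envelope check of the scale of these numbers: a one-pixel measurement error translates into roughly one pixel of spurious ground displacement spread over the PAN-MS time interval. The interval is not stated above, so the 0.2 s used here is a hypothetical value, and the measured errors above are larger because the true pixel-to-ground mapping depends on the imaging geometry, not just the nominal pixel size.

```python
def speed_error_per_pixel(pixel_size_m, dt_s):
    """Speed error (km/h) caused by a one-pixel error in the target's image
    coordinates, treating one pixel as one pixel of spurious ground
    displacement spread over the PAN-MS time interval dt_s."""
    return pixel_size_m / dt_s * 3.6

# Quickbird pixel sizes from the experiment; dt_s = 0.2 is a hypothetical interval.
pan_err = speed_error_per_pixel(0.61, 0.2)   # PAN: ~11 km/h per pixel
ms_err = speed_error_per_pixel(2.44, 0.2)    # MS:  ~44 km/h per pixel
```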

In our experiment, we measured the target image coordinates with one-pixel accuracy. If we can improve the image coordinate measurement to sub-pixel accuracy, the accuracy of moving target detection will be greatly improved.

CONCLUSION

We have presented the whole procedure for extracting the attitude parameters of a moving target from high resolution satellite images. It includes several steps: sensor model refinement, target image coordinate measurement, and target ground coordinate calculation. The experimental results show that the technique can effectively deliver a moving target's position, moving speed, and moving direction. We recognize, however, that there is still considerable potential for improvement in the target image coordinate measurement. Because the satellite time interval is very small and the target's moving distance during this short time is correspondingly limited, even a small improvement in the image coordinate measurement, say 0.1 pixel, will contribute greatly to the accuracy of moving target detection. This is our next focus.

REFERENCES

Castellano, G.; Boyce, J.; Sandler, M.; CDWT optical flow applied to moving target detection. IEE Colloquium on Motion Analysis and Tracking (Ref. No. 1999/103), 10 May 1999, Page(s): 17/1 - 17/6.

Dias, J.M.B.; Marques, P.A.C.; Multiple Moving Target Detection and Trajectory Estimation Using a Single SAR Sensor. IEEE Transactions on Aerospace and Electronic Systems, Volume 39, Issue 2, April 2003 Page(s): 604 – 624.

Hong-Bo Sun; Guo-Sui Liu; Hong Gu; Wei-Min Su; Application of the Fractional Fourier Transform to Moving Target Detection in Airborne SAR. IEEE Transactions on Aerospace and Electronic Systems, Volume 38, Issue 4, Oct. 2002 Page(s): 1416 – 1424.

Grodecki, J.; Dial, G.; Block Adjustment of High-Resolution Satellite Images Described by Rational Polynomials. Photogrammetric Engineering & Remote Sensing, Vol. 69, No. 1, January 2003, pp. 59-68.

Di, K.; Ma, R.; Li, R.X.; Rational Functions and Potential for Rigorous Sensor Model Recovery. Photogrammetric Engineering & Remote Sensing, Vol. 69, No. 1, January 2003, pp. 33-41.

Liu, C.-M.; Jen, C.-W.; A parallel adaptive algorithm for moving target detection and its VLSI array realization. IEEE Transactions on Signal Processing, Nov. 1992, Volume: 40, Issue: 11, page(s): 2841 – 2848.

Liu, Guoqing; Li Jian; Moving Target Detection via Airborne HRR Phased Array Radar. IEEE Transactions on Aerospace and Electronic Systems, Volume 37, Issue 3, July 2001 Page(s): 914 - 924.

Munno, C.J.; Turk, H.; Wayman, J.L.; Libert, J.M.; Tsao, T.J.; Automatic video image moving target detection for wide area surveillance. Security Technology Proceedings, Institute of Electrical and Electronics Engineers 1993 International Carnahan Conference on 13-15 Oct. 1993 Page(s): 47 – 57.


Nag, S.; Barnes, M.; A Moving Target Detection Filter for an Ultra-Wideband Radar. Proceedings of the IEEE Radar Conference 2003, 5-8 May 2003 Page(s): 147 – 153.

Pettersson, M.I.; Detection of moving targets in wideband SAR. IEEE Transactions on Aerospace and Electronic Systems, July 2004 Volume: 40, Issue: 3 page(s): 780 – 796.

Reed, I.S. Gagliardi, R.M. Stotts, L.B.; Optical moving target detection with 3-D matched filtering. IEEE Transactions on Aerospace and Electronic Systems, July 1988 Volume: 24, Issue: 4 page(s): 327 – 336.

Soumekh, M.; Moving target detection and imaging using an X band along-track monopulse SAR. IEEE Transactions on Aerospace and Electronic Systems, Jan., 2002 Volume: 38, Issue: 1 page(s): 315 – 333.