
Experiments Using a Sensor Network Based Workcell for Industrial Robots

Yanfei Liu, Adam Hoover, Ian Walker

Department of Electrical and Computer Engineering

Clemson University, Clemson, SC 29634 USA

lyanfei, ahoover, [email protected]

Abstract--As manipulators, robot arms are woefully underutilized in modern industrial workcells. The norm involves little or no sensing, the grasp of a single known object, and the execution of a single geometric motion. In contrast, the popular ideal of a visually guided arm (as for a human) should be able to track and grab relatively arbitrary objects, under relatively arbitrary motions, across a comfortable range of velocities. Our ongoing research aims at extending these capabilities in a modern industrial workcell. In this paper, we outline a framework in which these capabilities might be measured, and demonstrate experimentally the capabilities of our current prototype.

Keywords--sensor network, workcell

I. INTRODUCTION

Modern industrial robot workcells have proven effective in some industries, such as electronic chip and automobile manufacturing. They are capable of handling rigid parts, such as silicon circuits and metal hardware, where the sequence of motions can be predefined. In contrast, modern robot workcells are largely unable to work with flexible parts, such as wires and textiles. Usually, such tasks are given to specially designed machinery, in which the uncertainty is engineered out of the process as much as is possible. Our ongoing research aims towards extending the capability of an industrial robot workcell into such industries and applications. In order to ground this research, we have devised a set of axes of classification of the capabilities of industrial robot workcells. In this work, we introduce and discuss this classification relative to the state of the art. We then experimentally demonstrate the capabilities of our current prototype workcell. A goal of this conference paper is to enable a discussion with interested colleagues about this classification scheme.

visual tracking        motion shape   motion predictability    motion velocity
none                   linear         known a priori           stationary
engineered features    geometric      windowed adjustment      conveyor
single object class    smooth         continuous adjustment    walking speed
arbitrary object       free form                               mechanically limited

Table 1: Capability of an industrial robot workcell

Table 1 shows one way of describing the capabilities of an industrial robot workcell. There are four columns in this table; each of them presents a capability of an industrial robot workcell. In each column, the level of capability is qualitatively ordered as increasing from top to bottom.

For the first column, i.e. visual tracking, the lowest level has no visual tracking. Most industrial workcells work well without visual tracking because everything is assumed known a priori (such as automobile painting). The second level refers to workcells that have vision for engineered features. The third level classifies workcells with the capability of recognizing a single object class. The most complete level of visual tracking occurs when industrial workcells can track arbitrary objects. Some researchers have pushed the state of the art to the level of a single object class. Nakai [2] built a robot system which can play one-on-one volleyball games with a human player; the vision system for this robot focused only on the volleyball. Hashimoto [3] developed a pingpong robot system using a 7-degree-of-freedom direct-drive arm. Kwon [4] proposed a real-time system to recover the posture of a human's hand and then guide a robot hand in gripping gestures. In [5], an adaptive controller with visual feedback was presented for a manipulator to grasp a moving cylindrical object. Papanikolopoulos [6] presented an eye-in-hand sensor system for a manipulator to track moving geometric objects.

For the second column, motion shape, the simplest type is linear. The second level features geometric patterns such as squares. The third level is smooth but nongeometric patterns. The deepest level is to let the robot follow free-form trajectories, i.e. trajectories that are nonlinear, unsmooth, and follow no geometric pattern. Some work has been done in this aspect. Allen [7] demonstrated a PUMA-560 tracking and grasping a moving model train which moved around a circular railway. In [2], the robot system can deal with a parabola, which is a nongeometric but smooth trajectory.

For the third column, motion predictability, the lowest level is known a priori. Prediction is not necessary in an automobile assembly line because the manipulator has exact knowledge of the task. The second level can be termed windowed adjustment, where the position of the object is known and the orientation is unknown. In this case we only need to take one picture at that position to get the orientation of the object. However, for flexible strings or a moving chain, we don't know either the positions or the



orientations exactly. Therefore, we need continuous adjustment, which is the top level of motion predictability. In [8], the image is grabbed 30 times per second; therefore, this qualifies as "continuous adjustment."

For the last column, motion velocity, the simplest type clearly is stationary. The second level is the speed typical of conveyor belts, which is 10-50 mm/sec. The third level is the walking speed of a human, or about 500 mm/sec. The highest level is mechanically (robot) limited speed, which is 1500 mm/sec for the manipulator, a Stäubli RX130, in our workcell. In [9], virtual reality is used to assist the manipulator in grasping moving objects on a conveyor and placing them at an inspection station. Few results advance past this speed.

Some research has been done to advance the capability in one or more of the aspects above. However, no work has taken all four aspects down to the bottom level. Collewet [10] presented a method to get the position of an arbitrarily shaped but motionless object by visual feedback. In [10], the first column of Table 1 has been improved to the bottom level, i.e. recognizing arbitrary objects. However, the fourth column, motion velocity, is still at the "stationary" level. Our objective is to enhance the robot workcell so that it can cope with arbitrary objects and fast, free-form trajectories simultaneously.

In the rest of this paper, we report the results of experiments which measure the capabilities of our current prototype workcell along the classification lines just described. The remainder of this paper is organized as follows. In section II, we describe the basic approach of our workcell. In section III, the experimental results are given to show the performance of the workcell. We analyze the results and conclude with our future directions of research in section IV.

II. METHOD

Our robot workcell is based upon an overconstrained approach to sensing. A network of six cameras surrounds the workcell, placed on a cube of aluminum framing. Figure 1 shows a picture of our prototype. In [1], we detailed the workcell configuration, calibration, image differencing, and real-time robot motion planning. Currently we are conducting 2D experiments, i.e. tracking a moving object on the floor. The occupancy map [11] can output the position of a moving object 30 times per second.
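As an illustration of how a per-frame position could be derived from such a map, the following minimal Python sketch takes the centroid of occupied cells in a binary 2D occupancy grid. This is an illustrative assumption on our part, not the exact computation of [11]; the cell size is an assumed parameter.

```python
import numpy as np

def object_position(occ_grid, cell_mm=10.0):
    """Estimate a 2D object position as the centroid of occupied cells.

    occ_grid : 2D boolean array, True where the occupancy map reports an
               object; cell_mm is an assumed grid resolution in mm.
    A toy illustration of how a 30 Hz position estimate could be formed;
    the actual computation in [11] may differ.
    """
    rows, cols = np.nonzero(occ_grid)
    if rows.size == 0:
        return None                    # nothing moving in this frame
    return (cols.mean() * cell_mm,     # x in mm
            rows.mean() * cell_mm)     # y in mm
```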

We use a Stäubli RX130 manipulator in the sensor network based workcell, with its conventional controller. In the V+ language (the software environment of the robot controller), we use the Alter command to implement real-time robot motion [1]. When the Alter command is enabled, this instruction (which changes the set point of the controller) is executed once during each trajectory cycle (16 ms). The robot controller and the vision system communicate through serial ports. The procedure in one loop to implement the tracking task is the following:

1. Inverse kinematics is used to obtain the joint angles of the desired points, which are sent from the sensor network.
2. The difference between the current joint angles and the desired joint angles is calculated.
3. The maximum of the six joint angle differences is divided by the maximum joint angle that the robot can move in one instant. If the quotient is less than or equal to one, the robot can move to the desired position in the next instant. If the quotient is larger than one, every joint angle difference is divided by the quotient to obtain a new set of joint angles, and the robot moves to that set in the next instant.
4. A low-pass filter is used to smooth the joint angles.
5. Forward kinematics is used to obtain the new positions corresponding to the filtered joint angles. Then we use the Alter command to move the robot to the new position.

Fig. 1. Workcell prototype
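The per-cycle logic of steps 1-5 can be summarized in a short sketch. This is a minimal Python illustration, not our V+ implementation: ik(), fk(), and send_alter() are hypothetical stand-ins for the workcell's kinematics routines and the Alter interface, and the per-cycle joint limits and filter gain are assumed values.

```python
import numpy as np

MAX_STEP = np.radians([2.0] * 6)   # assumed max joint motion per 16 ms cycle
ALPHA = 0.3                        # assumed low-pass filter gain

def ik(target_xy): ...             # hypothetical: position -> 6 joint angles
def fk(q): ...                     # hypothetical: joint angles -> pose
def send_alter(pose): ...          # hypothetical: one Alter set-point update

def tracking_step(q_now, target_xy, q_filt_prev):
    """One 16 ms cycle: compute and send the next filtered set point."""
    q_des = ik(target_xy)                      # step 1: joints for target
    dq = q_des - q_now                         # step 2: joint-space error
    ratio = np.max(np.abs(dq) / MAX_STEP)      # step 3: largest relative step
    if ratio > 1.0:
        dq = dq / ratio                        # scale down to stay feasible
    q_next = q_now + dq
    q_filt = ALPHA * q_next + (1 - ALPHA) * q_filt_prev   # step 4: smoothing
    send_alter(fk(q_filt))                     # step 5: FK, then Alter
    return q_filt
```

The scaling in step 3 preserves the direction of the joint-space error while capping the fastest joint at its per-cycle limit, which is why the arm lags rather than overshoots when the object outpaces it.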

III. RESULTS OF THE EXPERIMENTS

In the previous six months, we have performed many different kinds of tracking experiments. In all of the experiments discussed here, the robot tracks a moving object on the floor. The end effector of the robot points straight down and is 38 centimeters above the floor. Figure 2 gives a general view. We choose a red remote control toy car as the object for many of the examples. It is about 26 centimeters long and 10 centimeters wide.

A. Visual tracking

In the experiments presented here, we demonstrate that the manipulator can track arbitrary objects. Therefore, we show cases with a toy car, a human, and a broom as the moving object. In the following three experiments, the trajectories of the objects are all linear.

1. Toy car
The situation of tracking a remote control toy car is shown in Figure 2. The trajectories of the car and the manipulator are presented in Figure 3. Plots of the tracking error in position and joint angles are given in Figures 4 and 5. After calculating the output data from the occupancy map and the robot controller, we obtain that for a typical run with an average velocity of 87 mm/sec, the average deviation between the car position and the position of the end effector is 37 mm. The maximum deviation between them is 138 mm.

Fig. 2. Picture of tracking a car
Fig. 3. The plane position of a linear trajectory of a car
Fig. 4. The position error of tracking a car
Fig. 5. The joint error of tracking a car

2. Human being
The situation of tracking a human being is shown in Figure 6. Plots of the error in position and joint angles of tracking a human are given in Figures 7 and 8. After calculation, we obtain that for a typical run with an average velocity of 71 mm/sec, the average deviation between the human's position and the position of the end effector is 29 mm. The maximum deviation between them is 116 mm.

Fig. 6. Picture of tracking a human
Fig. 7. The position error of tracking a human
Fig. 8. The joint error of tracking a human

3. Broom
The situation of tracking a broom is shown in Figure 9. Plots of the error in position and joint angles of tracking a broom are given in Figures 10 and 11. After processing the data, we obtain that for a typical run with

an average velocity of 64 mm/sec, the average deviation between the broom's position and the position of the end effector is 28 mm. The maximum deviation between them is 90 mm.

B. Motion shape and motion predictability

In the introduction section, we noted that we want to enhance the workcell's capability to follow free-form trajectories. Obviously, this cannot be implemented if we simply use windowed adjustment in motion predictability. Therefore, we combine motion shape and motion predictability together.

The moving object is the toy car in all experiments presented in this section. Compared with the linear trajectory in the previous section, we present two different trajectories in this section. The first trajectory is an arc. The plane positions of the car and the robot are shown in Figure 12. Plots of the tracking error in position and joint angles are given in Figures 13 and 14. The second trajectory is a free-form trajectory. Plots of the tracking error in

Fig. 9. Picture of tracking a broom
Fig. 10. The position error of tracking a broom
Fig. 11. The joint error of tracking a broom

Fig. 12. The plane position of tracking an arc trajectory
Fig. 13. The position tracking error of an arc trajectory

position and joint angles are shown in Figures 15, 16, and 17. For the arc trajectory, with an average velocity of 117 mm/sec, the average deviation between the car position and the position of the end effector is 48 mm; the maximum deviation between them is 198 mm. For the free-form trajectory, with an average velocity of 117 mm/sec, the average deviation between the car position and the position of the end effector is 49 mm; the maximum deviation between them is 254 mm.

C. Different velocities

To show the capability of our workcell to handle different object velocities, we designed an experiment which featured the car moving in a circular path at different speeds. We then recorded the average velocity of the object, and the average and maximum deviations between the object and the robot. We summarize the results in Table 2. In Figure 18, we plot these results, where the deviation between the actual and the desired position is on the ordinate axis, and the average velocity of the moving object is on the abscissa.
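For reference, the quantities reported in Table 2 can be computed from synchronized logs of object and end-effector positions as in the sketch below. This is a reconstruction under the assumption of Euclidean per-sample deviations logged at the 1/8 s period used in the plots, not our exact processing script.

```python
import numpy as np

def tracking_metrics(obj_xy, ee_xy, dt=1.0 / 8.0):
    """Summarize one run: average object speed, average and max deviation.

    obj_xy, ee_xy : (N, 2) arrays of synchronized object and end-effector
    positions in mm; dt is the assumed logging period in seconds.
    """
    obj_xy = np.asarray(obj_xy, dtype=float)
    ee_xy = np.asarray(ee_xy, dtype=float)
    # average speed: total path length divided by elapsed time
    steps = np.linalg.norm(np.diff(obj_xy, axis=0), axis=1)
    v_avg = steps.sum() / (dt * (len(obj_xy) - 1))
    # deviation: Euclidean object/end-effector distance at each sample
    dev = np.linalg.norm(obj_xy - ee_xy, axis=1)
    return v_avg, dev.mean(), dev.max()
```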

Fig. 14. The joint tracking error of an arc trajectory

Fig. 15. The plane position of tracking a free-form trajectory
Fig. 16. The position tracking error of a free-form trajectory
Fig. 17. The joint tracking error of a free-form trajectory

Fig. 18. The average and maximum deviations corresponding to different velocities

Vo (mm/sec)   Ad_av (mm)   Ad_max (mm)
27            5            44
37            7            56
72            14           72
107           21           93
353           63           179
485           98           213
635           108          196

Table 2: The object average velocity (Vo), average deviation (Ad_av), and maximum deviation (Ad_max) between the object and the robot

IV. CONCLUSIONS

Our ambition is to construct a workcell in which arbitrary objects, undergoing arbitrary motions, can be tracked by a standard industrial robot arm. While we have not yet attained this goal, we have tried to devise a framework in which we can measure the relevant capabilities of workcells towards achieving this goal. We have also "measured" our experimental workcell on these axes, in order to quantify our current capabilities and outline a plan for future work.

Specifically, we can currently track a demonstrably wide variety of objects across motions that are mostly smooth but completely unpredicted. The arm inside our workcell can "hover" over these objects at speeds common to industrial conveyor belts, but shows a lag (the arm "chases" the object) when the speed is increased much beyond that. Primarily, we believe this is due to our lack of a prediction mechanism. As the whole system operates at a rate of 30 Hz, in order to achieve appreciable speedup we will examine using Kalman filter methods to push the speed capability of our workcell. Our ongoing work is aimed at using the workcell for impulsive manipulation with the industrial robot arm.
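As a sketch of the kind of predictor we have in mind (not an implemented part of the workcell), a constant-velocity Kalman filter can estimate the object's planar velocity from the 30 Hz position stream and extrapolate the position one or more frames ahead; the noise covariances below are illustrative assumptions.

```python
import numpy as np

DT = 1.0 / 30.0                                   # occupancy-map frame period
F = np.block([[np.eye(2), DT * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])     # state: [x, y, vx, vy]
H = np.hstack([np.eye(2), np.zeros((2, 2))])      # only position is measured
Q = 1e-2 * np.eye(4)                              # assumed process noise
R = 25.0 * np.eye(2)                              # assumed measurement noise (mm^2)

def kf_step(x, P, z):
    """One predict/update cycle given measurement z = (x, y) in mm."""
    x_pred = F @ x                                # predict state forward
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)         # correct with measurement
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

def predict_ahead(x, frames=1):
    """Extrapolated object position `frames` frames from now."""
    return (np.linalg.matrix_power(F, frames) @ x)[:2]
```

Feeding the arm a predicted rather than a measured set point would compensate for the processing and communication latency that currently makes the arm chase the object.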

V. ACKNOWLEDGEMENTS

This work was supported by grant number R9937C from the South Carolina Commission on Higher Education (CHE). The third author is supported in part by NASA grant NAG-8759. The authors would like to thank Harry Beaver at Stäubli Corporation for all his help in this project.

REFERENCES

[1] Y. Liu, A. Hoover and I. Walker, "Sensor Network Based Workcell for Industrial Robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Hawaii, Oct. 2001

[2] H. Nakai, Y. Taniguchi, M. Uenohara and T. Yoshimi, "A Volleyball Playing Robot," in Proc. 1998 IEEE Int. Conf. Robotics and Automation, Belgium, pp. 1083-1089, May 1998

[3] H. Hashimoto, F. Ozaki, K. Asano and K. Osuka, "Development of a Pingpong Robot System using 7 Degrees of Freedom Direct Drive Arm," IEEE Proc. of IECON'87, pp. 608-615, 1987

[4] K. Kwon, H. Zhang and F. Dornaika, "Hand Pose Recovery with a Single Video Camera," in Proc. 2000 IEEE Int. Conf. Robotics and Automation, San Francisco, CA, pp. 1194-1200, April 2000

[5] N. Houshangi, "Control of a Robotic Manipulator to Grasp a Moving Target using Vision," in Proc. 1990 IEEE Int. Conf. Robotics and Automation, pp. 604-609, 1990

[6] N. Papanikolopoulos, P. K. Khosla and T. Kanade, "Visual Tracking of a Moving Target by a Camera Mounted on a Robot: A Combination of Control and Vision," IEEE Trans. Robotics Automat., vol. 9, no. 1, pp. 14-33, Feb. 1993

[7] P. Allen, A. Timcenko, B. Yoshimi and P. Michelman, "Automated Tracking and Grasping of a Moving Object with a Robotic Hand-Eye System," IEEE Trans. Robotics Automat., vol. 9, no. 2, pp. 152-165, April 1993

[8] J. Stavnitzky and D. Capson, "Multiple Camera Model-Based 3-D Visual Servo," IEEE Trans. Robotics Automat., vol. 16, no. 6, pp. 732-739, Dec. 2000

[9] C. Wang and D. J. Cannon, "Virtual-Reality-Based Point-and-Direct Robotic Inspection in Manufacturing," IEEE Trans. Robotics Automat., vol. 12, no. 4, pp. 516-531, Aug. 1996

[10] C. Collewet, F. Chaumette and P. Loisel, "Image-based Visual Servoing on Planar Objects of Unknown Shape," in Proc. 2001 IEEE Int. Conf. Robotics and Automation, Seoul, Korea, pp. 247-252, May 2001

[11] A. Hoover and B. Olsen, "A Real-Time Occupancy Map from Multiple Video Streams," in Proc. 1999 IEEE Int. Conf. Robotics and Automation, Detroit, pp. 2261-2266, May 1999
