A Vision Based Navigation Among Multiple Flocking Robots: Modeling And Simulation

Blake Eikenberry,1 Oleg Yakimenko,2 and Marcello Romano3
Naval Postgraduate School, Monterey, CA 93943-5146

This paper presents modeling and simulation developments related to the navigation and guidance of a group of robots floating without friction along a planar floor. Each robot has three degrees of freedom, uses a rotating thruster as an actuator, and has both artificial vision and pseudo-GPS sensors. Each robot is prescribed a desired final relative position: each of the other robots has an associated desired range, bearing, and orientation angle on that bearing. Each robot will initially locate the others by scanning the floor. Once each robot is found and identified, they will compute a trajectory and control profile to arrive at the final desired relative position. Simulated photographs are taken by the camera, which alternates between the robots on the floor. These simulated photos are analyzed to determine the position and pose of each robot. Tables are constructed to track the positions of each robot and represent the system state. The guidance system on board each robot will update its independent system state and re-compute trajectories as needed. Collision avoidance with other robots and with the floor boundary must be employed. The paper includes simulations and modeling within the MATLAB/Simulink environment involving enhanced animation.

Nomenclature
X, Y = nominal reference frame (on the floor)
x, y = position of robot on the floor
ψ = rotation of robot relative to the reference frame
R_ij = range of robot i to robot j
B_ij = relative bearing from the front of robot i to robot j
O_ij = orientation of robot j on B_ij
x_L, x_R, x_M = distances of the left, right and middle support stanchions in a photograph
β_L, β_R, β_M = bearing angles to the left, right and middle support stanchions in a photograph
η = robot rotation angle in a photograph
f = focal length of the camera
a = length of robot side
α_camera = angle of camera pointing bearing relative to robot front
α = angle of thruster pointing bearing relative to robot front
u, v = robot's velocity in the x and y directions, respectively
ω = spin rate of robot
δ = spin rate of thruster relative to robot
F = normalized thrust of robot
T = normalized torque of robot
m = mass of robot
I = moment of inertia about the Z axis of robot
t = time
τ = time factor

1 Graduate Student, Dept. of Mechanical and Astronautical Engineering, Member AIAA.
2 Research Associate Professor, Dept. of Mechanical and Astronautical Engineering, Code MAE/Yk, Associate Fellow AIAA.
3 Assistant Professor, Dept. of Mechanical and Astronautical Engineering, Code MAE/Ro, Member AIAA.

AIAA Modeling and Simulation Technologies Conference and Exhibit, 21-24 August 2006, Keystone, Colorado

AIAA 2006-6481

This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States.


MSD = minimum safe distance
λ = speed factor
P_v(τ) = polynomial function of τ for variable v
w = weighting coefficients for performance index
k = weighting coefficients for penalty function
J = cost function

I. Introduction

The motivation behind the multiple flocking robot problem is the autonomous docking of multiple spacecraft. Many applications for the ability of a spacecraft to dock with another spacecraft can be imagined, including rendezvous for repair, refueling or replenishment, and salvage or rescue. Since there are spacecraft already in orbit that may need to rendezvous, it is not possible to presuppose the existence of any tell-tale features on the spacecraft, such as a pattern of special light emitting diodes (LEDs), lasers, radio frequencies, etc. Systems relying on such pre-positioned aids have been operated for years (notably by the Russian space program), and of course these aids would greatly simplify the problem. A vision based approach was taken to generalize the problem. Using a vision based approach, only general shape and size characteristics of the surrounding objects must be known in order to calculate distance and orientation from a single image in a camera frame.

Previous work in this area includes an autonomous docking simulation completed at Naval Postgraduate School in 2005. 1,2,3 An experimental test bed was constructed that consists of a frictionless floor and two air-thrust propelled robots that could dock using vision based navigation. Continuing work at the Spacecraft Robotics Laboratory includes building three new robots to experiment with multiple, cooperating robots. These robots will serve as a test platform for several design concepts with a variation of thruster and sensor types and an implementation of several guidance and control algorithms. This paper addresses some of the modeling and simulation issues involved with the three-robot experiment in these remaining sections:

II. Planned Test Bed Architecture and Related Simulation Model: presents a description of the experimental setup, working model, and interaction between modules.

III. Pose Estimation Strategy: gives an algorithm for determining range, bearing, and orientation from a single photograph.

IV. Basics of the Guidance System: defines the guidance and control problem and offers a solution to it.

V. Example Simulation: demonstrates a sample simulation using the above concepts and methods.

II. Planned Test Bed Architecture and Related Simulation Model

The autonomous robot flocking project aims to provide a validation framework for Guidance, Navigation and Control (GNC) algorithms of a cluster of autonomous proximity navigating spacecraft. Only three degrees of freedom (DOF) are considered instead of the 6 DOF that a rigid-body spacecraft would have. In fact, the robots are considered to move along a level surface. This simplification limits comparisons of these ground based experiments to on-orbit operations in which the motion between craft is in the same orbit plane and each craft can maintain a constant orientation relative to the orbit plane. This limitation is acceptable and in line with most current concepts of operations for orbital rendezvous; the Space Shuttle, for example, completes all rendezvous maneuvers with the International Space Station in a single orbit plane. The other major simplifications are the weightless environment, which is too difficult to simulate in three dimensions, and the mechanics of maneuvering in orbit, which are difficult to physically simulate on the ground. However, the special friction-free floor approximates weightlessness in translational movement, and computer simulation of Hill's equations can be implemented to approximate differences between ground and on-orbit operations.7 See Ref.1 for a more detailed discussion on this topic.

Although several design concepts will be employed and tested, this paper will focus only on one particular configuration described here. The flat floor consists of a 4.9 meter by 4.3 meter surface made of epoxy. The floor is essentially frictionless and horizontal to a high degree of accuracy (residual gravity ~10^-3 g). Spacecraft simulator robots float over the floor via air pads. Each robot has three degrees of freedom, two for translation and one for rotation about the vertical axis. In this paper we consider that each robot has a vectored thruster with a 360 degree range in the horizontal plane for translational movement and a reaction wheel to control rotation about the vertical axis. For reference, we consider for each robot the same basic physical properties as one of the two docking robots of Ref.1: in particular, a mass of 63 kg, a moment of inertia about the vertical axis of 2.3 kg·m², a maximum control torque about the vertical axis of 0.16 N·m, and a maximum thrust of 0.45 N.
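These reference values can be gathered into a single parameter record for use in the simulations that follow; the sketch below is only illustrative (the field names, and the assignment of the 4.9 m and 4.3 m floor dimensions to D and W, are assumptions rather than values taken from the actual test-bed software):

```python
from dataclasses import dataclass

@dataclass
class RobotParameters:
    mass: float = 63.0        # kg
    inertia_z: float = 2.3    # kg*m^2, about the vertical axis
    max_thrust: float = 0.45  # N
    max_torque: float = 0.16  # N*m

    @property
    def f_max(self) -> float:
        """Normalized thrust F_max = Thrust_max / m (m/s^2)."""
        return self.max_thrust / self.mass

    @property
    def t_max(self) -> float:
        """Normalized torque T_max = Torque_max / I (rad/s^2)."""
        return self.max_torque / self.inertia_z

# Assumed mapping of the 4.9 m x 4.3 m floor onto the dimensions D and W of Fig.2
FLOOR_D, FLOOR_W = 4.9, 4.3
```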



Furthermore, each robot is considered equipped with a pseudo Global Positioning System (GPS) and a mono-vision camera. The pseudo-GPS system acts similarly to GPS within the laboratory and is composed of two stationary emitters and one receiver on each robot. These units are calibrated so the sensor on each robot can determine its position on the floor to within 2 cm. The cameras mounted on each robot are used to find the positions of the other robots relative to it. The cameras can rotate in the horizontal plane and have limited pitch movement ability. Communications between robots via a wireless network will be integrated into the system. Fig.1 shows a high-level sketch of a possible architecture for the multi-robot flocking test bed. Onboard computers handle image processing for state estimation, compute control profiles, command thrust and torque actuators, and are linked to a wireless network for data exchange among the robots. The wireless network will enable multiple cooperation paradigms. The modeling and simulation of this project will test different cooperative scenarios concurrent with the development of the hardware. As a development platform, the algorithms used for these simulations will eventually be physically employed on the robots. For instance, initially each robot can transmit its position information acquired from the pseudo-GPS system to the master computer via the wireless network. The master computer can compute and distribute control profiles for all of the robots. As the system is further developed, more capabilities will be employed onboard each robot, making them more autonomous. Eventually, no interaction from the master computer is desired outside of providing task commands.

Figure 1. Design concept (master computer/network, wireless network, dual PC104 onboard computers, pseudo-GPS emitters and GPS receivers).

Each robot is represented by the 3-DOF model shown in Fig.2.

Figure 2. Model of the robots with notation illustrated (reference frame X, Y; floor dimensions D and W; robot position x, y and attitude ψ; vectored thruster (F, δ) pointed at angle α from the robot front; reaction wheel (T)).


Fig.2 depicts a simplified representation which better characterizes the fidelity of the model we will simulate. The autonomous robot docking project was divided into four main components that were worked in parallel by a group: Modeling and Simulation, Sensors, Guidance and Control, and Robot Construction. Modeling and Simulation played an integral role in each of these components. First, it provided a fast proof of concept and an easy platform to lay out a top-down design. These components are implemented using MathWorks' Simulink. Each robot's software follows the architecture illustrated in Fig.3. Without loss of generality, a single robot is initially tested against two robots with ideal performance. The following presents a brief description of the functions of each subsystem shown in Fig.3. Each of the five main components, the Guidance Model, Robot Control Model, State Estimation, Artificial Vision Model, and Pose Estimation, operates independently and concurrently.

Simulink Model. The Simulink Model represents the overall control program and is provided the Load Parameters, which include the initial conditions, final desired state, and physical properties. The ideal performance parameters for each robot are calculated and stored in lookup tables as "truth." A single robot can then be tested against the ideal parameters of the other two, and its movement can be compared to its own ideal trajectory.

Guidance Model. The robot guidance subsystem calculates control profiles to command the robot's motion. This calculation is based on the relative position as determined by the state estimation subsystem. Because the floor is constrained by a boundary, the guidance subsystem must also consider the robot's position on the floor to avoid collision. For safety reasons, the guidance subsystem can also receive positional information from other robots to avoid collisions between robots. More details on the developed guidance algorithm follow.

Robot Control Model. The robot control subsystem generates control commands based on the information passed to it from the Guidance Model. To maintain dynamics in an inertial frame, each robot will estimate its absolute position and velocity from its position at initialization using inertial sensing units, and the kinematics predicted from the calculated control profile. Position and velocity information calculated here is passed to the System State Estimation Models.

System State Estimation. The System State has 18 variables: the absolute position (x, y), orientation (ψ), velocity ($\dot{x}$, $\dot{y}$), and spin rate ($\dot{\psi}$) for each of the three robots. These values are constructed from the estimated robot state each robot determines for itself and from the Pose Estimation module after processing the artificial vision. Alternatively, position and velocity values can be received via the pseudo-GPS system and/or from the other robots on the wireless network. As these values are updated at discrete intervals, a Kalman filter will provide a current predicted state for a query at any time. In a pure simulation, noise or other uncertainty can be added to the state variables.
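As a minimal illustration of this bookkeeping, a constant-velocity predictor for one robot's six state variables is sketched below (a full Kalman filter would also propagate covariance and weight each measurement accordingly; all names and the fixed blending gain are illustrative):

```python
import numpy as np

def predict_state(state, dt):
    """Constant-velocity prediction of one robot's state
    [x, y, psi, x_dot, y_dot, psi_dot] over a time step dt."""
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt
    return F @ state

def blend_measurement(predicted, measured, gain=0.5):
    """Crude stand-in for the Kalman update: blend the prediction with a
    pseudo-GPS or pose-estimation measurement using a fixed gain."""
    return predicted + gain * (np.asarray(measured, dtype=float) - predicted)

# The 18-variable system state is simply three such 6-vectors stacked.
system_state = np.zeros((3, 6))
system_state[0] = predict_state(system_state[0], dt=0.1)
```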

Artificial Vision Model. This subsystem is responsible for determining where the other robots are relative to the robot. It controls the camera based on the predicted location of the target robot. The camera will turn to the desired bearing, take a photograph, and pass it to the Pose Estimation module. The camera will alternate photographs between multiple robots. Fig.4 displays the finite state machine that will control the camera.
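A minimal sketch of that camera-scheduling state machine is given below (the state names and the two-robot alternation mirror Fig.4, but the class and its interface are illustrative only):

```python
from enum import Enum, auto

class CameraState(Enum):
    INITIALIZE = auto()
    SCAN_FLOOR = auto()
    TRACK_ROBOT_1 = auto()
    TRACK_ROBOT_2 = auto()

class CameraScheduler:
    """Alternate a single camera between the two other robots."""
    def __init__(self):
        self.state = CameraState.INITIALIZE
        self.tracking_table = {}           # robot id -> last estimated pose

    def step(self, all_robots_found=False, latest_pose=None):
        if self.state is CameraState.INITIALIZE:
            self.state = CameraState.SCAN_FLOOR
        elif self.state is CameraState.SCAN_FLOOR:
            # scan the entire floor until both robots have been acquired
            if all_robots_found:
                self.state = CameraState.TRACK_ROBOT_1
        elif self.state is CameraState.TRACK_ROBOT_1:
            # take photo / process image of robot 1, then update the table
            self.tracking_table[1] = latest_pose
            self.state = CameraState.TRACK_ROBOT_2
        else:
            # take photo / process image of robot 2, then update the table
            self.tracking_table[2] = latest_pose
            self.state = CameraState.TRACK_ROBOT_1
        return self.state
```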

Pose Estimation. An onboard computer will process each image and determine a robot’s relative bearing, range, and orientation from the image. It will then update the System State. More details on the developed algorithm are in the following section.

Figure 3. Software hierarchy (Simulink Model, Load Parameters, Guidance Model, Robot Control Model, State Estimation, Artificial Vision Model, Pose Estimation).

Figure 4. Chart flow for the robot tracking algorithm (initialize and scan the entire floor for robots; take a photo and process the image of robot 1; update the tracking tables; take a photo and process the image of robot 2; repeat).


Final End State. Six variables define a desired end state for three robots: a relative bearing to each robot, a range to each robot, and an angle that defines the orientation of each robot on its relative bearing. For example, a sample desired end state and the corresponding matrix from Robot 1 (blue) are shown in Fig.5. Each robot must have a similar set of relative values that define the equivalent formation. The final desired rates are always zero.
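To make the bookkeeping concrete, the sketch below computes one row of such a table, i.e. the range, bearing and orientation of robot j as seen from robot i, from absolute floor poses (a plausible sign convention is assumed here; the actual test bed may define the angles differently):

```python
import math

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def relative_entry(x_i, y_i, psi_i, x_j, y_j, psi_j):
    """Range R_ij, bearing B_ij and orientation O_ij of robot j from robot i."""
    dx, dy = x_j - x_i, y_j - y_i
    rng = math.hypot(dx, dy)
    los = math.atan2(dy, dx)         # line-of-sight angle in the floor frame
    bearing = wrap(los - psi_i)      # measured from robot i's front
    orientation = wrap(psi_j - los)  # attitude of robot j on that bearing
    return rng, bearing, orientation

# e.g. one row of the table in Fig.5, from Robot 1 to Robot 2
R12, B12, O12 = relative_entry(0.5, 0.5, 0.0, 2.0, 3.0, 1.2)
```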

III. Pose Estimation Strategy

Image processing occurs automatically after a digital photograph is received from the camera. The algorithm will first try to determine how many robots are in view: zero, one, or two. If there are no robots in the field of view, a search routine will have to be conducted. This routine is completed after initialization; once the robots are acquired and tracking has started, the camera will alternate amongst the moving robots and attempt to keep them in its field of view. If two robots are in the photo, it is preferred to center the camera on one robot at a time. If this is not possible, such as the case when one robot is behind the other, accurate pose estimates are very difficult to make.

Each photograph is processed onboard the robot to find and determine the relative positions of the other robot(s). Figures 6a and 6b illustrate an example photograph of one of the robots used in Ref.1 and a simulated photograph assumed to be of Robot 3 as seen by Robot 1. The image processor locates the three vertical support structures in the image and then determines the robot's relative position from the known size and shape of the robot(s) in the field of view.

The basic algorithm for determining pose is to first determine the relative angle of the robot in the image frame. This is accomplished by finding the vertical corner support beams of the robot. Assuming that three support beams can be seen, the differences in the distances between the two sets of lines (i.e., the left-center and center-right sets of lines) will give an orientation. Using only this algorithm will result in a set of four ambiguous solutions, so another feature of the robot will have to be known to resolve the ambiguity. For example, the vertical beams of a robot with a square cross-section will look the same when oriented at intervals of 0, π/2, π, and -π/2, so another known feature will have to be exploited to de-conflict the possibilities. This analysis is required regardless, because at least one unique feature of each robot must be known so that the robots can differentiate between them. Once the orientation has been determined, the distance to the robot is computed from the relative size and the focal length of the camera.

Figure 5. Parameters defining three-robot formation. The table from Robot 1 (blue) is:

From Robot 1 (blue)    Range   Bearing   Orientation
To Robot 2 (red)       R12     B12       O12
To Robot 3 (green)     R13     B13       O13

Figure 6a. Actual image taken from Robot 1.

Figure 6b. Simulated image taken from Robot 1.


The image processing itself requires a pixel analysis of the entire image. In order to find the three vertical support beams of the robot, background clutter must first be separated.
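As a rough illustration of this step, the sketch below scores image columns by their vertical-edge energy and keeps the three strongest, well-separated candidates (a real pipeline would first suppress background clutter and validate the detections; the thresholds and names are illustrative):

```python
import numpy as np

def find_vertical_beams(gray, n_beams=3, min_sep=10):
    """Return the column indices of candidate vertical support beams in a
    grayscale image (2-D array of intensities)."""
    # horizontal intensity gradient, accumulated down each column
    grad = np.abs(np.diff(gray.astype(float), axis=1))
    energy = grad.sum(axis=0)
    beams = []
    for col in np.argsort(energy)[::-1]:       # strongest columns first
        if all(abs(int(col) - b) >= min_sep for b in beams):
            beams.append(int(col))
        if len(beams) == n_beams:
            break
    return sorted(beams)
```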

Figure 7. Pose estimation geometry. The camera of focal length f on Robot 1 views Robot 3, a square of side a rotated by the angle η; the three vertical support beams project to the image-plane locations x_L, x_M and x_R, giving the bearing angles β_L = arctan(x_L/f), β_M = arctan(x_M/f) and β_R = arctan(x_R/f). The relative orientation of Robot 3 with respect to Robot 1 is defined by the angle η (to be found from the transcendental equation below); once η is found, the range R_13 to the center of Robot 3 from Robot 1 can be defined.


Fig.7 depicts the geometry involved in relating a robot of known size (a square with side length a) to the projection of that image onto the focal plane. In this situation, the camera with focal length f on Robot 1 is pointed straight ahead (up), and Robot 3 is in the field of view, to the relative left of Robot 1. The values x_L, x_M, and x_R are found by the image processing that locates the three vertical support beams. From these three values and the camera's focal length f, the relative bearing angles to each support beam (β_L, β_M, β_R) can be determined. Taking the relative pointing angle of the camera into account, the formula for the relative bearing of Robot 3 from Robot 1 is

$$B_{13} = \alpha_{camera} + \frac{\arctan\left(x_R/f\right) + \arctan\left(x_L/f\right)}{2} \tag{1}$$

The orientation of Robot 3 on this bearing is described by η , which is found by solving the transcendental equation

$$\frac{f\cos\eta + x_M\sin\eta}{\sqrt{2}\left[\,f\cos\!\left(\eta+\frac{\pi}{4}\right) + x_M\sin\!\left(\eta+\frac{\pi}{4}\right)\right]} = \frac{x_M - x_L}{x_R - x_L} \tag{2}$$

Finally the range from Robot 1 to Robot 3 is determined. Since the camera will be mounted in the center of the robot, the range determined from the geometry in Fig.7 can be used. The equation used to determine the range is

$$R_{13} = \frac{a\,\sin\!\left(\dfrac{\pi}{4} + \eta + \tan^{-1}\dfrac{x_L}{f}\right)}{\sqrt{2}\,\sin\!\left(\dfrac{\beta_L + \beta_R}{2}\right)} \tag{3}$$
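Chaining Eqs. (1)-(3) together, a minimal numerical sketch of the pose-estimation step is given below; a coarse grid search stands in for whatever root-finding routine the onboard software would use, the fourfold ambiguity discussed above is not resolved, and all names and sign conventions are illustrative:

```python
import math
import numpy as np

def pose_from_beams(x_L, x_M, x_R, f, a, alpha_camera=0.0):
    """Orientation eta, bearing B and range R of a square robot of side a,
    from the image-plane positions of its three vertical support beams."""
    rhs = (x_M - x_L) / (x_R - x_L)
    def residual(eta):                        # Eq. (2), transcendental in eta
        num = f * math.cos(eta) + x_M * math.sin(eta)
        den = math.sqrt(2.0) * (f * math.cos(eta + math.pi / 4)
                                + x_M * math.sin(eta + math.pi / 4))
        return num / den - rhs
    grid = np.linspace(-math.pi / 2 + 0.01, math.pi / 2 - 0.01, 2001)
    eta = float(grid[np.argmin([abs(residual(e)) for e in grid])])
    beta_L, beta_R = math.atan(x_L / f), math.atan(x_R / f)
    bearing = alpha_camera + 0.5 * (beta_R + beta_L)                   # Eq. (1)
    rng = (a * math.sin(math.pi / 4 + eta + beta_L)
           / (math.sqrt(2.0) * math.sin(0.5 * (beta_L + beta_R))))     # Eq. (3)
    return eta, bearing, rng
```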

IV. Basics of the Guidance System

Let us first formalize the problem mathematically. The system of nonlinear equations driving each robot's dynamics (i = 1, 2, 3) is given below:

$$\begin{aligned}
\dot{x}_i &= u_i, & \dot{\psi}_i &= \omega_i,\\
\dot{y}_i &= v_i, & \ddot{\psi}_i = \dot{\omega}_i &= T_i,\\
\ddot{x}_i = \dot{u}_i &= F_i\cos(\psi_i+\alpha_i), & \dot{\alpha}_i &= \delta_i.\\
\ddot{y}_i = \dot{v}_i &= F_i\sin(\psi_i+\alpha_i) &&
\end{aligned} \tag{4}$$

The seven control states per robot are its x and y coordinates; the components of its linear velocity, u and v, respectively; the attitude angle ψ (defining the robot's orientation with respect to the X axis); the angular velocity ω, controlled by the reaction wheel; and the angle α, defining the direction of thrust with respect to the robot front. The three available controls (per robot) are the magnitude of its linear acceleration $F_i = \mathrm{Thrust}_i/m_i$ ($0 \le F_i \le F_i^{\max}$), the control input δ affecting the orientation of the thrust, and the angular acceleration $T_i = \mathrm{Torque}_i/I_i$ ($\left|T_i\right| \le T_i^{\max}$).
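For simulation purposes, Eq. (4) can be propagated directly; a minimal fixed-step explicit-Euler sketch is shown below (the integrator choice, state ordering and names are illustrative, and no saturation of the controls is applied):

```python
import math

def step(state, F, T, delta, dt):
    """One Euler step of the 3-DOF model of Eq. (4).
    state = [x, y, psi, u, v, omega, alpha]; F, T, delta are the controls."""
    x, y, psi, u, v, omega, alpha = state
    ax = F * math.cos(psi + alpha)     # thrust resolved in the floor frame
    ay = F * math.sin(psi + alpha)
    return [x + u * dt,
            y + v * dt,
            psi + omega * dt,
            u + ax * dt,
            v + ay * dt,
            omega + T * dt,
            alpha + delta * dt]

# e.g. a 0.1 s step with maximum thrust pointed along the robot front
new_state = step([1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                 F=0.45 / 63.0, T=0.0, delta=0.0, dt=0.1)
```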

While maneuvering, all robots (i = 1, 2, 3) must obey the geometrical constraints of the arena
$$0.5\,MSD \le x_i(t) \le D - 0.5\,MSD, \qquad 0.5\,MSD \le y_i(t) \le W - 0.5\,MSD, \qquad t \in \left[t_0, t_f\right] \tag{5}$$

(where MSD stands for the minimum safe distance between two robots and is equal to the diameter of the circles drawn around each robot in Fig.2), and avoid collisions with the other robots
$$\left(x_i(t)-x_k(t)\right)^2 + \left(y_i(t)-y_k(t)\right)^2 - MSD^2 \ge 0, \qquad \forall\, i,k = 1,2,3,\ i \ne k, \qquad t \in \left[t_0, t_f\right]. \tag{6}$$

It is required to satisfy the following sets of boundary conditions for each robot (i = 1, 2, 3):
$$\begin{aligned}
x_i(t_0) &= x_{i0}, & x_i(t_f) &= x_{if}, & \dot{x}_i(t_0) &= u_{i0}, & \dot{x}_i(t_f) &= u_{if}, & \ddot{x}_i(t_0) &= F_{i0}\cos(\psi_{i0}+\alpha_{i0}), & \ddot{x}_i(t_f) &= F_{if}\cos(\psi_{if}+\alpha_{if}),\\
y_i(t_0) &= y_{i0}, & y_i(t_f) &= y_{if}, & \dot{y}_i(t_0) &= v_{i0}, & \dot{y}_i(t_f) &= v_{if}, & \ddot{y}_i(t_0) &= F_{i0}\sin(\psi_{i0}+\alpha_{i0}), & \ddot{y}_i(t_f) &= F_{if}\sin(\psi_{if}+\alpha_{if}),\\
\psi_i(t_0) &= \psi_{i0}, & \psi_i(t_f) &= \psi_{if}, & \dot{\psi}_i(t_0) &= \omega_{i0}, & \dot{\psi}_i(t_f) &= \omega_{if}, & \ddot{\psi}_i(t_0) &= T_{i0}, & \ddot{\psi}_i(t_f) &= T_{if}.
\end{aligned}$$

In general, the performance index includes three appropriately weighted terms. The first one, $t_f^1$, assures a minimum transition time for the first robot; the second one, $\left|t_f^2 - t_f^1 - \Delta t\right| + \left|t_f^3 - t_f^2 - \Delta t\right|$, guarantees sequential ($\Delta t$ seconds apart) joining of the final formation; and the third one, $\sum_{i=1}^{3}\int_{t_0}^{t_f^i} F_i\,dt$, takes care of minimizing the overall gas consumption to produce thrust.

To generate quasioptimal collision-free trajectories for all three robots in real time (and to be able to update them every 2-3 seconds), the direct method of the calculus of variations was chosen.5 To apply it, we need to introduce an independent argument τ_i for each robot (i = 1, 2, 3) and, using the corresponding speed factors λ_i (different for each robot), rewrite the original system (4) as

$$\begin{aligned}
x_i' &= u_i/\lambda_i, & \psi_i' &= \omega_i/\lambda_i,\\
y_i' &= v_i/\lambda_i, & \omega_i' &= T_i/\lambda_i,\\
u_i' &= F_i\cos(\psi_i+\alpha_i)/\lambda_i, & \alpha_i' &= \delta_i/\lambda_i,\\
v_i' &= F_i\sin(\psi_i+\alpha_i)/\lambda_i &&
\end{aligned} \tag{7}$$
where the prime denotes differentiation with respect to τ_i.

Next, we establish three reference functions (per robot): for the coordinates x_i and y_i, as well as for the attitude angle ψ_i: $P_x^i(\tau_i)$, $P_y^i(\tau_i)$ and $P_\psi^i(\tau_i)$, respectively (i = 1, 2, 3). If we choose to use polynomials, then, as defined by the number of boundary conditions, the minimum order of the approximating polynomials is five.4 For this specific problem, to have additional flexibility (to allow avoiding collisions), the order of the polynomials was increased by two, to be able to vary the third derivative of x_i, y_i and ψ_i (i = 1, 2, 3) at both ends.4
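As an illustration of how such a reference polynomial can be constructed, the sketch below fits a seventh-order polynomial to prescribed values of the function and its first three derivatives at τ = 0 and τ = τ_f by solving a small linear system (this is only one possible construction, with illustrative names):

```python
import numpy as np
from math import factorial

def reference_poly(bc0, bcf, tau_f):
    """Coefficients c[0..7] of P(tau) = sum_k c_k tau^k matching the value and
    first three derivatives at tau = 0 (bc0) and tau = tau_f (bcf)."""
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for block, (tau, bc) in enumerate([(0.0, bc0), (tau_f, bcf)]):
        for d in range(4):                               # derivative order 0..3
            for k in range(d, 8):
                A[4 * block + d, k] = factorial(k) / factorial(k - d) * tau ** (k - d)
            b[4 * block + d] = bc[d]
    return np.linalg.solve(A, b)

def poly_derivative(c, tau, d=0):
    """Evaluate the d-th derivative of P at tau."""
    return sum(c[k] * factorial(k) / factorial(k - d) * tau ** (k - d)
               for k in range(d, len(c)))

# e.g. x(tau) for a rest-to-rest move from x = 1 m to x = 3 m over tau_f = 10,
# with free (guessed) third derivatives of 0.05 and -0.05 at the ends
cx = reference_poly([1.0, 0.0, 0.0, 0.05], [3.0, 0.0, 0.0, -0.05], 10.0)
```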

The characterization of the optimization routine follows. Given the boundary conditions, we first determine the nine reference polynomials $P_x^i(\tau_i)$, $P_y^i(\tau_i)$ and $P_\psi^i(\tau_i)$, i = 1, 2, 3, and compute their coefficients using the given boundary conditions and initial guesses on the third derivatives $x_{i0}'''$, $x_{if}'''$, $y_{i0}'''$, $y_{if}'''$, $\psi_{i0}'''$, $\psi_{if}'''$, i = 1, 2, 3. These variables, along with the lengths of the three virtual arcs $\tau_f^i$, form the vector of variable parameters Ξ. Next, applying inverse dynamics, we numerically solve the problem for the remaining states.

Specifically, we start by dividing each virtual arc $\tau_f^i$ (i = 1, 2, 3) into N-1 equal pieces $\Delta_i = \frac{\tau_f^i}{N-1}$, so that we have N equidistant nodes j = 1, ..., N along each virtual arc. For each robot, all the states at the first node j = 1 (corresponding to $\tau_1^i = \tau_0^i = 0$) are defined. Additionally, we define $\lambda_1^i = 1$, i = 1, 2, 3.

For each of the subsequent N-1 nodes j = 2, ..., N we compute the current values of the robots' coordinates and attitudes using the corresponding polynomials: $x_j^i = P_x^i(\tau_j^i)$, $y_j^i = P_y^i(\tau_j^i)$ and $\psi_j^i = P_\psi^i(\tau_j^i)$, i = 1, 2, 3. Then, using the inverse dynamics for the first four equations of the system (7), we determine the sum of the angles $\psi_j^i$ and $\alpha_j^i$, and the current control acceleration
$$\psi_j^i + \alpha_j^i = \arctan\!\left(\frac{y_j^{i\,\prime\prime}}{x_j^{i\,\prime\prime}}\right), \qquad F_j^i = \left(\lambda_j^i\right)^{2}\sqrt{\left(x_j^{i\,\prime\prime}\right)^{2} + \left(y_j^{i\,\prime\prime}\right)^{2}}. \tag{8}$$

Inverting the last equation of the system (7) and using the first of the two equations (8), we obtain the second control
$$\delta_j^i = \lambda_j^i\,\alpha_j^{i\,\prime} = \lambda_j^i\left[\cos^2\!\left(\psi_j^i+\alpha_j^i\right)\frac{x_j^{i\,\prime\prime}\,y_j^{i\,\prime\prime\prime} - x_j^{i\,\prime\prime\prime}\,y_j^{i\,\prime\prime}}{\left(x_j^{i\,\prime\prime}\right)^{2}} - \psi_j^{i\,\prime}\right]. \tag{9}$$

From the first two equations of the system (7) we define the current speed $V_j^i = \sqrt{\left(u_j^i\right)^2 + \left(v_j^i\right)^2}$, where
$$u_j^i = \lambda_j^i\,x_j^{i\,\prime}, \qquad v_j^i = \lambda_j^i\,y_j^{i\,\prime}, \tag{10}$$
and may therefore proceed with determining the elapsed time for each robot
$$\Delta t_j^i = \frac{2\sqrt{\left(x_j^i - x_{j-1}^i\right)^2 + \left(y_j^i - y_{j-1}^i\right)^2}}{V_j^i + V_{j-1}^i}. \tag{11}$$

Now, the current values of the speed factor are given by


$$\lambda_j^i = \frac{\Delta_i}{\Delta t_j^i} \tag{12}$$

and the current time for each robot is defined as
$$t_j^i = t_{j-1}^i + \Delta t_j^i \qquad \left(t_1^i = 0\right). \tag{13}$$
Finally, we invert the equations for the robots' attitude to get the third control

$$\omega_j^i = \frac{2\left(\psi_j^i - \psi_{j-1}^i\right)}{\Delta t_j^i} - \omega_{j-1}^i \qquad \text{and} \qquad T_j^i = \frac{\omega_j^i - \omega_{j-1}^i}{\Delta t_j^i}. \tag{14}$$
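One pass of this inverse-dynamics evaluation for a single robot can be sketched as follows; the node values and their τ-derivatives are assumed to come from the reference polynomials, the previous node's speed factor is reused as a simple approximation when evaluating Eq. (10), and all names are illustrative:

```python
import numpy as np

def inverse_dynamics_pass(x, y, psi, xp, yp, d_tau):
    """Recover time, speed factor and attitude controls along one virtual arc
    from node values of x, y, psi and their first tau-derivatives (xp, yp),
    following Eqs. (10)-(14)."""
    N = len(x)
    lam = np.ones(N)                    # speed factor, lambda_1 = 1
    V = np.zeros(N)                     # speed
    dt = np.zeros(N)                    # elapsed time per segment
    t = np.zeros(N)                     # current time
    omega = np.zeros(N)                 # spin rate (zero at a rest start)
    T = np.zeros(N)                     # reaction-wheel control
    V[0] = lam[0] * np.hypot(xp[0], yp[0])
    for j in range(1, N):
        V[j] = lam[j - 1] * np.hypot(xp[j], yp[j])        # Eq. (10), approx.
        dist = np.hypot(x[j] - x[j - 1], y[j] - y[j - 1])
        dt[j] = 2.0 * dist / (V[j] + V[j - 1])            # Eq. (11)
        lam[j] = d_tau / dt[j]                            # Eq. (12)
        t[j] = t[j - 1] + dt[j]                           # Eq. (13)
        omega[j] = 2.0 * (psi[j] - psi[j - 1]) / dt[j] - omega[j - 1]  # Eq. (14)
        T[j] = (omega[j] - omega[j - 1]) / dt[j]                       # Eq. (14)
    return t, lam, V, omega, T
```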

Once all the states along the trajectories are computed, we determine the performance index (employing the vector of weighting coefficients w, $\sum_{h=1}^{3} w_h = 1$)
$$J = w_1\,t_f^1 + w_2\left(\left|t_f^2 - t_f^1 - \Delta t\right| + \left|t_f^3 - t_f^2 - \Delta t\right|\right) + w_3\sum_{i=1}^{3}\sum_{j=2}^{N} F_j^i\,\Delta t_j^i \tag{15}$$

and form the aggregate penalty using an appropriate four-component vector of weighting coefficients k, $\sum_{q=1}^{4} k_q = 1$:
$$\Delta = \begin{bmatrix} k_1 & k_2 & k_3 & k_4 & k_4 & k_4 \end{bmatrix}
\begin{bmatrix}
\displaystyle\sum_{i=1}^{3}\Bigl(\max_j\bigl(0;\; F_j^i - F_i^{\max}\bigr)\Bigr)^{2}\\[6pt]
\displaystyle\sum_{i=1}^{3}\Bigl(\max_j\bigl(0;\; \bigl|T_j^i\bigr| - T_i^{\max}\bigr)\Bigr)^{2}\\[6pt]
\displaystyle\sum_{i=1}^{3}\max_j\Bigl(0;\; \Bigl|x_j^i - \tfrac{D}{2}\Bigr| + \tfrac{MSD}{2} - \tfrac{D}{2}\Bigr) + \max_j\Bigl(0;\; \Bigl|y_j^i - \tfrac{W}{2}\Bigr| + \tfrac{MSD}{2} - \tfrac{W}{2}\Bigr)\\[6pt]
\displaystyle\max_j\Bigl(0;\; MSD^2 - \bigl(x^1(t_j^{*}) - x^2(t_j^{*})\bigr)^2 - \bigl(y^1(t_j^{*}) - y^2(t_j^{*})\bigr)^2\Bigr),\quad t_j^{*} \in \bigl[0, \min\bigl(t_f^1, t_f^2\bigr)\bigr]\\[6pt]
\displaystyle\max_j\Bigl(0;\; MSD^2 - \bigl(x^1(t_j^{*}) - x^3(t_j^{*})\bigr)^2 - \bigl(y^1(t_j^{*}) - y^3(t_j^{*})\bigr)^2\Bigr),\quad t_j^{*} \in \bigl[0, \min\bigl(t_f^1, t_f^3\bigr)\bigr]\\[6pt]
\displaystyle\max_j\Bigl(0;\; MSD^2 - \bigl(x^2(t_j^{*}) - x^3(t_j^{*})\bigr)^2 - \bigl(y^2(t_j^{*}) - y^3(t_j^{*})\bigr)^2\Bigr),\quad t_j^{*} \in \bigl[0, \min\bigl(t_f^2, t_f^3\bigr)\bigr]
\end{bmatrix}. \tag{16}$$

Note that the last three terms in the compound penalty (16) are quite tricky because robots’ coordinates have to be interpolated so that they correspond to the same instants of time.

Finally, we apply any standard nonlinear constrained minimization routine to minimize the performance index while keeping the penalty within a certain tolerance:
$$\min_{\Xi,\;\Delta \le \varepsilon} J. \tag{17}$$

An example simulation using this guidance system concept is provided in the following section.
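In other words, the guidance problem reduces to a small nonlinear program over the parameter vector Ξ; a minimal wrapper around an off-the-shelf constrained minimizer is sketched below (the trajectory evaluation is only stubbed, and the parameter count and all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def evaluate_trajectories(xi):
    """Placeholder mapping the parameter vector Xi (third-derivative guesses
    and virtual-arc lengths) to the cost J of Eq. (15) and the penalty Delta
    of Eq. (16).  The real model runs the inverse-dynamics pass for all three
    robots and accumulates the penalty terms."""
    J = float(np.sum(xi ** 2))          # stand-in cost
    delta = 0.0                         # stand-in penalty
    return J, delta

def solve_guidance(xi0, eps=1e-3):
    cost = lambda xi: evaluate_trajectories(xi)[0]
    # keep the aggregate penalty within the tolerance eps, as in Eq. (17)
    penalty_ok = {"type": "ineq",
                  "fun": lambda xi: eps - evaluate_trajectories(xi)[1]}
    result = minimize(cost, xi0, method="SLSQP", constraints=[penalty_ok])
    return result.x

# e.g. 3 robots x (6 third-derivative guesses + 1 virtual-arc length) = 21 parameters
xi_opt = solve_guidance(0.1 * np.ones(21))
```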

Figure 8. Example sequence at 0 (a), 15 (b), 23 (c) and 45 seconds (d): bird's eye views of the floor, with the x axis pointing North and the y axis pointing East (both in meters).


V. Example Simulation

A rest-to-rest maneuver was simulated from an arbitrary starting position to a close-in, triangular final position. Four frames from the bird's eye view animation are provided in Fig.8. Robot 1 (bottom left), Robot 2 (top center), and Robot 3 (right center) perform the rest-to-rest maneuver in approximately 45 seconds. Frame (a) depicts the starting position, frames (b) and (c) depict intermediate positions, and frame (d) depicts the final position with the ground tracks used to achieve that position. The front side of each robot is indicated by a line extending from its center. The direction and magnitude of the rotating thruster is also indicated by the plume extending from each robot. After defining the initial absolute positions and the final relative positions, the algorithm varied the final absolute positions, the final times, the time factor step size (Δτ), and the initial and final translational jerk (the derivative of acceleration) to achieve the final formation without collision and in a timely manner. The guidance algorithm achieved these results by computing polynomials for x, y, and ψ for each of the robots such that all positions, velocities, and accelerations met the given boundary conditions. The control profiles were calculated from the inverse dynamics. A summary of all parameters is shown as functions of the time factor τ in Fig.9.

Figure 9. Example parameters and control profiles: λ, t, Δt, x, y, ψ, α+ψ, F, T, δ, u, v, ω and α plotted against the time factor τ for Robots 1, 2 and 3, together with a bird's eye view of the ground tracks.


Finally, Fig.10 illustrates another perspective of the animation developed for this system model. A view from any of the three cameras can be simulated at any time and animated. On the left of Fig.10, a bird's eye view of the floor is shown, highlighting the field of view of Robot 1's camera. The right side of Fig.10 depicts what is seen in that field of view.

VI. Conclusion

The Spacecraft Robotics Laboratory at the Naval Postgraduate School has developed two autonomous docking robots and is now developing three cooperative robots using vision based algorithms. The modeling and simulation of this project has started with rudimentary pose determination from two-dimensional photographs and a quasioptimal guidance and control algorithm. Although simulation of these methods has provided a proof of concept, they have not yet been successfully employed on a real-time system. Image processing time, coupled with the time delay in rotating the camera to alternate targets, may prevent a quick solution. Quicker image processing techniques need to be researched to increase the acceptable speeds of the robots. The simulation platform will act as a test bed to quickly evaluate different methods, sensors and actuators before building them. Although simplified, the concepts used for autonomous flocking robots could be expanded and further developed to 6-DOF orbiting spacecraft.

References

1 Romano, M., Friedman, D.A., and Shay, T.J., "Laboratory Experimentation of Autonomous Spacecraft Approach and Docking to a Collaborative Target," Proceedings of the AIAA Guidance, Navigation and Control Conference, Keystone, CO, August 2006; also accepted for publication in the AIAA Journal of Spacecraft and Rockets.

2 Romano, M., "On-the-ground Experimentation of Autonomous Docking between Small Spacecraft using Computer Vision," Proceedings of the AIAA Guidance, Navigation and Control Conference, San Francisco, CA, August 2005.

3 Romano, M., "On-the-ground Experiments of Autonomous Spacecraft Proximity Navigation using Computer Vision and Jet Actuators," Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, August 2005, pp. 1011-1016.

4 Yakimenko, O., "Direct Method for Rapid Prototyping of Near-Optimal Aircraft Trajectories," AIAA Journal of Guidance, Control, and Dynamics, 23(5), 2000, pp. 865-875.

5 Yakimenko, O.A., Kaminer, I.I., Lentz, W.J., and Ghyzel, P.A., "Unmanned Aircraft Navigation for Shipboard Landing Using Infrared Vision," IEEE Transactions on Aerospace and Electronic Systems, 38(4), 2002, pp. 1181-1200.

6 Yakimenko, O.A., Dobrokhodov, V.N., Kaminer, I.I., and Berlind, R.M., "Autonomous Scoring and Dynamic Attitude Measurement," Proceedings of the 18th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar, Munich, Germany, May 24-25, 2005.

7 Vallado, D.A., Fundamentals of Astrodynamics and Applications, 2nd ed., Microcosm Press, El Segundo, CA, 2004, Chap. 6.

Figure 10. Bird’s eye view and simulated camera image.