
Monocular Multi-Robot Trajectory Control with RGB LEDs

Anusha Nagabandi, Brian Nemsick, and Justin Yim

Abstract— We present an approach for monocular pose estimation and feedback for multi-robot trajectory control. RGB LEDs are mounted on each robot in asymmetric configurations. The LEDs are detected by a single camera with robustness to a variety of lighting conditions. The centroids of the LEDs are used to estimate robot pose through the P3P algorithm and Gauss-Newton minimization. Subsequently, the pose estimates are inputs into an Extended Kalman Filter (EKF) to localize each robot in the global frame. From the EKF estimate, feedback control is applied. Trajectory results with a two-robot team are included.

I. INTRODUCTION

Manually searching rubble for trapped survivors in urban disasters is both extremely challenging and hazardous to human rescuers. Multi-robot teams consisting of small robots show promise in their ability to navigate the narrow spaces found in pancaked buildings [1]. This enables the multi-robot team to perform Simultaneous Localization and Mapping (SLAM) in previously unreachable environments.

Within the context of small robots, heterogeneous multi-robot teams have exhibited distinct advantages over more common homogeneous teams. Specifically, we consider a team consisting of a single computationally capable observer robot equipped with a camera and multiple more mobile but less computationally capable picket robots. Unobservable hazards can be detected by sacrificing the less expensive picket robots [1]. This low-cost, specialized approach enables the picket robots to quickly survey terrain while the observer remains safe and maintains the SLAM estimate.

Localization is a central problem in cooperative multi-robot teams. Accurate localization, consisting of position and orientation, is essential for completing complex tasks effectively and is fundamental to the SLAM problem. RGB LEDs serve as a low-cost, low-weight, and low-power solution for accurate relative localization with a monocular camera in unstructured and variable-light environments [2]. Trajectory control builds on accurate localization to enable coordinated motions. It can be accomplished with simple feedback control, compensation for the dead zones of the motor commands, and a basic motion model of the robot.

In this paper we present an LED-based system that takes in images from a stationary camera and outputs motor commands while maintaining localization estimates of each picket robot. This approach can be directly extended to heterogeneous multi-robot teams by substituting a mobile observer robot for the stationary camera and by adding an observer state to the EKF.

II. RELATED WORK

Several previous approaches use a monocular camera for pose estimation in six degrees of freedom (6DOF) for robotic systems. The standard approach, demonstrated in [2][3], uses a pipeline of feature detection, the perspective-three-point (P3P) algorithm, and Gauss-Newton minimization. Infrared and RGB LEDs are a common feature choice [2][3] as they are low cost, low power, and easily identifiable in a wide range of lighting conditions. The P3P algorithm [4][5], along with prior estimates, enables estimation of the 6DOF pose when there are known correspondences between features. Gauss-Newton minimization refines the 6DOF pose estimate under a rigid-body assumption [6]. A byproduct of the minimization is a 6DOF pose covariance, which is advantageous for filtering-based localization approaches. A combined P3P and RANSAC approach is common for applications involving structure from motion (SfM) or estimating the pose of a camera [7][8]. RANSAC is not necessary here because LED detection yields few false detections and few total features.

Expanding on the relative pose estimate with covariance, localization filters have been used to improve state estimates. Generalized to systems with moving relative pose sensors, a distributed EKF approach is introduced in [9] for 3DOF estimation. Odometry sensors are used to propagate the state of each robot, and the EKF update is performed when the robots meet and a relative pose measurement is recorded. Relative pose measurements in an EKF framework are expanded on in [10] for more general systems: the constraints on relative pose measurements are relaxed to allow any combination of bearing, distance, or orientation. Similar to EKF-based approaches, particle filters have been explored in [11][12][13] and show gains in accuracy at the cost of computational resources. A more recent approach [14] uses range-only sensors with a team of aerial vehicles for simultaneous localization and mapping (SLAM) and builds on the limited-sensor approach of [15]. Graph-based approaches have also been used to address the multi-robot localization problem [16][17] as alternatives to filter-based approaches.

A variety of approaches exist for trajectory control. A basic controller for a monocular camera system is presented in [18]. Communication latency is important for multi-robot teams with a centralized observer that communicates with several robots: the delay between sensor measurement and motor commands grows with the number of robots due to the increasing amount of computation required. A notion of time delay in trajectory control that addresses this problem is introduced in [19].


Fig. 1: System overview

Stability and path accuracy are further addressed in [20]. Fuzzy adaptive controllers have demonstrated improvements for wheeled and treaded robots in systems without a slippage model [21]. The issue of dead zones is addressed in [22]. That approach does not require each robot's dead zones to be manually calibrated, which is a significant advantage when considering large teams of robots.

III. METHOD

Our computer vision and control approach assumes a previously calibrated camera, accurate static relative LED locations (from a motion capture system or precise measurements), and a specified trajectory. The overview of our approach is shown in Fig. 1. RGB LED detection finds the locations of the LEDs in the camera image. Correspondence and pose optimization takes the LED locations and estimates the pose of each robot in the camera frame. The EKF tracker fuses commanded motions to produce filtered estimates in the global frame for feedback control and to improve the robustness of the previous steps. Finally, feedback control is tasked with maintaining accurate trajectories. Each step of the overall system is described in the following sections.

A. RGB LED Detection

Fig. 2: RGB LED Detection overview

The RGB LED detection pipeline is summarized in Fig. 2. To reduce the number of false detections, the camera is set to a low exposure. From the raw RGB image, we generate a grayscale image and extract the saturation, which is defined as:

saturation = max(R, G, B) − min(R, G, B) ∈ [0, 1]

A minimum saturation, a minimum grayscale value, and a maximum grayscale value are used to find the colored halos surrounding the bright centers of the LEDs. A Difference of Gaussians (DoG) filter and a threshold are applied to find dots in the image. The grayscale image is directly thresholded to find bright pixels. The intersection of the dots and bright pixels forms the cores of the LEDs. Applying connected component analysis (CCA) to the cores defines regions. Previous pose estimates can optionally be used to filter out potentially false detections in the regions step. The regions and colored halos are combined with area, eccentricity, and color thresholds to find LEDs of the appropriate colors and to estimate their centroids (x, y) in the camera image. The LEDs' colors are unique to each robot, which enables the LEDs to be quickly clustered.
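The following is a minimal sketch of this detection pipeline using OpenCV and NumPy. The threshold values, the DoG kernel sizes, and the omission of the eccentricity and per-color classification steps are illustrative simplifications, not the authors' calibrated settings.

```python
import cv2
import numpy as np

def detect_led_centroids(bgr, sat_min=0.4, gray_min=60, gray_max=250,
                         dog_thresh=10.0, core_thresh=200,
                         min_area=3, max_area=400):
    """Illustrative LED detector: colored halos + bright cores -> centroids."""
    rgb = bgr.astype(np.float32) / 255.0
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # saturation = max(R, G, B) - min(R, G, B), in [0, 1]
    saturation = rgb.max(axis=2) - rgb.min(axis=2)

    # Colored halos: saturated pixels inside a grayscale band.
    halo = (saturation > sat_min) & (gray > gray_min) & (gray < gray_max)

    # Bright cores: Difference-of-Gaussians "dots" intersected with a brightness threshold.
    dog = cv2.GaussianBlur(gray, (3, 3), 1.0) - cv2.GaussianBlur(gray, (9, 9), 3.0)
    cores = ((dog > dog_thresh) & (gray > core_thresh)).astype(np.uint8)

    # Connected component analysis over the cores defines candidate regions.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cores)
    leds = []
    for i in range(1, n):                      # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area and halo[labels == i].any():
            leds.append(tuple(centroids[i]))   # (x, y) centroid in pixels
    return leds
```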

B. Correspondence and Pose Optimization

After clustering the LEDs for each robot, we use the approach and implementation of [2]. Pose correspondence is computed with the P3P algorithm [4][5] on combinations of three LEDs within each robot's individual cluster, as sketched below. Previous pose estimates are used to speed up the search and to handle potential degenerate cases due to occlusion.
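A sketch of how such a correspondence search might look is given below. Here `solve_p3p` stands in for a P3P solver as in [4][5], and the exhaustive subset/ordering search and the distance-to-previous-pose ranking are illustrative assumptions rather than the authors' exact search strategy.

```python
from itertools import combinations, permutations
import numpy as np

def match_cluster_to_pose(body_pts, detections, prev_pose, solve_p3p):
    """Brute-force correspondence search over 3-LED subsets (illustrative sketch).

    body_pts   : (N, 3) LED positions in the robot body frame
    detections : (M, 2) detected LED centroids for this robot's cluster
    prev_pose  : (R_prev, t_prev) previous pose estimate used to rank candidates
    solve_p3p  : callable returning candidate (R, t) poses for 3 correspondences
    """
    best, best_err = None, np.inf
    for det_idx in combinations(range(len(detections)), 3):
        for pt_idx in permutations(range(len(body_pts)), 3):
            candidates = solve_p3p(body_pts[list(pt_idx)], detections[list(det_idx)])
            for R, t in candidates:
                # Prefer the candidate whose position stays closest to the previous
                # estimate; this also rejects degenerate or ambiguous P3P solutions.
                err = np.linalg.norm(t - prev_pose[1])
                if err < best_err:
                    best, best_err = (R, t), err
    return best
```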

The pose optimization is performed with an exponential map and the Gauss-Newton minimization algorithm. The iterative approach minimizes reprojection error and starts with an initial estimate from the P3P algorithm [2]:

P* = argmin_P Σ_{⟨l,d⟩ ∈ C} ||π(P, l) − d||²

where l is an LED configuration (its 3D position on the robot), d is an LED detection (its 2D centroid), C is the set of LED correspondences, and π projects an LED from R³ into R² (the camera image).

The pose estimate covariance (Q_m) is calculated from the Jacobian (J) of the Gauss-Newton minimization [2]:

Q_m = (J^T Σ^{−1} J)^{−1},  where Σ = I_2×2 pixel²

The pose estimate with its covariance is then passed into the localization filter.
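Below is a minimal Gauss-Newton sketch of this refinement and of the covariance byproduct Q_m = (J^T Σ^{−1} J)^{−1}. It uses a numerical Jacobian and an additive 6-vector pose update rather than the exponential-map parametrization of [2]; `project` is an assumed camera projection function.

```python
import numpy as np

def refine_pose(pose0, led_body_pts, detections, project, n_iters=10, eps=1e-6):
    """Gauss-Newton refinement of a 6-DOF pose (sketch).

    pose0        : initial 6-vector [rotation (axis-angle), translation] from P3P
    led_body_pts : (N, 3) LED positions in the body frame
    detections   : (N, 2) corresponding image centroids
    project      : project(pose, pts) -> (N, 2) predicted pixel locations
    Returns the refined pose and Q_m = (J^T Sigma^-1 J)^-1 with Sigma = I (pixel^2).
    """
    pose = pose0.copy()
    for _ in range(n_iters):
        r = (project(pose, led_body_pts) - detections).ravel()   # reprojection residual
        # Numerical Jacobian of the stacked residual w.r.t. the 6 pose parameters.
        J = np.zeros((r.size, 6))
        for k in range(6):
            d = np.zeros(6); d[k] = eps
            J[:, k] = ((project(pose + d, led_body_pts) - detections).ravel() - r) / eps
        step = np.linalg.solve(J.T @ J, -J.T @ r)   # Sigma = I, so it drops out
        pose += step
        if np.linalg.norm(step) < 1e-8:
            break
    Q_m = np.linalg.inv(J.T @ J)   # pose covariance passed on to the EKF
    return pose, Q_m
```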

C. EKF Tracker

An Extended Kalman Filter smooths the pose estimates from the camera for each robot. The state of the system is updated after each camera pose estimate and is propagated based on the time step between camera pose estimates.

1) State Vector: The state vector for each robot is:

x = [q_BG^T, p_G^T, v_B^T]^T

where q_BG is the unit quaternion representing the rotation from the global frame (G) to the body frame (B), p_G is the position of the body in the global frame, and v_B is the velocity of the body in the body frame.


Fig. 3: Coordinate Frames

The corresponding error-state for each robot is:

x̃ = [δθ^T, p̃^T, ṽ^T]^T

where δθ is the minimal representation of the error quaternion δq ≈ [½ δθ^T, 1]^T. The remaining states use an additive error model (e.g. p̃ = p − p̂).

2) Motor Propagation: The state of the robot is propagated with the commanded linear velocity in the body frame (v_B), the commanded angular velocity in the body frame (ω_B), and the time step (Δt). Using Euler integration, the discrete-time motion model and state transition matrix of the system are:

[ q_BG,k ]   [ (q_0 + Δt/2 · Ω(ω_B) q_0) ⊗ q_BG,k−1 ]
[ p_G,k  ] = [ p_G,k−1 + Δt · R_BG,k−1^T v_B         ]
[ v_B,k  ]   [ v_B                                   ]

F = [ I_3×3                     0_3×3   0_3×3
      Δt · ⌊−R_BG^T v_B ×⌋      I_3×3   Δt · R_BG^T
      0_3×3                     0_3×3   I_3×3 ]

where ⊗ denotes quaternion multiplication, q_0 is the identity (unit) quaternion, R_BG is the rotation matrix corresponding to q_BG, and

⌊ω×⌋ = [  0     −ω_z    ω_y
          ω_z    0     −ω_x
         −ω_y    ω_x    0  ],        Ω(ω) = [ −⌊ω×⌋   ω
                                              −ω^T    0 ]        (1)

The process noise covariance (Q_p) was estimated by driving the robots on a motion capture system across different commanded velocities.
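A sketch of this propagation step for one robot is shown below, assuming [x, y, z, w] quaternion ordering; `quat_mult` and `quat_to_R` are assumed quaternion helper functions, not part of the original implementation.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def omega(w):
    # Omega(w) as in Eq. (1), acting on [x, y, z, w] quaternions.
    O = np.zeros((4, 4))
    O[:3, :3] = -skew(w)
    O[:3, 3] = w
    O[3, :3] = -w
    return O

def propagate(q_BG, p_G, v_B, w_B, dt, quat_mult, quat_to_R):
    """Propagate one robot's state with the commanded body-frame velocities (sketch)."""
    q0 = np.array([0.0, 0.0, 0.0, 1.0])            # identity quaternion
    dq = q0 + 0.5 * dt * omega(w_B) @ q0           # small-rotation quaternion
    dq /= np.linalg.norm(dq)
    q_BG_new = quat_mult(dq, q_BG)                 # q_BG,k = dq ⊗ q_BG,k-1
    R_GB = quat_to_R(q_BG).T                       # body-to-global rotation R_BG^T
    p_G_new = p_G + dt * R_GB @ v_B
    v_B_new = v_B                                  # velocity held at the commanded value

    # State-transition matrix F for the error state [dtheta, dp, dv].
    F = np.eye(9)
    F[3:6, 0:3] = dt * -skew(R_GB @ v_B)           # attitude error coupling into position
    F[3:6, 6:9] = dt * R_GB
    return q_BG_new, p_G_new, v_B_new, F
```

The error-state covariance would then propagate in the usual way, P ← F P F^T + Q_p.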

An Unscented Kalman Filter (UKF) or a Particle Filter (PF) could be considered to avoid computing the Jacobian. Due to the simplicity of the motion model and the direct observation of the pose component of the state, an EKF was chosen.

3) Camera Update: The state is updated after each pose estimate:

z = [q_BC^T, p_C^T]^T

where q_BC is the orientation of the body with respect to the camera frame and p_C is the position of the body in the camera frame. The transform from the global frame to the camera frame is expressed by:

[q_CG^T, p_cam^T]^T

where p_cam is the position of the camera in the global frame. A residual, r, and a measurement Jacobian, H, are required for the EKF update. The relationship between the residual and the measurement Jacobian is given by:

r = z − ẑ ≈ H x̃ + n

where n is measurement noise. In order to calculate the residual, r, an estimate of the camera measurement, ẑ, is computed. With respect to the camera frame, the predicted measurement is:

ẑ = [ q_BG ⊗ q_GC
      R_CG (p_G − p_cam) ]

where q_GC is the inverse of q_CG. The corresponding residual and measurement Jacobian are:

r = [ 2 · π(q_BC ⊗ (q_BG ⊗ q_GC)^−1)
      p_C − R_CG (p_G − p_cam)       ]

H = [ R_BG    0_3×3   0_3×3
      0_3×3   R_CG    0_3×3 ]

where π is defined as π([q_x, q_y, q_z, q_w]^T) = [q_x, q_y, q_z]^T and a small-angle approximation is used for the orientation difference between z and ẑ.

The standard covariance update, Kalman gain, and correction are calculated with the measurement noise (Q_m) from the Gauss-Newton minimization. The non-quaternion states use a standard additive correction (e.g. p_G,k+1|k+1 = p_G,k+1|k + Δp). The orientation components of the correction are converted to quaternion corrections, and quaternion multiplication is applied to the quaternion states (e.g. q_k+1|k+1 = q_k+1|k ⊗ δq).
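A sketch of the corresponding update step is shown below. The quaternion helpers (`quat_mult`, `quat_inv`, `quat_to_R`, `small_angle_quat`) and the exact quaternion conventions are assumptions; the structure follows the residual and H defined above.

```python
import numpy as np

def camera_update(q_BG, p_G, v_B, P, z_q_BC, z_p_C, q_CG, p_cam, Q_m,
                  quat_mult, quat_inv, quat_to_R, small_angle_quat):
    """EKF camera update for one robot (sketch; helper functions are assumed)."""
    R_CG = quat_to_R(q_CG)                                 # global -> camera rotation

    # Predicted measurement and residual (small-angle approximation for orientation).
    q_pred = quat_mult(q_BG, quat_inv(q_CG))               # predicted q_BC = q_BG ⊗ q_GC
    dq = quat_mult(z_q_BC, quat_inv(q_pred))
    r = np.concatenate([2.0 * dq[:3],                      # orientation residual
                        z_p_C - R_CG @ (p_G - p_cam)])     # position residual

    # Measurement Jacobian H = [[R_BG, 0, 0], [0, R_CG, 0]].
    H = np.zeros((6, 9))
    H[0:3, 0:3] = quat_to_R(q_BG)
    H[3:6, 3:6] = R_CG

    # Standard EKF gain, correction, and covariance update with Q_m from Gauss-Newton.
    S = H @ P @ H.T + Q_m
    K = P @ H.T @ np.linalg.inv(S)
    dx = K @ r
    P_new = (np.eye(9) - K @ H) @ P

    # Quaternion state is corrected multiplicatively, the rest additively.
    q_BG_new = quat_mult(small_angle_quat(dx[0:3]), q_BG)
    return q_BG_new, p_G + dx[3:6], v_B + dx[6:9], P_new
```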

D. Feedback Control

We apply a simple feedback control law to drive each robot through a trajectory consisting of waypoints. Each robot follows a differential-drive model with a linear and an angular velocity command. These commands serve as inputs to the on-board low-level velocity controllers that drive the robot's motors.

The angular velocity command ω_c is set by a Proportional-Derivative (PD) controller on the heading error, where the desired heading is the angle from the estimate of the robot's x, y position (p̂_x, p̂_y) to the target waypoint's x, y position (p_x, p_y). Since the motors have a deadband range in which low commands fail to turn the motors, we add a term based on the sign of the motor command:

θ_des = atan2(p_y − p̂_y, p_x − p̂_x)
θ_err = θ_des − θ̂
ω_c = k_p θ_err + k_d θ̇ + sgn(k_p θ_err + k_d θ̇)

where θ̂ is the estimated heading.

The forward velocity command v_c advances the robot at a nominal rate v when the heading error is zero and decreases the desired velocity as the magnitude of the heading error increases. This keeps the robot from swerving off course while maintaining a smooth transition from one desired heading to another during waypoint switches:

v_c = v · max(cos(θ_err), 0)

The robot's body-frame lateral and longitudinal distances from the target waypoint are thresholded. Thresholding is more lenient in the lateral direction due to the non-holonomic differential-drive constraint. Once both distances are sufficiently small, the waypoint is considered achieved and the robot moves on to the next waypoint in the specified trajectory.
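A compact sketch of this waypoint controller follows. The gains, the use of the heading rate as the derivative term, and the explicit deadband coefficient on the sign term are illustrative assumptions, not the authors' tuned values.

```python
import numpy as np

def waypoint_command(est_xy, est_heading, heading_rate, target_xy,
                     kp=1.5, kd=0.1, deadband=0.05, v_nominal=0.15):
    """Differential-drive waypoint controller (sketch; gains are illustrative)."""
    # Desired heading points from the estimated position toward the waypoint.
    theta_des = np.arctan2(target_xy[1] - est_xy[1], target_xy[0] - est_xy[0])
    theta_err = np.arctan2(np.sin(theta_des - est_heading),
                           np.cos(theta_des - est_heading))   # wrap to [-pi, pi]

    # PD on heading error plus a sign term to overcome the motor deadband.
    pd = kp * theta_err + kd * heading_rate
    omega_c = pd + deadband * np.sign(pd)

    # Slow down as heading error grows so the robot does not swerve off course.
    v_c = v_nominal * max(np.cos(theta_err), 0.0)
    return v_c, omega_c
```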


IV. EXPERIMENTAL RESULTS

In this section we present a subset of the total experimental results for the Zumy robotic platform. The videos presented in conjunction with the live demo at the poster session are available at: https://www.youtube.com/channel/UCCt9udUh6kNjDM1DC8CxLhw

A. Zumy Robotic Platform

To test the algorithmic methods of this work, we applied the monocular trajectory control technique to two low-cost mobile robots. The Zumy robotic platform1 is a decimeter-scale tracked robot designed to support a full Linux computing system for convenient software development, networking, and accelerated vision processing. The Zumy can carry a Microsoft Lifecam 3000 webcam and an InvenSense MPU-6050 MEMS IMU. It also supports WiFi wireless communication to a host computer, which allows the Robot Operating System (ROS)2 to transmit control and sensor information between the team of robots. The robot is also designed to be easily built from commercially available off-the-shelf (COTS) parts for a total cost of $350. For the purpose of this project, we fitted each Zumy with an LED "hat" and removed the webcam, as shown in Fig. 4.

We used the Microsoft Lifecam 3000 that was removed from the Zumy as the stationary camera. At the beginning of each experiment, we calibrate the world frame by placing one robot at the desired origin. We take the inverse of this robot's transform relative to the camera as our constant camera transform, as sketched below.
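A sketch of this calibration under an assumed 4x4 homogeneous-transform convention:

```python
import numpy as np

def calibrate_world_frame(R_cam_robot, t_cam_robot):
    """World frame = frame of the robot placed at the desired origin (sketch).

    R_cam_robot, t_cam_robot : the origin robot's pose in the camera frame,
                               as estimated by P3P + Gauss-Newton.
    Returns the constant camera pose in the world frame used thereafter.
    """
    T_cam_robot = np.eye(4)
    T_cam_robot[:3, :3] = R_cam_robot
    T_cam_robot[:3, 3] = t_cam_robot
    # The camera pose in the world frame is the inverse of the robot-in-camera transform.
    return np.linalg.inv(T_cam_robot)
```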

Fig. 4: LED "Hat" Zumy (left) & Normal Zumy with OptiTrack Markers (right)

B. RGB LED Detection and Pose Estimation

A sample frame of RGB LED detection is shown in Fig. 5. The original image is processed to find the LEDs. In this example frame, all LEDs are found with no false detections. The pose estimate visualization shows the optimized locations of the LEDs as red circles, the rigid body frame as blue rectangles, and the rotation coordinate frame as the axes.

1https://wiki.eecs.berkeley.edu/biomimetics/Main/Zumy

2www.ros.org

Fig. 5: Original image (top left), LED detection (bottom left), pose estimate for Zumy-1 (top right), pose estimate for Zumy-2 (bottom right)

C. EKF and Trajectory Control

The results for a single Zumy performing a circle four consecutive times are shown in Figs. 6 and 7. The circle trajectory was performed on a flat surface. Fig. 7 shows that our EKF reduces uncertainty in the depth estimate, as the height (p_z) is more constant. Improvements in the XY plane would require a more effective motion model and a more rigorous process noise covariance model. Overall, the robot consistently follows the circle trajectory despite limitations in the motion model and the deadband of the motors.

Further demonstrations, such as the robot following a moving camera (with the waypoint always set to the origin) and multi-robot trajectory videos, can be found on the YouTube channel linked above.

Fig. 6: Pose estimate and EKF estimate in the XY plane for a circle trajectory. The circle was performed a total of 4 times by the Zumy.

Fig. 7: Pose estimate and EKF estimate in the XZ plane for a circle trajectory. The circle was performed a total of 4 times by the Zumy.

V. CONCLUSIONS

In this paper, we demonstrated the successful integration of RGB LED detection using a monocular camera, pose estimation through the P3P algorithm and Gauss-Newton minimization, an EKF to smooth the estimates, and a PD controller on the robots' heading. System integration, including data transfer and wireless communication, was done entirely in ROS. In our current framework, we communicate at 30 Hz and achieve real-time control. One issue we faced was that visualizing additional images on screen or recording bagfiles with multiple image topics introduced lag into the system. This lag had a significant negative effect on our detection and pose estimation, making real-time control infeasible.

Aside from reducing lag in ROS, we can also save computational power in the LED detection. Currently, we downsample and search the entire camera image at each iteration. Instead, we can introduce a region of interest in which the previous LED detections give an estimate of the image region where the next search should be performed; a sketch follows. In this way, we eliminate the need to search the full image at each iteration. We also attain better resolution because we no longer need to downsample the image, providing better localization of the robots at larger distances from the camera.
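A sketch of this region-of-interest idea, with an illustrative margin and a fallback to the full frame when no previous detections exist:

```python
import numpy as np

def roi_from_previous(prev_centroids, image_shape, margin=60):
    """Bounding box around the previous LED detections, clamped to the image (sketch)."""
    if not prev_centroids:
        return None                      # no prior detections: search the full frame
    pts = np.asarray(prev_centroids)
    h, w = image_shape[:2]
    x0 = max(int(pts[:, 0].min()) - margin, 0)
    y0 = max(int(pts[:, 1].min()) - margin, 0)
    x1 = min(int(pts[:, 0].max()) + margin, w)
    y1 = min(int(pts[:, 1].max()) + margin, h)
    return x0, y0, x1, y1

# Usage sketch: crop, detect at full resolution, shift centroids back to image coordinates.
# roi = roi_from_previous(prev, frame.shape)
# if roi:
#     x0, y0, x1, y1 = roi
#     leds = [(x + x0, y + y0) for x, y in detect_led_centroids(frame[y0:y1, x0:x1])]
```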

From adjusting the parameters of our LED detection to changing our control gains, future performance improvements can be made through general parameter tuning. We can achieve tighter control by implementing better deadzone compensation for our motors, and better EKF results through a better motion model and process noise covariance calibration. Furthermore, we can use active LED modulation for more robust LED detection and correspondence. Looking toward the overall project goals mentioned in the introduction, a logical next step would be to put the camera on a moving platform. Our work lays a solid foundation, but much work remains in using multi-robot teams of small, inexpensive robots to explore previously unreachable environments.

ACKNOWLEDGMENT

The authors would like to thank the Biomimetic Millisystems Lab, specifically Austin Buchan, Andrew Chen, Andrew Pullin, and James Lam Yi, for their contributions to the Zumy platform development.

REFERENCES

[1] D. W. Haldane, P. Fankhauser, R. Siegwart, and R. S. Fearing, "Detection of slippery terrain with a heterogeneous team of legged robots," in 2014 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 4576-4581, May 31-June 7, 2014.

[2] M. Faessler, E. Mueggler, K. Schwabe, and D. Scaramuzza, "A monocular pose estimation system based on infrared LEDs," 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014, pp. 907-913.

[3] A. Breitenmoser, L. Kneip, and R. Siegwart, "A monocular vision-based system for 6D relative robot localization," 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, 2011, pp. 79-85.

[4] L. Quan and Z. Lan, "Linear N-point camera pose determination," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 8, pp. 774-780, Aug. 1999.

[5] L. Kneip, D. Scaramuzza, and R. Siegwart, "A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation," 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, 2011, pp. 2969-2976.

[6] J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani, "Hierarchical model-based motion estimation," Proc. Eur. Conf. Computer Vision (ECCV), pp. 237-252, 1992.

[7] A. Agarwal and B. Triggs, "Recovering 3D human pose from monocular images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 44-58, Jan. 2006.

[8] M. Andriluka, S. Roth, and B. Schiele, "Monocular 3D pose estimation and tracking by detection," 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, 2010, pp. 623-630.

[9] S. I. Roumeliotis and G. A. Bekey, "Distributed multirobot localization," IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 781-795, Oct. 2002.

[10] A. Martinelli, F. Pont, and R. Siegwart, "Multi-robot localization using relative observations," in 2005 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 2797-2802, April 18-22, 2005.

[11] A. Howard, "Multi-robot simultaneous localization and mapping using particle filters," Intl. J. of Robotics Research, vol. 25, no. 12, pp. 1243-1256, 2006.

[12] L. Carlone, M. K. Ng, J. Du, B. Bona, and M. Indri, "Rao-Blackwellized particle filters multi robot SLAM with unknown initial correspondences and limited communication," in 2010 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 243-249, May 3-7, 2010.

[13] A. Prorok, A. Bahr, and A. Martinoli, "Low-cost collaborative localization for large-scale multi-robot systems," 2012 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 4236-4241, May 14-18, 2012.

[14] F. R. Fabresse, F. Caballero, and A. Ollero, "Decentralized simultaneous localization and mapping for multiple aerial vehicles using range-only sensors," 2015 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 6408-6414.

[15] K. E. Bekris, M. Glick, and L. E. Kavraki, "Evaluation of algorithms for bearing-only SLAM," 2006 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1937-1943.

[16] A. Ahmad, G. D. Tipaldi, P. Lima, and W. Burgard, "Cooperative robot localization and target tracking based on least squares minimization," in 2013 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 5696-5701, May 6-10, 2013.

[17] V. Indelman, E. Nelson, N. Michael, and F. Dellaert, "Multi-robot pose graph localization and data association from unknown initial relative poses via expectation maximization," in 2014 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 593-600, May 31-June 7, 2014.

[18] P. Martinet, N. Daucher, J. Gallice, and M. Dhome, "Robot control using monocular pose estimation," Workshop on New Trends in Image-Based Robot Servoing, IROS'97, 1997.

[19] P. H. Chang and J. W. Lee, "A model reference observer for time-delay control and its application to robot trajectory control," IEEE Transactions on Control Systems Technology, vol. 4, no. 1, pp. 2-10, Jan. 1996.

[20] H. H. T. Liu and J. K. Mills, "Robot trajectory control system design for multiple simultaneous specifications: Theory and experiments," J. Dynam. Syst., Meas., Contr., vol. 120, no. 4, pp. 520-523, 1998.


[21] D. Chwa, "Fuzzy adaptive tracking control of wheeled mobile robots with state-dependent kinematic and dynamic disturbances," IEEE Transactions on Fuzzy Systems, vol. 20, no. 3, pp. 587-593, June 2012.

[22] B. Chen, X. Liu, K. Liu, and C. Lin, "Fuzzy approximation-based adaptive control of nonlinear delayed systems with unknown dead zone," IEEE Transactions on Fuzzy Systems, vol. 22, no. 2, pp. 237-248, April 2014.