
978-1-4799-0880-6/13/$31.00 © 2013 IEEE

An Augmented Reality approach for trajectory planning and control of tracked vehicles in rescue environments

Mario Gianni, Gianpaolo Gonnelli, Arnab Sinha, Matteo Menna and Fiora Pirri
ALCOR Laboratory, DIAG, Sapienza University of Rome
Via Ariosto 25, Rome, Italy
{gianni,gonnelli,sinha,menna,pirri}@dis.uniroma1.it

Abstract — In this paper we propose a framework for trajectory planning and control of tracked vehicles in rescue environments, based on Augmented Reality (AR). The framework provides the human operator with an AR-based interface that facilitates both 3D path planning and obstacle negotiation. The interface converts the 3D movements of a marker pen, handheld by the operator, into trajectories feasible for the tracked vehicle. The framework implements a trajectory tracking controller that allows the tracked vehicle to autonomously follow the trajectories decided by the operator. This controller relies on a localization system which provides position feedback in real time. The localization system combines the performance of a dead reckoning system with the accuracy of an ICP-based SLAM in pose estimation to determine the pose of the tracked vehicle within the 3D map. We demonstrate the application of the planning framework in autonomous robot navigation for evaluating the robot capabilities in rescue environments. Our experiments show the effectiveness of the trajectory tracking control method.

Keywords: Augmented Reality, Trajectory tracking, Trajectory control, Unmanned Ground Vehicles, Urban Search & Rescue Robotics

I. INTRODUCTION

Tracked vehicles are currently used in search and rescue, military, agricultural and planetary exploration applications where terrain conditions are difficult and unpredictable. They are better suited for such tasks than wheeled vehicles due to the larger contact area of tracks with the ground, which provides better traction on harsh terrains [1], [2]. These environments are often inaccessible or considered too dangerous for humans to operate in, thus requiring the tracked vehicle to be endowed with autonomous navigation, safe locomotion and human-robot interaction capabilities to assist humans in complex tasks such as rescue, scouting or transportation.

Autonomous navigation in harsh terrains is still a daunting challenge because of the difficulty in capturing and representing the variability of the environment. It relies on both 2D and 3D models of the environment to cope with perception and motion planning [3]–[6]. Moreover, autonomous navigation requires the ability to decide whether parts of the environment can be traversed or have to be bypassed [7], [8]. Although these approaches have been demonstrated to work online, on real terrain data, their complexity significantly reduces the performance of the activities of the robot, which is an important factor in a search and rescue mission.

Fig. 1. Rescue scenarios: (a) wheelchair ramp, (b) rubble, (c) tunnel accident

Tracked vehicles are difficult to control since they are steered by skid steering [9]. The skid steering principle is based on controlling the relative velocities of the two tracks, much in the same way as differential drive wheeled vehicles. However, control of tracked locomotion poses a more complex problem because variation of the relative velocities results in slippage between the tracks and the ground. The slippage generates large accumulated positioning errors, thus affecting the accuracy of the odometry as well as the performance of the trajectory tracking controller of the tracked vehicle. From this perspective, trajectory tracking for safe locomotion plays a crucial role in making the motions of the tracked vehicle effective, so that it can autonomously accomplish the navigation task in such unstructured environments. Several methods to improve the accuracy of the odometry, based on slip compensation, and to enable the tracked vehicle to follow a given trajectory have been proposed [10]–[12]. These methods require accurate measurements of the vehicle velocity using internal sensors (e.g., inertial sensors) and friction coefficients that change according to the ground surface. In addition, these methods are difficult to implement on a robot.

For rescue tasks human-robot interaction (HRI) is essential [13], [14]. HRI models robot and human perception in a reciprocal design, maximizing the information provided to the human and the feedback given to the robot. Joint perception is very helpful to accomplish tasks [15], [16]. In fact, sharing knowledge is the basis of both robot and human situation awareness [17], [18].

In this paper we propose a framework for trajectory planning and control of tracked vehicles in rescue environments, based on Augmented Reality (AR). This framework provides the human operator with an AR-based interface that facilitates both 3D path planning and obstacle negotiation. The interface offers the capability to track the movements of a marker pen, handheld by an operator, and to convert these movements into a trajectory the tracked vehicle can follow. The drawn trajectories are immediately overlaid on the displayed 3D map. The operator can modify the drawn path by partially changing it or by completely replanning it. The framework implements a trajectory tracking controller, based on input-output linearization via feedback, to allow the tracked vehicle to autonomously follow the path committed by the operator [19]. The feedback of the controller is provided by a localization system which exploits the performance of a dead reckoning system together with the accuracy of an ICP-based Simultaneous Localization and Mapping (SLAM) in pose estimation [20], to determine the pose of the tracked vehicle on the 3D map in real time.

The paper is organized as follows. In Section II we introduce the kinematic model of the tracked vehicle. In Section III we describe the robot localization system, which provides the necessary feedback to the vehicle tracking controller. The control strategy for trajectory tracking is described in Section IV. In Section V we present the AR-based approach for trajectory planning and generation. Validation experiments are discussed in Section VI.

II. THE ROBOT MODEL

The tracked vehicle can be approximated by a differential drive robot, which is modelled as a unicycle. The higher level control inputs to the robot are specified in terms of the linear velocity $v$ and the angular velocity $\omega$. The differential drive model is given as follows:

$$\begin{bmatrix} \dot{x}_R \\ \dot{y}_R \\ \dot{\theta}_R \end{bmatrix} = \begin{bmatrix} v\cos\theta \\ v\sin\theta \\ \omega \end{bmatrix} \qquad (1)$$

Fig. 2. Differential drive robot model

Figure 2 shows the position of the centre of mass of the robot in a 2D plane. The orientation of the robot is given by the angle between the $x'$ axis of the local robot reference frame and the global $x_G$ axis. The linear and angular velocities are given as input to the lower level control, which computes the velocities $v_l$ and $v_r$ of the left and right tracks of the robot according to the following inverse kinematics:

$$v_l = \begin{cases} v - \frac{L\,\omega}{2} & \text{if } |v_l| \le v_{max} \\ v_{max} & \text{if } v_l > v_{max} \\ -v_{max} & \text{if } v_l < -v_{max} \end{cases} \qquad (2)$$

$$v_r = \begin{cases} v + \frac{L\,\omega}{2} & \text{if } |v_r| \le v_{max} \\ v_{max} & \text{if } v_r > v_{max} \\ -v_{max} & \text{if } v_r < -v_{max} \end{cases} \qquad (3)$$

Here $L$ is the distance between the robot tracks and $v_{max}$ is the maximum track velocity. Both Eq. (2) and Eq. (3) bound the velocity inputs of the tracked vehicle.
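As an illustration, the lower level mapping of Eqs. (2) and (3) is straightforward to implement. The following is a minimal sketch in Python; the function and parameter names are ours, not the paper's.

```python
def track_velocities(v, omega, L, v_max):
    """Map the high-level commands (v, omega) to left/right track
    velocities, Eqs. (2)-(3): differential drive inverse kinematics
    with saturation at +/- v_max."""
    v_l = v - (L * omega) / 2.0
    v_r = v + (L * omega) / 2.0
    clamp = lambda x: max(-v_max, min(v_max, x))
    return clamp(v_l), clamp(v_r)
```

For example, `track_velocities(0.25, 0.1, 0.5, 0.6)` returns the unsaturated pair (0.225, 0.275), while larger commanded turn rates would be clipped at the track limits.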

III. ROBOT LOCALIZATION SYSTEM

The localization system (LS) provides the trajectory tracking controller with the pose $q_R(t)$ of the robot in a global coordinate frame. This system relies on two main localization sub-systems: (1) a dead reckoning navigation system (DRNS) and (2) an ICP-based simultaneous localization and mapping (ICP-SLAM) [20], [21]. In addition, the DRNS is endowed with a complementary filter (CF) which estimates the Euler angles of the body of the robot by fusing odometry with inertial data coming from the IMU sensor [22]–[24]. The localization system compensates the pose estimation error accumulated by the DRNS by integrating the laser odometry provided by ICP-SLAM, and so it determines the robot pose correctly. The pose refinement process can be summarized as follows.

Fig. 3. 3D Pose estimation based on IMU sensor data, provided by DRNS

The DRNS provides the localization system with an estimate $q_R^{odom}(t)$ of the robot pose, with respect to the robot odometry reference frame, at $f_{DRNS} = 10$ Hz. This estimate is subject to a cumulative error, which increases with the travelled distance (see Figure 3). The error is due to the inaccurate measurements of heading rotation and to the lack of a global reference.

In parallel, on the basis of the laser odometry information, ICP-SLAM provides the localization system with an estimate $q_R^{map}(t)$ of the pose of the robot, referred to the map reference frame, as well as the coordinate transformation $T_{odom}^{map}(t)$ between the robot odometry reference frame and the map reference frame. The map reference frame is assumed to be the global reference frame. Due to drift and slippage, this transformation is not fixed. Both $q_R^{map}(t)$ and $T_{odom}^{map}(t)$ are updated by ICP-SLAM at $f_{ICP\text{-}SLAM} = 1$ Hz. The output frequency $f_{ICP\text{-}SLAM}$ is limited by the scanning speed of the laser sensor.

Given the estimate $q_R^{odom}(t)$ of the pose of the robot, provided by the DRNS, and given the current transformation $T_{odom}^{map}(t)$ from ICP-SLAM, the LS computes the pose $q_R^{map}(t)$ of the tracked vehicle at time $t$ as follows:

$$q_R^{map}(t) = T_{odom}^{map}(t) \cdot q_R^{odom}(t) \qquad (4)$$

The LS refines the robot pose by updating the transformation $T_{odom}^{map}(t)$ as soon as ICP-SLAM updates it. Figure 4 shows the effects of the pose refinement process. This approach allows the LS to provide the pose of the robot $q_R^{map}(t)$ as feedback to the trajectory tracking controller at $f_{DRNS}$.
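The refinement in Eq. (4) amounts to composing the fast odometry estimate with the most recent map correction. A minimal sketch follows, assuming poses and transforms are represented as 4x4 homogeneous matrices; the class and method names are ours.

```python
import numpy as np

class PoseRefiner:
    """Combine DRNS odometry (10 Hz) with ICP-SLAM corrections (1 Hz),
    as in Eq. (4): q_map(t) = T_map_odom(t) * q_odom(t)."""

    def __init__(self):
        self.T_map_odom = np.eye(4)  # latest odometry-to-map transform

    def on_slam_update(self, T_map_odom):
        # Called whenever ICP-SLAM publishes a new transform (1 Hz).
        self.T_map_odom = T_map_odom

    def refine(self, q_odom):
        # Called at the DRNS rate (10 Hz): re-express the odometry pose
        # in the map (global) frame using the latest correction.
        return self.T_map_odom @ q_odom
```

The design choice is that the slow SLAM correction never blocks the fast feedback loop: the controller always receives a pose at $f_{DRNS}$, refined whenever a new transform arrives.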

Fig. 4. 3D Pose estimation based on point clouds from the 3D laser sensor, from ICP-SLAM

IV. TRAJECTORY TRACKING CONTROL

Let $q_d(t) = [x_d(t)\ \ y_d(t)\ \ \theta_d(t)]^T$ be the position and orientation of a virtual reference frame on the desired Cartesian trajectory, with respect to a global reference frame. Let $q_R(t) = [x_R(t)\ \ y_R(t)\ \ \theta_R(t)]^T$ be the pose of the local robot reference frame expressed in the global reference frame, provided by the localization system. The trajectory tracking controller of the robot has to generate the control inputs $c_d(t) = [v_d(t)\ \ \omega_d(t)]^T$ in order to asymptotically stabilize to zero the tracking error $e(t)$ between $q_d(t)$ and $q_R(t)$, which is defined as follows:

$$e(t) = \begin{bmatrix} \cos\theta_R(t) & \sin\theta_R(t) & 0 \\ -\sin\theta_R(t) & \cos\theta_R(t) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_d(t) - x_R(t) \\ y_d(t) - y_R(t) \\ \theta_d(t) - \theta_R(t) \end{bmatrix} \qquad (5)$$
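Eq. (5) simply rotates the world-frame pose error into the robot frame. A one-function sketch, with names that are ours:

```python
import math

def tracking_error(q_d, q_R):
    """Eq. (5): error between desired pose q_d = (x_d, y_d, theta_d)
    and current pose q_R = (x_R, y_R, theta_R), expressed in the robot
    frame via the planar rotation by -theta_R."""
    ex, ey = q_d[0] - q_R[0], q_d[1] - q_R[1]
    c, s = math.cos(q_R[2]), math.sin(q_R[2])
    return (c * ex + s * ey, -s * ex + c * ey, q_d[2] - q_R[2])
```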

The proposed approach to the design of the trajectory tracking controller of the tracked vehicle is based on input-output linearization via feedback [19], [25].

Let $B = [x_B(t)\ \ y_B(t)]^T$ be a point displaced at distance $b > 0$ along the main axis of the tracked vehicle (see Figure 2), whose Cartesian coordinates are

$$\begin{aligned} x_B(t) &= x_R(t) + b\cos\theta_R(t) \\ y_B(t) &= y_R(t) + b\sin\theta_R(t) \end{aligned} \qquad (6)$$

Given Eq. (1) and Eq. (6), the kinematics of the point $B$ on the tracked vehicle is given by

$$\begin{bmatrix} \dot{x}_B(t) \\ \dot{y}_B(t) \end{bmatrix} = \begin{bmatrix} \cos\theta_R(t) & -b\sin\theta_R(t) \\ \sin\theta_R(t) & b\cos\theta_R(t) \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix} = T(\theta_R(t)) \begin{bmatrix} v \\ \omega \end{bmatrix} \qquad (7)$$

Since $T(\theta_R(t))$ is not singular for any $b > 0$, the input-output linearization via feedback can be obtained by computing the transformation of the control inputs as follows:

$$\begin{bmatrix} v \\ \omega \end{bmatrix} = T^{-1}(\theta_R(t)) \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \qquad (8)$$

Let $q_{d,B}(t) = [x_{d,B}(t)\ \ y_{d,B}(t)\ \ \theta_{d,B}(t)]^T$ be the pose of the virtual reference frame on the desired Cartesian trajectory, displaced at the same distance $b > 0$ along the tangent to the trajectory at $q_d(t)$. Let $\dot{q}_{d,B}(t) = [\dot{x}_{d,B}(t)\ \ \dot{y}_{d,B}(t)\ \ \dot{\theta}_{d,B}(t)]^T$ be the velocity of the displaced virtual reference frame, according to the timing law defined on the desired trajectory. The trajectory tracking controller designed for the tracked vehicle has the following linear form:

$$\begin{aligned} u_1 &= \dot{x}_{d,B}(t) + k_1\,(x_{d,B}(t) - x_B(t)) \\ u_2 &= \dot{y}_{d,B}(t) + k_2\,(y_{d,B}(t) - y_B(t)) \end{aligned} \qquad (9)$$

Here $k_1 > 0$ and $k_2 > 0$ are the controller gains. The linear feedback controller makes the point $B$ track any desired trajectory, even with a discontinuous tangent to the path (e.g., a square, without stopping at corners), without requiring the robot to stop and reorient itself at those points, as long as $b > 0$ [19].
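Putting Eqs. (6)-(9) together, one control cycle reduces to a few lines. The following is a sketch under the assumption that the displaced reference point and its feedforward velocity are precomputed by the planner; all names are ours.

```python
import math

def control_step(q_R, p_dB, v_dB, b, k1, k2):
    """One cycle of the input-output linearizing controller.
    q_R  = (x_R, y_R, theta_R): robot pose from the localization system.
    p_dB = (x_dB, y_dB): displaced reference point on the trajectory.
    v_dB = (xdot_dB, ydot_dB): its feedforward velocity.
    Returns the commands (v, omega) of Eq. (8)."""
    x_R, y_R, th = q_R
    # Point B displaced by b along the robot's main axis, Eq. (6).
    x_B = x_R + b * math.cos(th)
    y_B = y_R + b * math.sin(th)
    # Linear feedback law on the displaced point, Eq. (9).
    u1 = v_dB[0] + k1 * (p_dB[0] - x_B)
    u2 = v_dB[1] + k2 * (p_dB[1] - y_B)
    # Analytic inverse of T(theta_R), Eq. (8); det T = b > 0.
    v = math.cos(th) * u1 + math.sin(th) * u2
    omega = (-math.sin(th) * u1 + math.cos(th) * u2) / b
    return v, omega
```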

Figure 5 depicts the convergence to zero of the Cartesian tracking error $e_p(t) = [x_d(t) - x_R(t)\ \ y_d(t) - y_R(t)]^T$, with decoupled dynamics on its two components, induced by the linearization of the controller.

Fig. 5. Cartesian trajectory error $e_p(t)$

V. TRAJECTORY PLANNING AND GENERATION

Let $\mathcal{C} = \mathbb{R}^3 \times SO(3)$ be the configuration space of the tracked vehicle. Let $\mathcal{O} = \bigcup_{i=1}^{p} \mathcal{O}_i \subset \mathbb{R}^3$ be the region occupied by the obstacles. Let $q_I \in \mathcal{C} - \mathcal{O}$ and $q_G \in \mathcal{C} - \mathcal{O}$ be the initial and the goal configurations of the robot, respectively. The trajectory planning problem can be broken down into the following two sub-problems:
• generating a collision-free path $\tau : [0, 1] \to \mathcal{C} - \mathcal{O}$, such that $\tau(0) = q_I$ and $\tau(1) = q_G$;
• defining a timing law $s(t)$ on the path, such that $s(t_i) = 0$ and $s(t_f) = 1$, with $\dot{s}(t) > 0$ for $t \in [t_i, t_f]$.
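As a concrete example, the linear timing law $s(t) = (t - t_i)/(t_f - t_i)$ satisfies both boundary conditions with $\dot{s}(t) = 1/(t_f - t_i) > 0$; the constant velocity profile $v_d$ adopted below plays an analogous role, parametrizing the path uniformly in arc length.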


An approach based on Augmented Reality (AR) is proposed to address the problem of planning safe paths for the robot in the real environment [26]–[29]. This approach enables the human operator to plan collision-free paths on the 3D map of the environment. For this purpose, the human operator is provided with an interface visualizing the 3D map of the real environment, built from the 3D laser range data, as well as the 3D model of the real robot in the correct pose with respect to a global reference frame. The AR-based interface visualizes the motions of the robot overlaid onto the real environment, from different perspective views. The user interacts with a marker pen, changing its position on the 3D map, clicking on it or selecting an action event from a context menu assigned to it. The marker pen implements a function $\tau : [0, 1] \to \mathbb{R}^3 - \mathcal{O}$, which generates a sequence of points corresponding to the points of the trajectory curve. An action event in the context menu allows the operator to determine the starting point $\tau(0) = q_I$ of the trajectory curve. The marker pen is tracked by the interface which, in parallel, converts the movements of the human operator into the sequence of points $\tau = \{\tau(0), \ldots, \tau(1)\}$ and overlays this sequence onto the 3D map of the real environment. The operator can modify the drawn path by partially changing the sequence of points or by completely replanning the path. The operator commits the path by selecting the action from the context menu which assigns the current position of the marker pen on the 3D map to the ending point $\tau(1) = q_G$ of the trajectory curve.

Upon the commitment of the path, generating the trajectory of the robot requires specifying, for each point $q_i \in \tau$, the orientation $R(\phi) \in SO(3)$ of the virtual reference frame having origin in $q_i$, with respect to the global reference frame, as well as the velocity $\dot{q}_i$. Let $v_d \in \mathbb{R}$ be a constant specifying the velocity profile. Let $f \in \mathbb{R}$ be the minimum rate of the trajectory tracking controller. Given the path $\tau$, the AR-based interface generates the trajectory $\bar{\tau}$ as well as the velocities $\dot{q}_i$, for all $q_i \in \bar{\tau}$, according to the following function:

$$f_{3D} : \mathbb{R}^3 \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}^3 \times SO(3) \times \mathbb{R}^3 \qquad (10)$$

At the $i$th iteration, the function computes, for the pair of consecutive points $q_i, q_{i+1} \in \tau$, the Euclidean distance $d_i = \|q_{i+1} - q_i\|$. This distance is compared with the desired travelled distance $d_{i,d} = v_d \cdot \frac{1}{f}$. If $d_{i,d} \le d_i$, a new point $\bar{q}_i$ is added to the trajectory $\bar{\tau}$, at distance $d_{i,d}$ along the segment connecting $q_i$ to $q_{i+1}$, and $q_i$ is updated to $\bar{q}_i$. Otherwise, $q_i$ is not updated, a new pair of consecutive points on the path is considered, the desired travelled distance is updated to $d_{i+1,d} = v_d \cdot \frac{1}{f} - d_i$, and the new point is added in the $(i+1)$th iteration, according to the updated distance $d_{i+1,d}$. Moreover, for each point $\bar{q}_i \in \bar{\tau}$, the velocity is set to $\dot{q}_i = [v_d \cos(\varphi)\ \ v_d \sin(\varphi)\ \ 0]^T$, where $\varphi$ is the yaw angle defined by the vector $q_{i+1} - q_i$. This function generates the 3D trajectory, with the effect of smoothing the path to make it more feasible for the robot to follow.

However, in order to apply the proposed control strategy, a 2D representation of the trajectory $\bar{\tau}$ is required.

Algorithm 1: 2DtrajectoryGenerator
Data: τ, vd, f
Result: τ̄ (resampled waypoints), τ̇ (their velocities)
di := 0; di,d := 0; ∆d := 0; ϕ := 0;
qi := τ(0); qnew_i := τ(0);
changePoint := ⊥; i := 1;
while i < length(τ) do
    if changePoint then
        di,d := (vd · 1/f) − ∆d;
        di := getDistance(τ(i), qnew_i);
        ϕ := computeYawAngle(qnew_i, τ(i));
    else
        di,d := vd · 1/f;
        di := getDistance(τ(i), qi);
        ϕ := computeYawAngle(qi, τ(i));
        setYaw(qi, ϕ);
    end
    if di ≥ di,d then
        qtemp,x := qi,x + di,d · cos(ϕ);
        qtemp,y := qi,y + di,d · sin(ϕ);
        qtemp,z := qi,z;
        if changePoint then
            ϕ := computeYawAngle(qi, τ(i));
            setYaw(qi, ϕ);
        end
        q̇i,x := vd · cos(ϕ);
        q̇i,y := vd · sin(ϕ);
        q̇i,z := 0;
        add(τ̄, qi);
        add(τ̇, q̇i);
        qi := qtemp;
        changePoint := ⊥;
    else
        ∆d := ∆d + |di − di,d|;
        qnew_i := τ(i);
        changePoint := ⊤;
        i := i + 1;
    end
end

As the tracked vehicle is always constrained to the surface it is traversing, the 2D trajectory can be obtained from $\bar{\tau}$ by computing the following mapping:

$$f_{2D} : \mathbb{R}^3 \times SO(3) \times \mathbb{R}^3 \to \mathbb{R}^2 \times SO(2) \times \mathbb{R}^2 \qquad (11)$$

This function projects each waypoint $\bar{q}_i \in \bar{\tau}$ onto the 2D reference frame constrained to the traversed surface. In addition, for every point $\bar{q}_i$, it computes the heading reference $R(\varphi_i) \in SO(2)$ as well as the x-component and the y-component of the velocity $\dot{q}_i$. Algorithm 1 shows an implementation of the trajectory generator function in (10), integrating the mapping function in (11).
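To make the resampling step concrete, here is a compact sketch of the core of Algorithm 1 for the 3D case of Eq. (10), ignoring the SO(3) heading bookkeeping; the variable names are ours, and the carry-over of the residual distance mirrors the changePoint branch.

```python
import math

def generate_trajectory(path, v_d, f):
    """Core of Algorithm 1 (our names): resample the drawn path into
    waypoints spaced v_d/f apart, each paired with a planar velocity of
    magnitude v_d along the local segment yaw (Eq. 10)."""
    traj, vels = [], []
    q = list(path[0])
    need = v_d / f        # desired travelled distance for this cycle
    i = 1
    while i < len(path):
        seg = [path[i][k] - q[k] for k in range(3)]
        d = math.hypot(seg[0], seg[1])   # planar distance, a simplification
        if d >= need:
            yaw = math.atan2(seg[1], seg[0])
            q = [q[0] + need * math.cos(yaw),
                 q[1] + need * math.sin(yaw),
                 q[2]]                   # z kept, as in Algorithm 1
            traj.append(q)
            vels.append((v_d * math.cos(yaw), v_d * math.sin(yaw), 0.0))
            need = v_d / f               # reset the desired distance
        else:
            need -= d                    # carry the residual (like Delta d)
            q = list(path[i])
            i += 1
    return traj, vels
```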


Fig. 6. Robot control loop

VI. EXPERIMENTS

The capability to describe the navigation task using the proposed path planning system and the effectiveness of the trajectory tracking control method are validated by experiments set in real scenarios, with the BlueBotics UGV [30]. Figure 7(a) shows a 3D representation of the wheelchair ramp connecting the basement with the ground floor of a building. The colours of the point cloud specify the static traversability cost assessment of the extracted terrain [31], [32]. According to the assumed stability criteria, terrain points coloured red or yellow can be safely traversed, while green or blue terrain points do not guarantee the static stability of the robot. The wheelchair ramp is 50 m long, with an 8% slope. The human operator draws the path to be followed by the robot along the ramp, starting from the basement up to the courtyard at the ground floor (see the blue thick line in Figure 7(a)). The trajectory generator algorithm takes as input the path and the velocity profile and suitably samples a sequence of points, with their headings, representing the desired trajectory. Once the trajectory has been generated, the controller computes the input velocity commands of the robot to track the reference trajectory. Figure 7(b) and Figure 7(c) report the convergence of the Cartesian trajectory error $e_p(t)$ during the navigation task, under two different settings of the velocity profile of the trajectory, $v_d = 0.1$ m/s and $v_d = 0.25$ m/s, respectively. In both these experiments the gains $k_1$ and $k_2$ of the controller were set to 0.5 and the distance $b$ of the point $B$ from the center of the robot to 0.2 m.

Figure 7(d) shows the 3D representation of an area of 100 m², with rubble, stairs, gaps and ramps. Supported by the path planning interface, the human operator identifies a possible route through the rubble and instructs the tracked vehicle to autonomously follow the path. The gains $k_1$ and $k_2$ of the controller were set to 0.3, the velocity profile $v_d$ to 0.25 m/s and the distance $b$ of the point $B$ from the center of the robot to 0.2 m. In this experiment the skid steering of the vehicle resulted in slippage between the tracks and the terrain as well as track-soil shearing, making it difficult to predict the exact motion of the vehicle on the basis of the track velocities. Therefore, the accumulated positioning error affected the estimation of the robot pose provided by the dead reckoning navigation system. Figure 7(e) and Figure 7(f) show how the linear and angular velocity commands, given as inputs to the robot, affected both the variation of the norm $\|e_p(t)\|$ of the Cartesian error and the variation of the orientation error $e_\theta(t)$ of the controller tracking the path along the slippery terrain. The error in the estimation of the real pose of the robot increases as the input velocities increase.

Fig. 7. Trajectories followed (blue thick lines) by the robot along a wheelchair ramp (a) and along slippery terrain (d), respectively. Colours in the point cloud represent the static traversability cost assessment of the extracted terrain. (b) and (c) Cartesian trajectory error $e_p(t)$ of the controller in following the wheelchair ramp, subjected to two different velocity profiles $v_d$. (e) Variation of the norm $\|e_p(t)\|$ of the Cartesian trajectory error in following the trajectory along the slippery terrain. (f) Variation of the orientation error $\Delta e_\theta(t)$ of the controller tracking the path along the slippery terrain.

VII. CONCLUSION

In this paper we proposed a framework for trajectory planning and control of tracked vehicles in rescue environments, based on Augmented Reality (AR). This framework provided an effective solution for closing the loop between perception, motion planning and control for tracked vehicles. We proposed an AR-based approach for both 3D path planning and obstacle negotiation, which enabled the human operator to become part of the loop. The experiments, performed in real rough environments, demonstrated the applicability of this framework.

VIII. ACKNOWLEDGMENTS

The work reported in this paper was supported by the EU FP7 Integrated Project NIFTi, Contract No. 247870.

REFERENCES

[1] A. Davids, "Urban search and rescue robots: from tragedy to technology," Int. Sys., vol. 17, no. 2, pp. 81–83, 2002.

[2] C. H. Lee, S. H. Kim, S. C. Kang, M. S. Kim, and Y. K. Kwak, "Double-track mobile robot for hazardous environment applications," Adv. Rob., vol. 17, no. 5, pp. 447–459, 2003.

[3] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.

[4] K. Konolige, M. Agrawal, R. Bolles, C. Cowan, M. Fischler, and B. Gerkey, "Outdoor mapping and navigation using stereo vision," in Exp. Robotics, 2008, vol. 39, pp. 179–190.

[5] J.-S. Gutmann, M. Fukuchi, and M. Fujita, "3D perception and environment map generation for humanoid robot navigation," Int. Jour. Robotic Res., vol. 27, no. 10, pp. 1117–1134, 2008.


[6] B. Morisset, R. B. Rusu, A. Sundaresan, K. Hauser, M. Agrawal, J.-C. Latombe, and M. Beetz, "Leaving flatland: toward real-time 3D navigation," in ICRA, 2009, pp. 3384–3391.

[7] R. Manduchi, A. Castano, A. Talukder, and L. Matthies, "Obstacle detection and terrain classification for autonomous off-road navigation," Aut. Rob., vol. 18, no. 1, pp. 81–102, 2005.

[8] C. Dornhege and A. Kleiner, "Behavior maps for online planning of obstacle negotiation and climbing on rough terrain," in IROS, 2007, pp. 3005–3011.

[9] J. Y. Wong, Theory of Ground Vehicles. Wiley, 2001.

[10] Z. Shiller, W. Serate, and M. Hua, "Trajectory planning of tracked vehicles," in ICRA, vol. 3, 1993, pp. 796–801.

[11] J. L. Martínez, A. Mandow, J. Morales, S. Pedraza, and A. García-Cerezo, "Approximating kinematics for tracked mobile robots," Int. Jour. Robotic Res., vol. 24, no. 10, pp. 867–878, 2005.

[12] D. Endo, Y. Okada, K. Nagatani, and K. Yoshida, "Path following control for tracked vehicles based on slip-compensating odometry," in IROS, 2007, pp. 2871–2876.

[13] R. Murphy, "Human-robot interaction in rescue robotics," IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 34, no. 2, pp. 138–153, 2004.

[14] G.-J. Kruijff, M. Janicek, S. Keshavdas, B. Larochelle, H. Zender, N. Smets, T. Mioch, M. Neerincx, J. van Diggelen, F. Colas, M. Liu, F. Pomerleau, R. Siegwart, V. Hlavac, T. Svoboda, T. Petricek, M. Reinstein, K. Zimmermann, F. Pirri, M. Gianni, P. Papadakis, A. Sinha, P. Balmer, N. Tomatis, R. Worst, T. Linder, H. Surmann, V. Tretyakov, S. Corrao, S. Pratzler-Wanczura, and M. Sulk, "Experience in system design for human-robot teaming in urban search & rescue," in FSR, 2012.

[15] J. Chen, E. Haas, and M. Barnes, "Human performance issues and user interface design for teleoperated robots," IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 6, pp. 1231–1245, 2007.

[16] D. Labonte, P. Boissy, and F. Michaud, "Comparative analysis of 3-D robot teleoperation interfaces with novice users," IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 5, pp. 1331–1342, 2010.

[17] J. Drury, J. Scholtz, and H. Yanco, "Awareness in human-robot interactions," in IEEE Int. Conf. on Systems, Man and Cybernetics, vol. 1, 2003, pp. 912–918.

[18] J. Scholtz, J. Young, J. Drury, and H. Yanco, "Evaluation of human-robot interaction awareness in search and rescue," in ICRA, vol. 3, 2004, pp. 2327–2332.

[19] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control. Springer, 2009.

[20] F. Pomerleau, M. Liu, F. Colas, and R. Siegwart, "Challenging data sets for point cloud registration algorithms," Int. Jour. Rob. Res., vol. 31, no. 14, pp. 1705–1711, 2012.

[21] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, "Comparing ICP variants on real-world data sets," Aut. Rob., vol. 34, no. 3, pp. 133–148, April 2013.

[22] A. Tayebi, S. McGilvray, A. Roberts, and M. Moallem, "Attitude estimation and stabilization of a rigid body using low-cost sensors," in IEEE Conf. on Decision and Control, 2007.

[23] D. Cao, Q. Qu, C. Li, and C. He, "Research of attitude estimation of UAV based on information fusion of complementary filter," in ICCIT, 2009, pp. 1290–1293.

[24] V. Kubelka and M. Reinstein, "Complementary filtering approach to orientation estimation using inertial sensors only," in ICRA, 2012.

[25] D.-H. Kim and J.-H. Oh, "Tracking control of a two-wheeled mobile robot using input-output linearization," Control Engineering Practice, vol. 7, no. 3, pp. 369–373, 1999.

[26] M. Zaeh and W. Vogl, "Interactive laser-projection for programming industrial robots," in ISMAR, 2006, pp. 125–128.

[27] S. A. Green, M. Billinghurst, X. Chen, and G. J. Chase, "Human-robot collaboration: A literature review and augmented reality approach in design," Int. Jour. of Adv. Rob. Sys., vol. 5, no. 1, pp. 1–18, 2008.

[28] J. W. S. Chong, S. K. Ong, A. Y. C. Nee, and K. Youcef-Toumi, "Robot programming using augmented reality: An interactive method for planning collision-free paths," Robotics and Computer-Integrated Man., vol. 25, no. 3, pp. 689–701, 2009.

[29] M. Gianni, F. Ferri, and F. Pirri, "ARE: Augmented reality environment for mobile robots," in TAROS, 2013.

[30] BlueBotics. (2012) Absolem - surveillance & rescue. [Online]. Available: http://www.bluebotics.com/mobile-robotics/absolem

[31] T. Stoyanov, M. Magnusson, H. Andreasson, and A. Lilienthal, "Path planning in 3D environments using the normal distributions transform," in IROS, 2010, pp. 3263–3268.

[32] P. Papadakis and F. Pirri, "3D mobility learning and regression of articulated, tracked robotic vehicles by physics-based optimization," in VRIPHYS, 2012, pp. 147–156.