
Efficient Representation in 3D Environment Modeling for Planetary Robotic Exploration

Narunas Vaskevicius, Andreas Birk, Kaustubh Pathak, Sören Schwertfeger

Jacobs University, Campus Ring 1, 28759 Bremen, Germany

http://robotics.jacobs-university.de, {n.vaskevicius, a.birk}@jacobs-university.de

Abstract

Good situation awareness is an absolute must when operating mobile robots for planetary exploration. 3D sensing and modeling data gathered by the robot are hence crucial for the operator. But standard methods based on stereo vision have their limitations, especially in scenarios where there is no or only very limited visibility, e.g., due to extreme light conditions. 3D Laser Range Finders (3D-LRF) provide an interesting alternative, especially as they can provide very accurate, high resolution data at very high sampling rates. But the more 3D range data is acquired, the harder it becomes to transmit the data to the operator station. Here, a fast and robust method to fit planar surface patches into the data is presented. The usefulness of the approach is demonstrated in two different sets of experiments. The first set is based on data from our participation at the ESA Lunar Robotics Challenge 2008. The second one is based on data from a Velodyne 3D-LRF in a high fidelity simulation with ground truth data from Mars.

keywords: space robotics, planetary exploration, 3D mapping, surface representation, plane fitting

published in: Advanced Robotics, Vol. 24, Iss. 8-9, Brill, 2010

1 Introduction

The Mars Exploration Rover (MER) missions [1][2][3][4] are an impressive demonstration of the possibilities of using mobile robots for planetary exploration. Though intelligent autonomous functionalities onboard the robots play an important role, especially for locomotion control [5][6][7][8][9], a significant amount of human control in the form of advanced teleoperation is still needed for operating the systems [10]. In doing so, it is of high importance that the operators have an excellent situation awareness through 3D representations of the environment, typically based on stereo vision [11][12][13][14][15] or other vision based techniques [16][17], possibly in combination with structured laser light [18].

Stereo vision has its limitations for planetary exploration, especially in planetary settings with few features [19]. Furthermore, vision techniques are in general challenged by darkness or other extreme lighting conditions. There is hence an interest in additionally exploring alternative technologies for robotic planetary exploration like 3D Laser Range Finders (3D-LRF) [20], which as further benefits provide a large field of view and quite precise, high resolution data.

But what in principle is a significant advantage, namely the provision of large amounts of high resolution 3D data, is also a drawback when it comes to the processing of the data and, in particular, its transmission to the operator's station on Earth. Here, a way out of this dilemma is presented. Based on our own previous work [21], it is shown how large planar surface patches can be fitted in a fast and robust manner into 3D range point clouds, hence significantly reducing the amount of data. In doing so, potential noise in the sensor is compensated [22] without the need for advanced sensor models of the laser range finder [23][24].


There is a tremendous amount of different map representations, including for example occupancy grids [25], correspondence graphs [26], elevation maps [12][27] or behavior based representations [28]. In this article, we focus on full 3D representations, for which point clouds are the predominant form of representation [29, 30, 31, 32, 33, 34] for two main reasons. First, point clouds, i.e., sets of coordinates of surface points, are the prototypical raw data delivered by 3D range sensors. Second, point clouds are the main basis for state of the art 3D perception and mapping algorithms, which, as also pointed out in [35], are usually based on the iterative closest point (ICP) algorithm [36] operating on point clouds.

Figure 1: The two Jacobs robots during the ESA Lunar Robotics Challenge 2008. On the left, the relay robot with a 3D Laser Range Finder to generate models of the environment. In the middle, the probe robot to take soil samples in the crater. On the right, the robots heading for the crater rim during the actual ESA Lunar Robotics Challenge, which took place at night with simulated light conditions as found in the polar regions of the Moon.

Figure 2: The setup of the ESA Lunar Robotics Challenge features a typical scenario where 3D sensing and modeling is highly desirable but vision based approaches are infeasible. The crater that is to be explored is in complete darkness and RF-based communication from the lander is not possible. A communication relay robot can hence, in addition to serving as a relay, model the environment to ease the operation of the robot exploring the crater.

Our approach is validated in two different planetary exploration experiments. The first set of experiments is based on data collected at the Lunar Robotics Challenge (LRC) [37] of the European Space Agency (ESA) in October 2008 in the volcanic landscape of the Teide National Park on the island of Tenerife. Two robots of Jacobs University (figure 1) participated in this field test, based on a scenario where vision based approaches to environment modeling are infeasible due to the light conditions (figure 1, on the right). The motivation for the ESA LRC is the scientific interest in searching for the presence of water on the Moon, especially at the bottom of craters in the polar regions. For this purpose, a probe robot has to get from a landing site to the crater, descend into it, take samples of interesting soil spots at the bottom, climb out of the crater, and return to the lander, where the soil sample has to be delivered for an automated analysis. This leads to several particular challenges (figure 2). First, the crater is steep and loose soil makes locomotion challenging. Second, there is no RF-communication from the lander, which provides the communication link to the operator station on Earth, into the crater; hence direct teleoperation is not possible. Third, there is very bright, horizontal illumination by sunlight at the top and the rim of the crater and absolute darkness inside of it, making teleoperation via video-streams extremely hard and vision based environment modeling impossible.

The Jacobs team hence used a second robot in addition to the probe robot. This second robot serves as communication relay between the lander and the probe robot and furthermore as 3D environment modeler. For this purpose, the robot is equipped with an actuated laser range finder (LRF). The sensor is based on a commercial 2D LRF of the type SICK S300 combined with a simple servo for a pitching motion providing the additional degree of freedom. The SICK S300 has a horizontal field of view of 270° covered by 541 beams. The servo allows a maximum motion from −90° to +90° in steps of 0.5°. This gives a 3D point cloud with a total of 541 × 361 = 195,301 points per scan at the highest resolution. The maximum range of the sensor is about 20 meters. The time to take one full scan at maximum resolution is about Tscan ≈ 32 seconds. This scan time is the main bottleneck for the environment modeling. But as shown in the second experiment, there are more costly off-the-shelf 3D-LRF alternatives that provide even higher resolution 3D data at significantly faster rates.
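To make the scan geometry concrete, the following minimal Python sketch (our illustration, not the authors' code; the axis conventions and the function name are assumptions) converts a single range reading of the actuated SICK S300 into a 3D point in the sensor frame.

import numpy as np

def beam_to_point(rho, beam_idx, pitch_deg, n_beams=541, fov_deg=270.0):
    # Yaw of the beam within the 270 degree scan plane of the SICK S300.
    yaw = np.deg2rad(-fov_deg / 2.0 + beam_idx * fov_deg / (n_beams - 1))
    pitch = np.deg2rad(pitch_deg)
    # Beam direction in the un-pitched scan plane.
    d = np.array([np.cos(yaw), np.sin(yaw), 0.0])
    # Rotation of the scan plane by the servo pitch about the y-axis.
    R = np.array([[np.cos(pitch), 0.0, np.sin(pitch)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(pitch), 0.0, np.cos(pitch)]])
    return rho * (R @ d)

# A full scan at 0.5 degree pitch steps yields 541 x 361 = 195,301 points.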

Figure 3: Planetary exploration in the Unified System for Automation and Robotics Simulator (USARsim). The simulator features artificially generated planetary surface types as well as the option to import ground truth models from space missions, which is used for the experiments here.

In a second set of experiments, the Unified System for Automation and Robot Simulation (USARsim) [38] is used, where a high fidelity simulation of a high end 3D LRF is available for testing purposes. Furthermore, we have imported ground truth data from Mars to model the experimentation environment (figure 3). USARsim is a high fidelity simulator built on top of the Unreal Tournament [39] game engine. Its features include a commercial physics engine (Karma [40]) and a real-time, three-dimensional visualization engine. It is important that the components and models in USARsim have been tested for their physical fidelity [41, 42, 43, 44]. USARsim has already proven to be useful for the investigation of planetary exploration scenarios, including the usage of models based on ground truth data from space missions. One example is the terrain of the Eagle crater on Mars. The underlying environment model is based on ground truth data from the Mars Exploration Rover (MER) mission data archives [45]. The according models were used in USARsim, for example, to investigate terrain classification and autonomous robot behaviors for planetary exploration [46][47].

In this article, we are interested in high resolution 3D range data that can ideally be acquired as fast as possible. Hence the simulation of a Velodyne HDL-64E is used in the second set of experiments. This sensor is particularly interesting as it can be considered to be at the top of the state of the art of 3D sensing, as for example shown by its usage by different teams in the DARPA Grand Challenge [48, 49]. The Velodyne HDL-64E has 64 lasers, a 360° horizontal field of view (0.09° angular resolution), a 26.8° vertical field of view (from +2° up to −24.8° down with 64 equally spaced angular subdivisions) and produces more than 1.3M points per second with a maximum range of 50 to 120 meters depending on the reflectivity of the surface [50].

The rest of the article is organized as follows: in sec. 2, it is shown how approximate estimates of the plane parameters and their uncertainty can be calculated from 3D measurements with radial Gaussian noise. In sec. 3, a real-time algorithm for range image segmentation is presented, which is followed by a discussion in sec. 4 of the memory efficient representation of the extracted planar surfaces. The experimental results are provided in sec. 5, where we mainly concentrate on runtimes, compression rates and the visual quality of the created 3D models. Finally, conclusions are discussed in sec. 6.

2 Estimation of Plane Parameters and their Uncertainty

A general idea of an analytical solution to the Approximate Least Squares Problem (ALSP), formulated for optimal plane fitting under the assumption of radial Gaussian noise, is presented here. It is based on our own previous work [22], [21]. In this article, we concentrate on the efficient representation of single 3D scans by our plane fitting technique and its application in planetary exploration scenarios. Please note that this plane based representation has been shown to be a suitable basis for 6 Degree of Freedom registration of scans [51], which allows full 3D Simultaneous Localization and Mapping (SLAM) in unstructured domains, as already shown in the case of disaster scenarios [52].

The equation of a plane is n · r = d, where n is the plane's unit normal and d the distance to the origin. Assume that the sensor returned a point cloud r_j = ρ_j m_j, j = 1 … N, where m_j are the measurement directions of the sensor, usually accurately known, and ρ_j are the respective ranges, which are noisy. We make the assumption that ρ_j ∼ N(ρ̄_j, σ²(ρ̄_j, n · m_j)), where ρ̄_j = d/(n · m_j) is the true range of the j-th measurement. Therefore the likelihood of the plane parameters (n, d) given range sample ρ_j along measurement direction m_j is

p(\rho_j \mid n, d, m_j) = \frac{1}{\sqrt{2\pi}\,\sigma(\bar{\rho}_j, n \cdot m_j)} \exp\left( -\frac{\left(\rho_j - d/(n \cdot m_j)\right)^2}{2\,\sigma^2(\bar{\rho}_j, n \cdot m_j)} \right)  (1)

For commonly available 3D sensors like the Swiss-Ranger and the Laser Range Finder (LRF), the standard deviation is modeled as ([53], [54], [23])

\sigma(\bar{\rho}_j, n \cdot m_j) = \frac{\sigma(\bar{\rho}_j)}{|n \cdot m_j|}, \qquad \sigma(\bar{\rho}_j) \triangleq \kappa \bar{\rho}_j^2  (2)

where n is the local normal of the surface the point r_j lies on. The coefficient κ > 0 can be estimated by doing initial calibration experiments with the sensor. From equations (1) and (2) we get the following log-likelihood function:

L = K - \sum_{j=1}^{N} \log\frac{\sigma(\bar{\rho}_j)}{|n \cdot m_j|} - \frac{1}{2} \sum_{j=1}^{N} \frac{\left[(n \cdot m_j)\,\rho_j - d\right]^2}{\sigma^2(\bar{\rho}_j)}  (3)

L has to be maximized w.r.t. n, d. This cannot be handled analytically, especially as σ is a function of d/(n · m_j); therefore we make the assumption σ(ρ̄_j) ≈ σ(ρ_j). We define σ_j ≜ σ(ρ_j) and note that σ_j is now no longer a function of ρ̄_j and hence of n and d. Using Eq. (3) and ignoring the constant terms together with the sum of logarithms, we get the ALSP formulation

\max_{n,d} L_{\text{ALSP}} = -\frac{1}{2} \sum_{j=1}^{N} \frac{(n \cdot r_j - d)^2}{\sigma_j^2}  (4)

Now we have a constrained optimization problem, which can be solved using Lagrange multipliers. Defining the Lagrangian

L_1 = -\frac{1}{2} \sum_{j=1}^{N} \frac{(n \cdot r_j - d)^2}{\sigma_j^2} + \lambda\,(n^T n - 1)  (5)


and maximizing it gives us the following estimates:

d^* = n^{*T} \left( \frac{\sum_{j=1}^{N} \frac{1}{\sigma_j^2}\, r_j}{\sum_{j=1}^{N} \frac{1}{\sigma_j^2}} \right) = n^{*T} r_G,  (6)

\lambda^* = \frac{1}{2} \sum_{j=1}^{N} \frac{n^{*T}(r_j - r_G)\,(n^{*T} r_j)}{\sigma_j^2}  (7)

By substituting d^* in (4) we find that n^* is the eigenvector corresponding to the smallest eigenvalue of the positive semi-definite weighted scatter matrix M = \sum_{j=1}^{N} \frac{1}{\sigma_j^2} (r_j - r_G)(r_j - r_G)^T.

Now the Hessian of the log-likelihood function at d^*, n^* and λ^* can be evaluated:

H = \begin{bmatrix} \frac{\partial^2 L}{\partial n^2} & \frac{\partial^2 L}{\partial d \partial n} \\ \left(\frac{\partial^2 L}{\partial d \partial n}\right)^T & \frac{\partial^2 L}{\partial d^2} \end{bmatrix} = \begin{bmatrix} H_{nn} & H_{nd} \\ H_{nd}^T & H_{dd} \end{bmatrix}  (8)

Evaluating H at n^*, d^*, λ^* (denoted H^*) gives

H^*_{nn} = -\sum_{j=1}^{N} \frac{r_j r_j^T}{\sigma_j^2} + \left[ \sum_{j=1}^{N} \frac{n^{*T}(r_j - r_G)\,(n^{*T} r_j)}{\sigma_j^2} \right] I_3  (9)

H^*_{nd} = \sum_{j=1}^{N} \frac{r_j}{\sigma_j^2}, \qquad H^*_{dd} = -\sum_{j=1}^{N} \frac{1}{\sigma_j^2}

Then the covariance matrix is [55]

C(n^*, d^*) = -(H^*)^+  (10)

H^* has a zero eigenvalue in the direction of (n^{*T}, d^*)^T; therefore the Moore-Penrose generalized inverse has to be used. This property of the Hessian is discussed in [22], [56].
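As an illustration of the closed-form solution above, the following short Python sketch (our own, not the paper's implementation; numpy is assumed) computes (n∗, d∗) as the eigenvector of the weighted scatter matrix M with the smallest eigenvalue, following Eq. (6).

import numpy as np

def fit_plane(points, sigmas):
    # points: (N, 3) array of measurements r_j; sigmas: (N,) std devs sigma_j.
    w = 1.0 / sigmas ** 2                          # weights 1 / sigma_j^2
    r_g = (w[:, None] * points).sum(0) / w.sum()   # weighted centroid r_G
    diff = points - r_g
    M = np.einsum('j,ja,jb->ab', w, diff, diff)    # weighted scatter matrix M
    evals, evecs = np.linalg.eigh(M)               # eigenvalues in ascending order
    n = evecs[:, 0]                                # eigenvector of smallest eigenvalue
    d = float(n @ r_g)                             # d* = n*^T r_G, Eq. (6)
    if d < 0.0:                                    # enforce the convention d >= 0
        n, d = -n, -d
    return n, d, evals[0]

The symmetric eigendecomposition (eigh) is a natural fit here, since M is a symmetric positive semi-definite 3 x 3 matrix and its smallest eigenvalue is needed anyway for the Hessian in Eq. (15) below.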

3 Range Image Segmentation

In this section, the principle of our algorithm for identifying regions of points that lie on one plane is presented. It extends the work presented in [57] by reformulating the underlying mathematics into an incremental version, which allows a highly efficient implementation. The algorithm proceeds as follows. We select a point r1 and its nearest neighbor r2 from the point cloud data PC. For the selection, we can use random sampling or the most planar place, where the flatness at a point is evaluated by the mean square error of the local optimal plane. This is our initial set of points, the region Π (Algorithm 1, lines 4-5). Then we try to extend this region by considering points at increasing distance from the set Π. Now suppose a point r′ is such that the distance between it and the region is less than the distance δ. Then, if the mean square error (MSE) to the optimal plane Ω of the region Π ∪ {r′} is less than ε, and if the distance between the new point and the optimal plane Ω is less than γ, then r′ is added to the current region Π. We grow this region until no points can be added (Algorithm 1, lines 6-11). Afterwards, if the region size is more than θ, we add it to the set of regions R; else we treat those points as unexplained and add them to the set R′ (Algorithm 1, lines 12-16). This is repeated until each point from PC is either in R or in R′.

The approach for region identification can deal with local noise and is therefore suitable for the data produced by fast but error-prone 3D range sensors. The complexity of the model and the sensitivity to noise can be controlled using the aforementioned parameters (δ, ε, γ, θ). The meaning and impact of the parameters is quite intuitive; however, selecting the right parameters can play a crucial role in the production of a good model. Fortunately, a single calibration step is sufficient, i.e., the parameters can be tuned for one sensor on a single point cloud and can then be used from then on without adaptation.


The high level formulation of Algorithm 1 is almost the same as in [57]. However, a naive implementation of it would lead to a high computational complexity and unreasonable runtimes even for small point clouds with a size of around 10^4 points. Here, an incremental approach is presented, which is discussed in the implementation section 3.1.

From the theory in section 2 we know that calculating the optimal fit for a set of 3D points r_j = (x_j, y_j, z_j)^T, j = 1 … k, requires finding the eigenvector n^* corresponding to the smallest eigenvalue of the matrix

M_k = \sum_{j=1}^{k} \frac{1}{\sigma_j^2} (r_j - r_G)(r_j - r_G)^T  (11)

where r_G is the weighted center of gravity of the data, Eq. (6). This operation is crucial, as it has to be performed whenever a new point is investigated; therefore it should be efficient. The numerically optimized methods of the GNU Scientific Library (GSL) are used for this purpose.

Algorithm 1 Range Image Segmentation
 1: R ← ∅
 2: R′ ← ∅
 3: while ( PC \ (R ∪ R′) ≠ ∅ ) do
 4:   select points r1 and r2 in PC \ (R ∪ R′)
 5:   Π ← {r1, r2}
 6:   while ( new point can be found ) do
 7:     select nearest neighbor r′ with distance d(Π, r′) < δ
 8:     if ( MSE(Π ∪ {r′}) < ε && d(Ω, r′) < γ ) then
 9:       Π ← Π ∪ {r′}
10:     end if
11:   end while
12:   if ( size(Π) > θ ) then
13:     R ← R ∪ Π
14:   else
15:     R′ ← R′ ∪ Π
16:   end if
17: end while

3.1 Efficient Implementation

As mentioned before, it would be very time consuming to use a naive implementation of the region growing in Algorithm 1. Here, we introduce how its computational complexity is minimized.

3.1.1 Incremental optimal plane and its error update

Suppose the matrix M_k were calculated from scratch every time a new point is added to the region. This would mean that one would need to traverse every point in the current region, leading to a huge overhead. Here, a way to incrementally update the matrix M_k and its mean square error is presented, which takes previous calculations into account.

Let us define three variables which describe the state of k points r_j:

• the sum of all weights: w_k = \sum_{j=1}^{k} \frac{1}{\sigma_j^2}

• the weighted sum of all points: s_k = \sum_{j=1}^{k} \frac{r_j}{\sigma_j^2}

• the weighted sum of all outer product matrices: P_k = \sum_{j=1}^{k} \frac{r_j r_j^T}{\sigma_j^2}

We can express the matrix M_k and the mean square error MSE_k using these state variables in the following way:

M_k = \sum_{j=1}^{k} \frac{1}{\sigma_j^2} (r_j - r_G)(r_j - r_G)^T
    = \sum_{j=1}^{k} \frac{r_j r_j^T}{\sigma_j^2} - r_G \sum_{j=1}^{k} \frac{r_j^T}{\sigma_j^2} - \left( \sum_{j=1}^{k} \frac{r_j}{\sigma_j^2} \right) r_G^T + r_G r_G^T \sum_{j=1}^{k} \frac{1}{\sigma_j^2}
    = \sum_{j=1}^{k} \frac{r_j r_j^T}{\sigma_j^2} - r_G \sum_{j=1}^{k} \frac{r_j^T}{\sigma_j^2}
    = P_k - \frac{s_k s_k^T}{w_k},  (12)

MSE_k = \frac{1}{w_k} \sum_{j=1}^{k} \frac{1}{\sigma_j^2} (n_k^{*T} r_j - d_k^*)^2
      = \frac{1}{w_k}\, n_k^{*T} \left( \sum_{j=1}^{k} \frac{r_j r_j^T}{\sigma_j^2} \right) n_k^* - \frac{2 d_k^*}{w_k}\, n_k^* \cdot s_k + d_k^{*2}
      = \frac{1}{w_k}\, n_k^{*T} P_k\, n_k^* - \frac{2 d_k^*}{w_k}\, n_k^* \cdot s_k + d_k^{*2}.  (13)

Now suppose we want to add a new point r_{k+1}. We can easily update the state variables:

w_{k+1} = w_k + \frac{1}{\sigma_{k+1}^2}, \qquad s_{k+1} = s_k + \frac{r_{k+1}}{\sigma_{k+1}^2}, \qquad P_{k+1} = P_k + \frac{r_{k+1} r_{k+1}^T}{\sigma_{k+1}^2}  (14)

From equations (12), (13) and (14) we get M_{k+1} and MSE_{k+1}. In other words, we have updated M_{k+1} and MSE_{k+1} indirectly by updating the variables s_k, w_k and P_k. Please note that for MSE_{k+1}, the new normal vector n^*_{k+1} and distance d^*_{k+1} have to be calculated based on the matrix M_{k+1}.
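A minimal sketch of this incremental scheme (our own illustration, following Eqs. (12) to (14); the class and method names are hypothetical): the state (w_k, s_k, P_k) is updated in constant time per point, and M_k, (n∗_k, d∗_k) and MSE_k are recomputed from it without revisiting earlier points.

import numpy as np

class IncrementalPlane:
    # State variables w_k, s_k, P_k of Eqs. (12)-(14).
    def __init__(self):
        self.w = 0.0
        self.s = np.zeros(3)
        self.P = np.zeros((3, 3))

    def add_point(self, r, sigma):
        # O(1) state update, Eq. (14).
        w_j = 1.0 / sigma ** 2
        self.w += w_j
        self.s += w_j * r
        self.P += w_j * np.outer(r, r)

    def refit(self):
        # M_k = P_k - s_k s_k^T / w_k, Eq. (12).
        M = self.P - np.outer(self.s, self.s) / self.w
        evals, evecs = np.linalg.eigh(M)
        n = evecs[:, 0]                      # new optimal normal n*_k
        d = float(n @ (self.s / self.w))     # d*_k = n*^T r_G
        # MSE_k from the state variables, Eq. (13).
        mse = (n @ self.P @ n) / self.w - 2.0 * d * (n @ self.s) / self.w + d * d
        return n, d, mse, evals[0]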

Note that the Hessian matrix, Eq. (9), can be efficiently evaluated during the optimal fit calculation in the following way. Suppose we have a set of k points r_j with the state P_k, s_k, w_k and the estimate (n^{*T}, d^*)^T, where n^* is the eigenvector corresponding to the smallest eigenvalue λ of the matrix M_k, Eq. (11). Then it can be shown that

H^* = \begin{bmatrix} H^*_{nn} & H^*_{nd} \\ H^{*T}_{nd} & H^*_{dd} \end{bmatrix}
    = \begin{bmatrix} -\sum_{j=1}^{k} \frac{r_j r_j^T}{\sigma_j^2} + \lambda I_3 & \sum_{j=1}^{k} \frac{r_j}{\sigma_j^2} \\ \sum_{j=1}^{k} \frac{r_j^T}{\sigma_j^2} & -\sum_{j=1}^{k} \frac{1}{\sigma_j^2} \end{bmatrix}
    = \begin{bmatrix} -P_k + \lambda I_3 & s_k \\ s_k^T & -w_k \end{bmatrix}  (15)

As we always keep track of the state P_k, s_k, w_k, and the eigenvalue λ is calculated during the optimal fitting anyway, we need only a few extra operations to find the Hessian matrix, which can then be used to obtain the covariance matrix, Eq. (10).
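Continuing the sketch above, assembling H∗ from the tracked state as in Eq. (15) and inverting it via the Moore-Penrose pseudoinverse, Eq. (10), takes only a few lines (again our own hedged illustration):

import numpy as np

def plane_covariance(P, s, w, lam):
    # H* assembled from the state (P_k, s_k, w_k) and the smallest
    # eigenvalue lambda of M_k, Eq. (15).
    H = np.empty((4, 4))
    H[:3, :3] = -P + lam * np.eye(3)
    H[:3, 3] = s
    H[3, :3] = s
    H[3, 3] = -w
    # C(n*, d*) = -(H*)^+, Eq. (10); the pseudoinverse is needed because
    # H* has a zero eigenvalue in the direction of (n*, d*)^T.
    return -np.linalg.pinv(H)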

3.1.2 Finding nearest neighbors

An important part of the region growing algorithm is the nearest neighbor selection for the current region (Algorithm 1, line 7), which is performed every time the inner loop starts. For this, either a priority queue or a FIFO (First In, First Out) queue can be employed. In the former case, a priority queue Q with the minimum distance on top should be used. Whenever a new point is added to the region, its k unvisited nearest neighbors nb_t, t = 1 … k (from its Moore neighborhood in the range image) are investigated, and if the distance d(r, nb_t) < δ then nb_t is added to the priority queue. This allows extracting the nearest neighbor of the region simply by calling Q.TOP().

As the distances in the priority queue are bounded by the parameters δ and γ, they lie in a relatively small range. Under the assumption that the variation of the distances in this range is insignificant, we can justify the usage of a FIFO queue. We observed that the priority queue is more advantageous when random selection of the region seed is used. In the case where the initial point is selected based on the flatness criterion¹, a simple FIFO queue showed better performance. The latter combination was used in the experiments of this paper.

¹The local flatness of a point is evaluated by fitting a plane in a small window around the point and calculating the mean square error of the optimal fit. This is done in a preprocessing step.
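The two frontier strategies can be sketched in a few lines of Python (a toy illustration, not the authors' implementation):

import heapq
from collections import deque

# Priority queue frontier: nearest candidate (smallest distance) first.
frontier_pq = []
heapq.heappush(frontier_pq, (0.04, (12, 7)))   # (distance to region, pixel)
dist, pixel = heapq.heappop(frontier_pq)

# FIFO frontier: approximate nearest neighbor in O(1), adequate when all
# queued distances lie in the narrow band bounded by delta and gamma.
frontier_fifo = deque()
frontier_fifo.append((12, 7))
pixel = frontier_fifo.popleft()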


3.1.3 Algorithm complexity

All optimizations described in the previous sections lead to a very fast algorithm. Suppose we have point cloud data of size n. All operations inside the loops are performed in constant time except the nearest neighbor search for the current region, which has logarithmic complexity when the priority queue is used. Note that in the case of a FIFO queue, the approximate nearest neighbor is obtained in constant time. Now assume that only few points lie on the intersections of the planes, which means that most of the points will be visited only once. Then the complexity of the algorithm can be considered to be O(n · log(n)) in the case of the priority queue and O(n) in the case of the FIFO queue. The memory complexity is clearly linear, O(n).

4 Finding Polygonal Representations of Planar Surfaces

This is the part where most of the compression of the original point cloud can be achieved. Once the range image is segmented, the points are projected onto their optimal planes, thus reducing the representation dimension to 2, i.e., each segment can be represented by the parameters (n∗, d∗)T, H∗ ² of the optimal plane and a set of 2D points defined in the plane's frame.

Furthermore, we note that the most important part per segment is the boundary of its 2D set of points. As the neighborhood information of the points is preserved from the pixel space of the range image to the Cartesian space, the boundary of a segment is the same in both spaces. Finding the border pixels of a segment in a range image is straightforward: one just needs to check the Moore neighborhood of range 1 for each pixel in the segment. If all neighbors of a pixel are from the same segment, then it is an inner pixel; otherwise it is a boundary pixel. Taking the corresponding Cartesian coordinates of the border pixels gives us the outline of the planar patch. The inner points are redundant and can be removed. Note that the outline usually consists of several components: one outer boundary and several boundaries of inner holes.
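A direct, unoptimized sketch of this boundary test (our own illustration; labels is assumed to be the segmented range image with one integer segment label per pixel):

import numpy as np

def boundary_mask(labels):
    # labels: (H, W) integer segment label per range image pixel.
    h, w = labels.shape
    border = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    # Out-of-image neighbors and neighbors from another
                    # segment both mark (y, x) as a boundary pixel.
                    if not (0 <= ny < h and 0 <= nx < w) or labels[ny, nx] != labels[y, x]:
                        border[y, x] = True
    return border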

The boundary representation already reduces the initial size quite significantly; however, it is usually very dense and can be approximated, thus increasing the memory efficiency even more. One option is to use a Minimum Description Length (MDL) encoding of the chain code, which is computationally expensive. A simple but sufficient alternative is a convex hull algorithm in the form of Graham's scan, which achieves very high compression rates in a short runtime of O(n · log(n)). The convex hull has certain limitations, such as closing holes and losing concave features, which sometimes results in very rough approximations. Therefore one should always consider which level of detail is needed before applying it. Usually, in datasets with large openings (which is true in our case), the large scale features are the more interesting ones; therefore the convex hull is a suitable candidate.
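A brief usage sketch of this compression step (our illustration; SciPy's Qhull-based convex hull stands in here for the Graham's scan used in the paper):

import numpy as np
from scipy.spatial import ConvexHull

points_2d = np.random.rand(500, 2)   # stand-in for a segment projected into
                                     # the 2D frame of its optimal plane
hull = ConvexHull(points_2d)
outline = points_2d[hull.vertices]   # compressed boundary polygon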

5 Experiments

The experiments section consists of three parts. The fidelity of the models created by our algorithm in non-planar environments is investigated in the first part, in section 5.1. The performance of the presented approach is then evaluated in section 5.2, based on data from the two typical planetary exploration scenarios introduced in section 1. As discussed in the introduction, there is a wide range of different representations for mapping data. We concentrate in section 5.2 on point clouds as the comparison basis for our approach, as this is the predominant representation for 3D perception and mapping. In addition, a discussion and experimental comparison to octrees and elevation maps is presented in section 5.3.

5.1 An Analysis of the Approximation Error

Natural landscapes typically feature curved surfaces instead of planes. The polygonal representation hence always leads to a certain loss of information, as it is an approximation of the terrain. This can be seen as a structural error, i.e., an error due to the false representation of the true structure. As shown in the following experiments, this approximation error can be controlled by the choice of the parameters in the plane fitting process.

²Actually, it is enough to store only the Hessian H∗, as (n∗, d∗)T is in its nullspace.

Concretely, the fidelity of our planar approach is evaluated in the following framework. Three quadric surfaces with different curvature complexities are sampled by a simulated actuated LRF (ALRF) (figure 4). No noise is added, so that only approximation errors are present in the planar models. The plane extraction is run with different sets of parameters, thus allowing to investigate their impact on the model precision.

(a) Surface with low curvature (b) Surface with medium curvature (c) Surface with high curvature

Figure 4: The quadric surfaces used for structural error evaluation. The ground truth surfaces are shown in yellow, the sampled points in blue and the sensor location in red.

The surfaces used as ground truth curved terrain (figure 4) are hyperbolic paraboloids. Their canonical form is described by the following implicit equation:

\frac{x^2}{a^2} - \frac{y^2}{b^2} - z = 0  (16)

where (a, b) are shape controlling parameters. We used the values (10, 10), (5, 10) and (5, 5), respectively, for the surfaces plotted in figures 4(a), 4(b) and 4(c). The three different surfaces represent three amounts of curvature in the environment, which covers 40 m by 40 m. All surfaces are shifted down by 3 meters along the z-axis. Three different sets of parameters are used for the plane extraction. Concretely, we investigate the ε and γ thresholds from Algorithm 1. The exact values are given in table 1.

           p1    p2    p3
ε (mm²)   300   100    50
γ (mm)    250    75    30

Table 1: The different parameters for region growing used in the fidelity experiments.

The qualitative results are shown in figure 5. The models consist of planar segments and of the points from the raw data which could not be approximated by region growing. The boundaries of the polygons are not simplified, to preserve more details. An example of the results of the convex hull polygonalization is given in figure 6. The quantitative results are shown in table 2. First, the size of each point cloud, i.e., the number of points in it, is shown for each of the three curved surfaces that represent ground truth. Second, the number of planes generated by our approach is given for the different parameter sets and surface types. As one can expect, both the surface types as well as the parameter settings influence the number of planes generated to approximate the curved surfaces.

The planes never perfectly cover all points from the input point clouds. The remaining points are denoted as "unassigned". The unassigned points are visualized in figures 5 and 6 as little black dots. They occasionally occur at the boundaries of planes and mainly in regions at the far end of the laser scanner's range, where the density of samples is not high enough anymore for a reasonable approximation. They can be included in the model, but in practice they tend to lie in far away regions, which are not sufficiently sampled by the ALRF anyway.


(a) p1 parameters, low curvature (b) p1 parameters, medium curvature (c) p1 parameters, high curvature

(d) p2 parameters, low curvature (e) p2 parameters, medium curvature (f) p2 parameters, high curvature

(g) p3 parameters, low curvature (h) p3 parameters, medium curvature (i) p3 parameters, high curvature

Figure 5: Planar models created by region growing with the different parameters. Different planar segments have different colors.

(a) Surface with low curvature (b) Surface with medium curvature (c) Surface with high curvature

Figure 6: Models created using convex hull polygonalization with the p2 parameters.

[Figure 7 consists of five histogram plots; the plotted data is omitted here. (a) A typical error histogram with its scaled Gaussian approximation for the low curvature surface and p1 parameters; (b) the same for the high curvature surface and p3 parameters; (c)-(e) the signed error distributions for the surfaces with low, medium and high curvature, each for the parameter sets p1, p2 and p3, with standard deviations of σ = 5.5917, 9.2538 and 17.0328 mm (low curvature), σ = 5.9669, 9.4127 and 17.0029 mm (medium curvature) and σ = 6.2545, 9.4013 and 16.7478 mm (high curvature).]

Figure 7: The distributions of the signed approximation error for different region growing parameters and different types of curved surfaces.


parameter  surface    point cloud  number of  unassigned points
set        curvature  size         planes     number   percent
           low        37661         95         560      1.5
p1         medium     41141        137         566      1.4
           high       28707        140         812      2.8
           low        37661        174         878      2.3
p2         medium     41141        248        1066      2.6
           high       28707        234        1371      4.8
           low        37661        287        1572      4.2
p3         medium     41141        375        1577      3.8
           high       28707        298        2150      7.5

Table 2: The statistics of the plane fitting per ground truth point cloud and parameter set.

                                  p1        p2       p3
Surface with low curvature     13.3216   7.2835   4.3120
Surface with medium curvature  13.5333   7.5116   4.6779
Surface with high curvature    13.2706   7.4665   4.9383

Table 3: The average absolute errors of the plane approximations in millimeters.

In the following, a quantitative evaluation of the approximation error is conducted. Concretely, the fidelity of the models in form of their numerical deviation from the ground truth curved surfaces is investigated. Consider a point p_i in one of the point clouds; it lies on one of the curved surfaces, as the point clouds represent ground truth due to the absence of sensor noise. Suppose p_i ∈ R_j, where R_j is one of the segments returned by the plane extraction, with its optimal plane (n_j, d_j). Then the signed distance, from here on denoted as error, between p_i and its optimal plane is

e_i = n_j \cdot p_i - d_j  (17)

This error captures the amount of approximation of the curved surface by the plane at this point. Note that the absolute error |e_i| is the Euclidean distance between the point p_i and the plane (n_j, d_j). The average absolute error per point cloud is given in table 3 and the histograms/distributions of the signed error are shown in figure 7.
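For illustration, the error metric of Eq. (17) amounts to a one-line computation (our own sketch, assuming numpy arrays):

import numpy as np

def signed_errors(points, n, d):
    # points: (N, 3) ground truth points of one segment; (n, d): its optimal plane.
    return points @ n - d                 # e_i = n_j . p_i - d_j, Eq. (17)

# Mean absolute approximation error of a segment:
# np.abs(signed_errors(points, n, d)).mean()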

First of all, the important observation can be made that the approximation error mostly depends on the thresholds of the plane extraction, but not on the surface complexity. This is also reflected in table 2: the more curved the environment, the more planes are automatically generated to represent it. Therefore the model precision can be controlled using the parameters of the segmentation algorithm.

It can in addition be observed that the approximation is very precise. Each of the curved ground truth terrains spans 40 meters by 40 meters, but the average structural error is in the order of millimeters. Even the highest average approximation error of about 14 mm, obtained with parameter setting p1, is so small that it roughly corresponds to the noise level found in the ALRF used on the robot. This means that with these parameter settings, the effects of the approximation are roughly comparable to the noise introduced by the sensor itself.

5.2 Planetary Exploration Scenarios

The first set of 3D measurements was made in the volcanic landscape of the Teide National Park on Tenerife, where the ESA Lunar Robotics Challenge (LRC) took place. For the LRC, one of the Jacobs robots was equipped with an actuated Laser Range Finder (LRF). As described in section 1, this is a rather low end solution for acquiring 3D LRF data. The second dataset is from a virtual environment, where the Eagle crater on Mars is modeled in USARsim based on ground truth data from the Mars Exploration Rover (MER) mission data archives. There, a high end 3D LRF sensor in the form of a Velodyne HDL-64E is used. Examples of the point clouds used in the experiments are shown in figures 8 and 9.


Figure 8: Several views of a 3D-LRF point cloud from the ESA Lunar Robotics Challenge.

Figure 9: Three point clouds from the Velodyne 3D-LRF on Mars. On the left, two views of a scan taken on relatively flat terrain in the center of the crater. In the middle, two views of a scan taken near the rim, with more complex terrain. On the right, two views of scanned terrain in a mountainous part.


Figure 10: Results of the plane fitting for the point cloud shown in figure 8. On top, two views of a representation where the boundaries of the plane patches are computed using the convex hull; on the bottom, the boundaries are computed by MDL polygonalization.

Figure 11: Results of the plane patch fitting for the three point clouds from the Velodyne 3D-LRF on Mars shown in figure 9. Again, two views are shown for each case. Despite the compression by three orders of magnitude, the general overview is still very useful; easy as well as hard to negotiate areas can be well recognized.


Examples of the results of the plane fitting and polygonalization are shown in figures 10 and 11. Multiple views of the 3D models are shown in these two figures; nevertheless, a full 3D visualization, like for example the one provided in the operator GUI of the Jacobs robots, provides a much better overview. For the convenience of the reader, several movies with 3D visualizations of the raw point clouds as well as of the results of the plane fitting can be downloaded from

• http://robotics.jacobs-university.de/datasets/ESA-LRC-08 and

• http://robotics.jacobs-university.de/datasets/USARsim-Velodyne-Mars.

Tables 4 and 5 show the compression rates and run-times. All experiments were carried out on a 64-bit mobile platform with an Intel Core 2 Duo T7100 (1.8 GHz) CPU. As can be seen, the plane fitting with convex hull polygonalization leads to compression by up to a factor of 25. As discussed before, the polygonal representation does not cover all points; some remain unassigned. We consider them not necessary for a good environment representation, but one may be interested in keeping them in the model. Hence, all values are also given for the case where the unassigned points are included. Then, the compression is still up to about a factor of 15. The computations are very fast; the total runtimes are in the order of a few hundred milliseconds. The approach is hence well suited for online computations on the robot.

Polygonal       point cloud  incl. unassig.  compression      percent     excl. unassig.  compression
Representation  size (KB)    size (KB)       (#PC/#PMincl)    unassigned  size (KB)       (#PC/#PMexcl)
Boundary            326           58             5.62            2.54          52             6.30
Convex Hull         326           21            15.18            2.54          13            24.92

Table 4: Average performance statistics on the dataset from the ESA LRC (11 scans, ∼ 4 · 10^4 points per scan). The values are given for point clouds (PC) as well as polygonal maps including (PMincl), respectively excluding unassigned points (PMexcl).

Polygonal          Runtimes (sec)
Representation  Segmentation  Polygonalization  Total
Boundary            0.66           0.05          0.71
Convex Hull         0.66           0.14          0.80

Table 5: Average runtimes on the dataset from the ESA LRC (11 scans, ∼ 4 · 10^4 points per scan).

An important aspect is that this compression is achieved with a high level of fidelity in the representation of the surface. Figure 12 shows the distribution of the "error", i.e., the deviation of the points in the input point cloud from the fitted planes. Please note that this "error" captures both the effects of the real structural error, i.e., the deviation from the (unknown) ground truth, plus the effects of sensor noise. The real structural error is hence smaller. It can be seen that the representation has a very high fidelity. Concretely, the average absolute "error" is 19.3 millimeters, which is not far from the actual noise level of the ALRF.

Tables 6 and 7 show the compression rates and the run-times for the Velodyne Mars data. Again, the plane fitting with convex hull polygonalization leads on average to high compression rates. The computations take longer, namely in the order of several seconds, due to the much higher resolution and range of the sensor. Nevertheless, the approach is still suited for online computations, as the robot only needs to occasionally take a scan for getting an environment overview.

For the virtual Velodyne data it is interesting to have a more detailed look at the individual scans, as they differ in the complexity of the terrain where they were taken. As can be seen in table 6, the compression rates range from a minimum of 29.01 for the scan near the rim of the crater to a maximum of 452.6 for the scan in the center of the crater, in the case where unassigned points are excluded. The same effect can also be observed when unassigned points are included, though to a smaller extent.

[Figure 12 shows a histogram; the plotted data is omitted here. σ = 27.7005 mm.]

Figure 12: Distribution of the approximation "error" for the real point cloud shown in figure 8. The absolute value of the "error" is on average 19.3 millimeters, which is not far from the noise level of the sensor.

               point cloud  incl. unassig.  compression      percent     excl. unassig.  compression
               size (KB)    size (KB)       (#PC/#PMincl)    unassigned  size (KB)       (#PC/#PMexcl)
Polygonal Representation: Boundary
Fig. 9 left        4711          533             8.83           2.51         415            11.35
Fig. 9 middle      3806          818             4.65           0.78         788             4.83
Fig. 9 right       4726          806             5.87           3.03         663             7.13
average            4311          702             6.14           2.11         607             7.10
Polygonal Representation: Convex Hull
Fig. 9 left        4711          129            36.61           2.51          10           452.60
Fig. 9 middle      3806          161            23.65           0.78         131            29.01
Fig. 9 right       4726          211            22.43           3.03          68            69.82
average            4311          163            26.47           2.11          68            63.28

Table 6: Performance statistics on the virtual Mars dataset (3 scans, ∼ 4 · 10^5 points per scan). The values are given for point clouds (PC) as well as polygonal maps including (PMincl), respectively excluding unassigned points (PMexcl).

Polygonal          Runtimes (sec)
Representation  Segmentation  Polygonalization  Total
Boundary            4.98           1.15          6.13
Convex Hull         4.98           1.19          6.17

Table 7: Average runtime statistics on the virtual Mars dataset (3 scans, ∼ 4 · 10^5 points per scan).


This again shows that the segmentation of the planar patches works as expected and that it adapts to the complexity of the terrain.

[Figure 13 consists of three histograms; the plotted data is omitted here. (a) top PC: σ = 21.9 mm; (b) middle PC: σ = 26.3634 mm; (c) bottom PC: σ = 26.6948 mm.]

Figure 13: The "error" distributions for the virtual point clouds (PC) shown in figure 9.

Another very important point is that the representations have a high fidelity. Figure 13 shows the "error" distributions for the plane approximations of the three different point clouds. As discussed before, this "error" captures both the effects of the real structural error plus the effects of sensor noise, i.e., the real structural error is even smaller. The averages of the absolute values for the three virtual Mars scans are all in the order of 2 cm, namely 21.9 mm in the crater center (top), 19.6 mm at the crater side (middle), and 18.5 mm in the mountains (bottom). Again, this is within the noise level of the sensor, which is included in the simulation.

Polygonal       point cloud  incl. unass.  compression        excl. unass.  compression
Representation  (KB)         (KB)          (#PC'/#PM'incl)    (KB)          (#PC'/#PM'excl)
ESA LRC data
Boundary            290           55            5.31               47            6.18
Convex Hull         290           19           15.35               11           25.90
virtual Mars data
Boundary           3342          607            5.51              530            6.31
Convex Hull        3342          135           24.81               58           57.52

Table 8: Additional file compression on the data using gzip almost exactly preserves the compression rates in form of relative sizes (see tables 4 and 6 for the values without additional zip-compression). The values given are for average sizes of zipped point clouds (PC') as well as zipped polygonal maps including (PM'incl), respectively excluding unassigned points (PM'excl).

Finally, the effects of file compression are studied. Table 8 shows the sizes of the different formats after applying gzip file compression to them. When comparing to tables 4 and 6, it is interesting to notice that a bit of additional compression can be achieved, though both the raw point clouds and the polygonal maps are represented in a binary format. But the most important aspect is that the compression ratio due to the different representations hardly changes at all. Concretely, the ratio of an unzipped point cloud over the corresponding unzipped polygon map is always almost exactly the same as the one of the zipped point cloud over the corresponding zipped polygon map.

5.3 Comparison with Octrees and Elevation Maps

Occupancy grids are the predominant form of representation for 2D mapping [58]. Unfortunately, their canonical extension to 3D, i.e., 3D occupancy grids, is highly inefficient with respect to memory usage. An alternative option are octrees [59, 60], which provide the same access operations as 3D occupancy grids and which are more memory-efficient, at least when compared to full 3D grids. We have included a comparison here for the sake of completeness, though octrees have several disadvantages, as illustrated in this section.


                                     relative size
resolution        size    point cloud    polygon map
(cm × cm × cm)    (KB)    (#OCT/#PC)     (#OCT/#PMch)
1 × 1 × 1          723       1.87           55.19
5 × 5 × 5          288       0.75           21.98
10 × 10 × 10       138       0.36           10.53
20 × 20 × 20        63       0.16            4.77
30 × 30 × 30        34       0.09            2.56
40 × 40 × 40        24       0.06            1.83
50 × 50 × 50        18       0.05            1.34

Table 9: Map sizes for octree representations with different resolutions of the point cloud shown in figure 8. The absolute size in kilobytes (KB) and the relative factor of the size of the octrees (#OCT) to the raw point cloud (#PC), respectively to the polygon map with convex hull boundaries (#PMch), are shown.

For the following comparison experiments, a highly efficient implementation of an octree is used [61]. This Fast Octree (FOT) is highly efficient both with respect to computation time as well as memory usage. The FOT is optimized for binary (occupancy) information per cell. The nodes and leaves of a FOT consume just 4 bytes of memory each. In addition to the small memory usage, this allows computationally efficient integer and logical operations for accessing the FOT.

But even a highly optimized FOT requires significant amounts of memory compared to a polygonal map. The octree representation may be an advantage over full 3D grids, but it is already less so in comparison to raw point clouds. The exact memory requirement depends of course on the spatial resolution of the octree. Table 9 shows the sizes of a FOT representation of the ESA LRC scan shown in figure 8 for different spatial resolutions. Visualizations of the different octrees are shown in figure 14. The octree with the finest spatial resolution of a cubic centimeter per cell captures approximately all points in the original point cloud. This octree is about 1.9 times larger than the raw point cloud due to the overhead of the tree structure. The efficiency of the octree increases with decreasing spatial resolution, as increasingly more points can be collapsed into a single octree cell. A cell size of 5×5×5 cm³ already leads to a reduction in size to approximately 75% of the raw point cloud data.

But the octree is still not really efficient compared to the polygonal representation. An extremely coarse octree cell size of 50×50×50 cm³ is required to get roughly similar memory requirements, as table 9 shows. The main problem with the octree representation, however, are the side effects that result from the discretization of the cells. A halfway smooth representation of the environment's surface requires a small cell size. Otherwise, there are step artifacts, as illustrated in figures 15 and 16. If the cell size is too big, the discretization will always lead to illusionary obstacles, which are not present in the polygonal representation. But a suited cell size of 5×5×5 cm³ leads to memory requirements which are about 22 times higher than for the polygonal representation.

A further possible representation are elevation maps. As mentioned before, we are mainly interested in full 3D data. Please note that our planar surface patches can represent tunnels, overhanging rocks, and other full 3D structures. But in many natural landscapes, a 2.5D representation is sufficient, as long as two points with different vertical z-coordinates never have the same horizontal x,y-coordinates, i.e., if the environment can be described by an elevation surface. Such 2.5D elevation maps, also known as digital elevation models (DEM) or digital terrain models (DTM), have been used in the context of planetary exploration [12][62]. Hence, a comparison is included here.

Elevation maps come in two different flavors, namely as an irregular triangle mesh, trimesh for short, or as raster data based on regular grids. In case an irregular trimesh representation is used, there is no size difference between the elevation map and a raw point cloud. Concretely, the points in the cloud are simply identical with the corner points of the Delaunay triangulation of the elevation surface and hence are their most efficient representation. The situation is different if a regular grid is used as the basis for the elevation map. The size of the map then depends on the resolution of the grid.


(a) 5 cm × 5 cm × 5 cm (b) 10 cm × 10 cm × 10 cm (c) 20 cm × 20 cm × 20 cm

(d) 30 cm × 30 cm × 30 cm (e) 40 cm × 40 cm × 40 cm (f) 50 cm × 50 cm × 50 cm

Figure 14: Top views of octree representations of the ESA LRC point cloud shown in figure 8. The octrees are computed with different spatial resolutions, which directly influence their sizes but also their usability.

Figure 15: The discretization used in octrees can lead to the illusion that there are steps in the terrain which cannot be negotiated by the robot. The effect obviously depends on the resolution with which the octree is created and is hence directly related to the overall size of the octree. The problem does not occur with the polygon representation based on plane fitting.


Figure 16: An example of the step illusion (see also figure 15), shown here in the 20×20×20 cm³ octree representation of the ESA LRC point cloud from figure 8.

              elevation maps (EM)         relative size
resolution    ESA LRC     Mars      ESA LRC        Mars
(cm × cm)     (KB)        (KB)      (#EM/#PMch)    (#EM/#PMch)
10 × 10          977       5625       74.55          82.57
20 × 20          244       1406       18.64          20.64
30 × 30          109        625        8.28           9.17
40 × 40           61        352        4.66           5.16
50 × 50           39        225        2.98           3.30
60 × 60           27        156        2.07           2.29
70 × 70           20        115        1.52           1.69
80 × 80           15         88        1.16           1.29

Table 10: Map sizes for elevation maps with different grid resolutions. In addition to the absolute size in kilobytes (KB), the relative factor of the size of the elevation maps (#EM) to the polygon maps with convex hull boundaries (#PMch) is shown.


Table 10 shows the sizes of elevation maps in the two test scenarios with different grid resolutions. The raster varies from a tile size of 10 cm by 10 cm up to 80 cm by 80 cm in steps of 10 cm. It can be seen that elevation maps with reasonable grid resolutions require more space than the plane fitting representation with convex hull boundaries. Only with very coarse grid resolutions are the sizes comparable. In addition, elevation maps are, as mentioned, only 2.5D and hence cannot handle full 3D structures. But they at least do not suffer from the step illusion effect like octrees.

6 Conclusions

3D environment information is crucial for operating a robot in planetary exploration scenarios. It is hence desirable to get long range 3D data with high resolution, a large field of view, and very fast update rates. 3D Laser Range Finders (3D-LRF) have a high potential in this respect, especially devices at the top of the state of the art like the Velodyne HDL-64E. In addition, 3D-LRF can operate under conditions where standard vision based methods fail, e.g., under extreme light conditions. But it is non-trivial to transmit the huge amounts of data delivered by a 3D-LRF from a planetary exploration robot to an operator station.

Based on our own previous work on plane fitting, it is shown in this article how the huge amounts of 3D point cloud data from a 3D-LRF can be tremendously reduced. In doing so, large sets of points are replaced by planar surface patches that are fitted into the data in an optimal way. The underlying computations are very efficient and hence suited for online computations onboard the robot.

The approach is tested with two sets of experiments. The first is based on data gathered during our participation at the Lunar Robotics Challenge (LRC) of the European Space Agency (ESA) that took place in October 2008 in the volcanic landscape of the Teide National Park on Tenerife. The data was collected by a robot with a SICK S300 2D-LRF combined with a simple servo for a pitching motion to get the additional degree of freedom. A high end solution is used for the second set of experiments: there, a Velodyne HDL-64E is used in a high fidelity simulation with ground truth data from Mars. The generated models are suited to give an overview of the environment, while compression rates of up to two orders of magnitude are achieved in both sets of experiments.

REFERENCES

[1] J. Erickson, "Living the dream - an overview of the Mars exploration project," Robotics and Automation Magazine, IEEE, vol. 13, no. 2, pp. 12–18, 2006.

[2] J. Biesiadecki, E. Baumgartner, R. Bonitz, B. Cooper, F. Hartman, P. Leger, M. Maimone, S. Maxwell, A. Trebi-Ollennu, E. Tunstel, and J. Wright, "Mars exploration rover surface operations: driving opportunity at Meridiani Planum," Robotics and Automation Magazine, IEEE, vol. 13, no. 2, pp. 63–71, 2006.

[3] R. Lindemann, D. Bickler, B. Harrington, G. Ortiz, and C. Voorhees, "Mars exploration rover mobility development," Robotics and Automation Magazine, IEEE, vol. 13, no. 2, pp. 19–26, 2006.

[4] M. Ai-Chang, J. Bresina, L. Charest, A. Chase, J.-J. Hsu, A. Jonsson, B. Kanefsky, P. Morris, K. Rajan, J. Yglesias, B. Chafin, W. Dias, and P. Maldague, "MAPGEN: mixed-initiative planning and scheduling for the Mars Exploration Rover mission," Intelligent Systems, IEEE, vol. 19, no. 1, pp. 8–12, 2004.

[5] K. Iagnemma, C. Brooks, and S. Dubowsky, "Visual, tactile, and vibration-based terrain analysis for planetary rovers," in IEEE Aerospace Conference, vol. 2, 2004, pp. 841–848.


[6] K. Iagnemma, H. Shibly, and S. Dubowsky, "On-line terrain parameter estimation for planetary rovers," in IEEE International Conference on Robotics and Automation (ICRA), vol. 3, 2002, pp. 3142–3147.

[7] S. Lacroix, A. Mallet, D. Bonnafous, G. Bauzil, S. Fleury, M. Herrb, and R. Chatila, "Autonomous rover navigation on unknown terrains: Functions and integration," International Journal of Robotics Research, vol. 21, no. 10-11, pp. 917–942, 2002.

[8] ——, "Autonomous rover navigation on unknown terrains functions and integration," in Experimental Robotics VII, ser. Lecture Notes in Control and Information Sciences, 2001, vol. 271, pp. 501–510.

[9] D. B. Gennery, "Traversability analysis and path planning for a planetary rover," Autonomous Robots, vol. 6, no. 2, pp. 131–146, 1999.

[10] P. G. Backes, J. S. Norris, M. W. Powell, M. A. Vona, R. Steinke, and J. Wick, "The Science Activity Planner for the Mars Exploration Rover Mission: FIDO Field Test Results," in Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 2003.

[11] Y. Kunii and T. Ushioda, "Shadow casting stereo imaging for high accurate and robust stereo processing of natural environment," in Advanced Intelligent Mechatronics, 2008. AIM 2008. IEEE/ASME International Conference on, 2008, pp. 302–307.

[12] T. Kubota, K. Moesl, and I. Nakatani, "Map Matching Scheme for Position Estimation of Planetary Explorer in Natural Terrain," in Robotics and Automation, 2007 IEEE International Conference on, 2007, pp. 3520–3525.

[13] L. Edwards, M. Sims, C. Kunz, D. Lees, and J. Bowman, "Photo-realistic Terrain Modeling and Visualization for Mars Exploration Rover Science Operations," in Systems, Man and Cybernetics, 2005 IEEE International Conference on, vol. 2, 2005, pp. 1389–1395.

[14] M. Massari, E. Ceriani, L. Rigolin, and F. Bernelli-Zazzera, "Optimal path planning for planetary exploration rovers based on artificial vision system for environment reconstruction," in Advanced Intelligent Mechatronics. Proceedings, 2005 IEEE/ASME International Conference on, 2005, pp. 987–992.

[15] S. Goldberg, M. Maimone, and L. Matthies, "Stereo vision and rover navigation software for planetary exploration," in Aerospace Conference Proceedings, 2002. IEEE, vol. 5, 2002, pp. 5-2025–5-2036.

[16] Y. Kunii and T. Gotoh, "Evaluation of Shadow Range Finder: SRF for planetary surface exploration," in Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference on, vol. 2, 2003, pp. 2573–2578.

[17] Y. Kunii, S. Tsuji, and M. Watari, "Accuracy improvement of Shadow Range Finder: SRF for 3D surface measurement," in Intelligent Robots and Systems, 2003. (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on, vol. 4, 2003, pp. 3041–3046.

[18] M. Kurisu, Y. Yokokohji, Y. Shiokawa, and T. Samejima, "Development of a laser range finder for three-dimensional map building in rubble," Advanced Robotics, vol. 19, no. 3, pp. 273–294, 2005.

[19] Y. Cheng, M. Maimone, and L. Matthies, "Visual odometry on the Mars exploration rovers - a tool to ensure accurate driving and science imaging," Robotics and Automation Magazine, IEEE, vol. 13, no. 2, pp. 54–62, 2006.

[20] CSA, "Avatar Explore Mission," http://www.asc-csa.gc.ca/eng/sciences/avatar.asp, 2009.

[21] J. Poppinga, N. Vaskevicius, A. Birk, and K. Pathak, "Fast Plane Detection and Polygonalization in noisy 3D Range Images," in International Conference on Intelligent Robots and Systems (IROS). Nice, France: IEEE Press, 2008.


[22] K. Pathak, N. Vaskevicius, and A. Birk, "Revisiting Uncertainty Analysis for Optimum Planes Extracted from 3D Range Sensor Point-Clouds," in International Conference on Robotics and Automation (ICRA). IEEE Press, 2009.

[23] K. Pathak, A. Birk, S. Schwertfeger, and J. Poppinga, "3D Forward Sensor Modeling and Application to Occupancy Grid Based Sensor Fusion," in International Conference on Intelligent Robots and Systems (IROS). San Diego, USA: IEEE Press, 2007.

[24] Y.-S. Ha and H.-H. Kim, "Environmental map building for a mobile robot using infrared range-finder sensors," Advanced Robotics, vol. 18, no. 4, pp. 437–450, 2004.

[25] A. Elfes, "Using occupancy grids for mobile robot perception and navigation," Computer, vol. 22, no. 6, pp. 46–57, 1989.

[26] C. Pradalier and S. Sekhavat, "Simultaneous localization and mapping using the Geometric Projection Filter and correspondence graph matching," Advanced Robotics, vol. 17, no. 7, pp. 675–690, 2004.

[27] I. S. Kweon and T. Kanade, "High resolution terrain map from multiple sensor data," in International Workshop on Intelligent Robots and Systems (IROS), 1990, pp. 127–134.

[28] S. Takamura, T. Nakamura, and M. Asada, "Behavior-based map representation for a sonar-based mobile robot by statistical methods," Advanced Robotics, vol. 11, no. 5, pp. 445–462, 1996.

[29] D. Fischer and P. Kohlhepp, "3D geometry reconstruction from multiple segmented surface descriptions using neuro-fuzzy similarity measures," Journal of Intelligent and Robotic Systems, vol. 29, pp. 389–431, 2000.

[30] H. Surmann, A. Nuechter, and J. Hertzberg, "An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments," Robotics and Autonomous Systems, vol. 45, no. 3-4, pp. 181–198, 2003.

[31] S. Thrun, D. Haehnel, D. Ferguson, M. Montemerlo, R. Triebel, W. Burgard, C. Baker, Z. Omohundro, S. Thayer, and W. Whittaker, "A System for Volumetric Robotic Mapping of Abandoned Mines," in Proc. IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003.

[32] J. Weingarten and R. Siegwart, "3D SLAM using planar segments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, 2006.

[33] M. Magnusson, A. Lilienthal, and T. Duckett, "Scan registration for autonomous mining vehicles using 3D-NDT," Journal of Field Robotics, vol. 24, no. 10, pp. 803–827, 2007.

[34] A. Nuchter, K. Lingemann, and J. Hertzberg, "6D SLAM - 3D mapping outdoor environments," Journal of Field Robotics, vol. 24, no. 8/9, pp. 699–722, 2007.

[35] J. Weingarten, "Feature-based 3D SLAM," Ph.D. dissertation, EPFL, Lausanne, Switzerland, 2006. [Online]. Available: http://library.epfl.ch/theses/?nr=3601

[36] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, Feb 1992.

[37] ESA, "Lunar Robotics Challenge (LRC)," http://www.esa.int/esaCP/SEMGAASHKHF_index_0.html, 2008.

[38] USARsim, “Unified System for Automation and Robotics Simulator (USARsim),” http://usarsim.sourceforge.net/, 2006.

[39] Epic-Games, “Unreal engine,” 2003.

[40] Karma, “Mathengine Karma User Guide,” 2003.


[41] S. Carpin, M. Lewis, J. Wang, S. Balakirsky, and C. Scrapper, "Bridging the gap between simulation and reality in urban search and rescue," in RoboCup 2006: Robot Soccer World Cup X, ser. LNAI. Springer, 2006, pp. 1–12.

[42] S. Carpin, M. Lewis, J. Wang, S. Balakirsky, and C. Scrapper, "USARSim: a robot simulator for research and education," in Proc. of the 2007 IEEE Intl. Conf. on Robotics and Automation (ICRA), 2007.

[43] S. Carpin, T. Stoyanov, Y. Nevatia, M. Lewis, and J. Wang, "Quantitative assessments of USARSim accuracy," in Proceedings of PerMIS, 2006.

[44] S. Carpin, A. Birk, M. Lewis, and A. Jacoff, "High fidelity tools for rescue robotics: results and perspectives," in RoboCup 2005: Robot Soccer World Cup IX, ser. Lecture Notes in Artificial Intelligence (LNAI), I. Noda, A. Jacoff, A. Bredenfeld, and Y. Takahashi, Eds. Springer, 2006.

[45] MER-Science-Team, "Mars Exploration Rover (MER) Mission data archives," http://anserver1.eprsl.wustl.edu/anteam/merb/merb_main2.htm, 2007.

[46] A. Birk, T. Stoyanov, Y. Nevatia, R. Ambrus, J. Poppinga, and K. Pathak, "Terrain Classification for Autonomous Robot Mobility: from Safety, Security Rescue Robotics to Planetary Exploration," in Planetary Rovers Workshop, International Conference on Robotics and Automation (ICRA). IEEE, 2008.

[47] A. Birk, J. Poppinga, T. Stoyanov, and Y. Nevatia, "Planetary Exploration in USARsim: A Case Study including Real World Data from Mars," in RoboCup 2008: Robot Soccer World Cup XII, ser. Lecture Notes in Artificial Intelligence (LNAI), L. Iocchi, H. Matsubara, A. Weitzenfeld, and C. Zhou, Eds. Springer, 2009.

[48] K. Iagnemma and M. Buehler, "Special Issue on the DARPA Grand Challenge, Part 1," Journal of Field Robotics, vol. 23, no. 8, 2006.

[49] ——, "Special Issue on the DARPA Grand Challenge, Part 2," Journal of Field Robotics, vol. 23, no. 9, 2006.

[50] "HDL-64E data sheet." [Online]. Available: http://www.velodyne.com/lidar/products/specifications.aspx

[51] K. Pathak, N. Vaskevicius, J. Poppinga, M. Pfingsthorn, S. Schwertfeger, and A. Birk, "Fast 3D Mapping by Matching Planes Extracted from Range Sensor Point-Clouds," in International Conference on Intelligent Robots and Systems (IROS). IEEE Press, 2009.

[52] K. Pathak, A. Birk, N. Vaskevicius, M. Pfingsthorn, S. Schwertfeger, and J. Poppinga, "Online 3D SLAM by Registration of Large Planar Surface Segments and Closed Form Pose-Graph Relaxation," Journal of Field Robotics, Special Issue on 3D Mapping, vol. (in press), 2009.

[53] F. Prieto, T. Redarce, P. Boulanger, and R. Lepage, "CAD-based range sensor placement for optimum 3D data acquisition," in Second International Conference on 3-D Imaging and Modeling (3DIM'99). Los Alamitos, CA, USA: IEEE Computer Society, 1999, p. 0128. [Online]. Available: http://doi.ieeecomputersociety.org/10.1109/IM.1999.805343

[54] D. Anderson, H. Herman, and A. Kelly, "Experimental characterization of commercial flash ladar devices," in International Conference on Sensing Technologies, Palmerston North, New Zealand, November 2005. [Online]. Available: http://www.frc.ri.cmu.edu/~alonzo/pubs/papers/icst05FlashLadar.pdf

[55] D. S. Sivia, Data Analysis: A Bayesian Tutorial. Oxford University Press, 1996.

[56] Y. Kanazawa and K. Kanatani, "Reliability of fitting a plane to range data," IEICE Transactions on Information and Systems, vol. E78-D, no. 12, pp. 1630–1635, 1995.


[57] D. Hahnel, W. Burgard, and S. Thrun, "Learning Compact 3D Models of Indoor and Outdoor Environments with a Mobile Robot," Robotics and Autonomous Systems, vol. 44, no. 1, pp. 15–27, 2003.

[58] S. Thrun, "Robotic Mapping: A Survey," in Exploring Artificial Intelligence in the New Millennium, G. Lakemeyer and B. Nebel, Eds. Morgan Kaufmann, 2002.

[59] C. L. Jackins and S. L. Tanimoto, "Oct-trees and their use in representing three-dimensional objects," Computer Graphics and Image Processing, vol. 14, no. 3, pp. 249–270, 1980.

[60] D. Meagher, "Geometric modelling using octree encoding," Computer Graphics and Image Processing, vol. 19, no. 2, pp. 129–147, 1982.

[61] J. Poppinga, M. Pfingsthorn, S. Schwertfeger, K. Pathak, and A. Birk, "Optimized Octtree Datastructure and Access Methods for 3D Mapping," in IEEE Safety, Security, and Rescue Robotics (SSRR). IEEE Press, 2007.

[62] C.-H. Chien, "Applications of computer vision in space robotics," in Recent Developments in Computer Vision, ser. Lecture Notes in Computer Science (LNCS), vol. 1035, G. Goos, J. Hartmanis, and J. v. Leeuwen, Eds. Springer, 1996, pp. 59–68.
