Obstacle Detection and Avoidance Using TurtleBot Platform and XBox Kinect

Sol Boucher

Research Assistantship Report
Department of Computer Science
Rochester Institute of Technology

Research Supervisor: Dr. Roxanne Canosa
Research Sponsor: RIT Golisano College Honors Committee

August 9, 2012

Roxanne Canosa, Ph.D. Date


Abstract

Any robot that is to drive autonomously must be able to detect and avoid obstacles that it might encounter. Traditionally, this problem has been solved using systems of one or more RGB cameras utilizing complicated and computationally-expensive computer vision algorithms, somewhat unreliable ultrasonic distance sensors, or laser-based depth scanners. However, Microsoft's recent release of the XBox Kinect has opened up new areas of research in computer vision and image understanding, and this same device can be employed for obstacle detection. The three-dimensional point cloud provided by the low-cost and commercially-available Kinect platform puts much more information about the surrounding world at the disposal of an autonomous robot. This research investigates the problem of using this data to autonomously detect and avoid obstacles in an unconstrained indoor environment. The algorithm used is a synthesis of the traditional method of choosing turn directions based on the centroid of the detected points and a more novel search of the ground plane for edges and boundaries. Good results are achieved not only for drop-offs and common obstructions, but also when objects are especially short or moving just in front of the robot and perpendicular to it.


Contents

List of Figures
List of Tables
1 Introduction
2 Similar Work (Literature Review)
3 Background
4 Approach
    4.1 Height range cropping
    4.2 Cluster detection
    4.3 Ground plane edges
    4.4 Combining approaches
5 Results and Discussion
    5.1 Intensive testing
    5.2 Extensive testing
6 Table of Hours Worked
7 Conclusion
References
Appendices:
1 Starting the TurtleBot
2 Stopping the TurtleBot
3 Setting up a Development Workstation
4 Upgrading to the Latest Version of PCL
5 Backporting in a Class from the PCL Trunk
6 ROS Glossary
7 ROS Commands
8 Useful Links
9 Code Listing


List of Figures

1 TurtleBot components
2 Kinect imagery
    a Color picture
    b Point cloud
3 Development cycle overview
4 Detailed progression of development
5 Pseudo code for algorithms
    a Height range cropping
    b Ground plane edges
    c Integration and drive control


List of Tables

1 Structured test cases
2 Test drives
3 Hours worked


1 Introduction

This research was conducted on Willow Garage's TurtleBot robotics platform, shown in Figure 1. The TurtleBot is an integrated kit backed by Willow Garage's Robot "Operating System" (ROS) robotics suite and the Open Perception Foundation's Point Cloud Library (the PCL), both of which are open-source projects distributed under BSD licenses. The Kinect provides depth information in the form of a three-dimensional point cloud, as shown in Figure 2. The goal was to implement a simple but effective obstacle detection and avoidance system that—using only the data from the Kinect—was able to autonomously roam the hallways on all floors of the building without running into anything. This task presupposed an ability to avoid whatever common hazards, whether stationary or mobile, it might reasonably be expected to encounter during such a journey. Such capability would be desirable for potential use with or integration into future projects developed on the same system or a similar one.

2 Similar Work (Literature Review)

Microsoft launched the Kinect on November 4, 2010, in order to add a new and innovative breed of entertainment to its XBox 360 gaming console. However, the sensor immediately caught the attention of researchers and software developers of all persuasions; as a result, and thanks to the effort of many dedicated hackers, open-source drivers were soon available to facilitate its use for more diverse applications. Using these drivers, researchers have since used the Kinect for room mapping, desktop application control, 3-D videoconferencing, surveillance, and even diagnosis and surgery.

When using RGB-D sensors on systems with limited resources, the largest stumbling block tends to be the computational cost of processing each frame of the cloud data. In an attempt to alleviate this burden, Microsoft initially planned to include an onboard embedded microprocessor capable of many common image processing operations, a feature that was cut from the production Kinect. As a result, the full burden of working with the three-dimensional data continues to rest with the main CPU.

Figure 1: The TurtleBot and its components: Microsoft Kinect (infrared emitter, RGB camera, infrared camera), laptop, frame, and iRobot Create. (Image credit: http://ros.org/wiki/TurtleBot)

The XBox Kinect has an infrared projector and an infrared camera separated by about 7.5 cm, and a color camera about 2 cm away from the latter (Nguyen, 2012). The infrared pair is able to assemble a grid of distance measurements triangulated from the lateral displacement of the projected points from the known emitter pattern. Unfortunately, the device is unable to perform any distance measurements closer than about 0.5 m. One method of detecting obstacles is as follows: First, perform a voxel grid downsampling on the point cloud to decrease processing time. Next, apply a pass-through filter to crop out regions of little interest or accuracy. Then, use the RANSAC algorithm to perform plane detection. Finally, Euclidean cluster extraction reveals individual obstacles, and additional analysis of those obstacles is performed in order to determine their sizes. This procedure avoids many difficulties of using a single RGB camera, and it enjoys faster run times than dual-RGB-camera systems.


Figure 2: Sample imagery from the Kinect: (a) a picture from the color camera; (b) the corresponding 3-D point cloud.

Whenever information from multiple image sensors is integrated, there is a risk that it will not line up appropriately, either due to simple displacement resulting from the sensors' relative positions or because of unique lens distortions created by inconsistencies in the manufacturing process (Herrera et al., 2011). Herrera et al. describe their noise-tolerant method for calibrating a color camera and a depth camera against each other, enabling the attainment of better results than would ever be possible by calibrating the two cameras individually. They start by computing for the color camera the two-dimensional projection coordinates in the image at which a three-dimensional point in space—the corner of a checkerboard calibration pattern—appears, then perform a distortion correction. Next, they repeat the projection calculation for the depth image, this time using the corners of the plane on which the checkerboard rests—because the board itself isn't visible in this image—and omitting the distortion correction step, as it would be much less effective than for the color imagery. Using the projections and data from several images with different perspectives, it is possible to calculate the rotation and translation necessary to match the two images' reference frames. The first parameters obtained for the color camera are much better than those for the depth sensor, so the former are used to optimize the latter by performing a nonlinear error minimization; then, another minimization is performed across the parameters for both cameras until the results are convergent. Using 35 calibration images, the authors are able to demonstrate accuracy comparable to that achieved by the proprietary calibration algorithm provided with their XBox Kinect test sensor.

A common traditional method of obstacle avoidance is the potential field model, or PFM (Koren and Borenstein, 1991). This model represents targets and obstacles as imaginary attractive and repulsive forces on the robot, respectively. Stored as vectors, such forces are easily summed to find the resultant force vector, which is used directly as the robot's navigation vector. One such implementation—the virtual force field, or VFF—uses a two-dimensional histogram grid populated from ultrasonic range sensors and holding certainty values of how likely it is that an obstacle exists at each location. Objects of interest are assigned corresponding virtual repulsive force vectors with magnitude proportional to their certainty values and inversely proportional to their distance from the vehicle's center. Similarly, the attractive force between the robot and its goal location is proportional to a preassigned force constant and inversely proportional to its distance from the vehicle. After obtaining the resultant force vector, its direction and magnitude are converted into parameters usable by the drive system and issued as movement commands. However, four major problems have been identified that affect all PFM systems, becoming increasingly noticeable as a robot moves faster: The robot may fall into a trap situation when it reaches a dead end, a phenomenon for which workarounds exist. The robot may also be directed in the opposite direction of its target in the case where two close objects stand in front of it with space between them, a more difficult problem to handle. Certain environments may also cause the robot to begin oscillating. Finally, more severe oscillations—and even collisions—occur when a robot drives down a narrow hallway with a discontinuity in its side. Together, these factors make the same PFMs that were once seen as simple and elegant much less attractive, especially for applications relying on higher speeds.

One way to detect obstacles using an RGB-D camera is to segment every plane in the point cloud and consider as obstacles both points emerging from the detected planes and planes whose surface orientations differ from that of the ground (Holz et al., 2011). Surface detection may be accomplished computationally cheaply by considering pixel neighborhoods instead of performing distance searches, then computing the normal vector by finding the cross-product of two averaged vectors tangential to the local surface. The coordinates of the points and their corresponding surface normals are transformed to Cartesian coordinates from the robot's perspective, then to spherical coordinates. Only the plane representing the ground is considered navigable, and the RANSAC algorithm is applied to optimize the detected surfaces and compensate for noisy readings. Each plane is then converted to its convex hull, and both horizontal planes other than the ground and planes supported by horizontal planes are considered to be navigational obstacles. This method is able to process plane data at high speed using only sequential processing while remaining relatively accurate: the average deviation is under ten degrees, and objects are properly segmented over 90% of the time. The algorithm is, however, sensitive to very small objects and distant measurements.

Another method of making use of depth information for the purpose of detecting obstacles is to examine the 3-D slopes between detected points (Talukder et al., 2002). Two points may be considered to compose a single obstacle if this slope—measured with respect to the horizontal—is steeper than a set slope and if their height difference falls within a predetermined range. Such obstacles may be found by searching the image from the bottom row and finding for each obstacle pixel in that row all the other pixels that meet the aforementioned criteria with respect to that pixel. The resulting points may also be classified as obstacle points, and the process repeated to find all such associated points. Finally, individual objects may be picked out by applying the transitive property of the above obstacle composition criteria. This works very well if the terrain and robot are both flat, but becomes a more difficult task as the terrain becomes rough or if the robot is expected to climb ramps.

Although the latter few approaches offer robust, proven functionality and are highly applicable to the type of sensor used, this project sought a simpler solution and didn't require segmentation or identification of individual objects. Thus, it began instead with the development of what would evolve into an implementation of one of the most common simple obstacle avoidance algorithms: simply turning away from the centroid of the detected offending points. However, this venerable approach was extended to consider not only obstacles themselves but also edges on the ground plane, an addition that enabled the detection of several additional danger scenarios that could not be handled by the traditional method alone.


3 Background

The Point Cloud Library includes many data types and numerous algorithms that make working with point clouds extraordinarily easy. The first of the algorithms used in this research was the 3-D voxel grid filter, which downsamples point cloud data by modeling the input dataset with a three-dimensional grid having cubic cells of user-supplied dimensions (Rusu, 2011). Each cell containing at least one point in the original image is then populated with a single voxel placed at the centroid of the points within that part of the input.
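
As a concrete illustration, here is a minimal sketch of that step using PCL's VoxelGrid filter; the 1 cm leaf size is an illustrative value, not necessarily the one used in this project:

    #include <pcl/point_types.h>
    #include <pcl/filters/voxel_grid.h>

    // Downsample a cloud by replacing each occupied cubic cell with a single
    // point at the centroid of the points that fell inside it.
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    downsample(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& input)
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::VoxelGrid<pcl::PointXYZ> grid;
      grid.setInputCloud(input);
      grid.setLeafSize(0.01f, 0.01f, 0.01f);  // cell edge length in meters (illustrative)
      grid.filter(*output);
      return output;
    }

Downsampling first keeps all subsequent per-point work proportional to the grid resolution rather than to the raw sensor density.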

The research made extensive use of the plane edge detection algorithms simultaneously developed by Changhyun Choi, a Ph.D. student at the Georgia Institute of Technology (Choi, 2012). One of the utilized algorithms simply finds points bordering on those whose coordinates are set to NaN values, thereby computing the absolute boundaries of a plane. Particularly useful was his high curvature edge detection algorithm, which locates the points making up the boundaries between the floor and those objects that rest on it using integral images and Canny edge detection.
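
Choi's class (OrganizedEdgeDetection) had to be backported from the PCL trunk at the time (see Appendix Section 5); later PCL releases ship equivalent functionality as OrganizedEdgeFromNormals. A hedged sketch against that newer API, with illustrative thresholds:

    #include <vector>
    #include <pcl/point_types.h>
    #include <pcl/features/integral_image_normal.h>
    #include <pcl/features/organized_edge_detection.h>

    // `cloud` must be an organized cloud straight from the Kinect driver.
    void detectEdges(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
    {
      // Estimate surface normals cheaply using integral images.
      pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
      pcl::IntegralImageNormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
      ne.setNormalEstimationMethod(ne.COVARIANCE_MATRIX);
      ne.setNormalSmoothingSize(10.0f);
      ne.setInputCloud(cloud);
      ne.compute(*normals);

      // Label both NaN-boundary (absolute plane boundary) and high curvature edges.
      pcl::OrganizedEdgeFromNormals<pcl::PointXYZ, pcl::Normal, pcl::Label> oed;
      oed.setInputCloud(cloud);
      oed.setInputNormals(normals);
      oed.setEdgeType(oed.EDGELABEL_NAN_BOUNDARY | oed.EDGELABEL_HIGH_CURVATURE);
      oed.setHCCannyLowThreshold(0.4f);   // Canny hysteresis thresholds (illustrative)
      oed.setHCCannyHighThreshold(1.1f);

      pcl::PointCloud<pcl::Label> labels;
      std::vector<pcl::PointIndices> label_indices;  // one index list per edge type
      oed.compute(labels, label_indices);
    }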

Integral images are a common technique in modern computer vision, and are used to detect distinctive image features (Viola and Jones, 2001). They are essentially tables storing, for each coordinate in the corresponding image, the sum of the pixel values lying in the box bounded by that coordinate and the upper-left corner of the image. Features from an integral image can then be used for a wide variety of purposes, including estimation of a 3-D image's surface normals.
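
A small self-contained illustration of the table's construction (my own sketch, not project code); once built, the sum over any axis-aligned box comes from just four lookups:

    #include <cstddef>
    #include <vector>

    // ii(x, y) holds the sum of img over the box from (0, 0) through (x, y).
    std::vector<long> integralImage(const std::vector<unsigned char>& img,
                                    std::size_t w, std::size_t h)
    {
      std::vector<long> ii(w * h, 0);
      for (std::size_t y = 0; y < h; ++y)
        for (std::size_t x = 0; x < w; ++x) {
          long above = y > 0 ? ii[(y - 1) * w + x] : 0;
          long left  = x > 0 ? ii[y * w + (x - 1)] : 0;
          long diag  = (x > 0 && y > 0) ? ii[(y - 1) * w + (x - 1)] : 0;
          ii[y * w + x] = img[y * w + x] + above + left - diag;
        }
      return ii;  // box sum over [x0,x1]x[y0,y1] = D - B - C + A (four entries)
    }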

The Canny edge detector starts by smoothing the input image to reduce noise (Canny, 1986). Next, the spatial gradients of the resulting image are measured in order to expose the edges, each of which is assigned a strength based on the distinctiveness of its gradient. The directions of the edges are determined in two dimensions using these gradients, then the directions are used to trace the edges. Those edges with strengths above a certain threshold are kept, while those with strengths between that value and a lower constant are kept only if they are connected to one or more edges from the former group.

PCL also provides a radius outlier removal, which accepts from the user a search radius and a minimum number of neighbors (O'Leary, 2011). It then searches the neighborhood surrounding each point in the image and removes that point if it has fewer than the specified number of neighbors.
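
A brief sketch of that filter, with placeholder parameter values:

    #include <pcl/point_types.h>
    #include <pcl/filters/radius_outlier_removal.h>

    // Remove any point with fewer than 5 neighbors within 5 cm (placeholder values).
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    removeOutliers(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::RadiusOutlierRemoval<pcl::PointXYZ> ror;
      ror.setInputCloud(cloud);
      ror.setRadiusSearch(0.05);
      ror.setMinNeighborsInRadius(5);
      ror.filter(*filtered);
      return filtered;
    }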

As the project progressed, it became necessary to discern information about the ground plane directly in front of the robot. In order to determine which points were part of this plane, a linear model was calculated from the y- and z-coordinates of two known floor points, one—(z1, y1)—very near to the robot and the other—(z2, y2)—farther away. First, the plane's slope m was computed, as in Equation 1:

    m = (y2 − y1) / (z2 − z1)    (1)

Next, the y-intercept y0 was calculated by substituting the averages of the coordinates into the point-slope form of a linear equation (Equation 2):

    y0 = (y1 + y2)/2 − m(z1 + z2)/2    (2)

The resulting slope and intercept were both stored; thus, the y-coordinate corresponding to a given z-coordinate could be calculated using Equation 3's simple linear equation:

    y = mz + y0    (3)

Sufficient deviation of a point's y-coordinate from this expected value allowed the conclusion that the point was not, in fact, part of the ground. Another—slightly less strict—threshold was used to broaden consideration to points that were very near the ground plane, as well as those actually composing it.
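
These three equations transcribe directly into code; the following sketch uses invented names:

    #include <cmath>

    struct GroundModel { float m, y0; };  // slope and y-intercept in the y-z plane

    // Fit the model from two known floor points (Equations 1 and 2).
    GroundModel fitGround(float z1, float y1, float z2, float y2)
    {
      GroundModel g;
      g.m  = (y2 - y1) / (z2 - z1);                // Equation 1
      g.y0 = (y1 + y2) / 2 - g.m * (z1 + z2) / 2;  // Equation 2
      return g;
    }

    // A point is treated as ground if its y lies within `tolerance` of the
    // model's prediction for its z (Equation 3).
    bool isGround(const GroundModel& g, float y, float z, float tolerance)
    {
      return std::fabs(y - (g.m * z + g.y0)) <= tolerance;
    }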

4 Approach

Figure 3: An overview of the progression of code development: developed the height range cropping algorithm → experimented with cluster detection → developed the ground plane edges algorithm → combined the height range and ground plane approaches.


This section of the report describes the development process of the project's algorithms and code. A brief visual overview covering the major stages of development appears as Figure 3, and sub-sections providing a corresponding narrative of each stage follow. The included Appendix provides practical information about using the robot, setting up a development environment, and upgrading the PCL installation, as well as a glossary of ROS-related terminology, lists of useful ROS commands and documentation resources, and a complete copy of the final version of the code for this project.

4.1 The height range cropping algorithm

The point cloud coming off the Kinect exhibited noticeable noise, was extremely dense, and was consequently slow to transmit, display, and process. Thus, the first action taken was the application of a voxel grid filter to downsample the data and eradicate most of the noise while achieving better update speeds and faster processing times. Noticing that both Holz et al. and Nguyen used surface detection algorithms, while Koren and Borenstein simply didn't train sensors on the floor, a decision was made to crop the y-dimension so as to discard all points falling outside the robot's height range. This step—which was possible because the robot was going to be used chiefly in indoor environments possessing smooth terrain—made it possible to ignore the floor and focus exclusively on those points that represented actual obstacles. However, it also meant sacrificing the ability to climb ramps and traverse highly uneven floors.
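
This crop maps naturally onto PCL's PassThrough filter; a sketch follows, with placeholder limits standing in for the robot's actual height range:

    #include <pcl/point_types.h>
    #include <pcl/filters/passthrough.h>

    // Keep only points whose y-coordinate lies within the robot's height range.
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    cropToHeightRange(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::PassThrough<pcl::PointXYZ> pass;
      pass.setInputCloud(cloud);
      pass.setFilterFieldName("y");         // the Kinect's vertical axis
      pass.setFilterLimits(-0.40f, 0.10f);  // placeholder height range, in meters
      pass.filter(*cropped);
      return cropped;
    }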

The initial revision of the obstacle avoidance algorithm simply split the view into three parts: The center region was used to determine whether to proceed forward or turn, the latter of which was triggered whenever the number of points in this region exceeded a set noise threshold. Once the robot had entered a turning mode, it ceased forward motion and decided on a direction by choosing the peripheral vision field with fewer points in it. The entire field of view was cropped in the z-dimension in order to prevent the robot from being distracted by objects well ahead of its current position.

The biggest problem with this first version was that the robot was prone to becoming stuck oscillating in place between a left and a right turn when faced with a sufficiently large obstruction. To work around this problem, the machine was only allowed to choose a direction of rotation as long as it wasn't already turning. In this way, it was forced to pick a direction whenever it first encountered an obstacle, then continue turning in that direction until it was able to drive forward again. As a side effect, it would now rotate ad infinitum when enclosed on all sides.

As testing continued, it became clear that the noise threshold was preventing the detection of many small—but still significant—obstacles. Decreasing this constant, however, caused the robot to turn spuriously in order to avoid offending points that were, in fact, nothing but noise. To solve this problem, the noise threshold was eliminated altogether by instead averaging the number of points in the forward regions of the last several images taken.

Next, a relatively minor but undeniable problem was discovered: given a scene where the only obstacle was located mainly within one half of the center region and didn't extend into either periphery, the robot might just as easily turn toward the object as away from it, thereby forcing itself to turn farther. Replacing the consideration of the peripheral regions with a simple turn away from the centroid of all points detected in the center region solved this issue.
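
The decision reduces to one centroid computation per frame; in sketch form (the sign convention for the Kinect's x-axis is an assumption):

    #include <pcl/point_types.h>
    #include <pcl/common/centroid.h>

    // Return -1 to turn left or +1 to turn right, steering away from the side
    // of the view on which the obstacle points' centroid falls.
    int chooseTurnDirection(const pcl::PointCloud<pcl::PointXYZ>& obstacles)
    {
      Eigen::Vector4f centroid;
      pcl::compute3DCentroid(obstacles, centroid);
      return centroid[0] < 0.0f ? +1 : -1;  // assumes negative x is the robot's left
    }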

4.2 Experiments with cluster detection

In an effort to allow the traversal of more complicated, maze-like situations, work began on a track that would eventually lead to a dead end. The idea was that, in severely confined spaces, the robot will attempt to turn long before reaching a wall, missing the side passageway because it turns all the way around before it ever gets to the point where it could have entered it. In order to solve this problem, an attempt was made at implementing the ability to distinguish between individual objects using the Point Cloud Library's built-in implementation of the simple Euclidean cluster detection algorithm. An iterative algorithm to determine the perpendicular distances between objects' edges was developed and implemented, and the new measurements were used to determine whether the robot could fit through a given gap. Next, the areas in front of the gaps were checked for blockages, then the candidate openings were ranked based on their distances from the center of view. Unfortunately, it soon became clear that although this approach did a better job of planning logical paths in confined spaces, it was largely unsuitable for use with the Kinect because of the sensor's inability to detect sufficiently-close obstacles. This meant that, before even getting through a gap, the bot would lose sight of it. In order to work around this hardware limitation, a state machine could have been implemented and the ability to measure driving distance could have been added. Unfortunately, such steps would have resulted in complete blindness during the time the robot was traversing the gap, and consequently a vulnerability to any unexpected environmental changes during that time. As such, the work was abandoned in search of a more general and universally-applicable solution.
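
For reference, the built-in implementation referred to above is PCL's EuclideanClusterExtraction; a sketch with illustrative parameters:

    #include <vector>
    #include <pcl/point_types.h>
    #include <pcl/search/kdtree.h>
    #include <pcl/segmentation/extract_clusters.h>

    // Group the cloud into per-object clusters.
    std::vector<pcl::PointIndices>
    findClusters(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
    {
      pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
      tree->setInputCloud(cloud);

      std::vector<pcl::PointIndices> clusters;
      pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
      ec.setClusterTolerance(0.05);  // points within 5 cm join the same cluster (illustrative)
      ec.setMinClusterSize(50);      // discard clusters too small to be objects (illustrative)
      ec.setSearchMethod(tree);
      ec.setInputCloud(cloud);
      ec.extract(clusters);          // one PointIndices entry per detected object
      return clusters;
    }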

4.3 The ground plane edges algorithm

Toward the end of development of the gap detection algorithm, another severe problem surfaced; it was discovered that, due to a combination of noise and distortions in the robot's coordinate system, both of the algorithms developed thus far were unable to detect objects as much as a couple of inches high. Noticing that all objects resting on or otherwise obscuring the ground created prominent occlusions on it, an effort was made toward detecting these discontinuities in the ground plane. First, a section of the ground corresponding to the region immediately in front of the robot—and hence in its path—was selected from the rest of the point cloud by tight cropping. Then, the slope of the floor was modeled to account for the Kinect's coordinate distortion, and all points falling outside a given height tolerance of this plane were filtered out. By examining the surface normals of this isolated sample, the edge points could be estimated. Next, a radius-driven minimum neighbors filter was applied to eliminate false positives. The results were promising when tested on a smooth carpet: after some fine-tuning, no false positives were being detected and a good number of edge points arose when any given obstruction was placed on the ground in front of the sensor. Unfortunately, speed had become a problem, as estimating the edge points was taking several seconds per sample.

It was in order to solve the speed issues that Choi's work was used; by making use of the organization of the Kinect's point cloud data instead of constructing an entire search tree for each frame, his algorithms were able to function at least an order of magnitude faster than the main PCL edge detection routines. At first, his absolute plane boundaries detector was used, but this was not ideal for two main reasons: First, it was unable to pick up objects in the middle of the portion of the plane which we were examining. Additionally, it was vulnerable to poor-quality floor samples far ahead, which would appear as rounded patches cutting measurably into the distant edge of the floor plane. Consequently, Choi's class was patched to enable greater control over its high curvature edge detection, which—similarly to the earlier approach—makes use of the plane's normals rather than its boundaries, and is therefore less vulnerable to noise once one has filtered out all but the closest points to the floor plane. A careful tuning of the edge detection and outlier removal parameters succeeded in eliminating almost all false positives while quite effectively capturing the footprints of those objects that intruded on the focal area of the ground plane. The robot was then programmed to turn away from the centroid of the detected edge points.

Unfortunately, this approach alone was unable to detect obstacles falling completely in front of the area of interest on the ground or expansive holes at any distance. In anticipation of such situations, the total number of detected ground points was compared to a set threshold; if it fell under this value, the robot would back up in order to get a broader view of the obstruction. This turned out to be a poor way to handle the situation, however, as the number of ground points varied significantly depending on the type of flooring, and backing up blindly often resulted in crashing into some invisible obstruction. As such, absolute plane boundaries were merged back in, this time in addition to curvature detection, and with the added restriction of ignoring expected border regions for the former in order to solve the problem of distant noise. Now, if the edge of the ground moved into the area where the plane was expected to be fully intact, it was assumed that there was either a hole encroaching upon the robot's position or an object between the Kinect and the close edge of the portion of the ground visible to it, and the detected edge points were pooled with the curvature keypoints in order to determine which direction to turn.

Together, the curvature points and outstanding plane boundary points were able to keep the Kinect from getting close enough to most obstacles to become completely blind. However, to further ensure the robot's safety, a third check was added: As the robot drove forward or turned, it constantly remembered the direction in which it would have turned—whether or not it had actually done so—given the data from the previous frame. In the case where no ground points were visible, and thus something was completely obscuring the Kinect's view, it would then begin to turn in the stored direction. This step proved effective against high-speed situations where moving objects' trajectories, when combined with the processing delay, brought obstructions out of the robot's view before it had yet evaluated them, as well as scenarios where a large obstruction was suddenly placed very close to the robot's front.

4.4 Combining the height range and ground plane approaches

While the floor occlusion detection approach worked very well for just about everything, it had a somewhat significant disadvantage that was not shared by the earlier height range–cropping approach: When confronted with a long, deep object having a region without floor contact—a bench or vending machine, for instance—the system was unable to detect it because of its lack of interactions with the floor plane. In order to solve this shortcoming, the two approaches were combined into a single program; each was placed in a separate thread, with a third thread to integrate the steering advice of each. This approach solved the problem of suspended objects and enabled faster response to objects detectable by the less computationally-intensive height region approach while preserving the robust detection capabilities of the surface analysis. A sketch of this structure appears below.
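
A minimal sketch of the integration rule from Figure 5c, with all names invented; the two detection threads would write their advice into the shared structure, and the third thread would read it:

    #include <mutex>

    enum class Advice { Forward, TurnLeft, TurnRight };

    // Shared steering advice written by the two detection threads.
    struct SharedAdvice {
      std::mutex lock;
      Advice heightRange = Advice::Forward;
      Advice groundPlane = Advice::Forward;
    };

    // One integration step: agreement wins; otherwise a turn overrides
    // "forward", and the floor analysis settles conflicting turns.
    Advice integrate(SharedAdvice& shared)
    {
      std::lock_guard<std::mutex> guard(shared.lock);
      if (shared.heightRange == shared.groundPlane) return shared.heightRange;
      if (shared.heightRange == Advice::Forward)    return shared.groundPlane;
      if (shared.groundPlane == Advice::Forward)    return shared.heightRange;
      return shared.groundPlane;  // conflicting turns: trust the floor analysis
    }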

A detailed visual summary of the code's progression along with the time frames of feature additions is given in Figure 4. The test cases noteworthy enough to have prompted implementation changes are collected in Table 1. Additionally, the pseudo code for the final project is discussed in Figure 5.

5 Results and Discussion

While Section 4 discussed the changes that were made to the code as a result of the challenges cataloged in Table 1, we will now discuss the behavior of the final implementation when faced with these same scenarios. In order to quantify the system's success rate, intensive testing of these cases was performed.


Figure 4: The complete progression of code development by week (spanning weeks 2 through 9):

• Applied a voxel grid downsampling
• Cropped out points outside the robot's height range
• Split the view into a center and two peripheries
• Established a center noise threshold for turning
• Disallowed oscillatory direction reversals
• Averaged center scans instead of using the noise threshold
• Based the turn direction on the obstacles' centroid instead of the number of points
• Implemented Euclidean cluster detection
• Moved toward sufficiently-narrow openings between segmented objects
• Ignored blocked and inaccessible passageways
• Used a linear plane model to focus exclusively on the floor
• Employed normal calculation to find holes in the plane
• Added radius-based outlier removal to reduce false positives
• Switched to a faster algorithm only detecting absolute plane boundaries
• Switched to detecting edges based on distinctive curvature
• Backed up when floor visibility fell below a set threshold
• Added back absolute plane boundary detection with regions in the vicinity of the expected perimeter ignored
• Replaced backing up with turning in the direction computed while processing the previous frame
• Replaced the floor visibility tolerance with a check for complete blindness
• Merged in the downsampled height range algorithm, running it in a parallel thread


(a) The thread implementing the height range cropping algorithm:

    Downsample the point cloud.
    Crop out everything outside the robot's height and width regions and more than a set distance away.
    If the robot is currently turning:
        clear the backlog of prior point counts.
    Otherwise:
        add the current number of visible points to the backlog;
        if the backlog is bigger than its size limit, pop the oldest entries off of the log until it is small enough;
        compute the average number of points across all backlog entries.
    If, after averaging, we counted any points in our way:
        calculate and turn away from the centroid of the points in the current cloud.
    Otherwise:
        drive straight forward.

Figure 5: The final program's flow of control (continued below)


(b) The thread implementing the ground plane edges algorithm:

    If the user has changed the parameters for the floor points, recompute the ground plane model.
    Crop out everything except the region containing the floor.
    Use the linear ground plane model to remove all points above a certain distance away from the floor, and count the number of actual floor points.
    If we found any floor points:
        estimate edge points based on curvature and absolute plane boundaries;
        remove edge points falling within the regions of the expected borders;
        detect and remove outliers based on neighbor radii.
        If any edge points were detected and left unfiltered:
            calculate and turn away from the centroid of all the edge points.
        Otherwise:
            choose and remember for later the turning direction of the side with more floor points, and drive straight forward.
    Otherwise:
        turn in the direction chosen on the previous iteration.

(c) The thread integrating the decisions of the two separate algorithms and issuing appropriate drive commands:

    If the two algorithms' advice agrees:
        take their mutual recommendation.
    Otherwise, if either one is advising us to drive forward:
        take the advice telling us to turn.
    Otherwise:
        use the direction suggested by the floor analysis.
    If this would be a turn direction reversal, turn instead in the same direction as before.
    If the user has enabled robot movement, send the navigation commands to the drive system.

Figure 5: The final program's flow of control (continued)


Table 1: Test cases to which the implementation was subjected

    Scenario                    Examples tested                             True positives  False negatives  % correct
    Wide obstacle               Wall, recycle bin, round trashcan                       55                0       100%
    Tall, thin obstacle         Chair leg, table leg                                    36                0       100%
    Short object on the ground  Pad of paper, marker, serial console cable              29               25        54%
    Hole in the ground          Staircase                                               18                0       100%
    Suspended object            Vending machine                                         18                0       100%
    Overall:                                                                           156               25        86%

5.1 Intensive testing by structured test cases

For each row of Table 1, the specific cases given in the second column were tested at three different distances. First, each object was tested at long range; it would be placed ahead of the robot's area of interest so that the robot would first encounter it as it was in the process of approaching. Next, the object was placed a medium distance away, within the area of interest, before the algorithm was started, so that it was visible from the very beginning of the trial. Finally, it was placed extremely close to the robot so that it fell in front of the region visible to the Kinect and was not directly visible. At each distance, the item would be tested six times, three on each side of the robot, and each trial would be recorded as either a true positive or a false negative, with the former also classified by expected or unexpected turn direction.

When the autonomous robot encounters a wide obstacle, both algorithms are able to sense it. Additionally, the robot is not allowed to reverse the direction of its turn, so it cannot enter an oscillatory state. A corollary to this behavior is that, if the robot is surrounded on all sides, it will continue spinning until freed. During testing, the system was able to detect the sample object 100% of the time, as well as choose the appropriate turn direction in all cases except when the wall was in the closest configuration. In such a case, the robot's view is completely blocked, and it is forced to decide on a direction arbitrarily unless it has already remembered one while processing a previous frame.

Tall, thin obstacles with a limited base footprint such as a chair or table are best detected by the height range algorithm. It is only thanks to the noise reduction attained by averaging several samples that it is able to spot such small objects. Testing of this functionality revealed 100% accuracy for both detection and choice of direction.

Short objects resting on the floor, including markers and pads of paper, are typically invisible to the cropping algorithm. However, the curvature of the contact points between their edges and the ground plane is usually detectable to the other approach, largely depending upon the distinctiveness of their edges and amount of contact area. While the experimental results show only a 54% success rate for this type of challenge, it should be noted that 18 of the 25 false negatives occurred at the closest distance. Detection was actually impossible in these cases because the objects had completely disappeared from the Kinect's field of view and weren't tall enough to occlude the visible region of the floor. If this portion of the test is discounted, one instead finds an 81% accuracy, with the console cable detected every time. Given that the pad of paper mimics the ground plane and the marker has very little floor contact, this detection rate seems reasonable.

When a hole in the ground such as a descending staircase comes into view, the ground plane algorithm detects that the ground's absolute plane boundary has receded into an unexpected region. All plane boundary points falling outside of the expected regions are treated identically to curvature points, and the direction recommendation is accordingly based on their centroid. Such situations were detected without error and resulted in the direction of the shorter turn the majority of the time.

Upon approaching a suspended or apparently-suspended object such as a bench or vending machine, the plane detection algorithm sees no change in the floor plane. The cropping method, however, is able to see that an object is infringing upon the robot's height range, and directs it to turn away from the obstruction's centroid. These threats were always detected, and the test unit was low enough to the floor to obscure the Kinect's view of the ground by the time it was very close, so that even the close trials discovered its presence. Of course, this ability would vary with the elevation of the bottom ledge of the object in question, as well as its depth if positioned in front of a wall.

On the whole, the algorithms suffer from almost no problems with false positives. However, intense external infrared radiation is capable of masking the Kinect's infrared grid. If, for instance, the robot is approaching a patch of direct sunlight shining through a window, it will appear from the Kinect's point cloud as though there is a hole in the ground: the absolute plane boundary will begin to recede into the expected ground region. Consequently, the robot will avoid such regions, treating them as actual danger.

When face-to-face with an entirely transparent glass wall, the Kinect's infrared grid passes straight through the window. Therefore, the barrier isn't reflected in the returned point cloud, and neither algorithm is able to see it at all.

5.2 Extensive testing by environmental observation

With the intensive test cases evaluated, the robot was released on all three floors of the building for unstructured test drives in order to determine how likely it was to encounter the already-tested situations. Each time the robot turned, either a true positive or a false one was recorded, depending on whether there was actually an object in its way. Additionally, false negatives were to be noted every time the robot actually ran into anything; however, this never occurred in the test environment. The results of these observations are noted in Table 2.

Each false positive uncovered by the first two courses occurred during the robot's transition from a tile surface to a carpet, or vice versa; these resulted when the slight height changes between the surfaces triggered the ground plane curvature detection, and could likely be eliminated simply by fine-tuning the detection parameters. Such cases never resulted in more than a few degrees of turning before the robot resumed driving forward. In the third and fourth trials, the robot drove across a floor with larger, less regular tiles; here, it would turn away from particularly uneven edges. As before, it also picked up a few false positives when transitioning between floorings. However, the majority of its unprompted turns in this case stemmed from sunlight: while on the tile floor, it encountered several patches and spent some time wandering between them before being freed.

The addition of the ground plane edge detection to the traditional obstacle avoidance solution brings several key advantages: First, examining the curvature of the plane enables the detection of almost any obstacle that makes contact with the floor, including those that are very short. Next, looking for absolute plane edges means hazards that have no corresponding obstacle within the robot's height range—such as holes in the ground—can be easily avoided. Finally, since objects passing in front of the infrared emitter occlude the floor in front of the sensor, examining plane edges also reveals the presence of objects suddenly appearing very close to the sensor, including animate objects whose motion is perpendicular to the robot's; in this way, the algorithm is able to infer the presence of objects that are closer than the Kinect's hardware-limited minimum range. The latter principle makes the plane analysis approach especially beneficial for sensors such as the Kinect that are unable to see objects closer than a certain distance away.

As noted earlier, the complete implementation fails to perform in two specific cases: It is vulnerable to false positives in patches of direct sunlight and to false negatives in the case of transparent walls. Such problems stem from the limitations of the infrared-based Kinect point cloud; however, they could likely be solved by examining the Kinect's RGB data in tandem with its depth values.

Table 2: Observations from unstructured test drives

    Location           True positives  False positives  False negatives  Success
    First floor                    29                1                0      97%
    Second floor                   55                2                0      96%
    Third floor                    66                4                0      94%
    Atrium, side wing             105               17                0      85%
    Overall:                      255               24                0      91%

6 Table of Hours Worked

The time spent working on the project is itemized in Table 3. The time frame of the code's evolution is described in Figure 4.

Table 3: Hours spent working on the project (per-week entries across Research, Implementation, Testing, Documentation, and Administration, listed in that order, with weekly subtotals)

    Week 1:  14:00, 6:20, 18:00              (subtotal 38:20)
    Week 2:  22:00, 9:00, 3:00, 3:20, 1:30   (subtotal 38:50)
    Week 3:  12:00, 4:00, 16:00, 12:40       (subtotal 44:40)
    Week 4:  3:00, 8:00, 5:30, 4:00, 5:00    (subtotal 25:30)
    Week 5:  3:00, 13:00, 3:00, 10:00, 3:00  (subtotal 32:00)
    Week 6:  11:10, 9:00, 4:00, 20:00        (subtotal 44:10)
    Week 7:  18:00, 8:00, 5:00, 14:00        (subtotal 45:00)
    Week 8:  8:00, 5:00, 21:00, 8:10         (subtotal 42:10)
    Week 9:  4:00, 3:20, 28:00, 5:00         (subtotal 40:20)
    Week 10: 3:00, 10:00, 6:30, 14:30        (subtotal 34:00)
    Total: 385:00

7 Conclusion

The combination of the two methods of achieving obstacle avoidance was highly successful because they complemented each other so well: The cropping approach, which was by nature incapable of detecting very short objects or floor discontinuities, was able to rely on the less common plane surface analysis for these tasks. The plane analysis, on the other hand, was poor at detecting the truly- or apparently-suspended objects that were readily detected by the other. As might be expected, then, the synthesis of the two algorithms was able to autonomously navigate in almost all tested indoor situations without a problem. Thus, the project's goals were realized.


References

Canny, John. (1986). A computational approach to edge detection. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 8, no. 6, November 1986. July 26, 2012. 〈http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4767851〉.

Choi, Changhyun. (2012). Organized Edge Detection. PCL Google Summer of Code Developers Blog. July 26, 2012. 〈http://pointclouds.org/blog/gsoc12/cchoi〉.

Herrera, Daniel et al. (2011). Accurate and practical calibration of a depth and color camera pair. In Computer Analysis of Images and Patterns (CAIP 2011). June 8, 2012. 〈http://www.ee.oulu.fi/~dherrera/papers/2011-depth_calibration.pdf〉.

Holz, Dirk et al. (2011). Real-time plane segmentation using RGB-D cameras. In Proceedings of the 15th RoboCup International Symposium, Istanbul, July 2011. June 11, 2012. 〈ais.uni-bonn.de/papers/robocup2011_holz.pdf〉.

Koren, Y. and Borenstein, J. (1991). Potential field methods and their inherent limitations for mobile robot navigation. In Proceedings of the IEEE Conference on Robotics and Automation, Sacramento, April 7-12, 1991, 1398-1404. June 8, 2012. 〈http://www-personal.umich.edu/~johannb/Papers/paper27.pdf〉.

Nguyen, Van-Duc. (2012). Obstacle avoidance using the Kinect. June 15, 2012. 〈http://scribd.com/doc/80464002/Obstacle-Avoidance-Using-the-Kinect〉.

O'Leary, Gabe. (2011). Removing outliers using a Conditional or RadiusOutlier removal. Point Cloud Library Tutorials. July 26, 2012. 〈http://pointclouds.org/documentation/tutorials/remove_outliers.php〉.

Rusu, Radu B. (2011). Downsampling a PointCloud using a VoxelGrid filter. Point Cloud Library Tutorials. July 26, 2012. 〈http://pointclouds.org/documentation/tutorials/voxel_grid.php〉.

Talukder, A. et al. (2002). Fast and reliable obstacle detection and segmentation for cross-country navigation. In Proceedings of the IEEE Intelligent Vehicle Symposium (IVS 02), 610-618. June 25, 2012. 〈http://users.soe.ucsc.edu/~manduchi/papers/IV02.pdf〉.

Viola, Paul and Jones, Michael. (2001). Robust real-time object detection. In Second International Workshop on Statistical and Computational Theories of Vision (IJCV 2001), Vancouver, July 13, 2001. July 26, 2012. 〈http://research.microsoft.com/~viola/Pubs/Detect/violaJones_IJCV.pdf〉.


Appendices

1 Starting the TurtleBot

1. Disconnect both chargers from the robot, if applicable.

2. Turn on the iRobot Create by pressing the power button on its back; the power light should turn green.

3. Unplug and remove the laptop from the TurtleBot.

4. Open the laptop’s lid and press the power button.

5. Close the laptop, replace it in the chassis, and reconnect the cables.

6. Wait until the Ubuntu startup noise sounds; at this point, the robot is ready to accept connections.

7. From another machine, enter: $ ssh <username>@turtlebot.rit.edu

8. Once authenticated, ensure that the robot service is running: $ sudo service turtlebot start

9. The iRobot Create should beep and its power light should go out. The robot is now ready for use.

10. Enter the following command to enable the Kinect: $ nohup roslaunch turtlebot_bringup kinect.launch &

11. Enter the following command to enable the Interactive tab in RViz and allow GUI-driven teleoperation: $ nohup rosrun turtlebot_interactive_markers turtlebot_marker_server &

12. You may now safely close your robot shell connection: $ exit

2 Stopping the TurtleBot

1. Connect to the robot.

2. Stop the Interactive Markers server: $ kill `ps -ef | grep marker_server | tr -s " " | cut -d " " -f 2`


3. Stop the Kinect driver with: $ kill `ps -ef | grep kinect.launch | grep -v grep | tr -s " " | cut -d " " -f 2`

4. Release the iRobot Create with: $ rosservice call /turtlebot_node/set_operation_mode 1

5. At this point, it is safe to plug the charger into the iRobot Create. If you want to turn off the laptop as well, continue with the steps below instead.

6. Shut down the robot laptop: $ sudo halt

7. Turn off the Create by pressing its power button.

8. Plug in the chargers for the iRobot Create and the laptop.

3 Setting up a Development Workstation

1. Ready a machine for your use. (We'll assume you're using Ubuntu 10.04 through 11.10.)

2. Ensure that your system has either a hostname or a static IP that is visible from the robot.

3. Download the ROS package signing key: $ wget http://packages.ros.org/ros.key

4. Add the signing key to your system: $ sudo apt-key add ros.key

5. Add the ROS repository to your system: $ sudo apt-add-repository http://packages.ros.org/ros/ubuntu

6. Update your repository cache: $ sudo apt-get update

7. Install the TurtleBot desktop suite: $ sudo apt-get install ros-electric-turtlebot-desktop

8. Edit your bash configuration ($ $EDITOR ~/.bashrc), adding the following lines to the end:

29

Page 30: Obstacle Detection and Avoidance Using TurtleBot Platform ...sboucher/turtlebot.pdftransitive property of the above obstacle composition criteria. This works very well if the terrain

• source /opt/ros/electric/setup.bash
• export ROS_MASTER_URI=http://turtlebot.rit.edu:11311
• export ROS_PACKAGE_PATH=<directory where you'll store your source code>:$ROS_PACKAGE_PATH

9. Write and close the file, then enter the following command in each of your open terminals: $ source ~/.bashrc

10. Install the Chrony NTP daemon: $ sudo apt-get install chrony

11. Synchronize the clock: $ sudo ntpdate ntp.rit.edu

12. If the robot and the workstation both have hostnames, but they are in different domains, perform the following steps. (In this example, the robot is at turtlebot.rit.edu and the workstation is at turtlecmd.wireless.rit.edu.)

(a) On each machine, right-click the Network Manager applet in the notification area, choose Edit Connections..., and open the properties for the specific connection that is being used.

(b) On the IPv4 Settings tab, change the Method dropdown to Automatic (DHCP) addresses only.

(c) In the DNS servers field, enter the same DNS servers that were being used, with commas in between (e.g. 129.21.3.17, 129.21.4.18).

(d) In the Search domains field, enter the local machine's domain first, followed by the remote machine's. For instance, in our example, one might enter rit.edu., wireless.rit.edu. on the robot and wireless.rit.edu., rit.edu. on the workstation.

(e) Save all your changes and exit the Network Connections dialog.

(f) Force a reconnection by clicking on the Network Manager applet, then selecting the network to which you are already connected.

13. If the workstation has no qualified hostname and is to be reached via a static IP, make the following changes on the robot instead:

(a) Edit the robot’s hosts file: $ sudo $EDITOR /etc/hosts


(b) For each static host, add a line such as: 〈IP address〉 〈hostname〉. It is important to note that the hostname you use must exactly match the output of $ hostname on the development workstation.

(c) Save the file and quit; the changes should take effect immediately and automatically.

4 Upgrading to the Latest Version of PCL

The version of the Point Cloud Library shipped with ROS lags significantly behind that available directly from the community. These instructions show how to install the latest version of PCL on top of an existing ROS Electric installation.

1. Create a folder to contain the build files and a replacement copy of the perception_pcl stack, and move into it: $ mkdir ~/ros && cd ~/ros

2. Install the Python package management utilities: $ sudo apt-get install python-setuptools

3. Install the rosinstall utility: $ sudo easy_install -U rosinstall

4. Create a new ROS overlay in the current directory: $ rosinstall . /opt/ros/electric

5. Install any missing build dependencies: $ rosdep install perception_pcl

6. Obtain a rosinstall file describing the repository for the perception_pcl stack: $ roslocate info perception_pcl > perception_pcl.rosinstall

7. Edit the rosinstall file to point at the correct repository for Electric: $ sed -i s/unstable/electric_unstable/ perception_pcl.rosinstall

8. Fetch the makefiles for the stack: $ rosinstall . perception_pcl.rosinstall

9. Inform your shell of the overlay’s location: $ source setup.bash


10. Move into the cminpack directory: $ cd perception_pcl/cminpack

11. Build the package: $ make

12. Move into the flann directory: $ cd ../flann

13. Build the package: $ make

14. Move into the pcl directory: $ cd ../pcl

15. Select the most recent tagged version of the code, for instance: $ sed -i s/\\/trunk/\\/tags\\/pcl-1.6.0/ Makefile

16. Build the PCL codebase: $ make

17. Move into the pcl_ros directory: $ cd ../pcl_ros

18. Build the ROS PCL bindings: $ make

19. Move back out into the stack: $ cd ..

20. Build the stack’s particulars: $ make

21. Edit your bashrc file to add the following line after the line that sources the system-wide ROS setup.bash: source ~/ros/setup.bash

22. If you intend to continue using your current terminals, enter the following in each after saving the file: $ source ~/.bashrc

5 Backporting in a Class from the PCL Trunk

Often, the trunk version of the Point Cloud Library will fail to compile; therefore, it may be desirable to backport a specific class from trunk into a released copy of the library. For instance, the code written for this project relies on the OrganizedEdgeDetection class, which—at the time of writing—is only available from trunk. These steps present an example of how to backport revision 6467 of this specific class and the new subsystem on which it relies into the 1.6.0 release of PCL. We’ll assume that the steps from Section 4 of the Appendix have already been completed.

1. Change to the source directory of your newly-compiled copy of the Point Cloud Library: $ roscd pcl/build/pcl_trunk


2. Download the required 2d subsystem: $ svn checkout http://svn.pointclouds.org/pcl/trunk/2d@r6467

3. Move into the directory that is to contain the OrganizedEdgeDetection header: $ cd features/include/pcl/features

4. Download the header: $ svn export http://svn.pointclouds.org/pcl/trunk/features/include/pcl/features/organized_edge_detection.h@r6467

5. Move into the directory that is to contain the templated code: $ cd impl

6. Download the templated source: $ svn export http://svn.pointclouds.org/pcl/trunk/features/include/pcl/features/impl/organized_edge_detection.hpp@r6467

7. Move into the directory that is to contain the instantiations: $ cd ../../../../src

8. Download the instantiations list: $ svn export http://svn.pointclouds.org/pcl/trunk/features/src/organized_edge_detection.cpp@r6467

9. Move back into the root of the code directory: $ cd ../..

10. Edit the features package’s build configuration: $ $EDITOR features/CMakeLists.txt

(a) At the end of the SUBSYS_DEPS list, add: 2d

(b) Under set(incs, add: include/pcl/${SUBSYS_NAME}/organized_edge_detection.h

(c) Under set(impl_incs, add: include/pcl/${SUBSYS_NAME}/impl/organized_edge_detection.hpp

(d) Under set(srcs, add: src/organized_edge_detection.cpp

11. Apply the necessary patches, as described in the included directory that comes with my code.

12. Return to the package root: $ roscd pcl

13. Build in the changes and relink: $ make


6 ROS Glossary

• message. A ROS communication packet that carries information between nodes in a single direction. New message types may be declared by creating text files in a package’s msg directory and enabling the rosbuild_genmsg() directive in its CMakeLists.txt file. The composition of existing message types may be found using the rosmsg show command. ROS automatically generates a C++ struct for each message type; these struct types are declared in the 〈package〉/〈messagetype〉.h headers, which must be included before they may be used. Two examples of useful message types are std_msgs/Int32—an int—and geometry_msgs/Twist—used for driving the Create around.
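
As a brief illustration—a sketch that is not taken from the project code—the following lines (which belong inside a running node) fill in a geometry_msgs/Twist struct; the field names are the ones reported by rosmsg show geometry_msgs/Twist:

#include "geometry_msgs/Twist.h" //generated struct for the Twist message type

geometry_msgs::Twist command; //all fields start zeroed
command.linear.x=0.25; //forward speed, in m/s
command.angular.z=0.5; //counterclockwise yaw rate, in rad/s
//the filled-in struct may now be handed to ros::Publisher::publish()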

• node. A single ROS executable, which may be added to a package by appending a rosbuild_add_executable directive to the CMakeLists.txt file of the latter. Once the package has been compiled using GNU Make, each of its nodes may be run using the rosrun command.
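
The smallest useful node is little more than a main() that registers with the master and then spins; this minimal sketch assumes a hypothetical package whose CMakeLists.txt contains rosbuild_add_executable(hello src/hello.cpp):

#include "ros/ros.h"

int main(int argc, char** argv)
{
	ros::init(argc, argv, "hello"); //the node's name, as seen by the master
	ros::NodeHandle node;
	ROS_INFO("Node is up");
	ros::spin(); //process callbacks until the node is interrupted
	return 0;
}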

• package. A “project” containing executables and/or libraries; new packages may be created with the roscreate-pkg command, and existing ones may be imported into a dependent one by adding depend tags to its manifest.xml file. A couple of important packages are roscpp, which contains the ros/ros.h header that allows one to interface with ROS, and pcl_ros, which depends on the pcl package to provide the Point Cloud Library bindings.
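
As a sketch, a hypothetical package depending on both of these would carry the following lines inside the <package> element of its manifest.xml:

<depend package="roscpp"/>
<depend package="pcl_ros"/>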

• parameter. A variable hosted on the ROS parameter server; it is persistent across multiple runs of a node, provided that the ROS master is not restarted. Depending upon the node’s implementation, changing one of its parameters while it is running may also affect its continued behavior. The user interface to the parameter server is provided by the rosparam command, while the C++ API supports the analogous setParam, getParam, deleteParam, and other methods located in the ros::NodeHandle class.
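
For example, given a ros::NodeHandle named node, a program might consult the server like so; the parameter name drive_verbose is one of those actually used by the code listing in Section 9:

bool verbose=false; //retains this default if the parameter is unset
node.getParamCached("drive_verbose", verbose); //cached variant of getParam
node.setParam("drive_verbose", true); //persists until the ROS master is restarted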

• service. A link between ROS nodes allowing two-way communication carried in the form of service types from a client to a server. The user may call an existing service using the rosservice command, while C++ programs may create and call services via the ros::ServiceServer and ros::ServiceClient classes, which may be built by means of the advertiseService and serviceClient methods of ros::NodeHandle. Service types—the analog of messages from the world of topics—may be declared in text files within a package’s srv directory after enabling its CMakeLists.txt file’s rosbuild_gensrv() call. Service types’ components may be seen with the rossrv show invocation, and C++ service structs are generated and used similarly to those for messages. One example of a service used on the TurtleBot is /turtlebot_node/set_operation_mode, which takes an integer—usually 1, 2, or 3—responds with whether it is valid, and brings the iRobot Create into Passive, Safety, or Full mode, respectively.
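
As a client-side sketch—assuming a hypothetical server advertising a service of the standard std_srvs/Empty type at /reset_state—a complete caller looks like this:

#include "ros/ros.h"
#include "std_srvs/Empty.h" //generated structs for the standard Empty service type

int main(int argc, char** argv)
{
	ros::init(argc, argv, "resetter"); //hypothetical node name
	ros::NodeHandle node;
	ros::ServiceClient client=node.serviceClient<std_srvs::Empty>("/reset_state");
	std_srvs::Empty exchange; //this service type carries no data in either direction
	if(client.call(exchange)) //blocks until the server responds
		ROS_INFO("Call succeeded");
	else
		ROS_ERROR("Call failed; is the server running?");
	return 0;
}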

• topic. A link between ROS nodes that allows one-way communication of information carried in the form of messages from a publisher to one or more subscribers. The user may publish or subscribe to a topic by means of the rostopic command, while C++ programs may do so by creating a ros::Publisher or ros::Subscriber object using the ros::NodeHandle class’s advertise or subscribe method. Examples of topics on the TurtleBot are /cmd_vel—modified in order to control the Create’s drive and steering—and /cloud_throttled—which provides the point cloud from the Kinect.
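
Putting the pieces together, a node could drive the Create slowly forward by publishing Twist messages to /cmd_vel; this minimal sketch (node name and speed chosen arbitrarily) does so at 10 Hz:

#include "ros/ros.h"
#include "geometry_msgs/Twist.h"

int main(int argc, char** argv)
{
	ros::init(argc, argv, "creeper");
	ros::NodeHandle node;
	ros::Publisher velocity=node.advertise<geometry_msgs::Twist>("/cmd_vel", 1);
	ros::Rate rate(10); //the Create expects a steady stream of commands
	while(ros::ok())
	{
		geometry_msgs::Twist forward; //fields start zeroed
		forward.linear.x=0.1; //m/s
		velocity.publish(forward);
		rate.sleep();
	}
	return 0;
}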

7 ROS Commands

This section aims to list the commands needed to interface with ROS and briefly address their commonly-used arguments. For the sake of clarity, the following conventions are used: unless otherwise noted, arguments in 〈angled brackets〉 are required, while those in [square brackets] are optional.

• roscore brings up a ROS master, which is useful for experimenting with ROS on one’s workstation when the TurtleBot is not online. However, in order to actually use this master instead of the TurtleBot’s, one must do the following in each pertinent shell: $ export ROS_MASTER_URI=http://localhost:11311


• roscd 〈package〉 is a convenience script that allows one to immediately move into the root directory of the specified package.

• roscreate-pkg 〈package〉 [dependencies] initializes package directories to contain the source code for one or more modules. The package directory structure will be created in a new subdirectory called 〈package〉 within the current folder, which must appear in $ROS_PACKAGE_PATH. Typically, the dependencies should include the roscpp package—which contains the ROS C++ bindings—as well as any other ROS packages that will be used, such as pcl_ros. The dependencies may be modified later by editing the manifest.xml file in the root directory of the package to add additional depend tags. ROS nodes may be added to a project by adding a rosbuild_add_executable directive to the CMakeLists.txt file, also located in the package root.
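
For example, one might create a hypothetical obstacle-avoidance package that uses both the C++ bindings and the PCL integration: $ roscreate-pkg avoider roscpp pcl_ros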

• rosmsg 〈verb〉 〈arguments〉 shows information about currently-defined message types that may be passed over topics. When verb is show and the arguments are 〈package〉/〈messagetype〉, for instance, an “API reference” of the types and names of the variables in the message’s corresponding struct hierarchy is displayed.
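
For example, $ rosmsg show geometry_msgs/Twist lists two geometry_msgs/Vector3 members, linear and angular, each composed of float64 fields x, y, and z—exactly the fields the pilot routine in Section 9 fills in.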

• rosparam 〈verb〉 〈parameterpath〉 supports verbs such as list, set, get, and delete. In the case of list, the parameterpath may be omitted if a complete listing is desired. The set invocation expects an additional argument containing the new value to be assigned.
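
For example, to let the node from Section 9 actually drive the robot, one would issue: $ rosparam set /surface/drive_move true (the leading /surface comes from the namespace handed to the NodeHandle in main()).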

• rosrun 〈package〉 〈node〉 is simply used to execute a node once the package has been compiled with make.

• rosservice 〈verb〉 〈servicepath〉 allows interfacing with the presently-available services over which service types may be sent. When verb is list, servicepath may optionally be omitted, in which case all services will be shown. With call, the user may call a service by passing arguments and receive a response as supported by the service type. The type verb is important, as it returns the package and service type corresponding to the service at the specified path.
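
For example, $ rosservice call /turtlebot_node/set_operation_mode 3 requests Full mode, and $ rosservice type /turtlebot_node/set_operation_mode reports the service type needed to interpret the exchange.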

• rossrv 〈verb〉 〈arguments〉 allows querying currently-defined service types for passing over services. When verb is show and the arguments are of the form 〈package〉/〈servicetype〉, the command outputs an “API reference”–style listing of the types and names of the variables in the struct type representing the service type.

• rostopic 〈verb〉 〈topicpath〉 provides a bridge to currently-advertised topics over which messages may be passed. When verb is list, topicpath may optionally be omitted to list all available topics. With echo, the user may subscribe to a topic and view the data that is being published to it. Conversely, invocation with pub allows publishing to the topic, which will influence the nodes that are presently subscribed to it. The type verb is particularly useful: it prints the package and message type of a given registered topic.
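
For example, $ rostopic echo /cmd_vel prints each drive command as it is published, and $ rostopic type /cmd_vel confirms that the topic carries geometry_msgs/Twist messages.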

8 Useful Links

Unfortunately, much of the ROS documentation is rather terse and unfriendly. Here, I’ve made an effort to catalog the documentation that I found most helpful. I’ve also included documentation from the PCL website, which is perhaps better organized and certainly more comprehensive than that available on the ROS Wiki.

• ROS TurtleBot wiki: http://ros.org/wiki/TurtleBot

• ROS tutorials: http://ros.org/wiki/ROS/Tutorials

• ROS C++ tutorials: http://ros.org/wiki/roscpp/Tutorials

• ROS C++ overview: http://ros.org/wiki/roscpp/Overview

• ROS C++ API reference: http://ros.org/doc/electric/api/roscpp/html

• ROS Kinect calibration tutorials: http://ros.org/wiki/openni_launch/Tutorials

• ROS PCL data type integration examples: http://ros.org/wiki/pcl_ros

• PCL tutorials: http://pointclouds.org/documentation/tutorials

• PCL API reference (1.1.0): http://docs.pointclouds.org/1.1.0

• PCL API reference (1.6.0): http://docs.pointclouds.org/1.6.0


• PCL API reference (trunk): http://docs.pointclouds.org/trunk

9 Code Listing

#include "ros/ros.h"
#include "pcl_ros/point_cloud.h"
#include "pcl/point_types.h"
#include "pcl/filters/passthrough.h"
#include "pcl/filters/voxel_grid.h"
#include "pcl/features/organized_edge_detection.h"
#include "pcl/filters/radius_outlier_removal.h"
#include "geometry_msgs/Twist.h"

/**
	Represents a request for a particular drive action, which may be to go straight, turn left, or turn right
*/
enum DriveAction
{
	FORWARD, LEFT, RIGHT
};

/**
	Performs obstacle detection and avoidance using two algorithms simultaneously
*/
class TandemObstacleAvoidance
{
	private:
		ros::NodeHandle node;
		ros::Publisher velocity;
		ros::Publisher panorama; //downsampled cloud
		ros::Publisher height; //heightRange's region of interest
		ros::Publisher ground; //groundEdges's region of interest
		ros::Publisher occlusions; //ground-level occlusions
		DriveAction currentMOTION; //pilot's account of what was last done: detection algorithms should not modify!
		DriveAction directionsPrimary; //the height range algorithm's suggestion
		DriveAction directionsSecondary; //the ground edges algorithm's suggestion
		std::list<int> heightRangeFrontSamples;
		double last_GROUND_CLOSEY, last_GROUND_CLOSEZ, last_GROUND_FARY, last_GROUND_FARZ; //only recalculate the below when necessary
		double GROUND_SLOPE, GROUND_YINTERCEPT; //model the ground's location
		DriveAction groundLastForcedTurn; //which way we would have turned: should never be set to FORWARD

		const char* directionRepresentation(DriveAction plan)
		{
			switch(plan)
			{
				case LEFT:
					return "LEFT";
				case RIGHT:
					return "RIGHT";
				default:
					return "FORWARD";
			}
		}

	public:
		/**
			Constructs the object, starts the algorithms, and blocks until the node is asked to shut down. By default, all calculations are performed, but no commands are actually sent to the drive system unless the user sets the <tt>drive_move</tt> parameter to <tt>true</tt>, using the <tt>rosparam</tt> command, for instance.
			@param handle a <tt>NodeHandle</tt> defined with the nodespace containing the runtime parameters, including <tt>drive_move</tt>
		*/
		TandemObstacleAvoidance(ros::NodeHandle& handle):
			node(handle), velocity(node.advertise<geometry_msgs::Twist>("/cmd_vel", 1)),
			panorama(node.advertise<pcl::PointCloud<pcl::PointXYZ> >("panorama", 1)),
			height(node.advertise<pcl::PointCloud<pcl::PointXYZ> >("height", 1)),
			ground(node.advertise<pcl::PointCloud<pcl::PointXYZ> >("ground", 1)),
			occlusions(node.advertise<pcl::PointCloud<pcl::PointXYZ> >("occlusions", 1)),
			currentMOTION(FORWARD), directionsPrimary(FORWARD), directionsSecondary(FORWARD),
			last_GROUND_CLOSEY(0), last_GROUND_CLOSEZ(0), last_GROUND_FARY(0), last_GROUND_FARZ(0),
			groundLastForcedTurn(LEFT)
		{
			ros::MultiThreadedSpinner threads(3);
			ros::Subscriber heightRange=node.subscribe("/cloud_throttled", 1, &TandemObstacleAvoidance::heightRange, this);
			ros::Subscriber groundEdges=node.subscribe("/cloud_throttled", 1, &TandemObstacleAvoidance::groundEdges, this);
			ros::Timer pilot=node.createTimer(ros::Duration(0.1), &TandemObstacleAvoidance::pilot, this);

			threads.spin(); //blocks until the node is interrupted
		}

		/**
			Performs the primary obstacle detection and motion planning by downsampling the tunnel-like region in front of the robot and matching its approximate height and width
			@param cloud a Boost pointer to the <tt>PointCloud</tt> from the sensor
		*/
		void heightRange(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
		{
			//declare "constants," generated as described in pilot
			double CROP_XRADIUS, CROP_YMIN, CROP_YMAX, CROP_ZMIN, CROP_ZMAX, HEIGHT_DOWNSAMPLING;
			int HEIGHT_SAMPLES;
			bool HEIGHT_VERBOSE;

			//populate "constants," generated as described in pilot
			node.getParamCached("crop_xradius", CROP_XRADIUS);
			node.getParamCached("crop_ymin", CROP_YMIN);
			node.getParamCached("crop_ymax", CROP_YMAX);
			node.getParamCached("crop_zmin", CROP_ZMIN);
			node.getParamCached("crop_zmax", CROP_ZMAX);
			node.getParamCached("height_downsampling", HEIGHT_DOWNSAMPLING);
			node.getParamCached("height_samples", HEIGHT_SAMPLES);
			node.getParamCached("height_verbose", HEIGHT_VERBOSE);

			//variable declarations/initializations
			pcl::PassThrough<pcl::PointXYZ> crop;
			pcl::VoxelGrid<pcl::PointXYZ> downsample;
			pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);
			pcl::PointCloud<pcl::PointXYZ>::Ptr front(new pcl::PointCloud<pcl::PointXYZ>);
			int averageObstacles=0; //number of points in our way after averaging our readings

			//downsample cloud
			downsample.setInputCloud(cloud);
			if(HEIGHT_DOWNSAMPLING>=0) downsample.setLeafSize((float)HEIGHT_DOWNSAMPLING, (float)HEIGHT_DOWNSAMPLING, (float)HEIGHT_DOWNSAMPLING);
			downsample.filter(*downsampled);

			//crop the cloud
			crop.setInputCloud(downsampled);
			crop.setFilterFieldName("x");
			crop.setFilterLimits(-CROP_XRADIUS, CROP_XRADIUS);
			crop.filter(*front);

			crop.setInputCloud(front);
			crop.setFilterFieldName("y");
			crop.setFilterLimits(CROP_YMIN, CROP_YMAX);
			crop.filter(*front);

			crop.setInputCloud(front);
			crop.setFilterFieldName("z");
			crop.setFilterLimits(CROP_ZMIN, CROP_ZMAX);
			crop.filter(*front);

			if(currentMOTION!=FORWARD) heightRangeFrontSamples.clear(); //use straight snapshots while turning
			heightRangeFrontSamples.push_front(front->size());
			while(heightRangeFrontSamples.size()>(unsigned)HEIGHT_SAMPLES) heightRangeFrontSamples.pop_back(); //constrain our backlog

			//compute average number of points
			for(std::list<int>::iterator location=heightRangeFrontSamples.begin(); location!=heightRangeFrontSamples.end(); location++)
				averageObstacles+=*location;
			averageObstacles/=heightRangeFrontSamples.size();

			//let's DRIVE!
			if(averageObstacles>0) //something is in our way!
			{
				float centroidX=0;

				//compute the centroid of the detected points
				for(pcl::PointCloud<pcl::PointXYZ>::iterator point=front->begin(); point<front->end(); point++)
					centroidX+=point->x;
				centroidX/=front->size();

				if(HEIGHT_VERBOSE)
					ROS_INFO("HEIGHT RANGE :: Seeing %4d points in our way\n -> Centroid is at %.3f i", averageObstacles, centroidX);

				if(centroidX<0) //obstacle(s)'[s] centroid is off to left
					directionsPrimary=RIGHT;
				else //centroidX>=0
					directionsPrimary=LEFT;
			}
			else //nothing to see here
				directionsPrimary=FORWARD;

			//send our imagery to any connected visualizer
			panorama.publish(*downsampled);
			height.publish(*front);
		}

		/**
			Performs secondary obstacle detection and motion planning by detecting curvature changes on, boundaries of, and absence of the ground plane
			@param cloud a Boost pointer to the (organized) <tt>PointCloud</tt> from the sensor
		*/
		void groundEdges(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud)
		{
			//declare "constants," generated as described in pilot
			double CROP_XRADIUS, CROP_YMIN, CROP_YMAX, CROP_ZMIN, CROP_ZMAX, GROUND_BUMPERFRONTAL, GROUND_BUMPERLATERAL, GROUND_CLOSEY, GROUND_CLOSEZ, GROUND_FARY, GROUND_FARZ, GROUND_TOLERANCEFINE, GROUND_TOLERANCEROUGH, GROUND_NORMALSMOOTHING, GROUND_THRESHOLDLOWER, GROUND_THRESHOLDHIGHER, GROUND_OUTLIERRADIUS;
			int GROUND_NORMALESTIMATION, GROUND_OUTLIERNEIGHBORS;
			bool GROUND_VERBOSE;

			//populate "constants," generated as described in pilot
			node.getParamCached("crop_xradius", CROP_XRADIUS);
			node.getParamCached("crop_ymin", CROP_YMIN);
			node.getParamCached("crop_ymax", CROP_YMAX);
			node.getParamCached("crop_zmin", CROP_ZMIN);
			node.getParamCached("crop_zmax", CROP_ZMAX);
			node.getParamCached("ground_bumperfrontal", GROUND_BUMPERFRONTAL);
			node.getParamCached("ground_bumperlateral", GROUND_BUMPERLATERAL);
			node.getParamCached("ground_closey", GROUND_CLOSEY);
			node.getParamCached("ground_closez", GROUND_CLOSEZ);
			node.getParamCached("ground_fary", GROUND_FARY);
			node.getParamCached("ground_farz", GROUND_FARZ);
			node.getParamCached("ground_tolerancefine", GROUND_TOLERANCEFINE);
			node.getParamCached("ground_tolerancerough", GROUND_TOLERANCEROUGH);
			node.getParamCached("ground_normalsmoothing", GROUND_NORMALSMOOTHING);
			node.getParamCached("ground_thresholdlower", GROUND_THRESHOLDLOWER);
			node.getParamCached("ground_thresholdhigher", GROUND_THRESHOLDHIGHER);
			node.getParamCached("ground_outlierradius", GROUND_OUTLIERRADIUS);
			node.getParamCached("ground_normalestimation", GROUND_NORMALESTIMATION);
			node.getParamCached("ground_outlierneighbors", GROUND_OUTLIERNEIGHBORS);
			node.getParamCached("ground_verbose", GROUND_VERBOSE);

			//model the plane of the ground iff the user changed its keypoints
			if(GROUND_CLOSEY!=last_GROUND_CLOSEY || GROUND_CLOSEZ!=last_GROUND_CLOSEZ || GROUND_FARY!=last_GROUND_FARY || GROUND_FARZ!=last_GROUND_FARZ)
			{
				GROUND_SLOPE=(GROUND_FARY-GROUND_CLOSEY)/(GROUND_FARZ-GROUND_CLOSEZ);
				GROUND_YINTERCEPT=(GROUND_CLOSEY+GROUND_FARY)/2-GROUND_SLOPE*(GROUND_CLOSEZ+GROUND_FARZ)/2;
				last_GROUND_CLOSEY=GROUND_CLOSEY;
				last_GROUND_FARY=GROUND_FARY;
				last_GROUND_CLOSEZ=GROUND_CLOSEZ;
				last_GROUND_FARZ=GROUND_FARZ;
			}

			//variable declarations/initializations
			pcl::PassThrough<pcl::PointXYZRGB> crop;
			pcl::OrganizedEdgeDetection<pcl::PointXYZRGB, pcl::Label> detect;
			pcl::RadiusOutlierRemoval<pcl::PointXYZRGB> remove;
			pcl::PointCloud<pcl::PointXYZRGB>::Ptr points(new pcl::PointCloud<pcl::PointXYZRGB>);
			pcl::PointCloud<pcl::Label> edgePoints;
			std::vector<pcl::PointIndices> edges;
			pcl::PointCloud<pcl::PointXYZRGB>::Ptr navigation(new pcl::PointCloud<pcl::PointXYZRGB>);
			int trueGroundPoints=0; //size of the ground itself, not including any obstacles
			double trueGroundXTotal=0; //total of all the ground's x-coordinates

			//crop to focus exclusively on the approximate range of ground points
			crop.setInputCloud(cloud);
			crop.setFilterFieldName("x");
			crop.setFilterLimits(-CROP_XRADIUS-GROUND_BUMPERLATERAL, CROP_XRADIUS+GROUND_BUMPERLATERAL);
			crop.setKeepOrganized(true);
			crop.filter(*points);

			crop.setInputCloud(points);
			crop.setFilterFieldName("y");
			crop.setFilterLimits(CROP_YMAX, 1);
			crop.setKeepOrganized(true);
			crop.filter(*points);

			crop.setInputCloud(points);
			crop.setFilterFieldName("z");
			crop.setFilterLimits(CROP_ZMIN, CROP_ZMAX+GROUND_BUMPERFRONTAL);
			crop.setKeepOrganized(true);
			crop.filter(*points);

			//ignore everything that is not the ground
			for(pcl::PointCloud<pcl::PointXYZRGB>::iterator location=points->begin(); location<points->end(); location++)
			{
				double distanceFromGroundPlane=fabs(location->y /*point's actual y-coordinate*/ - (GROUND_SLOPE*location->z+GROUND_YINTERCEPT) /*ground's expected y-coordinate*/);

				if(distanceFromGroundPlane>GROUND_TOLERANCEROUGH) //this point isn't anywhere near the ground
				{ //these aren't the points we're looking for
					location->x=std::numeric_limits<float>::quiet_NaN();
					location->y=std::numeric_limits<float>::quiet_NaN();
					location->z=std::numeric_limits<float>::quiet_NaN();
				}
				else if(distanceFromGroundPlane<=GROUND_TOLERANCEFINE && fabs(location->x)<CROP_XRADIUS-GROUND_BUMPERLATERAL && location->z>GROUND_CLOSEZ+GROUND_BUMPERFRONTAL && location->z<CROP_ZMAX-GROUND_BUMPERFRONTAL) //actually part of the ground and in the subregion where we do not tolerate intruding plane edges
				{
					trueGroundPoints++;
					trueGroundXTotal+=location->x;
				}
				//else part of the ground border or a contacting object: just keep it
			}

			if(trueGroundPoints>0) //don't waste time if we're blind
			{
				//detect edges
				detect.setInputCloud(points);
				detect.setEdgeType(detect.EDGELABEL_HIGH_CURVATURE+detect.EDGELABEL_NAN_BOUNDARY);
				if(GROUND_NORMALESTIMATION>=0) detect.setHighCurvatureNormalEstimationMethod((pcl::IntegralImageNormalEstimation<pcl::PointXYZRGB, pcl::Normal>::NormalEstimationMethod)GROUND_NORMALESTIMATION);
				if(GROUND_NORMALSMOOTHING>=0) detect.setHighCurvatureNormalSmoothingSize((float)GROUND_NORMALSMOOTHING);
				if(GROUND_THRESHOLDLOWER>=0) detect.setHighCurvatureEdgeThresholdLower((float)GROUND_THRESHOLDLOWER);
				if(GROUND_THRESHOLDHIGHER>=0) detect.setHighCurvatureEdgeThresholdHigher((float)GROUND_THRESHOLDHIGHER);
				detect.compute(edgePoints, edges);

				if(GROUND_VERBOSE)
					ROS_INFO("GROUND EDGES :: Saw raw %4lu curves and %4lu borders", edges[3].indices.size(), edges[0].indices.size());

				//assemble the detected points
				navigation->header=points->header;
				for(std::vector<pcl::PointIndices>::iterator edge=edges.begin(); edge<edges.end(); edge++)
					for(std::vector<int>::iterator pointIndex=edge->indices.begin(); pointIndex<edge->indices.end(); pointIndex++)
						if(fabs((*points)[*pointIndex].x)<CROP_XRADIUS-GROUND_BUMPERLATERAL && (*points)[*pointIndex].z>GROUND_CLOSEZ+GROUND_BUMPERFRONTAL && (*points)[*pointIndex].z<CROP_ZMAX-GROUND_BUMPERFRONTAL) //point is far enough from the edge
							navigation->push_back((*points)[*pointIndex]);

				//eliminate outliers
				if(GROUND_OUTLIERRADIUS>=0 && navigation->size()>0)
				{
					remove.setInputCloud(navigation);
					remove.setRadiusSearch((float)GROUND_OUTLIERRADIUS);
					if(GROUND_OUTLIERNEIGHBORS>=0) remove.setMinNeighborsInRadius(GROUND_OUTLIERNEIGHBORS);
					remove.filter(*navigation);
				}
			}
			else if(GROUND_VERBOSE) ROS_INFO("GROUND EDGES :: Lost sight of the ground!");

			//plan our next move
			if(navigation->size()>0) //curve or plane boundary in our way
			{
				float centroidX=0;

				//where are our obstructions centered?
				for(pcl::PointCloud<pcl::PointXYZRGB>::iterator point=navigation->begin(); point<navigation->end(); point++)
					centroidX+=point->x;
				centroidX/=navigation->size();

				if(GROUND_VERBOSE) ROS_INFO("GROUND EDGES :: Seeing %3lu offending points centered at %.3f i", navigation->size(), centroidX);

				//choose a course of action
				if(centroidX<0) //offenders mostly to our left
					directionsSecondary=RIGHT;
				else //centroidX>=0
					directionsSecondary=LEFT;
				groundLastForcedTurn=directionsSecondary; //continue the same turn even if we lose sight of the ground in the next frame
			}
			else if(trueGroundPoints==0) //where'd the ground go?
			{
				if(GROUND_VERBOSE) ROS_INFO("GROUND EDGES :: Ground has vanished; calling for emergency evasive maneuvers!");
				directionsSecondary=groundLastForcedTurn;
			}
			else //we're all clear
			{
				directionsSecondary=FORWARD;
				groundLastForcedTurn=trueGroundXTotal/trueGroundPoints>0 ? RIGHT : LEFT; //in case we lose sight of the ground in the next frame, we'll turn toward the direction where more of it is visible
			}

			ground.publish(*points);
			occlusions.publish(*navigation);
		}

		/**
			Integrates the decisions of the two obstacle detection methods and sends an appropriate drive command only if the <tt>drive_move</tt> parameter is set
			@param time the <tt>TimerEvent</tt> that triggered our schedule
		*/
		void pilot(const ros::TimerEvent& time)
		{
			//declare "constants," plus Vim macros to generate them from "populate 'constants'"
			#if 0
			:inoremap <cr> <esc>
			J ldf,d/\a
			f)C,
			$r;j
			#endif
			double DRIVE_LINEARSPEED, DRIVE_ANGULARSPEED;
			bool DRIVE_MOVE, DRIVE_VERBOSE;

			//populate "constants," plus a Vim macro to generate them from "clean up parameters"
			#if 0
			:inoremap <cr> <esc>
			>>>>^f.l6sget
			eaCached
			f"l"yyt"f)i,
			"ypvT,l~
			j
			#endif
			node.getParamCached("drive_linearspeed", DRIVE_LINEARSPEED);
			node.getParamCached("drive_angularspeed", DRIVE_ANGULARSPEED);
			node.getParamCached("drive_move", DRIVE_MOVE);
			node.getParamCached("drive_verbose", DRIVE_VERBOSE);

			//variable declarations
			DriveAction newMotion;
			geometry_msgs::Twist decision;

			//decide what to do, given the advice we've received
			if(directionsPrimary!=directionsSecondary) //algorithms are at odds
			{
				if(DRIVE_VERBOSE)
					ROS_INFO("PILOT :: One recommendation says %5s and the other counters %5s", directionRepresentation(directionsPrimary), directionRepresentation(directionsSecondary));

				if(directionsPrimary==FORWARD) newMotion=directionsSecondary;
				else if(directionsSecondary==FORWARD) newMotion=directionsPrimary;
				else newMotion=directionsSecondary; //it thought about this harder
			}
			else //we're agreed!
				newMotion=directionsPrimary;

			//don't reverse the direction of a turn
			if(newMotion!=FORWARD && currentMOTION!=FORWARD && newMotion!=currentMOTION)
			{
				if(DRIVE_VERBOSE) ROS_INFO("PILOT :: Overrode recommended oscillation");

				newMotion=currentMOTION; //keep rotating in the same direction we were
			}

			//make our move
			switch(newMotion)
			{
				case LEFT:
					if(DRIVE_VERBOSE) ROS_INFO("PILOT :: Turning %5s", "LEFT");
					decision.angular.z=DRIVE_ANGULARSPEED;
					break;
				case RIGHT:
					if(DRIVE_VERBOSE) ROS_INFO("PILOT :: Turning %5s", "RIGHT");
					decision.angular.z=-DRIVE_ANGULARSPEED;
					break;
				default:
					decision.linear.x=DRIVE_LINEARSPEED;
			}
			if(DRIVE_MOVE) velocity.publish(decision);

			//tell the obstacle detectors what we've done
			currentMOTION=newMotion;
		}
};

int main(int argc, char** argv)
{
	ros::init(argc, argv, "xbot_surface"); //string here is the node name
	ros::NodeHandle node("surface"); //string here is the namespace for parameters

	//initial parameter values
	node.setParam("crop_xradius", 0.2); //should be slightly greater than robot's radius
	node.setParam("crop_ymin", -0.07); //should be slightly above robot's height
	node.setParam("crop_ymax", 0.35); //should be slightly above the ground's highest point
	node.setParam("crop_zmin", 0.0); //greater than zero excludes points close to robot
	node.setParam("crop_zmax", 1.5); //farthest to search for obstacles: lower for tighter maneuvering, higher for greater safety
	node.setParam("height_downsampling", 0.04); //less is more: should be low enough to eliminate noise from the region of interest (negative for [really bad] default)
	node.setParam("height_samples", 5); //number of samples to average: should be low enough to prevent false positives
	node.setParam("height_verbose", false);
	node.setParam("ground_bumperfrontal", 0.1); //extra uncropped space on the front and back edges of the plane whose edges, borders, and presence are disregarded; note that for the front edge only, this is used with ground_closez to tolerate the gap between the robot and plane
	node.setParam("ground_bumperlateral", 0.02); //extra uncropped space on the left and right edges of the plane whose edges, borders, and presence are disregarded
	node.setParam("ground_closey", 0.3525); //y-coordinate of the closest point on the ground
	node.setParam("ground_closez", 0.8); //corresponding z-coordinate for bumper border and modeling the plane
	node.setParam("ground_fary", 0.47); //y-coordinate of a far point on the ground
	node.setParam("ground_farz", 2.5); //corresponding z-coordinate for modeling the plane
	node.setParam("ground_tolerancefine", 0.03); //maximum y-coordinate deviation of points that are still considered part of the ground itself
	node.setParam("ground_tolerancerough", 0.1); //maximum y-coordinate deviation of points that are evaluated at all
	node.setParam("ground_normalsmoothing", -1.0); //smoothing size for normal estimation (negative for default)
	node.setParam("ground_thresholdlower", 1.0); //for curvature-based edge detection: cutoff for consideration as possible edges (negative for default)
	node.setParam("ground_thresholdhigher", 1.7); //for curvature-based edge detection: cutoff for definite classification as edges (negative for default)
	node.setParam("ground_outlierradius", 0.05); //radius used for neighbor search to filter out outliers (negative to disable outlier removal)
	node.setParam("ground_normalestimation", -1); //normal estimation method: as defined in IntegralImageNormalEstimation (negative for default)
	node.setParam("ground_outlierneighbors", 6); //minimum neighbors to be spared by outlier persecution (negative for default)
	node.setParam("ground_verbose", false);
	node.setParam("drive_linearspeed", 0.5);
	node.setParam("drive_angularspeed", 0.3);
	node.setParam("drive_move", false);
	node.setParam("drive_verbose", true);

	TandemObstacleAvoidance workhorse(node); //block to do obstacle avoidance

	//clean up parameters, plus a Vim macro to generate them from "default parameter values"
	#if 0
	:inoremap <cr> <esc>
	^f.l3sdelete
	f,dt)f;C;
	j
	#endif
	node.deleteParam("crop_xradius");
	node.deleteParam("crop_ymin");
	node.deleteParam("crop_ymax");
	node.deleteParam("crop_zmin");
	node.deleteParam("crop_zmax");
	node.deleteParam("height_downsampling");
	node.deleteParam("height_samples");
	node.deleteParam("height_verbose");
	node.deleteParam("ground_bumperfrontal");
	node.deleteParam("ground_bumperlateral");
	node.deleteParam("ground_closey");
	node.deleteParam("ground_closez");
	node.deleteParam("ground_fary");
	node.deleteParam("ground_farz");
	node.deleteParam("ground_tolerancefine");
	node.deleteParam("ground_tolerancerough");
	node.deleteParam("ground_normalsmoothing");
	node.deleteParam("ground_thresholdlower");
	node.deleteParam("ground_thresholdhigher");
	node.deleteParam("ground_outlierradius");
	node.deleteParam("ground_normalestimation");
	node.deleteParam("ground_outlierneighbors");
	node.deleteParam("ground_verbose");
	node.deleteParam("drive_linearspeed");
	node.deleteParam("drive_angularspeed");
	node.deleteParam("drive_move");
	node.deleteParam("drive_verbose");
}
