TortoiseBot: Low-cost ROS-Based Mobile 3D Mapping Platform
SIT Technical Report No. TR2-DC, February 2012

Patrick C. Hammer, David J. Cappelleri and Michael M. Zavlanos

Abstract— This paper provides details on a low-cost 3D mapping robot called the TortoiseBot. Based on the Willow Garage TurtleBot platform, TortoiseBot uses off-the-shelf parts and is programmed on top of Willow Garage's Robot Operating System (ROS). This design addresses shortcomings of the TurtleBot by offering the expandability to add additional sensors, computational power, or extended battery life. The robot is capable of creating dense point clouds of indoor environments and is an ideal low-cost platform for research on new localization, mapping, and planning algorithms as well as active sensor and collaborative robot research.

INTRODUCTION

Accurate and detailed three dimensional maps are very useful to fields such as surveying, defense, security, robotic rescue, and automation. Robots use these maps for navigation and localization in complex environments, and such maps are a key component for full robot autonomy in unstructured environments [1]. People use maps for recording and representing scenes and locations. These maps are based on point clouds, or datasets of Cartesian points. Color information can be combined with the map to make photo-realistic 3D models [2], which eases recognition of objects and places in the map. Thus, the pursuit of three dimensional (3D) mapping and navigation of robots for indoor and outdoor environments is an active area of research [3], [4], [5], [6], [7], [8].

Until recently, devices for capturing 3D maps were expensive and required significant setup and human control [9]. For example, tripod-mounted scanners such as the Faro Focus [10] must be moved several times to make a complete map of an area. Individual scans are not combined until later at a computer, so the final map is typically not available until some time after the scanning process. These units also cost forty to fifty thousand dollars. Other robotic platforms with mapping capabilities, like the Google Street View trike and trolley [11], Segway RMP [4], ROAMS [2], AVENUE [12], PR2 [13], and Junior [14], provide very good solutions but are either too bulky to navigate through common indoor areas and/or use very expensive 3D scanning hardware such as SICK or Velodyne LIDAR sensors (Table I).

How to create and make use of these 3D maps is still a topic under heavy research. As such, it is beneficial to have an affordable, flexible, and reliable platform on which to run experiments.

P. Hammer and D. Cappelleri are with the Department of Mechanical Engineering, Stevens Institute of Technology, 1 Castle Point on Hudson, Hoboken, NJ USA {dcappell, phammer}@stevens.edu

M. Zavlanos is with the Department of Mechanical Engineering & Materials Science, Duke University, Box 90300 Hudson Hall, Durham, NC, USA [email protected]

Fig. 1. THE TORTOISEBOT WITH THE ELECTRONICS EXPOSED

Our goal, then, is to create just such a platform to enable this research. The ideal robot should use off-the-shelf hardware with minimal customization, and the plans should be available for anyone to freely use. All software should be open-source, free for anyone to modify and re-use. The robot should be mobile enough to drive around different terrains and output a map or model of the environment. Most of the data collection and processing should be autonomous. The design should be upgradable to match the mission requirements. This will help keep the base configuration of the robot affordable.

TORTOISEBOT

We call our solution to the ideal, low-cost mapping robot design problem the TortoiseBot, as shown in Figure 1. The robot is based on the TurtleBot design by Willow Garage [15]. Both robots use some of the same major off-the-shelf components such as the Microsoft Kinect and iRobot


TABLE I
LIDAR SENSOR OPTIONS

Model                   Range    Scan Speed   Outdoor   Price
Hokuyo URG-04LX-UG01    5.6 m    100 ms       No        $1,000.00
Hokuyo UBG-04LX-F01     5.6 m    28 ms        No        $2,500.00
Hokuyo UTM-30LX         30 m     25 ms        Yes       $5,000.00
SICK LMS 111            20 m     10 ms        Yes       $4,322.00
SICK LMS 221            80 m     13 ms        Yes       $8,837.00
Velodyne HDL-32E        100 m    100 ms       Yes       $30,000.00
Velodyne HDL-64E        120 m    66 ms        Yes       $75,000.00

Create. The user community has developed a very thorough open-source Robot Operating System (ROS) [16] software stack for the TurtleBot, which we have modified to run on TortoiseBot.

Hardware Components & Options

The major components of TortoiseBot are the iRobot Create mobile robot platform, the vision sensors (Microsoft Kinect and Hokuyo LIDAR), and the onboard computer. They are listed in Table II and displayed in Figure 2. We opted to design our own computer system rather than use an available single-board computer or laptop. There are benefits and drawbacks to this decision. The major benefit is the option to easily expand and adapt the computer with different components, which is rarely possible with laptops. We can choose components that are as fast as the application requires, and if those components become outdated they can be easily changed. Each component is chosen with regard to price, performance, and energy consumption.

Our price goal was to come in below the cost of a laptop of comparable performance. The performance goal is to use ≤ 25% of processing power while running all the required hardware drivers and networking. Meeting that goal ensures that the programmer has a healthy overhead in which to run their own programs.

Power Consumption: Because the robot operates under limited battery power, the power consumption of every component must be taken into consideration. The CPU draws by far the most power, but various potential 500 mA USB loads can add up quickly. The computer has an extremely flexible power supply which can run from a 6-24 Volt source. This allows the user to supplement the robot's power with any number of rechargeable battery or direct-current options. For example, while the computer does not need to be mobile (while programming), it can be powered by a typical benchtop power supply found in many electronics laboratories or an appropriate AC-to-DC converter. The computer components are chosen to have high performance and relatively low power consumption. Under full load, with all peripherals operational, the computer draws 3 Amps at 12 Volts. This results in a battery life of one hour (Table III), which is plenty of time for running an experiment.
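As a rough sanity check, the runtime figure is simply usable battery energy divided by average electrical draw. The minimal sketch below assumes roughly 36 Wh of usable pack energy; that capacity figure is an assumption for illustration, not a measured specification of the Create extended battery.

```python
# Back-of-the-envelope runtime estimate for the onboard computer.
# The usable pack energy is an assumed value, used for illustration only.
FULL_LOAD_CURRENT_A = 3.0      # full-load draw reported above
SUPPLY_VOLTAGE_V = 12.0        # computer supply rail
USABLE_ENERGY_WH = 36.0        # assumed usable battery energy

power_w = FULL_LOAD_CURRENT_A * SUPPLY_VOLTAGE_V   # 36 W
runtime_h = USABLE_ENERGY_WH / power_w             # ~1.0 h, matching Table III
print("estimated runtime: %.1f hours" % runtime_h)
```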

Manufacturing and Hardware Modifications: One of our goals was to keep the hardware modifications and fabrication

TABLE II
TORTOISEBOT PARTS LIST

Part                     Name                        Price
Laser Scanner            Hokuyo URG-04LX-UG01        $1,175.00
CPU                      Intel i5-2390T              $195.00
Motherboard              ZOTAC Z68ITX-A-E            $170.00
3D Camera                Microsoft Kinect            $155.99
Mobile Robot             iRobot Create               $129.99
Solid State Disk         OCZ Nocti mSATA 30 GB       $83.99
Serial Gyroscope         TurtleBot Power + Gyro      $79.95
Computer Power Supply    M2-ATX                      $79.50
Battery                  Create Extended Battery     $69.99
RAM                      Kingston 8 GB RAM           $47.99
CPU Cooler               Thermaltake CLP0534         $22.99
Cables and Adapters      Various                     $43.84

Total Computer Cost: $701.00
Total Cost: $2,254.23

to a minimum, thus making the platform accessible to a wider audience. Only the three plastic structural plates need to be manufactured. They are cut out of sheets of ABS plastic using a 60 Watt CO2 laser cutter (Universal Laser Systems Inc. VLS6.60). The bottom plate holds the computer in the cargo bay of the Create. The second plate supports the Kinect and holds the cables above the computer. Plate two has a large vent above the CPU so that heat can escape. The topmost plate has a grid of holes 0.75 inches apart, which allows sensors to be easily attached to the robot.

The Kinect and the Create both require some slight modification to work on the robot. The power cable of the Microsoft Kinect needs to be modified to accept a 12 Volt input from the computer power supply. This can be easily done with a pair of wire strippers. As previously mentioned, the computer requires 3 Amps of current. Unfortunately, the power supply in the cargo bay of the Create can only safely supply 1.5 Amps. To solve this problem, the Create must


Fig. 2. EXPLODED VIEW (LEFT) AND ASSEMBLED (RIGHT). Labeled components: Mini-ITX computer, Hokuyo laser scanner, Microsoft Kinect, iRobot Create, and M2-ATX power supply.

be disassembled and leads must be soldered directly onto the battery terminals so that the computer can receive full power from the battery. This operation is straightforward because the Create is held together with only 16 screws and the area to be soldered is clearly labeled "Battery In."

Performance Benchmarking: We compare our computer system against the common dual-core netbook included with the TurtleBot and against a full-size laptop (Dell XPS 15z) that closely matches the performance of our custom solution. We compare raw processing power using results submitted by users of PassMark's synthetic CPU benchmark (http://www.passmark.com/about/index.htm). The results show the two more powerful systems to be almost five times faster than the TurtleBot netbook. This makes sense, as they are newer and use more modern CPU features like automatic overclocking (turbo boost) and 32 nm lithography.

Comparison to Existing Systems: The benchmark and feature comparison show that the Dell XPS 15z and the TortoiseBot custom computer have similar performance capabilities, but our robot has no screen, self-contained battery, keyboard, or trackpad. This reduces the computer footprint and brings the cost to slightly more than half that of the laptop. A computer like this is called "headless"; that is, it does not have any visual display or keyboard. This does not affect normal operation of the robot because it displays results on, and gets all input from, a remote computer. The custom computer's largest part is the Mini-ITX motherboard, which is still smaller than any netbook. As a result, it fits in the cargo bay of the Create, a space which is used for bundling extra cables in the original TurtleBot design. This lowers the robot's center of gravity and makes it more stable.

Other Hardware: The mobile robot base is an iRobot Create robot [17] with a laser-cut ABS plastic structure to hold the computer and sensors in place (Figure 2). The vision sensors are the most expensive part of the robot. The Microsoft Kinect [18] is used for color imaging and medium-accuracy depth measurement. The high-accuracy

Fig. 3. THE URDF REPRESENTATION OF THE ROBOT.

vision sensor is a fixed-position LIDAR (Light Detection and Ranging) sensor. Depending on the required range and whether the environment is indoor or outdoor, the appropriate LIDAR sensor can be selected from those available in Table I. The faster the scan speed, the more scans the robot makes while driving and the denser the final point cloud. The TortoiseBot uses the Hokuyo URG-04LX-UG01 laser scanner. It was chosen because the Create is restricted to moving on smooth indoor surfaces, so the laser scanner only needs indoor capabilities and range. The 100 msec scan time corresponds to a 10 Hz scan rate, which is acceptable for the speed at which the Create moves. The TortoiseBot platform can also support communication hardware that will enable us to build wireless networks among multiple robots for autonomous coordinated sensing tasks.
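To make the scan-speed tradeoff concrete, the horizontal angular spacing between successive scan columns is simply the panning rate multiplied by the scan period. A minimal sketch, using the URG-04LX-UG01's 100 ms scan period from Table I and an example pan rate of 7.5°/s (the pan rate is an illustrative value, not a fixed specification):

```python
# Horizontal spacing between scan columns when panning a 2-D LIDAR.
# The scan period comes from Table I; the pan rate is an example value.
scan_period_s = 0.100            # URG-04LX-UG01: 100 ms per scan (10 Hz)
pan_rate_deg_per_s = 7.5         # example rotation speed of the robot

column_spacing_deg = pan_rate_deg_per_s * scan_period_s
print("%.2f deg between scan columns" % column_spacing_deg)   # 0.75 deg
```

Under the same assumptions, a scanner with a 25 ms period would cut that spacing to roughly 0.19° at the same pan rate, which is the "four times as many points" effect discussed later.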

SOFTWARE

The onboard computer runs ROS (Robot Operating System) [16]. ROS is an open-source robotics solution that provides libraries and tools to help create robot applications, including hardware abstraction, device drivers, libraries, visualizers, message-passing, package management, and more. Because it is free to use and adapt, many drivers and algorithms for common tasks are available for immediate use and customization.

ROS is able to use a model of the robot's joints and parts to track individual movements and the resulting coordinate frame transformations. The model is an XML-format file called a URDF (Unified Robot Description Format), as seen in Figure 3. The model of the robot is generated using real-world dimensions and simplified geometry. Besides providing a nice visual representation of the robot, the model accurately provides the position of each part relative to the part it is attached to.

Driver processes handle the data coming from the robot's sensors. The data is tagged with its source frame in the URDF model so that the appropriate coordinate frames


TABLE III
COMPUTER COMPARISON TABLE

Metric                             TurtleBot Laptop        TortoiseBot Computer            Comparable Laptop
Model                              ASUS 1215N-PU27         Custom                          Dell XPS 15z
CPU                                Intel D525 (1.8 GHz)    Intel i5-2390T (2.7 GHz)        Intel i7-2640M (2.8 GHz)
Max CPU Frequency                  1.8 GHz                 3.5 GHz                         3.5 GHz
Cache                              1 MB                    3 MB                            4 MB
Installed RAM                      2 GB @ 800 MHz          8 GB @ 1333 MHz                 8 GB @ 1333 MHz
Max Installable RAM                4 GB                    32 GB                           16 GB
PassMark Synthetic CPU Benchmark   712                     4040                            4046
Storage                            250 GB SATA Disk        30 GB mSATA Solid State Disk    750 GB SATA Disk
Battery Life                       6 Hours                 1 Hour                          3 Hours
Price                              $400.00                 $700.00                         $1,350.00

can be transformed by the tf (transform frame) library. This allows data to be interpreted in the correct frame. For example, the laser scans could be projected either vertically or horizontally relative to the robot depending on how the sensor is mounted.
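As an illustration of what tf provides, the short sketch below transforms a single point measured in the laser's frame into the robot's base frame. The frame names 'laser' and 'base_link' are assumptions about the URDF, not values taken from this report.

```python
#!/usr/bin/env python
# Minimal tf usage sketch: express a laser-frame point in the robot base frame.
# Frame names are assumed; they must match the links defined in the URDF.
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node('tf_point_example')
listener = tf.TransformListener()
listener.waitForTransform('base_link', 'laser', rospy.Time(0), rospy.Duration(4.0))

p = PointStamped()
p.header.frame_id = 'laser'
p.header.stamp = rospy.Time(0)          # use the latest available transform
p.point.x = 1.0                         # a return 1 m in front of the scanner

p_base = listener.transformPoint('base_link', p)
rospy.loginfo('point in base_link: (%.2f, %.2f, %.2f)',
              p_base.point.x, p_base.point.y, p_base.point.z)
```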

A process called Robot Pose EKF (Extended Kalman Filter) fuses wheel odometry, gyroscope data, and visual odometry to calculate the full six-dimensional (x, y, z, roll, pitch, and yaw angles) pose of the robot. Because the sensors are not perfect and real-world conditions can be extreme, there is imprecision and noise in all data. Combining these different data sources provides the best position estimate possible. For outdoor applications, GPS information can also be input and fused for more accurate position estimation. Once the above drivers are configured and running, low-level functions of the robot, like turning individual wheels, are abstracted away and the programmer can simply tell the robot where it should go. This is very similar to the type of abstraction that an operating system provides on a computer, allowing the programmer to use convenient abstractions like arrays and data structures instead of flipping bits and allocating memory.
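The core idea behind this kind of fusion can be shown with a deliberately simplified one-dimensional Kalman update. This is an illustrative sketch of the principle only, not the robot_pose_ekf implementation, and the variances are made-up example values.

```python
# Simplified 1-D Kalman-style fusion of two noisy heading estimates.
# Illustrative only: robot_pose_ekf fuses the full 6-D pose, not a scalar.
def fuse(estimate, estimate_var, measurement, measurement_var):
    gain = estimate_var / (estimate_var + measurement_var)   # Kalman gain
    fused = estimate + gain * (measurement - estimate)       # corrected estimate
    fused_var = (1.0 - gain) * estimate_var                  # reduced uncertainty
    return fused, fused_var

# Example: a wheel-odometry heading corrected by a gyro-derived heading.
odom_heading, odom_var = 0.30, 0.04      # radians, example variance
gyro_heading, gyro_var = 0.25, 0.01      # gyro modeled with smaller variance
heading, var = fuse(odom_heading, odom_var, gyro_heading, gyro_var)
print(heading, var)   # estimate moves toward the less uncertain gyro value
```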

APPLICATION TESTING

To test the usefulness of our platform, we ran realistic workloads that robots must process to map and interact with their environment. We chose to run the mapping and navigation programs described below.

Driver Performance

The absolute baseline of computer performance for the robot is to run the operating system, hardware drivers, and network communication. These are all required for any higher-level programs to run. We ran this baseline workload for 15 minutes and recorded the average system usage. The computer has two cores, so it has a maximum CPU usage of 200%. Our test ran with an average load of 20/200 = 10%, which is within our 25% goal.
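A simple way to reproduce this kind of baseline measurement is to sample system CPU usage over the test window. The sketch below uses the psutil library (our choice for illustration; the report does not state which measurement tool was used) and reports a system-wide 0-100% figure, so 10% here corresponds to the 20/200 figure above.

```python
# Sample average system CPU load over a 15-minute baseline run.
# psutil reports a system-wide 0-100% figure averaged over both cores.
import time
import psutil

samples = []
end_time = time.time() + 15 * 60
while time.time() < end_time:
    samples.append(psutil.cpu_percent(interval=1.0))   # blocks 1 s per sample

average_load = sum(samples) / len(samples)
print("average CPU load: %.1f%% (goal: <= 25%%)" % average_load)
```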

Two Dimensional SLAM

A ROS process called SLAM (Simultaneous Localization and Mapping) reads odometry data and laser scans to localize the robot and generate a two-dimensional map of the environment [19]. The SLAM algorithm takes individual horizontal

Fig. 4. 2D SLAM NAVIGATION WITH TORTOISEBOT.

laser scans and uses them to build a map of the environment. Later, the robot can match scans against this map to localize itself with high precision. This algorithm can be run either using the Kinect sensor or by mounting the laser scanner horizontally. The computer is able to build maps and localize with a load of 45%. Figure 4 shows navigation with this system executing on TortoiseBot.
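Once a 2D map and localization are available, navigation goals can be sent to the ROS navigation stack. The sketch below uses the standard move_base action interface; we assume here that a move_base server is configured and running and that the map frame is named 'map', neither of which is stated explicitly in this report.

```python
#!/usr/bin/env python
# Send a single navigation goal to the ROS navigation stack (move_base).
# Assumes a running move_base server and a map frame named 'map'.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0        # drive 2 m along the map x axis
goal.target_pose.pose.orientation.w = 1.0     # keep the current heading

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('navigation result state: %s', client.get_state())
```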

Three Dimensional Point Cloud Generation

Adding another dimension to the 2D SLAM algorithm means either using a 3D sensor like the Kinect or sweeping a 2D sensor like the laser scanner. We chose to sweep the vertically mounted laser scanner to generate a three-dimensional point cloud as the scanner pans and rotates with the robot. The laser scans at a rate of 10 Hz. First, the scans are filtered: inaccurate outlier points in each scan should be removed, as should points in the scan that are parts of the robot (self-scanning). Each scan is tagged with its timestamp and combined with the orientation of the robot at that time. This allows the scan to be transformed by tf from a set of ranges and angles into a set of three-dimensional Cartesian coordinates called a point cloud. Once a point cloud has been obtained, multiple clouds can be combined into one large, dense scan. An image of one of these clouds is shown in Fig. 5. The final point cloud map can be saved as a rosbag or a PCD (Point Cloud Data) file to be manipulated by software like AutoCAD or Pointools.
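A minimal version of this filter-tag-transform-accumulate pipeline is sketched below. It projects each scan to Cartesian points with NumPy and transforms them into the odometry frame via tf. The frame names ('odom' and the scan frame reported by the driver) and the 0.3 m self-scan cutoff are assumptions chosen for illustration, not parameters from this report.

```python
#!/usr/bin/env python
# Sketch: accumulate laser scans into one point cloud in the 'odom' frame.
# Frame names and the 0.3 m self-scan cutoff are illustrative assumptions.
import numpy as np
import rospy
import tf
from sensor_msgs.msg import LaserScan

listener = None
cloud_points = []                     # accumulated (x, y, z) points in 'odom'

def scan_callback(scan):
    angles = scan.angle_min + np.arange(len(scan.ranges)) * scan.angle_increment
    ranges = np.asarray(scan.ranges, dtype=float)
    # Filter: drop invalid returns, out-of-range outliers, and self-scans.
    keep = np.isfinite(ranges) & (ranges > 0.3) & (ranges < scan.range_max)
    xs = ranges[keep] * np.cos(angles[keep])
    ys = ranges[keep] * np.sin(angles[keep])
    pts = np.column_stack([xs, ys, np.zeros_like(xs), np.ones_like(xs)])
    try:
        # Use the robot pose at the scan's timestamp, as described above.
        listener.waitForTransform('odom', scan.header.frame_id,
                                  scan.header.stamp, rospy.Duration(0.2))
        trans, rot = listener.lookupTransform('odom', scan.header.frame_id,
                                              scan.header.stamp)
    except tf.Exception:
        return                        # no pose available for this scan; skip it
    T = listener.fromTranslationRotation(trans, rot)    # 4x4 homogeneous matrix
    cloud_points.extend((T.dot(pts.T).T)[:, :3].tolist())

if __name__ == '__main__':
    rospy.init_node('scan_to_cloud_sketch')
    listener = tf.TransformListener()
    rospy.Subscriber('scan', LaserScan, scan_callback)
    rospy.spin()
```

Because the scanner's mounting orientation is encoded in the URDF, the same transform turns a vertically mounted scan plane into properly oriented 3D points.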


Fig. 5. POINT CLOUD GENERATED OF STAIRS.

To test the accuracy of this method of point cloud generation, we scanned several rooms and household objects. After each point cloud was built, we compared the dimensions of the physical object with the measured dimensions. Results from that comparison can be seen in Table IV. We experimented with different-sized targets, scanning the target multiple times, and rotating at different speeds. These tests showed that the system can handle these complex tasks. The recorded point clouds consisted of up to 100,000 points. Thus, TortoiseBot is able to process large data sets.

With a camera correctly aligned with the laser scanner, it is possible to embed the corresponding RGB value into every point so that the point cloud may display a full-color model of the space. The addition of color information makes it much easier for a person to recognize parts of the cloud as specific objects. This data may also improve future computer vision algorithms such as object recognition. For example, a color difference in a kitchen point cloud might help a robot distinguish between a cabinet and a refrigerator. TortoiseBot is expected to be able to easily handle this type of task.

The robot is currently operated by driving it with a joystick or keyboard. When a scanning location is found, the robot is commanded to spin in place with a specified speed and direction; the speed varies the density of the resulting point cloud (a short command sketch follows this paragraph). A sample map built out of several scans is shown in Figure 6. It should be noted that the distance between points in every individual scan is dependent on the angular resolution of the laser scanner. Most scanners vary between 0.25° and 0.36° of angular resolution. The horizontal resolution of the scan is a function of the speed at which the LIDAR is panned across the scene. Lasers which scan four times faster would produce four times as many points at the same panning speed. It is also an option to sweep the laser across the target area multiple times to fill in progressively finer details. The only potential problem with making multiple sweeps is that if there is any problem with transforming the scans into the point cloud, they will introduce messy outliers

Fig. 6. MULTIPLE SCANS COMBINED INTO ONE CLOUD

into the map, introducing errors and incorrect features. It may be possible to reduce this effect with additional filtering before adding points to the final map.
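Commanding the spin-in-place scanning motion described above amounts to publishing a constant angular velocity. The sketch below publishes geometry_msgs/Twist messages; the cmd_vel topic name and the 7.5°/s rate are assumptions chosen to match the scan rates reported in Table IV.

```python
#!/usr/bin/env python
# Spin the robot in place at a constant rate while the LIDAR sweeps the scene.
# The topic name and spin rate are illustrative assumptions.
import math
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('spin_in_place')
pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)

spin = Twist()
spin.angular.z = math.radians(7.5)   # ~0.13 rad/s; slower spins give denser clouds

rate = rospy.Rate(10)                # re-publish so the base keeps moving
while not rospy.is_shutdown():
    pub.publish(spin)
    rate.sleep()
```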

DISCUSSION AND FUTURE WORK

There is always a tradeoff between cost efficiency and performance. Because we are using low-cost sensors, we inevitably compromise on sensing performance. We are able to get a higher level of computer performance by picking specific components that have the right speed to meet our goals while still costing less than a comparable laptop.

Another approach under testing is to use multiple TortoiseBots cooperatively. We have built three identical robots and are currently configuring them to act as a distributed system, more powerful than any one robot could be alone. The questions we will look to answer are (i) where do we position these robots relative to each other to increase the total information, (ii) how do we control their angular speed and phase of rotation (if they are rotating), (iii) how do we move the robots as a group, and (iv) who does the processing and what information do the robots share?

Having faster processing will leave overhead for additional algorithms, enabling more autonomous behavior and increases in performance. Because of processing limitations on the existing TurtleBot, three-dimensional SLAM, known in this case as RGBD SLAM, is too intensive to run well onboard in real time. The data is too large to stream to a powerful offboard computer, so previously all computation had to be done offline after the data was recorded. We should be able to run this algorithm on the TortoiseBot by leveraging the extra processing power.


TABLE IV
EXPERIMENTAL RESULTS

Object           Scan Rate (°/sec)   Passes   Measured Value (cm)   Actual Value (cm)   Error (%)
Box 1 Width      10                  1        22.00                 23.00               4.35
Box 1 Width      10                  2        22.50                 23.00               2.17
Box 1 Width      10                  3        21.34                 23.00               7.22
Box 1 Width      10                  4        20.98                 23.00               8.78
Box 1 Height     10                  1        12.45                 13.30               6.39
Ceiling Height   7.5                 1        270.56                270.00              0.21
Hallway Width    7.5                 1        177.48                170.00              4.40
Box 2 Width      10                  1        50.22                 50.00               0.44
Box 3 Width      7.5                 1        25.10                 24.60               2.04

We plan to investigate the next-best-view problem in order to eventually make the entire scanning and mapping procedure autonomous. The next-best-view problem is the problem of finding the optimal position and orientation for the next measurement under different constraints [20]. The robot should also be able to decide when an area has been sufficiently scanned and move on. With these elements, the robot can explore the environment on its own and find the best way to generate a complete point cloud of the room or area of interest. Fusing the readings of both the Kinect and the LIDAR sensors will help with this problem. We can use the Kinect to determine in real time whether some regions have more detail and require dense 3D scans, and navigate the robot accordingly.

Additionally, the density and accuracy of the scans may be improved by fusing heterogeneous sensors such as these. Thus, we plan on writing custom navigation and obstacle avoidance algorithms to analyze a space and generate the corresponding map as quickly and efficiently as possible. Furthermore, the LIDAR is rigidly attached to the robot. This makes for a very simple design but limits the usefulness of the sensor. On other robots, the sensor is mounted to a precise mechanism for panning and tilting [2], [1]. In this way, the laser is swept without requiring the robot to move. Future versions of the TortoiseBot may feature the ability to rotate the LIDAR, or at least turn it 90° from a vertical to a horizontal orientation. That way, the sensor may also be used for 2D localization and obstacle avoidance when not making maps; its 240° field of view gives a great overview of the 2D world. Finally, the on-board communications hardware of the TortoiseBots will enable us to build wireless networks among multiple robots and can be utilized for autonomous coordinated sensing tasks.

CONCLUSION

In this paper we have presented the design for a low-cost, 3D mapping, mobile robot platform. We used off-the-shelf components, some hardware modifications, fabrication, and open-source software to construct the TortoiseBot platform. Each component was chosen with regard to price, performance, and energy consumption. The design is also upgradable to match the mission requirements of the user. Our design goals, a total cost less than that of a laptop with comparable performance and a load of ≤ 25% of the system's processing power while running all the required hardware drivers and networking, were both met. We demonstrated some representative workload tasks with the platform to showcase its capabilities.

REFERENCES

[1] M. Strand and R. Dillmann, "Using an attributed 2D-grid for next-best-view planning on 3D environment data for an autonomous robot," in Robotics and Automation (ICRA), 2008 IEEE International Conference on, June 2008, pp. 314-319.
[2] H. Men, B. Gebre, and K. Pochiraju, "Color point cloud registration with 4D ICP algorithm," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, May 2011, pp. 1511-1516.
[3] S. Thrun, "Robotic mapping: A survey," in Exploring Artificial Intelligence in the New Millennium, G. Lakemeyer and B. Nebel, Eds. Morgan Kaufmann, 2002.
[4] D. Wolf, A. Howard, and G. Sukhatme, "Towards geometric 3D mapping of outdoor environments using mobile robots," in Intelligent Robots and Systems (IROS 2005), 2005 IEEE/RSJ International Conference on, Aug. 2005, pp. 1507-1512.
[5] A. Birk, N. Vaskevicius, K. Pathak, S. Schwertfeger, J. Poppinga, and H. Buelow, "3-D perception and modeling," IEEE Robotics & Automation Magazine, vol. 16, no. 4, pp. 53-60, Dec. 2009.
[6] R. Katz, N. Melkumyan, J. Guivant, T. Bailey, J. Nieto, and E. Nebot, "Integrated sensing framework for 3D mapping in outdoor navigation," in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, Oct. 2006, pp. 2264-2269.
[7] B. Morisset, R. B. Rusu, A. Sundaresan, K. Hauser, M. Agrawal, J.-C. Latombe, and M. Beetz, "Leaving flatland: Toward real-time 3D navigation," in Robotics and Automation (ICRA '09), 2009 IEEE International Conference on, May 2009, pp. 3786-3793.
[8] E. Marder-Eppstein, E. Berger, T. Foote, B. Gerkey, and K. Konolige, "The office marathon: Robust navigation in an indoor office environment," in Robotics and Automation (ICRA), 2010 IEEE International Conference on, May 2010, pp. 300-307.
[9] M. Muller, H. Surmann, K. Pervolz, and S. May, "The accuracy of 6D SLAM using the AIS 3D laser scanner," in Multisensor Fusion and Integration for Intelligent Systems, 2006 IEEE International Conference on, Sept. 2006, pp. 389-394.
[10] "Faro Focus 3D laser scanner," http://www.faro.com/focus/us, 2011.
[11] "Cars, trikes & more - Google Maps with Street View," http://maps.google.com/help/maps/streetview/technology/cars-trikes.html, 2011.
[12] P. Blaer and P. Allen, "View planning for automated site modeling," in Robotics and Automation (ICRA 2006), Proceedings 2006 IEEE International Conference on, May 2006, pp. 2621-2626.
[13] "Willow Garage PR2 robot," http://www.willowgarage.com/pr2, 2011.
[14] M. Montemerlo, J. Becker, S. Bhat, H. Dahlkamp, D. Dolgov, S. Ettinger, D. Haehnel, T. Hilden, G. Hoffmann, B. Huhnke, D. Johnston, S. Klumpp, D. Langer, A. Levandowski, J. Levinson, J. Marcil, D. Orenstein, J. Paefgen, I. Penny, A. Petrovskaya, M. Pflueger, G. Stanek, D. Stavens, A. Vogt, and S. Thrun, "Junior: The Stanford entry in the Urban Challenge," Journal of Field Robotics, vol. 25, no. 9, pp. 569-597, 2008. [Online]. Available: http://dx.doi.org/10.1002/rob.20258
[15] "Willow Garage TurtleBot," http://www.willowgarage.com/turtlebot, 2011.
[16] "ROS wiki," http://ros.org, 2011.
[17] "iRobot Create," http://www.irobot.com/create, 2011.
[18] "Microsoft Kinect," http://www.xbox.com/kinect, 2011.
[19] G. Lionis and K. Kyriakopoulos, "A laser scanner based mobile robot SLAM algorithm with improved convergence properties," in Intelligent Robots and Systems, 2002 IEEE/RSJ International Conference on, vol. 1, 2002, pp. 582-587.
[20] C. Connolly, "The determination of next best views," in Robotics and Automation, Proceedings 1985 IEEE International Conference on, vol. 2, Mar. 1985, pp. 432-435.