
AN OPEN APPROACH TO AUTONOMOUS VEHICLES

Shinpei Kato, Eijiro Takeuchi, Yoshio Ishiguro, Yoshiki Ninomiya, Kazuya Takeda (Nagoya University)
Tsuyoshi Hamada (Nagasaki University)


Autonomous vehicles are an emerging application of automotive technology, but their components are often proprietary. This article introduces an open platform using commodity vehicles and sensors. The authors present algorithms, software libraries, and datasets required for scene recognition, path planning, and vehicle control. Researchers and developers can use the common interface to study the basis of autonomous vehicles, design new algorithms, and test their performance.

Autonomous vehicles are becoming a new piece of infrastructure. Automotive makers, electronics makers, and IT service providers are interested in this technology, and academic research has contributed significantly to producing their prototype systems. For example, one notable work was published by Carnegie Mellon University.1

Despite this trend, autonomous vehicles are not systematically organized. Given that commercial vehicles protect their in-vehicle system interface from users, third-party vendors cannot easily test new components of autonomous vehicles. In addition, sensor configurations are not identical: some vehicles might use only cameras, whereas others might use a combination of cameras, laser scanners, GPS receivers, and millimeter-wave radars.

In addition to hardware issues, autonomous vehicles must address software issues. Because an autonomous-vehicle platform is large in scale, it is inefficient to build one from scratch, especially for prototypes. Open-source software libraries are preferred for that purpose, but they have not yet been integrated to develop autonomous vehicles. The design and implementation of algorithms for scene recognition, path planning, and vehicle control also require multidisciplinary skills and knowledge, often incurring a significant engineering effort. Finally, a basic dataset, such as a map for localization, must be provided to drive on a public road.

Overall, autonomous vehicles are composed of diverse technologies, and building their platform requires multidisciplinary collaboration in research and development. To facilitate this collaboration, we introduce an open platform for autonomous vehicles that researchers and developers can use to study a baseline for autonomous vehicles, design new algorithms, and test their performance through a common interface.

Vehicles and sensors
We introduce an autonomous-vehicles platform with a set of sensors that can be purchased on the market. We assume that intelligent modules of autonomous driving, such as scene recognition, path planning, and vehicle control, are located in a plug-in computer connected to vehicular Controller Area Network (CAN) bus networks. Given that commercial vehicles are not designed to employ such a plug-in computer, we need to add a secure control gateway to the CAN bus networks. This is often a complex undertaking that prevents researchers and developers from using real-world vehicles.




The ZMP Robocar product provides a vehicle platform containing a control gateway through which a plug-in computer can send operational commands (such as pedal strokes and steering angles) to the vehicle. Figure 1 shows one of their product lines, the ZMP Robocar "HV," which is based on the Toyota Prius. (Details about the product are available at www.zmp.co.jp.) We added a plug-in computer and a set of sensors, such as cameras and light detection and ranging (Lidar) sensors, to this basic platform so that the results of scene recognition, path planning, and vehicle control could apply to autonomous driving.

The specification of sensors and computers highly depends on the functional requirements of autonomous driving. Our prototype system uses various sensors and computers (see Figure 1):

• Velodyne Lidar sensors produce 3D point-cloud data, which can be used for localization and mapping, while also being used to measure the distance to surrounding objects.

• Ibeo Lidar sensors produce long-range 3D point-cloud data, although their vertical resolution is lower than that of the Velodyne Lidar sensors.

• Hokuyo Lidar sensors produce short-range 2D laser scan data and are useful for emergency stopping rather than for localization and mapping.

• Point Grey Ladybug 5 and Grasshopper 3 cameras can detect objects. The Ladybug 5 camera is omnidirectional, covering a 360-degree view, whereas the Grasshopper 3 camera is single-directional, operating at a high frame rate. The former can be used to detect moving objects, and the latter can be used to recognize traffic lights.

• Javad RTK sensors receive global positioning information from satellites. They are often coupled with gyro sensors and odometers to correct the positioning information.

Figure 1. ZMP Robocar "HV" and examples of sensors and plug-in computers: Velodyne HDL-64e and HDL-32e 3D Lidars, Ibeo LUX 8L 3D Lidar, Hokuyo UTM-30LX Lidar, Point Grey Ladybug 5 and Grasshopper 3 cameras, Javad RTK-GNSS receiver, and workstation and laptop computers. Omnidirectional cameras, Lidar sensors, and GNSS receivers are mounted outside the vehicle, while single-view cameras and computers are installed inside.


These sensors can be connected to commodity network interfaces, such as Ethernet and USB 3.0. Note that sensors for autonomous vehicles are not limited to these; for example, millimeter-wave radars and inertial measurement units are often preferred for autonomous vehicles.

Regarding the plug-in computer, because commercial vehicles support only a DC 12-V power supply, we add an inverter and battery to provide an AC 100-V power supply for the prototype vehicle in Figure 1. Note that the choice of a plug-in computer depends on the performance requirements for autonomous vehicles. In our project, we started with a high-performance workstation to test algorithms. Once the algorithms were developed, we switched to a mobile laptop to downsize the system. We are now aiming to use an embedded system on a chip (SoC), taking the production image into account.

Algorithms
We can roughly classify autonomous-driving components into scene recognition, path planning, and vehicle control. Each class comprises a set of algorithms. For instance, scene recognition requires localization, object-detection, and object-tracking algorithms. Path planning often falls into mission and motion planning, whereas vehicle control corresponds to path following.

Figure 2 shows the algorithms' basic control and data flow. Here, we introduce examples of the algorithms used in our autonomous-vehicles platform.

Localization
Localization is one of the most basic and important problems in autonomous driving. Particularly in urban areas, localization precision dominates the reliability of autonomous driving. We use the Normal Distributions Transform (NDT) algorithm to solve this localization problem.2 To be precise, we use the 3D version of NDT to perform scan matching between 3D point-cloud data and 3D map data.3 As a result, localization can perform on the order of centimeters, leveraging a high-quality 3D Lidar sensor and a high-precision 3D map. We chose the NDT algorithm because it can be used in 3D form and its computational cost does not suffer from map size (the number of points).

Figure 2. Basic control and data flow of algorithms (point-cloud, 3D map, and image data feed localization and object detection; the current position and object positions drive mission planning, motion planning, and path following). The output velocity and angle are sent as commands to the vehicle controller.
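To make the scan-matching step concrete, the following minimal sketch aligns a single Lidar scan against a 3D map using the 3D NDT implementation in the Point-Cloud Library (introduced later in the software stack). The file names, voxel leaf size, and NDT parameters are illustrative assumptions rather than the platform's actual settings.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/ndt.h>
#include <iostream>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr map(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr scan(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("map.pcd", *map);    // prebuilt 3D map (hypothetical file)
  pcl::io::loadPCDFile("scan.pcd", *scan);  // one Lidar scan (hypothetical file)

  // Downsample the live scan to keep the matching cost low.
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setLeafSize(1.0f, 1.0f, 1.0f);
  voxel.setInputCloud(scan);
  voxel.filter(*filtered);

  // 3D NDT scan matching of the scan against the map.
  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setTransformationEpsilon(0.01);  // convergence threshold
  ndt.setStepSize(0.1);                // line-search step size
  ndt.setResolution(1.0);              // NDT grid resolution in meters
  ndt.setMaximumIterations(35);
  ndt.setInputSource(filtered);
  ndt.setInputTarget(map);

  // Initial guess, for example from odometry or the previous pose.
  Eigen::Matrix4f init = Eigen::Matrix4f::Identity();
  pcl::PointCloud<pcl::PointXYZ> aligned;
  ndt.align(aligned, init);

  // The final transformation is the vehicle pose in the map frame.
  std::cout << "converged: " << ndt.hasConverged()
            << ", fitness: " << ndt.getFitnessScore() << std::endl;
  std::cout << ndt.getFinalTransformation() << std::endl;
  return 0;
}

In the running system, the resulting pose would be published at every scan and reused as the initial guess for the next alignment.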



Localization is also a key technique to build a 3D map. Because 3D Lidar sensors produce 3D point-cloud data in real time, if our autonomous vehicle is localized correctly, a 3D map is created and updated by registering the 3D point-cloud data at every scan. This is often referred to as simultaneous localization and mapping.

Object detection
Once we localize our autonomous vehicle, we next detect objects, such as vehicles, pedestrians, and traffic signals, to avoid accidents and violations of traffic rules. We focus on moving objects (vehicles and pedestrians), although our platform can also recognize traffic signals and lights. We use the Deformable Part Models (DPM) algorithm to detect vehicles and pedestrians.4 DPM searches and scores the Histogram of Oriented Gradients (HOG) features of target objects on the image captured by a camera.5 We chose these algorithms because they scored the best numbers in past Pattern Analysis, Statistical Modeling, and Computational Learning (Pascal) Visual Object Classes challenges.

Apart from image processing, we also use point-cloud data scanned from a 3D Lidar sensor to detect objects by Euclidean clustering. Point-cloud clustering aims to obtain the distance to objects rather than to classify them. The distance information can be used to range and track the objects classified by image processing. This combined approach with multiple sensors is often referred to as sensor fusion.
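As a rough sketch of this clustering step, the following PCL-based example groups a Lidar scan into object candidates. The input file name and the tolerance and cluster-size thresholds are assumptions chosen for illustration, not the platform's tuned values.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <cstdio>
#include <vector>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("scan.pcd", *cloud);   // hypothetical input scan

  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  // Euclidean clustering: points closer than the tolerance join the same cluster.
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.5);   // meters
  ec.setMinClusterSize(20);      // discard tiny clusters (noise)
  ec.setMaxClusterSize(25000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);
  ec.extract(clusters);

  // Each cluster is one object candidate; its points give the range to the object.
  std::printf("found %zu object candidates\n", clusters.size());
  return 0;
}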

Operating under the assumption that our autonomous vehicle drives on a public road, we can improve the detection rate using a 3D map and the current position information. By projecting the 3D map onto the image from the current position, we know the exact road area on the image. We can therefore constrain the region of interest for image processing to this road area, saving execution time and reducing false positives.

Object tracking
Because we perform the object-detection algorithm on each frame of the image and point-cloud data, we must associate its results with other frames on a time basis so that we can predict the trajectories of moving objects for mission and motion planning. We use two algorithms to solve this tracking problem. The Kalman filter is used under the linear assumption that our autonomous vehicle is driving at constant velocity while tracking moving objects.6 Its computational cost is lightweight and suited for real-time processing. Particle filters, on the other hand, can work in nonlinear tracking scenarios, in which both our autonomous vehicle and the tracked vehicles are moving.7

In our platform, we use both Kalman filters and particle filters, depending on the given scenario. We also apply them for tracking on both the 2D (image) plane and the 3D (point-cloud) plane.
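The following sketch shows the constant-velocity case with OpenCV's cv::KalmanFilter, tracking a single detection on the image plane. The frame period, noise covariances, and the synthetic measurements are illustrative assumptions; in the platform, the measurements would come from the per-frame detector output.

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
  // Constant-velocity model on the image plane:
  // state = [x, y, vx, vy], measurement = [x, y].
  cv::KalmanFilter kf(4, 2, 0);
  const float dt = 0.1f;  // assumed frame period (s)
  kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
      1, 0, dt, 0,
      0, 1, 0, dt,
      0, 0, 1, 0,
      0, 0, 0, 1);
  cv::setIdentity(kf.measurementMatrix);
  cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));
  cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
  cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));

  for (int frame = 0; frame < 10; ++frame) {
    cv::Mat prediction = kf.predict();  // predicted object position for this frame
    // Synthetic detection standing in for the detector output.
    cv::Mat meas = (cv::Mat_<float>(2, 1) << 100.f + 5.f * frame, 200.f + 2.f * frame);
    cv::Mat estimate = kf.correct(meas);  // fuse the prediction with the detection
    std::printf("frame %d: predicted (%.1f, %.1f), corrected (%.1f, %.1f)\n",
                frame,
                prediction.at<float>(0), prediction.at<float>(1),
                estimate.at<float>(0), estimate.at<float>(1));
  }
  return 0;
}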

Projection and reprojection
We augment the scene recognition supported by our platform with sensor fusion of a camera and a 3D Lidar sensor. To calculate the extrinsic parameters required for this sensor fusion, we calibrate the camera and the 3D Lidar sensor. We can then project the 3D point-cloud information obtained by the 3D Lidar sensor onto the image captured by the camera, which lets us add depth information to the image and filter the region of interest for object detection.

The result of object detection on the image can also be reprojected onto the 3D point-cloud coordinates using the same extrinsic parameters. We use the reprojected object positions to determine the motion plan and, in part, the mission plan.
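A minimal projection sketch using OpenCV is shown below. The intrinsic matrix, distortion coefficients, and 3D points are placeholder values; for simplicity the extrinsics are left as identity, whereas in the real pipeline rvec and tvec would hold the Lidar-to-camera transform obtained from calibration and the points would be raw Lidar returns.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
  // Placeholder intrinsics; the real values come from camera calibration.
  cv::Mat K = (cv::Mat_<double>(3, 3) <<
               1000, 0, 640,
               0, 1000, 360,
               0, 0, 1);
  cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);   // distortion coefficients

  // Identity extrinsics, so the 3D points below are expressed directly in the
  // camera frame (x right, y down, z forward).
  cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
  cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

  std::vector<cv::Point3d> points3d = {
      {0.5, 1.2, 10.0}, {-2.0, 0.8, 15.0}, {3.0, 1.5, 30.0}};

  std::vector<cv::Point2d> pixels;
  cv::projectPoints(points3d, rvec, tvec, K, dist, pixels);

  for (size_t i = 0; i < pixels.size(); ++i)
    std::printf("point %zu -> pixel (%.1f, %.1f)\n", i, pixels[i].x, pixels[i].y);
  return 0;
}

Reprojection in the opposite direction uses the inverse of the same extrinsic transform, together with the depth supplied by the point cloud, to place detected objects back into the 3D frame.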

Mission planning
Our mission planner is semiautonomous. Under the traffic rules, we use a rule-based mechanism to autonomously assign the path trajectory, such as for lane changes, merges, and passing. In more complex scenarios, such as parking and recovering from operational mistakes, the driver can supervise the path. In either case, once the path is assigned, the local motion planner is launched.

Our mission planner's basic policy is that we drive in the cruising lane throughout the route provided by a commodity navigation application. The lane is changed only when our autonomous vehicle is passing the preceding vehicle or approaching an intersection followed by a turn.

Motion planning
The motion planner is a design knob for autonomous driving that corresponds to the driving behavior, which is not identical among users and environments. Hence, our platform provides only a basic motion-planning strategy. Higher-level intelligence must be added on top of this platform, depending on the target scenarios.

In unstructured environments, such as parking lots, we provide graph-search algorithms, such as A*,8 to find a minimum-cost path to the goal in space lattices.9 In structured environments, such as roads and traffic lanes, on the other hand, the density of vertices and edges is likely high and not uniform, which constrains the selection of feasible headings. We therefore use conformal spatiotemporal lattices to adapt the motion plan to the environment.10 State-of-the-art research encourages the implementation of these algorithms.1
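To illustrate the graph-search side, the sketch below runs A* on a small 4-connected occupancy grid. It is a toy stand-in for the lattice-based planners cited above: no vehicle kinematics, unit edge costs, a Manhattan-distance heuristic, and a made-up grid in main.

#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int x, y; double f; };
struct Cmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

// Length (cell count) of a shortest 4-connected path, or -1 if unreachable.
// grid[y][x] == 1 marks an obstacle cell.
int aStar(const std::vector<std::vector<int>>& grid, int sx, int sy, int gx, int gy) {
  const int H = grid.size(), W = grid[0].size();
  auto h = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };
  std::vector<std::vector<double>> g(H, std::vector<double>(W, 1e18));
  std::priority_queue<Node, std::vector<Node>, Cmp> open;
  g[sy][sx] = 0.0;
  open.push({sx, sy, static_cast<double>(h(sx, sy))});
  const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
  while (!open.empty()) {
    Node n = open.top();
    open.pop();
    if (n.x == gx && n.y == gy) return static_cast<int>(g[n.y][n.x]);
    if (n.f > g[n.y][n.x] + h(n.x, n.y)) continue;   // stale queue entry
    for (int i = 0; i < 4; ++i) {
      int nx = n.x + dx[i], ny = n.y + dy[i];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
      double cost = g[n.y][n.x] + 1.0;               // unit cost per move
      if (cost < g[ny][nx]) {
        g[ny][nx] = cost;
        open.push({nx, ny, cost + h(nx, ny)});
      }
    }
  }
  return -1;
}

int main() {
  std::vector<std::vector<int>> grid = {
      {0, 0, 0, 0},
      {1, 1, 0, 1},
      {0, 0, 0, 0},
      {0, 1, 1, 0}};
  std::printf("path length: %d\n", aStar(grid, 0, 0, 3, 3));  // start (0,0), goal (3,3)
  return 0;
}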

Path following
We control our autonomous vehicle to follow the path generated by the motion planner. We use the Pure Pursuit algorithm to solve this path-following problem.11 Following the Pure Pursuit algorithm, we break the path down into multiple waypoints, which are discrete representations of the path. At every control cycle, we search for the closest waypoint in the heading direction, limiting the search to waypoints beyond a specified threshold distance so that we can relax the change in angle when returning onto the path from a deviated position. The velocity and angle of the next motion are set to values that bring the vehicle to the selected waypoint with a predefined curvature.

We update the target waypoint accordingly until the goal is reached; the vehicle keeps following updated waypoints and finally reaches the goal. If the control of gas and brake stroking and steering is not aligned with the velocity and angle output of the Pure Pursuit algorithm because of noise, the vehicle could temporarily get off the path generated by the motion planner. Localization errors could also pose this problem. As a result, the vehicle could come across unexpected obstacles. To cope with this scenario, our path follower ensures a minimum distance to obstacles, overriding the given plan.
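The geometric core of Pure Pursuit is small enough to sketch directly. The function below assumes a bicycle model and a target waypoint already transformed into the vehicle frame (x forward, y left); the waypoint search and velocity scheduling described above are omitted, and the numbers in the closing comment are only an example.

#include <cmath>

// Steering angle that drives the vehicle onto the circular arc passing through
// the rear axle (origin of the vehicle frame) and the target waypoint.
// ty is the waypoint's lateral offset, ld the lookahead distance to it, and
// wheelbase the vehicle's axle-to-axle length.
double purePursuitSteering(double ty, double ld, double wheelbase) {
  double curvature = 2.0 * ty / (ld * ld);   // kappa = 2 * lateral offset / ld^2
  return std::atan(curvature * wheelbase);   // front-wheel angle in radians
}

// Example: a waypoint 10 m ahead and 1 m to the left with a 2.7 m wheelbase
// gives atan(2 * 1 / 100 * 2.7), roughly 0.054 rad of left steering.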

Software stack
We build the software stack of autonomous driving on the basis of open-source software. As a result, even the code implementing the algorithms we have discussed is accessible to the public. The software-stack framework is called Autoware, and it can be downloaded from the project repository (https://github.com/cpfl/autoware).

In this section, we introduce the core components of open-source software used in Autoware. Our autonomous vehicle is entirely operated by Autoware. It has already run over a few hundred miles in Nagoya, Japan.

Robot Operating System
Autoware is based on the Robot Operating System (ROS; www.ros.org), a component-based middleware framework developed for robot applications. In ROS, the system is abstracted by nodes and topics. The nodes represent individual component modules, whereas the topics hold input and output data exchanged between nodes. This is a strong abstraction model for compositional development.

ROS nodes are usually standard C++ programs. They can use any other software libraries installed in the system. Meanwhile, ROS nodes can launch several threads implicitly. Topics are also managed by first-in, first-out queues when accessed by multiple nodes simultaneously. Real-time issues, however, must be addressed.
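A minimal ROS node in this style looks like the following sketch: it subscribes to a point-cloud topic and logs what arrives. The topic and node names are assumptions for illustration; the callback is where a module such as NDT localization would do its work.

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Hypothetical topic name; the actual name depends on the sensor driver setup.
static const char* kInputTopic = "/points_raw";

void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr& msg) {
  // A real node would run localization or detection here.
  ROS_INFO("received cloud: %u x %u points in frame %s",
           msg->width, msg->height, msg->header.frame_id.c_str());
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "cloud_listener");              // node name
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe(kInputTopic, 1, cloudCallback);
  ros::spin();                                          // process callbacks until shutdown
  return 0;
}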

ROS also provides an integrated visualization tool called RViz. Figure 3 shows an example of visualization for perception tasks in Autoware. The RViz viewer is useful for checking the status of tasks.

Point-Cloud Library
The Point-Cloud Library (PCL; http://pointclouds.org) was developed to manage point-cloud data. It supports many algorithms in the library package, including the 3D NDT algorithm.3 Autoware uses this PCL version of 3D NDT for localization and mapping. In our experience, the localization or mapping error can be capped at 10 to 20 cm.


Another usage of PCL in Autoware is to implement the Euclidean clustering algorithm for object detection. In addition, ROS uses PCL internally in many places. For example, the RViz viewer provided by ROS is a PCL application.

OpenCV
OpenCV (http://opencv.org) is a popular computer vision library for image processing. It supports the DPM algorithm4 and the Histogram of Oriented Gradients features.5 Apart from algorithm implementations, OpenCV provides API library functions (such as loading, converting, and drawing images), which are useful for constructing a framework of image-processing programs. Autoware combines OpenCV with the RViz viewer of ROS for visualization.
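As a lighter-weight illustration of HOG-based detection in OpenCV (a stand-in for the full DPM pipeline, not the platform's actual detector), the sketch below runs OpenCV's pretrained HOG pedestrian detector on an image given on the command line. The detection parameters are illustrative defaults.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
  if (argc < 2) {
    std::fprintf(stderr, "usage: %s <image>\n", argv[0]);
    return 1;
  }
  cv::Mat img = cv::imread(argv[1]);
  if (img.empty()) return 1;

  // HOG descriptor with OpenCV's pretrained pedestrian SVM.
  cv::HOGDescriptor hog;
  hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

  std::vector<cv::Rect> found;
  hog.detectMultiScale(img, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);

  for (const cv::Rect& r : found)
    std::printf("pedestrian at (%d, %d), %d x %d\n", r.x, r.y, r.width, r.height);
  return 0;
}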

Figure 3. Visualization tools for developers. Both a 3D map plane and a 2D image plane can be displayed. The results of traffic-light recognition and path generation are also displayed.

CUDA
CUDA (https://developer.nvidia.com/cuda-zone) is a framework for general-purpose computing on GPUs (GPGPU). Because complex algorithms of autonomous driving, such as NDT, DPM, and A*, are often computing intensive and data parallel, their execution speeds can be improved significantly using CUDA. For example, the DPM algorithm contains a nontrivial number of computing-intensive and data-parallel blocks that can be significantly accelerated by CUDA.12 Because autonomous vehicles need real-time performance, CUDA is a strong way to speed up the algorithms.

Android
Autoware uses Android (www.android.com) applications for the human-driver interface. The driver can search for a route using a navigation map on an Android tablet (see Figure 4). The searched route is then reflected in the 3D map used in Autoware.

openFrameworks
Another piece of software used in Autoware for the driver interface is openFrameworks (www.openframeworks.cc). Although Android applications are well suited to receiving input from the driver, openFrameworks provides creative displays to visualize the status of autonomous driving for the driver. For example, it can be used with smart glasses, as shown in Figure 4.

Figure 4. User interface tools for drivers. Smart tablets are used to input the destination of the trip, displaying a route candidate. To present more valuable information to drivers, smart glasses and displays can be used together.

Datasets
In our platform, a 3D map is required to localize the autonomous vehicle. As mentioned earlier, we can generate a 3D map using Autoware; however, this is a time-consuming routine, and such maps should ideally be provided by mapmakers as part of general datasets for autonomous driving. We are collaborating with Aisan Technology, a mapmaker in Japan, on field-operational tests of autonomous driving in Nagoya. Autoware is designed to be able to use both their 3D maps and those generated by Autoware itself. This compatibility with a third-party format allows Autoware to be deployed widely in industry and academia.

Simulation of autonomous driving also needs datasets. Field-operational tests always entail some risk and are not always time efficient, so simulation is desired in most cases. Fortunately, ROS provides an excellent record-based simulation framework called ROSBAG. We suggest that ROSBAG be used primarily to test functions of autonomous vehicles, with field-operational tests performed for real-world verification.



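As a sketch of how recorded data can be replayed programmatically, the following example reads point clouds back out of a bag file with the rosbag C++ API. The bag path comes from the command line, and the topic name /points_raw is an assumption; the rosbag play tool can equally feed the same data to live nodes.

#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <sensor_msgs/PointCloud2.h>
#include <cstdio>
#include <string>
#include <vector>

int main(int argc, char** argv) {
  if (argc < 2) {
    std::fprintf(stderr, "usage: %s <file.bag>\n", argv[0]);
    return 1;
  }
  rosbag::Bag bag;
  bag.open(argv[1], rosbag::bagmode::Read);          // recorded drive data

  // Hypothetical topic name; use whatever topics were actually recorded.
  std::vector<std::string> topics = {"/points_raw"};
  rosbag::View view(bag, rosbag::TopicQuery(topics));

  size_t count = 0;
  for (const rosbag::MessageInstance& m : view) {
    sensor_msgs::PointCloud2::ConstPtr cloud = m.instantiate<sensor_msgs::PointCloud2>();
    if (cloud) ++count;                              // feed the cloud into the pipeline here
  }
  bag.close();
  std::printf("replayed %zu point clouds\n", count);
  return 0;
}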

Autoware lets researchers and developers use such structured 3D maps and recorded data. Simulation of autonomous driving on a 3D map can be easily conducted using Autoware. Samples of these datasets can also be downloaded through the Autoware website. Given that vehicles and sensors are often expensive, simulation with the sample datasets might be preferred to on-site experiments in some cases.

Performance requirements
Considering that cameras and Lidar sensors often operate at 10 to 100 Hz, each task of autonomous driving must run under some timing constraints.

We provide a case study of our autonomous-driving system on several computers employing Intel CPUs and Nvidia GPUs. On Intel CPUs, such as the Xeon and Core i7 series, the A* search algorithm consumes the most time, requiring on the order of seconds or more if the search area is large. Although faster is better, this might not be a significant problem, given that A* search is often used for mission planning, which launches only when the path needs to change. A more significant problem resides in real-time tasks, such as motion planning, localization, and object detection. The DPM algorithm spends more than 1,000 ms on VGA-size images with the default parameter setting in OpenCV, whereas the NDT and State Lattice algorithms can run in several tens of milliseconds. Their execution times must be reduced to drive autonomous vehicles in the real world.

We implemented these algorithms using Nvidia GPUs. As reported elsewhere,12 Nvidia GPUs bring 5 to 10 times performance improvements over Intel CPUs for the DPM algorithm. In fact, we also observed equivalent effects on the A*, NDT, and State Lattice algorithms, although the magnitude of improvement depends on the parameters.

One particular case study of our autonomous-driving system used a laptop with an Intel Core i7-4710MQ CPU and an Nvidia GTX980M GPU. We demonstrated that most real-time tasks, including the NDT and State Lattice algorithms, can run within 50 ms, whereas the DPM algorithm still consumes more than 100 to 200 ms on the GPU. This implies that our system currently exhibits a performance bottleneck in object detection. Assuming that the vehicle is self-driving at 40 km/hour in urban areas and that autonomous functions should take effect every 1 m, the execution time of each real-time task must be less than 100 ms (at 40 km/hour, or roughly 11 m/s, the vehicle covers 1 m in about 90 ms), although a multitasking environment could make the problem more complicated. If our autonomous-driving system remains as is, the performance of GPUs must improve by at least a factor of two to meet our assumption. Application-specific integrated circuit and field-programmable gate array solutions could also exist.

Real-time tasks other than planning, localization, and detection are negligible in terms of execution time, but some tasks are very sensitive to latency. For example, a vehicle-control task in charge of steering and of acceleration and brake stroking should run with a latency of a few milliseconds.

Finally, because such throughput- and latency-aware tasks are coscheduled in the same system, operating systems must reliably multitask real-time processing. This leads to the conclusion that the codesign of hardware and software is an important consideration for autonomous driving.

This article has introduced an open platform for commodity autonomous vehicles. The hardware components, including ZMP Robocars, sensors, and computers, can be purchased on the market. The autonomous-driving algorithms are widely recognized, and Autoware can be used as a basic software platform comprising ROS, PCL, OpenCV, CUDA, Android, and openFrameworks. Sample datasets, including 3D maps and simulation data, are also packaged as part of Autoware. To the best of our knowledge, this is the first autonomous-vehicles platform accessible to the public.

References
1. C. Urmson et al., "Autonomous Driving in Urban Environments: Boss and the Urban Challenge," J. Field Robotics, vol. 25, no. 8, 2008, pp. 425–466.
2. P. Biber et al., "The Normal Distributions Transform: A New Approach to Laser Scan Matching," Proc. IEEE/RSJ Int'l Conf. Intelligent Robots and Systems, 2003, pp. 2743–2748.
3. M. Magnusson et al., "Scan Registration for Autonomous Mining Vehicles Using 3D-NDT," J. Field Robotics, vol. 24, no. 10, 2007, pp. 803–827.
4. P. Felzenszwalb et al., "Object Detection with Discriminatively Trained Part-Based Models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 9, 2010, pp. 1627–1645.
5. N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, 2005, pp. 886–893.
6. R. Kalman, "A New Approach to Linear Filtering and Prediction Problems," ASME J. Basic Eng., vol. 82, no. 1, 1960, pp. 35–45.
7. M. Arulampalam et al., "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Trans. Signal Processing, vol. 50, no. 2, 2002, pp. 174–188.
8. P. Hart et al., "A Formal Basis for the Heuristic Determination of Minimum Cost Paths," IEEE Trans. Systems Science and Cybernetics, July 1968, pp. 100–107.
9. M. Pivtoraiko et al., "Differentially Constrained Mobile Robot Motion Planning in State Lattices," J. Field Robotics, vol. 26, no. 3, 2009, pp. 308–333.
10. M. McNaughton et al., "Motion Planning for Autonomous Driving with a Conformal Spatiotemporal Lattice," Proc. IEEE Int'l Conf. Robotics and Automation, 2011, pp. 4889–4895.
11. R. Coulter, Implementation of the Pure Pursuit Path Tracking Algorithm, tech. report CMU-RI-TR-92-01, Robotics Inst., Carnegie Mellon Univ., 1992.
12. M. Hirabayashi et al., "Accelerated Deformable Part Models on GPUs," to be published in IEEE Trans. Parallel and Distributed Systems, 2015.

Shinpei Kato is an associate professor in the Graduate School of Information Science at Nagoya University. His research interests include operating systems, cyber-physical systems, and parallel and distributed systems. Kato has a PhD in engineering from Keio University. Contact him at [email protected].

Eijiro Takeuchi is a designated associate professor in the Institute of Innovation for Future Society at Nagoya University. His research interests include autonomous mobile robots and intelligent vehicles. Takeuchi has a PhD in engineering from the University of Tsukuba. Contact him at [email protected].

Yoshio Ishiguro is an assistant professor in the Institute of Innovation for Future Society at Nagoya University. His research interests include mixed reality, augmented reality, and human-computer integration. Ishiguro has a PhD in information science and studies from the University of Tokyo. Contact him at [email protected].

Yoshiki Ninomiya is a designated professor in the Institute of Innovation for Future Society at Nagoya University. His research interests include machine intelligence and image processing, and their applications. Ninomiya has a PhD in engineering from Nagoya University. Contact him at [email protected].

Kazuya Takeda is a professor in the School of Information Science at Nagoya University. His research interests include media signal processing and its applications. Takeda has a PhD in engineering from Nagoya University. Contact him at [email protected].

Tsuyoshi Hamada is an associate professor in the Nagasaki Advanced Computing Center at Nagasaki University. His research interests include massively parallel architectures based on FPGAs and GPUs. Hamada has a PhD in engineering from the University of Tokyo. Contact him at [email protected].
