
A Survey of Research on Cloud Robotics and Automation

Ben Kehoe, Member, IEEE, Sachin Patil, Member, IEEE, Pieter Abbeel, Member, IEEE, and Ken Goldberg, Fellow, IEEE

Abstract—The Internet and more broadly the global “Cloud” infrastructure suggest a range of potential benefits for robots and automation systems. This survey summarizes over 100 research articles on this topic. In addition to early work on Networked Robots, recent work is organized under five potential benefits from the Cloud: 1) Big Data: access to remote libraries of images, maps, trajectories, and object data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Open-Source/Open-Access: human sharing of code, data, algorithms, and hardware designs; 4) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 5) Crowdsourcing and call centers: access to remote human expertise for analyzing images, classification, learning, and error recovery. This survey will be updated as we learn of other work and as new research emerges. An open online repository for suggested additions to the survey is available at: http://goldberg.berkeley.edu/cloud-robotics/

Note to Practitioners—Although robots and automation systems are at the forefront of research, most systems still operate independently using onboard computation, memory, and programming. Emerging advances and the increasing availability of networking in the “Cloud” facilitate new approaches where robot and automation processing is performed remotely with access to dynamic global datasets to support a range of functions. This paper surveys research to date.

Index Terms—Cloud Automation, Cloud Robotics, Big Data, Cloud Computing, Open Source, Crowdsourcing

I. INTRODUCTION

Most industrial automation systems require multiple machines to be coordinated; the concept of communication between automation systems and robots has a long history. Efforts to standardize network protocols date back to the early 1980s, when General Motors developed the Manufacturing Automation Protocol (MAP) [66]. Since the introduction of the World Wide Web in the early 1990s, TCP and UDP over IP have become the standard networking protocols, combined with other protocol layers such as HTTP for rapid access to local and global resources [89].

An industrial robot was connected to the Web in 1994, allowing anyone to teleoperate the robot remotely via an internet browser [49]. A number of other web-based teleoperated robots were developed [50]–[52], [85], and in May 2001 the IEEE Robotics and Automation Society established a Technical Committee on Networked Robots [10], which has organized a number of conference workshops. Two chapters of the Springer Handbook of Robotics are devoted to Networked Telerobots

Manuscript received March 15, 2014. The authors are with the University of California, Berkeley, CA, USA (e-mail: {benk, sachinpatil, pabbeel, goldberg}@berkeley.edu).

Fig. 1. Schematic architecture of the Mobile Millennium system in the Cloud. Hunter et al. describe algorithms for a system where data such as location, noise level, and air quality is streamed from thousands of automobiles to remote servers that track traffic and other conditions for large metropolitan areas [63]. (Image reproduced with permission from authors).

(where robots are operated remotely by humans using global networks) and Networked Robots (where robots communicate with each other using local networks; also related to sensor networks and swarm robotics), respectively [77], [105]. This survey focuses on Cloud Robotics and Automation, where robots and automation systems communicate with each other using global networks.

In 2010, James Kuffner at Google introduced the term “Cloud Robotics” to describe a broad vision where the Internet provides massively parallel computation and sharing of vast data resources [56], [76], related to earlier work on “Remote-Brained” robots [65]. Steve Cousins, former CEO of Willow Garage, aptly summarized the concept: “No robot is an island.”

The Google autonomous car exemplifies the idea: it indexes maps and images, collected and updated via satellite, Streetview, and crowdsourcing, from the network to facilitate accurate localization. Similarly, the Mobile Millennium project (see Figure 1) streams traffic and environmental data from thousands of automobiles to Cloud servers for aggregation and analysis [63].

Cloud Robotics and Automation is also related to industry initiatives such as the “Internet of Things” [30], the “Industrial Internet” [44], and “Industry 4.0” [11]. There is active interest in incorporating RFID and inexpensive processors into a vast array of objects and systems, from inventory items to household appliances, allowing them to communicate and share information [1].

This survey is organized under five potential benefits from the Cloud: 1) Big Data: access to remote libraries of images, maps, trajectories, and object data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Open-Source/Open-Access: human sharing of code, data, algorithms, and hardware designs; 4) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 5) Crowdsourcing and call centers: access to remote human expertise for analyzing images, classification, learning, and error recovery.



Fig. 2. System architecture for cloud-based object recognition for grasping. The robot captures an image of an object and sends it via the network to the Google object recognition server. The server processes the image and returns data for a set of candidate objects, each with pre-computed grasping options. The robot compares the returned CAD models with the detected point cloud to refine identification and to perform pose estimation, and selects an appropriate grasp. After the grasp is executed, data on the outcome is used to update models in the cloud for future reference [72]. (Image reproduced with permission from authors).


This survey will be updated as we learn of other work and as new research emerges. An open online repository for suggested additions to the survey is available at: http://goldberg.berkeley.edu/cloud-robotics/

II. BIG DATA

The term “Big Data” describes large data sets involving gigabytes or terabytes of data that cannot be efficiently managed using standard relational database systems, for example images, video, maps, and real-time network and financial transactions [79]. Such datasets are often available online and can be indexed by robots to compute, for example, appropriate grasps of an object. The Columbia Grasp dataset [53], the KIT object dataset [71], and the Willow Garage Household Objects Database [37] are available online and have been widely used to evaluate different aspects of grasping algorithms, including grasp stability [40], [41], robust grasping [116], and scene understanding [97].

Related work explores how computer vision can be used with Cloud resources to incrementally learn grasp strategies [37], [88] by matching sensor data against 3D CAD models in an online database. Examples of sensor data include 2D image features [62], 3D features [54], and 3D point clouds [36]. Google Goggles [9], a free network-based image recognition service for mobile devices, has been incorporated into a system for robot grasping [72], as illustrated in Figure 2.
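
The division of labor in Figure 2 can be sketched in a few lines of Python. The endpoint, response schema, and robot-side helpers below are hypothetical placeholders (the recognition service used in [72] is not a public API); the sketch only illustrates how perception, cloud lookup, grasp selection, and outcome reporting fit together:

# Illustrative sketch only: the endpoint, response schema, and robot-side
# helpers are hypothetical stand-ins, not the actual service used in [72].
import requests

RECOGNITION_URL = "https://recognition.example.com"  # hypothetical cloud service

def registration_score(cad_model, point_cloud):
    # Placeholder: score how well a candidate CAD model matches the sensed cloud.
    return 0.0

def estimate_pose(cad_model, point_cloud):
    # Placeholder: align the CAD model to the point cloud (e.g., with ICP).
    return {"position": [0.0, 0.0, 0.0], "orientation": [0.0, 0.0, 0.0, 1.0]}

def grasp_via_cloud(image_bytes, point_cloud):
    # 1. Send the camera image to the cloud object recognition service.
    resp = requests.post(RECOGNITION_URL + "/recognize",
                         files={"image": image_bytes}, timeout=10.0)
    resp.raise_for_status()
    candidates = resp.json()["candidates"]  # label, CAD model, pre-computed grasps

    # 2. Refine identification and estimate pose against the local 3D point cloud.
    best = max(candidates, key=lambda c: registration_score(c["cad_model"], point_cloud))
    pose = estimate_pose(best["cad_model"], point_cloud)

    # 3. Select the pre-computed grasp with the highest success probability.
    grasp = max(best["grasps"], key=lambda g: g["success_probability"])

    # 4. After executing the grasp, report the outcome so the cloud models improve.
    success = True  # placeholder for the measured outcome of the executed grasp
    requests.post(RECOGNITION_URL + "/outcomes",
                  json={"label": best["label"], "grasp": grasp, "success": success})
    return grasp, pose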

Dalibard et al. attach “manuals” of manipulation tasks to objects [39]. The RoboEarth project stores data related to objects, maps, and tasks, for applications ranging from object recognition to mobile navigation to grasping and manipulation (see Figure 8) [114].

Fig. 3. The “Google Goggles” system combines an enormous dataset of images and textual labels with machine learning to facilitate object recognition in the Cloud [9], [76]. (Image reproduced with permission from authors).

Fig. 4. The Darpa Robotics Challenge (DRC) used CloudSim, an open-source cloud-based simulation platform, for testing the performance of the Atlas humanoid robot (shown) on a variety of disaster response tasks [6], [7]. The Cloud permits running interactive, real-time simulation tasks in parallel for purposes such as predicting and evaluating performance, validating design decisions, optimizing designs, and training users. The competition also encouraged sharing of robotics research efforts. (Image reproduced with permission from authors).

Online datasets are effectively used to facilitate learning in computer vision. By leveraging Google’s 3D Warehouse, Lai et al. reduced the need for manually labeled training data [78]. Using community photo collections, Gammeter et al. created an augmented reality application with processing in the cloud [45]. Combining internet images with querying a local human operator, Hidago-Pena et al. provided a more robust object learning technique [59].

III. CLOUD COMPUTING

Massively parallel computation on demand is now widely available [27]. Examples include Amazon Web Services’ Elastic Compute Cloud, known as EC2 [2], [3], Google Compute Engine [8], and Microsoft Azure [12]. These provide a large pool of computing resources that can be rented by the public for short-term computing tasks, and recent research has evaluated and compared them [81], [82]. These services were originally used primarily by web application developers but have increasingly been used in scientific and technical high-performance computing (HPC) applications [20], [69], [86], [110].

Massively parallel computation can be combined with Big Data approaches for object recognition [72], [92], and with Collective Robot Learning for tracking and mapping [100], [101].


Fig. 5. A cloud-based approach to geometric shape uncertainty for grasping. (Top) Uncertainty in object pose and shape. (Bottom) Computed push grasps. Kehoe et al. use sampling over uncertainty distributions to find a lower bound on the probability of success for grasps [73], [74]. (Image reproduced with permission from authors).

Fig. 6. Distributed sampling-based roadmap of trees for motion planning in high-dimensional spaces. Plaku et al. show that their planner can “easily solve high-dimensional problems that exhaust resources available to single machines” [95]. (Image reproduced with permission from authors).


Cloud computing is challenging when there are real-time constraints [67]; this is an active area of research [83], and cloud-based formation control of ground robots has been demonstrated [111]. Additionally, research is ongoing into characterizing the properties of networked control systems [24], [25], [75]. However, there are many robotics applications that are not time sensitive, such as decluttering a room or pre-computing grasp strategies.

There are many sources of uncertainty in robotics and automation [48]. Cloud computing allows massive sampling over error distributions, and Monte Carlo sampling is “embarrassingly parallel”; recent research in fields as varied as medicine [115] and particle physics [104] has taken advantage of the Cloud. Real-time video and image analysis can be performed in the Cloud [78], [90], [94]. Image processing in the Cloud has been used for assistive technology for the visually impaired [32] and for senior citizens [46]. Cloud computing is ideal for sample-based statistical motion planning under uncertainty, where it can be used to explore many possible perturbations in object and environment pose, shape, and robot response to sensors and commands [112]. Cloud-based sampling is also being investigated for grasping objects with shape uncertainty [73], [74] (see Fig. 5). A grasp planning algorithm accepts as input a nominal polygonal outline with Gaussian uncertainty around each vertex and the center of mass, and computes a grasp quality metric based on a lower bound on the probability of achieving force closure.
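
A minimal Monte Carlo sketch of this sampling idea follows; the force-closure test is a placeholder, not the zero-slip push-grasp analysis of [73], [74]:

# Monte Carlo sketch of sampling over shape uncertainty; the force-closure
# test below is a placeholder, not the actual analysis of [73], [74].
import random

def sample_shape(nominal_vertices, sigma):
    # Perturb each (x, y) vertex with independent Gaussian noise.
    return [(x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma))
            for (x, y) in nominal_vertices]

def achieves_force_closure(vertices, grasp):
    # Placeholder for a geometric test of force closure for the candidate grasp.
    return True

def estimated_success_probability(nominal_vertices, grasp, sigma=0.002, n_samples=10000):
    successes = sum(achieves_force_closure(sample_shape(nominal_vertices, sigma), grasp)
                    for _ in range(n_samples))
    return successes / n_samples

# Each sample is independent, so the loop is "embarrassingly parallel": the
# evaluations can be split across many cloud workers and the success counts
# merged to estimate (and lower-bound) the grasp quality.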

Fig. 7. A Cloud framework for cooperative tracking and mapping (C2TAM). Riazuelo et al. demonstrate compute-intensive bundle adjustment for simultaneous localization and mapping (SLAM) performed in the Cloud [100], [101]. (Image reproduced with permission from authors).


The Darpa Robotics Challenge (DRC) is “a competition of robot systems and software teams vying to develop robots capable of assisting humans in responding to natural and man-made disasters.” A unique aspect of the DRC is that the simulator is provided to all contestants through CloudSim, an open-source cloud-based simulation platform for testing the performance of the Atlas humanoid robot (see Fig. 4) on a variety of disaster response tasks [6], [7]. The Cloud permits running interactive, real-time simulation tasks in parallel for purposes such as predicting and evaluating performance, validating design decisions, optimizing designs, and training users.

IV. COLLECTIVE ROBOT LEARNING

The Cloud allows robots and automation systems to “share” data from physical trials in a variety of environments, for example initial and desired conditions, associated control policies and trajectories, and, importantly, data on performance and outcomes. Such data is a rich source for robot learning. Kiva Systems uses hundreds of mobile platforms to move pallets in warehouses, using a local network to coordinate motion and update tracking data. Such systems can also be expanded to global networks to facilitate shared path planning, where previously generated paths are adapted to similar environments [31], and for grasping [33], where, for example, grasp stability of finger contacts can be learned from previous grasps on an object [40].


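
As a toy illustration of the kind of pooling involved (the record fields and the in-memory dictionary standing in for a cloud database are hypothetical, not a published schema):

# Toy sketch: a dictionary stands in for a shared cloud database of outcomes;
# the record fields are illustrative only.
from collections import defaultdict

shared_outcomes = defaultdict(lambda: {"successes": 0, "trials": 0})

def report_outcome(object_id, grasp_id, success):
    # Called by any robot after a physical trial.
    record = shared_outcomes[(object_id, grasp_id)]
    record["trials"] += 1
    record["successes"] += int(success)

def success_rate(object_id, grasp_id):
    # Called by any robot before attempting the same grasp on the same object.
    record = shared_outcomes[(object_id, grasp_id)]
    return record["successes"] / record["trials"] if record["trials"] else None

report_outcome("mug_017", "grasp_3", True)   # trial by robot A
report_outcome("mug_017", "grasp_3", False)  # trial by robot B
print(success_rate("mug_017", "grasp_3"))    # robot C sees the pooled estimate: 0.5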

The RoboEarth project [114] envisions “a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment” [22]. It is developing system architectures related to other efforts on service robotics [28], [43], cloud networking [61], [70], and computing resources [64], as well as projects to generate 3D models of environments, speech recognition, and face recognition [109].

The MyRobots project [13] from RobotShop proposes a “social network” for robots: “In the same way humans benefit from socializing, collaborating and sharing, robots can benefit from those interactions too by sharing their sensor information giving insight on their perspective of their current state” [21].

The “Internet of Things” [30], [55], the “Industrial Internet” [44], and ubiquitous robotics [35] are three concepts that envision how RFID and inexpensive processors can be incorporated into a vast array of robots and objects, from inventory items to household appliances [84], to allow them to communicate and share information.

Fig. 8. RoboEarth architecture. RoboEarth aims to develop “a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment” [22], [114]. (Image reproduced with permission from authors).

Fig. 9. (Left) Schematic architecture of the Lightning path planning framework. Berenson et al. show a system that is able to learn from experience by reusing pre-computed motion plans, which could be stored in the Cloud. The planner attempts to find a brand-new plan while also retrieving and repairing an existing plan for a problem similar to the current one; whichever finishes first is chosen [31]. (Image reproduced with permission from authors).

V. OPEN-SOURCE AND OPEN-ACCESS

The Cloud facilitates sharing by humans of designs for hardware, data, and code.

Fig. 10. Schematic architecture of CloudThink. Wilhelm et al. developed an open standard for self-reporting sensing devices, such as sensors mounted in automobiles. Cloud-enabled storage of sensor network data can enable collaborative sharing of data for traffic routing and other applications [117]. (Image reproduced with permission from authors).

The success of open-source software [38], [58], [91] is now widely accepted in the robotics and automation community. A primary example is ROS, the Robot Operating System, which provides libraries and tools to help software developers create robot applications [14], [18], [99]. ROS has also been ported to Android devices [19]. ROS has become a standard akin to Linux and is now used by almost all robot developers in research and many in industry, with the ROS-Industrial project created to support these users [17]. Additionally, many simulation libraries for robotics are now open source, which allows students and researchers to rapidly set up and adapt new systems and share the resulting software. These include Bullet [5], a physics simulator originally used for video games; OpenRAVE [15] and Gazebo [7], simulation environments geared specifically toward robotics; OOPSMP, a motion-planning library [96]; and GraspIt!, a grasping simulator [87]. The open-source nature of these libraries allows them to be modified to suit applications they were not originally designed for.
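
For readers unfamiliar with ROS, a node can be as small as the following publisher, adapted from the standard ROS 1 “talker” tutorial pattern; it assumes a ROS 1 installation with rospy and a running roscore:

# Minimal ROS 1 publisher node in the style of the standard "talker" tutorial;
# assumes rospy and a running roscore.
import rospy
from std_msgs.msg import String

rospy.init_node("talker")
pub = rospy.Publisher("chatter", String, queue_size=10)
rate = rospy.Rate(1)  # publish at 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data="hello"))
    rate.sleep()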

Another exciting trend is open-source hardware, where CAD models and the technical details of construction of devices are made freely available [42], [102]. The Arduino project [4] is a widely used open-source microcontroller platform with many different sensors and actuators available, and it has been used in many robotics projects. The Raven [57] is an open-architecture laparoscopic surgery robot developed as a research platform; it is an order of magnitude less expensive than commercial surgical robots [23].

The Cloud can also be used to facilitate open challenges and design competitions. For example, the African Robotics Network, with support from the IEEE Robotics and Automation Society, hosted the “$10 Robot” Design Challenge in the summer of 2012. This open competition attracted 28 designs from around the world, including a winning entry from Thailand (see Fig. 13) that modified a surplus Sony game controller, adapting its embedded vibration motors to drive wheels and adding lollipops to the thumb switches as inertial counterweights for contact sensing; it can be built from surplus parts for US $8.96 [26].


Fig. 11. Crowdsourcing object identification to facilitate robot grasping. Sorokin et al. developed a Cloud robot system that incorporates Amazon’s Mechanical Turk to obtain semantic information about the world and subjective judgments [106]. (Image reproduced with permission from authors).

Fig. 12. Tiered human-in-the-loop assistance using Cloud-based resources for teleoperation. Leeper et al. developed an interface for operators to control grasp execution using a set of different strategies. The results indicate humans are able to select better and more robust grasp strategies [80], [118]. (Image reproduced with permission from authors).

Fig. 13. Suckerbot, designed by Tom Tilley of Thailand, a winner of the $10 Robot Design Challenge. This design can be built from surplus parts for US $8.96 [26]. (Image reproduced with permission from authors).


A separate form of sharing known as Software as a Service (SaaS) has been developing. With SaaS, the code does not need to be provided, nor even the algorithms used. Instead, an interface to the software is provided for other users, allowing them to utilize the algorithm. This model has been successfully used by Amazon and Google, and may be attractive for companies or individuals wishing to share the fruits of their research without divulging proprietary information. Google Goggles has been successfully used as a black-box object recognition system through SaaS [72].

VI. CROWDSOURCING AND CALL CENTERS

In contrast to automated telephone reservation and technical support systems, consider a future scenario where errors and exceptions are detected by robots and automation systems, which then access human guidance on demand at remote call centers. Human skill, experience, and intuition are being tapped to solve a number of problems such as image labeling for computer vision [37], [70], [76], including through the famous reCAPTCHA system [113], and learning associations between object labels and locations [103]. Amazon’s Mechanical Turk is pioneering on-demand “crowdsourcing” that can draw on “human computation” or “social computing systems”.

Research projects are exploring how this can be used for path planning [60], to determine depth layers, image normals, and symmetry from images [47], and to refine image segmentation [68]. Researchers are working to understand pricing models [107] and to apply crowdsourcing to grasping [106].

Networked robotics has a long history of allowing robots to be controlled over the web [49], but the ubiquitous nature of the Cloud is enabling new research into remote human operation [80], [106], [118].

VII. FUTURE DIRECTIONS

A recent U.S. National Research Council report summarizes many of the opportunities and challenges created by Big Data [93]. By the law of large numbers, bigger datasets facilitate more accurate inferences. Data processing may leverage this by using cheaper algorithms on larger data sets to keep running times manageable [34]. This will allow automated processes to become more responsive to the data they generate.

A primary challenge for real-time use of the Cloud is network latency. Faster data connections, both wired internet connections and wireless standards such as LTE [29], are reducing latency, and in the future latency may be low enough to reliably perform real-time computation in the Cloud. Increasingly, algorithms run both locally and in the Cloud. For example, speech recognition on smart phones attempts to find a solution locally while also sending the recording to the Cloud (if available) for more robust analysis, using whichever result is returned first. Algorithms similar in concept have recently been developed in robotics [31].
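
This “use whichever result returns first” pattern can be sketched as follows; the local and cloud solvers are placeholders, and the same structure applies to the plan-from-scratch versus retrieve-and-repair race in the Lightning framework [31]:

# Sketch of racing a local solver against a cloud solver and taking whichever
# finishes first; both solvers below are placeholders.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def solve_locally(query):
    return "local answer"   # e.g., an on-board recognizer or planner

def solve_in_cloud(query):
    return "cloud answer"   # e.g., an HTTP call to a remote service

def solve(query):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(solve_locally, query),
                   pool.submit(solve_in_cloud, query)]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # best effort; a real system would interrupt the slower solver
        return next(iter(done)).result()

print(solve("recording.wav"))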


Another challenge is creating common formats for representing data. While sensor data such as images and point clouds have a small number of widely used formats, even relatively simple data such as trajectories have no common standards. Higher-level robot “knowledge” is also difficult to standardize; work on developing a framework for these standards is ongoing [98], [108].
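
Purely as an illustration of the kind of self-describing trajectory record a common format might specify (this JSON layout is hypothetical, not an existing standard):

# Hypothetical, self-describing trajectory record; illustrative only, since no
# such interchange standard is widely adopted.
import json

trajectory = {
    "frame": "base_link",
    "joint_names": ["shoulder", "elbow", "wrist"],
    "units": {"positions": "radians", "time_from_start": "seconds"},
    "points": [
        {"time_from_start": 0.0, "positions": [0.00, 0.50, -0.20]},
        {"time_from_start": 0.5, "positions": [0.10, 0.45, -0.10]},
        {"time_from_start": 1.0, "positions": [0.20, 0.40, 0.00]},
    ],
}

print(json.dumps(trajectory, indent=2))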

We expect the Software as a Service (SaaS) paradigm discussed in Section V to see wider acceptance and use. Innovators may make use of service-oriented architecture (SOA) to provide algorithms and computational resources to the industrial automation community. SaaS may see a further extension as Robotics and Automation as a Service (RAaaS), in which algorithms and datasets are made widely available as services that can be used and combined in ways not imagined by their creators [16].

With the growing availability of on-demand human assistance for robots, we expect these methods to become more integrated with standard robotics and automation approaches. The need to contact a human must be balanced with the cost associated with doing so; algorithms will be designed to determine when it is best to contact a human and when to proceed on the robot’s own judgment.
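
A toy expected-cost comparison suggests one possible form such a rule could take; the cost model and numbers are illustrative only:

# Toy decision rule: ask a human when the expected cost of acting on an
# uncertain estimate exceeds the cost of a query. Numbers are illustrative.
def should_ask_human(confidence, cost_of_error, cost_of_human_query):
    expected_cost_autonomous = (1.0 - confidence) * cost_of_error
    return expected_cost_autonomous > cost_of_human_query

# A 70%-confident label, a $5.00 cost of a failed action, a $0.50 crowd query:
print(should_ask_human(confidence=0.70, cost_of_error=5.00, cost_of_human_query=0.50))  # True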

This survey will be updated as we learn of other work and as new research emerges. An open online repository for suggested additions to the survey is available at: http://goldberg.berkeley.edu/cloud-robotics/

ACKNOWLEDGMENT

The authors thank Alexandre Bayen, Kostas Bekris, Dmitry Berenson, Gary Bradski, Matei Ciocarlie, Javier Civera, Steve Cousins, Raffaello D’Andrea, Brian Gerkey, Erico Guizzo, Timothy Hunter, Volkan Isler, James Kuffner, Vijay Kumar, Sanjay Sarma, and Alex Waibel for ongoing insights and advice on this topic.

SPECIAL NOTE

We will also include references from the T-ASE special issue on Cloud-based Grasping in the final version of this survey, and can be contacted with suggestions for other references to include.

REFERENCES

[1] “A Roadmap for U.S. Robotics: From Internet to Robotics.” [Online]. Available: http://www.robotics-vo.us/node/332
[2] “Amazon Elastic Cloud (EC2).” [Online]. Available: http://aws.amazon.com/ec2/
[3] “Amazon Web Services.” [Online]. Available: http://aws.amazon.com
[4] “Arduino.” [Online]. Available: http://www.arduino.cc
[5] “Bullet Physics Library.” [Online]. Available: http://bulletphysics.org
[6] “CloudSim.” [Online]. Available: http://gazebosim.org/wiki/CloudSim/
[7] “Gazebo.” [Online]. Available: http://gazebosim.org/
[8] “Google Compute Engine.” [Online]. Available: https://cloud.google.com/products/compute-engine
[9] “Google Goggles.” [Online]. Available: http://www.google.com/mobile/goggles/
[10] “IEEE Networked Robots Technical Committee.” [Online]. Available: http://www-users.cs.umn.edu/~isler/tc/
[11] “Industry 4.0.” [Online]. Available: http://www.bmbf.de/en/19955.php
[12] “Microsoft Azure.” [Online]. Available: http://www.windowsazure.com
[13] “MyRobots.com.” [Online]. Available: http://myrobots.com
[14] “Open Source Robotics Foundation.” [Online]. Available: http://www.osrfoundation.org
[15] “OpenRAVE.” [Online]. Available: http://openrave.org/
[16] “Robotics and Automation as a Service (RAaaS).” [Online]. Available: http://rll.berkeley.edu/raas
[17] “ROS-Industrial.” [Online]. Available: http://rosindustrial.org
[18] “ROS (Robot Operating System).” [Online]. Available: http://ros.org
[19] “rosjava, an implementation of ROS in pure Java with Android support.” [Online]. Available: http://cloudrobotics.com
[20] “TOP500.” [Online]. Available: http://www.top500.org/list/2012/06/100
[21] “What is MyRobots?” [Online]. Available: http://myrobots.com/wiki/About
[22] “What is RoboEarth?” [Online]. Available: http://www.roboearth.org/what-is-roboearth
[23] “An Open-source Robo-surgeon,” 2012. [Online]. Available: http://www.economist.com/node/21548489
[24] B. Addad, S. Amari, and J.-J. Lesage, “Analytic Calculus of Response Time in Networked Automation Systems,” IEEE Transactions on Automation Science and Engineering, vol. 7, no. 4, pp. 858–869, 2010.
[25] ——, “Client-Server Networked Automation Systems Reactivity: Deterministic and Probabilistic Analysis,” IEEE Transactions on Automation Science and Engineering, vol. 8, no. 3, pp. 540–548, 2011.
[26] The African Robotics Network (AFRON), ““Ten Dollar Robot” Design Challenge Winners.” [Online]. Available: http://robotics-africa.org/designchallenge.html
[27] M. Armbrust, I. Stoica, M. Zaharia, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, and A. Rabkin, “A View of Cloud Computing,” Communications of the ACM, vol. 53, no. 4, p. 50, Apr. 2010.
[28] R. Arumugam, V. Enti, L. Bingbing, W. Xiaojun, K. Baskaran, F. Kong, A. Kumar, K. Meng, and G. Kit, “DAvinCi: A Cloud Computing Framework for Service Robots,” in Proc. Int. Conf. Robotics and Automation (ICRA), 2010, pp. 3084–3089.
[29] D. Astely, E. Dahlman, A. Furuskar, Y. Jading, M. Lindstrom, and S. Parkvall, “LTE: The Evolution of Mobile Broadband,” Comm. Mag., vol. 47, no. 4, pp. 44–51, 2009.
[30] L. Atzori, A. Iera, and G. Morabito, “The Internet of Things: A Survey,” Computer Networks, vol. 54, no. 15, pp. 2787–2805, Oct. 2010.
[31] D. Berenson, P. Abbeel, and K. Goldberg, “A Robot Path Planning Framework that Learns from Experience,” in Proc. Int. Conf. Robotics and Automation (ICRA), May 2012, pp. 3671–3678.
[32] B. Bhargava, P. Angin, and L. Duan, “A Mobile-Cloud Pedestrian Crossing Guide for the Blind,” in Int. Conf. on Advances in Computing & Communication, 2011.
[33] J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-Driven Grasp Synthesis—A Survey,” IEEE Trans. on Robotics, no. 99, pp. 1–21, 2013.
[34] V. Chandrasekaran and M. I. Jordan, “Computational and statistical tradeoffs via convex relaxation,” Proceedings of the National Academy of Sciences of the United States of America, vol. 110, no. 13, pp. E1181–90, 2013.
[35] A. Chibani, Y. Amirat, S. Mohammed, E. Matson, N. Hagita, and M. Barreto, “Ubiquitous robotics: Recent challenges and future trends,” Robotics and Autonomous Systems, vol. 61, no. 11, pp. 1162–1172, 2013.
[36] M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. Rusu, and I. Sucan, “Towards Reliable Grasping and Manipulation in Household Environments,” in Int. Symposium on Experimental Robotics (ISER), 2010.
[37] M. Ciocarlie, C. Pantofaru, K. Hsiao, G. Bradski, P. Brook, and E. Dreyfuss, “A Side of Data With My Robot,” IEEE Robotics & Automation Magazine, vol. 18, no. 2, pp. 44–57, June 2011.
[38] L. Dabbish, C. Stuart, J. Tsay, and J. Herbsleb, “Social Coding in GitHub: Transparency and Collaboration in an Open Software Repository,” in Proc. ACM 2012 Conf. on Computer Supported Cooperative Work, 2012, p. 1277.
[39] S. Dalibard, A. Nakhaei, F. Lamiraux, and J.-P. Laumond, “Manipulation of Documented Objects by a Walking Humanoid Robot,” in IEEE-RAS Int. Conf. on Humanoid Robots, Dec. 2010, pp. 518–523.
[40] H. Dang and P. K. Allen, “Learning Grasp Stability,” in Proc. Int. Conf. Robotics and Automation (ICRA), May 2012, pp. 2392–2397.
[41] H. Dang, J. Weisz, and P. K. Allen, “Blind Grasping: Stable Robotic Grasping using Tactile Feedback and Hand Kinematics,” in Proc. Int. Conf. Robotics and Automation (ICRA), May 2011, pp. 5917–5922.
[42] S. Davidson, “Open-source Hardware,” IEEE Design and Test of Computers, vol. 21, no. 5, pp. 456–456, Sept. 2004.
[43] Z. Du, W. Yang, Y. Chen, X. Sun, X. Wang, and C. Xu, “Design of a Robot Cloud Center,” in Int. Symp. on Autonomous Decentralized Systems, Mar. 2011, pp. 269–275.
[44] P. C. Evans and M. Annunziata, “Industrial Internet: Pushing the Boundaries of Minds and Machines,” General Electric, Tech. Rep., 2012.
[45] S. Gammeter, A. Gassmann, L. Bossard, T. Quack, and L. Van Gool, “Server-side Object Recognition and Client-side Object Tracking for Mobile Augmented Reality,” in IEEE Conf. on Computer Vision and Pattern Recognition, June 2010, pp. 1–8.
[46] J. García, “Using Cloud Computing as a HPC Platform for Embedded Systems,” 2011.
[47] Y. Gingold, A. Shamir, and D. Cohen-Or, “Micro Perceptual Human Computation for Visual Tasks,” ACM Trans. on Graphics, vol. 31, no. 5, pp. 1–12, Aug. 2012.
[48] J. Glover, D. Rus, and N. Roy, “Probabilistic Models of Object Geometry for Grasp Planning,” in Robotics: Science and Systems (RSS), 2008.
[49] K. Goldberg, “Beyond the Web: Excavating the Real World via Mosaic,” in Second International World Wide Web Conference, 1994, pp. 1–12.
[50] K. Goldberg, B. Chen, R. Solomon, S. Bui, B. Farzin, J. Heitler, D. Poon, and G. Smith, “Collaborative teleoperation via the Internet,” in Proc. Int. Conf. Robotics and Automation (ICRA), vol. 2, 2000, pp. 2019–2024.
[51] K. Goldberg, M. Mascha, S. Gentner, N. Rothenberg, C. Sutter, and J. Wiegley, “Desktop Teleoperation via the World Wide Web,” in Proc. Int. Conf. Robotics and Automation (ICRA), 1995.
[52] K. Goldberg and R. Siegwart, Eds., Beyond Webcams: An Introduction to Online Robots. MIT Press, 2002.
[53] C. Goldfeder, M. Ciocarlie, and P. Allen, “The Columbia Grasp Database,” in Proc. Int. Conf. Robotics and Automation (ICRA), May 2009, pp. 1710–1716.
[54] C. Goldfeder and P. K. Allen, “Data-Driven Grasping,” Autonomous Robots, vol. 31, no. 1, pp. 1–20, Apr. 2011.
[55] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, “Internet of Things (IoT): A vision, architectural elements, and future directions,” Future Generation Computer Systems, vol. 29, no. 7, 2013.
[56] E. Guizzo, “Cloud Robotics: Connected to the Cloud, Robots Get Smarter,” 2011.
[57] B. Hannaford, J. Rosen, D. Friedman, H. King, P. Roan, L. Cheng, D. Glozman, J. Ma, S. Kosari, and L. White, “Raven II: Open Platform for Surgical Robotics Research,” IEEE Trans. Biomedical Engineering, pp. 954–959, 2012.
[58] A. Hars and S. Ou, “Working for Free? Motivations of Participating in Open Source Projects,” in Proc. 34th Annual Hawaii Int. Conf. on System Sciences, 2001, p. 9.
[59] E. Hidago-Pena, L. F. Marin-Urias, F. Montes-Gonzalez, A. Marin-Hernandez, and H. V. Rios-Figueroa, “Learning from the Web: Recognition method based on object appearance from Internet images,” in ACM/IEEE International Conference on Human-Robot Interaction, 2013, pp. 139–140.
[60] J. Higuera, A. Xu, F. Shkurti, and G. Dudek, “Socially-Driven Collective Path Planning for Robot Missions,” in 2012 Ninth Conf. on Computer and Robot Vision, May 2012, pp. 417–424.
[61] G. Hu, W. Tay, and Y. Wen, “Cloud Robotics: Architecture, Challenges and Applications,” IEEE Network, vol. 26, no. 3, pp. 21–28, 2012.
[62] K. Huebner, K. Welke, M. Przybylski, N. Vahrenkamp, T. Asfour, and D. Kragic, “Grasping Known Objects with Humanoid Robots: A Box-Based Approach,” in Int. Conf. on Advanced Robotics, 2009.
[63] T. Hunter, T. Moldovan, M. Zaharia, S. Merzgui, J. Ma, M. J. Franklin, P. Abbeel, and A. M. Bayen, “Scaling the Mobile Millennium System in the Cloud,” in Proc. of the 2nd ACM Symposium on Cloud Computing (SOCC), 2011, p. 28.
[64] D. Hunziker, M. Gajamohan, M. Waibel, and R. D’Andrea, “Rapyuta: The RoboEarth Cloud Engine,” in Proc. Int. Conf. Robotics and Automation (ICRA), 2013.
[65] M. Inaba, “Remote-brained robots,” in International Joint Conference on Artificial Intelligence, 1997, pp. 1593–1606.
[66] J. D. Irwin, The Industrial Electronics Handbook. CRC Press, 1997.
[67] N. K. Jangid, “Real Time Cloud Computing,” in Data Management & Security, 2011.
[68] M. Johnson-Roberson, J. Bohg, G. Skantze, J. Gustafson, R. Carlson, B. Rasolzadeh, and D. Kragic, “Enhanced Visual Scene Understanding through Human-Robot Dialog,” in Proc. Int. Conf. on Intelligent Robots and Systems (IROS), Sept. 2011, pp. 3342–3348.
[69] G. Juve, E. Deelman, G. B. Berriman, B. P. Berman, and P. Maechling, “An Evaluation of the Cost and Performance of Scientific Workflows on Amazon EC2,” Journal of Grid Computing, vol. 10, no. 1, pp. 5–21, Mar. 2012.
[70] K. Kamei, S. Nishio, N. Hagita, and M. Sato, “Cloud Networked Robotics,” IEEE Network, vol. 26, no. 3, pp. 28–34, 2012.
[71] A. Kasper, Z. Xue, and R. Dillmann, “The KIT object models database: An object model database for object recognition, localization and manipulation in service robotics,” Int. Journal of Robotics Research, vol. 31, no. 8, pp. 927–934, May 2012.
[72] B. Kehoe, A. Matsukawa, S. Candido, J. Kuffner, and K. Goldberg, “Cloud-Based Robot Grasping with the Google Object Recognition Engine,” in Proc. Int. Conf. Robotics and Automation (ICRA), 2013.
[73] B. Kehoe, D. Berenson, and K. Goldberg, “Estimating Part Tolerance Bounds Based on Adaptive Cloud-Based Grasp Planning with Slip,” in IEEE Int. Conf. on Automation Science and Engineering, 2012.
[74] ——, “Toward Cloud-based Grasping with Uncertainty in Shape: Estimating Lower Bounds on Achieving Force Closure with Zero-slip Push Grasps,” in Proc. Int. Conf. Robotics and Automation (ICRA), May 2012.
[75] W.-J. Kim, K. Ji, and A. Ambike, “Real-time operating environment for networked control systems,” IEEE Transactions on Automation Science and Engineering, vol. 3, no. 3, pp. 287–296, 2006.
[76] J. Kuffner, “Cloud-Enabled Robots,” in IEEE-RAS Int. Conf. on Humanoid Robots, 2010.
[77] V. Kumar, D. Rus, and G. S. Sukhatme, “Networked Robotics,” in Springer Handbook of Robotics, B. Siciliano and O. Khatib, Eds. Springer Berlin Heidelberg, 2004.
[78] K. Lai and D. Fox, “Object Recognition in 3D Point Clouds Using Web Data and Domain Adaptation,” Int. Journal of Robotics Research, vol. 29, no. 8, pp. 1019–1037, May 2010.
[79] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng, “Building high-level features using large scale unsupervised learning,” in International Conference on Machine Learning, 2012.
[80] A. E. Leeper, K. Hsiao, M. Ciocarlie, L. Takayama, and D. Gossow, “Strategies for Human-in-the-loop Robotic Grasping,” in Proc. of the ACM/IEEE Int. Conf. on Human-Robot Interaction, 2012, pp. 1–8.
[81] Z. Li, L. O’Brien, H. Zhang, and R. Cai, “On the Conceptualization of Performance Evaluation of IaaS Services,” IEEE Transactions on Services Computing, 2014.
[82] Z. Li, H. Zhang, L. O’Brien, R. Cai, and S. Flint, “On evaluating commercial Cloud services: A systematic review,” Journal of Systems and Software, vol. 86, no. 9, pp. 2371–2393, 2013.
[83] B. Liu, Y. Chen, E. Blasch, and K. Pham, “A Holistic Cloud-Enabled Robotics System for Real-Time Video Tracking Application,” in Future Information Technology, 2014, pp. 455–468.
[84] C.-H. Lu and L.-C. Fu, “Robust Location-Aware Activity Recognition Using Wireless Sensor Network in an Attentive Home,” IEEE Transactions on Automation Science and Engineering, vol. 6, no. 4, pp. 598–609, 2009.
[85] G. McKee, “What is Networked Robotics?” Informatics in Control Automation and Robotics, vol. 15, pp. 35–45, 2008.
[86] P. Mehrotra, J. Djomehri, S. Heistand, R. Hood, H. Jin, A. Lazanoff, S. Saini, and R. Biswas, “Performance evaluation of Amazon EC2 for NASA HPC applications,” in Proc. of the 3rd Workshop on Scientific Cloud Computing (ScienceCloud ’12), 2012, pp. 41–50.
[87] A. Miller and P. Allen, “GraspIt! A Versatile Simulator for Robotic Grasping,” IEEE Robotics & Automation Magazine, vol. 11, no. 4, pp. 110–122, Dec. 2004.
[88] M. Moussa and M. Kamel, “An Experimental Approach to Robotic Grasping using a Connectionist Architecture and Generic Grasping Functions,” IEEE Trans. on Systems, Man and Cybernetics, Part C, vol. 28, no. 2, pp. 239–253, May 1998.
[89] M. Narita, S. Okabe, Y. Kato, Y. Murakwa, K. Okabayashi, and S. Kanda, “Reliable cloud-based robot services,” in Conference of the IEEE Industrial Electronics Society, 2013, pp. 8317–8322.
[90] D. Nister and H. Stewenius, “Scalable Recognition with a Vocabulary Tree,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2006, pp. 2161–2168.
[91] D. Nurmi, R. Wolski, C. Grzegorczyk, G. Obertelli, S. Soman, L. Youseff, and D. Zagorodnov, “The Eucalyptus Open-Source Cloud-Computing System,” in IEEE/ACM Int. Symp. on Cluster Computing and the Grid, 2009, pp. 124–131.
[92] G. Oliveira and V. Isler, “View Planning For Cloud-Based Active Object Recognition,” Department of Computer Science, University of Minnesota, Tech. Rep., 2013.
[93] Committee on the Analysis of Massive Data, Committee on Applied and Theoretical Statistics, Board on Mathematical Sciences and Their Applications, Division on Engineering and Physical Sciences, National Research Council, Frontiers in Massive Data Analysis. The National Academies Press, 2013.
[94] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, “Object Retrieval with Large Vocabularies and Fast Spatial Matching,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2007.
[95] E. Plaku, K. E. Bekris, B. Y. Chen, A. M. Ladd, and L. E. Kavraki, “Sampling-based Roadmap of Trees for Parallel Motion Planning,” IEEE Trans. on Robotics, vol. 21, no. 4, pp. 597–608, 2005.
[96] E. Plaku, K. E. Bekris, and L. E. Kavraki, “OOPS for Motion Planning: An Online, Open-source, Programming System,” in Proc. Int. Conf. Robotics and Automation (ICRA), Apr. 2007, pp. 3711–3716.
[97] M. Popovic, G. Kootstra, J. A. Jorgensen, D. Kragic, and N. Kruger, “Grasping Unknown Objects using an Early Cognitive Vision System for General Scene Understanding,” in Proc. Int. Conf. on Intelligent Robots and Systems (IROS), Sept. 2011, pp. 987–994.
[98] E. Prestes, J. L. Carbonera, S. Rama Fiorini, V. A. M. Jorge, M. Abel, R. Madhavan, A. Locoro, P. Goncalves, M. E. Barreto, M. Habib, A. Chibani, S. Gerard, Y. Amirat, and C. Schlenoff, “Towards a core ontology for robotics and automation,” Robotics and Autonomous Systems, vol. 61, no. 11, pp. 1193–1204, 2013.
[99] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, “ROS: an open-source Robot Operating System,” in ICRA Workshop on Open Source Software, 2009.
[100] L. Riazuelo, J. Civera, and J. Montiel, “C2TAM: A Cloud Framework for Cooperative Tracking and Mapping,” Robotics and Autonomous Systems, vol. 62, no. 4, pp. 401–413, 2013.
[101] L. Riazuelo, M. Tenorth, D. D. Marco, M. Salas, L. Mosenlechner, L. Kunze, M. Beetz, J. D. Tardos, L. Montano, and J. Montiel, “RoboEarth Web-Enabled and Knowledge-Based Active Perception,” in IROS Workshop on AI-based Robotics, 2013.
[102] E. Rubow, “Open Source Hardware,” Tech. Rep., 2008.
[103] M. Samadi, T. Kollar, and M. Veloso, “Using the Web to Interactively Learn to Find Objects,” in AAAI Conference on Artificial Intelligence, 2012, pp. 2074–2080.
[104] M. Sevior, T. Fifield, and N. Katayama, “Belle Monte-Carlo Production on the Amazon EC2 Cloud,” Journal of Physics: Conference Series, vol. 219, no. 1, p. 012003, Apr. 2010.
[105] D. Song, K. Goldberg, and N. Y. Chong, “Networked Telerobots,” in Springer Handbook of Robotics, B. Siciliano and O. Khatib, Eds. Springer Berlin Heidelberg, 2004.
[106] A. Sorokin, D. Berenson, S. S. Srinivasa, and M. Hebert, “People Helping Robots Helping People: Crowdsourcing for Grasping Novel Objects,” in Proc. Int. Conf. on Intelligent Robots and Systems (IROS), Oct. 2010, pp. 2117–2122.
[107] A. Sorokin and D. Forsyth, “Utility Data Annotation with Amazon Mechanical Turk,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2008, pp. 1–8.
[108] M. Tenorth and M. Beetz, “KnowRob: A Knowledge Processing Infrastructure for Cognition-Enabled Robots,” Int. Journal of Robotics Research, vol. 32, no. 5, pp. 566–590, 2013.
[109] M. Tenorth, A. Clifford Perzylo, R. Lafrenz, and M. Beetz, “The RoboEarth language: Representing and exchanging knowledge about actions, objects, and environments,” in Proc. Int. Conf. Robotics and Automation (ICRA), May 2012, pp. 1284–1289.
[110] R. Tudoran, A. Costan, G. Antoniu, and L. Bouge, “A Performance Evaluation of Azure and Nimbus Clouds for Scientific Applications,” in Proc. of the 2nd Int. Workshop on Cloud Computing Platforms (CloudCP ’12), 2012, pp. 1–6.
[111] L. Turnbull and B. Samanta, “Cloud robotics: Formation control of a multi robot system utilizing cloud infrastructure,” in Proceedings of IEEE Southeastcon, 2013, pp. 1–4.
[112] J. van den Berg, P. Abbeel, and K. Goldberg, “LQG-MP: Optimized path planning for robots with motion uncertainty and imperfect state information,” Int. Journal of Robotics Research, vol. 30, no. 7, pp. 895–913, June 2011.
[113] L. von Ahn, “Human Computation,” in Proc. ACM Int. Conf. on Image and Video Retrieval (CIVR), 2009, p. 418.
[114] M. Waibel, M. Beetz, J. Civera, R. D’Andrea, J. Elfring, D. Galvez-Lopez, K. Haussermann, R. Janssen, J. Montiel, A. Perzylo, B. Schießle, M. Tenorth, O. Zweigle, and R. De Molengraft, “RoboEarth,” IEEE Robotics & Automation Magazine, vol. 18, no. 2, pp. 69–82, June 2011.
[115] H. Wang, Y. Ma, G. Pratx, and L. Xing, “Toward Real-time Monte Carlo Simulation using a Commercial Cloud Computing Infrastructure,” Physics in Medicine and Biology, vol. 56, no. 17, pp. N175–81, Sept. 2011.
[116] J. Weisz and P. K. Allen, “Pose error robust grasping from contact wrench space metrics,” in Proc. Int. Conf. Robotics and Automation (ICRA), May 2012, pp. 557–562.
[117] E. Wilhelm, J. Siegel, S. Mayer, J. Paefgen, M. Tiefenbeck, M. Bicker, S. Ho, R. Dantu, and S. Sarma, “CloudThink: An Open Standard for Projecting Objects into the Cloud,” 2013. [Online]. Available: http://cloud-think.com/
[118] Willow Garage, “Personal Service Robotics with Tiered Human-in-the-Loop Assistance,” 2013.

Ben Kehoe received his B.A. in Physics and Mathematics from Hamline University in 2006. He is now a Ph.D. student in the ME department at the University of California, Berkeley. His research interests include cloud robotics, medical robotics, controls, and grasping.

Sachin Patil received his B.Tech degree in Computer Science and Engineering from the Indian Institute of Technology, Bombay, in 2006 and his Ph.D. degree in Computer Science from the University of North Carolina at Chapel Hill, NC, in 2012. He is now a postdoctoral researcher in the EECS department at the University of California, Berkeley. His research interests include motion planning, cloud robotics, and medical robotics.

Pieter Abbeel received a BS/MS in Electrical Engineering from KU Leuven (Belgium) and received his Ph.D. degree in Computer Science from Stanford University in 2008. He joined the faculty at UC Berkeley in Fall 2008, with an appointment in the Department of Electrical Engineering and Computer Sciences. His current research focuses on robotics and machine learning, with a particular focus on challenges in personal robotics, surgical robotics, and connectomics. He has won numerous awards, including best paper awards at ICML and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) award, the Office of Naval Research Young Investigator Program (ONR-YIP) award, the Darpa Young Faculty Award (Darpa-YFA), the Okawa Foundation award, the TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award.


Ken Goldberg is Professor of Industrial Engineering and Operations Research at UC Berkeley, with appointments in Electrical Engineering, Computer Science, Art Practice, and the School of Information. He was appointed Editor-in-Chief of the IEEE Transactions on Automation Science and Engineering (T-ASE) in 2011 and served two terms (2006–2009) as Vice-President of Technical Activities for the IEEE Robotics and Automation Society. Goldberg earned his PhD in Computer Science from Carnegie Mellon University in 1990. Goldberg is Founding Co-Chair of the IEEE Technical Committee on Networked Robots and Founding Chair of the T-ASE Advisory Board. Goldberg has published over 200 refereed papers and has been awarded eight US patents. He received the NSF Presidential Faculty Fellowship (1995), the Joseph Engelberger Award (2000), and the IEEE Major Educational Innovation Award (2001), and in 2005 was named IEEE Fellow.
