
Semi-Autonomous, Teleoperated
Search and Rescue Robot

Kristoffer Cavallin and Peter Svensson

February 3, 2009
Master's Thesis in Computing Science, 2*30 ECTS-credits

Supervisor at CS-UmU: Thomas Hellström
Examiner: Per Lindström

Umeå University
Department of Computing Science
SE-901 87 UMEÅ
SWEDEN


Abstract

The interest in robots in the urban search and rescue (USAR) field has increased over the last two decades. The idea is to let robots move into places where human rescue workers cannot or, due to high personal risks, should not enter.

In this thesis project, an application is constructed with the purpose of teleoperating a simple robot. This application contains a user interface that utilizes both autonomous and semi-autonomous functions, such as search, explore and point-and-go behaviours. The purpose of the application is to work with USAR principles in a refined and simplified environment, and thereby increase the understanding of these principles and how they interact with each other.

Furthermore, the thesis project reviews the recent and current status of robots in USAR applications and the use of teleoperation and semi-autonomous robots in general.

Some conclusions that are drawn towards the end of the thesis are that the use of robots, especially in USAR situations, will continue to increase. As robots and support technology both become more advanced and cheaper by the day, teleoperation and semi-autonomous robots will also be seen in more and more places.

Key Words: Robotics, Urban Search and Rescue, Path Planning, Semi-autonomy, Teleoperation.


Contents

1 Introduction
  1.1 Goal
  1.2 Disposition

2 Urban Search and Rescue
  2.1 Background
  2.2 USAR Robotics
  2.3 Robotic USAR in practice
  2.4 Robot design
    2.4.1 Requirements
    2.4.2 Choosing Sensors
    2.4.3 Related Work

3 Human-Robot Interaction
  3.1 Teleoperation
    3.1.1 Time delays
    3.1.2 Planetary Exploration
    3.1.3 Unmanned Aerial Vehicles
    3.1.4 Urban Search and Rescue
    3.1.5 Other examples of teleoperation
  3.2 Semi-autonomous Control
    3.2.1 Shared Control
    3.2.2 Traded Control
    3.2.3 Safeguarded teleoperation
    3.2.4 Adjustable autonomy
  3.3 Common ground and situation awareness
  3.4 User Interface
    3.4.1 Background
    3.4.2 Telepresence
    3.4.3 Sensor fusion
    3.4.4 Visual feedback
    3.4.5 Interactive maps and virtual obstacles
    3.4.6 USAR User Interface Examples

4 Implementation
  4.1 Hardware set-up
    4.1.1 Amigobot and the Amigobot Sonar
    4.1.2 Swissranger SR3000 camera
  4.2 Software
    4.2.1 System overview
    4.2.2 Coordinate transformation
    4.2.3 Obstacle detection
    4.2.4 Map Building
    4.2.5 Sensor fusion
    4.2.6 Path planning
    4.2.7 Autonomous behaviours
    4.2.8 Control

5 Results
  5.1 Obstacle Detection
  5.2 Path planning
  5.3 Human Detection
  5.4 Exploration behaviour
  5.5 Manual control
  5.6 Mapping

6 Conclusions
  6.1 Discussion
  6.2 Future work

7 Acknowledgments

References


List of Figures

1.1 The Packbot USAR-robot facing a potential USAR scenario.

1.2 System overview, showing the laptop, the robot, the sonars, the 3D-camera and the connections between them.

2.1 Oklahoma City, OK, April 26, 1995 - Search and Rescue crews work to save those trapped beneath the debris, following the Oklahoma City bombing (FEMA News Photo).

2.2 Hierarchy of a FEMA USAR task force that includes four robotic elements[8].

2.3 A destroyed PackBot, made by iRobot, displayed at the 2007 Association for Unmanned Vehicles International (AUVSI) show. The robot was destroyed while surveying an explosive device in Iraq.

2.4 iRobot's PackBot is an example of a robot that can operate even when it is flipped upside down[27].

2.5 An example of an image created by using data from a 3D-camera[1].

2.6 The Solem robot, which was used in the World Trade Center rescue operations.

2.7 The operator control unit for controlling iRobot's PackBot. This portable device is used to control the robot from a distance.

2.8 A prototype image of a robot swarm using a world embedded interface. The arrows point towards a potential victim in a USAR situation.

3.1 The principle of teleoperation. The operator (or “master”) is connected to the robot (or “slave”) via an arbitrary connection medium. The operator controls the robot based on the feedback received from the robot's remote environment[17].

3.2 The Sojourner Mars rover, which was sent by NASA to Mars in 1997 to explore the planet. (Image by NASA)

3.3 A Joint Service Explosive Ordnance Disposal robot 'Red Fire' prepared to recover a mine on February 9, 2007 in Stanley, Falkland Islands (photo by Peter Macdiarmid/Getty Images).


3.4 The neglect curve containing teleoperation and full autonomy. The x-axis represents the amount of neglect that a robot receives, and the y-axis represents the effectiveness of the robot. The dashed curve represents intermediate types of semi-autonomous robots, such as a robot that uses waypoints[16].

3.5 Autonomy modes as a function of neglect. The x-axis represents the amount of neglect that a robot receives, and the y-axis represents the effectiveness of the robot[16].

3.6 An implementation of the scripts concept, which is an attempt at improving common ground between a robot and an operator[6].

3.7 An example of Virtual Reality equipment, including headgear and motion sensing gloves. (Image by NASA)

3.8 A prototype image of a world embedded interface, which is a mix of reality and an interface.

3.9 A traditional teleoperated system (top), and a system which utilizes a predictive display to reduce the problems caused by latency (bottom).

3.10 The CASTER user interface. In addition to the main camera, three other cameras are used. At the top of the interface there is a rear-view camera. In the lower corners of the screen, auxiliary cameras show images of the robot's tracks. Various feedback data is superimposed on the main camera's image[19].

3.11 The Idaho National Laboratory User Interface. The most distinctive feature is the mixture between real world images and a virtual perspective[32].

4.1 Block scheme of the main components in the system. Every oval represents a component of the system, and the lines show how they relate to each other.

4.2 The main hardware components used for this project: the Amigobot robot with the SR3000 3D-camera attached.

4.3 The SR3000 Swissranger 3D-camera[1].

4.4 4-times sampled incoming light signal. The figure was taken from the SR3000 manual[1].

4.5 Example of an application utilizing the SR3000 camera, taken from the SR3000 manual[1].

4.6 Schematic drawing illustrating the problem with light scattering artifacts. This occurs when the camera observes nearby objects, or objects that reflect an extraordinary amount of the transmitted light and shine so brightly back at the sensor that not all of the light can be absorbed by the imager. This in turn results in the light being reflected to the lens, and back to the imager again.

4.7 Illustration of a problematic scenario for the SR3000 camera featuring multiple reflections.


4.8 The graphical user interface of the system. Parts A and D are the 3D-camera displays, part B is the map display, part C is the command bar and part E is the output console.

4.9 The sensor tab (top) and the map tab (bottom) in the settings menu. The sensor tab contains settings for the 3D-camera (mounting parameters, obstacle detection parameters, etc.) and the sonar (the coverage angle). The map tab contains settings for the robot's map (update frequencies, display modes, etc.).

4.10 The architecture of the software as implemented in Java.

4.11 The SR3000 mounted on the Amigobot. There are two different coordinate systems available, which is the reason that a transformation is used. The coordinates are transformed from the 3D-camera's coordinate system (y, z) to the robot's coordinate system (y′, z′).

4.12 The angular sensor coverage of the robot. The dark cones are covered by ultrasonic sonars, and the light cone is covered by the 3D-camera. White space denotes dead, uncovered angles.

4.13 The base of the sensor model used for updating the map with the help of a sonar. The cone represents a sonar reading, and the dark areas represent the parts of the reading that can provide the occupancy grid with new information. Nothing can be determined about areas C and D.

4.14 Wave-front propagation in a map grid[23]. The starting point is seen in the top left corner of (a). The first wave of the wave-front is shown as a light-colored area around the starting point in (b). The next wave is shown in (c), with the old wave now being shown in a darker color, etc. More waves are added until the goal point is found.

4.15 The left flowchart describes the wave-front matrix generation, which is the process of creating the matrix that describes the wave-front propagation between the starting point and the wanted goal point. The right flowchart describes the way the wave-front propagation matrix is used to find the best path between the points.

4.16 The simplification of “human beings” used in the application. Two cylindrical wooden blocks fitted with reflective tape, easily detected by the SR3000 camera.

4.17 A big red triangle moving in to encircle the location of a suspected “human being” in the sensor view section of the user interface. This is done to clearly indicate this (possibly) important discovery.

4.18 A big red triangle moving in to encircle the location of a suspected “human being” in the map section of the user interface. This is done to clearly indicate this (possibly) important discovery.


4.19 The two grids used for the exploration behaviour. The map grid (left) and the frontier grid (right). Gray areas represent obstacles and F's represent frontiers.

4.20 Target selection for the exploration behaviour. Blank cells are unoccupied, gray cells are occupied and numbered cells are frontiers[21].

4.21 Exploration behaviour flowchart, showing the process of the autonomous exploration behaviour of the robot.

4.22 Navigation with the help of named locations: In this picture a blue dotted path from the robot to the location “Treasure” can be seen. This is one of the results after the activation of the command “Go to location: Treasure”. The other result is the initiation of the robot's journey towards this location.

4.23 Navigation with the help of waypoints: As seen in both pictures, the three waypoints wp0, wp1 and wp2 are already added. The left picture shows the menu with various options. After the activation of the command “Follow waypoint path” the view changes into the one visible to the right and the robot starts to follow the dotted blue line, moving to all waypoints in order.

5.1 An overview of the setup and the environment used for testing the system.

5.2 A map of the testing area with all important features marked, such as the starting position of the robot, the obstacles and the “humans” to be found.

5.3 A case when the 3D-camera obstacle detection provides good results. Both the pillar and the amigobot-twin are detected without problems.

5.4 A case when the 3D-camera obstacle detection provides bad results. The segmented shape of the chair poses problems. It is only partially detected.

5.5 A visualization of the robot's chosen path (the dotted line) after a goto command has been processed.

5.6 A third person perspective of an encounter between the robot and a “human being”.

5.7 The system provides visual feedback (a triangle shape zooms in on the detected “human”) whenever the robot detects “humans”.

5.8 The test-case environment with the chosen path of the robot's exploration behaviour from test-case number one. None of the “humans” were found.

5.9 The resulting sensor-fused map of test-case number one. No human label is included, since no human object was found.

5.10 The test-case environment with the chosen path of the robot's exploration behaviour in test-case number two. One “human” was found; an “x” and an arrow indicate where that “human” was found.


5.11 The resulting sensor-fused map of test-case number two. One human label, “human0”, can be seen in this picture. The other “human” was not found.

5.12 A third person view of the test-arena for the exploration test-cases, showing the robot on a mission to find the two “humans” (encircled). The depicted path is the one the robot chose in test-case two.

5.13 The resulting map of the manually controlled test-case, using only the 3D-camera data for mapping. There are many unexplored areas left (gray areas), because of the low resolution and restrained coverage area of the 3D-camera.

5.14 The resulting map of the manually controlled test-case, using only the sonar data for mapping. There is clutter all over the picture, and some walls are missing. This is mostly due to specular reflections.

5.15 The resulting map of the manually controlled test-case. The map was constructed by fusing both the sonar and 3D-camera data. It contains fewer holes than the 3D-camera map, as well as less clutter and more consistent walls than the sonar map.


List of Tables

3.1 Sensor failure situations of a common sensor suite[22].


Chapter 1

Introduction

Humans are well adapted to handling a huge variety of tasks. After a short training period, we can handle almost any assignment (even if we sometimes do it in an inadequate or inefficient way). However, we have several serious disadvantages.

First of all, we have problems concentrating on repetitive work and we are prone to making mistakes. Most people also agree that humans are not expendable, and that excessive human suffering is intolerable. As a consequence of this, and of the fact that we are fragile and vulnerable to things such as poison, smoke, radiation, heat, cold, explosions and corrosive materials, we would love to have some sort of non-human stunt-men to assign our hazardous and dull tasks to.

The vision, and the dream, comes in the form of super-robots that can substitute for a human in any task with great ease. Unfortunately, current technology has a long way to go before it satisfies these ambitious wishes.

The best solution at hand is a compromise: one in which humans and robots team up. The robot does the repetitive, dull, hazardous and dangerous parts of the work, while the human concentrates on finding patterns, drawing conclusions and helping the robot to understand what part of a task to concentrate on and in what order to execute its sub-goals. The general idea is to let the robot do the things it does better than the human, and vice versa. Each part of the system compensates for the other part's shortcomings, while utilizing each part's strengths to their full potential. This area of intelligent robotics is called teleoperation.

Figure 1.1: The Packbot USAR-robot facing a potential USAR scenario.

Teleoperation and semi-autonomy are hot topics and their potential is being explored not only by industry and the military, but also by the service sector.

A specific sub-area of semi-autonomous robotics is urban search and rescue (USAR) robotics. The goal of USAR is to rescue people from hazardous sites where disasters such as hurricanes, bombings or flooding have happened. Working at such a site, for example finding people in a collapsed building, can be extremely dangerous for rescue workers, considering the risks of being crushed by debris, suffocation, etc. It would be a much better situation if these risks could be transferred to robots instead. Another reason is size, since a small robot can go places that a human cannot.

1.1 Goal

The goal of this project is to program a simple semi-autonomous, teleoperated search and rescue robot that is capable of working together with an operator as a team, solving problems such as exploring its surroundings and performing object detection; the basis of a successful USAR-robot.

The work is divided into two distinct parts. The first part is to create a graphical user interface with which the operator can easily get an intuitive overview of what is going on around the robot, while also allowing him or her to access the robot's control functions. The other part consists mainly of semi-autonomous, or autonomous, functions that allow the robot to carry out high-level tasks, such as navigating from one location to another, thereby minimizing the cognitive load that is put on the operator.

In addition to a number of sensors, with which it observes the world, a successful USAR robot also needs an intelligent user interface (UI). The UI between the human operator and the robot should be able to transfer commands from the human to the robot and information from the robot to the human operator. The UI should maximize this information transfer and at the same time minimize cognitive load on the operator.

The software should make it possible for the operator to easily control the robot with the help of, for example, a regular keyboard and a mouse. The operator should also have quick access to several options, making it possible to choose what sensors to receive information from, how the robot should respond to various commands, etc. The operator should also be able to instruct the robot to move to some location with a click in the map in the user interface.

The map, depicting the robot's surroundings, should be constructed with the help of the robot's sensor readings. This map should be good enough to be of use for both the robot's navigation and the operator's situational awareness later on.

The operator should also be able to command the robot to execute some autonomous behaviours, such as “explore” and “find humans”. Things to consider here include how the robot is supposed to navigate around obstacles without the help of the operator, and how the robot will identify interesting objects.

Figure 1.2: System overview, showing the laptop, the robot, the sonars, the 3D-camera and the connections between them.

As for how the sensor data should be presented, the UI should have some sort of map and a sensor view showing the data received from the sensors in an intuitive way for the operator. It should also have some sort of transparency, allowing the user insight into the robot's autonomous functions, perhaps through a console where the operator can choose different levels of output, ranging from messages such as “Robot is now exploring its surroundings.” to low-level information such as “Robot initiating partial movement to (x = 77, y = 18) as a sub goal toward (x = 89, y = 20).”.
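To make the idea of selectable output levels concrete, the following minimal Java sketch shows one way such a console could be structured. It is written purely for illustration; the class and member names (OutputConsole, Level, report) are hypothetical and do not come from the actual application.

public class OutputConsole {

    // Verbosity levels the operator can choose between.
    public enum Level { HIGH_LEVEL, LOW_LEVEL }

    private Level selected = Level.HIGH_LEVEL;

    public void setLevel(Level level) {
        this.selected = level;
    }

    // Print a message only if it is no more detailed than the chosen level.
    public void report(Level level, String message) {
        if (level.ordinal() <= selected.ordinal()) {
            System.out.println("[" + level + "] " + message);
        }
    }

    public static void main(String[] args) {
        OutputConsole console = new OutputConsole();
        console.report(Level.HIGH_LEVEL, "Robot is now exploring its surroundings.");
        console.setLevel(Level.LOW_LEVEL);
        console.report(Level.LOW_LEVEL, "Robot initiating partial movement to (x = 77, y = 18) "
                + "as a sub goal toward (x = 89, y = 20).");
    }
}

With HIGH_LEVEL selected, only the summary message is printed; switching to LOW_LEVEL also lets the detailed navigation messages through.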

When the robot is navigating from one point to another, it should perform obstacle detection, meaning that it should see where obstacles are and avoid them. In order to truly detect humans autonomously, the robot would require expensive equipment not available for this project; therefore, a simplified version of detection will be implemented.

The hardware that was to be used in this project was the Amigobot from MobileRobots Inc, equipped with a ring of sonars, combined with a Swissranger SR3000 3D-camera. The purpose of the 3D-camera was to supply a 3D-image of the world and to determine distances to objects. This would help with obstacle detection and map building, and it would simplify the process of delivering a clear overview of the robot's situation to the operator. The sonar would supply information about the robot's close surroundings, and would compensate for the few shortcomings of the 3D-camera. The sonar would also contribute to map building. The software parts of the project were to be implemented in the programming language Java.
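As a rough illustration of how the two sensor sources described above could be separated behind a common Java interface so that both feed the same map, consider the sketch below. All type names (RangeSensor, SonarRing, SwissRanger3000, MapBuilder) are invented for this example; they are not the thesis code and not the actual Amigobot or SR3000 APIs.

public interface RangeSensor {
    // Measured distances to the nearest obstacles, in millimetres.
    double[] readDistances();
}

class SonarRing implements RangeSensor {
    public double[] readDistances() {
        // Placeholder: would query the Amigobot's eight sonar transducers.
        return new double[8];
    }
}

class SwissRanger3000 implements RangeSensor {
    public double[] readDistances() {
        // Placeholder: would reduce the SR3000's depth image to per-column
        // obstacle distances inside its field of view.
        return new double[176];
    }
}

class MapBuilder {
    // Both sources update the same map, so the wide sonar ring can cover
    // the angles the narrow 3D-camera cone misses.
    void update(RangeSensor... sensors) {
        for (RangeSensor sensor : sensors) {
            // Each set of readings would be fused into an occupancy grid here.
            sensor.readDistances();
        }
    }
}

A controller could then call new MapBuilder().update(new SonarRing(), new SwissRanger3000()) on every sensor cycle, mirroring the division of labour described above: the 3D-camera for detailed depth data, the sonar ring for close-range coverage.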

In summary, the goal is to program a robot with the following capabilities:

– Manual navigation.

– Semi-autonomous navigation.

– A human-finding behaviour.

– An autonomous exploration behaviour.

A graphical user interface should be constructed, with the capability of controlling the robot and observing its environment.

1.2 Disposition

Chapter 1, Introduction

This chapter provides an introduction to this thesis project, describes the problems, introduces our preliminary approach and loosely forms the context for the rest of the project.

Chapter 2, Urban Search and Rescue

This chapter brings attention to some Urban Search and Rescue (USAR) background and history. It discusses the robot's part in this, and reviews some real-world examples. Later in this chapter, the design of a USAR-robot is examined thoroughly.

Chapter 3, Human-Robot Interaction

This chapter reviews all aspects of Human-Robot Interaction relevant to this project: teleoperation, common ground, situational awareness, telepresence and more. It brings forth examples from each field, and discusses both problems and strengths of the different aspects.

Chapter 4, Implementation

This chapter presents our software, what parts we included in our project, what sensors we used, and how we used them. Principles and algorithms we included in the project are explained and discussed.

Chapter 5, Results

This chapter presents the results of the thesis. Various test-cases are conducted with the intention of showing the different aspects of the system.

Chapter 6, Conclusions

This chapter presents a discussion about the results of the project. It also contains suggestions for future work on the project.


Chapter 2

Urban Search and Rescue

The purpose of the Urban Search And Rescue (USAR) field is to find humans, or other valuable assets, in distress during catastrophes (search) and then to relocate them to safety (rescue). Although the “Urban” part of the name indicates that this activity usually takes place in cities, USAR-missions also take place in more lightly populated areas.

A few examples of the disasters that require the expertise of USAR-teams are earthquakes, tornadoes and explosions. Figure 2.1 shows a USAR-team in action, trying to save trapped victims that got caught under the debris of a collapsed building following the 1995 Oklahoma City bombing.

Being a USAR-worker is a very tough and dangerous occupation, which is the main reason that a lot of effort has been directed towards designing robots to assist the USAR-crews with their missions.

This chapter will briefly describe the background of USAR, the use of robotics within this field, and lastly it will discuss the appropriate robot hardware that is required to complete actual USAR-tasks.

2.1 Background

Urban search and rescue teams are required to handle situations that are classed as “an emergency involving the collapse of a building or other structure”[14].

When a disaster strikes, such as an earthquake, hurricane, typhoon, flood, dam failure, technological accident, terrorist activity or the release of hazardous materials, rescue workers have to step in, in order to prevent, or at least minimize, the loss of human lives. While the situation can vary a lot, the main objective of the rescue workers is to find humans in need of assistance (Search) and to move them out of harm's way (Rescue). They also have two secondary objectives. The first is to provide technical expertise in structural engineering in order to evaluate the structural integrity of a collapsing building, so that it can be established which parts of a building are stable and thus safe to enter. The other secondary objective is to provide the evacuated victims with medical care.

In most countries, the task of search and rescue is divided among various institutions such as fire-fighters, the police and the military. In addition to this, many countries have established special departments for this purpose. The Federal Emergency Management Agency (FEMA), which was created in 1979, is an example of such an agency in the United States.

Figure 2.1: Oklahoma City, OK, April 26, 1995 - Search and Rescue crews work to save those trapped beneath the debris, following the Oklahoma City bombing (FEMA News Photo).

USAR is a fairly new field. It started to emerge during the early eighties with the formation of the Fairfax County Fire & Rescue and Metro-Dade County Fire Department elite USAR teams. These teams provided support in such places as Mexico City, the Philippines and Armenia. This concept of a specialized team of USAR operatives became more and more popular, and while FEMA itself started as a general disaster response agency, in 1989 it initiated a framework called the National Urban Search and Rescue Response System, which then became the leading USAR task force in the United States and is still active as of 2008.

Within FEMA, USAR task forces can consist of up to 70 highly trained personnel, and depending on the classification of the team, up to 140 people can stand in readiness per team at one time. A team consists of emergency service personnel, medics, engineers and search dog pairs, as well as specialized equipment (which may include robots, see Section 2.2). As of 2008, there are 28 USAR national task forces prepared for deployment[3].

The list of past missions the FEMA USAR task forces have performed includes[2]:

– Hurricane Iniki – Kauai, Hawaii; 1992

– Northridge Earthquake – Los Angeles, California; 1994

– Murrah Federal Building, Oklahoma City Bombing – Oklahoma, 1995

– Hurricane Opal – Ft. Walton Beach, Florida; 1995

– Humberto Vidal Building Explosion – Puerto Rico, 1996

– DeBruce Grain elevator explosion – Wichita, Kansas; 1998


Figure 2.2: Hierarchy of a FEMA USAR task force that includes four robotic elements[8].

– Tornadoes – Oklahoma, 1999

– Earthquakes – Turkey, 1999

– Hurricane Floyd – North Carolina, 1999

– World Trade Center and Pentagon Disaster – New York & Washington, D.C.; 2001

– Olympic Games – Utah, 2002

2.2 USAR Robotics

The introduction of USAR-robots into the arsenal of some USAR-teams is a fairly recent development. See Figure 2.2 for an example of how the make-up of a USAR task force that contains robotic elements can be arranged.

Being a USAR-worker is an exposed occupation. There are a variety of risks involved when working in a USAR task force[26]:

– Risk of physical injury, such as cuts, scrapes, burns, and broken bones.

– Risk of respiratory injuries due to hazardous materials, fumes, dust, and carbon monoxide.

– Risk of diseases such as diphtheria, tetanus and pneumonia.

– Risk of psychological and emotional trauma caused by gruesome scenes.


Figure 2.3: A destroyed PackBot, made by iRobot, displayed at the 2007 Association for Unmanned Vehicles International (AUVSI) show. The robot was destroyed while surveying an explosive device in Iraq.

Robots, on the other hand, are insusceptible to all these things. Robots cannot catch diseases, and they don't breathe. Nor do they suffer psychological or emotional traumas. They can break, but parts of the robot that break can be replaced. It is therefore easy to see why it would be preferable to have robots perform such a hazardous job instead of risking humans. See Figure 2.3 for an example of such a situation. Even though that particular robot was destroyed when inspecting an improvised explosive device (IED) in Iraq, it is still a good example of a situation where it is advantageous to have a robot take the lead in a dangerous situation. It is unlikely that a human would have survived such an explosion.

A fact of USAR is that “the manner in which large structures collapse often prevents heroic rescue workers from searching buildings due to the unacceptable personal risk from further collapse”[26]. Robots are expendable; humans are not. A robot can be sent into an unstable building that has a risk of crashing down, whereas a rescue team would not go in because of the risk to themselves. Fewer problems arise when it is a replaceable robot that will take the damage. When a robot is crushed under debris, only some economic value is lost. After an excavation the robot can most likely be repaired, or at least some of the more valuable parts can be salvaged. And most importantly, no human life is lost in the process. But the physical failings of humans are not the only motivation for involving robots in the field of USAR. There are plenty of other reasons.

Another fact of USAR is that “collapsed structures create confined spaces which are frequently too small for both people and dogs to enter”[26]. This has two consequences.


One, robots can be constructed to have access to places people can't, which could result in finding victims that would otherwise be lost. Two, this allows for a faster search of an area, since the task force can use paths that would otherwise be blocked. This is a very important advantage that USAR-robotics can provide, since time is a highly valuable commodity in rescue scenarios. A fast search is vital, because the lack of food, water and medical treatment causes the likelihood of finding a person alive to be greatly diminished after 48 hours have passed since the incident.

One more valuable asset that robots bring to USAR is the potential ability to survey a disaster area better than a human alone would be able to. Robots equipped with the right kinds of sensors can be of much use thanks to “superhuman” senses. With things like IR-sensors, carbon-dioxide-sensors and various other sensors, robots can be far superior to humans when it comes to detecting victims in USAR missions, especially under difficult circumstances, such as searching rooms that are filled with smoke or dust.

In summary, robots will have a bright future in USAR for four key reasons[26]:

1. They can “reduce personal risk to workers by entering unstable structures”

2. They can “increase speed of response by accessing ordinarily inaccessible voids”

3. They can “increase efficiency and reliability by methodically searching areas with multiple sensors using algorithms guaranteed to provide a complete search in three dimensions”

4. They can “extend the reach of USAR specialists to go places that were otherwise inaccessible.”

2.3 Robotic USAR in practice

“One of the first uses of robots in search and rescue operation was during the World Trade Center disaster in New York.”[26]

The World Trade Center incident can be seen as the breakthrough for USAR-robotics and, although the robots did not perform as well as most people were hoping for, many valuable lessons were learned.

The Center for Robot-Assisted Search and Rescue (CRASAR) responded to the catastrophe almost immediately, and within six hours, six robots from four different teams were in place to help FEMA and the Fire Department of New York in the recovery of victims.

The Foster-Miller team brought the robots “Talon” and “Solem”. More information about these robots can be found in Section 2.4.3.

Other robots on the scene included the “micro-VGTV” and “MicroTracs” by Inuktun. iRobot had brought their “Packbot” and SPAWAR had the “UrBot”. All the robots were of different sizes and weights, and all had different kinds of mobility, tethers, vision, lighting, communication, speed, power supplies and sensing capabilities.

During the rescue period, which lasted from September 11th to the 20th, robot teams were sent in eight times into the restricted zone that surrounded the rubble of the disaster area itself. A total of eight “drops” (a drop being defined as “an individual void/space robots are dropped into”) were performed. The average time that such a drop lasted was 6 minutes and 44 seconds[8]. Within the first 10 days, the robot teams found at least five bodies in places inaccessible to humans and dogs.


The robots faced tough problems. Extreme heat sources deep within the debris caused softening of the robot tracks, the software on some of the robots would not accept new sensors, and almost all robots had problems with poor user interface designs which made them hard to control. The robots lacked image processing skills and most of them had a hard time with communications. The robots with tethers often found themselves stuck as the tether got caught in the debris, and the wireless ones had problems with noisy and clogged communications since so many people were trying to use walkie-talkies, radios and mobile phones. In fact, about 25% of the wireless communication was useless[26].

A study was conducted in 2002, which examined about 11 hours of tape that was recorded during the operations, as well as various field notes by the people involved. The article concluded that the priorities of further studies should be “reducing both the transport and operator human-robot ratios, intelligent and assistive interfaces, and dedicated user studies to further identify issues in the social niche”[8].

In summary, the lessons learned according to the article were:

– Transportation of the robots needs to be taken into account. The robot teams had to carry their robots over 75 feet of rubble, and some of the robots required several people to carry them. If a robot could be carried by a single person, the teams could have carried several robots to the scene at the same time, which would have created redundancy if a robot was damaged in some way.

– The robot to operator ratio should be reduced to 1:1, meaning that there should only have to be one operator per robot. At the disaster scene, a second person had to stand near the void where the robot was sent in, in order to hold the rope or the tether. This was a very dangerous position to be in for the person holding the rope, as the fall was quite high.

– The response performance needs to be maximized. This can be approached from several angles. One thing that should be done is to improve team organization. Training standards need to be developed, with regard to both USAR and robots. But another thing that must happen is that research needs to be conducted on how robots can be integrated into regular USAR task forces.

– Cognitive fatigue needs to be minimized. Better team organization will contribute to this, but work also needs to be done on creating better user interfaces. If USAR-workers don't become comfortable with the user interfaces of the robots, they will not use them, as rescue workers most often use “tried and true” methods during disasters.

– Better robot communication needs to be developed. The confidence of USAR professionals will be affected if the communication with a remote robot only works intermittently.

– And lastly, researchers that want to actively help in real USAR situations must acquire USAR training certification, and they should also work on establishing a relationship with a real USAR team beforehand.

2.4 Robot design

When it comes to the design of USAR-robots, scientists have yet to agree on a standardization. A wide variety of different designs have been tried, both in artificial test environments and in actual USAR situations, with varying results.

A successful USAR-robot should have excellent mobility and a balanced composition of sensors, and it needs to be robust. There are many things to consider when designing a USAR-robot, since most decisions have both pros and cons. This section will mention some of the things that need to be taken into consideration, as well as some of the different designs that have been tried already.

2.4.1 Requirements

When a USAR-situation occurs, time is in short supply. Rescue workers have to act quickly and correctly. There is no time for time-consuming errors. Therefore, every aspect of USAR-robotics needs to be reliable. If some shortcomings of a robot are known, that is no big problem; the rescue workers can simply work around these problems. What cannot happen, though, is that a robot fails miserably at some task it is supposed to handle without problems.

The requirements that are essential to USAR-robots can be classified into three categories:

1. Awareness. How well both the robot and the operator can form a mental image of the environment the robot is currently in.

2. Mobility. For what amount of time and at what distance the robot can operate, and how well the robot can traverse a particular area.

3. Robustness. How consistent and durable the robot is.

The interesting thing is that these requirements are all entangled in a web. They are all dependent on each other; if one of them fails, the others cannot necessarily compensate for that loss.

Awareness

Awareness, meaning the ability of the robot and the user to get a good understanding of the environment, is an important requirement. Good situational awareness leads to both increased work effectiveness and a reduced risk of cognitive fatigue (see Section 3.4.2). Good victim detection and mapping are also important parts of the awareness concept.

The awareness of both the operator and the robot comes mainly from the robot's sensors. Different sensors have different strengths and weaknesses. A thing to consider when choosing a sensor is not only the use it has, but also the cost it brings. There is not only an economical cost for each sensor, but also a cost in terms of mobility and robustness to keep in mind. Even if a sensor gives an excellent view of the world, it is useless if it is too big, breaks too easily or draws so much power that the battery time will be too limited. See Section 2.4.2 for more information about sensor selection.

If the operator and the robot can get a clear view of the surroundings, crucial mistakes can be avoided. But if the sensing is so poor that it is easy for the robot to run off a cliff by mistake, and then fall and break, then it does not matter how mobile or how robust it is constructed. It would still be useless.


Figure 2.4: iRobot's PackBot is an example of a robot that can operate even when it is flipped upside down[27].

Mobility

Mobility is another important consideration. A small and flexible robot will be able to explore areas that rescue workers cannot. Things such as working duration and the range of the robot also fall under this category.

Imagine a rock-solid robot: one that is able to withstand even explosions, and one that can sense everything in its surroundings with crystal-clear resolution, without ambiguities. That robot is still totally useless in a USAR scenario if it is either too large or too clumsy to navigate through the disaster scenes. The same problem arises if it has too limited a battery duration, or if it cannot receive orders from the operator further than 1 meter from a transmitter. The point is that even if a robot can handle two of the three categories well, it would still be a poor USAR-robot if it could not handle all three.

The terrain the robot will move around in will vary greatly. It might contain rocks, piles of debris and concrete, among other things. A successful USAR-robot must have robust and reliable locomotion to be able to move across such terrain. Most USAR-robots have a tank-like design, with track-wheels. A common design philosophy within USAR-robotics is to make the track-wheels in a triangular shape. This allows the robot to drive over obstacles that would otherwise have been impassable for a robot with traditional, oval tracks. Being able to transform between oval and triangular tracks during a mission can also be a very efficient way to increase mobility.

As shown in Figure 2.4, some robots have the ability to flip themselves back onto the right keel if they happen to find themselves upside-down, which is an invaluable asset for a robot striving to achieve good mobility. A rescue robot is totally useless if it finds itself helplessly turtled upside down halfway through a mission, unable to right itself.


The “PackBot” approach to this problem is far from the only one. Some robots work equally well upside-down as they do normally, making flipping unnecessary. Some robots solve this problem with specially designed “flippers”, and others solve it with standard arms.

Choosing between having wireless communication and a battery, or having communication and power via a tether, is an important choice to make when designing a USAR-robot. The use of a tether can give the robot a reliable source of energy and a channel for noiseless communication, and it can also reduce the weight of the robot drastically by removing the need to carry a battery along. But a tether also greatly limits the mobility of the robot, since the tether will be of a limited length. The tether will also tend to get stuck in objects. Another drawback is that it may require an additional person to operate the cable full time, keeping it out of harm's way[8].

Other interesting mobility-oriented approaches include the construction of robots with feet, robots that crawl around like snakes, and robots that are polymorphic (robots that can change shape during missions). See Section 2.4.3 for more detailed information.

Robustness

The robot should be able to achieve its goal every time, even if it is working under less than ideal conditions. It should also be resistant to hardware failure. If a robot breaks down in any way during a stressful USAR mission, it is not only the work that the robot could have done that is lost, but also the would-be work of the operators and engineers who then would have to repair it. It may even be impossible to repair in a realistic time frame, and lives may be lost because of it.

Super senses and excellent mobility are useless if the robot breaks down at every turn, pointing again to the fact that all three requirement categories need to be satisfied. Since a USAR-robot's main task is to move around in dangerous areas with the risk of collapses, falling objects, sharp edges and intense heat, the robot must be very reliable and robust.

Many sensors available on the market are not designed with rough conditions in mind, and specially designed variations of such equipment can be very costly. A cost-effective compromise can be found by reinforcing and protecting weaker equipment. Amongst others, the “Caster” team solves this problem by adding the following to their robot: “two 1cm thick polycarbonate plastic roll cages to protect the additional equipment”[19].

2.4.2 Choosing Sensors

“The sensor is a device that measures some attribute of the world”[23].

A sensor suite is a set of sensors for a particular robot. The selection of a sensor suite is a very important part of the USAR-robot design process. There are eight attributes to consider when choosing a sensor for a sensor suite[13]:

1. Field of view and range. The amount of space that is covered by a sensor. For example, a 70 degree wide-lens camera might be selected for its superior field of view compared to a regular 27 degree camera, in order to provide a less constrained view of the world.

2. Accuracy, repeatability and resolution. Accuracy refers to the correctness of the sensor reading, repeatability refers to how often a reading is correct given the same circumstances, and resolution is how finely grained the sensor readings are.


3. Responsiveness in the target domain. How well a sensor works in the intended environment. Some sensors work well in some situations, but are useless in others. A sonar, for example, will produce low-quality results in an area that contains a lot of glass panels, since the sound waves will reflect unpredictably.

4. Power consumption. The amount of drain a sensor has on the robot's battery. If a sensor has a high power consumption, it limits the number of other sensors that could be added to the robot, as well as lowering the robot's mobility by reducing the time that it can operate without recharging its battery.

5. Hardware reliability. The physical limitations of a sensor. For example, some sensors might be unreliable under certain temperature and moisture conditions.

6. Size. Size and weight considerations.

7. Computational complexity. The amount of computational power that is needed to process the different algorithms. This problem has become less critical as processors have become more powerful, but it can still remain a problem for smaller robots with less advanced CPUs.

8. Interpretation reliability. The reliability of the algorithms that interpret the sensor data. The algorithms must be able to correctly handle any mistakes that the sensor makes, and not make bad choices because of bad information. In other words, they should know when the sensor “hallucinates”, and when it is working correctly.

The attributes that tie most closely into the requirements mentioned earlier are listed below; a small, purely illustrative encoding of the full checklist is sketched after this list:

– Awareness: 1, 2 and 8.

– Mobility: 4 and 6.

– Robustness: 2, 3, 5 and 8.
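Purely as an illustration (none of this appears in the thesis; the enum values, ratings and the simple summed score are invented here), the eight attributes above could be encoded in Java as a checklist that rates each candidate sensor:

import java.util.EnumMap;
import java.util.Map;

public class SensorChecklist {

    // The eight selection attributes discussed above.
    enum Attribute {
        FIELD_OF_VIEW_AND_RANGE, ACCURACY_REPEATABILITY_RESOLUTION,
        RESPONSIVENESS_IN_TARGET_DOMAIN, POWER_CONSUMPTION,
        HARDWARE_RELIABILITY, SIZE, COMPUTATIONAL_COMPLEXITY,
        INTERPRETATION_RELIABILITY
    }

    // Sums per-attribute ratings (e.g. 1 to 5); a real selection would weight
    // the attributes according to the mission profile.
    static int score(Map<Attribute, Integer> ratings) {
        return ratings.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        Map<Attribute, Integer> sonar = new EnumMap<>(Attribute.class);
        sonar.put(Attribute.FIELD_OF_VIEW_AND_RANGE, 3);          // wide but coarse
        sonar.put(Attribute.ACCURACY_REPEATABILITY_RESOLUTION, 2); // poor resolution
        sonar.put(Attribute.POWER_CONSUMPTION, 4);                 // cheap to run
        System.out.println("Sonar score: " + score(sonar));
    }
}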

Here is a list of examples of sensors that are often chosen for use in USAR-robots:

CCD Cameras

CCD cameras are one of the most common types of sensors when it comes to USAR-robots. Several CCD cameras are often placed in different directions on the same robot to give a wider view of the environment. The main argument for using this kind of camera is that it gives an image in the RGB-spectrum, which is very similar to the view of the world that humans have. It can also be used for movement detection, which is useful for finding victims. It is a very established technology, and it is generally cheap to acquire[23].

Laser range imaging

A laser range imager is a sensor that has the purpose of providing obstacle detection.Laser rangers function by sending out a laser beam, and then it measures how longit took before the beam’s reflection returned in order to calculate the distance to anobject. While lasers that can cover a entire 3D-area are technically possible, they arevery expensive, with costs on the order of 30.000$ to 100.000$. The more commonly

Page 29: Semi-Autonomous, Teleoperated Search and Rescue Robot · robots, especially in USAR situations, will continue to increase. As robots and sup-port technology both become more advanced

2.4. Robot design 15

Figure 2.5: An example of an image created by using data from a 3D-camera[1].

The more commonly available laser range sensors only cover a 180 degree horizontal plane. The strength of the laser is its high resolution and relatively long range. There are several downsides, however. Not every type of material reflects light well enough for it to be read; specular reflection might occur (“light hitting corners gets reflected away from the receiver”); or the point may be out of range, all of which can lead to an incorrect depth image. Another issue is that even if the depth image of the horizontal plane is correct, obstacles might still exist beneath or above the plane. This problem has recently been combated by researchers by mounting two laser range devices, one tilted slightly upwards and one slightly downwards[23].
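
To illustrate the time-of-flight principle that laser rangers rely on, the sketch below converts a measured round-trip time into a distance. This is only an illustration of the principle; the function name and the example reading are not taken from any particular sensor.

```python
# Minimal sketch of the time-of-flight principle used by laser range finders.
# The beam travels to the obstacle and back, so the one-way distance is half
# the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Return the distance (in metres) to the reflecting surface."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a reflection received after roughly 66.7 nanoseconds corresponds to about 10 m.
print(range_from_round_trip(66.7e-9))
```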

3D-cameras

A 3D-camera, or time-of-flight camera, is similar to laser range imaging in many ways, with the difference that the 3D-camera sends out rays of IR-light in a rectangular shape, rather than in a plane. The 3D-camera calculates the distance to an object with the help of the phase of the returned signal (see Section 4.1.2 for details on the process). See Figure 2.5 for an example of an application displaying data from a 3D-camera. The advantage of a 3D-camera is that it is relatively cheap compared to similar alternatives, while still providing a reasonably accurate depth image. The downside, besides the same ones that also affect laser range imaging, is that it provides a rather low-resolution image[1].
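
As an illustration of the phase-based ranging principle, the sketch below maps a measured phase shift to a distance. The modulation frequency of 20 MHz is an assumed, typical value for cameras of this class; it is not a figure taken from the thesis or from the SR3000 documentation.

```python
import math

# Sketch of how a time-of-flight camera converts a measured phase shift into a
# distance. MODULATION_FREQUENCY is an assumption (20 MHz is a typical value);
# the real device and its API may differ.

SPEED_OF_LIGHT = 299_792_458.0  # m/s
MODULATION_FREQUENCY = 20e6     # Hz (assumed)

def distance_from_phase(phase_radians: float) -> float:
    """Map a phase shift in [0, 2*pi) to a distance within the unambiguous range."""
    unambiguous_range = SPEED_OF_LIGHT / (2.0 * MODULATION_FREQUENCY)  # about 7.5 m
    return (phase_radians / (2.0 * math.pi)) * unambiguous_range

print(distance_from_phase(math.pi))  # half the unambiguous range, about 3.75 m
```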

Carbon dioxide sensors

Carbon dioxide sensors are, at least within robotics, mostly specific to USAR-robots. They measure the carbon dioxide content of the air around them.


The purpose is to find spaces occupied by humans, since such spaces will have a higher concentration of carbon dioxide due to human breathing. This can obviously be very helpful when searching for victims in a collapsed building[26].

IR sensors

An active proximity sensor that is quite cheap. A near-infrared signal is emitted, and a measurement is taken of how much light is returned. This technique often fails in practice because the light gets “washed out” by bright ambient lighting or is absorbed by dark material[23].

Thermal cameras

Thermal cameras acquire a heat image of the environment. They are very useful for USAR because humans emit more heat than the building itself. They can also be used to spot dangerous areas: when a door emits very high temperatures, there could very likely be a fire raging on the other side of it[23].

Sound sensors

Sound sensors are basically just microphones placed on the robot. They are useful both for victim detection and for increased situational awareness for the operator (see Section 3.3). Victim detection is enhanced by allowing the operator to hear the voices of people that are trapped. The victims can also speak to the operator via the robot, in order to reveal vital information regarding other nearby victims. Having sound from the robot can also increase the situational awareness of the operator by letting him or her hear what the robot hears. This might reveal information that otherwise might be lost, such as hearing the wheels of the robot skidding, and thus realizing why the robot is stuck[19].

Sonar

The sonar is one of the most common robotic sensors. It sends out an acoustic signal and measures the time it takes for the signal to reflect back. It has the same problems as the laser range finder, with the additional weakness of having a very poor resolution. It performs poorly in noisy environments and is more suitable for controlled research situations. Its low price and relatively high gain are its main strengths[23].

This list is by no means exhaustive; the number of possible sensors is far too large to cover in this paper. But the ones above are some of the most commonly used, and they have already proven their worth.

Choosing what sensors should be included in a USAR-robot project is only part of the sensor design problem. Another important part is making sure that they are used efficiently.

Due to limited bandwidth, it can be of great use to have some sort of preprocessing of the collected data on-board the robot. This way, only useful information is sent to the operator, preventing clogging of the communication medium.

A prerequisite for this is good sensors and smart algorithms for sensor refinement and sensor fusion (see Section 3.4.3).


Figure 2.6: The Solem robot, which was used in the World Trade Center rescue operations.

2.4.3 Related Work

Many different groups of researchers are tackling, from different angles, the problems of constructing robust and mobile robots that are aware of their environment. This section brings forth only a few of the many creative ways to construct ever more effective USAR-robots.

Basic Search and Rescue approach

The robots “Talon” and “Solem” from “Foster-Miller” will serve as examples of what can be considered “standard” urban search and rescue robots, as they use many techniques that are ubiquitous in USAR robot design. They were two of the robots that were used in the rescue operation following the World Trade Center attacks in 2001, see Section 2.3 for more information.

“Talon” is a wireless, suitcase-sized, tracked robot. Its tracks are built to handle heavy brush and soft surfaces such as sand, mud or even water. It is equipped with several cameras, including a zoom thermal camera and a night-vision camera. It also has a two-stage arm that can be used to move pieces of debris or to pick up small objects. Talon is large enough to tackle mobility problems such as stairs very easily. It can carry a payload of 138 kg, and it can pull a 91 kg load. However, its size makes it unable to enter small spaces, which, along with a rather short battery time of around one hour, accounts for Talon's major weaknesses.

The “Solem” robot (shown in Figure 2.6) is much lighter than the Talon robot, but also a lot slower. It is an amphibious, all-weather, day-night robot. Just like the “Talon”, it uses radio for communication. “Solem” is equipped with a 4 mm wide-lens camera, and additional sensors, such as night vision or thermal cameras, can be attached to its arm. It carries four Nickel-Metal batteries that allow it to operate for one hour at full capacity when moving through rough terrain.


Figure 2.7: The operator control unit for controlling iRobot's PackBot. This portable device is used to control the robot from a distance.

Both robots can be controlled with an Operator Control Unit (OCU), which is a portable computer that can be used to control every aspect of the robot. See Figure 2.7 for an example of a common OCU.

The images from the robots' sensors can either be sent to the screen of the OCU or to a pair of higher resolution virtual reality goggles[26].

Robot swarms

The article “World Embedded Interfaces for Human-Robot Interaction” by Mike Daily et al.[10] discusses the idea of robot swarms.

The authors of the article have a vision in which an operator arrives at a scene, quickly programs his army of mini-robots (approximately the size of small rodents) for a certain task, and then lets the swarm loose.

The swarm then starts to scout around, with no single robot communicating directly with the operator; instead, each robot only talks with the closest robots in its surroundings (much like a mesh network).

In a hypothetical USAR scenario, these robots would enter the debris; some of the robots would stay put and act as beacons, propagating information from the robots further inside, while others would move deeper into the debris. Suddenly, some of the robots start to blink with a red light. This means that some robot far down in the debris has found a human. This has been communicated only to the closest robots in its surroundings, and the end result is a breadcrumb-like path formed from the operator down to the suspected victim.


Figure 2.8: A prototype image of a robot swarm using a world embedded interface. The arrows point towards a potential victim in a USAR situation.

A bigger robot (or an operator) can then be dispatched to confirm the sighting of a victim, following the path indicated by the red-blinking robots.

These types of robots are far from reality at this point, but the idea certainly has some potential, as emergent systems have proven their worth many times. See Figure 2.8 for an image of a prototype of such a system.

Snake and snake hybrid

The article “Survey on Urban Search and Rescue Robotics”[26] makes the following statement regarding the design of a good robot base: “The base should be able to drive on wet surfaces possibly contaminated with corrosives and it needs to be heat resistant, water proof and fire proof. Without such a base, a robot cannot explore a disaster site making all its sophisticated sensors and software useless. These strict requirements stem from the extreme environments which they need to explore”.

The same article then goes on to highlight some of the suggestions that have been made for such a solid base. One of the most interesting is a robot with a snakelike body as the foundation. A snake-shaped robot is extremely well suited to navigating through confined spaces, and it has little difficulty moving in any direction in three-dimensional space.

The article also points out that the advantages of a snake robot come at a price: “The many degrees of freedom that furnish the snake with its wide range of capabilities also prove its major challenges: mechanism design and path planning.” After this conclusion they continue and present some of the work of a researcher named Shigeo Hirose, namely the work on a 3-degree-of-freedom, 116 cm long crawler named Souryo, also known as blue dragon.


This robot is a hybrid between a snake and a crawler robot, and it tries to utilize the advantages of each side. Its design and effectiveness are presented in the article “Development of Souryo-I: Connected Crawler Vehicle for Inspection of narrow and Winding Space” by T. Takayama and S. Hirose[28].


Chapter 3

Human-Robot Interaction

Human-robot interaction (HRI) is a research field concerned with the interactions between a human user, or operator, and a robot. This subject is a very important part of urban search and rescue (USAR) robotics because of the operator's need to get a good overview of the situation and to operate the robot effectively under stressful conditions.

The objective of HRI is to simplify the potentially complex interactions between the operator and the robot. The problem lies in providing the operator with an interface to the robot that is powerful enough that all necessary tasks can be executed successfully, while at the same time being so simple and intuitive that it is easy, quick and painless to use.

This chapter will specifically deal with teleoperated robots and how humans interact with them. It will also mention some real-world applications of teleoperation, and it will touch on various principles that are useful to teleoperation in general, such as semi-autonomy, common ground and graphical user interfaces.

3.1 Teleoperation

There are several negative aspects associated with the strictly autonomous approach that has traditionally existed within the robotics field. Robots have been found to lack both the perception and the decision-making capabilities required to operate in real-world environments. Therefore, a lot of research effort was directed towards teleoperation (i.e. robots and humans working together), rather than towards robots that work entirely on their own. Teleoperation is often seen as an interim solution, with fully autonomous robots being the ideal long-term goal.

Teleoperation is a type of control where a human operator controls a robot from a distance. In most cases the human (or “master”) directs the robot (or “slave”) via some sort of workstation interface located out of viewing distance from the robot (see Figure 3.1). The human operator is required to have a user interface (UI) consisting of a display and a control mechanism, and the robot is required to have power, effectors and sensors. For more detailed information regarding user interfaces see Section 3.4.

The robot's sensors are required because the operator generally cannot see the robot directly, so the robot needs to collect data about its nearby environment to send back to the operator.


Figure 3.1: The principle of teleoperation. The operator (or “master”) is connected to the robot (or “slave”) via an arbitrary connection medium. The operator controls the robot based on the feedback received from the robot's remote environment[17].

The display enables the operator to see a representation of the robot's sensor data, with the goal of making more informed decisions regarding the robot's situation. The control mechanism of teleoperation normally consists of a joystick, or a keyboard and a mouse, but a wide range of innovative control devices, such as virtual reality headgear, can also be used.

Teleoperation is a popular approach because it tries to evade some of the problems of purely autonomous robotics by letting a human be part of the decision process. The goal is then to create a synergy between the strengths of a robot and the strengths of a human in order to minimize the limitations of both[23].

According to Wampler[31], teleoperation is best suited to applications where the following points apply:

1. The tasks are unstructured and not repetitive.

2. The task workspace cannot be engineered to permit the use of industrial manipulators.

3. Key portions of the task intermittently require dexterous manipulation, especially hand-eye coordination.

4. Key portions of the task require object recognition, situational awareness, or other advanced perception.

5. The needs of the display technology do not exceed the limitations of the communication link (bandwidth, time delays).

6. The availability of trained personnel is not an issue.


Figure 3.2: The Sojourner Mars rover, which was sent by NASA to Mars in 1997 to explore the planet. (Image by NASA)

3.1.1 Time delays

A significant disadvantage when teleoperating robots, especially over huge distances, is that radio communication between the local operator and the remote robot takes a long time to transmit. For example, a directly controlled task that takes operators just one minute to do on Earth takes two and a half minutes on the Moon and 140 minutes on Mars. The increase in time between the different scenarios is caused by transmission delays between the robot and the operator, since the operator has to wait for the results after every action[23]. This time delay can cause a lot of problems, since the operator has to predict what the robot's status will be several minutes into the future. This can often lead to situations where the operator unknowingly puts the robot in danger. There are several proposed solutions to this problem, including predictive displays (see Section 3.4.2) and some of the various approaches to semi-autonomous control, such as traded control and shared control (see Section 3.2).
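
A back-of-the-envelope sketch of why direct control degrades with distance is given below: with a move-and-wait strategy, the operator waits one full round trip after every command before seeing its result. The command count and the delay values are illustrative assumptions, not the figures quoted above.

```python
# Sketch of the "move and wait" effect: total task time grows with the
# communication delay because every command must be confirmed visually.

def task_duration(commands: int, execution_s: float, one_way_delay_s: float) -> float:
    """Total time when the operator waits for feedback after each command."""
    return commands * (execution_s + 2.0 * one_way_delay_s)

# A task of 30 commands, each taking 2 s of robot motion:
print(task_duration(30, 2.0, 0.0))   # operator on site: 60 s
print(task_duration(30, 2.0, 1.3))   # Moon, roughly 1.3 s one-way delay: 138 s
print(task_duration(30, 2.0, 600.0)) # Mars-like delays of many minutes make direct control impractical
```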

3.1.2 Planetary Exploration

A practical example of a situation where teleoperation has been found to be useful is planetary exploration. The reasons why it is preferable to use robots rather than humans for space missions are plentiful:

– It is cheaper

– A robot does not need any biological support systems, such as oxygen and food

– A robot can theoretically stay forever, so no return journey is required

– A robot can handle harsh environmental situations

– There is no risk of loss of life


Planetary exploration is an area that has been tried in practice several times in recent years by the National Aeronautics and Space Administration (NASA). The first teleoperated robot to have been successfully sent to another planet was the Sojourner robot, shown in Figure 3.2, which was sent to explore the surface of Mars in the summer of 1997. Sojourner remained operational for almost three months before radio contact was lost permanently on the 27th of September[23]. In 2003 NASA sent two more exploration robots to Mars, named Spirit and Opportunity, to investigate the Martian geology and topography. While the original mission was planned to last 90 days, the robots were so successful that the mission was extended to operate all through 2009[24].

3.1.3 Unmanned Aerial Vehicles

An unmanned aerial vehicle (UAV) is a robotic aircraft that can be either entirely automated or teleoperated. The advantages of not having a pilot are that the plane can be made a lot smaller, and therefore use less fuel, and that it can be put into dangerous situations, such as flying into war zones or natural disaster areas, without fear for a pilot's life. A disadvantage is that UAVs currently require several people to operate, as the plane's sensors and its controls are often manned by different people. The work requires a high degree of skill, and the training of an operator takes about a year to complete[23].

While teleoperated robots have various potential military applications, the UAV is one that has been tested in real-life situations. The United States Air Force has created several UAVs in the last decades, for example the Darkstar UAV and the Predator.

The Darkstar UAV was an advanced prototype that could fly autonomously, but was teleoperated by a human during take-off and landing, since those are the most critical moments of the mission. Unfortunately, transmission latency was not taken into account when constructing the first working prototype, and the seven-second delay induced by the satellite communications link caused Darkstar no. 1 to crash on take-off, since the operator's commands took too long to arrive (see Section 3.1.1 for more on the effects of latency).

The Predator was used successfully in Bosnia, where it took surveillance photos in order to verify that the Dayton Accords were being honored[23].

3.1.4 Urban Search and Rescue

Urban search and rescue robotics is an area of teleoperation that is currently the focus of a lot of research effort; see Chapter 2 for more information.

3.1.5 Other examples of teleoperation

Other similarly dangerous environments where teleoperated robots have been applied include:

– Underwater missions (such as the exploration of the wreck of the Titanic[23])

– Volcanic missions[5]

– Explosive Ordnance Disposal[20], such as the robot seen in Figure 3.3.


Figure 3.3: A Joint Service Explosive Ordnance Disposal robot ’Red Fire’ prepared to recover a mine on February 9, 2007 in Stanley, Falkland Islands (photo by Peter Macdiarmid/Getty Images).

3.2 Semi-autonomous Control

Semi-autonomous control, or supervisory control, is when the operator gives the robot a task that it can more or less perform on its own. The two major kinds of semi-autonomous control are shared control (or continuous assistance) and traded control. While the concepts are separate, both paradigms can be present in the same system, as in the case of the previously mentioned Sojourner rover (see Section 3.1.2).

Safeguarded teleoperation is an attempt to alleviate the effects of time delays by giving the robot the power to override or ignore commands if it judges that a command would put it in danger. Another approach to semi-autonomy is adjustable autonomy, where the autonomy level of the robot is set dynamically, depending on the situation.

3.2.1 Shared Control

Shared control is a type of teleoperation where the human operator and the robot share the control over the robot's actions. The operator can choose between delegating a task to the robot, or accomplishing it him- or herself via direct control. If the task is delegated, the operator takes the role of a supervisor and monitors the robot to check if any problems arise. This relationship enables the operator to hand simple, boring or repetitive tasks to the robot, while personally handling tasks that require hand-eye coordination, for example. This helps to reduce the issue of cognitive fatigue (see Section 3.4.2), but the communication bandwidth that is demanded by the direct control is potentially high, since a lot of sensor data needs to be sent to the operator[23].

3.2.2 Traded Control

Traded control is an attempt to avoid the problem of high demands on bandwidth and operator attention.


The idea is for the operator to just initiate the robot's actions and stop ongoing actions as needed, thus removing the demands of direct control from the equation.

The concept is that the robot should be capable of performing some sub-tasks on its own, and that the operator should just give it instructions on what that sub-task should be. It is then assumed that the robot will complete the task without any further operator assistance. This leads to a scenario where a human can operate a significant number of these low-maintenance robots simultaneously, without having to monitor them. And because the robots would not need any delicate direct controls or high-bandwidth sensory equipment (since they would work largely on their own), both latency and cognitive fatigue would become less of a problem.

The issue with traded control is that it inherits the problems of robotic autonomy. The whole point of teleoperation is to avoid the problem that robots are not that useful when it comes to acting on their own. And while some of the issues are avoided by letting the human make the overarching decisions, the underlying faults are still present. But traded control can still be viewed as a valid stepping stone towards the ultimate goal of full autonomy[23].

3.2.3 Safeguarded teleoperation

Safeguarded teleoperation, or guarded motion, is a concept within the semi-autonomous control field that was originally conceived to enable the remote driving of a lunar rover. The idea is that the operator has full control of the robot's motion if the situation is deemed safe, but if the robot decides that it is in a hazardous situation, it overrides the operator's control in order to keep itself safe. For example, if an operator unintentionally orders the robot to drive into a wall, it should not obey; instead it should slow down the closer it gets to the wall, and stop entirely if it gets close enough.
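
A minimal sketch of the safeguarding behaviour described above follows; the distance thresholds and the linear braking ramp are assumptions made for illustration, not values from any cited system.

```python
# Sketch of guarded motion: the operator's commanded speed is scaled down as
# the robot approaches an obstacle, and forced to zero inside a stop distance.

STOP_DISTANCE = 0.3   # metres: never drive closer than this (assumed value)
SLOW_DISTANCE = 1.5   # metres: start braking inside this range (assumed value)

def safeguarded_speed(commanded_speed: float, obstacle_distance: float) -> float:
    """Return the speed actually sent to the motors."""
    if obstacle_distance <= STOP_DISTANCE:
        return 0.0  # override the operator: too close, stop entirely
    if obstacle_distance >= SLOW_DISTANCE:
        return commanded_speed  # situation deemed safe, obey the operator
    # Linear ramp between the two thresholds.
    fraction = (obstacle_distance - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)
    return commanded_speed * fraction

print(safeguarded_speed(0.5, 2.0))  # 0.5  (full speed)
print(safeguarded_speed(0.5, 0.9))  # 0.25 (braking)
print(safeguarded_speed(0.5, 0.2))  # 0.0  (command overridden)
```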

The point is to evade some of the problems caused by long communication delays between the operator and the robot. When the operator makes uninformed decisions (due to the unavoidably outdated information available when operating over large distances), they are less likely to be fatal to the robot. The concept is most commonly applied to shared control situations, but work has also been done suggesting applications in traded control[15].

3.2.4 Adjustable autonomy

Adjustable autonomy is a concept where the user has the option of selecting just how much autonomy the robot should have at any particular moment. An article has been written on this subject, called “Experiments in Adjustable Autonomy” (Goodrich, Olsen, et al., 2001)[16]. The goal of the system described in the article was to reduce the negative effects of neglect. Neglect is defined as the time when the operator is not actively controlling the robot, see Figure 3.4. “The x-axis represents the amount of neglect that a robot receives, which can loosely be translated into how long since the operator has serviced the robot. As neglect increases, effectiveness decreases. The nearly vertical curve represents a teleoperated robot which includes the potential for great effectiveness but which fails if the operator neglects the robot. The horizontal line represents a fully autonomous robot which includes less potential for effectiveness but which maintains this level regardless of operator input. The dashed curve represents intermediate types of semi-autonomous robots, such as a robot that uses waypoints, for which effectiveness decreases as neglect increases”[16].


Figure 3.4: The neglect curve containing teleoperation and full autonomy. The x-axis represents the amount of neglect that a robot receives, and the y-axis represents the effectiveness of the robot. The dashed curve represents intermediate types of semi-autonomous robots, such as a robot that uses waypoints[16].

The time delay that teleoperation suffers from (see Section 3.1.1) can also be categorized as a form of neglect. Another situation that causes neglect is when an operator is operating several robots at the same time. This is because the operator can actively work with only one robot at a time, and constantly has to switch between the different robots.

The article draws two conclusions regarding the design of robot and interface agents, and presents them as two rules of thumb:

– “As autonomy level increases, the breadth of tasks that can be handled by a robot decreases.”

– “The objective of a good robot and interface agent design is to move the knee of the neglect curve as far right as possible; a well designed interface and robot can tolerate much more neglect than a poorly designed interface and robot.”

The project described in the article supported five levels of autonomy: full autonomy, goal-based autonomy, waypoints and heuristics, intelligent teleoperation, and dormant. The operator has the possibility to switch between the autonomous modes, but the robot has some authority over their behavior. See Figure 3.5 for an approximation of the effectiveness of each mode as a function of neglect. “The waypoints level permit more user control and higher efficiency, but when the waypoints are exhausted then the efficiency drops off. The goal-based autonomy allows less user control then waypoints, but can include some capability to build local maps even if neglected”[16].

In conclusion, the goal of a system with adjustable autonomy is to always maximize effectiveness by picking the autonomy mode that corresponds to the highest robot effectiveness at that specific point on the neglect curve. In this way, the negative effects of neglect, whether caused by communication delays or by one operator handling several robots at once, are kept as small as possible.
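
As a sketch of this selection rule, the snippet below picks the mode whose modelled effectiveness is highest for a given amount of neglect. The effectiveness curves are invented stand-ins for the qualitative shapes in Figure 3.5, not data from the cited article.

```python
# Pick the autonomy mode with the highest (modelled) effectiveness at the
# current neglect level. Curves are illustrative only.

def teleoperation(neglect: float) -> float:
    return max(0.0, 1.0 - 5.0 * neglect)   # very effective, collapses quickly

def waypoints(neglect: float) -> float:
    return max(0.0, 0.8 - 1.0 * neglect)   # degrades as waypoints run out

def full_autonomy(neglect: float) -> float:
    return 0.4                              # constant but modest effectiveness

MODES = {"teleoperation": teleoperation,
         "waypoints": waypoints,
         "full autonomy": full_autonomy}

def best_mode(neglect: float) -> str:
    return max(MODES, key=lambda name: MODES[name](neglect))

for n in (0.0, 0.3, 0.9):
    print(n, best_mode(n))
# teleoperation at low neglect, waypoints in between, full autonomy when neglected.
```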


Figure 3.5: Autonomy modes as a function of neglect. The x-axis represents the amount of neglect that a robot receives, and the y-axis represents the effectiveness of the robot[16].

A variation on the same theme is a system of adjustable autonomy that is based on user competence. The idea is for the robot to evaluate the operator's performance and adjust the level of autonomy accordingly. The evaluation can be done in numerous ways, but one example is for the robot to calculate how often the user drives the robot into, or too close to, an obstacle. A user that is deemed incompetent is then given very little control over the robot's actions, while a user that is considered fully competent is given full access to the steering of the robot. If the user is considered semi-competent, the robot can provide steering “advice” to the user[18].
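
A minimal sketch of how such a competence estimate could drive the autonomy level is given below; the collision-rate thresholds and the three levels are illustrative assumptions, not values from [18].

```python
# Sketch of competence-based adjustable autonomy: the robot grades the operator
# by how often recent commands led to (near-)collisions and adjusts its authority.

def autonomy_level(near_collisions: int, commands: int) -> str:
    """Return how much control the operator should be given."""
    if commands == 0:
        return "advice"                    # no evidence yet: assist cautiously
    rate = near_collisions / commands
    if rate > 0.2:
        return "robot overrides steering"  # operator deemed incompetent
    if rate > 0.05:
        return "advice"                    # semi-competent: suggest corrections
    return "full manual control"           # fully competent operator

print(autonomy_level(1, 50))   # full manual control
print(autonomy_level(6, 50))   # advice
print(autonomy_level(15, 50))  # robot overrides steering
```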

3.3 Common ground and situation awareness

Common ground is the knowledge, beliefs and suppositions that two individuals think they share in regards to a joint task. In a tennis match, for example, the common ground between two players would consist of the rules of tennis, who won the last match, how to hold the racket, etc.[29]. It has been suggested that common ground is required for a successful collaboration between partners[9].

The common ground framework was originally developed as a tool for understanding collaboration between people, but research has recently been extended to include human-robot interaction. This research suggests that interfaces can be improved by looking at the interaction between a human and a robot as a conversation with the purpose of developing a “shared meaning between the user and the machine interface”. Several recent reports have indicated that the communication between the operator and the robot improves as the common ground available between them expands[29].

While common ground theory is more directed towards dialog and communication, it also overlaps with an important area within teleoperation, namely situation awareness. Situation awareness is defined as “knowing what is going on around you”[12]. Empirical studies of shared control teleoperation within urban search and rescue (USAR) show that a significantly larger amount of time is spent trying to achieve situational awareness than is spent on actually navigating the robot.


Figure 3.6: An implementation of the scripts concept, which is an attempt at improving common ground between a robot and an operator[6].

The applicability of teleoperation involving traded control is not clear[29]. On the basis of the observations made in the USAR domain, it has been proposed that “shared mental models contribute to [situation awareness] and that communication is critical to refining these models”[29].

There are many proposed solutions to the common ground problem. One example of such a solution is the use of scripts.

“...the script mechanism consists of causal chain of modular activities (behaviors) to be executed by a specific actor (in this case, a human or a robot) on the basis of cues (events in the world, changes in state, etc.). The causal chain is equivalent to finite-state automata in expressiveness and power, but uses case statement-like logic providing more readability”[6]. The purpose of scripts is to solve the problem of the human not knowing what the robot is doing by giving the operator graphical insight into the decision process of the robot, and thereby letting the human know what the robot is “thinking”. The opposite problem (the robot does not know what the human is thinking) is solved by letting the operator interact directly with the behaviors in the script. See Figure 3.6 for an example of an implementation of the scripts concept.
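
The sketch below illustrates the kind of data structure such a script could be: a chain of behaviours, each advanced by a cue, where the current behaviour is what the interface would display. The behaviours and cues are invented for illustration and do not reproduce the cited implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    behaviour: str               # what the robot will do
    cue: Callable[[Dict], bool]  # world event that completes the step

class ScriptRunner:
    """Walks a causal chain of behaviours; the current step is what the UI shows."""

    def __init__(self, steps: List[Step]):
        self.steps = steps
        self.index = 0

    def current(self) -> str:
        return self.steps[self.index].behaviour if self.index < len(self.steps) else "done"

    def update(self, world: Dict) -> None:
        # Advance along the chain when the current step's cue is observed.
        if self.index < len(self.steps) and self.steps[self.index].cue(world):
            self.index += 1

script = ScriptRunner([
    Step("drive to doorway", lambda w: w.get("at_doorway", False)),
    Step("search room",      lambda w: w.get("room_searched", False)),
    Step("signal operator",  lambda w: w.get("victim_reported", False)),
])

print(script.current())              # "drive to doorway"
script.update({"at_doorway": True})
print(script.current())              # "search room"
```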

3.4 User Interface

“All teleoperation interfaces include tools and displays to help the operator perceive the remote environment, to make decisions, and to generate commands”[15].

The user interface (UI) is an important part of human-robot interaction, as it is the medium through which all communication between the robot and the operator occurs. The interface needs to be powerful enough to allow the operator to accomplish the things that he or she requires, but it also needs to be simple enough that the operator is not overwhelmed by too many options. A poorly designed user interface can lead to ambiguity, misunderstanding and mistakes, and in the teleoperation field, problems such as cognitive fatigue and simulator sickness (see Section 3.4.2) can also arise.

This section will examine some important user interface concepts and the proposed solutions to common UI problems. It will then finish with real-world examples of user interfaces within the urban search and rescue field.


3.4.1 Background

User interfaces do not only relate to computers and computer software, but to everything where human interaction is a part of a system. Vehicles, such as cars or airplanes, are examples of places where a user interface is present. One could even argue that the handle of a hammer or a knife is a simple user interface.

It was not long ago that machines, tools and systems were designed without a single thought about simplifying their use and understanding for humans. The typewriter is an excellent example of this. The keys of the first typewriters were placed in alphabetical order, but this caused the underlying machinery to jam when the users started to write very quickly. The keys were therefore scattered in such a way that the most commonly used keys had the greatest distance between them, and humans were forced to adapt to the machine. Instead, the keys could have been arranged in an order that made it as easy and intuitive as possible for the users to write, probably by placing the most commonly used letters as close to each other as possible.

Since the user interface is the place through which all the communication between the robot and the human passes, it is essential that the user interface is designed in such a way that the communication can proceed without trouble.

Just as the first typewriters are inferior to modern software word processors, the beginning of the robotics era had user interfaces vastly inferior to what is available today. A robot's output could have been printed paper, combined with the study of the robot's behavior. The task of giving the robot input could have been very complicated, with complex text commands typed in via a terminal.

Today, robot user interfaces are greatly enhanced. Input can be provided in a wide variety of ways, such as microphone and voice recognition, keyboard commands, joysticks, mouse pointers, gestures, signs and written commands. There are even interfaces reading data directly from the brain.

Output from robots has been equally enhanced, with complex 3D pictures, sound and touch/sensing. There are even interfaces that mix reality with output from robots, called “augmented reality”.

3.4.2 Telepresence

Teleoperation, as it is traditionally defined, has several drawbacks. The robot's environment is most commonly viewed through a single, front-mounted camera. The resulting lack of peripheral vision makes the job of the operator much harder than it needs to be. The lack of unlimited communication bandwidth can also cause jerkiness in the transmitted images, which makes the situation for the operator even less pleasant. The task also has the potential to be boring and repetitive if conducted for hours in a row. The purpose of the field of telepresence is to alleviate some of these factors by making the interface between human and robot more natural.

Cognitive fatigue and simulator sickness

All these factors can lead to cognitive fatigue. Cognitive fatigue is a mental imbalance that can happen to people when they concentrate too hard for too long on a task. Symptoms include headaches and a lessened attention span, which in turn leads to decreased operator performance.


Figure 3.7: An example of Virtual Reality equipment, including headgear and motion sensing gloves. (Image by NASA)

Another possible danger of teleoperation lies in the fact that the human body expects the visual input it receives to match the balance sense in the inner ear. This discordance can lead to a condition called simulator sickness, which is in many ways similar to sea sickness, with many of the same symptoms[23].

Both of these dangers are things that the field of telepresence is trying very hard to minimize.

Virtual Reality

An example of an interaction method that was designed to solve some of the issues with traditional interaction is virtual reality (VR). The most common form of VR includes headgear that covers the operator's entire field of vision, while also providing audio cues for deeper immersion. The control mechanism could be a regular joystick, but it can also be more advanced controls like motion sensing gloves (see Figure 3.7). The purpose of a VR-controlled robot is to give the operator complete sensor feedback and the feeling of being the robot. For example, if the robot receives a “move forward” command, but is stuck and its wheels are just spinning freely, then the operator could be made aware of this by hearing the motor sounds and feeling the motors straining, while seeing that no visual changes happen. This would be a much more natural interface for the human, compared to just getting a message on a computer screen. The drawbacks of virtual reality are the steep cost of the equipment itself, and the very high bandwidth requirement[23].

World embedded interface and Augmented reality

The article “World Embedded Interfaces for Human-Robot Interaction” by Mike Daily et al.[10] talks about a helmet similar to a virtual reality helmet.


Figure 3.8: A prototype image of a world embedded interface, which is a mix of reality and an interface.

The helmet they have in mind, however, also has transparent vision, so that the operator not only sees what is on the screen, but also sees the real world, as in Figure 3.8, meaning that it provides an image of reality mixed with a user interface. The purpose of this is to enhance a human's natural senses, rather than supplanting them. This type of technology is called “augmented reality” or a “world embedded interface”.

The ideas they mention are discussed in relation to the concept of robot swarms (see Section 2.4.3 for more information).

If the robot swarm's mission is to locate something, the world embedded interface will, for example, show arrows from the robots in front of the operator's eyes, leading him or her in the right direction. This can be seen in Figure 2.8.

However, this type of augmented reality is not only interesting in combination with robot swarms. It is very useful in a wide variety of situations.

Augmented reality interfaces can, for example, ease the navigation of vehicles in dense weather conditions with limited sight. Sensors unaffected by the weather can send information to the augmented reality interface, where the extra information is presented in a way that enhances the operator's situational awareness. This concept can be used in airplanes, ferries or cars, for example.

One can imagine the ease a mechanic could have if all the parts that he or she searches for when performing maintenance on some machine would glow in a bright color.

Even such a common task as reading a book could be enhanced by an augmented reality system. The reader could, for example, want to search for all occurrences of a word. A camera could then scan the page, and an image analysis program could find all occurrences of that word. The augmented reality system could then make it appear as if all these occurrences were printed in a different color than the others.


Figure 3.9: A traditional teleoperated system (top), and a system which utilizes a predictive display to reduce the problems caused by latency (bottom).

Doctors could gain see-through vision, being able to directly see internal organs through the skin of a patient.

In the USAR scene, an operator could be able to see through all the debris, and always get quick and intuitive access to the location of the robot and of the items the robot finds. Seeing a victim through the wall would be much more intuitive than seeing a blip on a 2-dimensional map on a computer screen.

Predictive displays

Predictive displays seek to limit the problems caused by long delays by simulating the result of a command before it is even sent from the operator to the robot (see Figure 3.9). This means that the operator will be able to experience the operation as being in real time, rather than having to mentally calculate the effects of the delay. If the operator is satisfied with the predicted result, he or she can choose to send the actual command to the robot. A lot of progress has been made in this area recently[23].
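
A minimal sketch of the prediction step follows: the command is integrated over a simple motion model so that the predicted pose can be drawn immediately, before anything is transmitted. The unicycle model and its parameters are assumptions for illustration, not the method used in any cited system.

```python
import math

# Sketch of a predictive display: apply the operator's command to a local model
# of the robot and draw the predicted pose; transmit only once the preview is accepted.

def predict_pose(x: float, y: float, heading: float,
                 speed: float, turn_rate: float, duration: float):
    """Integrate a constant-speed, constant-turn command over `duration` seconds."""
    steps = 100
    dt = duration / steps
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += turn_rate * dt
    return x, y, heading

preview = predict_pose(0.0, 0.0, 0.0, speed=0.4, turn_rate=0.2, duration=5.0)
print(preview)  # drawn on the display before any command is sent to the robot
```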

3.4.3 Sensor fusion

To be able to comprehend several information flows at the same time, and to do this in a way that is not unnecessarily cognitively demanding, the information needs to be preprocessed.

Important parts need to be highlighted, and unimportant things, such as noise and irrelevant information, need to be filtered out. The crucial information from all the different information flows should then be merged in a way that gives the operator a clear overview in a very short time. “In other words, we need to design the human-machine interface so that it maximizes information transfer while minimizing cognitive load.” A way to accomplish this is by using sensor fusion[22].

“Sensor fusion is a broad term used for any process that combines information from multiple sensors into a single percept.”


Situation                   2D images   Stereo vision             Sonar
Smooth surfaces             OK          OK                        Fails (specular reflection)
Rough surfaces              OK          Fails (no correlation)    OK
Close obstacles (<0.6 m)    OK          Fails (high disparity)    OK
Far obstacles (>10 m)       OK          Fails (poor resolution)   Fails (echo not received)
No external light           Fails       Fails                     OK

Table 3.1: Sensor failure situations of a common sensor suite[22].

What this means is that sensor fusion tries to make the operator's life easier by reducing the number of sensors the operator has to keep track of at the same time[23]. This can be done in a wide variety of ways; more information is given in Section 4.2.5.

Another side of sensor fusion is that it tries to make the available sensory information more reliable by automatically combining sensors in order to get a more comprehensive world view. In theory, the system should be able to dynamically select what sensors and which fusion methods should be used depending on what task is being performed.

Sensor fusion deals with three methods of sensory combination[23]:

– Competing sensors

– Complementary sensors

– Coordinated sensors

Competing, or redundant, sensors are sensors that try to do the same thing at the same time. An example of this is having both a laser range finder and a stereo camera trying to extract a range image simultaneously, and then letting the system only use the data from the sensor that reports the closest ranges. The reason why they are said to be competing is that both try to post the “winning” percept. The point is to add robustness to the sensory system, so that if one sensor fails, the other might still perform well.

Complementary sensors try to provide different, disjoint, parts of a percept. An example related to urban search and rescue is to have a thermal camera searching for body heat and a motion sensing camera looking for movement at the same time, for the overarching purpose of finding human survivors.

Coordinated sensors try to cooperate in a sequence. If a predator sees a motion, it might stop and examine the scene more closely with its other sensors, such as smell or hearing.
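
As a small illustration of the competing (redundant) case, the sketch below keeps the closest of the reported ranges and tolerates a sensor that returns nothing. The sensor names and readings are invented for the example.

```python
# Sketch of competing (redundant) fusion: two sensors estimate the same range
# and the closest reading wins, so a single failing sensor is less likely to
# hide a nearby obstacle.

def fuse_competing(readings):
    """Return the winning (closest) range; ignore sensors that returned nothing."""
    valid = [r for r in readings.values() if r is not None]
    return min(valid) if valid else None

print(fuse_competing({"laser": 2.4, "stereo camera": 2.9}))   # 2.4
print(fuse_competing({"laser": None, "stereo camera": 2.9}))  # 2.9 (laser failed)
```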

The following suite of sensors will serve as an example of a common setup in a teleoperated urban search and rescue robot:

– One or more 2D-cameras.

– A 3D-camera.

– A sonar.

In order to get the best results from this particular suite, different sensors should be used in different situations (see Table 3.1). The idea is that the operator only gets information from the sensors that can handle the particular situation well. This is an example of complementary sensors.
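
The selection idea can be sketched as a simple lookup built from Table 3.1: for each situation, only the sensors marked “OK” are shown to the operator. The lookup code itself is illustrative; only the table contents come from the source.

```python
# Situation-dependent sensor selection derived from Table 3.1.
SENSOR_OK = {
    "smooth surfaces":   {"2D images", "stereo vision"},
    "rough surfaces":    {"2D images", "sonar"},
    "close obstacles":   {"2D images", "sonar"},   # closer than 0.6 m
    "far obstacles":     {"2D images"},             # farther than 10 m
    "no external light": {"sonar"},
}

def usable_sensors(situation: str) -> set:
    """Sensors whose data should be shown to the operator in the given situation."""
    return SENSOR_OK.get(situation, set())

print(usable_sensors("no external light"))  # {'sonar'}
print(usable_sensors("far obstacles"))      # {'2D images'}
```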


Sensor fusion is not only devoted to making information easily available to operators. It can also be an important step towards giving better information to an automated part of the robot. To be able to apply intelligent algorithms in an efficient way, the data that is going to be processed usually needs to be refined first.

“Although many problems are common to both (sensor selection, registration, data representation, fusion levels), sensor fusion for teleoperation differs from classic sensor fusion because it has to consider human needs and capabilities”[22].

In other words, an approach that is specialized in making data appealing to a human operator is not always the most desirable when looking for good input to an automated system.

3.4.4 Visual feedback

Many USAR researchers, and teleoperation researchers in general, have arrived at the conclusion that visual feedback of the robot's surroundings is extremely important. When the robot is in complete darkness, or in very smoky conditions, the visual feedback, such as a camera image, that the robot sends to the operator is basically useless. A system that takes the information from the robot's sensors and simulates its nearby environment, in a way that the human can intuitively understand, would be very useful. The robot can take information from sensors that do not depend on vision, and transform this information into a 3D-simulation. Conclusions about the importance of a 3D-interface are found in numerous articles, one example being “Virtual Camera Perspectives within a 3-D Interface for Robotics Search and Rescue” by David Bruemmer et al.[7].

Some researchers argue that it is important to have some sort of feedback about the robot's internal status, meaning temperatures, battery levels, noise, whether it is stuck, etc. For example, the article “Remote Driving With a Multisensor User Interface” by Gregoire Terrien et al.[30] describes part of their system's functionality in the following way: “...it also continually monitors robot health and generates appropriate status messages to be displayed to the user.” The reason why this is important is that it gives the operator vital information about the robot that would otherwise be impossible for the operator to realize.

Gaining situational awareness quickly is a very important feature of USAR interfaces. Many UI implementations try to accommodate this by having a command for hiding all the clutter in the interface, leaving simply a clear-cut view of the robot's visual output. The point of this is to quickly reduce sensory overload during complex situations[19].

3.4.5 Interactive maps and virtual obstacles

If an operator judges that a region is particularly dangerous, and the robot's sensors and AI-modules have not yet arrived at this conclusion, then the operator wants to share this information with the software. The article “Remote Driving With a Multisensor User Interface” by Gregoire Terrien et al.[30] describes their solution to this as: “If he decides that a particular region is dangerous, he can draw an artificial barrier on the map and the robot's obstacle avoidance will keep the robot from entering the region”. This is an example of a human cooperating with a robot in order to achieve greater results, which is the whole point of semi-autonomous teleoperation.
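
A minimal sketch of the virtual-obstacle idea on an occupancy grid follows: cells covered by the operator-drawn barrier are marked as occupied, so existing obstacle avoidance treats them like a real wall. The grid representation and coordinates are assumptions for illustration, not the cited system.

```python
# Sketch of "virtual obstacles": operator-drawn barriers are written straight
# into the occupancy grid used for path planning and obstacle avoidance.

def make_grid(width: int, height: int):
    return [[0] * width for _ in range(height)]  # 0 = free, 1 = occupied

def add_virtual_barrier(grid, cells):
    """Mark the operator-drawn cells as occupied so the planner avoids them."""
    for x, y in cells:
        grid[y][x] = 1

grid = make_grid(10, 10)
add_virtual_barrier(grid, [(4, y) for y in range(2, 8)])  # a vertical barrier at x = 4
print(sum(map(sum, grid)))  # 6 cells now blocked for path planning
```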


Figure 3.10: The CASTER user interface. In addition to the main camera, three other cameras are used. At the top of the interface there is a rear-view camera. In the lower corners of the screen, auxiliary cameras show images of the robot's tracks. Various feedback data is superimposed on the main camera's image[19].

3.4.6 USAR User Interface Examples

This section will analyze the design of two different user interfaces that have been tried in USAR applications. This is done in order to provide a more concrete overview of current user interface design ideas. The two user interfaces that will be analyzed are “CASTER: A Robot for Urban Search and Rescue”[19] and the “Idaho National Laboratory Robot Intelligence Kernel”[32].

CASTER

The CASTER robot was designed to compete in the international USAR event Robocup Rescue in 2005, held in Osaka, Japan. The Robocup takes place in three different approximations of “real world” disaster scenarios. These scenarios contain things such as loose material, debris, low clearances, variable lighting and multiple levels. The different robot teams' performances in these three areas were rated on mobility, sensing, autonomy, mapping and human-robot interface developments. The CASTER team came in third in the competition[19].

The user interface for the CASTER robot is displayed in Figure 3.10. Its most prominent feature is that most of the space is taken up by the video feed from its main pan-tilt camera unit. The pan-tilt camera can be operated independently of the robot itself. In order not to confuse the user by this, a translucent 3D-arrow located in the middle of the screen provides feedback on the angle of the camera relative to the robot. The 3D-arrow was found to be intuitive to users.

In addition to the main camera, three other cameras are also used. At the top of the interface, a rear-view camera is placed. In the lower corners of the screen, auxiliary cameras show images of the robot's tracks.


Information from a thermal camera is superimposed on the main camera's view. The thermal data is represented as red areas in the image, with the purpose of easily finding live humans in an area. In order not to clutter the image too much, a filter that only allows areas of 30 to 35 degrees Celsius is applied. The range of this filter was arrived at experimentally.
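
A small sketch of the thresholding step is shown below: only pixels in the 30 to 35 degree Celsius band are selected for the red overlay. The data layout and function name are illustrative; this is not the CASTER implementation.

```python
# Sketch of the thermal overlay: keep only temperatures in the experimentally
# chosen 30-35 degree Celsius band so live humans stand out without clutter.

def thermal_mask(temperatures, low=30.0, high=35.0):
    """Return a boolean mask selecting pixels to tint red in the main view."""
    return [[low <= t <= high for t in row] for row in temperatures]

frame = [
    [21.5, 22.0, 33.2],
    [22.1, 34.8, 36.0],
]
print(thermal_mask(frame))  # [[False, False, True], [False, True, False]]
```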

Also superimposed over the main video feed are the various sensor gauges. The most distinct of these is the artificial horizon in the upper left corner of the screen. The artificial horizon provides the user with information regarding the robot's current tilt, and is very similar to an attitude indicator in a flight simulator. This is useful in complex situations where it is hard for the user to orient themselves by just looking at the video feed. In the top right corner, data about the robot's internal status is displayed. This data includes things such as speed, remaining battery level and connection signal strength.

The interface is very sparse compared to most others. This was done on purpose in order to minimize the amount of “context switching” required. Context switching is defined as a situation where the user must switch from one mode of working to another, and thereby introduce a delay while they reacclimatize themselves to the new context. The user interface also lacks a map display for this same reason, but a 3D-map can be generated on demand.

The robot's locomotion is controlled by a keyboard; specifically, the keys W, A, S and D are used to steer forward, left, backwards and right, respectively. The pan-tilt camera is controlled by a computer mouse; it is panned by holding down the left mouse button and dragging.

In conclusion, the CASTER user interface was designed with four principles in mind[19]:

– “Design the robot and the camera placement in order to enhance situational awareness.”

– “Try to present the situation in a manner that reduced cognitive load. For instance, where possible information is displayed graphically rather than textually or numerically.”

– “Reduce context switching [. . . ].”

– “Use metaphors and familiar controls from existing computer applications and the real world in order to maximize positive transfer.”

Idaho National Laboratory Robot Intelligence Kernel

The Idaho National Laboratory (INL) has developed a control architecture designed to accommodate a number of different robot geometries and sensor suites. “The INL control architecture is the product of an iterative development cycle where behaviors have been evaluated in the hands of users, modified, and tested again.” This architecture is called the Robot Intelligence Kernel, and it is used by a variety of HRI research teams throughout the community[32].

The INL user interface is displayed in Figure 3.11. Its most distinctive aspect is the mixture between real-world images and a virtual perspective. The virtual world is built with the help of sensor data from the robot. An actual real-world image is then projected onto the virtual world, in order to give the user a more intuitive awareness of the situation.


Figure 3.11: The Idaho National Laboratory user interface. Its most distinctive feature is the mixture between real world images and a virtual perspective[32].

The blue markings in the image represents obstacles for the robot, while the greenones represent user defined waypoints. Besides waypoints, the user can mark any pointof interest in the map, such as a potential victim or the robot’s starting point. Thered triangle in the image means that that particular direction is blocked, and should beavoided. The robot itself is displayed in proportion to its real world size, and it has anarrow placed on top of it pointing in the robot’s direction.

A small map is placed on top of the virtual image. The map is a two-dimensionalrepresentation of the robot’s world view. The point of this is to give the user a betteroverview of the current situation at a glance. The interface also contains a numberof buttons that provide a variety of different functionality such as: zoom and pan,autonomy degrees, waypoint control, and other aspects of the virtual perspective.

One of the biggest advantages of having a virtual view is that it allows a third-person perspective. A survey conducted to evaluate the performance of the first-person view compared to the third-person view of the INL interface concluded that: “The results presented here indicate that the 1st person perspective within the 3-D display, which uses a similar perspective as the presentation of video within a traditional interface, is inferior to the exocentric perspectives that show the robot and how it fits into the world”[7].

Another survey measured the performance of actual USAR workers when using the INL interface compared to an interface constructed by the University of Massachusetts Lowell (UML). The interface made by UML was of a more traditional kind, with a first person camera view and an overlaid 2-D map. The test was conducted in the test arena at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. The USAR workers were asked to guide a robot through an area while searching for fake victims at the same time. All eight workers that participated in the test used both the INL and UML interfaces to guide the robot. The result was divided into three categories: area coverage, number of bumps, and victims found.


There was a negligible difference between the two interfaces when it came to victims found and number of bumps, but INL was found to be superior when it came to area coverage, winning by close to a 50% margin in average area covered compared to its competitor, even though the users rated the INL interface as slightly harder to use. Several users credited the 3D-mapping capabilities of INL as a major reason why it performed so well in the “area coverage” category[32].


Chapter 4

Implementation

This chapter describes the software and hardware implementation of this project. The goal was to program a teleoperated search and rescue robot equipped with a user interface and a number of autonomous and semi-autonomous functions to help with the operation of the robot.

Figure 4.1: Block scheme of the main components in the system. Every oval represents a component of the system, and the lines show how they relate to each other.

A general overview of the system’s architecture is shown in Figure 4.1. Every oval represents a module of the system, and the lines show how they relate to each other.

The solution made use of the following hardware parts: an Amigobot robot equipped with a sonar (the Motor and Sonar modules, see Section 4.1.1), an SR3000 3D-camera (the SR3000 module, see Section 4.1.2) and a laptop computer.

The graphical user interface (the Gui module, see Section 4.2.1) provides the operator with sensory feedback from the robot. An image analysis process is performed in order to detect obstacles and point them out to the operator in the sensor view (the ObstacleDetector module, see Section 4.2.3). The user interface contains a map view, where the operator can get a larger overview of the robot’s surroundings (the Map module). The map displays the information the system receives from the sonar and the 3D-camera either as separate maps (see Section 4.2.4), or fused together (see Section 4.2.5). The map construction is represented by the MapBuilder module.

The software provides several semi-autonomous control modes, such as traded and shared control (see Section 4.2.8). The software can also navigate autonomously with the help of path planning (the PathPlanner module, see Section 4.2.6).


Figure 4.2: The main hardware components used for this project: the Amigobot robot with the SR3000 3D-camera attached.

The robot has two fully autonomous behaviours: “explore” (the Explorer module, see Section 4.2.7) and “search” (the Searcher module, see Section 4.2.7). The different control modes of the robot can be initiated through the map view, the sensor view, or via the keyboard (the Controller module, see Section 4.2.8).

4.1 Hardware set-up

While the scope of this project mainly covered the construction of a software system, various hardware was needed to accomplish it. This section provides details of the robot used and of the main sensor, the “SR3000” 3D-camera. See Figure 4.2.

The software of the system ran on a 2 GHz, dual-core laptop computer running Windows Vista. The laptop had wireless networking capabilities in the form of WLAN. The laptop computer was used both for programming the software and for running the resulting program. A laptop was chosen over a stationary computer because of the extra mobility provided.

4.1.1 Amigobot and the Amigobot Sonar

This robot from “ActivMedia Robotics” is equipped with two wheels for locomotion and a third wheel for support and balance. The robot is also equipped with a wireless Ethernet connection and an on-board battery, which enables completely tetherless operation.

This project’s setup also included the SR3000 3D-camera, which required both a data tether and a wired power supply. Because of this, the tetherless advantages of the Amigobot could not be fully utilized.

The Amigobot is equipped with a very precise odometer (a device that measures how much the wheels of the robot have turned, in order to determine how far it has moved in a given direction), which enables fairly accurate map building and localization without having to synchronize with artificial or natural landmarks.


Another very important part of the Amigobot is the ring of eight sonars that surrounds it. While sonars do not provide as accurate information about the robot’s surroundings as the SR3000 camera, they are quick and fairly reliable. Their 30 degree coverage angle enables them to cover almost every direction at the same time (see Figure 4.12).

The Amigobot’s distributor, “ActivMedia Robotics”, supplied a Java control interface. This was greatly appreciated since Java was the programming language chosen for the project.

Several disadvantages of the Amigobot showed up during the course of the project.

First of all, the Java interface had some major flaws, which were troublesome to solve. All classes were located in the default package, which made it impossible to use the interface with programming IDEs such as Eclipse. This meant that the JAR file containing the code had to be repackaged and restructured, and all the DLL files used to control the robot had to be recompiled, which required a lot of time and effort.

Another big issue was that every now and then, the robot’s WLAN connection to the application would drop. This happened at very irregular intervals, ranging from once every minute to once every hour. Many different ways to limit or work around this problem were tried and implemented. The final solution included a work-around that automatically reconnects to the robot as soon as a disconnect is detected. The most dramatic decrease in disconnects was achieved by cutting the number of commands sent to the robot in half. The final solution is somewhat resistant to the problem, but the system is still a bit sub-optimal because of it. To the present day, the source of this problem is not known.

Other small problems also occurred, such as the Amigobot being unable to drive over even a small obstacle, such as a cable lying in its path. Despite these disadvantages, the Amigobot was suitable for a project of this magnitude. The reasons for this were the low cost and the excellent odometer, which rarely got out of sync. The robot was large enough to carry the extra sensors that were included, yet small enough to be practical to handle. The wireless capabilities of the Amigobot were of great use during the construction phase of the project, at least when the SR3000 camera was not attached, since this allowed completely wireless operation.

4.1.2 Swissranger SR3000 camera

This 50 x 67 x 42.3 mm aluminum coated 3D-camera from “MESA Imaging AG”, seen in Figure 4.3, operates with a 12 volt power supply and has a typical power consumption of 12 watts. It utilizes a USB 2.0 interface for communication.

The camera has a resolution of 176x144 pixels, which is a quarter Common Intermediate Format (QCIF), and it is able to provide Cartesian coordinate information (x, y and z), as well as an intensity value, for each pixel.

The camera works at a maximum of 25 frames per second, and has a field of view of 47.5 x 39.6 degrees. It is designed for indoor use, but can withstand temperatures ranging from -10 °C to +50 °C. It would need to be protected by a casing when used in a real USAR environment.

Much like a sonar, or a bat, the camera works on a TOF (time-of-flight) principle. However, where a sonar utilizes sound waves and estimates the TOF directly, this camera sends out modulated infrared light and derives distance from phase information.


Figure 4.3: The SR3000 Swissranger 3D-camera[1].

Figure 4.4: The incoming light signal, sampled four times. The figure was taken from the SR3000 manual[1].

The camera emits a light beam with a wavelength of 850 nm, modulated at a frequency of 20 MHz (as standard), which yields a 7.5 meter non-ambiguous range. This emitted light is reflected by objects in the scene and travels back to the camera. On arrival, a calculation of the precise phase shift is used to determine the distance for each pixel in the image sensor. The phase shift calculation is done by sampling the reflected signal four times, as seen in Figure 4.4.

The phase shift is calculated by the equation

ϕ = arctan( (c(τ3) − c(τ1)) / (c(τ0) − c(τ2)) )        (4.1)

where c(τ0), c(τ1), c(τ2) and c(τ3) are the four samples of the reflected signal.

The range readings typically have an error of about 1%; however, this varies greatly since objects can differ considerably in reflectivity.
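As a short consistency check (this derivation is not quoted from the manual, but follows from the time-of-flight principle), the measured phase shift maps to distance as d = (c / (4π f)) · ϕ, where c is the speed of light and f = 20 MHz is the modulation frequency. A full phase shift of 2π then corresponds to c / (2f) = 3·10^8 / (4·10^7) = 7.5 m, which is exactly the non-ambiguous range stated above; any object further away wraps around and appears closer than it is.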

The drivers for the camera are written in C++, and they work on Linux, Windows and MacOS. MESA Imaging also provides a Matlab user interface.

Users who wish to include the SR3000 camera in Java applications have to resort to tricks involving the Java Native Interface (JNI) to get Java to communicate with the drivers.
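As a rough illustration of such a JNI bridge (the class, method and library names below are hypothetical and do not correspond to the actual SR3000 driver API), a Java-side wrapper could look like this:

public class Sr3000Jni {
    static {
        // Loads a hypothetical native wrapper library that in turn calls the C++ driver.
        System.loadLibrary("sr3000wrapper");
    }

    // Hypothetical native methods; each must be implemented in C/C++ against the
    // vendor driver and exported with the matching JNI signature.
    public native boolean open();
    public native void setIntegrationTime(int value);
    public native float[] acquireDistanceImage();  // 176 x 144 values, row-major
    public native int[] acquireIntensityImage();   // same layout as the distance image
    public native void close();
}

The native wrapper is compiled into a DLL, which is loaded at class initialization; every call then crosses the JNI boundary into the C++ driver.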


Figure 4.5: Example of an application utilizing the SR3000 camera, taken from the SR3000 manual[1].

One of the things a user can modify in the SR3000 camera is the “integration time”. This is the exposure time, i.e. the amount of time the camera uses to receive information for each picture. The integration time can be varied from 200 µs to 51.2 ms in steps of 200 µs.

In practice, this means that one can improve the stability of the acquired images, at the cost of frame rate, by increasing the integration time.

Two main problems are associated with the 3D-camera: “light scattering” (see Figure 4.6) and “multiple reflections” (see Figure 4.7)[1].

Light scattering is a phenomenon occurring in the camera’s internal optical path. In essence, nearby objects, or objects that reflect an extraordinary amount of the transmitted light, shine so brightly back onto the sensor that not all of the light can be absorbed by the imager. This in turn results in light reflecting onto the lens, and back to the imager again.

Multiple reflections is a phenomenon mainly occurring when images are acquired in corners. Since the light transmitted from the camera can take many different paths back to the camera, the camera may receive a reflection from a point near the point ’A’ that has travelled two totally different paths (one reflected back directly, and one that bounced off the other wall first), leading to ambiguity about the actual distance to that point.

Another issue with the 3D-camera that affected the project is mentioned in the manual: “The SR-3000 sensor is controlled as a so-called 1-tap sensor. This means that in order to obtain distance information, four consecutive exposures have to be performed. Fast moving targets in the scene may therefore cause errors in the distance calculations”[1]. Due to this problem, map updating with the help of the 3D-camera only occurs when the robot is standing still. This is done in order to prevent artifacts from cluttering the map. This behaviour can be changed by the user through the drop-down menu in the map tab of the settings menu (see Section 4.2.1).

Despite these disadvantages, the SR3000 proved to be a solid choice for a project with a moderate budget.


Figure 4.6: Schematic drawing illustrating the problem with light scattering artifacts. This occurs when the camera observes nearby objects, or objects that reflect an extraordinary amount of the transmitted light, which shine so brightly back onto the sensor that not all of the light can be absorbed by the imager. This in turn results in light being reflected onto the lens, and back to the imager again.

Figure 4.7: Illustration of a problematic scenario for the SR3000 camera featuring multiple reflections.


The SR3000 camera is a relatively cheap device compared to similar devices providing range data (other, more exclusive, 3D-cameras or lasers). The information that can be gathered with it has a quality far superior to that of a sonar. Furthermore, sonars and lasers tend to supply two-dimensional data, while the SR3000 provides three-dimensional data.

4.2 Software

The programming language used for the project was Java 1.6. Java is a platform-neutral language, but due to the drivers used for the hardware, the program only runs on a Windows platform. This section covers the different software components of the system; the algorithms that have been used are described and explained.

4.2.1 System overview

This section provides a brief description of the functionality of the system.

The user interface of the system is shown in Figure 4.8. The interface is divided into several parts: two 3D-camera displays (parts A and D), a map display (part B), controls (part C) and an output console (part E). Although a settings menu is available in the program, it is not displayed in the figure. When a button marked “Settings” is pressed (located at the top of parts A, D and B of Figure 4.8), the menu is displayed.

The program runs on a laptop that is connected to the robot via wireless LAN. The IP address of the robot is entered when the program starts. If the connection is lost between the robot and the laptop, the system will automatically try to re-establish it.

3D-camera displays

The 3D-camera used for the project is the SR3000 by Mesa Imaging (see Section 4.1.2 for details). There are two different displays (parts A and D in Figure 4.8) that show the output of the 3D-camera. The reason for having two displays is that there are three different display modes available, and the user may dynamically select which two modes to display. The display mode is switched with a mouse click on the appropriate radio button located above the displays.

The three different available display modes for the 3D-camera are:

– Depth

– Intensity

– Obstacles

The “depth” mode represents the depth data readings from the 3D-camera as a color image. Every distance between 0 and 7.5 meters is represented as a different hue in the hue-saturation-value (HSV) spectrum. The color conversion starts with the color red at 0 meters, and then traverses the entire spectrum until it becomes red again a bit after 7.5 meters (the maximum expected range), thus guaranteeing that the same hue is not used twice for different distances. The purpose of this mode is to provide a sense of depth that is not present in a regular camera image.
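A minimal Java sketch of such a depth-to-color mapping is shown below. The exact constants used in the application are not stated in the text, so the scale factor is an assumption chosen so that red only reappears slightly beyond 7.5 m.

import java.awt.Color;

public class DepthColoring {
    // Maximum non-ambiguous range of the SR3000, in meters.
    private static final float MAX_RANGE_M = 7.5f;

    // Maps a distance reading to an RGB color by sweeping the HSV hue from red
    // (0 m) around the color circle; red reappears just beyond the maximum range.
    public static int distanceToRgb(float distanceMeters) {
        float hue = 0.95f * (distanceMeters / MAX_RANGE_M); // assumed scale factor
        return Color.HSBtoRGB(hue, 1.0f, 1.0f);             // full saturation and value
    }
}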

The “intensity” mode represents the intensity data readings from the 3D-camera as a gray-scale image. The intensity of an object indicates how well it reflects the IR-light transmitted by the 3D-camera.


Figure 4.8: The graphical user interface of the system. Parts A and D are the 3D-camera displays, part B is the map display, part C is the command bar and part E is the output console.


The purpose of this mode is to provide a display that is easy to relate to, since it mostly looks like an ordinary video-camera feed.

The “obstacles” mode attempts to mix the “intensity” mode with the “depth” mode in an intelligent way, in order to let the user detect obstacles more easily. The base of the image consists of intensity data, but every object that the robot classifies as an obstacle is displayed as depth data. The result is that the colored obstacles stand out from the gray-scale background, which should make obstacle detection cognitively undemanding for the operator.

For more details on how the robot determines what an obstacle is, see Section 4.2.3.

The “Settings” button will take the user to the “sensor settings” menu, which has options related to the 3D-camera. Right clicking anywhere inside a 3D-camera display will bring up a pop-up menu. The pop-up menu provides options such as “go here” (which tells the robot to navigate to that point) and “Show depth & intensity” (which prints the raw depth and intensity values for the pixel that was clicked on).

The Map display

The map display section of the user interface (part B in Figure 4.8) shows the robot’s internal world representation to the user, and it can be used for navigation. There are three different map modes available: sonar, SR3000 (3D-camera) and a sensor fusion mode denoted “fused”. In the fused mode, a simple sensor fusion process that combines both the 3D-camera and the sonar is used. See Section 4.2.5 for details.

The gray area in the map means “unknown” territory, white means “obstacles”, and black means “open space”. The red circle represents the robot’s position, and the arrow that goes through the circle shows in what direction the robot is pointing. Locations are shown as green colored circles, and waypoints are shown as teal colored circles. The current goal of the robot is shown as a green “X”, and the robot’s current planned path (see Section 4.2.6) is shown as a blue line.

A grid is overlaid on the map in order to give a better perspective of the relation between points in the map. Each grid square represents 1x1 meter in the real world.

Right clicking in the map brings up a pop-up menu, with selections such as “go here” (see Section 4.2.8), “add location” (see Section 4.2.8) and “add waypoint” (see Section 4.2.8). The “+” and “-” buttons at the top of the map allow the user to zoom in and out, respectively. Left-clicking in the map and holding down the button allows the user to drag the entire contents of the map around. The “reset” button removes all data from the map, and resets the robot’s position to its starting value.

For information on how the map is constructed, see Section 4.2.4.

Controls

The controls section of the user interface (part C in Figure 4.8) shows the top level commands available to the user. The “Begin exploring” and “Stop robot” commands activate the robot immediately when pressed, while the “add map waypoint” and “go to map location” commands need interaction with the map before they activate (first selecting “go to map location”, and then clicking at the desired goal point in the map, will make the robot go to that point). A button can be activated either by clicking it, or by pressing the shortcut key on the keyboard that is listed within brackets on the button.

For more detailed information regarding the controls of the system, see Section 4.2.8.


Output console

The output console (part E in Figure 4.8) is a one-way communication medium from the robot to the user. Here the robot prints messages about its activity. Status messages such as “Communication down”, “No path to goal found”, and “Arrived at location” are displayed here. All messages are timestamped in order to show exactly when they were issued. The purpose of the output console is to provide some insight into the robot’s “thought process”.

Settings

A settings menu is available by selecting the drop-down menu denoted “options” at the top of the screen, or by pressing “Ctrl+p” on the keyboard. Additionally, “settings” buttons are located above the 3D-camera displays and the map display. These buttons take the user to the corresponding section of the options menu.

The different menu tabs available in the settings menu are:

– Sensor settings

– Map settings

– Output settings

– Robot settings

The sensor settings include 3D-camera and sonar settings (see Figure 4.9 (top)). The parameters that can be adjusted include: the integration time (3D-camera exposure time, valid values range from 0 to 255), the 3D-camera mounting values (height above the ground in millimeters and tilt angle in degrees), the update rate of the 3D-camera image (frames per second), the Y-level cutoff range in millimeters (obstacle detection in the 3D-camera image is active within this range), the normalization threshold of the 3D-camera (in order for a pixel to be regarded as an obstacle, it has to have an intensity value that is this many percentage points larger than the mean intensity value of the image), the intensity cutoff value (the minimum intensity value required for a pixel to be considered an obstacle) and the sonar coverage angle (in degrees).

The map settings include various map related options (see Figure 4.9 (bottom)). The map display options include a toggle between an analog and a binary display. The analog display shows the map in gray-scale (the gray level of a pixel depends on how certain it is that an obstacle exists there, with black meaning no obstacle, white meaning a certain obstacle, and gray levels in between representing intermediate certainty). The binary map display thresholds the pixel values to become either black or white. The map update frequency (updates per second) and the updating mode (the map can be updated when the robot is moving, when it is standing still, or at all times) can be changed for both the sonar and the 3D-camera. The 3D-camera’s update factors can also be changed; see Section 4.2.4 for details.

The output tab contains a setting for the amount of text that should be printed to the output console. The robot tab contains an IP address option.

Settings are applied when the user presses the “OK” or “apply” buttons, while the “cancel” button discards all changes made.

All settings are saved to a file on the hard drive when the program is exited normally.


Figure 4.9: The sensor tab (top) and the map tab (bottom) in the settings menu. The sensor tab contains settings for the 3D-camera (mounting parameters, obstacle detection parameters, etc.) and the sonar (the coverage angle). The map tab contains settings for the robot’s map (update frequencies, display modes, etc.).


Software architecture

The program is written in the programming language Java, and requires Java version 1.6 to run. Since the 3D-camera and the Amigobot both provide Windows-specific drivers, the program will only run on a Windows platform.

The system architecture, as implemented in Java, is shown in Figure 4.10. The program is written with parallel computing in mind. Each sensor updates an image of its view of the world in a separate thread, at a set interval that depends on the sensor type. In addition, the graphical user interface and the robot each have their own threads that update their respective status.
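A minimal sketch of this per-sensor update pattern is given below; the interface and class names are illustrative, not taken from the thesis code.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SensorUpdateLoop {
    // Anything that can refresh its internal image of the world (sonar, SR3000, ...).
    public interface Sensor {
        void updateWorldImage();
    }

    // Starts a dedicated thread that updates the given sensor at a fixed interval.
    public static ScheduledExecutorService start(final Sensor sensor, long periodMs) {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(new Runnable() {
            public void run() {
                sensor.updateWorldImage();
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
        return executor;
    }
}

Each sensor gets its own executor, so a slow 3D-camera read does not block the sonar thread or the user interface.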

4.2.2 Coordinate transformation

The coordinate transformation process is applied to the positional data from the 3D-camera (see Section 4.1.2), in order to achieve consistency between the different coordinate systems existing within the system (see Figure 4.11). For technical reasons the camera faces slightly down towards the ground. The reason for this is a limitation in the hardware of the camera: if the 3D-camera gets reflections from a distance greater than 7.5 meters, it is unable to conclude whether the reflection came from an obstacle 7.5+X meters away, or simply X meters away. See Section 4.1.2 for details.

By directing the camera downwards, no reflection can come from a distance greater than 7.5 meters.

The camera is mounted slightly above the center of the robot, and it is also rotated a small angle around the x-axis compared to the robot’s x-y-z coordinate system. This angle is the “pitch angle” (or θ), see Figure 4.11.

As a consequence of this, the coordinates of each pixel read by the camera are in a different coordinate system than the robot itself. To convert the coordinates given by the camera to the robot’s coordinate system, a transformation is used. The method utilizes simple mathematical principles: the position vector of a point (x, y, z) in the 3D-camera coordinate system is transformed to a position vector (x′, y′, z′) in the robot coordinate system by multiplication with a rotation matrix using the angular offset between the camera’s and the robot’s coordinate systems. The rotation matrix used in the transformation is

R′θ = [  cos(θ)   sin(θ) ]
      [ −sin(θ)   cos(θ) ]        (4.2)

where θ is the pitch angle[4].

The transformation itself is

x′ ← x
y′ ← z sin(θ) + y cos(θ) + translation
z′ ← z cos(θ) − y sin(θ)

where p = (x, y, z) is a position vector in the 3D-camera’s coordinate system, translation is the height difference between the two coordinate systems, and p′ = (x′, y′, z′) is a position vector in the robot’s coordinate system.

The result of this is a system where the advantage of having the 3D-camera tilted downwards (that it does not have to deal with out-of-range data) is retained, while the downside (that it delivers skewed sensor data) is removed.
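A minimal Java sketch of this transformation, with the pitch angle and mounting height as parameters (the class and method names are illustrative, not taken from the thesis code):

public final class CameraToRobotTransform {
    private final double pitchRad;  // camera tilt around the x-axis, in radians
    private final double heightMm;  // camera mounting height above the robot origin

    public CameraToRobotTransform(double pitchRad, double heightMm) {
        this.pitchRad = pitchRad;
        this.heightMm = heightMm;
    }

    // Transforms a point (x, y, z) in camera coordinates to robot coordinates,
    // following the rotation in equation (4.2) plus the height translation.
    public double[] toRobot(double x, double y, double z) {
        double sin = Math.sin(pitchRad);
        double cos = Math.cos(pitchRad);
        double xr = x;                             // the x-axis is shared
        double yr = z * sin + y * cos + heightMm;  // rotate, then translate upwards
        double zr = z * cos - y * sin;             // rotated depth
        return new double[] { xr, yr, zr };
    }
}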


Figure 4.10: The architecture of the software as implemented in Java.


Figure 4.11: The SR3000 mounted on the Amigobot. There are two different coordinate systems, which is the reason that a transformation is used: the coordinates in the 3D-camera’s coordinate system (y, z) are transformed to the robot’s coordinate system (y′, z′).

4.2.3 Obstacle detection

Obstacle segmentation is a way to process an image in order to find objects in it. The purpose of obstacle segmentation in this project is to let the robot and the operator know which paths are blocked when navigating. The obstacle segmentation is used to determine which pixels should be treated as objects in the 3D-camera’s “obstacles” display mode (parts A and D of Figure 4.8) and in the map’s SR3000 and “fused” modes (see Section 4.2.4 and Section 4.2.5). The following section describes how the obstacle segmentation works.

For each pixel in the 3D-camera image, information is received about that pixel’s x, y and z offset from the center of the camera itself. Since the camera is mounted a few decimeters above the ground, and also mounted with an angular offset that makes the camera look down at the ground, these coordinates are transformed to a robot-centered coordinate system. For more details about the transformation, see Section 4.2.2. When the transformed coordinates are obtained, the algorithm compares the pixel against four user defined criteria:

(1) A pixel aspiring to be an obstacle has to be located above ground level (having a transformed y-coordinate larger than 0). (2) It also has to be lower than a certain maximum height (predefined to 4.5 dm). The reason for the minimum height requirement is that reported heights below zero are probably erroneous. The reason for the maximum height requirement is that the robot will be able to drive underneath such obstacles.

(3) The intensity of the point is also examined: the obstacle candidate must exceed a certain threshold value. Also, (4) the intensity has to exceed a reference value for the current picture. This reference value is the mean of all pixels in the upper 2/3 of the picture; the pixels in the lower 1/3 are likely to distort the result. The reason for examining the intensity value of a pixel is that a reading with too low an intensity value is unreliable, due to how the 3D-camera works (see Section 4.1.2 for details about the mechanics of the 3D-camera). If this precaution is not taken, the floor will be considered an obstacle. An unfortunate side-effect of this is that obstacles occupying the entire screen might be classified as open area in some cases.


Figure 4.12: The angular sensor coverage of the robot. The dark cones are covered by ultrasonic sonars, and the light cone is covered by the 3D-camera. White space denotes dead, uncovered, angles.

If all four criteria mentioned above are fulfilled, the pixel is considered to be an obstacle. This information is used both for the “obstacles” display mode (see Section 4.2.1) and for mapping (see Section 4.2.4).
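A sketch of the four criteria in Java is shown below. The fixed maximum height follows the text; the way the two intensity thresholds are combined is an assumption, since the exact formula is not given.

public class ObstaclePixelTest {
    private static final double MAX_OBSTACLE_HEIGHT_MM = 450.0; // 4.5 dm

    private final double intensityCutoff;        // criterion (3), user defined
    private final double normalizationThreshold; // criterion (4), in percentage points

    public ObstaclePixelTest(double intensityCutoff, double normalizationThreshold) {
        this.intensityCutoff = intensityCutoff;
        this.normalizationThreshold = normalizationThreshold;
    }

    // yRobot: transformed pixel height in robot coordinates (mm);
    // meanIntensity: mean intensity of the upper 2/3 of the current image.
    public boolean isObstacle(double yRobot, double intensity, double meanIntensity) {
        boolean aboveGround  = yRobot > 0.0;                      // (1)
        boolean belowMax     = yRobot < MAX_OBSTACLE_HEIGHT_MM;   // (2)
        boolean brightEnough = intensity > intensityCutoff;       // (3)
        boolean aboveMean    = intensity >
                meanIntensity * (1.0 + normalizationThreshold / 100.0); // (4)
        return aboveGround && belowMax && brightEnough && aboveMean;
    }
}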

4.2.4 Map Building

Map building is the process of constructing maps containing topological information about the robot’s surroundings, for the use of the robot and the operator. Maps are used in several parts of the system: the path planning uses maps to determine where the robot is and how it can best get to another point (see Section 4.2.6), the exploration behaviour (see Section 4.2.7) uses the map to determine where there are unexplored areas, and the user interface displays the maps to the operator in order to provide him or her with improved situational awareness (see part B of Figure 4.8).

There are three different maps available in the system: one based on readings from the robot’s ring of sonars, one based on readings from the 3D-camera, and one, called “fused”, based on a mix of data from both sonars and the 3D-camera, combined in a process called “sensor fusion” (see Section 3.4.3 for details). The map data is stored in memory as a data matrix called an “occupancy grid”[23].

The robot’s position in the map is determined with the help of odometry (see Section 4.1.1), using the robot’s starting position as the reference point.

Sonar

When working with a sonar, there are two values available: the sonar’s position and the distance to a possible obstacle. To get useful information from this raw data, a sensor model is used.

As seen in Figure 4.13 there are four areas in the model. The cone represents a sonar reading, and the dark areas represent the parts of the reading that can provide the occupancy grid with new information.


Figure 4.13: The basis of the sensor model used for updating the map with the help of a sonar. The cone represents a sonar reading, and the dark areas represent the parts of the reading that can provide the occupancy grid with new information. Nothing can be determined about areas C and D.


Nothing can be determined about areas C and D with the available data. In the case of D, a value read from the sonar could not possibly originate from that area, due to the limited coverage angle of the sonar. Nothing can be said about C because that area is located beyond the potential obstacle.

It would be a mistake to conclude that area B is the only relevant area. Since a reading has occurred at the particular distance that B represents, the possibility of an obstacle located closer than that is lessened; if such an object existed, the distance reading would probably have shown that object instead. Therefore, some conclusions can also be drawn about area A:

– In the cells that are represented by area B, the likelihood of an object residing there is increased.

– In the cells that are represented by area A, the likelihood of an object residing there is decreased.

The sonar’s occupancy grid keeps two distinct values in its cells: the probability of the cell being empty, and the probability of the cell being occupied. In theory, a cell’s values are between 0 and 1, but in practice they are capped between 0.05 and 0.95 in order to prevent the system from “over learning” (becoming so certain about the state of a cell that it would be hard to change later on). The initial value of all cells is 0.5, meaning that nothing is known about them. However, in a general application this value can be chosen differently if there is information about the environment available from pre-loaded maps (for example, in an environment likely to be empty, the probability of each cell being occupied can be set much lower than 0.5). The change in the probability of a specific cell in areas A and B depends on two things: firstly, how close to the maximum range of the sonar the cell is; secondly, how close to the maximum covered angle of the sonar the cell is located. We also introduce a “goodness” factor, which is a heuristic measure of how reliable the reading is.

This method of updating the cells in the occupancy grid is called Bayesian updating[23].

First, the goodness is calculated[23, p. 384]:

goodness = (sonar range − relative cell distance) / (2 ∗ sonar range)
         + (covered angle − relative cell angle) / (2 ∗ covered angle)        (4.3)

Then the values of the cells are updated with the following equations (PO = probability of occupied, PE = probability of empty):

New PO = (goodness ∗ Old PO) / ((goodness ∗ Old PO) + ((1 − goodness) ∗ Old PE))        (4.4)

New PE = ((1 − goodness) ∗ Old PE) / ((goodness ∗ Old PO) + ((1 − goodness) ∗ Old PE))        (4.5)


This results in two different matrices, one containing the probabilities of cells being empty, and one containing the probabilities of cells being occupied. A cell’s place in the matrix depends on its position in the robot’s world. The matrices are the basis for the sonar map display (see Section 4.2.1).
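A compact Java sketch of the update given by equations (4.3)–(4.5), including the 0.05–0.95 capping mentioned above (the method names and the way the goodness arguments are passed are illustrative):

public class SonarCellUpdate {
    private static final double MIN_P = 0.05;
    private static final double MAX_P = 0.95;

    // Heuristic reliability of a reading for a given cell (equation 4.3).
    static double goodness(double sonarRange, double cellDistance,
                           double coveredAngle, double cellAngle) {
        return (sonarRange - cellDistance) / (2.0 * sonarRange)
             + (coveredAngle - cellAngle) / (2.0 * coveredAngle);
    }

    // Returns {new PO, new PE} given the old probabilities (equations 4.4 and 4.5).
    static double[] update(double goodness, double oldOccupied, double oldEmpty) {
        double denom = goodness * oldOccupied + (1.0 - goodness) * oldEmpty;
        double newOccupied = clamp(goodness * oldOccupied / denom);
        double newEmpty    = clamp((1.0 - goodness) * oldEmpty / denom);
        return new double[] { newOccupied, newEmpty };
    }

    private static double clamp(double p) {
        return Math.max(MIN_P, Math.min(MAX_P, p));
    }
}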

3D-camera

The occupancy grid updating of the 3D-camera is similar to that of the sonar, but the 3D-camera only keeps track of the probability that a cell is occupied; it does not keep a separate value for the probability that a cell is empty like the sonar does. The reason for this is that the 3D-camera uses a simpler sensor model that does not make use of an empty-probability matrix.

If we look at one pixel from the 3D-camera and compare it to the A, B, C, D picture (Figure 4.13), it works in a similar way. In contrast to the updating from the sonar, we do not use a cone; one pixel of the 3D-camera picture is more like a straight line from the camera to the reading. Also in contrast to the sonar, the area between the camera and the reading is ignored (the A-area in the sonar cone); in the sonar’s case, the probability of cells in this area holding an obstacle was lowered.

A reading from the 3D-camera provides information about the distance to the pixel. Through a coordinate transformation (see Section 4.2.2 for more information) and some trigonometry, each pixel is first classified as either an obstacle or not (as explained in Section 4.2.3). If the pixel is determined not to be an obstacle, the representation of this physical point in the occupancy grid has its obstacle probability lowered by a user defined factor (-20% per default). Readings that are considered obstacles have the obstacle probability of their points in the occupancy grid increased by a factor with a default value of +50%. The reason the decrease happens more slowly than the increase is that it is more important to find new obstacles quickly than to remove old ones.

The 3D-camera map updating procedure, in pseudo code:

for all pixels:
    if pixel = obstacle then
        map pixel value ← 1.5 ∗ map pixel value
    else
        map pixel value ← 0.8 ∗ map pixel value
    end if

This results in a matrix of map pixel values, where the value of a cell represents the likelihood of it being an obstacle.

4.2.5 Sensor fusion

To yield an occupancy grid of higher quality than either the sonar or the SR3000 occupancy grid alone, the application utilizes information from both sensors, merging them together with a naive sensor fusion method. The sensor fusion method values all new data from the 3D-camera higher than data from the sonar, new or old.

This approach yields a surprisingly good result, possibly because the 3D-camera is so much better than the sonar that the sonar’s values can be safely ignored. The process is shown below in pseudo-code.



for all pixels:
    if sr3000 pixel = undefined then
        fused pixel ← sonar pixel
    else
        fused pixel ← sr3000 pixel
    end if

4.2.6 Path planning

Path planning answers the robot’s question “What is the best way to get to the point that I am trying to navigate to?”. The path planning component is used in the system whenever autonomous navigation is required. This includes all forms of autonomous and semi-autonomous control (see Section 4.2.8) and the exploration behaviour (see Section 4.2.7). The path planning solution bases its decisions on the robot’s maps (see Section 4.2.4).

Path planning is a well known problem area within the robotics field, with research spanning more than 30 years. Therefore, a wide variety of techniques are available to choose from when trying to solve this problem[23, p. 319].

This project utilizes a path planning method called the “wave-front” algorithm. The main reason this type of path planning was chosen is that it is relatively easy to implement and well suited to grid based representations[23, p. 365]. Both a “force-field” method and an A* graph search method were considered as alternatives[23] before the final decision was made.

A wave-front algorithm can be compared to a raging bush-fire, or to paint spreading after being poured onto a surface. The idea is to start at a point, then iterate around it, coloring/burning all cells adjacent to that point. In the next step, the color or the fire spreads further from the source. If there is a wall or an obstacle in the way, the fire/paint will work around it. Ultimately the fire/paint reaches the desired end point. When this happens, the algorithm backtracks to the starting point again, thus creating a path. The principle is shown in Figure 4.14. The starting point is seen in the top left corner of (a). The first wave of the wave-front is shown as a light-colored area around the starting point in (b). The next wave is shown in (c), with the old wave now shown in a darker color, and so on. More waves are added until the goal point is found. This is called wave-front propagation.

The wave-front path-planning method can be divided into two distinct parts. In part one, we traverse from a goal coordinate through the grid based world towards a start coordinate. In part two, we traverse from the start coordinate toward the goal coordinate, along the most cost-effective way.

The two flowcharts in Figure 4.15 describe in detail how the algorithm was implemented. The left flowchart describes the wave-front matrix generation, which is the process of creating the matrix that describes the wave-front propagation between the starting point and the desired goal point. The right flowchart describes how the wave-front propagation matrix is used to find the best path between the points.

One of the reasons that the wave-front method was chosen was that it can take the cost of traveling through different types of terrain into consideration. This means that it takes into account that it is better for the robot to drive across an office floor than through sand, for example, even if the sand path is shorter.


Figure 4.14: Wave-front propagation in a map grid[23]. The starting point is seen in the top left corner of (a). The first wave of the wave-front is shown as a light-colored area around the starting point in (b). The next wave is shown in (c), with the old wave now shown in a darker color, and so on. More waves are added until the goal point is found.


The wave-front algorithm handles this in the following way: obstacles are represented with a conductivity of 0 (nothing can traverse through that point), while open areas with good surfaces are assigned a very high conductivity. Undesirable terrains, which are traversable but not desirable, are assigned a low conductivity. This often lets the wave-front algorithm find the best path, even when it goes through undesirable terrain[23].
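A minimal sketch of the two parts of the algorithm on a uniform grid is given below: breadth-first propagation outwards from the goal, followed by backtracking from the start along decreasing wave numbers. The conductivity weighting described above is omitted for brevity, and all names are illustrative.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class WaveFrontPlanner {
    // blocked[r][c] == true means the cell is an obstacle (conductivity 0).
    public static List<int[]> plan(boolean[][] blocked, int[] start, int[] goal) {
        int rows = blocked.length, cols = blocked[0].length;
        int[][] wave = new int[rows][cols];               // 0 = not yet reached
        int[][] steps = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };

        // Part one: propagate the wave-front outwards from the goal.
        ArrayDeque<int[]> queue = new ArrayDeque<int[]>();
        wave[goal[0]][goal[1]] = 2;
        queue.add(goal);
        while (!queue.isEmpty()) {
            int[] cell = queue.poll();
            for (int[] s : steps) {
                int r = cell[0] + s[0], c = cell[1] + s[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && !blocked[r][c] && wave[r][c] == 0) {
                    wave[r][c] = wave[cell[0]][cell[1]] + 1;
                    queue.add(new int[] { r, c });
                }
            }
        }
        if (wave[start[0]][start[1]] == 0) return null;   // no path exists

        // Part two: walk from the start towards strictly smaller wave numbers.
        List<int[]> path = new ArrayList<int[]>();
        int[] cur = start;
        path.add(cur);
        while (wave[cur[0]][cur[1]] != 2) {
            for (int[] s : steps) {
                int r = cur[0] + s[0], c = cur[1] + s[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && wave[r][c] != 0 && wave[r][c] < wave[cur[0]][cur[1]]) {
                    cur = new int[] { r, c };
                    break;
                }
            }
            path.add(cur);
        }
        return path;
    }
}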

The idea is to treat unexplored areas as traversable, but at a higher cost. This results in favoring already known working paths, instead of trying to find alternative, but possibly shorter, paths. It would also make it possible for the robot to find these more optimal solutions in some scenarios.

In addition to the plans to further improve the wave-front method, we had an idea to involve several different types of path planning. However, due to time constraints, this idea was discarded.

4.2.7 Autonomous behaviours

Autonomous behaviours serve the purpose of offloading work from the operator by letting the robot perform some tasks on its own. To achieve this, the robot makes use of several components in the system: obstacle detection and maps in order to find new areas and to detect obstacles (see Section 4.2.3 and Section 4.2.4, respectively), and path planning for navigation (see Section 4.2.6).

The robot has two different autonomous behaviours: search, which is always active, and exploration, which can be activated by the user at any time. The search behaviour aims at finding humans in the environment of the robot. For the sake of simplicity, humans and small IR-reflexes have been declared equivalent in this project. The purpose of the exploration behaviour is for the robot to cover as much area as possible, without requiring operator supervision. While exploring, the robot constructs a map of the area that it covers. It then marks where potential humans and obstacles are in that map, so that the operator gets a good overview of the area (see Section 4.2.4 for details regarding map building).

Search behaviour

The search behaviour is active at all times. The purpose of the search behaviour is, in theory, to examine the robot’s sensory data with the hope of finding humans. This allows the robot to notify the operator of any possible victims in the USAR disaster area it is currently working in, victims that might otherwise be missed by the operator alone.

Since the budget for this project did not allow for any sensors that could realistically find humans with any certainty (such as a thermal camera, a carbon dioxide sensor, etc.), a compromise was made. Instead of looking for actual humans, the 3D-camera’s capability to sense IR-light was used to look for small wooden cylinders with IR-reflective tape wrapped around them, such as the ones seen in Figure 4.16. So instead of looking for living, breathing beings, the search behaviour looks for materials that reflect IR-light well.

The search behaviour was implemented as follows: every 3 seconds, the 3D-camera changes its exposure time to the shortest possible (200 µs) for about one second. This is done because the IR-reflective tape would otherwise reflect so much light that the camera would be blinded by it. After the exposure time is set, the robot searches through the image looking for very high intensity values. If a high enough value is found, the robot decides that it has found a human.


Figure 4.15: The left flowchart describes the wave-front matrix generation, which is the process of creating the matrix that describes the wave-front propagation between the starting point and the desired goal point. The right flowchart describes how the wave-front propagation matrix is used to find the best path between the points.


Figure 4.16: The simplification of “human beings” used in the application: two cylindrical wooden blocks fitted with reflective tape, easily detected by the SR3000 camera.

Figure 4.17: A big red triangle moving in to encircle the location of a suspected “human being” in the sensor view section of the user interface. This is done to clearly indicate this (possibly) important discovery.


Figure 4.18: A big red triangle moving in to encircle the location of a suspected “human being” in the map section of the user interface. This is done to clearly indicate this (possibly) important discovery.

Figure 4.19: The two grids used for the exploration behaviour: the map grid (left) and the frontier grid (right). Gray areas represent obstacles and F’s represent frontiers.

The place where the “human” was found is then highlighted by a red animated triangle in both the 3D-camera image and the map in the user interface, in order to get the attention of the operator (see Figure 4.17). A location marking will then be added at the appropriate place in the map, named “humanN”, where N denotes the total number of humans that have been found during that session (see Figure 4.18).
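A minimal sketch of the detection step itself is given below; the threshold value is an assumption, and switching the integration time is assumed to be handled by the camera driver.

public class ReflexSearcher {
    // Intensity above which a pixel is treated as an IR-reflex hit (assumed value).
    private static final int REFLEX_THRESHOLD = 60000;

    // Scans one short-exposure intensity frame and returns the index of the brightest
    // pixel above the threshold, or -1 if no "human" was found in this frame.
    public static int findReflex(int[] intensityFrame) {
        int bestIndex = -1;
        int bestValue = REFLEX_THRESHOLD;
        for (int i = 0; i < intensityFrame.length; i++) {
            if (intensityFrame[i] > bestValue) {
                bestValue = intensityFrame[i];
                bestIndex = i;
            }
        }
        return bestIndex;
    }
}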

Exploration behaviour

The exploration behaviour is the robot’s only truly autonomous functionality. When activated, the robot’s goal is to cover as much space as possible. The purpose is to search a disaster area as quickly as possible, even if the operator is busy with other things. If the exploration behaviour acts alone, the result is a map of the topological features of the area the robot drives through (see Section 4.2.4 and part B of Figure 4.8). If the exploration behaviour is combined with the search behaviour, the robot executes the “search” part of “urban search and rescue”, meaning that it automatically tries to examine an area while looking for humans.


Figure 4.20: Target selection for the exploration behaviour. Blank cells are unoccupied, gray cells are occupied and numbered cells are frontiers[21].

When exploring, the robot uses its map to establish which areas it has already visited, and which areas it has not yet been to. The robot’s map consists of a grid of cells that represents the robot’s world view. The cells are considered to be either unknown, occupied or unoccupied. In order to know which locations the robot should drive towards when it wants to explore new areas, a concept called frontiers is used. A frontier cell is an unoccupied cell that has at least one unoccupied and one unknown cell among its neighbors. A frontier cell will be regarded as a regular unoccupied cell if these conditions no longer apply. A cell labeled frontier is guaranteed to be relevant to the exploration behaviour, since it will always be located on the border of unknown territory, which is just the type of area the robot wants to go to. The system maintains a grid of frontiers separately from the regular map grid at all times[21] (see Figure 4.19).

When choosing which cell to move towards next, the robot checks its grid of frontiers. A grid search is then performed, originating from the robot’s position in the grid, in a clockwise expanding spiral starting at 9 o’clock, until a frontier cell is found (see Figure 4.20). The robot then tries to build a path to the first frontier cell found, using its path planning abilities (see Section 4.2.6). If that succeeds, the robot starts moving towards the selected frontier cell. If a path cannot be constructed, a path is built to the next frontier cell it finds, and so on.

The robot uses a “greedy” target selection algorithm, meaning that it moves towards the most conveniently located frontier cell at all times. When moving towards a frontier, the search procedure described above is repeated every few seconds. If a closer frontier is discovered, the robot immediately abandons the current target and heads towards the new one. If no frontier is reachable, or if the frontier grid is empty, the robot considers the exploration to be completed. When this happens, the robot wanders around completely randomly in its nearby area, in the hope of finding a new frontier, or of finding a new way to a previously blocked frontier. See Figure 4.21 for a flow chart of the explore behaviour.
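The frontier test itself follows directly from the definition above; a Java sketch is shown below (the CellState type and the use of eight-connected neighbours are assumptions):

public class FrontierTest {
    public enum CellState { UNKNOWN, OCCUPIED, UNOCCUPIED }

    // A frontier cell is an unoccupied cell with at least one unoccupied and
    // at least one unknown cell among its neighbours.
    public static boolean isFrontier(CellState[][] grid, int row, int col) {
        if (grid[row][col] != CellState.UNOCCUPIED) {
            return false;
        }
        boolean hasUnknown = false;
        boolean hasUnoccupied = false;
        for (int dr = -1; dr <= 1; dr++) {
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0) continue;
                int r = row + dr, c = col + dc;
                if (r < 0 || r >= grid.length || c < 0 || c >= grid[0].length) continue;
                if (grid[r][c] == CellState.UNKNOWN) hasUnknown = true;
                if (grid[r][c] == CellState.UNOCCUPIED) hasUnoccupied = true;
            }
        }
        return hasUnknown && hasUnoccupied;
    }
}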


Figure 4.21: Exploration behaviour flowchart, showing the process of the autonomous exploration behaviour of the robot.


4.2.8 Control

The purpose of the system’s control components is to allow a human to operate the robot in various ways. Different control modes provide different strategies for solving a problem.

The robot is controlled by the operator via the user interface running on the laptop computer. There are several modes of control available, ranging from manual control to varying degrees of semi-autonomy.

The different control modes use different components of the system. The manual control mode only updates the map; it does not use the data stored there (see Section 4.2.4). The semi-autonomous modes both update and read the map, while also using the path planning component for navigation (see Section 4.2.6).

Manual control

The simplest form of control is manual, or direct, control. The robot’s motors are controlled via the keyboard on the laptop, using either the arrow keys or the ’W’, ’A’, ’S’ and ’D’ keys.

This mode overrides all others, and is designed to give the operator full control, while leaving the robot with no say at all. This mode should be used in complex situations where the robot cannot be trusted to handle itself. While the robot is not allowed to do any steering, all its passive functionality is still active, such as sensor input, the search behaviour and the mapping functionality.
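A minimal sketch of the key mapping for manual control is given below. The RobotDriver interface and the velocity constants are illustrative; they do not correspond to the ActivMedia Java API.

import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;

public class ManualControl extends KeyAdapter {
    public interface RobotDriver {
        void drive(double translationMmPerS, double rotationDegPerS);
    }

    private final RobotDriver driver;

    public ManualControl(RobotDriver driver) {
        this.driver = driver;
    }

    @Override
    public void keyPressed(KeyEvent e) {
        switch (e.getKeyCode()) {
            case KeyEvent.VK_W: case KeyEvent.VK_UP:    driver.drive(200, 0);  break; // forward
            case KeyEvent.VK_S: case KeyEvent.VK_DOWN:  driver.drive(-200, 0); break; // backwards
            case KeyEvent.VK_A: case KeyEvent.VK_LEFT:  driver.drive(0, 30);   break; // turn left
            case KeyEvent.VK_D: case KeyEvent.VK_RIGHT: driver.drive(0, -30);  break; // turn right
        }
    }

    @Override
    public void keyReleased(KeyEvent e) {
        driver.drive(0, 0); // stop as soon as the key is released
    }
}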

Traded control

Traded control is a semi-autonomous form of control (see Section 3.2.2 for details). The concept is that the operator gives the robot a sub-task, and the robot then performs that task on its own. In this system, traded control is implemented as a “point-and-go” command.

The user can click with the mouse pointer either in the map, or anywhere in the 3D-camera image, and then order the robot to move there. If a point in the 3D-camera image is selected, a transformation between the 3D-camera coordinate system and the map’s coordinate system is performed. The robot then plans a path towards that point, and tries to navigate there to the best of its ability. See Section 4.2.6 for more details on this process. A blue line shows the path the robot plans to take, which allows the user to catch and correct any bad decisions made by the robot before they can cause any damage. If the robot cannot find a way to get to the designated point, due to obstacles obstructing the way, it gives control back to the operator and displays an error message in the user interface.

Locations

The location control system is another case of traded control.

Locations are points of interest in the robot’s map that are marked by either the robot or the operator. For example, a user can add named locations such as “starting point”, “doorway” or “possible victim”. This is done by right clicking at the point in the map, selecting “Add location” from the menu, and then entering a name. The robot’s search behaviour automatically adds a location to the map every time it thinks it has found a “human being”. See Section 4.2.7 for more details on this.


Figure 4.22: Navigation with the help of named locations. In this picture, a blue dotted path from the robot to the location “Treasure” can be seen. This is one of the results of activating the command “Go to location: Treasure”; the other result is the initiation of the robot’s journey towards this location.

The robot can, at any time, be told to move to a previously added location. All the user has to do is to bring up the list of locations by right-clicking the map, and then select one of them, as seen in Figure 4.22. The robot will then move towards that location in the same way as it navigates using the "Traded control" scheme (described in Section 4.2.8). Removing locations is done in the same way. Locations are displayed at the point where they were added to the map until removed.
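
Conceptually, a location is nothing more than a named map coordinate. The following minimal sketch illustrates the bookkeeping described above; the class and method names are assumptions made for the example, and navigating to a location simply reuses the same path planner as the point-and-go command.

import java.awt.Point;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the location bookkeeping described above; names are assumptions.
public class LocationRegistry {

    private final Map<String, Point> locations = new LinkedHashMap<>();

    /** Added by the operator via the map's right-click menu, or by the search behaviour. */
    public void addLocation(String name, Point mapCell) {
        locations.put(name, mapCell);
    }

    public void removeLocation(String name) {
        locations.remove(name);
    }

    /** Returns the goal point for "Go to location: <name>", or null if the name is unknown. */
    public Point lookup(String name) {
        return locations.get(name);
    }

    /** Used to populate the right-click menu with all known locations. */
    public Iterable<String> names() {
        return locations.keySet();
    }
}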

Waypoints

Waypoints are basically a more advanced version of the location concept. A waypoint is a location without a name, with the difference that every added waypoint is put into a "navigation list". When told to, the robot then moves to all the points in the navigation list sequentially, starting with the first added waypoint. A blue line connecting all the waypoints, and showing the robot's planned path between them, is displayed in the map when a command is given to follow the waypoint path (see Figure 4.23).

This allows the operator to establish a more elaborate and fine-grained path for the robot to follow, instead of just directing it towards a single target point. The advantage of this is that the user can avoid mistakes that the robot's path planning algorithm might produce due to faulty or incomplete sensor data. For example, if the robot thinks the best path between two points is a straight line, but the operator realizes that that would take the robot through a dangerous area, the user can add waypoints that go around the area entirely, thus avoiding catastrophe.

Another feature of waypoints is that they are movable during the robot's navigation phase. Moving a waypoint is done by clicking and dragging it to its new placement. This allows the user to easily correct any mistakes in the path that the robot plans to take. Additional waypoints can also be added to the end of the navigation list at any time, even while the robot is moving. This allows the operator and the robot to cooperate with regards to navigation, with the robot planning its own path between the waypoints, while still allowing the operator the ability to correct and extend the path at any time.

Waypoints are added in the same way as locations, except that the robot itself cannot add waypoints, only the user can. Waypoints are removed as the robot passes them by.
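
The listing below sketches the waypoint handling described above: waypoints are visited in insertion order, the current waypoint can be moved while driving, new waypoints can be appended at any time, and each waypoint is dropped once reached. The Navigator interface is an assumption made for the example.

import java.awt.Point;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the waypoint "navigation list" behaviour; names are assumptions.
public class WaypointList {

    public interface Navigator {
        /** Plans a path to the goal and drives it; returns true when the goal is reached. */
        boolean driveTo(Point goal);
    }

    private final Deque<Point> waypoints = new ArrayDeque<>();

    public void add(Point wp)            { waypoints.addLast(wp); }   // can be appended at any time
    public void moveCurrent(Point newWp) {                            // drag-and-drop correction
        if (!waypoints.isEmpty()) { waypoints.pollFirst(); waypoints.addFirst(newWp); }
    }

    /** "Follow waypoint path": visit every waypoint in insertion order. */
    public void follow(Navigator navigator) {
        while (!waypoints.isEmpty()) {
            Point next = waypoints.peekFirst();
            if (navigator.driveTo(next)) {
                waypoints.pollFirst();        // waypoint removed as the robot passes it
            } else {
                break;                        // hand control back if navigation fails
            }
        }
    }
}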


Figure 4.23: Navigation with the help of waypoints: As seen in both pictures, the three waypoints wp0, wp1 and wp2 are already added. The left picture shows the menu with various options. After the activation of the command "Follow waypoint path" the view changes into the one visible to the right and the robot starts to follow the dotted blue line, moving to all waypoints in order.


Chapter 5

Results

In this chapter, the results of the project are presented. Various case-studies were conducted with the purpose of testing the different aspects of the project. All test-cases were done on the bottom floor of the MIT building at Umeå University. All tests were run several times over the course of a couple of weeks. Although the outcome differed slightly between tests, the results presented here are representative of the average outcome in each case.

The test site, along with the setup with camera, robot and laptop, can be seen in Figure 5.1. A list of the things that were tested:

– Obstacle detection

– Path planning

– “Human” detection

– Exploration behaviour

– Manual Control

– Mapping

A schematic of the testing area and the complete setup for the two exploration test-cases and the manual test is shown in Figure 5.2.

5.1 Obstacle Detection

The purpose of this test is to determine the capabilities of the 3D-camera's obstacle detection. Different objects were examined with the goal of finding both a scenario that works well, and one that does not.

The 3D-camera sensor view has three different settings: "Intensity", "Depth" and "Obstacles". See Section 4.2.1 for more details. The intensity mode can be seen in the lower part of Figure 5.3, the depth mode can be seen in the lower part of Figure 5.4, and the obstacles mode is seen in the top part of both pictures. The two images show what the 3D-camera considers to be obstacles under good circumstances (Figure 5.3) and what it thinks are obstacles when faced with sub-optimal conditions (Figure 5.4). In these pictures, the bright areas are obstacles and the dark areas are non-obstacles.


Figure 5.1: An overview of the setup and the environment used for testing the system.

Figure 5.2: A map of the testing area with all important features marked, such as the starting position of the robot, the obstacles and the "humans" to be found.


Figure 5.3: A case where the 3D-camera obstacle detection provides good results. Both the pillar and the Amigobot twin are detected without problems.

Figure 5.4: A case where the 3D-camera obstacle detection provides bad results. The segmented shape of the chair poses problems; it is only partially detected.


Figure 5.5: A visualization of the robot's chosen path (the dotted line) after a goto command has been processed.

As seen in Figure 5.3, the camera has no problems with well-defined objects. Both the Amigobot's twin and the pillar are detected perfectly.

As seen in Figure 5.4, the segmented shape of the chair poses problems. Not all parts of the chair are detected. However, in most cases the few pieces of obstacles that the camera detects are enough for the robot to successfully avoid that area. Thus, the biggest disadvantage is that the operator will get a bad image in the obstacles mode display.

Overall, the 3D-camera obstacle detection works better in uncomplicated and almost empty areas. This makes it more suited for environments like warehouses, rather than a busy office landscape.

5.2 Path planning

This test-case is constructed to test and show the path-planning abilities of the robot. The same path planning module is used in the waypoint system, the location system and the autonomous exploration behaviour. Therefore it is important that it works well.

The robot was placed in front of its twin (acting as a human). Instructions were given to the robot to move to the area behind the "obstacle" (marked by an "x"), which, if everything worked well, would force it to plan a path around the obstacle.

The result can be seen in Figure 5.5. A dotted line in the map indicates the chosen path; in reality, the robot follows this path surprisingly well.
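
The path planner itself is not listed in this chapter, but Chapter 6 mentions that a wave-front method is used. The listing below is an illustrative sketch of such a planner, not the actual implementation: a breadth-first wave is propagated from the goal through free grid cells, and the path is extracted by repeatedly stepping to a neighbour with a lower wave value. The grid conventions (0 = free, 1 = obstacle) are assumptions made for the example.

import java.awt.Point;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative wave-front planner sketch (not the thesis implementation).
public final class WaveFrontPlanner {

    private WaveFrontPlanner() {}

    public static List<Point> plan(int[][] grid, Point start, Point goal) {
        int rows = grid.length, cols = grid[0].length;
        int[][] wave = new int[rows][cols];                    // 0 = not reached yet
        wave[goal.y][goal.x] = 2;                              // the wave starts at the goal
        Deque<Point> queue = new ArrayDeque<>();
        queue.add(goal);
        int[][] n = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {                             // expand the wave front (BFS)
            Point p = queue.poll();
            for (int[] d : n) {
                int x = p.x + d[0], y = p.y + d[1];
                if (x >= 0 && x < cols && y >= 0 && y < rows
                        && grid[y][x] == 0 && wave[y][x] == 0) {
                    wave[y][x] = wave[p.y][p.x] + 1;
                    queue.add(new Point(x, y));
                }
            }
        }
        if (wave[start.y][start.x] == 0) return null;          // goal unreachable
        List<Point> path = new ArrayList<>();                  // descend the wave from the start
        Point cur = start;
        path.add(cur);
        while (!cur.equals(goal)) {
            Point next = null;
            for (int[] d : n) {
                int x = cur.x + d[0], y = cur.y + d[1];
                if (x >= 0 && x < cols && y >= 0 && y < rows
                        && wave[y][x] > 0 && wave[y][x] < wave[cur.y][cur.x]) {
                    next = new Point(x, y);                    // any strictly lower neighbour works
                }
            }
            cur = next;
            path.add(cur);
        }
        return path;
    }
}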

5.3 Human Detection

The robot has a built-in function that automatically checks for "humans" in its view every three seconds. Figure 5.6 shows the setup for this test-case. In this simple case the robot's capabilities of finding "human beings" are put to the test.


Figure 5.6: A third person perspective of an encounter between the robot and a "human being".

The robot is placed with a "human" straight ahead. Figure 5.7 shows the resulting "map-pinging". Noteworthy is the label "human0" that was automatically placed in the map at the right hand side of the picture. This was the expected result; no problems were found.

5.4 Exploration behaviour

The purpose of this test was to see how well the robot performs its exploration behaviour. The test makes use of all parts mentioned so far: obstacle detection, path planning and human detection.

Since this part of the project was extensive, two tests of the same problem scenario were performed to clearly demonstrate the behaviour. Running double tests also allowed for an examination of the robustness of the system, meaning that it should be able to produce roughly the same results when running a similar test twice.

The expected behaviour was that the robot would explore its surroundings somewhat inefficiently, with a lot of twists and turns (since there was no implementation of an energy conservation algorithm). Furthermore, the robot was expected to construct a somewhat satisfying map during the exploration, with labels indicating where humans were found. The two "humans" were placed in such a way that the robot should have good chances of finding them both.

Exploration Test 1

The robot was placed in the scenario in Figure 5.2, and the explore behaviour was initiated. The robot was allowed to work entirely on its own for about two minutes, and then the results of the exploration were checked.

The robot's movement during the first test-case can be seen in Figure 5.8. The resulting map of test-case 1 can be seen in Figure 5.9. The map turned out quite good, with a lot of area covered. More details about the map's strengths and weaknesses will be discussed in the last test-case, called "mapping".

Unfortunately, the robot failed to detect any humans at all in the first test-case. This can partly be explained by the fact that the robot only scanned for humans once every eight seconds. This interval was lowered to three seconds for the second test, which then gave better results. Another reason that no human was found is that the 3D-camera needs to get within 1-2 meters of the human to spot it, and it never got close enough in test-case one.


Figure 5.7: The system provides visual feedback (a triangle shape zooms in on the detected "human") whenever the robot detects "humans".

This is of course connected to the detection principles for "humans" as discussed in Section 4.2.7.

Exploration Test 2

The second test was performed identically to test-case one, except that the "find human" search was conducted every three seconds instead of every eight seconds as in test one.

The robot's movement during the second test-case can be seen in Figure 5.10, and a third person view of the movement can be seen in Figure 5.12.

This time, one of the humans was found and noted by the robot. This is most likely caused by the increased search frequency, as all the test-cases that followed yielded similar results. The human furthest away from the robot's starting position was not found this time either, as the distance problem from test-case one still remained.

The resulting map of test-case two can be seen in Figure 5.11. It turned out a lot like the one in test-case one, although it shows some improvements, like the better defined walls in the southern region.

5.5 Manual control

A test of the manual control was conducted in order to test the map building capabilities of the system, as well as testing pure teleoperation with the robot placed outside the viewing range of the operator. The robot was placed in the same setting as in the exploration test-cases. The operator was not allowed to observe the robot directly in this test-case, but only through the user interface. A weakness of this test is that while the operator was not observing the robot directly, he did have preexisting knowledge of the surrounding area.


Figure 5.8: The test-case environment with the chosen path of the robot's exploration behaviour from test-case number one. None of the "humans" were found.

Figure 5.9: The resulting sensor-fused map of test-case number one. No human label is included, since no human object was found.


Figure 5.10: The test-case environment with the chosen path of the robot's exploration behaviour in test-case number two. One "human" was found; an "x" and an arrow indicate where that "human" was found.

Figure 5.11: The resulting sensor-fused map of test-case number two. One human label, "human0", can be seen in this picture. The other "human" was not found.


Figure 5.12: A third person view of the test-arena for the exploration test-cases, showing the robot on a mission to find the two "humans" (encircled). The depicted path is the one the robot chose in test-case two.

The manual control, and the map that was created by this, produced satisfying results. A good map of the surroundings, including both victims, was built. No severe cognitive fatigue or simulator sickness occurred.

5.6 Mapping

This section will examine the resulting maps from the "manual control" test-case.

The following maps were generated from the manual control test: the sonar map is shown in Figure 5.14, the 3D-camera map is shown in Figure 5.13, and the sensor-fused map is shown in Figure 5.15.

The sonar map has lots of artifacts cluttering up the image, and a lot of the walls have disappeared. This is caused by specular reflections and other issues that happen when using sonar, and it was to be expected. These problems also result in obstacles (and empty areas) appearing behind real walls. This faulty data makes the image look noisy to an operator; however, since this information is behind real walls it does not pose any real problems.

In the 3D-camera map, holes can be seen at places the camera failed to look at, because of its limited angular coverage. Its low resolution causes the gray "smudges" that can be seen in the areas that were explored by the robot. But on the whole, the map generated by the 3D-camera was superior to the one made by the sonar. Its walls are well defined, and all obstacles that were spotted were really there.

The fused map, with its naive sensor fusion approach (it overrides sonar information with 3D-camera information whenever new 3D-camera data is available), improved the map more than expected. This should be considered to be the best map. It looks almost as cluttered as the sonar map, and the faulty data behind walls is carried over from the sonar map. However, for the path-planning algorithms this clutter is irrelevant; the important thing is completeness. In this map no holes are present, thus making this map the most complete and therefore the best. To ease common ground between the operator and the robot (see Section 3.3), it can be favorable for the operator to view this map instead of the 3D-camera map, since this is the map used by the robot, even though the 3D-camera map may look more well defined to an operator.
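
The fusion rule described above can be expressed in a few lines of code. The sketch below assumes occupancy grids stored as arrays of doubles, with 0.5 marking unexplored cells; these representation details are assumptions made for the example, not the actual data structures.

// Sketch of the naive sensor fusion rule: wherever the 3D-camera has delivered data
// for a cell, it overrides the sonar; otherwise the sonar value is kept.
public final class NaiveMapFusion {

    private static final double UNKNOWN = 0.5;   // assumed occupancy value for unexplored cells

    private NaiveMapFusion() {}

    public static double[][] fuse(double[][] sonarMap, double[][] cameraMap) {
        int rows = sonarMap.length, cols = sonarMap[0].length;
        double[][] fused = new double[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Camera data wins whenever the camera has actually observed the cell.
                fused[r][c] = (cameraMap[r][c] != UNKNOWN) ? cameraMap[r][c] : sonarMap[r][c];
            }
        }
        return fused;
    }
}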


Figure 5.13: The resulting map of the manually controlled test-case, using only the 3D-camera data for mapping. There are many unexplored areas left (gray areas), because of the low resolution and restrained coverage area of the 3D-camera.



Figure 5.14: The resulting map of the manually controlled test-case, using only the sonar data for mapping. There is a lot of clutter all over the picture, and some walls are missing. This is mostly due to specular reflections.

Figure 5.15: The resulting map of the manually controlled test-case. The map was constructed by fusing both the sonar and 3D-camera data. It contains fewer holes than the 3D-camera map, as well as less clutter and more consistent walls than the sonar map.


Chapter 6

Conclusions

This chapter discusses the results of the project and proposes some potential future work.

6.1 Discussion

Urban search and rescue robotics is an intriguing subject, but it is also a very challenging one. It is challenging because one does not only have to face the problems that plague robotics in general, such as the limited mobility and awareness of today's robots, but one also has to be able to overcome these problems during life and death situations in environments that are unknown to us beforehand. If a Mars rover fails its mission, it can try again tomorrow. If a USAR robot fails its mission, people can die because of it.

The results of this project have been ambivalent. While some things turned out better than expected, such as the use of the 3D-camera (it turned out to be the best source for map building, rather than the sonar), and some things turned out worse, such as the amount of work required to get the hardware to work in Java, most things turned out somewhere in the middle of those two extremes. The search and exploration behaviours work well half the time, and half the time they do not. The same thing is true about obstacle detection and path planning. It has been hard to get everything to work consistently.

The most successful result was the "obstacles" mode of the 3D-camera. It ended up providing a very intuitive view of what the robot considered obstacles. The colored obstacles come across very starkly in the black and white image, which made them very easy to spot for the operator. Additionally, the coloring of obstacles also provided data about the distance to the obstacles without using needlessly cognitively demanding methods (such as printing numbers on the screen).
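
The display can be thought of as a simple per-pixel overlay. The sketch below illustrates the idea: obstacle pixels are drawn in colour on top of the grayscale intensity image, with the colour scaled by distance so that near obstacles stand out. The colour scheme and the assumed maximum range are illustrative choices for the example, not the parameters of the actual interface.

import java.awt.image.BufferedImage;

// Illustrative sketch of an "obstacles" display overlay (assumed colour scheme and range).
public final class ObstacleOverlay {

    private static final double MAX_RANGE_M = 7.5;   // assumed maximum 3D-camera range

    private ObstacleOverlay() {}

    // intensity: grayscale values 0-255, obstacle: per-pixel obstacle flags, depthM: range in metres
    public static BufferedImage render(int[][] intensity, boolean[][] obstacle, double[][] depthM) {
        int h = intensity.length, w = intensity[0].length;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (obstacle[y][x]) {
                    // Near obstacles are drawn bright red, far ones darker red.
                    double closeness = 1.0 - Math.min(depthM[y][x] / MAX_RANGE_M, 1.0);
                    int red = (int) (80 + 175 * closeness);
                    img.setRGB(x, y, red << 16);
                } else {
                    int g = Math.min(Math.max(intensity[y][x], 0), 255);   // plain grayscale pixel
                    img.setRGB(x, y, (g << 16) | (g << 8) | g);
                }
            }
        }
        return img;
    }
}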

Most of the problems that emerged during the project originate from the fact that things work better in theory than in practice. The computer laboratory environment, which served as the site for most of the development, was not ideal for this project because it contained so many small obstacles. If a simpler location, such as a warehouse environment, had been used, a lot of the problems would have been avoided. Some of the other "real world" issues that affected the project were: the 3D-camera could not handle fast moving objects, the sonar has problems with specular reflections, the wireless communication with the robot would go down for no apparent reason, the wires of the 3D-camera limited the mobility of the robot, etc.


Some things were added to the project that were not thought of beforehand: wave-front path planning, frontier-based exploration, automatically saved options, the obstacle display mode in the user interface, and more extensive map and control functionality (such as zoom, waypoints, named locations, etc.). Most of these things are improvements compared to the original specification.

Many things that were planned to be included in the project initially were discarded as time passed. The reasons for this varied: some things were deemed too time consuming to realistically implement within the time frame of the project, some things were considered too hard to do, and some things were replaced by better solutions.

A web camera was originally included as a way for the operator to get a natural view of the robot's situation. It was even implemented and fully working at one time, but it was abandoned anyway for several reasons. First of all, it was not wireless, and therefore required the robot to carry around another cord, further limiting its mobility. Secondly, the 3D-camera provided a surprisingly natural image for the operator to relate to, which reduced the need for a web camera. And thirdly, to be used to its full potential, it would have to be mounted behind and above the robot (in order to provide a third person view), and that would require additional work and custom made equipment to get right.

The autonomous parts of the robot turned out to be the hardest to implement. Only two autonomous behaviours were implemented; the additional ones that were originally planned were abandoned along the way. Discarded concepts include a "find the door" command, a "leave the room" command, etc. Door-finding, and the like, would require additional work in the areas of both image analysis and artificial intelligence. Adjustable autonomy levels were also left out of the project due to time concerns.

Early on, there were discussions about the possibility of doing parts of the work in entirely simulated environments (such as Microsoft Robotics Studio). This would have been both a blessing and a curse. It would probably have reduced the problems that occurred when working in a real world environment. But that is the very same reason why it would have been a mistake to do so: the project would have been less attuned to real world problems, and therefore less relevant with regards to real world applications.

This was the type of project that one could spend endless amounts of time on. The solutions never become perfect, the parameters can always be tweaked a little more, a different approach to a problem can always be attempted, additional software functionality can be added, and maybe there is a new piece of hardware that could be tried. Therefore, it seems like a project like this is never really completed; it is merely abandoned. Some of the things that we would have liked to have tried, if the available time had allowed, are mentioned in Section 6.2.

6.2 Future work

Any future work would have to be focused on the robustness of the system, first and foremost. Unexpected or complex situations almost always cause the robot to make mistakes, mistakes that could easily compromise a real USAR operation. USAR professionals need to be able to rely on all equipment they use, so this poses a significant problem. This applies to both the software and the hardware parts of the project.

There are many ways to improve software reliability. One way is to perform map synchronization, which is a way to compensate for odometry drift. Odometry drift is when the robot loses track of its location in the world, and in its map. This inevitably happens over time due to accumulating errors. Map synchronization can be used to minimize this problem by realigning the robot's place in the map with the help of landmarks.


A landmark is a distinctive feature in the environment, which the robot then uses to reaffirm its position [23]. In essence, map synchronization would make the robot's maps more reliable to the user by reducing the problems caused by odometry drift. This would in turn also make the robot's navigation more reliable, since it would have better knowledge of its position in relation to obstacles.
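
As a rough illustration of the idea (this was never implemented in the project), the pose correction from a single re-observed landmark could look as follows; the heading error is ignored here for simplicity, and all names are assumptions made for the example.

// Sketch only: position-only realignment from one re-observed landmark.
public final class MapSynchronization {

    public static final class Pose {
        public final double x, y;             // position in map coordinates (metres)
        public Pose(double x, double y) { this.x = x; this.y = y; }
    }

    private MapSynchronization() {}

    /**
     * landmarkInMap:    where the landmark was stored in the map when it was first seen
     * observedRelative: where the robot currently measures the landmark, relative to itself
     */
    public static Pose realign(Pose odometryPose, Pose landmarkInMap, Pose observedRelative) {
        // Where the landmark would be if the odometry pose were correct.
        double predictedX = odometryPose.x + observedRelative.x;
        double predictedY = odometryPose.y + observedRelative.y;
        // Shift the robot pose by the discrepancy between stored and predicted landmark position.
        return new Pose(odometryPose.x + (landmarkInMap.x - predictedX),
                        odometryPose.y + (landmarkInMap.y - predictedY));
    }
}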

Another way to improve mapping and obstacle detection would be to refine the use of the robot's sensors. One way to accomplish this is to implement a simultaneous localization and mapping (SLAM) technique [11], which is a more advanced way of creating geometrically correct maps.

Sensor fusion is a complex subject, and there are multiple ways in which it can be done. The method that is currently used in the project is a very simplistic one, and it is very likely that there are more efficient solutions available. The sensor model used for the sonar could also be improved. The parameterized sensor model detailed in [25] would be a preferable method of map updating, since it has been found to produce better results.

The look of the maps could be improved, even if the same sensor models were used for map updating. The sonar "ghost data" (seen in Figure 5.15, for example) could be removed without losing any worthwhile information. The walls that the sonar thinks it sees behind the real walls clearly do not exist, and the map would look much better to human eyes if they were removed. This could be done by image analysis, for example.

Other potential map-related improvements include: the ability to save and load maps; the ability to change map size and resolution while running the program; and the possibility of letting the user supply information about the environment to the robot (marking an area as dangerous, for example).

Possible navigational improvements also exist. A redundant path planning algorithm, such as potential fields [23, p. 122], could be a useful addition to the system, since it could be used in situations where the wave-front method has problems. The navigational system would also benefit from an implementation of safeguarded teleoperation (see Section 3.2.3).
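
For illustration, a basic potential-field heading computation is sketched below; it was not implemented in the project, and the gains and the influence radius are arbitrary example values. The goal attracts the robot, nearby obstacles repel it, and the sum of the two forces gives the next heading.

import java.awt.Point;
import java.util.List;

// Sketch of a potential-field heading computation (not implemented in the project).
public final class PotentialField {

    private static final double ATTRACT   = 1.0;  // gain pulling towards the goal
    private static final double REPULSE   = 2.0;  // gain pushing away from obstacles
    private static final double INFLUENCE = 5.0;  // obstacles further away than this are ignored

    private PotentialField() {}

    /** Returns the heading (radians) the robot should steer towards from its current cell. */
    public static double heading(Point robot, Point goal, List<Point> obstacles) {
        double fx = ATTRACT * (goal.x - robot.x);          // attractive force towards the goal
        double fy = ATTRACT * (goal.y - robot.y);
        for (Point o : obstacles) {
            double dx = robot.x - o.x, dy = robot.y - o.y;
            double dist = Math.hypot(dx, dy);
            if (dist > 0 && dist < INFLUENCE) {
                // Repulsive force grows as the obstacle gets closer, zero outside the influence radius.
                double push = REPULSE * (1.0 / dist - 1.0 / INFLUENCE) / (dist * dist);
                fx += push * dx;
                fy += push * dy;
            }
        }
        return Math.atan2(fy, fx);
    }
}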

The explore behaviour could be improved by using an algorithm that wastes less energy compared to the current greedy one. Instead of just heading towards the closest frontier at all times, a long-term plan detailing the optimal exploration path could be calculated. This would cause the robot to thoroughly explore its surroundings more quickly, but it might also cause it to take longer to get a rough overview of the environment [21].
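
The greedy strategy criticised here is simple to state: among all frontier cells, i.e. free cells bordering unknown space, pick the one closest to the robot. The sketch below illustrates it; the grid conventions (0 = free, 1 = obstacle, -1 = unknown) are assumptions made for the example.

import java.awt.Point;

// Sketch of greedy frontier selection: pick the closest free cell that borders unknown space.
public final class GreedyFrontierExploration {

    private GreedyFrontierExploration() {}

    public static Point nearestFrontier(int[][] grid, Point robot) {
        Point best = null;
        double bestDist = Double.MAX_VALUE;
        for (int r = 0; r < grid.length; r++) {
            for (int c = 0; c < grid[0].length; c++) {
                if (grid[r][c] == 0 && bordersUnknown(grid, r, c)) {
                    double d = robot.distance(c, r);
                    if (d < bestDist) { bestDist = d; best = new Point(c, r); }
                }
            }
        }
        return best;    // null when the whole reachable map has been explored
    }

    private static boolean bordersUnknown(int[][] grid, int r, int c) {
        int[][] n = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] d : n) {
            int rr = r + d[0], cc = c + d[1];
            if (rr >= 0 && rr < grid.length && cc >= 0 && cc < grid[0].length && grid[rr][cc] == -1)
                return true;
        }
        return false;
    }
}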

A hardware related improvement would be an automatic calibration of the 3D-camera. The 3D-camera, and subsequently the map that is built using it, is sensitive to small changes in its mounting position. Currently, the parameters used for the coordinate transformation (see Section 4.2.2) have to be adjusted by hand almost every time the system is used. A function that automatically calibrates the transformation parameters would both increase accuracy and simplify the use of the system.


Chapter 7

Acknowledgments

The authors of this thesis would like to thank Thomas Hellström at the Department of Computing Science at Umeå University for his invaluable support during the entire course of the project. We would also like to thank our late-shift colleagues in MA336 for their moral support and inspirational work ethics.


References

[1] MESA Imaging AG. SwissRanger SR-3000 Manual 1.02, 2006.

[2] Federal Emergency Management Agency. About USAR. http://www.fema.gov/emergency/usr/about.shtm (visited 2008-11-29).

[3] Federal Emergency Management Agency. Typed resource definitions, search and rescue resources. FEMA 508-8, 2005.

[4] George B. Arfken, Hans J. Weber, and Hans-Jurgen Weber. Mathematical Methods for Physicists, page 195. Academic Press, 1985.

[5] J. Bares and D. Wettergreen. Dante II: Technical description, results and lessons learned. International Journal of Robotics Research, 18(7):621–649, 1999.

[6] Laura E. Barnes, Robin R. Murphy, and Jeffery D. Craighead. Human-robot coordination using scripts. Technical report, Center for Robot-Assisted Search and Rescue, University of South Florida, Tampa, Florida, USA, 2004.

[7] David Bruemmer, Douglas Few, Heather Hunting, Miles Walton, and Curtis Nielsen. Virtual camera perspectives within a 3-D interface for robotic search and rescue. Technical report, Idaho National Laboratory, Idaho Falls, Idaho, USA, 2005.

[8] Jennifer Casper and Robin Murphy. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. 33:367–385, 2002.

[9] H. Clark and D. Wilkes-Gibbs. Referring as a collaborative process. Cognition, 22(1):1–39, 1986.

[10] Mike Daily, Youngkwan Cho, Kevin Martin, and Dave Payton. Caster: A robot for urban search and rescue. Technical report, HRL Laboratories, LLC, 3011 Malibu Canyon Road, Malibu, California, 2002.

[11] H. Durrant-Whyte and T. Bailey. Simultaneous localisation and mapping (SLAM): Part I, the essential algorithms. Robotics and Automation Magazine, 13:99–110, 2006.

[12] M.R. Endsley and D.J. Garland. Theoretical underpinnings of situation awareness: A critical review. Situation Awareness: Analysis and Measurement, pages 1–32, 2000.

[13] H.R. Everett. Sensors for Mobile Robots: Theory and Application. A.K. Peters, Ltd, 1995.


[14] Fire and Resilience Policy Division. The fire and rescue services (emergencies), England. http://www.ukusar.com/downloads/frsc132007.pdf (visited 2008-11-29).

[15] Terrence Fong, Charles Thorpe, and Charles Baur. A safeguarded teleoperation controller. Technical report, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA, 2001.

[16] Michael A. Goodrich, Dan R. Olsen Jr., Jacob W. Crandall, and Thomas J. Palmer. Experiments in adjustable autonomy. Technical report, Computer Science Department, Brigham Young University, 2001.

[17] P. F. Hokayem and M. W. Spong. Bilateral teleoperation: An historical survey. Automatica, 42:2035–2057, 2006.

[18] Ray Jarvis. A go where you look tele-autonomous rough terrain mobile robot. Technical report, Intelligent Robotics Research Centre, Monash University, P.O. Box 35, Victoria 3800, Australia, 2002.

[19] Mohammed Waleed Kadous, Raymond Ka-Man Sheh, and Claude Sammut. Caster: A robot for urban search and rescue. Technical report, University of New South Wales, Sydney, NSW 2052, Australia, 2005.

[20] A. Schmidt Kron, B. Zah G. Petzold, M.I. Hinterseer, and E. P. Steinbach. Disposal of explosive ordnances by use of a bimanual haptic telepresence system. Robotics and Automation, 2:1968–1973, 2004.

[21] Yongguo Mei, Yung-Hsiang Lu, et al. Energy-efficient mobile robot exploration. Robotics and Automation, pages 505–511, 2006.

[22] Roger Meier, Terrence Fong, Charles Thorpe, and Charles Baur. A sensor fusion based user interface for vehicle teleoperation. Technical report, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA, 2004.

[23] Robin R. Murphy. Introduction to AI Robotics. The MIT Press, 2000.

[24] NASA. Press release: NASA extends operations for its long-lived Mars rovers. http://marsrovers.jpl.nasa.gov/newsroom/pressreleases/20071015a.html (visited 2008-11-29).

[25] Jayedur Rashid. Parameterized sensor model and handling specular reflections for robot map building. Master's thesis, Department of Computing Science, Umeå University, 2006.

[26] Binoy Shah and Howie Choset. Survey on urban search and rescue robotics. Technical report, Carnegie Mellon University, Pittsburgh, Pennsylvania, 2003.

[27] James Sherwood. Wii remote raised to defuse explosive situations. http://www.reghardware.co.uk/2008/03/27/wii remote packbot robot/ (visited 2008-11-29).

[28] Hirose Shigeo and Takayama Toshio. Souryu-I: Connected crawler vehicle for inspection of narrow and winding space. Nippon Kikai Gakkai Robotikusu, Mekatoronikusu Koenkai Koen Ronbunshu, 1998:1CI4.1(1)–1CI4.1(2), 1998.


[29] Kristen Stubbs, Pamela J. Hinds, and David Wettergreen. Autonomy and common ground in human-robot interaction. IEEE Intelligent Systems, 22(2):42–50, 2007.

[30] Gregoire Terrien, Terrence Fong, Charles Thorpe, and Charles Baur. Remote driving with a multisensor user interface. Technical report, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA, 2000.

[31] C. Wampler. Concise International Encyclopedia of Robotics: Applications and Automation, chapter Teleoperators, Supervisory Control. John Wiley and Sons, Inc., 1990.

[32] Holly Yanco, Michael Baker, Robert Casey, Philip Thoren, Brenden Keyes, et al. Analysis of human-robot interaction for urban search and rescue. Technical report, University of Massachusetts Lowell, One University Ave, Olsen Hall, Lowell, MA, USA, 2006.