
Third International Workshop on

Synthetic Simulation and Robotics to

Mitigate Earthquake Disaster

(SRMED 2006)

16th, June, 2006

Messe Bremen

Bremen, Germany


Contents

Wearable Computing meets Multiagent Systems: A real-world interface for the RoboCupRescue simulation platform

1

Alexander Kleiner, Nils Behrens and Holger Kenn

Design of Human-In-The-Loop Agent Simulation for Disaster Simulation Systems

9

Yoshitaka Kuwata, Tomoichi Takahashi, Nobuhiro Ito, and Ikuo Takeuchi

Influence of Interaction on Evacuation Efficiency in Large-scale Disaster 15

Hidehisa Akiyama, Masayuki Ohta, and Itsuki Noda

VirtualIUB – development of a team of autonomous agents for the Virtual Robots Competition

16

M. Mahmudi, S. Markov, Y. Nevatia, R. Rathnam, T. Stoyanov, and S. Carpin

The Virtual Robots Competition: vision and short term roadmap 17

Stephen Balakirsky, Mike Lewis, Stefano Carpin

Device Level Simulation of Kurt3D Rescue Robots 18

Sven Albrecht, Joachim Hertzberg, Kai Lingemann, Andreas Nüchter, Jochen Sprickerhof, Stefan Stiene

Simulation of Fluid Objects in Disasters — Tokai heavy rainfall simulation using IDSS —

24

Tomoichi Takahashi, Tetsuhiko Koto, Ikuo Takeuchi, and Itsuki Noda

Information Sharing and Integration in Rescue Robots and Simulations 30

Itsuki Noda

Multi-Objective Autonomous Exploration in a Rescue Environment 36

Daniele Calisi, Alessandro Farinelli, Luca Iocchi, Daniele Nardi, Francesca Pucci

Development of an autonomous rescue robot within the USARSim 3D virtual environment

42

Giuliano Polverari, Daniele Calisi, Alessandro Farinelli, Daniele Nardi


Wearable Computing meets Multiagent Systems: A real-world interface for the RoboCupRescue simulation platform

Alexander Kleiner
Institut für Informatik
Universität Freiburg

79110 Freiburg, Germany

[email protected]

Nils Behrens
Technologie-Zentrum Informatik
Universität Bremen

28359 Bremen, Germany

[email protected]

Holger Kenn
Technologie-Zentrum Informatik
Universität Bremen

28359 Bremen, Germany

[email protected]

ABSTRACT

One big challenge in disaster response is to get an overview of the degree of damage and to provide this information, together with optimized plans for rescue missions, back to the teams in the field. Collapsing infrastructure, limited visibility due to smoke and dust, and overloaded communication lines make it nearly impossible for rescue teams to report the total situation consistently. This problem can only be solved by efficiently integrating the data of many observers into a single consistent view. A Global Positioning System (GPS) device in conjunction with a communication device, and sensors or simple input methods for reporting observations, offer a realistic chance to solve the data integration problem.

We present preliminary results from a wearable computing device that acquires disaster-relevant data, such as the locations of victims and blockades, and show the integration of these data into the RoboCupRescue Simulation [8] platform, which is a benchmark for MAS within the RoboCup competitions. We show by example how the data can be integrated consistently and how rescue missions can be optimized by solutions developed on the RoboCupRescue simulation platform. The preliminary results indicate that today's wearable computing technology combined with MAS technology can serve as a powerful tool for Urban Search and Rescue (USAR).

Keywords

Wearable Computing, GPS, Multi-Agent Systems, MAS, USAR, GIS, RoboCupRescue

1. INTRODUCTION

One big challenge in disaster response is to get an overview of the degree of damage and to provide this information, together with optimized plans for rescue missions, back to the teams in the field. Collapsing infrastructure, limited visibility due to smoke and dust, and overloaded communication lines make it nearly impossible for rescue teams to report the total situation consistently. Furthermore, they might be affected psychologically or physically by the situation itself and hence report unreliable information.

This problem can only be solved by efficiently integrating the data of many observers into a single consistent view. A Global Positioning System (GPS) device in conjunction with a communication device, and sensors or simple input methods for reporting observations, offer a realistic chance to solve the data integration problem. Furthermore, an integrated world model of the disaster makes it possible to apply solutions from the rich set of AI methods developed by the Multi-Agent Systems (MAS) community.

We present preliminary results from a wearable computing device that acquires disaster-relevant data, such as the locations of victims and blockades, and show the integration of these data into the RoboCupRescue Simulation [8] platform, which is a benchmark for MAS within the RoboCup competitions. Communication between the wearable computing devices and the server is carried out with the open GPX protocol [21] for GPS data exchange, which we extended with additional information relevant to the rescue task. We show by example how the data can be integrated consistently and how rescue missions can be optimized by solutions developed on the RoboCupRescue simulation platform. The preliminary results indicate that today's wearable computing technology combined with MAS technology can serve as a powerful tool for Urban Search and Rescue (USAR).

RoboCupRescue simulation aims at simulating large-scale disasters and exploring new ways for the autonomous coordination of rescue teams [8] (see Figure 1). These goals lead to challenges like the coordination of heterogeneous teams with more than 30 agents, the exploration of a large-scale environment in order to localize victims, and the scheduling of time-critical rescue missions. Moreover, the simulated environment is highly dynamic and only partially observable by a single agent. Agents have to plan and decide their actions asynchronously in real-time. Core problems are path planning, coordinated fire fighting, and the coordinated search and rescue of victims. The solutions presented in this paper are based on the open-source agent software [1] developed by the ResQ Freiburg 2004 team [9], the winner of RoboCup 2004. The advantage of interfacing RoboCupRescue simulation with wearable computing is twofold: First, data collected from a real interface allows the disaster simulation to be improved towards disaster reality. Second, agent software developed within RoboCupRescue might be advantageous in real disasters, since it can be tested in many simulated disaster situations and can also be compared directly to other approaches.

SRMED-2006 1


Figure 1: A 3D visualization of the RoboCupRescue model for the City of Kobe, Japan.


Nourbakhsh and colleagues utilized the MAS Retsina for mixing real-world and simulation-based testing in the context of Urban Search and Rescue [15]. Schurr and colleagues [17] introduced the DEFACTO system, which enables agent-human cooperation and has been evaluated in the fire-fighting domain with the RoboCupRescue simulation package. Liao and colleagues presented a system that is capable of recognizing the mode of transportation, i.e., by bus or by car, and predicting common travel destinations, such as the office or home location, from data sampled by a GPS device [12].

The remainder of this paper is structured as follows. We present an interface between human rescue teams and the rescue simulator in Section 2. In Section 3 we give some examples of how approaches taken from MAS can be utilized for data integration and rescue mission optimization. In Section 4 we present preliminary experiments on integrating data from a real device into RoboCupRescue, and we conclude in Section 5.

2. INTERFACING REAL RESCUE

2.1 Requirement analysis

In wearable computing, one main goal is to build devices that support a user in the primary task with little or no obstruction. Apart from the usual challenges of wearable computing [20, 19], in the case of emergency response the situation of the responder is a stressful one. In order to achieve primary task support and user acceptance, special attention has to be given to user interface design. For this application, the user needs to be able to enter information about perceptions and needs feedback from the system¹. Furthermore, the user needs to receive task-related instructions from the command center.

The implementation has to cope with multiple unreliable communication systems such as existing cell phone networks, special-purpose ad-hoc communication, and existing emergency response communication systems. As the analysis of the different properties of these communication systems is beyond the scope of this article, we abstract from them and assume unreliable IP-based connectivity between the mobile device and a central command post. This assumption is motivated by the fact that both infrastructure-based mobile communication networks and current ad-hoc communication systems can transport IP-based user traffic.

¹ Technically, this feedback is not required by the application, but we envision that it will improve user acceptance.


For mobile devices, a number of localization techniques are available today; for an overview see [6]. Although some infrastructure-based communication networks are also capable of providing localization information for their mobile terminals, we assume the presence of a GPS-based localization device. The rationale is that the localization information provided by communication systems is not very precise (e.g., sometimes limited to the identification of the current cell, which may span several square kilometers) and is therefore not usable for our application. The GPS system also has well-known problems in urban areas and inside buildings, but with additional techniques such as the ones described in [11], its reliability and accuracy can be improved sufficiently. In particular, the coexistence of a GPS device with an Internet connection makes it possible to utilize Internet-based Differential GPS, which leads to a positioning accuracy of decimeters [2].

The situation of the device and its user is also characterized by harsh environmental conditions related to the emergency response, such as fire, smoke, floods, wind, chemical spills, etc. The device has to remain operable under such conditions and, moreover, has to provide alternative means of input and output under conditions that affect human sensing and action abilities. As these requirements are quite complex, we decided to design and implement a preliminary test system and a final system. The components of the two systems and their interconnections are shown in Figure 4.

2.2 A preliminary test system

In order to analyze the properties of the communication and localization systems, a preliminary test system has been implemented, for which two requirements were dropped: the design for harsh environmental conditions and the ability to use alternative input and output.

The communication and localization system is independent of the user requirements, with the exception that the system has to be portable. We therefore chose a mobile GPS receiver and a GSM cell phone as our test implementation platform. The GPS receiver uses the Bluetooth [3] personal area network standard to connect to the cell phone. The cell phone firmware includes a Java VM based on the J2ME standard with JSR82 extensions, i.e., a Java application running on the VM can present its user interface on the phone but can also directly communicate with Bluetooth devices in the local vicinity and with Internet hosts via the GSM network's GPRS standard.

The implementation of the test application is straightforward: it regularly decodes the current geographic position from the NMEA data stream provided by the GPS receiver and sends this information to the (a priori configured) server IP address of the central command center.

SRMED-2006 2


The protocol used between the cell phone and the command center is based on the widely used GPX [21] standard for GPS locations. Among other things, the protocol defines data structures for tracks and waypoints. A track is a sequence of locations with time stamps that has been visited with the GPS device. A waypoint describes a single location of interest, e.g., the peak of a mountain. We extended the protocol to augment waypoint descriptions with information specific to disaster situations. These extensions allow rescue teams to report the waypoint-relative locations of road blockades, building fires, and victims. Currently, the wearable device automatically sends the user's trajectory to the command center, whereas perceptions are entered manually. A detailed description of the protocol extension can be found in Appendix A.
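For illustration, the following minimal Python sketch builds such an extended waypoint; the element names follow the schema in Appendix A, while the concrete coordinates, values, and the helper function itself are hypothetical, and the nesting is simplified.

import xml.etree.ElementTree as ET

def victim_waypoint(lat, lon, proximity_m, bearing_deg, description):
    # Build a GPX <wpt> element augmented with a Victim observation,
    # using the element names defined in Appendix A (nesting simplified).
    wpt = ET.Element("wpt", {"lat": str(lat), "lon": str(lon)})
    victim = ET.SubElement(wpt, "Victim")
    ET.SubElement(victim, "VictimDescription").text = description
    ET.SubElement(victim, "VictimProximity").text = str(proximity_m)  # Meters_t
    ET.SubElement(victim, "VictimBearing").text = str(bearing_deg)    # Degree_t
    return ET.tostring(wpt, encoding="unicode")

# Example: a victim reported 12 m west of the current GPS fix.
print(victim_waypoint(53.106, 8.851, 12, 270, "trapped under debris"))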

2.3 Designing the full emergency response wearable system

In order to fulfill the additional requirements for robustness and user interface, the full system will be based on additional hardware and software. The system uses a wearable CPU core, the so-called QBIC belt-worn computer [4] (see Figure 3(a)). It is based on an ARM CPU running the Linux operating system, has a Bluetooth interface, and can be extended via USB and RS232 interfaces. The wearable CPU core runs the main application program. For localization, the same mobile GPS receiver as in the test system is used, but it can be replaced by a non-Bluetooth serial device for increased reliability. For communication, the system can use multiple communication channels, of which the already-mentioned GSM cell phone can be one².

As already stated, the design of the user interface is crucial for this application. We therefore envision a user input device integrated into the clothing of the user, e.g., an arm-mounted textile keyboard [13] with a wireless link from the keyboard to the belt computer. Such an interface has already been designed for other applications such as aircraft cabin operation [14] (see Figure 2).

Figure 2: A textile keyboard for aircraft cabin operation.

Due to the harsh environmental conditions, we plan two independent output devices for information output and user feedback. A Bluetooth headset device provides audible feedback for user input, and a text-to-speech engine provides audible text output.

The second output device is a head-mounted display that can be integrated into existing emergency response gear such as firefighter helmets and masks (see Figure 3(b)). In applications where headgear is not commonly used, the output can also be provided through a body-worn display device.

² As we assumed IP-based connectivity, flexible infrastructure-independent transport mechanisms such as Mobile IP [16] can be used to improve reliability over multiple independent and redundant communication links.

Figure 3: The QBIC belt-worn computer: (a) the belt with CPU; (b) the head-mounted display; (c) both worn by the test person.


The application software driving the user interface is based on the so-called WUI toolkit [22], which uses an abstract description to define user interface semantics independent of the input and output devices used. The application code is therefore independent of the devices available in a particular instance of an implementation, i.e., with or without a head-mounted display. The WUI toolkit can also take context information into account, such as the user's current situation, in order to decide on which device and in what form output and input are provided.

Figure 4: System diagrams: (a) test system based on a GSM phone; (b) full system design based on a belt-worn wearable computer.

3. MULTI AGENT SYSTEMS (MAS) FOR URBAN SEARCH AND RESCUE (USAR)

SRMED-2006 3


3.1 Data integration

Generally, we assume that whenever communication is possible and new GPS fixes are available, the wearable device of a rescue team continuously reports the team's trajectory as a track message to the command center. Additionally, the rescue team might provide information for specific locations, for example indicating the successful exploration of a building, the detection of a victim, or the detection of a blocked road, by sending a waypoint message.

Based on an initial road map, the information on road blockage, and the autonomously collected data on the trajectories traveled by the agents, the current system builds up a connectivity graph indicating the connectivity of locations. The connectivity between a single location and all other locations is computed with the Dijkstra algorithm. The connectivity between two neighboring locations, i.e., the weight of the corresponding edge in the graph, depends on the true distance, the amount of blockage, the number of crossings, and the number of other agents known to travel on the same route. In the worst case, the graph can be calculated in O(m + n log n), where n is the number of locations and m the number of connections between them. The knowledge of the connectivity between locations allows the system to recommend "safe" routes to rescue teams and to optimize their target selection. The sequence in Figure 5(a) shows the continuous update of the connectivity graph for a building within the simulated City of Foligno. Note that the graph has to be revised if new information on the connectivity between two locations becomes available, e.g., if a new blockage has been detected or an old blockage has been removed.
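A minimal sketch of this computation, assuming an edge-weight function that combines the factors named above; the concrete coefficients are illustrative, not the system's actual values.

import heapq

def connectivity(graph, source):
    # Single-source Dijkstra over the road graph.
    # graph: {node: [(neighbor, distance, blockage, crossings, agents), ...]}
    cost = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost.get(u, float("inf")):
            continue  # stale heap entry
        for v, dist, blockage, crossings, agents in graph.get(u, []):
            # Illustrative weighting of distance, blockage and congestion.
            w = dist * (1.0 + blockage) + 5.0 * crossings + 2.0 * agents
            if c + w < cost.get(v, float("inf")):
                cost[v] = c + w
                heapq.heappush(heap, (c + w, v))
    return cost  # unreachable locations are simply absent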

The search for victims by many rescue teams can only be coordinated efficiently if the teams share information on the exploration. We assume that rescue teams report when they have finished exploring a building and when they have found a victim, by transmitting the corresponding message to the command center. The command center utilizes this information to distribute rescue teams efficiently among unexplored and reachable locations. The sequence in Figure 5(b) shows an agent's increasing knowledge of the exploration status of the map over time. Victims (indicated by green dots) and explored buildings (indicated by white color) are jointly reported by all agents. Regions marked by a yellow border indicate exploration targets recommended to the agent by the command center.

3.2 Rescue sequence optimization

Time is a critical issue during a real rescue operation. When ambulance teams arrive at an accident site, such as a car accident on a highway, it is common practice to optimize the rescue sequence heuristically, i.e., to estimate the chance of survival for each victim and to rescue the most urgent cases first. During a large-scale disaster, such as an earthquake, the efficient distribution of rescue teams is even more important, since there are many more victims and usually an insufficient number of rescue teams. Furthermore, the time needed for rescuing a group of victims might vary significantly, depending on the collapsed building structures trapping the victims.

Figure 5: Online data integration of information reported by simulated agents: (a) The connectivity between the blue building and other locations increases over time due to removed blockades. White locations are unreachable, red locations are reachable; the brighter the red color, the better the location is reachable. (b) The agent's information on the explored roads and buildings (green roads are known to be passable; green and white buildings are known to be explored). Regions marked with a yellow border are exploration targets recommended by the command center.

ual’s damage due to fire or debris, the current health thatcontinuously decreases depending on damage, and the diffi-culty of rescuing the victim, respectively. The challenge hereis to predict an upper bound on the time necessary to res-cue a victim and a lower bound on the time the victim willsurvive. In the simulation environment these predictions arecarried out based on classifiers which were induced by ma-chine learning techniques from a large amount of simulationruns. The time for rescuing civilians is approximated by alinear regression based on the buridness of a civilian and thenumber of ambulance teams that are dispatched to the res-cue. Travel costs towards a target are directly taken fromthe connectivity graph. Travel costs between two reachabletargets are estimated by continuously averaging costs expe-rienced by the agents 3.

We assume that in a real scenario expert knowledge can be acquired to give rough estimates for these predictions, i.e., rescue teams estimate whether the removal of debris needs minutes or hours. Note that in a real disaster situation the system can sample the approximate travel time between any two locations by analyzing the GPS trajectories received from rescue teams in the field. Moreover, the system can provide the expected travel time between two locations for different means of transport, e.g., by car or on foot. The successful recognition of the means of transport from GPS trajectories was already shown by Liao and colleagues [12].

³ Note that the consideration of specific travel costs between targets would make the problem unnecessarily complex.

SRMED-2006 4



Figure 6: The number of civilian survivors when applying a greedy rescue strategy and a GA-optimized rescue strategy within simulated cities (KobeEasy, KobeHard, KobeMedium, KobeVeryHard, RandomMapFinal, VCEasy, VCFinal, VCVeryHard).

If the time needed for rescuing civilians and their chance of survival are roughly predictable, one can estimate the overall number of survivors by summing up the time needed for each single rescue and determining how many victims survive within the total time. For each rescue sequence S = 〈t1, t2, ..., tn〉 of n rescue targets, a utility U(S), equal to the number of civilians that are expected to survive, is calculated. Unfortunately, an exhaustive search over all n! possible rescue sequences is intractable. A good heuristic solution is to sort the list of targets according to the time necessary to reach and rescue them and to subsequently rescue targets from the top of the list. However, as shown in Figure 6, this might lead to poor solutions. A better method could be the so-called Hungarian Method [10], which optimizes the costs for assigning n workers to m tasks in O(mn²). The method requires that the time needed until a task is finished does not influence the overall outcome. However, this is not the case for a rescue task, since a victim will die if rescued too late. Hence, we decided to utilize a Genetic Algorithm [7] (GA) for the optimization of sequences and to use it to continuously improve the rescue sequence executed by the ambulance teams.
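The utility U(S) can be evaluated by walking through the sequence, accumulating travel and rescue times, and counting the victims reached before their predicted survival deadline. A sketch, assuming per-target estimates as described above; it can also serve as the GA fitness function discussed next.

def sequence_utility(sequence, travel_time, rescue_time, survival_time):
    # U(S): expected number of survivors for rescue sequence S.
    # travel_time[t]: estimated time to reach target t (connectivity graph),
    # rescue_time[t] / survival_time[t]: predicted bounds as described above.
    clock, survivors = 0.0, 0
    for t in sequence:
        clock += travel_time[t] + rescue_time[t]
        if clock <= survival_time[t]:
            survivors += 1
    return survivors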

The GA is initialized with heuristic solutions, for example, solutions that greedily prefer targets that can be rescued within a short time or urgent targets that have only a small chance of survival. The fitness of a solution is set equal to the sequence utility U(S). In order to guarantee that solutions in the genetic pool are at least as good as the heuristic solutions, the so-called elitism mechanism, which forces the permanent presence of the best solution found in the pool, is used. Furthermore, we utilized a simple one-point-crossover strategy, a uniform mutation probability of p ≈ 1/n, and a population size of 10. Within each minute, approximately 300,000 solutions can be evaluated on a 1.0 GHz Pentium 4 computer.
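A compact sketch of this GA (population 10, elitism, one-point crossover, mutation probability of about 1/n). Since rescue sequences are permutations, the crossover below fills the tail with the missing targets in the order of the second parent; this repair step is a common remedy not detailed in the paper.

import random

def optimize_sequence(targets, utility, generations=1000, pop_size=10):
    # utility: callable mapping a sequence to U(S), e.g. a closure
    # around sequence_utility() from the sketch above. Assumes n >= 2.
    n = len(targets)

    def crossover(p1, p2):
        cut = random.randrange(1, n)                    # one-point crossover
        head = p1[:cut]
        return head + [t for t in p2 if t not in head]  # keep a permutation

    def mutate(seq):
        seq = seq[:]
        for i in range(n):
            if random.random() < 1.0 / n:               # uniform mutation prob.
                j = random.randrange(n)
                seq[i], seq[j] = seq[j], seq[i]         # swap two targets
        return seq

    # Seed with a heuristic solution (targets assumed pre-sorted greedily).
    pop = [list(targets)] + [random.sample(targets, n)
                             for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=utility, reverse=True)
        elite = pop[0]                                  # elitism
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - 1)]
        pop = [elite] + children
    return max(pop, key=utility)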

We tested the GA-based sequence optimization on different city maps in the simulation and compared the results with a greedy strategy. As can be seen in Figure 6, sequence optimization improved the performance of the rescue team in each of the tested environments. One important property of our implementation is that it can be considered an anytime algorithm: the method provides at least a solution as good as the greedy solution, and a better one, depending on the amount of time available.

4. PRELIMINARY EXPERIMENTS

The system has been tested in a preliminary fashion by successively integrating data received from a test person. The test person, equipped with the test device described in Section 2, walked several tracks within a district of the City of Bremen (see Figure 7). During the experiment, the mobile device continuously transmitted the trajectory of the test person. Additionally, the test person reported victim found waypoints after having visual contact with a victim. Note that victim waypoints were selected arbitrarily, since fortunately no victims were found in Bremen.

In order to integrate the data into the rescue system, the received data, encoded in the extended GPX protocol that represents locations by latitude and longitude, has to be converted into a grid-based representation. We utilized the Universal Transverse Mercator (UTM) [18] projection system, which provides a zone for any location on the surface of the Earth, with coordinates described relative to this zone. By calibrating maps from the rescue system to the point of origin of the UTM coordinate system, locations from the GPS device can be mapped directly. In order to cope with erroneous data, we decided simply to ignore outliers, i.e., locations far from the track, which were detected based on assumptions about the test person's maximal velocity. In the next version of the system we plan to detect outliers based on the Mahalanobis distance estimated by a Kalman filter, similar to the dead-reckoning methods used in the context of autonomous mobile robots. Figure 7(b) shows the successive integration of the received data into the rescue system, and Figure 7(a) displays the same data plotted by GoogleEarth. Note that GPX data can be processed directly by GoogleEarth without any conversion.
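A sketch of the velocity-based outlier rejection, assuming time-stamped UTM-projected fixes; the maximum-velocity threshold is an assumed value.

def filter_outliers(fixes, v_max=10.0):
    # Drop GPS fixes that would imply an implausible speed of the walker.
    # fixes: list of (t_seconds, easting_m, northing_m) in UTM coordinates.
    kept = [fixes[0]]
    for t, x, y in fixes[1:]:
        t0, x0, y0 = kept[-1]
        if t <= t0:
            continue  # discard duplicate or out-of-order timestamps
        speed = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 / (t - t0)
        if speed <= v_max:
            kept.append((t, x, y))  # plausible fix: keep it
    return kept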

5. CONCLUSION

We introduced the preliminary design of a wearable device that can be utilized for USAR. Furthermore, we demonstrated a system that is generally capable of integrating trajectories and observations from many of these wearable devices into a consistent world model. As shown by the simulation results, the consistent world model allows the system to coordinate exploration by directing teams to globally unexplored regions, to optimize their plans based on the sampled connectivity of roads, and to optimize the sequence of rescuing victims. Applying this coordination in real scenarios as well, i.e., sending the road graph and mission commands back to the wearable devices of real rescue teams in the field, will be part of future work.

As we can see from our experiments, the accuracy of the GPS locations suffices for mapping trajectories onto a given road graph. However, during a real disaster, a city's infrastructure might change completely, i.e., former roads might be impassable or disappear altogether, and people may search for new connections between places (e.g., off-road or even through buildings).

SRMED-2006 5


(a) (b)

Figure 7: Successive integration of data reported by a test person equipped with a wearable device. (a) The real trajectory and observations of victims plotted with GoogleEarth (victims are labeled with "civFound"). (b) The same data integrated into the rescue system (green roads are known to be passable, white buildings are known to be explored, and green dots indicate observed victims).

SRMED-2006 6


Therefore, it is necessary that the system is capable of learning new connections between places and modifying the existing graph accordingly. Bruentrup and colleagues have already studied the problem of map generation from GPS traces [5]. Our future work will particularly address the problem of learning from multiple noisy routes. We will extend the existing rescue system with the capability of adding new connections to the road graph and of augmenting these connections with the estimated travel time sampled from the observed trajectories.

Furthermore, we are investigating methods of visual odometry for estimating the trajectories of humans walking within buildings or, more generally, in situations where no GPS localization is possible. We are confident that this odometry data, together with partial GPS localization, will suffice to build an accurate map of the disaster area, including routes leading through buildings and debris.

Finally, it would be interesting to compare the system with the conventional methods used in emergency response today. This could be achieved by comparing the efficiency of two groups of rescue teams exploring buildings within an unknown area, where one group is coordinated by conventional radio communication and the other by our system via wearable devices.

6. REFERENCES

[1] ResQ Freiburg 2004 source code. Available at: http://gkiweb.informatik.uni-freiburg.de/~rescue/sim04/source/resq.tgz. Released September 2004.

[2] SAPOS: Satellite Positioning Service of the German National Survey (Satellitenpositionierungsdienst der deutschen Landesvermessung). Available at: http://www.sapos.de/.

[3] The IEEE Standard 802.15.1: Wireless personal area network standard based on the Bluetooth v1.1 foundation specifications, 2002.

[4] O. Amft, M. Lauffer, S. Ossevoort, F. Macaluso, P. Lukowicz, and G. Tröster. Design of the QBIC wearable computing platform. In 15th International Conference on Application-Specific Systems, Architectures and Processors (ASAP '04), Galveston, Texas, September 2004.

[5] R. Bruentrup, S. Edelkamp, S. Jabbar, and B. Scholz. Incremental map generation with GPS traces. In International IEEE Conference on Intelligent Transportation Systems (ITSC), Vienna, Austria, 2005.

[6] M. Hazas, J. Scott, and J. Krumm. Location-aware computing comes of age. IEEE Computer, 37(2):95–97, February 2004.

[7] J. H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, 1975.

[8] H. Kitano, S. Tadokoro, I. Noda, H. Matsubara, T. Takahashi, A. Shinjou, and S. Shimada. RoboCup Rescue: Search and rescue in large-scale disasters as a domain for autonomous agents research. In IEEE Conf. on Man, Systems, and Cybernetics (SMC-99), 1999.

[9] A. Kleiner, M. Brenner, T. Braeuer, C. Dornhege, M. Goebelbecker, M. Luber, J. Prediger, J. Stueckler, and B. Nebel. Successful search and rescue in simulated disaster areas. In Proc. of the International RoboCup Symposium '05, 2005.

[10] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83–97, 1955.

[11] Q. Ladetto, B. Merminod, P. Terrier, and Y. Schutz. On foot navigation: When GPS alone is not enough. Journal of Navigation, 53(2):279–285, May 2000.

[12] L. Liao, D. Fox, and H. A. Kautz. Learning and inferring transportation routines. In AAAI, pages 348–353, 2004.

[13] U. Möhring, S. Gimpel, A. Neudeck, W. Scheibner, and D. Zschenderlein. Conductive, sensorial and luminescent features in textile structures. In H. Kenn, U. Glotzbach, O. Herzog (eds.): The Smart Glove Workshop, TZI Report, 2005.

[14] T. Nicolai, T. Sindt, H. Kenn, and H. Witt. Case study of wearable computing for aircraft maintenance. In Otthein Herzog, Michael Lawo, Paul Lukowicz and Julian Randall (eds.), 2nd International Forum on Applied Wearable Computing (IFAWC), pages 97–110. VDE Verlag, March 2005.

[15] I. Nourbakhsh, K. Sycara, M. Koes, M. Yong, M. Lewis, and S. Burion. Human-robot teaming for search and rescue. IEEE Pervasive Computing: Mobile and Ubiquitous Systems, pages 72–78, January 2005.

[16] C. Perkins. IP mobility support for IPv4. RFC 3344, August 2002.

[17] N. Schurr, J. Marecki, P. Scerri, J. P. Lewis, and M. Tambe. The DEFACTO system: Coordinating human-agent teams for the future of disaster response. In Programming Multiagent Systems, 2005.

[18] J. P. Snyder. Map Projections - A Working Manual. U.S. Geological Survey Professional Paper 1395. United States Government Printing Office, Washington, D.C., 1987.

[19] T. Starner. The challenges of wearable computing: Part 1. IEEE Micro, 21(4):44–52, 2001.

[20] T. Starner. The challenges of wearable computing: Part 2. IEEE Micro, 21(4):54–67, 2001.

[21] TopoGrafix. GPX - the GPS exchange format. Available at: http://www.topografix.com/gpx.asp. Released August 9, 2004.

[22] H. Witt, T. Nicolai, and H. Kenn. Designing a wearable user interface for hands-free interaction in maintenance applications. In PerCom 2006 - Fourth Annual IEEE International Conference on Pervasive Computing and Communications, 2006.

SRMED-2006 7


APPENDIX

A. COMMUNICATION PROTOCOL

<xsd:complexType name="RescueWaypoint">

<xsd:annotation><xsd:documentation>

This type describes an extension of GPX 1.1 waypoints.

Waypoints within the disaster area can be augmented

with additional information, such as observations of fires,

blockades and victims.

</xsd:documentation></xsd:annotation>

<xsd:sequence>

<xsd:element name="Agent"

type="RescueAgent_t" minOccurs="0" maxOccurs="1" />

<xsd:element name="Fire"

type="RescueFire_t" minOccurs="0" maxOccurs="unbounded" />

<xsd:element name="Blockade"

type="RescueBlockade_t" minOccurs="0" maxOccurs="unbounded" />

<xsd:element name="VictimSoundEvidence"

type="RescueVictimSoundEvidence_t" minOccurs="0" maxOccurs="unbounded" />

<xsd:element name="Victim"

type="RescueVictim_t" minOccurs="0" maxOccurs="unbounded" />

<xsd:element name="Exploration"

type="RescueExploration_t" minOccurs="0" maxOccurs="1" />

</xsd:sequence>

</xsd:complexType>

<xsd:complexType name="RescueVictim_t">

<xsd:annotation><xsd:documentation>

This type describes information on a victim

relative to the waypoint.

</xsd:documentation></xsd:annotation>

<xsd:sequence>

<xsd:element name="VictimDescription"

type="xsd:string" "minOccurs="0" maxOccurs="1"/>

<xsd:element name="VictimSurvivalTime"

type="xsd:integer" "minOccurs="0" maxOccurs="1"/>

<xsd:element name="VictimRescueTime"

type="xsd:integer" "minOccurs="0" maxOccurs="1"/>

<xsd:element name="VictimProximity"

type="Meters_t" minOccurs="0" maxOccurs="1"/>

<xsd:element name="VictimBearing"

type="Degree_t" minOccurs="0" maxOccurs="1"/>

<xsd:element name="VictimDepth"

type="Meters_t" minOccurs="0" maxOccurs="1"/>

</xsd:sequence>

</xsd:complexType>

<xsd:complexType name="RescueFire_t">

<xsd:annotation><xsd:documentation>

This type describes the observation of fire

relative to the waypoint.

</xsd:documentation></xsd:annotation>

<xsd:sequence>

<xsd:element name="FireDescription"

type="xsd:string" "minOccurs="0" maxOccurs="1"/>

<xsd:element name="FireProximity"

type="Meters_t" minOccurs="0" maxOccurs="1"/>

<xsd:element name="FireBearing"

type="Degree_t" minOccurs="0" maxOccurs="1"/>

</xsd:sequence>

</xsd:complexType>

<xsd:complexType name="RescueBlockage_t">

<xsd:annotation><xsd:documentation>

This type describes detected road blockages

relative to the waypoint.

</xsd:documentation></xsd:annotation>

<xsd:sequence>

<xsd:element name="BlockageDescription"

type="xsd:string" "minOccurs="0" maxOccurs="1"/>

<xsd:element name="BlockageProximity"

type="Meters_t" minOccurs="0" maxOccurs="1"/>

<xsd:element name="BlockageBearing"

type="Degree_t" minOccurs="0" maxOccurs="1"/>

</xsd:sequence>

</xsd:complexType>

<xsd:complexType name="RescueVictimSoundEvidence_t">

<xsd:annotation><xsd:documentation>

This type describes evidence on hearing a victim

relative to the waypoint.

</xsd:documentation></xsd:annotation>

<xsd:sequence>

<xsd:element name="VictimEvidenceRadius"

type="Meters_t" minOccurs="1" maxOccurs="1"/>

</xsd:sequence>

</xsd:complexType>

<xsd:complexType name="RescueExploration_t">

<xsd:annotation><xsd:documentation>

This type describes the area that has been explored

around the waypoint.

</xsd:documentation></xsd:annotation>

<xsd:sequence>

<xsd:element name="ExploredRadius"

type="Meters_t" minOccurs="1" maxOccurs="1"/>

</xsd:sequence>

</xsd:complexType>

<xsd:complexType name="RescueAgent_t">

<xsd:annotation><xsd:documentation>

This type describes the observing agent.

</xsd:documentation></xsd:annotation>

<xsd:sequence>

<xsd:element name="AgentName"

type="xsd:string" "minOccurs="0" maxOccurs="1"/>

<xsd:element name="AgentTeam"

type="xsd:string" minOccurs="0" maxOccurs="1"/>

</xsd:sequence>

</xsd:complexType>

<xsd:simpleType name="Meters_t">

<xsd:annotation><xsd:documentation>

This type contains a distance value measured in meters.

</xsd:documentation></xsd:annotation>

<xsd:restriction base="xsd:integer"/>

</xsd:simpleType>

<xsd:simpleType name="Degree_t">

<xsd:annotation><xsd:documentation>

This type contains a bearing value measured in degrees.

</xsd:documentation></xsd:annotation>

<xsd:restriction base="xsd:integer"/>

</xsd:simpleType>

SRMED-2006 8


Design of Human-In-The-Loop Agent Simulation

for Disaster Simulation Systems

Yoshitaka Kuwata¹, Tomoichi Takahashi², Nobuhiro Ito³, and Ikuo Takeuchi⁴

¹ NTT DATA CORPORATION, Japan. ² Meijo University, Japan.
³ Aichi Institute of Technology, Japan. ⁴ The University of Tokyo, Japan.

Abstract. RoboCupRescue Simulation System (RCR-SS) is useful for the evaluation of disaster mitigation strategies in the real world. Based on RCR-SS, we propose a new simulation framework named "RoboCupRescue Human-In-The-Loop Agent Simulation (RCR-HITLAS)", in which humans act as agents with various roles. As humans are involved in the simulation, agents' strategies do not need to be pre-programmed. We can also measure the performance of the user interfaces and decision support systems that humans use, as well as the humans' skills.

1 Introduction

RoboCupRescue Simulation System (RCR-SS) is useful for real-world disaster mitigation systems. One example is found in the DEFACTO Coordination System [1], in which RCR-SS is used for the training of incident commanders in the Los Angeles Fire Department. RCR-SS is also useful for the evaluation of strategies in disaster mitigation. By comparing the results of rescue simulations with two different sets of rescue agents, we can evaluate the rescue strategies built into these agents. However, in order to use this evaluation method, the strategies must be explicit and need to be pre-programmed in the agents. For example, in order to write a civilian program with realistic reactions, programmers need to research and write explicit rules that reflect humans' intentions [2].

We propose a new simulation framework named "RoboCupRescue Human-In-The-Loop Agent Simulation (RCR-HITLAS)", in which humans act as agents with various roles. As humans play roles in the simulation, agent strategies do not need to be hard-coded in the agents. We can also measure the performance of user interfaces for decision support systems, as well as humans' skills.

Because there are several kinds of agents with different roles in RCR-SS, thefollowing evaluation scenarios are possible.

1. Humans play Supervisors
In this scenario, humans are involved as supervisors of professional agents. Examples of supervisors include commanders of fire departments (FD) and captains of rescue teams (RT). Supervisors decide the allocation of the resources necessary for disaster mitigation. As the decisions of supervisors derive from mitigation strategies, these strategies can be evaluated.

SRMED-2006 9


Fig. 1. Architecture of RCR-HITLAS (the kernel connects the GIS, the viewer, the simulators (traffic, road blockade, fire, misc.), the agents (fire fighter, ambulance, police force, civilian), and the human-operated Command Post).

2. Humans play professional agents (PA)
In this scenario, humans play professional agents (PA) such as fire fighters and rescue-team members. We can evaluate the strategies of fire fighting at each site by comparing the results of this scenario.

3. Humans play civilians
It is also possible for humans to join rescue simulations as civilian agents. By acting as civilians, we can evaluate strategies and procedures for evacuation.

Although all scenarios are expected to be useful for their respective purposes, we decided to focus on the first scenario as a first step. This is because we are most interested in optimal rescue strategies across the whole disaster area. The viewpoint of supervisors is suitable for the evaluation of global rescue strategies. It is also expected that agent programmers can get ideas about how they should program agents by acting as supervisors themselves.

2 Architecture of RCR-HITLAS

We designed RCR-HITLAS based on RCR-SS. Fig. 1 represents the architecture of RCR-HITLAS.

In order to avoid the cost of rewriting programs, RCR-HITLAS is designed to reuse as many components of RCR-SS as possible. We introduce one new component named "Command Post" (CP) into the simulation framework. A CP works just like a normal agent except that it has a user interface.

In order to make the simulation more realistic, the following rules are applied to CPs.

1. Limited communication channel

SRMED-2006 10


CPs must communicate with PAs through the kernel. Direct communication between CPs and PAs is not allowed. This rule is introduced because communication should be controlled by the kernel. CPs use a set of tell/hear commands to communicate with PAs.

2. Standard commands to PAs
We designed a set of commands for PAs and CPs as a standard in RCR-HITLAS. With the standard, we can compare the performance of CPs by simply exchanging them. A list of commands and reports is shown in Table 1. There are two kinds of commands: common commands for all kinds of PAs, and individual commands specific to an agent's role. Commands are sent to PAs using the message-exchange mechanism via the kernel. When a task is completed or has failed, PAs send a report to the CP.

3. Limited information sources
In the real world, it is very hard to collect disaster information, so information is very limited even for supervisors. In RCR-SS, on the other hand, it is possible for humans to observe complete disaster information with viewers, which is not realistic. In order to model the information collection of the real world, CPs must get information by the following two methods: A) acquire information by themselves using sensing commands; B) ask other PAs to report the information they have gathered so far.

4. Variable Realtime-knob (RTK)
In RCR-SS, the simulation speed is controlled by the kernel. In a usual simulation, one minute of simulation time takes one second of real time. We define this ratio as the Realtime-knob (RTK), i.e., RTK = (simulation time) / (real-world time). RCR-SS uses RTK = 60. As RTK = 60 is often too fast for humans to make decisions, we need to set the RTK to a value suitable for humans.
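A sketch of how a kernel loop might pace the simulation for a given RTK; the per-step callback and loop structure are hypothetical.

import time

def run_kernel(step_fn, sim_minutes=300, rtk=60):
    # Advance one simulated minute per iteration and sleep so that
    # (simulation time) / (real time) = RTK. RTK = 60 reproduces the
    # usual RCR-SS speed; smaller values slow the run down for humans.
    wall_per_step = 60.0 / rtk
    for minute in range(sim_minutes):
        start = time.time()
        step_fn(minute)          # hypothetical per-step simulation callback
        elapsed = time.time() - start
        time.sleep(max(0.0, wall_per_step - elapsed))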

Table 1. List of Commands and Reports

Comm./Report             | Content         | Parameter                        | Meaning
Command to PA            | Goto            | Location ID                      | Request to go to ID
Command to PA            | Report          | Level of details                 | Request to report the status
Command to Fire Fighters | Extinguish      | Location ID                      | Request fire fighters to extinguish a building
Command to Police Agents | Clear           | Road ID                          | Request police agents to clear a road
Command to Rescue Agents | Rescue          | Location ID                      | Request a rescue team to search and rescue
Report                   | Report Response | Time, Location, ID, Status, etc. | Send the status of a PA
Report                   | Task Completed  | Time(t), Command(c)              | Command(c) was completed at Time(t)
Report                   | Task Failed     | Time(t), Command(c), Reason(r)   | Command(c) failed at Time(t) for Reason(r)

SRMED-2006 11


Fig. 2. Architecture of Command Post

3 Command Post as Human-In-The-Loop Agent

Figure 2 represents an example architecture for CPs. As a CP is a kind of agent that includes humans in the control loop, we call the command post a "human-in-the-loop agent". A human-in-the-loop agent consists of the sub-modules described in the following sub-sections.

3.1 Perception Module (PM)

In each simulation step, the rescue kernel distributes a piece of information to all agents. As humans cannot know complete information in the real world, the kernel selects a part of the information based on the humans' perceptive model. For example, when the range of view is limited to 300 m, the kernel selects information within a 300 m radius of each agent and sends it to the agent. The perception module is responsible for reconstructing the world model based on the information received from the kernel. All other modules use the model built by the perception module.
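A sketch of this kernel-side selection under a 300 m visual-range model; the world representation is hypothetical.

import math

def visible_to(agent_pos, world_objects, view_range=300.0):
    # Select the world objects within the agent's visual range, mimicking
    # the kernel's perceptive model described above.
    ax, ay = agent_pos
    return [obj for obj in world_objects
            if math.hypot(obj["x"] - ax, obj["y"] - ay) <= view_range]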

3.2 Command Handling Module (CHM)

The Command Handling Module (CHM) receives commands from the user via the User Interface Module. According to the standard, the CHM composes a series of commands to the PAs. The commands are sent to the PAs via the kernel. The CHM also handles report messages from PAs.

3.3 User Interface Module (UIM)

The User Interface Module (UIM) shows the world model to humans and receives orders from them. As the world model is based on geographical information, the UIM should display it graphically.

An example UIM design is briefly shown in Section 4.

SRMED-2006 12


3.4 Decision Support Module (DSM)

The Decision Support Module (DSM) generates additional information to support supervisors. In many cases, supervisors need to decide the allocation of rescue resources (PAs). The following decision supports are possible.

1. Current Status of PAs
The most basic support function of the DSM is to show the latest status of the PAs in action, for example, that fire fighter [A] is now working at area [E]. The history of the PAs' activities also helps.

2. Priority of Incidents
A higher-level support function of the DSM is a list of incidents with priorities. For example, if two fires at areas [A] and [B] are reported, a fire simulator can calculate which area will receive more serious damage and rank the incidents accordingly.

3. List of possible resource allocation plans with priorities
The DSM can calculate a list of possible resource allocations by estimating the minimum resources needed for each incident (see the sketch after this list). For example, if a fire is reported at area [A], and the fire simulator estimates that the fire will spread to 5 buildings in 30 minutes, the DSM should recommend the allocation of 15 fire fighters to area [A]. The DSM should also choose 15 fire fighters available within 30 minutes from the agent list.

4. Results of sub-simulations with a PA allocation plan
When supervisors need to decide the allocation of PAs, it is useful to use sub-simulators (a simulator within the simulator). With a sub-simulator, the DSM can calculate the result of a certain resource allocation. With a bunch of sub-simulators, the DSM can show the best resource allocation plans, and supervisors can choose among them.
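A sketch of the minimum-resource estimate from item 3; the per-building ratio is an assumption read off the worked example (5 buildings in 30 minutes leading to 15 fire fighters), not a documented rule.

def recommend_allocation(buildings_at_risk, fighters_per_building=3):
    # Minimum fire-fighter allocation for an incident, following the
    # worked example above; the 3-fighters-per-building ratio is assumed.
    return buildings_at_risk * fighters_per_building

def choose_available(agents, horizon_min):
    # Pick agents whose estimated travel time fits within the horizon;
    # the eta_min field is assumed to come from the world model.
    return [a for a in agents if a["eta_min"] <= horizon_min]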

Many more support functions are possible for different kinds of tasks. It is therefore better to make the DSM exchangeable.

4 Implementations of RCR-HITLAS

We implemented the following components as a prototype of RCR-HITLAS.

1. Command Post (CP)
We built a prototype CP named DICE, designed on the basis of logViewer [3]. A screen image of DICE is shown in Fig. 3. There are three panes on the screen. The top-left pane provides a global view of the whole city at a glance. The top-right pane provides a local view for checking PAs in detail. The bottom-left pane is a list of agents, from which humans select an agent and send it commands using the command buttons at the bottom of the pane.

2. Professional Agents (PA)
A set of PAs was implemented by Takai et al. [4], based on YabAI [5]. YabAI was selected for its performance and popularity.

SRMED-2006 13


Fig. 3. Screen Image of Command Post

5 Conclusions and Future Works

We proposed a new simulation framework based on RCR-SS and described the architecture and protocol of RCR-HITLAS. The design and implementation of a prototype system were shown.

We are planning to extend RCR-HITLAS to support collaboration. In this collaboration framework, more than one human joins a simulation scenario through multiple CPs, and the humans collaborate with each other in the scenario.

References

1. Marecki, J., Schurr, N., Tambe, M.: Agent-based Simulations for Disaster Rescue Using the DEFACTO Coordination System. In: Emergent Information Technologies and Enabling Policies for Counter Terrorism, Wiley-IEEE Press (2005)

2. Kuwata, Y., Noda, I., Ohta, M., Ito, N., Shinoda, K., Matsuno, F.: Evaluation ofDecision Support Systems for Emergency Management. In: SICE-2002. (2002)

3. Kuwata, Y., Shinjo, A.: Design of RoboCup-Rescue Viewers – Toward a Real World Emergency System –. In: The Fourth International Workshop on RoboCup, Melbourne, Australia (2000)

4. Takai, T., Kuwata, Y., Takeuchi, I.: Disaster Simulation System that Humans participate. In: 45th Game Programming Symposium, Information Processing Society of Japan (2004)

5. Morimoto, T.: Agent team: YabAI (2001). http://ne.cs.uec.ac.jp/~morimoto/rescue/yabai/index.html

SRMED-2006 14


Influence of Interaction on Evacuation Efficiency

in Large-scale Disaster

Hidehisa Akiyama, Masayuki Ohta and Itsuki Noda

National Institute of Advanced Industrial Science and Technology

It is important to prepare measures against disaster beforehand from the viewpoint of disaster mitigation. As an approach to this, we are involved in the development of the Integrated Disaster Simulation System (IDSS) [1], which realizes large-scale multi-agent simulation using parallel distributed processing. We have performed several simulations using IDSS to examine the relation between civilians' behavior patterns and evacuation efficiency.

In our experiment, we assume a situation in which all civilians go to the refuge just after a warning, in an urban area of about 300 m × 400 m. There are two types of civilians: 1) those who know where the refuge is, and 2) those who do not. We introduced three parameters to control the civilians' behavior: 1) P1: the probability that a civilian knows the refuge; 2) P2: the probability that a civilian who does not know the refuge follows others; 3) P3: the probability that a civilian who knows the refuge waits for others. We performed three simulations, as follows.

First, we fixed P2 and P3 to 0.5 and 0.0, respectively, and changed only P1. In this setting, when the number of civilians that know the refuge increases, the number of civilians that reach the refuge increases linearly. This result shows that the change of P1 hardly influences the evacuation efficiency.

Second, we fixed P1 and P3 to 0.5 and 0.0, respectively, and changed only P2. In this setting, not only is no increase in evacuation efficiency seen, but a decrease is observed, because the amount of random walking decreases when P2 is increased. On the other hand, if civilians can distinguish the type of other civilians, the efficiency improves greatly. These results show that the overall evacuation efficiency may be improved if civilians disclose the information that they have.

Last, we fixed P1 and P2 to 0.5 and 0.5, respectively, and changed only P3. As a result, the behavior of waiting for others did not influence the evacuation efficiency much.
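A minimal sketch of the civilian decision rule implied by P1, P2, and P3; the surrounding movement model and data layout are hypothetical.

import random

def civilian_step(agent, neighbors, p2=0.5, p3=0.0):
    # One decision step of a civilian in the evacuation model.
    # agent["knows_refuge"] is assigned with probability P1 during setup.
    if agent["knows_refuge"]:
        return "wait" if random.random() < p3 else "go_to_refuge"
    if neighbors and random.random() < p2:
        return "follow"       # follow a nearby civilian
    return "random_walk"      # otherwise keep wandering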

Our simulation examples show that civilians should follow those who know the refuge, but the information about who knows the refuge must be disclosed. We can say that, in order to improve evacuation efficiency, interaction such as information exchange is needed.

References

1. Itsuki Noda and Michinori Hatayama. Common frameworks of networking and information-sharing for advanced rescue systems. In Proc. of IEEE International Conference on Robotics and Biomimetics 2004 (ROBIO 2004), paper no. 324, 2004.

SRMED-2006 15


VirtualIUB – development of a team of autonomous agents for the Virtual Robots Competition

M. Mahmudi, S. Markov, Y. Nevatia, R. Rathnam, T. Stoyanov, S. Carpin
School of Engineering and Science
International University Bremen
Bremen, Germany

I. INTRODUCTION

Urban Search And Rescue is envisioned to be one of the major driving forces for robotics research as well as commercial applications. A huge basin of potential users, high social impact, and a rich set of challenging unsolved research problems are key features of this rapidly growing field. Two of the most urgent issues to be addressed within USAR are autonomy and cooperation. Promptness of response is a must when locating victims; therefore, the use of multiple robotic vehicles is the obvious way to speed up the search process. However, it is unrealistic to think that each of these platforms will be remotely controlled by a human operator. The vision, instead, is to have multiple robots operating mainly autonomously and actively exchanging information. We therefore see cooperation and autonomy as two highly coupled topics.

Our efforts within the newly established Virtual Robots competition have been to develop a fully autonomous team of agents that cooperatively explores and maps an unknown virtual environment. We are aware that this is only a subset of the possible tasks that can be explored within USARSim, i.e., the underlying software infrastructure, but we think this is the basic starting point towards the development of more sophisticated systems in the future.

II. SYSTEM ARCHITECTURE

Each robot is controlled by the simple architecture depicted in Figure 1. Perception and action are the only two modules that actually interact with sensors and actuators, respectively.

Fig. 1. Modules controlling a single robot

The core of the system is given by the mapping and localization module and the create potential function module. The first embeds a freely available implementation of a very efficient SLAM algorithm [1]. The second creates a potential function with a single local minimum at a desired location, which is then used by the Move to module in order to find out where to go. The target point is set to detected victims' locations or to other interesting points generated with the frontier-based algorithm [2]. The cooperative part of the system is still under development, but mainly reduces to map merging when two robots approach and recognize each other.
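One standard way to obtain a potential function with a single minimum at the goal is wavefront (breadth-first) propagation over an occupancy grid; the sketch below shows this common construction, which is not necessarily the team's exact method.

from collections import deque

def wavefront_potential(grid, goal):
    # Potential over a 2D occupancy grid (True = obstacle) whose only
    # local minimum is at `goal`; gradient descent on it cannot get stuck.
    rows, cols = len(grid), len(grid[0])
    pot = [[float("inf")] * cols for _ in range(rows)]
    gr, gc = goal
    pot[gr][gc] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and pot[nr][nc] == float("inf")):
                pot[nr][nc] = pot[r][c] + 1
                queue.append((nr, nc))
    return pot  # the robot moves to the neighbor with the smallest potential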

III. LESSONS LEARNED

One of the unarguable advantages of robot simulators is the possibility to run multiple extensive and easy-to-set-up experiments to quickly identify strengths and weaknesses of algorithms under development. USARSim is the ideal robot simulator for this. Its close relationship to reality makes it possible to attack problems like cooperation between teams of robots even when the needed hardware is not available. Its open source nature calls for, and has seen, the active contribution of different research groups located all around the world. Models of many different commercial and research robot platforms have been spontaneously donated to the community, and we were offered the possibility to practice with many different platforms. With minimal effort we have successfully integrated software components originally developed for real-world robotics platforms and made freely available. We in fact envision that, coherently with the spirit of the RobocupRescue Simulation League, one of the main advantages of the Virtual Robots competition is the lowering of the entry barrier for newcomers through a free distribution of fully functioning robot control software.

REFERENCES

[1] G. Grisetti, C. Stachniss, and W. Burgard, "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling," in Proceedings of the IEEE International Conference on Robotics and Automation, 2005, pp. 2432–2437.

[2] B. Yamauchi, "A frontier-based approach for autonomous exploration," in International Symposium on Computational Intelligence in Robotics and Automation, 1997, pp. 146–151.


The Virtual Robots Competition: vision and short-term roadmap

Stephen Balakirsky
Intelligent Systems Division
NIST
Gaithersburg, MD, USA

Mike Lewis
Department of Information Sciences and Telecommunications
University of Pittsburgh
Pittsburgh, PA, USA

Stefano Carpin
School of Engineering and Science
International University Bremen
Bremen, Germany

CURRENT STATUS AND FUTURE PLANS

The USARSim framework provides a comprehensive set of open source tools for the development and evaluation of autonomous agent systems. Urban Search and Rescue Simulation (USARSim) [1][2] is based on the Unreal Tournament game engine and provides realistic environments and embodiment for agents. The environments are full three-dimensional worlds that have photo-realistic textures and objects. Embodiment is aided by the Karma physics engine that allows for physics-based interactions with objects in the environment. USARSim also supports a variety of sensor models including a SICK LMS laser scanner, video camera, encoders, inertial navigation sensors, and an RFID tag sensor. All of the sensors are capable of having noise models added to their outputs. For the specialized world of urban search and rescue, acoustic and motion detectors are provided to aid in the location of victims. USARSim enjoys significant popularity and a lower entry threshold in part due to the adopted open source philosophy. Different research groups around the world have generously donated robot models, sensor models, and environments and have provided quick fixes when bugs were detected.
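As a hedged illustration of what such a noise model can look like (USARSim configures its noise internally; the function below is our own sketch, not USARSim code), Gaussian noise can be added to an ideal range reading via the Box-Muller transform:

#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Corrupt an ideal range reading with zero-mean Gaussian noise of standard
 * deviation sigma (Box-Muller transform). Illustrative sketch only. */
double noisy_range(double true_range, double sigma)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* in (0,1) */
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double n  = sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
    return true_range + sigma * n;
}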

The RoboCup Rescue Virtual Competition is the third competition running under the RobocupRescue Simulation League umbrella. It utilizes the USARSim framework to provide a development, testing, and competition environment that is based on the physical arenas. It is envisioned that researchers will utilize this framework to perfect algorithms in the areas of:

1) Autonomous multi-robot control
2) Human, multi-robot interfaces
3) True 3D mapping and exploration of environments by multi-robot teams
4) Development of novel mobility modes for obstacle traversal
5) Practice and development for real robots that will compete in the physical league

In our view, the Virtual Robots competition should serve the following goals [3][4]:

• provide a meeting point between the different research communities involved in the RobocupRescue Simulation league and the RobocupRescue Robot league. The two communities are attacking the same problem from opposite ends and are currently far from reaching each other. The Virtual Competition offers close connections to the Robot league, as well as more challenging scenarios for multi-agent research

• lower entry barriers for newcomers. The development of a complete system performing search and rescue tasks can be overwhelming. The possibility to test and develop control systems using platforms and modules developed by others makes the startup phase easier. With this goal in mind, we fully support the open source strategy already embraced in the other competitions in the RobocupRescue Simulation league

• let people concentrate on what they can do better. Strictly connected to the former point, the free sharing of virtual robots, sensors, and control software allows people to focus on certain aspects of the problem (victim detection, cooperation, mapping, etc.), without the need to acquire expensive resources or develop complete systems from scratch.

In the near future it is our intention to extend USARSim to get an even closer connection to reality, in order to make possible a seamless migration of code between real and simulated worlds with known and bounded differences. In particular, the advent of new hardware that performs accelerated physics simulation opens the doors to a new universe of yet unthinkable realism. It will be possible to perform accurate simulation of legged and tracked vehicles, complex vehicle-environment interactions, grasping, and more. Also in response to requests coming from researchers outside the RoboCup community, it is our intention to add completely new classes of robots, like flying or swimming robots.

REFERENCES

[1] http://usarsim.sourceforge.net
[2] J. Wang, M. Lewis and J. Gennari. "USAR: A Game-Based Simulation for Teleoperation". Proceedings of the 47th Annual Meeting of the Human Factors and Ergonomics Society, Denver, CO, Oct. 13-17, pp. 493-497, 2003.
[3] S. Carpin, J. Wang, M. Lewis, A. Birk, and A. Jacoff, "High fidelity tools for rescue robotics: Results and perspectives", RoboCup Symposium 2005.
[4] S. Carpin, M. Lewis, J. Wang, S. Balakirsky, C. Scrapper. "Bridging the gap between simulation and reality in urban search and rescue". RoboCup Symposium 2006.


Device Level Simulation of Kurt3D Rescue Robots

Sven Albrecht, Joachim Hertzberg, Kai Lingemann, Andreas Nuchter, Jochen Sprickerhof, Stefan Stiene

University of Osnabruck
Institute for Computer Science
Knowledge-Based Systems Research Group
Albrechtstraße 28, D-49069 Osnabruck, Germany
[email protected]
http://kos.informatik.uni-osnabrueck.de/download/UOSSim/index.html

Abstract— USARSIM is a robot simulator used worldwide, deployed in Urban Search and Rescue (USAR) and in the context of the RoboCup Rescue Real Robot contest. This paper describes the USARSIM simulation of the KURT2 and Kurt3D robot platforms, which we are using in both education and research. As it simulates at the device level, a seamless integration of real robot control software with the simulation becomes possible. We evaluate the performance of simulating laser range scans and the camera system. In addition, we show a simulation of the rescue robots.

I. INTRODUCTION

Mobile robotics is a complex area of scientific research and education dealing with advanced technologies. The knowledge and experience needed for developing intelligent systems span the domains of electronics, mechatronics, and computer hardware and software. Furthermore, mobile robotics projects are tied to large investments. Realistic simulations and fast prototyping for developing mobile systems help to reduce the amount of time and to minimize the costs of hardware development. In addition, simulations offer the possibility to concentrate sooner on the interesting aspects of developing algorithms. Scientific education and research benefit from realistic, technically mature, and well-engineered simulators.

State-of-the-art computer games are cost-effective due to their development for the mass consumer market. As a freely available prerequisite, many students are already familiar with computer games and are extraordinarily motivated to go into more detail. Ego shooters simulate agents in 3D environments and contain a physics simulation [8].

The application area of the simulator USARSIM is rescue robotics. The software of rescue robots covers artificial intelligence, knowledge representation and fast control algorithm design. RoboCup is a test and demonstration scenario for evaluating robots and their software.

The paper is organized as follows: First we introduce RoboCup Rescue and USARSIM, followed by a description of how to simulate environments and the KURT2 robot platform, including a presentation of our USARSIM system architecture. Simulation performance and results conclude.

II. ROBOCUP, ROBOCUP RESCUE AND USARSIM

RoboCup is an international joint project to promote AI, robotics and related fields. It is an attempt to foster AI and intelligent robotics research by providing standard problems where a wide range of technologies can be integrated and examined. Though not as well known as the RoboCup Soccer leagues, the Rescue league, with its serious real-life background, has received more and more attention lately. The idea is to develop mobile robots that are able to operate in earthquake, fire, explosive and chemical disaster areas, helping human rescue workers to do their jobs. A fundamental task for rescue robots is to find and report injured persons. To this end, they need to explore and map the disaster site and inspect potential victims and suspicious objects. Currently deployed rescue robots have only limited usage and are mainly designed for searching for victims and for paths through rubble that would be quicker to excavate, for structural inspection, and for detection of hazardous material [3]. These robots are designed to go a bit deeper than traditional search equipment, i.e., cameras mounted on poles [3]. The RoboCup Rescue Contest aims at evaluating new rescue robot technology to speed up the development of working rescue and exploration systems.

In RoboCup Rescue, robots compete in finding as many "victims" (manikins) as possible, within a limited time, in a given, previously unknown arena, and reporting their life signs,

Fig. 1. Rescue arenas at RoboCup 2004, Lisbon. Top row: Orange and red area. Bottom left: Operator station. Bottom right: Example of a victim in a yellow area.


situations, and positions in a map of the arena, which has to be generated during exploration. The idea is that, in a real-life application, this map would help humans to decide where to send rescue parties. The arena consists of three subareas (yellow, orange, red) that differ in the degree of destruction, and therefore in the difficulty of traversal. In the "earthquake phase" between competition runs, the areas get completely rearranged, including the distribution of the victims. Fig. 1 shows some examples.

The robots in RoboCup Rescue are remotely controlled or surveyed by one or more operators. The operator has no direct view of the arena; only transmitted robot sensor data may be used for control. The degree of autonomy or telecontrol in the robots is at the team's discretion.

Scoring is based on an evaluation function that is modified between the competitions. This function incorporates the number of operators (the fewer the better), the map quality, the quality of the victim localization, the acquired information about the victim state, situation and tag, and the degree of difficulty of the area, but it also penalizes arena bumping and victim bumping.
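The exact formula is revised between competitions and is not given here; the following sketch only illustrates the general shape such a function can take (the weights and the functional form are invented):

/* Illustrative scoring shape, not the official RoboCup Rescue formula:
 * reward victim information, map quality and localization quality, scale
 * by arena difficulty, favor fewer operators, penalize bumping. */
double rescue_score(double victim_info, double map_quality,
                    double localization, double difficulty,
                    int operators, int bumps)
{
    double base = victim_info + map_quality + localization;
    return base * difficulty / (double)operators - 5.0 * bumps;
}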

USARSIM is a simulation of robots and scenarios for disaster and rescue robotics. It was developed by M. Lewis and J. Wang [8] to match the physical test scenarios of the American National Institute of Standards and Technology (NIST) [1], [2]. The focus of the development was the evaluation of human-robot interaction as well as research on cooperative robots [6], [7].

Sophisticated robot simulation in USARSIM is based on a game engine that stems from the computer game Unreal Tournament 2003 or 2004. Due to the use of a game engine, the simulator shows the excellent graphics and physics simulation of a commercial software product. Since games are produced for a mass market, the costs are low: about $15 for a license.

Unreal Tournament is a multiplayer ego shooter for Windows, Linux and MacOS. The graphics are outstanding, as expected from a commercial product. The Unreal environment includes a script language that offers developers the possibility to create objects and to control their behavior. The Unreal editor that comes with the game and the open source program Blender were used to develop environments and models of robot platforms.

Multiplayer ego shooters realize a client-server architecture where every player is a client connecting to the game server. The fast rendering of the scene graphics is done by the client. The server coordinates the players and is responsible for their interaction. The communication protocol is proprietary. However, the software Gamebots modifies Unreal Tournament such that agents can be controlled using an open TCP/IP interface. This interface provides sensor information to the agent control program.
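A minimal sketch of such a Gamebots-style client is shown below; the port number and the DRIVE message syntax follow common USARSim conventions and are assumptions on our part, not details taken from this paper:

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* connect to the (assumed) Gamebots TCP port of the Unreal server */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = { 0 };
    srv.sin_family = AF_INET;
    srv.sin_port = htons(3000);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0)
        return 1;

    /* send a differential-drive command; sensor messages stream back */
    const char *cmd = "DRIVE {Left 1.0} {Right 1.0}\r\n";
    write(fd, cmd, strlen(cmd));

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    close(fd);
    return 0;
}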

Physics simulation in Unreal Tournament is done by the Karma physics engine. Karma processes rigid body movements and allows simulating motors, wheels, springs, hinges and joints. From these base modules, complicated objects are built through compounding. The compound objects comply with the physics in the simulation.

III. SIMULATION OF RESCUE ROBOTS

A. Environment Simulation

USARSIM provides Unreal maps for all three RoboCup arenas. Fig. 2 shows a photo and an Unreal rendering of the orange and yellow arena. Using the Unreal editor, arbitrary scenes can be created. Fig. 3 shows a photo of our office corridor and the corresponding Unreal scene.

B. KURT2

KURT2 (Fig. 3, right) is a mobile robot platform with a size of 45 cm (length) × 33 cm (width) × 26 cm (height) and a weight of 15.6 kg. The core of the robot is a Centrino 1400 MHz with 256 MB RAM running Linux. An embedded 16-bit CMOS microcontroller is used to control the motor.

The robot is equipped with a 2D laser range finder, a Logitech web cam, including a pan and tilt unit, as well as a

Fig. 2. Real and simulated rescue arenas. Top: Orange arena real and simulated. Bottom: Red arena real and simulated. Taken from [8].

Fig. 3. Top: AVZ building Osnabruck and KURT2 real. Bottom: Building and KURT2 in simulation. More material available at http://kos.informatik.uos.de/download/UOSSim/index.html.


one-axis (horizontal) gyro and seven infrared sensors.

Simulation of KURT2 robots: For the simulation of the robot, a model of the hardware as well as the control software are needed. For the model, a mesh of the robot has to be generated, which is done using the Unreal editor and Blender. Fig. 3 (right) shows the real and the simulated robot. As for the control software, we extended the existing software of the real robots with interfaces that access either the actual hardware or the corresponding components of the simulation software. This way, only small changes to the software were necessary, and any future improvements benefit both applications, real world and simulated. The following code fragment shows an example of retrieving laser range data, either from a real SICK scanner or from its simulated counterpart.

#ifdef USARSIM
  /* simulated robot: fetch laser data through the USARSIM client */
  res = sim_client.SICK_read(fd_RS422, buf, 255);
#else
  /* real robot: read directly from the RS422 serial device */
  res = read(fd_RS422, buf, 1);
#endif

Currently the following components are simulated:

• A motor that drives the robot. Pulse-width-modulated signals are simulated.
• Odometry determining wheel revolutions in ticks.
• A laser scanner yielding 181 distance values of one slice of the environment in front of the robot.
• A gyro that estimates the current heading of the robot.
• A camera that provides images of the environment.

The camera device drivers are fed from a camera server, which in turn is fed by a snapshot of an Unreal spectator client. Fig. 5 (left) sketches the software structure and the data flow. For fast simulation, four computers form a cluster:

1) One computer is needed as Unreal server. The server simulates all robot sensors, except cameras.
2) The cameras are simulated on a second computer, running a small program that captures pictures from an Unreal spectator window.
3) The control loop of the KURT2 robot runs on a third computer, instead of the robot's notebook. Usually, the loop retrieves motor signals with 100 Hz and laser range scans with 75 Hz.
4) The user interface for driving the robot runs again on a separate computer. This computer is connected to the previous one, i.e., to the computer running the robot control loop. There are no direct connections to Unreal.

The right part of Fig. 5 shows the 4-PC simulation of KURT2. Fig. 6 shows the user interface of KURT2. The shown data is transmitted from the control loop of the robot.

C. Kurt3D

Kurt3D (Fig. 4, left) extends the KURT2 robot by a 3D laser range finder, i.e., the SICK 2D scanner is mounted on a tiltable unit, rotating the scanner about its horizontal axis. Furthermore, Kurt3D is equipped with two cameras, mounted on self-made

Fig. 4. Left: Real Kurt3D robot. Right: Simulated Kurt3D.

pan-and-tilt units on both sides of the scanner. The robot's height increases to 47 cm, its weight to 22.6 kg.

In addition, the control software is extended by a 3D environment mapping system, i.e., 6D SLAM (simultaneous localization and mapping) [4]. This system always yields a precise pose estimate in all six degrees of freedom (x, y, z position; pitch, roll and yaw angles), enabling the robot to create accurate three-dimensional maps.

Simulation of Kurt3D robots: The simulation of the Kurt3D robot is done according to section III-B. In the simulation, the scanner is attached to a tilting unit, yielding 3D scans. Thus, the simulation is done in the same way as in reality.

The USARSIM 3D scanner model is not used, since no corresponding scanner is available in mobile robotics.

IV. RESULTS

A. Simulation Performance

Our simulation has been tested on a PC cluster consisting of four 3.0 GHz Pentium IV computers running Linux. Tab. I shows the performance of the system. To achieve a seamless integration of simulation and real robot control software, we rewrote Kurt's control software to handle all devices in a non-blocking fashion: the control loop runs as fast as possible, and whenever new device data is present, the data gets processed (a sketch of such a polling loop is shown below). Standard Linux device drivers are used to buffer and hold back data.
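A minimal sketch of such a non-blocking loop, assuming POSIX descriptors and hypothetical handler names (handle_motor, handle_laser), is:

#include <sys/select.h>

void handle_motor(int fd);  /* hypothetical device handlers */
void handle_laser(int fd);

/* Poll all device descriptors without blocking; process whatever data has
 * arrived and let the standard drivers buffer the rest, as described above. */
void control_loop(int fd_motor, int fd_laser)
{
    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(fd_motor, &rd);
        FD_SET(fd_laser, &rd);
        int maxfd = (fd_motor > fd_laser ? fd_motor : fd_laser) + 1;
        struct timeval tv = { 0, 0 };           /* do not block */
        if (select(maxfd, &rd, NULL, NULL, &tv) > 0) {
            if (FD_ISSET(fd_motor, &rd)) handle_motor(fd_motor);
            if (FD_ISSET(fd_laser, &rd)) handle_laser(fd_laser);
        }
        /* ...one iteration of the robot controller runs here... */
    }
}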

The simulated 3D scanner needs 61 sec for acquiring a 3D scan of 181 × 120 data points. This is due to the fact that setting the servo motor values in Unreal is not instantaneous.

B. Simulated 3D mapping

We have tested our 3D environment mapping system [4] in the simulated AVZ building and in the USARSIM yellow arena as provided by NIST. Fig. 7 shows images of the arena vs. simulated depth data. Fig. 9 presents the result of an octree representation (top right) and a marching cubes algorithm (bottom right) that extracts 3D meshes reliably from the data points and vectorizes the data.

V. CONCLUSIONS

The presented work introduces a simulation of KURT2 robots using USARSIM. The simulation is based on the computer game Unreal Tournament 2004. Excellent 3D graphics


Fig. 5. Left: The software architecture for simulating KURT2 robots. The arrows show the data flow, lines represent TCP/IP connections, double lines are created when programs are linked, and dashed lines are generated when data files are read. Right: Four computers are necessary for a fast KURT2 simulation. The lines represent TCP/IP connections and the arrows the data flow.

TABLE I
COMPUTING TIME OF THE DEVICES (PENTIUM-IV-3000) USING DIFFERENT SIMULATED ROBOCUP RESCUE ARENAS. FOR COMPARISON: THE REAL KURT2 YIELDS ENCODER TICKS WITH A FREQUENCY OF 100 HZ, LASER SCANS WITH 75 HZ AND CAMERA IMAGES WITH ABOUT 10 FPS.

Kurt3D device         computing time (4-PC cluster)   computing time (single computer)
scanner (181 values)  50 ms                           200 ms
gyro (INU sensor)     50 ms                           200 ms
encoder sensor        50 ms                           100 ms
camera                400 ms                          400 ms

Fig. 7. Left: Rendered images from the AVZ building. Right: Simulated 3D scan.

and physics simulation make a seamless integration into already existing robot control architectures possible. We have strictly followed the principle of simulating at the device driver level, resulting in the need for four standard PCs for simulation, where two computers are used for the robot and interface software, respectively. We provide a truly realistic simulation environment for beta-testing our real robots and for experimenting with new control software or with simulations of sensors that are not yet physically available. This is the goal towards which we have worked, given that we keep participating in the Real Robot league (Kurt3D in 2004 [4]; Deutschland1 in 2005 [5]).

Unfortunately, these results cannot be used in the RoboCup Rescue Virtual Robots league. We started to join the USARSIM community with our original Kurt3D software, into which the simulator is integrated. Kurt3D's software development has lasted for 6 years; the software is jointly developed with the Fraunhofer Institute AIS and with RTS, University of Hannover. It is subject to regulations and cannot be made available to the public, as demanded by the current rules. However, parts of it, namely the complete Unreal parts as well as the interface to our robot, are available on our website.

Moreover, we believe that the Rescue Virtual Robots league should focus on the seamless integration of real robot control


Fig. 6. The user interface for driving Kurt3D robots. The laser range data and the camera data originate from the simulation.

software with the simulator. It does not make sense to develop programs just for driving a robot in an Unreal environment, without having the link to real robots.

Needless to say, a lot of work remains to be done. In future work, we plan

• to integrate simulations of the RTS Scandrive [5] and the FLIR infrared camera into our system,
• to enhance the realism of the laser scanner. So-called salt-and-pepper noise will be added to the simulation to generate jump edge outliers as in real scans, and
• to improve the simulation of the gyro in order to yield a drift similar to the real sensor.

In addition, we plan to use USARSIM in projects dealing with service robotics.

ACKNOWLEDGMENT

We would like to thank Jijun Wang and Mike Lewis for discussing simulator details. Many thanks to Christian Taubitz and Marvin Drogies for helping to model the Osnabruck office environment. The marching cubes algorithm was implemented by Thomas Wiemann. Furthermore, we thank Kai Pervolz and Hartmut Surmann for preceding joint research, and Oliver Wulf and Bernardo Wagner for the work in the Deutschland1 team.

REFERENCES

[1] A. Jacoff, E. Messina, and J. Evans. Experiences in deploying test arenas for autonomous mobile robots. In Proceedings of the 2001 Performance Metrics for Intelligent Systems (PerMIS) Workshop, in association with IEEE CCA and ISIC, Mexico City, Mexico, 2001.

[2] A. Jacoff, E. Messina, B. A. Weiss, S. Tadokoro, and Y. Nagakawa. Test arenas and performance metrics for urban search and rescue robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '03), Las Vegas, NV, U.S.A., October 2003.

[3] R. R. Murphy. Activities of the Rescue Robots at the World Trade Center from 11-21 September 2001. IEEE Robotics & Automation Magazine, 11(3):851–864, September 2004.

[4] A. Nuchter, K. Lingemann, J. Hertzberg, H. Surmann, K. Pervolz, M. Hennig, K. R. Tiruchinapalli, R. Worst, and T. Christaller. Mapping of Rescue Environments with Kurt3D. In Proceedings of the IEEE International Workshop on Rescue Robotics (SSRR '05), pages 158–163, Kobe, Japan, June 2005.

[5] A. Nuchter, O. Wulf, K. Lingemann, J. Hertzberg, B. Wagner, and H. Surmann. 3D Mapping with Semantic Knowledge. In Proceedings of the RoboCup International Symposium, Osaka, Japan, June 2005.

[6] J. Wang, M. Lewis, and J. Gennari. A game engine based simulation of the NIST urban search and rescue arenas. In Proceedings of the 2003 Winter Simulation Conference, pages 1039–1045, 2003.

[7] J. Wang, M. Lewis, and J. Gennari. USAR: A game based simulation for teleoperation. In Proceedings of the 47th Annual Meeting of the Human Factors and Ergonomics Society, Denver, CO, October 2003.

[8] Jijun Wang. USARSim - A Game-based Simulation of the NIST Reference Arenas. http://usl.sis.pitt.edu/ulab/usarsim download page.htm and http://sourceforge.net/projects/usarsim, 2005–2006.


Fig. 8. Left: Unreal map of the AVZ building (top) and of the yellow arena (bottom). Right: Corresponding marching cube representation (top) and point cloud in a bird's eye view (bottom).

Fig. 9. Marching cube mesh of the yellow arena


A Simulation of Fluid Objects in Disasters - Tokai heavy rainfall simulation using IDSS -

Tomoichi Takahashi
Meijo University
Tenpaku, 468-8502, Japan
Email: [email protected]

Tesuhiko Koto
NIED-EDM
Kawasaki Labs., 201-0855, Japan

Ikuo Takeuchi
University of Tokyo
Chiyoda, Tokyo, 101-0021, Japan

Itsuki Noda
National Institute of AIST
2-41-6 Aomi, Koto-ku, Tokyo 135-0064, Japan

Abstract— A simulation method for fluids in a comprehensive rescue simulation system is proposed. The idea of a comprehensive simulation system provides a methodology that can handle disasters and the actions people take in disaster areas. The RSS system has been dealing with rescue agents, civilians and the environments around them, such as buildings and roads; these are solid objects. Fluid objects such as smoke and water are key components in disaster situations, but it is difficult to merge fluid objects with solid ones. A flood simulation and a fluid object representation are implemented in IDSS, and the simulation results are verified against data from a real heavy rainfall disaster.

I. INTRODUCTION

Disasters have afflicted us and will continue to do so. From 2003 to 2006, five earthquakes with more than 1,000 deaths each were reported. Among them was the tsunami caused by the earthquake off Northern Sumatra.

We have joined the RoboCup Rescue Simulation (RSS) project, which aims to develop a comprehensive disaster rescue simulation system [5]. The RSS system can combine various disaster simulations, such as fire spread, building collapse, rescue operations, evacuation behavior, etc., some of which occur simultaneously during earthquakes, and present them as coherent scenes.

The idea of a comprehensive simulation system provides a methodology that can handle disasters and the actions people take in disaster areas. Table I shows disasters and the disaster simulations that are required to be implemented. From the table, the component simulators are common to different disasters, ranging from earthquakes and typhoons, such as Hurricane Katrina in September 2005, to man-made disasters. The RSS system has been dealing with solid objects such as rescue agents and civilians, and the environments around them such as buildings and roads. Fluid objects such as smoke and water are also key components in disaster situations. Smoke not only decreases human sensing abilities but also suffocates. Earthquakes also cause tsunami and floods owing to the collapse of banks.

In this paper, we propose a simulation method for fluids in a comprehensive rescue simulation system. There are several models corresponding to the causes, such as dam floods, river overflow, tsunami, etc. [1][11][3]. With respect to tsunami, Katada et al. developed the Tsunami Disaster Scenario Simulator and built a hazard model to forecast human loss [2]. Flood simulation in urban areas has other properties. Downpouring rainfall swells a river and leads to overflow, and when the rainfall exceeds the capacity of the urban drainage systems, the rainwater flows back to low places through manholes. Urban floods flow into basements and underground malls and may take a toll.

The requirements for flood simulation and our model are presented in the next section. The third section describes its implementation on IDSS (Integrated earthquake Disaster Simulation System) [6]. Experimental results for the Tokai heavy rainfall and future research themes are discussed in the final section.

II. REQUIREMENTS AND PROPOSED MODEL FOR FLOOD SIMULATION

A. Flood features and requirements

The following are features of floods and the resulting requirements for flood simulators:

• Water does not choose places. We cannot predict where collapses of embankments caused by earthquakes or downpours will occur, and water flows into lower levels. Flood simulation should therefore be possible at any given place without any special preparation.

• Flood or smoke spreads faster and wider than fire. Because water spreads fast, it requires short simulation time steps. Fluid flows according to natural laws; the calculation of the flow requires altitude data and boundary conditions, which are three-dimensional data of the surrounding buildings.

• Flood walls, firewalls, drainage pumps and fire pumps are effective in saving lives. Human activities using such tools should be incorporated into flood simulations. Floods also affect fire spread and collapse buildings, which may injure the people inside.

• During floods, people evacuate to higher places. High buildings work as refuges from floods. On the other hand, underground malls become dangerous places from which people find it difficult to evacuate. These facilities are important features for simulating human evacuation and rescue actions.

We assume the following are the minimum requirements for flood simulation: (1) simulation of the flood's spread at any place,


TABLE I
DISASTERS, NECESSARY SIMULATION COMPONENTS AND PURPOSES

disasters: natural (earthquake, tsunami, typhoon, flood) and man-made (terror)
component simulators: fire, smoke, collapse (building, mudslide), human activity, traffic, evacuation
data: GIS, facilities, life lines
items to be evaluated: human lives, damages, public property, private property
time to be used: before disaster (analysis, planning), after disaster

Fig. 1. Water objects and building object in grid

(2) handling of human rescue operations such as setting flood walls, (3) representation of evacuation to higher floors, and (4) integration of the flood into a comprehensive simulation system.

B. Flood simulation model & fluid object representation

We use a diffusion equation model.

\[ \frac{\partial w}{\partial t} = S + \alpha \left( \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} \right) \tag{1} \]

where $w$ is a quantity that involves the water level at time $t$, $\alpha$ is a diffusion coefficient, and $S$ is a parameter indicating sources or sinks of water. In the simulation, the area is divided into grid cells, with a water property $w_{i,j}$ at every cell $(i,j)$, which is calculated with the difference equation derived from eq. (1):

\[ \frac{\partial w_{i,j}}{\partial t} = \alpha \left( \frac{\partial^2 w_{i,j}}{\partial x^2} + \frac{\partial^2 w_{i,j}}{\partial y^2} \right) + S_{i,j} \tag{2} \]

The reasons we adopted the diffusion equation model are as follows:

• The above equation is valid when water spreads on a flat floor. Regarding w(i,j) as the sum of the elevation and the water level at position (i,j), it can easily handle water flowing downwards. In our implementation, building objects cover cells and their heights are treated as a portion of w, just like the land level. However, only the water part of w is moved to neighbors.

• Positive values of S correspond to rainfall or collapsed river embankments, and places with negative values correspond to the positions of discharge pump stations or sewerage manholes.

Fig. 2. Flood simulation model

C. Representation of boundary

The size of the grid is set to represent the buildings and roads that are over it (Fig. 1). The blue cells (the front five cells) are cells whose water objects have a water component of property w greater than zero, while it is zero at the other cells. The two high cells in the back row show a building object. Their w values, to which the building's height is added, are higher than those of the other cells, so water cannot enter these buildings: during a flood, water flows to low places and does not come into the buildings.

Fig. 2 (a) shows two neighbors with the same water level. In the left case the water remains where it is, while in the right case it flows to the right cell. Fig. 2 (b) shows that the variations of w(i, j) from time = t to time = t + 1 are limited to the water-level portions, since the land portion of w does not move.
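A minimal C sketch of one explicit time step of eq. (2) under this boundary treatment follows; the array names and the clamping are our own, and the grid border is simply treated as a closed boundary:

#define NX 75
#define NY 100

/* One finite-difference step of eq. (2). elev holds the ground (plus
 * building) height, water the water depth; w = elev + water is diffused,
 * but only the water component changes. S > 0 models rainfall or an
 * embankment collapse, S < 0 a drainage pump. */
void flood_step(const double elev[NX][NY], double water[NX][NY],
                const double S[NX][NY], double alpha, double dt, double dx)
{
    static double next[NX][NY];
    for (int i = 1; i < NX - 1; i++) {
        for (int j = 1; j < NY - 1; j++) {
            double w   = elev[i][j] + water[i][j];
            double lap = (elev[i+1][j] + water[i+1][j]
                        + elev[i-1][j] + water[i-1][j]
                        + elev[i][j+1] + water[i][j+1]
                        + elev[i][j-1] + water[i][j-1]
                        - 4.0 * w) / (dx * dx);
            double dw = dt * (alpha * lap + S[i][j]);
            next[i][j] = water[i][j] + dw;
            if (next[i][j] < 0.0)       /* a cell cannot dry below zero */
                next[i][j] = 0.0;
        }
    }
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            water[i][j] = next[i][j];
}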

Our proposed method makes it easier to set and calculate boundary conditions than other flood simulation methods such as CFD (Computational Fluid Dynamics), and makes it possible to calculate damage from floods above floor level. It satisfies the first two of the minimum requirements:

1) given GIS data (i.e., altitude data for w_{i,j}), it can calculate the flood's spread,
2) human rescue operations such as setting flood walls can be represented by setting S.

III. FLOOD SIMULATION USING IDSS

We implemented the flood simulator as a component simulator of IDSS (Integrated earthquake Disaster Simulation


System) [6]. IDSS has been designed to simulate disasters and rescue operations with a finer resolution over larger areas than the RSS system.

A. Distributed simulation of IDSS

IDSS combines a space-time geographical information database, component simulators and real-world interfaces such as sensors and robots. One of IDSS's features is that it supports the scalability of the simulated world by distributing the computation [7].

In IDSS, a simulated area can be divided into small regions for which distributed kernels are responsible for the local simulations (Fig. 3). Local component simulators are connected to the distributed kernels. The kernels control the simulation process as follows:

1) control of SSTD (shared space-time data) read & write requests from component simulators,
2) control of the simulation clock,
3) exchange of simulation results with neighbor kernels,
4) communication with a master kernel that controls the whole simulation.

B. API for component simulators

IDSS provides the S-API (Simulator Module Application Program Interface) for component simulator developers. Using the S-API, component simulator developers can do distributed computation. The procedure is the following (a code sketch is given below):

1) read the current world status that can be viewed from the SSTD,
2) do the simulation over an assigned region,
3) write the results of the simulation back to the SSTD,
4) wait until the next simulation cycle.

Fig. 3 shows how a flood flows from region 1 to region 2. Water spreads in region 1 at first. The flood simulator in charge of region 1 simulates it, and water flows into the overlap region. Via the SSTD, the flood simulator in charge of region 2 reads the ws in the overlap region and simulates the spread of the flood in region 2.
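In code, the four-step cycle above might look like the following sketch; the idss_* names are hypothetical, since the paper describes the steps but not the actual S-API signatures:

typedef struct idss_kernel idss_kernel;  /* opaque handle to a kernel */
typedef struct sstd_view   sstd_view;    /* local view of the SSTD    */

extern sstd_view *idss_read_region(idss_kernel *k);          /* step 1 */
extern void idss_write_region(idss_kernel *k, sstd_view *v); /* step 3 */
extern void idss_wait_next_cycle(idss_kernel *k);            /* step 4 */
extern void simulate_region(sstd_view *v);  /* step 2: e.g., the flood model */

void component_simulator(idss_kernel *k)
{
    for (;;) {
        sstd_view *v = idss_read_region(k);  /* 1) read world status   */
        simulate_region(v);                  /* 2) simulate the region */
        idss_write_region(k, v);             /* 3) write back to SSTD  */
        idss_wait_next_cycle(k);             /* 4) wait for next cycle */
    }
}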

C. Generation of simulated area from GIS data

For flood simulations, geographical data such as rivers, coasts, roads and crossings are necessary and must also be represented as three-dimensional data. GIS data are provided freely by various organizations [9]. In Japan, GSI, the national surveying and mapping organization, puts digital maps of Japan on the Internet in XML form. For our experiments, the following steps are taken to complete the GSI-provided data.

• Building information is also essential for flood simulation. Buildings provide refuges in urban areas and boundary conditions for flood spread. However, they are privately owned properties and, except for public buildings, are not included in the XML form. In our experiments, building data are automatically generated [10].

• Altitude data corresponding to the grid size are interpolated from the provided resolution (Fig. 4).

(a) computation and data exchange

(b) time sequence of flood spread

Fig. 3. Distributed simulation using IDSS and an example

Fig. 4. Generated altitude data for Tenpaku


Fig. 6. Time sequence of the simulation (from left: step = 0, 432, 864; blue = water)

TABLE II
CHANGE OF WATER HEIGHT (RAIN WITH MAX. RAINFALL, S = 0.00258)

                    elapsed time [h]
water height        4       8       12      16      20      24
< 0.1 m             23.6%   22.0%   21.0%   20.4%   20.1%   19.8%
0.1 m to < 0.5 m    55.5%   19.5%   17.0%   15.6%   14.7%   13.9%
0.5 m to < 1 m      15.7%   34.4%   6.5%    5.3%    4.9%    4.6%
> 1 m               5.2%    24.1%   55.5%   58.6%   60.3%   61.7%

Fig. 5. Damage from the Tokai heavy rainfall

IV. EXPERIMENTS

We take the heavy rainfall disaster at Tokai (around Nagoya City) on Sep. 11, 2000 as the test case [8] (Fig. 5). The simulation conditions follow, and Fig. 6 shows the simulation results. A PC (Pentium 4, 3 GHz, 2 GB memory) is used, and the simulation time is 674 s.

1) The Tenpaku and Nonami areas were among the most damaged. We select Tenpaku ward (5.6 km × 7.4 km). The area is divided into 75 × 100 grid cells.

2) The simulation period is set to one day, from Sep. 11 to 12, and one time step is set to 100 seconds. This time step and grid size restrict the water spread speed to less than 0.7 m/s.

3) Hourly rainfall records are publicly available. For example, the maximum hourly rainfall from Sep. 11 to 12 was 93 mm.¹ The records are converted into S_{i,j} by dividing by the cell area and multiplying by the simulation step; the converted value is S = 0.00258 for the maximum hourly rainfall of 93 mm (see the worked conversion after this list).

4) The capacities of the drainage pumps are also open data; they are 18 m³/s (the converted value is S = -0.326). There are two pumps in this area; we set them at locations (50,25) and (50,50) [12].

¹The average monthly rainfall in August is 145 mm, and the rainfall in the headwater areas was reported to be higher than in the Tenpaku area.
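To make the conversion concrete, the following arithmetic reproduces both S values, assuming one grid cell of about (5600 m / 75) × (7400 m / 100) ≈ 74.7 m × 74 m and the 100 s time step. Rainfall is already a depth per unit area, so only the time-step fraction matters; a pump removes a volume flow, so it is first divided by the cell area:

\[ S_{\text{rain}} = 0.093\,\tfrac{\text{m}}{\text{h}} \times \frac{100\,\text{s}}{3600\,\text{s/h}} \approx 0.00258\ \text{m per step} \]

\[ S_{\text{pump}} = -\,\frac{18\,\text{m}^3/\text{s}}{74.7\,\text{m} \times 74\,\text{m}} \times 100\,\text{s} \approx -0.326\ \text{m per step} \]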

A. Verification of results

1) Experiment 1: Table II shows the simulation results when rain falls constantly at the maximum rainfall for a day. This setting has more rainfall than the real situation. The bottom-right value in the table shows that water is over 1 m high in 61.7% of the grid cells after 24 hours. The dark parts in Fig. 7 show the flooded area in Tenpaku. The ratio of the real flooded area is about 20%, so the simulated ratio is higher than the real one.

2) Experiment 2: The upper part of Fig. 8 shows the observed rainfall. The solid line in the lower part shows the changes in water height at one grid cell when rain falls continuously at the daily average rate; the dashed line corresponds to rainfall following the observed hourly values. Around 17 hours (600 time steps), when the rain fell heavily, the water level increases rapidly. Table III shows the ratios of flooded area simulated with the average rainfall and the hourly rainfall. They are 21.2% and 24.4%, respectively, and come near to the real values.

3) Experiment 3: It is desirable to rescue efficiently under limited resources. Setting drainage pumps is one of the rescue operations against floods. The lower row in Table III shows simulation results when the pumps are set at lower locations than the positions in Experiment 2. The ratios decrease by about


0.2% and 0.4% for the average rainfall and hourly rainfall cases, respectively.

These experiments show that the flood simulation reflects real disasters.

Fig. 7. Spread of flood in Tenpaku ward.

Fig. 8. Hourly rainfall changes (top) and simulated water height (bottom).

TABLE III
RATIOS OF FLOODED AREA VS. POSITION OF PUMPS

pump positions            day average   hour rainfall
Ex. 1 and 2               21.2%         24.4%
Ex. 3 (lower positions)   21.0%         24.1%

B. Distributed computation

As pointed out in II-B, a simulation that takes buildings' effect on flood spread into consideration requires a grid size as small as the buildings. A smaller grid size increases the number of cells and requires more computation time. Territorial division is one method to reduce the computation time.

Table IV shows simulation results using IDSS. The target area is made wider by combining wards, from one ward to

TABLE IV
SIMULATION TIME (SECONDS)

(1) Whole size constant
                                       division number
wards# (area)            grid size     1     2     4
1 ward (45.67 km²)       100 × 74      452   363   327
2 wards (77.68 km²)      94 × 100      573   424   409
3 wards (94.00 km²)      78 × 100      494   355   348
4 wards (111.90 km²)     61 × 100      417   342   321

(2) Divided size constant
                         division number (whole grid #)
wards# (area)            1 (100)       4 (200)
1 ward (45.67 km²)       452           1,159
2 wards (77.68 km²)      573           1,558
3 wards (94.00 km²)      494           1,334
4 wards (111.90 km²)     417           1,253

four wards (Fig. 9). The whole area is divided into two and four regions. PCs with the same performance (Pentium 4, 3 GHz, 2 GB memory) are assigned to each divided region; if the number of divisions is two (four), two (four) PCs are used to simulate the whole area. The PCs are linked by a Gigabit Ethernet network.

The computation of the overlapped areas and the data exchanges are overhead introduced by dividing areas. Apart from this overhead:

(1) dividing an area while keeping the whole size constant, the computation time reduces to 1/n;
(2) dividing an area while keeping the divided size constant (so the whole area increases as n), the computation time remains independent of n.

Table IV shows the simulation times. The values show some merits of region division; however, there is room for improvement.

V. DISCUSSIONS AND CONCLUSION

Several disasters under various settings have been simulated using RSS, and comparisons among simulation results have been made. Comparing simulation results with real ones becomes necessary as the simulations become realistic. In this paper, we show that a flood simulation based on a diffusion equation model is easily implemented in IDSS. The experiments in A show that the flood simulation matches the real disaster qualitatively and that rescue operations affect the simulation results. The experiments in B indicate that IDSS's scalability is useful for flood simulation. These results show that the proposed method is promising for a disaster simulation system.

The last two requirements, evacuation to higher floors and integration into a comprehensive simulation system, are not implemented yet. Future issues, including these, are:

• Not only must fluid-related agent commands, such as setting a water wall or evacuating to higher floors or from undergrounds, be provided; a representation of building interiors and a simulator that changes agents' states also need to be implemented.


Fig. 9. Maps used for experiment B (from left: 1 ward, 2, 3, and 4 wards)

• With regard to integration with other simulators, humans can move in water or may lose visibility because of smoke; traffic simulators or human state monitoring simulators should incorporate the damage computed by the flood simulator.

• Sewage systems and smoke control equipment are also important to make the simulation realistic.

• Takahashi reported on comparing simulation results using RSS with local governments' disaster estimates [13]. The local government data are also primitive, like the data used in this paper, so verification methods for simulation results are important.

ACKNOWLEDGMENT

The authors thank Takayasu Suzuki, who performed the experiments, and the organizations that provide GIS data.

REFERENCES

[1] M. Gouda, K. Karner, R. Tatschl: Dam Flooding Simulation Using Advanced CFD Methods, WCCM V, Fifth World Congress on Computational Mechanics, Vienna, Austria, July 2002.
[2] http://dsel.ce.gunma-u.ac.jp/simulator/owase/en/
[3] http://fmd.dpri.kyoto-u.ac.jp/ toshi/index.html
[4] http://www.pmel.noaa.gov/tsunami/home.html
[5] H. Kitano, S. Tadokoro, I. Noda, H. Matsubara, T. Takahashi, A. Shinjou, S. Shimada: RoboCup Rescue: Search and Rescue in Large-Scale Disasters as a Domain for Autonomous Agents Research, IEEE International Conference on System, Man, and Cybernetics, 1999.
[6] I. Takeuchi, S. Kakumoto, Y. Goto: Towards an Integrated Earthquake Disaster Simulation System, 1st Inter. Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disaster, July 2003. http://www.dis.uniroma1.it/ rescue/events/padova03/papers/index.html
[7] T. Koto, I. Takeuchi: A Distributed Disaster Simulation System That Integrates Sub-simulators, 1st Inter. Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disaster, July 2003. http://www.dis.uniroma1.it/ rescue/events/padova03/papers/index.html
[8] http://www.city.nagoya.jp/13doboku/toukai gouu/toukaigouu.htm
[9] http://zgate.gsi.go.jp/ch/jmp20/jmp20 eng.html
[10] Tanigawa, Takahashi, et al.: Urban Flood Simulation as a Component of Integrated Earthquake Disaster Simulation System, Proc. 2005 IEEE Int. Workshop on Safety, Security and Rescue Robotics, pp. 248-252, 2005.
[11] http://www.cbr.mlit.go.jp/kisojyo/rootup/top.html
[12] http://www.pref.aichi.jp/kensetsu-somu/owari-kensetsu/kasen/nakae/GEKITOKU.htm/kaze/nagoyanokawa/gouu/nagoya00002599.html
[13] T. Takahashi: Requirements to Agent Based Disaster Simulations from Local Government Usages, First International Workshop on Agent Technology for Disaster Management, 2006. http://www.ecs.soton.ac.uk/ sdr/atdm/


Information Sharing and Integration

in Rescue Robots and Simulations

Itsuki Noda

Information Technology Research Institute
National Institute of Advanced Industrial Science and Technology
Tsukuba, JAPAN
(E-mail: [email protected])

Abstract

Collecting and sharing disaster information about a damaged area is the most important activity to support decision-making in rescue processes. We assume that such information sharing activities can be modeled as simple database accesses, and design a standard protocol for information sharing among rescue systems. The protocol is built on several existing XML standards like GML (Geographic Mark-up Language) and SOAP (Simple Object Access Protocol), with some extensions that enable integrating various systems flexibly and on-the-fly.

1 Motivation

In huge disasters, collecting information is the most important action to make rescue effective. Generally, the headquarters of rescue departments like local governments, police and fire offices need to know the status of the damaged area as much as possible for suitable decision-making. Therefore, the governments are required to build robust and effective information systems for rescue.

Information systems are a central issue for the general public as well. People in damaged areas need to receive and send information in order to evacuate, ask for help, and report their status to their families and others.

The Special Project for Earthquake Disaster Mitigation in Urban Areas (DDT Project) [2, 9, 5] was founded by the Japanese Government to promote research on mitigating the damage of huge disasters using advanced technologies, including ICT, robotics and computer simulation. One of its goals is to provide a standard framework for the information and communication infrastructure of rescue systems. To achieve this goal, we are developing common frameworks for robust networking and flexible information sharing, which help to collect sensing data about damage and to control search-and-rescue devices like robots, sensor networks, PDAs, and so on.

2 Requirements for Rescue Information System

The primary role of the robots and devices developed in the DDT Project is to collect disaster information to support effective rescue activities. Because such robots and devices provide only fragments of information, we need a database system to store and integrate such fragments.

The essential features of disaster information are that it is location-related and time-sensitive. This means that the database to integrate it should be a kind of geographical information system (GIS), which is designed to provide facilities to represent geographical objects, to retrieve objects using location as a key, to link information to a certain point, and to store


information about the existence and changes of objects on a map.

Such a database will also provide a facility to integrate rescue robots and rescue information systems. Figure 1 shows an overview design of the total integrated information system for rescue. Here, information is transferred via the database using a common protocol.

We also need the viewpoint of dual use when we consider the design of a rescue information system. Dual use provides the following merits for rescue systems:

• It is easy and low-cost to update the common platform. The cost of maintenance and updates of the protocols and devices is an important issue for their acceptance in the rescue system. Products for consumers are generally low-cost and up-to-date.

• It is easy to find spares when some modules have trouble. If the modules or devices are compatible with the ordinary ones used by many people at home or in the office, the rescue team does not need to carry huge stocks of spare modules.

When we think about dual use, extendibility and flexibility are key issues. Because ICT progresses in so-called dog years, we need to choose technologies carefully from the viewpoint of life-cycle. Extendibility and flexibility are major factors that influence the life-cycle.

3 Mitigation Information Sharing Protocol

3.1 Overview

Based on the discussions in the previous sections, we have been designing a protocol for rescue information sharing called MISP (Mitigation Information Sharing Protocol). MISP provides functions to access and maintain a geometrical information database over networks. We suppose that a total system using MISP forms a client-server style, in which the server is a database and the clients are data providers and/or data requesters. The protocol consists of a pure and simple XML representation, so it is easy to develop systems that handle this protocol.

In MISP, geometric properties should be represented by the geometric primitive types of GML [7]. While GML provides widely varied expressions for geometric primitives, we use only points, line-strings, polygons, and geometry collections. This set of primitives is rich enough to construct a GIS for rescue purposes, and can be handled effectively using spatial indexing techniques like the R-tree.

As a database protocol, we take WFS (Web Feature Service) [6] with a SOAP envelope as a base. Currently, the following protocols are available in MISP:

• GetFeature: Query data in the database.
• Transaction: Manipulate data in the database.
  – Insert: Add new data into the database.
  – Update: Modify a part of existing data in the database.
  – Delete: Remove existing data from the database.
• GetCapabilities: Ask information about the functions the database provides.
• DescribeFeatureType: Request information about the XML structure of a certain type of data the database can handle.

In addition to these WFS protocols, MISP also has an additional protocol, RegisterFeatureType, to define a new type and its XML structure by XML schemas. Using this protocol, the user can add a new type of data without stopping and re-designing the whole system. This kind of flexibility is important for the rescue system because it is difficult to define everything before a disaster. The RegisterFeatureType protocol makes it possible to connect new systems and to handle new types of information under emergency situations. Figure 2 shows the four major protocols in MISP: RegisterFeatureType, GetFeature, Insert and Update.
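As a hedged illustration of such a query (the wfs/ogc/gml element names follow standard WFS 1.1.0 conventions, but the ddt:sensedDataInfo type and the filter details are our assumptions, and in MISP the request would additionally be wrapped in a SOAP Envelope), a GetFeature request selecting all sensed-data records inside a bounding box could be held as a C string and sent to the server:

/* Illustrative MISP/WFS GetFeature request; not taken verbatim from MISP. */
static const char *get_feature_request =
    "<wfs:GetFeature service=\"WFS\" version=\"1.1.0\""
    "    xmlns:wfs=\"http://www.opengis.net/wfs\""
    "    xmlns:ogc=\"http://www.opengis.net/ogc\""
    "    xmlns:gml=\"http://www.opengis.net/gml\">"
    "  <wfs:Query typeName=\"ddt:sensedDataInfo\">"
    "    <ogc:Filter>"
    "      <ogc:BBOX>"
    "        <ogc:PropertyName>ddt:location</ogc:PropertyName>"
    "        <gml:Envelope>"
    "          <gml:lowerCorner>0.0 0.0</gml:lowerCorner>"
    "          <gml:upperCorner>100.0 100.0</gml:upperCorner>"
    "        </gml:Envelope>"
    "      </ogc:BBOX>"
    "    </ogc:Filter>"
    "  </wfs:Query>"
    "</wfs:GetFeature>";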



Figure 1: Integrated Rescue Systems via Geographic Information Database


We also use a SOAP (Simple Object Access Protocol) Envelope as the envelope of the above database protocol. While we currently assume that the database system forms a simple client-server style, we expect that such a rescue database system should be designed as a GRID and/or P2P system for robustness and scalability. SOAP is introduced to provide the extendability to adapt to GRID/P2P systems.

3.2 DaRuMa

DaRuMa (DAtabase for Rescue Utility MAnagement) is a reference system that is compliant with MISP. DaRuMa consists of MySQL and a middleware written in Ruby/Java. The middleware translates between MISP and SQL. Figure 3 shows an overview of the DaRuMa system.

In order to utilize the effectiveness of MySQL as an RDBS, DaRuMa's middleware flattens XML

Figure 2: Four Major Protocols in MISP


Figure 3: System Overview of DaRuMa

structures into SQL tables as much as possible. In addition, DaRuMa builds indexes of the geometrical properties in the XML structure, because most queries in rescue situations are related to the locations of data.

Because DaRuMa/MISP is utilized as a client-server system over networks, we can apply it in various styles with existing and newly developed systems, as in Figure 4. We have already developed a CSV-MISP converter and libraries through which most existing information systems can connect.

3.3 Sensed Data Representation

MISP and DaRuMa provide a quite general framework for sharing information, but do not specify how to represent actual data in XML format. While such an unrestricted specification enables DaRuMa (and other MISP-compliant systems) to connect to existing systems, we also need guidelines for representing shared information in newly developed systems, to enable effective cooperation among systems.

As the first step in building the guideline, we are designing a representation of sensing data taken by sensor and robot networks, based on GML's directedObservation. The features of the representation are:

• Sensing data itself (ddt:sensedDataEntity) and its meta-data (ddt:sensedDataInfo) are separately

Figure 4: Various Styles to Connect Existing and Newly-developed Systems

represented. Because sensing data like images and movies are generally large, the size of the query results may be huge even if most of them are not used. Separating actual data and meta-data enables the system to filter data using the meta-data, which is generally smaller than the actual data.

• Each property can include a noise element, which indicates its noise and ambiguity level. Because it is difficult to scan a whole area in a uniform way during a disaster, such information is required in order to integrate data from different kinds of sensing devices.
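The meta-data split suggests a two-step access pattern: query the small ddt:sensedDataInfo records first, then download only the payloads that are actually needed. A sketch (hypothetical client API; misp_query stands in for a MISP query request):

# Sketch of the two-step access pattern enabled by the meta-data split.
# misp_query() is a hypothetical helper that issues a MISP query and
# returns the matching records as dictionaries.
def fetch_images_near(misp_query, x, y, radius):
    # Step 1: fetch only the small ddt:sensedDataInfo records whose
    # location falls inside the region of interest.
    infos = misp_query("ddt:sensedDataInfo",
                       bbox=(x - radius, y - radius, x + radius, y + radius))
    # Step 2: follow the xlink reference and download the large
    # ddt:sensedDataEntity payload only for the records we need.
    return [misp_query("ddt:sensedDataEntity", gml_id=info["resultOf"])
            for info in infos if info.get("type") == "image/jpeg"]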

4 Conclusion

In this article, we have proposed MISP, a standard protocol for rescue information sharing systems, and shown a sample implementation. The protocol has the flexibility to accept new types of information and to integrate new sub-systems easily.

As mentioned in section 2, a dual-use viewpoint is important when designing rescue systems. While this article has described only the rescue-application side of MISP, MISP is also capable of handling general-purpose geographical information. We are planning to apply it to daily applications such as town navigation.


<ddt:sensedDataInfo>
  <ddt:location>
    <gml:Point>
      <gml:coordinates>10.0,20.0,30.0</gml:coordinates>
    </gml:Point>
    <ddt:noise> ... </ddt:noise>
  </ddt:location>
  <ddt:target>
    <gml:Point>
      <gml:coordinates>40.0,50.0,60.0</gml:coordinates>
    </gml:Point>
    <ddt:noise> ... </ddt:noise>
  </ddt:target>
  <ddt:validTime>
    <gml:TimePeriod>
      <gml:beginPosition>2005-09-29T23:35:00+09:00</gml:beginPosition>
      <gml:endPosition>2005-09-29T23:35:00+09:00</gml:endPosition>
    </gml:TimePeriod>
  </ddt:validTime>
  <ddt:using xlink:href="MyDigiCam" />
  <ddt:resultOf xlink:href="MyPhoto">
    <ddt:type>image/jpeg</ddt:type>
  </ddt:resultOf>
  <ddt:direction>
    <ddt:DirectionVector>
      <ddt:horizontalAngle>0.0</ddt:horizontalAngle>
      <ddt:verticalAngle>80.0</ddt:verticalAngle>
      <ddt:rollAngle>0.0</ddt:rollAngle>
    </ddt:DirectionVector>
  </ddt:direction>
  <ddt:notes>
    <ddt:viewCone>35.0,20.0</ddt:viewCone>
    <ddt:resolution>640,400</ddt:resolution>
  </ddt:notes>
</ddt:sensedDataInfo>
<ddt:sensedDataEntity gml:id="MyPhoto">
  <ddt:type>image/jpeg</ddt:type>
  <ddt:encoding>base64</ddt:encoding>
  <ddt:data>
    /9j/4SQ+RXhpZgAASUkqAAgAAAAMAA8BAg... (base64 image data, truncated)
  </ddt:data>
</ddt:sensedDataEntity>

Figure 5: Example of Sensed Data

The following open issues also remain:

• Various templates for sensing data: in this article, we have shown only an example of sensing data designed around a still picture. We also need other kinds of data templates, such as movie and grid data. For movies and other time-sequence data, we should apply the schemes for dynamic features of GML, in which a sequence is handled as a set of snapshots. For grid data, we need to introduce the concept of 'coverage', in which a flexible grid can be defined to represent matrix-type data. In any case, we should pay attention to developments in ubiquitous sensor networks [10, 11, 4] in order to adapt the templates to such new technologies.

• Ontology and Web Services: while MISP can accept any kind of data structure, it has no mechanism to map between these structures. Especially in the daily operations of local governments, many types of data overlap across departments, and it is generally difficult to maintain the correspondence between data in different departments. Therefore, we need an ontology mechanism to map such correspondences semi-automatically [8].

• GRID/P2P: as mentioned in section 3.1, a rescue information system should be able to run on GRID/P2P environments [3, 1]. MISP uses the SOAP Envelope to guarantee that such distributed processing frameworks can be applied, but how to realize this has not yet been specified.

References

[1] Schahram Dustdar and Wolfgang Schreiner. A survey on web services composition. International Journal of Web and Grid Services, 1(1):1–30, 2005.

[2] Tetsuhiko Koto, Itsuki Noda, Yoshitaka Kuwata, and Ikuo Takeuchi. The architecture of the integrated earthquake disaster simulation system under the special project for earthquake disaster mitigation in urban areas. In Proc. of the 17th Annual Conference of the Japanese Society for Artificial Intelligence, Jun. 2003. No. 1B5-01 (in Japanese).

[3] M. Lenzerini. Inconsistency tolerance in P2P data integration. In Inconsistency and Incompleteness in Databases (IIDB) 2006, Mar. 2006.

[4] Rimma V. Nehme and Elke A. Rundensteiner. SCUBA: Scalable cluster-based algorithm for evaluating continuous spatio-temporal queries on moving objects. In Advances in Database Technology – EDBT 2006, pages 1001–1019. Springer, Mar. 2006.

[5] Itsuki Noda, Tomoichi Takahashi, Shuji Morita, Tetsuhiko Koto, and Satoshi Tadokoro. Language design for rescue agents. In Makoto Tanabe, Peter van den Besselaar, and Toru Ishida, editors, Digital Cities II: Technologies for Digital Cities, pages 371–383. Springer, 2002.

[6] Open GIS Consortium, Inc. Web Feature Service Implementation Specification (OGC 02-058), ver. 1.0.0, May 2002. https://portal.opengeospatial.org/files/?artifact_id=7176.

[7] Open GIS Consortium, Inc. OpenGIS Geography Markup Language (GML) Implementation Specification (OGC 02-023r4), ver. 3.00, Jan. 2003. http://www.opengis.org/docs/02-023r4.pdf.

[8] Jinghai Rao and Xiaomeng Su. A survey of automated web service composition methods. In SWSWPC 2004, pages 43–54, 2004.

[9] T. Takahashi, I. Takeuchi, T. Koto, S. Tadokoro, and I. Noda. RoboCup Rescue disaster simulator architecture. In H. Kitano, S. Tadokoro, K. Fischer, and A. Burt, editors, Workshop Working Notes on RoboCup Rescue (ICMAS-2000 workshop), pages 97–105, Jul. 2000.

[10] Fusheng Wang, Shaorong Liu, Peiya Liu, and Yujian Bai. Bridging physical and virtual worlds: Complex event processing for RFID data streams. In Advances in Database Technology – EDBT 2006, pages 588–607. Springer, Mar. 2006.

[11] Jiong Yang and Meng Hu. TrajPattern: Mining sequential patterns from imprecise trajectories of mobile objects. In Advances in Database Technology – EDBT 2006, pages 664–681. Springer, Mar. 2006.


Multi-Objective Autonomous Exploration in a Rescue Environment

Daniele Calisi, Alessandro Farinelli, Luca Iocchi, Daniele Nardi, Francesca Pucci
Dipartimento di Informatica e Sistemistica
Università di Roma "La Sapienza"
Via Salaria 113, 00198 Roma, Italy
E-mail: <lastname>@dis.uniroma1.it

Abstract— In this paper we present a novel approach to multi-objective exploration in a rescue environment. Though autonomous exploration has been investigated in the past, we specifically focus on the problem of searching for interesting features in the environment during the map building process. Our solution is based on a multi-objective search and exploration strategy and allows us to consider both the map building task and the victim search task, using heterogeneous sensors. Furthermore, we introduce a Petri Net formalism in order to face unexpected events, parallel tasks and action synchronization within a coherent framework.

I. INTRODUCTION

In recent years increasing attention has been devoted to rescue robotics, both from research institutions and from rescue operators. Robots can consistently help human operators in dangerous tasks during rescue operations in several ways. One of the main services that mobile robots can provide to rescue operators is to work as remote sensing devices, reporting information from dangerous places that human operators cannot reach.

A consistent part of rescue robotics research is focused on providing robots with high mobility capabilities and complex sensing devices. Such robots are usually designed to be tele-operated during the rescue mission. Another branch of rescue robotics focuses on providing mobile bases with a certain degree of autonomy ([2]). Semi-autonomous robots can process the acquired data and build a high-level representation of the surrounding environment ([7]). Moreover, the robots can act in the environment (e.g. navigate) with only limited interaction with the human operator. In this way, the human operator can easily control multiple robots by providing high-level commands (e.g. "explore this area", "reach this point", etc.). Moreover, in case of a temporary network breakdown, the mobile base can continue executing its task and come back to a predefined base station.

In this work we focus on autonomous search in an indoor unstructured environment. The search task is targeted towards the detection and analysis of interesting features (e.g. possible human victims). The environment is not known beforehand and has to be totally explored and analyzed to report all relevant features to the human operators. Moreover, the relevant features should be located inside the environment; therefore the robot has to build a map of the surrounding area and localize itself inside the map. Autonomous exploration has been deeply investigated in the mobile robot literature ([8], [11], [12]). Most approaches, however, do not consider the problem of searching for interesting features inside the environment while doing the exploration. On the other hand, several approaches have been proposed for search tasks, but they either assume the environment is known, or do not address the uncertainty in robot actions and perceptions.

In this paper we present a novel framework for autonomous search and exploration of indoor unstructured environments. Our solution is based on a multi-objective search and exploration strategy. The objectives of our system are to explore and build a consistent map of the environment and to detect possible victims, while minimizing the time needed to complete the mission. A main feature of our approach is that we use heterogeneous sensing devices for the map building system and for the victim detection system. Notice that having heterogeneous devices for the map building process and for the feature detection process is relevant to other kinds of features as well. For example, to detect the presence of CO2 in the environment, the mobile base should use a dedicated device that has different characteristics with respect to the sensors commonly used for the mapping process (e.g. a laser scanner or sonar device). In particular, our mobile base is equipped with a laser range scanner for the purpose of map building, while a stereo vision camera and an infra-red thermo sensor are used for victim detection. The two processes have different and sometimes conflicting goals. For example, the map building process is accurate and fast: the laser can accurately acquire information from a long distance (e.g. around 80 meters). On the other hand, the victim detection system is very demanding from a computational point of view and has a very short range of operation. Moreover, the victim detection system is divided into two sub-processes: the first one generates interesting points where victims could be present; the second analyzes one interesting point and decides whether it is a victim or a false alarm.

A top level module coordinates the two sub-systems, ensuring that the whole environment will be explored and that each possible detected victim will be analyzed. The main decision that our exploration and search strategy makes is the next point to be reached by the robot. The possible candidate points are computed considering the information provided by the two subsystems. In particular, candidate points are either frontier points, between the visited areas and the unexplored space, or points located near possible victims.

Notice that, since the information provided by the subsystems is not certain, and the actions the robot performs may fail, the system should be able to promptly react to unexpected events (e.g. a novel candidate point to be examined, or a navigation failure). To manage all the possible events that our system should take into account in a coherent and flexible framework, we decided to deploy a Petri Net based formalism to represent the courses of action to be executed.

Our approach has been developed and tested on a working rescue mobile base. The experiments performed show that our system is able to autonomously explore unstructured environments and to localize the victims it finds.

This paper is organized as follows: the next section describes the problem we address. Section 3 describes our robotic system and the software modules responsible for the mapping, localization, navigation and victim detection tasks. The approach to exploration is described in Section 4. Section 5 describes the performed experiments, Section 6 discusses related work, and Section 7 concludes the paper.

II. PROBLEM FORMALIZATION AND OBJECTIVES

To be fully autonomous, the robot has to be able not only to build the map and localize itself within it (SLAM), but also to decide the sequence of movements needed to explore the whole environment. The exploration problem can be seen as a next-best-view (NBV) problem ([6]), i.e. computing a sequence of sensing positions based on the data acquired at previous locations, in order to build a complete representation of the environment. An optimal NBV algorithm thus moves the robot to positions where it can obtain the maximum information gain (given the current knowledge of the environment). The NBV concept is taken from computer vision and has some major differences with respect to robot exploration (for example, NBV algorithms do not consider collision-free paths and possible localization problems; moreover, they usually deal with known scenarios).

Making the robot or its sensors span the whole environment is also referred to as the "coverage" problem, usually when the environment is known a priori (e.g. autonomous vacuum cleaners, lawn mowers, etc.).

Exploration strategies usually aim at reducing the exploration time. This applies also to a rescue mission, for obvious reasons. Thus, the goal is to choose the actions that gain the maximum information in the minimum time. This also requires taking into account the time consumed by the motion of the robot, i.e. the length of the paths.

In a rescue mission, we have indeed two concurrent goals in the exploration. The first deals with map building, usually relying on sensors in the class of range finders (laser range finders, sonars, etc.); the goal here is to let the sensors cover all the explorable space. A SLAM algorithm is responsible for putting the readings together to build the map. The second goal of exploration in a rescue mission is the search for victims. Usually, other classes of sensors are used to detect human bodies, e.g. cameras, stereo-cameras, thermo-cameras or thermo sensors in general. These sensors usually have a different operating range, and the methods tend to be very different (for example, there may be the need to look at the same place from two different points of view). This results in a situation in which the two goals usually lead to different and sometimes opposite decisions. For example, when a portion of a map has successfully been built, the map building algorithm would direct the robot elsewhere. However, the victim detection subsystem might need to further analyze the area.

In [9] this kind of problem is called "map building", when the task is to explore the whole environment in order to build the map, or "searching", when the task is to find something (in our case an unknown number of victims) in an unknown environment that needs to be systematically explored. In the following we will refer to them as the Map Building task and the Victim Search task.

In order to have a coherent and common framework for the two goals, we formalize as follows. Given a space E ⊂ R^n to be explored and a (set of) sensor(s) that can build a local representation A(q) ⊂ E of the space from the oriented position q, the exploration for map building is the problem of finding a sequence of oriented positions q_i such that

\bigcup_{i=0}^{n} A(q_i) = E

i.e. the sensor(s) have to span the whole explorable space E. The problem is actually more complex, because SLAM algorithms usually require a sufficient overlap between two readings to reduce the uncertainty in sensor interpretation.

This applies both to the Map Building task and the Victim Search task, but, as stated before, the sensors used are different in many respects. The outputs of the two algorithms are also slightly different: while the Map Building task yields the representation of the environment, the output of the Victim Search module is an explored area in which some features have been detected and positioned in the map. This implicitly means that there are no other such features in the rest of that area.

The two tasks should synchronize and interact in order to obtain the map of the whole environment and to have all the victims detected and located in that map.

Notice that the typical NBV algorithm structure is given by the following steps:

1) select the next view point;
2) navigate or move the sensor to that view point;
3) acquire information from the sensor;
4) integrate this information with the global knowledge of the object/environment.

In our case, however, the map building process and the victim search are active even while the robot is moving towards the previously chosen position, thus increasing the probability of finding victims and speeding up the construction of the environment representation (the navigation phase can be time consuming, since the evaluation of candidates can require heavy computation).

III. SYSTEM DESCRIPTION

In our system, the exploration relies on the following subsystems:

• SLAM subsystem: these modules deal with map building, given the sensor readings, and with localization in the built map.

• Navigation subsystem: these modules are responsible for enabling the robot to reach a target position; a rough "topological" path-planner computes a path and a lower-level module tries to follow this path, maneuvering the robot ([3]).

• Frontier module: this module computes the boundaries between the free explored space and the unknown space, given the current map, using a wavefront expansion-like algorithm (see the sketch after this list).

• Pan-tilt Control subsystem: the victim detection algorithm is based on the analysis of the images provided by a stereo-vision system. This system is mounted on a pan-tilt unit, thus decoupling its mobility from the mobility of the robot.

• Human Body Detection subsystem: detection is performed in two steps. The first, faster and less accurate, is used to identify interesting places where a human body could be located. The second step is computationally heavier and slower, and for this reason it is activated only in those areas declared interesting by the first step.
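As announced in the list above, the frontier idea can be illustrated on a toy occupancy grid (our sketch; the paper's implementation is an iterative wavefront expansion, not this brute-force scan):

# Toy occupancy grid: 0 = free, 1 = obstacle, -1 = unknown.
GRID = [[ 0,  0, -1],
        [ 0,  1, -1],
        [ 0,  0,  0]]

def frontier_cells(grid):
    """A free cell is a frontier cell if it touches an unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontier = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1
                   for nr, nc in neighbors):
                frontier.append((r, c))
    return frontier

print(frontier_cells(GRID))   # -> [(0, 1), (2, 2)]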

We use a laser range finder for mapping ([7]), localization and navigation ([4]), and a sonar that helps in detecting transparent surfaces. The Human Body Detection subsystem relies on a stereo vision system ([1]) that has a very limited range of operation.

Since the map is not known a priori, it is not possible to compute the exact information gain for a given position in the map; we can only compute an expected information gain. One method is given in [6], but its computation can be heavy. For this reason, we prefer the good approximation given by the length of the so-called "frontiers", i.e. the boundaries between free and unknown space ([12]).

Concerning the victim detection task, the exploration also has to take into consideration the fact that the Human Body Detection subsystem can return interesting points located in places just visited, due to the time needed to process the images. This means that the robot may be forced to retrace its steps in order to let the second step of the victim detection algorithm check for victim presence in the right place.

Fig. 1. A typical situation in which the exploration algorithm needs to choose between a set of frontiers ("F" in the figure) and a set of interesting points to be checked ("I" in the figure)

Since moving the robot can be very time consuming, we mounted the stereo-camera and the thermo sensor on a pan-tilt unit. In this way the Human Body Detection subsystem sensors can point in a different direction with respect to the robot. However, this also means that the pan-tilt unit needs to be moved in order to successfully (and better) look for human bodies.

IV. EXPLORATION STRATEGY

The exploration strategy presented in this paper can be conceptually divided into two parts. In the first we address the problem of optimizing a utility function in order to compute the next position to be reached, while in the second we add a level of abstraction and use this choice in a more complex plan that is able to face unexpected events, parallel actions and synchronization.

A. The choice of the next target to reach

As stated before, for the choice of the target position to be reached we have two sources of candidates: the Frontier module and the Human Body Detection subsystem.

Concerning the first module, it is quite obvious that the boundaries between the free explored space and the unknown areas are candidate targets for the next position to reach, because the unexplored areas can be seen through them ([12], [6]). The computation of the frontiers is based on the wavefront expansion algorithm ([8]), which has been extended to make the computation iterative, in order to avoid reconsidering the whole map at each iteration. As for the "interesting positions" given by the pre-computation part of the Human Body Detection subsystem, one must take into account that the robot needs to approach them before the full Human Body Detection algorithm can be enabled.

Given these sources, the exact positions to reach are chosen such that the robot is at a proper distance, with the right orientation, and sufficiently far from other obstacles. This is of fundamental importance when we deal with possible victims, both for their safety and for the algorithm to work properly.

The exploration process, following the general NBV algorithm structure, is divided into two steps: evaluation of the candidates, and navigation towards the chosen target. As stated before, the frontier computation task and the search for possible victims are executed in parallel with the navigation and provide their outputs to the module responsible for candidate selection when it requests them.

Candidates are evaluated with a method inspired by multi-objective optimization ([5]). In particular, we choose to minimize the exploration time and to maximize the information gain, in order to explore all the unknown free space and discover all the possible victims.

To minimize the exploration time, we compute the length of the shortest path given by the path planner to each candidate. To compute the information gain of the candidates we have to define the information obtained by reaching a frontier and that obtained by investigating a possible victim.

For the former, the information gain is given by the expected portion of unexplored environment that can be reached by the robot: this can be approximated by the length of the frontier itself. On the other hand, the information given by a victim is constant, because the Human Body Detection subsystem does not distinguish one possible victim from another.

Given the set of candidates, the best target can be computed as follows:

• computation of the shortest distance and of the greatest information gain;
• computation of the best target, using the distance from the ideal solution D(c_i) (the ideal solution has the best distance and the best information gain):

D(c_i) = \sqrt{(d(c_i) - d^*)^2 + (I(c_i) - I^*)^2}

in which c_i is a candidate, d(c_i) is the distance to it, I(c_i) its information gain, and d^* and I^* are the best distance and the best information gain (see [5]).
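A minimal sketch of this selection rule (ours; the candidate values are invented, and the real system may additionally normalize the two objectives):

import math

def best_target(candidates):
    """candidates: list of (name, path_length, info_gain) tuples.
    Picks the candidate closest to the ideal point (d*, I*)."""
    d_star = min(d for _, d, _ in candidates)   # best (shortest) distance
    i_star = max(g for _, _, g in candidates)   # best (largest) info gain
    return min(candidates,
               key=lambda c: math.hypot(c[1] - d_star, c[2] - i_star))

# Two frontiers and one possible victim to inspect:
print(best_target([("frontier-A", 4.0, 2.5),
                   ("frontier-B", 9.0, 6.0),
                   ("victim-I1",  5.0, 3.0)]))   # -> ('victim-I1', 5.0, 3.0)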

Once the best target has been computed, the robot reaches it using the Navigation subsystem; when the target is reached, a new candidate evaluation and selection is performed.

B. The introduction of the Petri Nets formalism

One of the main limits of using only the NBV algorithm structure and the utility function is that, once the robot has begun to move towards a target, it is not able to adapt its actions to new events: for instance, if during the navigation the robot finds an "interesting point", it can analyze it only after having reached the current target, resulting in a manifest loss of efficiency. Moreover, we need to be able to face unexpected navigation faults, which can often happen in a rescue mission.

In our setting, there is the need for some kind of mechanism to handle the concepts of "interrupt" and "unexpected event". For these reasons, we introduced a Plan Executor module, based on the Petri Net formalism, that is able to manage navigation faults, to react to new possible victims and also to manage concurrent actions.

Petri Nets are a formal and graphical language suitable for modeling systems with concurrency and resource sharing. They are currently used in manufacturing systems, concurrency analysis, process management and so on. Since their introduction at the beginning of the 1960s, they have proved to be a very powerful and easy-to-use tool.

A Petri Net is a directed graph whose edges connect two kinds of nodes, called transitions and places. A transition node can be connected only to place nodes and vice versa; each edge is assigned a positive integer, called its weight; we denote by w(pi, tj) the weight of the edge between place pi and transition tj. In graphical representations, places are drawn as circles and transitions as bars. A marking of a Petri Net assigns a non-negative integer i to each place (we say that the place contains i tokens). Together with the graph specification, a Petri Net is also given an initial marking. In modeling, places usually represent conditions, while transitions represent events. A transition has a set of input places and a set of output places, representing the pre-conditions and post-conditions of the event. A transition t is said to be enabled if each input place pi contains at least w(pi, t) tokens. An enabled transition may or may not fire, depending on whether the event associated with it actually takes place. When a transition fires, w(pi, t) tokens are removed from each input place pi and w(t, pj) tokens are added to each output place pj. Using these simple building blocks, it is possible to build structures that control parallel actions and synchronization with ease. One simple plan used by our Plan Executor module is shown in Figure 3.
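To make the firing rule concrete, here is a minimal, generic sketch of these semantics in Python (ours, not the authors' Plan Executor implementation):

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        """inputs/outputs: dicts place -> arc weight w(p, t) / w(t, p)."""
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name)
        for p, w in inputs.items():
            self.marking[p] -= w              # consume input tokens
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w   # produce tokens

# Tiny plan fragment: choosing a target enables moving towards it.
net = PetriNet({"TargetChosen": 1})
net.add_transition("StartMove", {"TargetChosen": 1}, {"Moving": 1})
if net.enabled("StartMove"):
    net.fire("StartMove")
print(net.marking)   # {'TargetChosen': 0, 'Moving': 1}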

From an implementation point of view, the employment of the Petri Net tool allows for:

• quick and easy plan definition, validation and testing;
• management of concurrent actions (i.e. the pan-tilt direction changes and the robot motion);
• evaluation of fault management strategies.

It can also lead to more complex plans, involving more sensors and more performable actions, that can deal with more difficult scenarios.

V. EXPERIMENTS

Experiments on the presented exploration strategy have been performed both in simulation, using the Player/Stage framework (http://playerstage.sf.net), and in real rescue arenas, using an ActivMedia Pioneer 3-AT mobile base with a Stereo Vision system mounted on a pan-tilt unit and a SICK Laser Range Finder (see Figure 2).

Fig. 2. The robot used for the real arena experiments

Fig. 3. The plan used in the experiments

The plan used in the experiments is the one shown in Figure 3. We can see the initial marking (the token contained in the topmost place) and simple constructs like the interrupts, i.e. conditions (attached to transitions) that are evaluated when the input place(s) are active (i.e. contain at least one token). If an interrupt condition is true, the token is removed from the place and the action related to it is interrupted. We can see two interrupts during the execution of the MoveToTarget action, NavigationFailed and SeenVictim; that is, the navigation to the current target is interrupted if the navigation reports a failure or a new interesting position has been found (this leads to processing interesting places as soon as possible).

In Figure 4, the final map of a completed rescue mission is shown. The two victims have been found and precisely placed in the automatically built map. The thick arrow in the upper right side of the figure marks the starting position of the robot. This rescue arena is 7×5 m and was explored in about 15 minutes (the maximum linear speed was set to 100 mm/s, due to the need to negotiate narrow passages and the time required to process the stereo images).

VI. RELATED WORK

The exploration problem is usually seen as an optimization problem in which the expected total time of the exploration, or a related value, has to be optimized (e.g. the information gain has to be maximized, the total travel cost minimized, etc.). Several approaches address the problem by defining utility functions for the choice of the next position to travel to. In these utility functions, one has to consider both the costs of the actions and their gains. Usually they take into account the minimization of the total time along with other features, such as map precision, and include them in the utility function.

Banos and Latombe in [6] applied the next-best-view problem to exploration, arguing that solutions taken from the computer vision area cannot be applied directly to mobile robot exploration, because in that area some relevant issues of mobile robots are not taken into consideration (e.g. robot localization and obstacle avoidance); they proposed a method in which the canonical steps of the NBV problem are executed one after the other. For an example of a computer vision NBV solution, refer to [10].

Fig. 4. The map shown at the end of a mission with two victims found

In [11] an approach that combines mapping, localization and exploration is presented. This method evaluates the cost and the information gain of the actions that the robot has to perform. The actions are aimed mostly at the construction of a precise and accurate map and do not consider possible interesting features.

VII. CONCLUSIONS AND FUTURE WORK

In this paper we presented a novel approach to multi-objective exploration in a rescue environment. In the presented approach, we select the next position to explore using a multi-objective utility function over the set of targets useful for map building and the set of interesting positions to be analyzed further by the Human Body Detection subsystem. Moreover, thanks to the Petri Net formalism, we are able to build complex plans in which the modeling of constructs like parallel actions, synchronization and interrupts is straightforward. The experiments show that our robot is able to autonomously explore both simulated and real environments, and to find and check interesting positions while looking for victims.

At present, we are carrying out quantitative experiments using the simulator to obtain a better estimate of the advantages of the proposed approach. The main difficulty, however, is to find performance metrics that fit this kind of problem.

The issues raised in this paper can be investigated further, building more complex plans in order to include other actions (e.g. the pan-tilt control) or to be able to react to more complex and unexpected events. From the point of view of the high-level plan built using the Petri Net formalism, we are also able to consider sets of situations in which the exploration strategy needs to differ from the NBV-like utility-based strategy, and to easily switch the robot behavior.

REFERENCES

[1] S. Bahadori. Human Body Detection in Search and Rescue Missions. PhD thesis, Università di Roma "La Sapienza", Italy, 2006.

[2] S. Bahadori, D. Calisi, A. Censi, A. Farinelli, G. Grisetti, L. Iocchi, and D. Nardi. Autonomous systems for search and rescue. In A. Birk, S. Carpin, D. Nardi, A. Jacoff, and S. Tadokoro, editors, Rescue Robotics. Springer-Verlag, 2005. To appear.

[3] D. Calisi, A. Farinelli, L. Iocchi, and D. Nardi. Autonomous navigation and exploration in a rescue environment. In Proc. of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR), Kobe, Japan, June 2005.

[4] D. Calisi, A. Farinelli, L. Iocchi, and D. Nardi. Autonomous navigation and exploration in a rescue environment. In Proc. of the 2nd European Conference on Mobile Robotics (ECMR), pages 110–115, September 2005.

[5] Carlos A. Coello Coello. An updated survey of evolutionary multiobjective optimization techniques: State of the art and future trends. In Peter J. Angeline, Zbyszek Michalewicz, Marc Schoenauer, Xin Yao, and Ali Zalzala, editors, Proceedings of the Congress on Evolutionary Computation, volume 1, pages 3–13, Mayflower Hotel, Washington D.C., USA, 6–9 July 1999. IEEE Press.

[6] Hector H. Gonzalez-Banos and Jean-Claude Latombe. Navigation strategies for exploring indoor environments. International Journal of Robotics Research, 21(10-11):829–848, 2002.

[7] G. Grisetti, C. Stachniss, and W. Burgard. Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling. In Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2005.

[8] J. C. Latombe. Robot Motion Planning. Kluwer Academic Publishers, 1991.

[9] S. M. LaValle. Planning Algorithms. Cambridge University Press, 2006. Also available at http://msl.cs.uiuc.edu/planning/.

[10] R. Pito. A sensor-based solution to the next best view problem, 1996.

[11] C. Stachniss, G. Grisetti, and W. Burgard. Information gain-based exploration using Rao-Blackwellized particle filters. In Proc. of Robotics: Science and Systems (RSS), Cambridge, MA, USA, 2005.

[12] B. Yamauchi. A frontier-based approach for autonomous exploration. In IEEE International Symposium on Computational Intelligence in Robotics and Automation, Monterey, CA, July 10–11, 1997.


Development of an autonomous rescue robot within the USARSim 3D virtual environment

Giuliano Polverari, Daniele Calisi, Alessandro Farinelli, Daniele Nardi
Dipartimento di Informatica e Sistemistica
Università di Roma "La Sapienza"
Via Salaria 113, 00198 Roma, Italy
E-mail: <lastname>@dis.uniroma1.it

Abstract. The increasing interest towards rescue robotics and the complexity of typical rescue environments make the use of high fidelity 3D simulators necessary during the application development phase. USARSim is an open source high fidelity simulator for rescue environments, based on a commercial game engine. In this paper we describe the development of an autonomous rescue robot within the USARSim simulation environment. We describe our rescue robotic system and present the extensions we made to USARSim to obtain a satisfying simulation of our robot. Moreover, we present, as a case study, an algorithm to avoid obstacles that are not visible to our laser scanner based mapping process.

1 Introduction

Robotic systems have been proposed in recent years in a variety of settings and frameworks, pursuing different research goals, and have been successfully applied in many application domains. Technological improvements, both in the hardware and in the associated software of robotic platforms, push their application towards more and more complex scenarios.

Search and Rescue robotics is one of the most challenging and interesting application environments for AI and robotics. Such an application requires the robots to be equipped with several complex sensors and to be able to perform complex manoeuvres in cluttered and unstructured spaces.

When working with expensive and complex hardware, the presence of a simulator is of significant importance. On the one hand, it enables the evaluation of different alternatives during the robot system design phase, leading to better decisions and cost savings. On the other hand, it supports the process of software development by providing a replacement when the robots are not available (e.g. broken or used by another person) or unable to endure long running experiments. Furthermore, the simulation offers the possibility to perform an easier and faster debugging phase.

Several robotic simulators for 3D environments have recently been developed, providing a valid alternative to the canonical 2D-oriented ones. A high fidelity 3D environment adds to the simulation the possibility to test extremely realistic interactions, with superior graphic rendering, extending the range of sensors that can be tested.

USARSim is an open source 3D simulator for the urban search and rescue (USAR) environment based on a commercial game engine, currently supported by an international community.

This paper aims to describe the realization of an autonomous robotic system for search and rescue missions using USARSim. The robotic system is based on a Pioneer P3AT commercial platform (ActivMedia: http://www.activrobots.com) equipped with a sonar ring. We customized the platform by adding a SICK Laser Range Finder, a Stereo Color Camera mounted on a pan-tilt unit, an Infra Red Sensor and a wireless access point to communicate with a ground station. The purpose of the robotic system is the autonomous exploration of a rescue scenario, searching for victims and building the map of the explored area. The autonomous navigation system, which is based on a two-level path-planner, is able to guarantee safe navigation in highly cluttered space [8]. The mapping system is based on Laser Range Finder readings and uses a scan matcher based approach to localize the robot and build the map. Finally, Stereo Vision is used to detect victims.

The first task was to build an interface between USARSim and our robotic development platform to simulate our real robot and its equipment. In particular, we both modeled our system with the available built-in features (e.g. the Pioneer robotic platform and the SICK Laser Range Finder) and extended the simulator so as to correctly represent all our equipment (e.g. the Stereo Color Camera). Moreover, we improved the existing simulation environment, synchronizing sensor readings and correcting the simulation of transparent objects. By interfacing our development platform with USARSim we are able to test the same code on both the real robot and the simulator: as a consequence, we are now able to use USARSim as a powerful debugging environment in the development phase of our robotic applications.

Furthermore, we present a case study concerning path-planning in unknown and cluttered environments. We modeled several test scenarios in USARSim and developed a speed tracking based stall recovery subsystem to deal with invisible obstacles. We tested the algorithm in USARSim, saving time and preserving the robot from dangerous impacts.

The paper is organized as follows: in the next Section we describe the USARSim simulator. Section 3 shows our work with the simulator, the interface we built and the customizations we made. In Section 4 we discuss the case study. Section 5 discusses related work and Section 6 concludes the paper.

2 USARSim

USARSim (presented in [1]) is a 3D high fidelity simulator of USAR robots and environments. USARSim can be a valid tool for the study of basic robotic capabilities in a 3D environment. It provides a high quality rendering interface and is able to accurately represent the behavior of the robotic system.

USARSim development started at the University of Pittsburgh and the simulator is currently supported by an international community. It is released as open source software (project page: http://sourceforge.net/projects/usarsim) and has been adopted as the standard simulation tool for the Virtual Robots Competition at RoboCup 2006 (http://www.robocup2006.org).

The current version of USARSim consists of: i) standardized environmental sample models; ii) robot models of several commercial and experimental robots; iii) sensor models, like Laser Scanners, Sonars and Cameras; iv) drivers to interface with external control frameworks, like MOAST, Pyro and Player.

USARSim uses Epic Games' Unreal Engine 2 (www.epicgames.com) to provide a high fidelity simulation at low cost. Unreal is one of the leading engines in the first-person shooter genre and is widely used both in the game industry and in the academic community. The use of the Unreal Engine provides several interesting features to USARSim: i) high-quality and fast 3D scene rendering, supporting mesh, surface (texture) and lighting simulation; ii) a high fidelity rigid body physics simulator, Karma, supporting collision detection and joint, force and torque modeling; iii) a design tool, UnrealEd, with which developers can build their own 3D robot models and environments; iv) an object-oriented scripting language, UnrealScript, which supports state machines, time based execution, and networking; v) an efficient client-server architecture that supports multiple players.

3 Modeling an Autonomous Rescue Robotic System in USARSim

To fully integrate our robotic rescue system within the USARSim virtual environment we performed the following steps: i) we modeled our robotic platform in the USARSim framework and developed a low level interface to the simulator environment; ii) we modified the simulator to improve the sensors' realism; iii) we introduced in USARSim a Stereo Vision Camera sensor and a 3D Camera.

In the following, each phase of the development is discussed. Moreover, we show validation results concerning autonomous exploration in a USARSim simulated environment.

3.1 Modeling our Robot in USARSim and building the interface

The robot we currently use is a Pioneer P3AT. We equipped the virtual chassis (already modeled in USARSim) with a full Sonar ring made of 16 sensors, a SICK Laser Range Finder and a Camera mounted on a Pan-Tilt unit. In Figure 1 a comparison between our real robot and its model in USARSim is shown.

Our development framework is based on a set of independent modules that interact and communicate among themselves using a centralized blackboard-type repository [4]. To interact with the USARSim environment we built specific modules that directly communicate with the USARSim server. Since these modules use the standardized framework interface, they can be directly replaced with the ones that communicate with the real hardware or with different simulator environments. In this way we can use all the other modules (e.g. the navigation module, the mapping module, etc.) without any modification.
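For illustration, a minimal connection from such a module to the USARSim server could look like the sketch below (the TCP port and the exact INIT/DRIVE message syntax vary across USARSim versions, so treat them as assumptions rather than the interface we actually shipped):

import socket

# USARSim accepts Gamebots-style text commands over TCP (port assumed).
sock = socket.create_connection(("localhost", 3000))

# Spawn a P3AT model and issue a differential-drive command.
sock.sendall(b"INIT {ClassName USARBot.P3AT} {Location 1.0,2.0,0.5}\r\n")
sock.sendall(b"DRIVE {Left 0.5} {Right 0.5}\r\n")

# Sensor data arrives asynchronously as lines of text (SEN messages).
buf = sock.makefile("r")
for _ in range(5):
    print(buf.readline().strip())
sock.close()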


Fig. 1. Our robot and its model in USARSim

In particular, we developed four basic modules: i) the robot module, which is used to manage the communication socket, to receive and store odometry and current speed data, and to send motion commands to the server; ii) the laser module, which is used to store data gained from the simulated Laser Scanner Sensor, and to view/change its configuration; iii) the sonar module, which is used to manage a set of simulated Sonar Sensors; iv) the camera module. Camera Sensor simulation is done in USARSim using the video feedback of the Unreal Client, the application of the Unreal Engine used for 3D scene rendering; in particular, an ImageServer is provided to capture the Unreal Client data and serve it through TCP/IP. Our camera module holds a dedicated socket to connect to the ImageServer and get the virtual Camera data; moreover, the camera module is used to view the Camera configuration and to move the simulated Pan-Tilt unit.

3.2 Improving sensors' simulation

USARSim does not provide timestamp information for sensor readings. However, when processing data coming from different sensors, synchronization can be a very important issue. For example, several subsystems of our platform (e.g. the SLAM subsystem) need timestamps for Odometry, Laser and Sonar readings in order to calculate data confidence and perform coherent state estimation. We therefore added timestamp information to the Sonar, Laser and Odometry data.

We found that the simulated Laser Scanner sensor detected transparent objects as if they were opaque. Every object in USARSim holds a "material" property: we corrected the Laser Scanner's erroneous behaviour by propagating the laser beam through transparent objects until it hits another material or reaches the sensor's maximum range. With this modification we have been able to test in the simulator our scan matcher based SLAM (simultaneous localization and mapping) subsystem and the glass detection subsystem, which identifies transparent materials (undetectable by the Laser Scanner) based on the Sonar data.

3.3 Stereo Vision in USARSim

In a rescue environment the victim recognition subsystem is very important. Our current approach uses a human detection algorithm driven by a Stereo Vision unit, which is composed of a pair of synchronized cameras with the same orientation.

As seen before, camera sensor simulation is done in USARSim by capturing the video feedback of the Unreal Client. Currently, only one running copy of the Unreal Client at a time is allowed on each operating system. This limit derives from the single-user nature of the simulation, so Unreal-based Stereo Vision does not seem to be possible until future releases of the Unreal Engine.

Since it is not possible to have multiple camera simulations on the same screen, we extended the robot definition code. Each virtual robot is described in the simulation by UnrealScript definition code, storing information about its model and instructions to handle input data, to make movements and to draw the camera data.

We modified the function usually used to draw double-exposure images on the screen. Each time a frame is drawn, we split the output window vertically, overwriting the first half with the left camera data and the second half with the right camera data, maintaining data synchronization. With this new self-developed Stereo Vision sensor we are now able to have a complete high fidelity simulation of our rescue robot.

3.4 3D Camera Sensor

The Swiss Ranger Camera (CSEM: http://www.swissranger.ch/products.php) is a sensor able to add distance information to every pixel of the image data captured by its internal camera. Such a sensor can be extremely useful in the USAR environment, both for navigation and for victim detection.

We added a Swiss Ranger Camera simulation to USARSim by introducing a new IRC (Infra-Red Range Camera) sensor that provides, for each pixel, the distance from the objects in the scene. By placing the IRC sensor together with a common Camera with the same position, orientation and resolution, we add the distance information to every pixel of the camera image, obtaining a simulation of the Swiss Ranger Camera.

In Figure 2 the Camera feedback (on the left) and the IRC sensor output (on the right, where brightness is proportional to distance) are shown side by side.

Fig. 2. A Camera image and the corresponding IRC sensor feedback
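Given the identical position, orientation and resolution of the two sensors, the fusion amounts to a per-pixel stacking operation, roughly as in this sketch (ours; the array shapes are assumptions):

import numpy as np

H, W = 240, 320                              # assumed common resolution
rgb   = np.zeros((H, W, 3), dtype=np.uint8)  # Camera feedback
depth = np.zeros((H, W), dtype=np.float32)   # IRC sensor: distance per pixel

# Stack color and distance into a single RGBD image, as a simulated
# Swiss Ranger Camera would provide it.
rgbd = np.dstack([rgb.astype(np.float32), depth[..., None]])
print(rgbd.shape)                            # (240, 320, 4)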


3.5 Validation results

We performed several tests to validate the whole system configuration. We placed our robot into different USARSim virtual environments to perform autonomous exploration. The system behavior consistently matched that of the real robotic system. In particular, we verified that the data gained from the sensors and the execution of the motion commands were as expected.

In Figure 3 our rescue robot is shown while it autonomously explores an unknown virtual 3D environment generated by USARSim. On the map on the right, unknown parts are drawn in blue (grey), while walls and obstacles are drawn in black and free space in white. The small table in front of the robot is not drawn by the SLAM module (i.e. in black), because it is invisible to the Laser Range Finder. However, our stall recovery subsystem, described in the following paragraphs, identifies the impact surface and draws it on the map.

Fig. 3. Our rescue robot exploring an unknown virtual environment

4 Case study: exploration with invisible obstacles

During autonomous exploration missions in rescue environments, stall problems often arise. Our frontier based autonomous exploration subsystem, presented in [8], uses a two-level approach for navigation. It is based on a global topological path-planner and a local motion planner, which is an extension of the well-known Randomized Kinodynamic Tree [9]. This kind of algorithm works by building a tree of safe and randomly-generated robot configurations. The local motion planner may get stuck because of obstacles that are undetectable by the Laser Scanner, since they do not lie on its scanning plane.

We built a stall recovery subsystem, whose development was highly simplified by using USARSim: the simulated environment helped us save testing time and preserved the real robot from dangerous impacts with unknown obstacles. We modeled in USARSim small obstacles like a tube, a ramp and a small table and observed the reactions of the virtual robot to these objects.

The main cycle of the subsystem is based on the following steps:


1. The subsystem first calculates the actual linear and angular speeds, given the current and previous robot poses (from the SLAM subsystem).
2. The differences between the desired and actual speeds are monitored for several positions around the robot surface, using different stall conditions.
3. To avoid false positives, the stall conditions are integrated over time.
4. If a stall condition is verified for several cycles, an obstacle is drawn on the map and an alarm is sent to the navigation subsystem to allow for fast re-planning.

We tested the stall recovery subsystem in USARSim, obtaining good results. Figure 3 shows, on the left, the robot hitting a small table not visible to the laser and, on the right, the obstacle representation in the robot map. Subsequent tests were done on the real robot, using different obstacles like chairs and bricks: the subsystem correctly identified stall situations, tracking all the objects and allowing complete explorations of the environment.
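A condensed version of this cycle might look like the following sketch (ours; the thresholds, the single monitored point and the pose source are simplifying assumptions, whereas the real subsystem monitors several positions around the robot surface):

import math

STALL_THRESHOLD = 0.05   # m/s of unexplained speed deficit (assumed)
STALL_CYCLES    = 10     # consecutive cycles before declaring a stall

class StallDetector:
    def __init__(self):
        self.count = 0

    def update(self, prev_pose, pose, dt, commanded_speed):
        """poses are (x, y) from the SLAM subsystem; dt in seconds."""
        actual = math.dist(prev_pose, pose) / dt   # step 1: actual speed
        deficit = commanded_speed - actual         # step 2: desired vs actual
        if deficit > STALL_THRESHOLD:              # step 3: integrate over time
            self.count += 1
        else:
            self.count = 0
        return self.count >= STALL_CYCLES          # step 4: report the stall

# e.g. 0.1 m/s is commanded but the robot barely moves:
det = StallDetector()
stalled = any(det.update((0.0, 0.0), (0.001, 0.0), 0.5, 0.1)
              for _ in range(STALL_CYCLES))
print(stalled)   # True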

5 Related Work

Moast (http://moast.sourceforge.net) is a development framework providing a multi-agent simulation environment, a baseline control system, and a mechanism to migrate algorithms from the virtual world to the real implementation. Moast is intended to provide USARSim users with a customizable control system allowing for high level interaction with the simulator. Compared to Moast, our system does not need to migrate the developed algorithms to the real implementation; in fact, our system runs indifferently on the real robot and in virtual environments.

Several works related to USARSim focus on the validation of sensors such as the Laser Scanner [2] or on robot mobility [3]. In comparison to these works, we focused more on improving sensor data coherence (e.g. synchronizing sensor readings and testing sensor fusion tasks) than on validating single sensor simulation.

As for obstacles which are not detectable by the Laser Scanner sensor or cameras, different solutions are proposed in the literature. Several approaches are based on touch sensors: for example, in [5] the authors describe a cylindrical robot with a total coverage bumper, while in [6] an actuated whisker is used to identify objects. Such approaches, however, require additional sensors. Another way to address the problem is proposed in [7]. In this work the authors describe a mobile robot used as a tour guide, which is able to deal with invisible objects given the known map of the environment, lowering its speed when the localization error is higher. Unfortunately, such a technique is useless in a USAR environment, where the environment map is not known a priori.

To the best of our knowledge, the rescue system presented in this paper is one of the first complete autonomous rescue systems working both on real robots and integrated in the USARSim simulator.

6 Conclusions

Our experience in simulation before USARSim was limited to two dimensions. Several features of our robotic system, such as the glass detection subsystem or the victim recognition subsystem, were impossible to test during a simulated mission. Using USARSim, we had the well-known advantages of a high fidelity 3D simulation, such as an accurate model of the robot mechanics, different materials available on 3D surfaces, etc.

In this paper we presented the development of an autonomous working system within USARSim. We modeled our robotic system within USARSim, significantly extending the simulation environment. In particular, we added the possibility to use Stereo Vision for our victim recognition subsystem, and we synchronized all sensor readings to have a coherent map building process. Moreover, we addressed the problem of safe navigation in the presence of obstacles which are invisible to the 2D Laser based mapping process. We proposed a solution to this problem and tested our system in the USARSim virtual environment.

The tests performed within the USARSim virtual environment confirm that such a framework is suitable for preliminary validation during the robotic application development phase. In fact, using our virtual robotic system we have been able to perform experiments involving invisible obstacles while preserving the real robot's integrity. Moreover, we can now perform a high fidelity experimental analysis of different rescue system configurations without the need to modify the actual robotic platform.

As future work we plan to investigate more deeply the interactions between the invisible obstacle detection process and the navigation and mapping processes. In particular, it would be interesting to represent invisible obstacles as dangerous or forbidden configurations inside the navigation world model, and to study how this different obstacle representation would impact the system performance.

References

1. J. Wang, M. Lewis, J. Gennari. USAR: A Game-Based Simulation for Teleoperation. Proc. 47th Annual Meeting of the Human Factors and Ergonomics Society (2003)
2. S. Carpin, A. Birk, M. Lewis, A. Jacoff. High fidelity tools for rescue robotics: results and perspectives. RoboCup International Symposium 2005 (2005)
3. J. Wang, M. Lewis, M. Koes, S. Carpin. Validating USARSim for use in HRI research. Proc. of the Human Factors and Ergonomics Society 49th Annual Meeting (2005) 457-461
4. A. Farinelli, G. Grisetti, L. Iocchi. SPQR-RDK: a modular framework for programming mobile robots. In Proc. of Int. RoboCup Symposium 2004 (2004) 653-660
5. J.L. Jones, A.M. Flynn. Mobile Robots - Inspiration to Implementation. A K Peters Ltd., Wellesley, Massachusetts (1993)
6. G.R. Scholz, C.D. Rahn. Profile Sensing with an Actuated Whisker. IEEE Transactions on Robotics and Automation, Vol. 20, No. 1 (2004) 124-127
7. D. Fox, W. Burgard, S. Thrun, A. Cremers. A hybrid collision avoidance method for mobile robots. In Proc. IEEE Int'l Conf. on Robotics and Automation (1998)
8. D. Calisi, A. Farinelli, L. Iocchi, D. Nardi. Autonomous Navigation and Exploration in a Rescue Environment. RoboCup International Symposium 2004 (2004)
9. S.M. LaValle, J.J. Kuffner. Randomized Kinodynamic Planning. In Proc. of IEEE Int'l Conf. on Robotics and Automation (1999)
