
    ROBOCUP 2009

    TEAMCHAOS DOCUMENTATION

Humberto Martínez Barberá, Juan José Alcaraz Jiménez, Pedro Cavestany Olivares,

David Herrero Pérez

    University of Murcia

Department of Information and Communications Engineering, 30100 Murcia, Spain

    Carlos III University

Department of Systems Engineering and Automation, 28911 Leganés, Spain


    Acknowledgements

For the elaboration of this document we have used material from past team members, whether from the original Team Sweden or its follow-up TeamChaos, all participating in the Four-Legged League with the different AIBOs. The list is rather large, so we strongly thank all of them for their contributions and effort.

In particular, this document is based on previous team description papers and some conference papers, from which we have borrowed an important amount of information. Although they appear in the references, we want to explicitly cite the following papers and give extra thanks to their authors:

A. Saffiotti and K. LeBlanc. Active Perceptual Anchoring of Robot Behavior in a Dynamic Environment. Int. Conf. on Robotics and Automation (San Francisco, CA, 2000), pp. 3796-3802.

P. Buschka, A. Saffiotti, and Z. Wasik. Fuzzy Landmark-Based Localization for a Legged Robot. Int. Conf. on Intelligent Robotic Systems (IROS), Takamatsu, Japan, 2000.

Z. Wasik and A. Saffiotti. Robust Color Segmentation for the RoboCup Domain. Int. Conf. on Pattern Recognition (ICPR), Quebec City, CA, 2002.

A. Saffiotti and Z. Wasik. Using Hierarchical Fuzzy Behaviors in the RoboCup Domain. In: C. Zhou, D. Maravall and D. Ruan, eds., Autonomous Robotic Systems, Springer, DE, 2003.

J.P. Cánovas, K. LeBlanc and A. Saffiotti. Multi-Robot Object Localization by Fuzzy Logic. Proc. of the Int. RoboCup Symposium, Lisbon, Portugal, 2004.


    Contents

1 Architecture
  1.1 Introduction
  1.2 ThinkingCap Architecture
  1.3 Naoqi
    1.3.1 Modules and brokers
    1.3.2 Main broker modules
    1.3.3 Adaptation
  1.4 Timing of periodic functions in OpenNao
  1.5 Communications
    1.5.1 CommsModule
    1.5.2 CommsManager
  1.6 Message

2 Locomotion
  2.1 Introduction
    2.1.1 Information received by the locomotion module
    2.1.2 Information provided by the locomotion module
    2.1.3 Class diagram
  2.2 Description of the robot Nao
  2.3 Navigation
    2.3.1 Foot orientation
    2.3.2 Foot position
  2.4 Kinematics
    2.4.1 State-machine of kinematics
    2.4.2 Feet motion
    2.4.3 Hip motion
  2.5 Direct kinematic problem
  2.6 Inverse kinematics
    2.6.1 Telescopic leg
    2.6.2 Leg length
    2.6.3 Foot orientation
    2.6.4 Combining previous results
  2.7 Future works

3 Perception
  3.1 PAM: Perceptual Anchoring Module
    3.1.1 Standard vision pipeline
    3.1.2 Experimental vision pipeline
  3.2 Color Segmentation
    3.2.1 Seed Region Growing
    3.2.2 Experiments
  3.3 Corner Detection and Classification
  3.4 Goal recognition
    3.4.1 Sighting the goal
    3.4.2 Case discrimination
    3.4.3 Goal location
    3.4.4 Filters
  3.5 Active Perception
    3.5.1 Perception-based Behaviour
    3.5.2 Active Perceptual Anchoring
    3.5.3 Implementation
    3.5.4 Experiments

4 Self-Localization
  4.1 Uncertainty Representation
    4.1.1 Fuzzy locations
    4.1.2 Representing the robot's pose
    4.1.3 Representing the observations
  4.2 Fuzzy Self-Localization
  4.3 Experimental results

5 Behaviours
  5.1 Low-level Behaviours
    5.1.1 Basic Behaviours
    5.1.2 Fuzzy Arbitration
    5.1.3 The Behaviour Hierarchy
  5.2 High-level Behaviours
    5.2.1 Hierarchical Finite State Machines
  5.3 Team Coordination
    5.3.1 Ball Booking
    5.3.2 Implementation
  5.4 The Players
    5.4.1 GoalKeeper
    5.4.2 Soccer Player

A ChaosManager Tools
  A.1 Introduction
  A.2 Vision
    A.2.1 Color Calibration
    A.2.2 General Configuration
  A.3 Game Monitor
  A.4 Game Controller

B HFSM Builder
  B.1 Using the Application
    B.1.1 Buttons and Menus
    B.1.2 Code Generation
  B.2 Building a Sample Automata
  B.3 Implementation
    B.3.1 File Format
    B.3.2 Class Diagrams


    Chapter 1

    Architecture

    1.1 Introduction

Creating a robotic football team is a difficult and challenging problem. Several fields are involved (low-level locomotion, perception, localization, behaviour development, communications, etc.), all of which must be developed to build a fully functional team. In addition, debugging and monitoring tools are also needed. In practical terms, this means that the software project can be very large. In fact, if we run the sloccount command [45] on our current robot code (that is, excluding the off-board tools), we get:

    [...]

    Total Physical Source Lines of Code (SLOC) = 60,987

    [...]

There are more than 60,000 lines of code. This shows that if we do not organise and structure the project, it can become very difficult to manage. We use the Eclipse IDE for programming and debugging and SVN for sharing code and documentation.

TeamChaos development is organised in three Eclipse and SVN projects: TeamChaos, ChaosDocs, and ChaosManager. TeamChaos contains all code related to the robot, ChaosManager is a suite of tools for calibrating, preparing memory sticks and monitoring different aspects of the robots and the games, and ChaosDocs contains all the available documentation, including team reports, team description papers, and RoboCup applications.

Communication is an important aspect because we can use external tools to carry out laborious tasks easily. Using ChaosManager we receive images from the robots, and we can refine the camera parameters and reconfigure the robot's camera while the robot is running. We can also teleoperate the robot for testing kicks or locomotion using the communication between the robots and ChaosManager.


We will study the special characteristics of the Nao robot and how TeamChaos has adapted its existing AIBO-based system to them. The operating system is OpenNao, a Linux-based distribution; on top of it runs a middleware layer called NaoQi, developed by Aldebaran Robotics, the company that builds the robot.

    1.2 ThinkingCap Architecture

Each robot uses the layered architecture shown in Figure 1.1. This is a variant of the ThinkingCap architecture, a framework for building autonomous robots jointly developed by Örebro University and the University of Murcia [25]. We outline below the main elements of this architecture:

Figure 1.1: The variant of the ThinkingCap architecture: the Commander, the Perceptual Anchoring Module, the Hierarchical Behaviour Module, the Global Map, the Hierarchical Finite State Machine and the Team Communication Module, arranged in a lower, middle, higher and communication layer, exchanging sensor data, motor commands, local and global state, behaviours and messages with the other robots.

The lower layer (commander module, or CMD) provides an abstract interface to the sensori-motor functionalities of the robot. The CMD accepts abstract commands from the upper layer and implements them in terms of actual motion of the robot effectors. In particular, CMD receives set-points for the desired displacement velocity <vx, vy, vθ>, where vx and vy are the forward and lateral velocities and vθ is the angular velocity, and translates them into an appropriate walking style by controlling the individual leg joints.

The middle layer maintains a consistent representation of the space around the robot (Perceptual Anchoring Module, or PAM) and implements a set of robust tactical behaviours (Hierarchical Behaviour Module, or HBM). The PAM acts as a short-term memory of the location of the objects around the robot: at every moment, the PAM contains an estimate of the position of these objects based on a combination of current and past observations with self-motion information. For reference, objects are named Ball, Net1 (own net), Net2 (opponent net), and LM1-LM6 (the six possible landmarks, although only four of them are currently used). The PAM is also in charge of camera control, by selecting the fixation point according to the


current perceptual needs [34]. The HBM realizes a set of navigation and ball control behaviours, and it is described in greater detail in the following sections.

The higher layer maintains a global map of the field (GM) and makes real-time strategic decisions based on the current situation (situation assessment and role selection are performed in the HFSM, or Hierarchical Finite State Machine). Self-localization in the GM is based on fuzzy logic [10] [17]. The HFSM implements a behaviour selection scheme based on finite state machines [20].

    Radio communication is used to exchange position and coordination informationwith other robots (via the TCM, or Team Communication Module).

This architecture provides effective modularisation as well as clean interfaces, making it easy to develop its different parts. Furthermore, its distributed implementation allows each module to be executed on a computer or on the robot indifferently. For instance, the low-level modules can be executed onboard the robot and the high-level modules can be executed offboard, where some debugging tools are available. However, a distributed implementation generates serious synchronisation problems: it causes delays in decisions, and the robots cannot react fast enough to dynamic changes in the environment. For this reason we have favoured the implementation of a mixed-mode architecture: at compilation time it is decided whether it will be a distributed version (each module is a thread, Figure 1.1) or a monolithic one (the whole architecture is one thread and the communication module is another, Figure 1.2).

Figure 1.2: Implementation of the ThinkingCap architecture: the control implementation groups the Commander, Perceptual Anchoring Module, Hierarchical Behaviour Module, Global Map and Hierarchical Finite State Machine, while the communication implementation (Team Communication Module) exchanges messages with the other robots.

    1.3 Naoqi

Naoqi is a distributed environment that can manage as many binary executables as the user desires, depending on the chosen architecture. However, the particularities of its design make us consider carefully the way our system is going to sit on top of this middleware.


Figure 1.3: The stages of the booting process: BIOS, GRUB, kboot, Linux, NaoQi (main broker) and finally our ChaosModule with its RobotManager.

Some features, such as SOAP communications between modules, could make a programmer's task more comfortable in a general environment, but this is not the reality for the teams participating in RoboCup, since the target is to achieve maximum performance and this feature adds a fatal delay to our system.

Before launching Naoqi (with our module ChaosModule), the system goes through several phases in the booting process, shown in Figure 1.3. First, the motherboard initialisation is carried out; then GRUB is launched to load kboot, a minimal Linux image used as a workaround for the lack of BIOS support for the NAND controller. Finally, the operating system is loaded and, on top of it, the NaoQi middleware, which automatically launches our ChaosModule.

NaoQi proposes a structure of modules and brokers to organize the different pieces of software running on the robot. The standard implementation of NaoQi only includes a main broker and several modules to manage the different sensors and actuators of the robot. Next, we describe these architectural units of NaoQi.

1.3.1 Modules and brokers

The brokers are executables with a server listening on an IP address. All brokers are connected to other brokers except the main one.

    The modules are classes derived from ALModule that can be loaded by a broker.

Both brokers and modules can be linked locally or remotely. When we link them remotely, they communicate with the rest of the system by SOAP, which makes the communication tasks rather heavy. The main advantage of this approach is that we can connect to the system from a machine other than the one running NaoQi, as long as both machines are connected.

The advantage of linking the modules or brokers locally is the high efficiency, since we get rid of SOAP.


In the same way, if we use brokers instead of modules, we will pay a higher computational cost. The positive part is that if the code belonging to a broker has an error, that broker will fall but the rest of the brokers will continue running. On the other hand, if our code is associated to the main broker, the whole NaoQi system will collapse and the robot could suffer some damage. After such a failure, a reboot of NaoQi is required.

    1.3.2 Main broker modules

The standard version of Naoqi includes some default modules to interact with the robot. Three of these modules are necessary to make the system work (ALLogger, ALMemory and ALPreferences), while the rest of them are intended to make the use of the robot easier. Next, we briefly describe these modules.

    ALLogger: manages the use of the logger to store information, warnings and errors.

ALMemory: can be used to share variables within a module or between different modules. Any module can add or request variables from ALMemory. This module is thread-safe, so the modules which use ALMemory do not have to care about mutexes or semaphores. The data registered in ALMemory are of type ALValue, which is a generic type able to store bool, int, float, double, string and ALValue vectors. In order to obtain the value of a variable registered in ALMemory one can either consult the value or ask ALMemory to notify when the value of the variable changes (a new thread is created). Among the values stored in ALMemory one can find the sensor measurements, actuator values and error codes of the boards distributed along the robot (a usage sketch is given after this list).

ALPreferences: checks the values assigned to some configuration parameters of the system. These values are taken from the configuration files in the Naoqi Preferences directory when the system boots.

    ALAudioPlayer: plays audio tracks in mp3 or wav.

ALLeds: turns on, turns off or sets the intensity level of several LEDs. One of the option variables lets you access the LEDs by groups. Some of these groups are predefined (e.g., all the LEDs of the right foot), but you can create new groups if you want to.

ALMotion: is the module which manages the locomotion tasks. Its main function is to set the opening angles of the joints and to request the values of those angles. When setting the value of an angle, either the speed of the joint to reach that value or the time to get to that position can be indicated. There are some more complex functions that can be invoked through this module, such as the modification of the torso orientation or the position of the Center Of Mass. The most powerful functions are, however, those that make the robot walk forward, walk laterally or turn some angle over itself. These functions have parameters like the distance to walk, the height and length of the step or the maximum lateral movement of the hip.


ALTextToSpeech: reads a text and plays the sentences. It sends commands to the voice synthesizer engine and configures the voice. It can be used to change the language or to generate a double-voice effect (usually employed by cinema robots).

ALVision: communicates with the camera placed in the head of the robot. It can be used to change several parameters of the camera, such as the gain or the white balance, or to correct some of the aberrations of the lens.

ALUltraSound: modifies the polling period of the ultrasound sensors. This parameter must be above 240 ms. This module cannot be used to read the values of the sensors; for that purpose the ALMemory module can be consulted.

ALWatchDog: performs several monitoring tasks:

Monitors the level of charge of the battery and changes the colour of the LED placed on the chest according to it.

If the battery level is low, disables locomotion and makes the robot adopt a safe pose.

Plays "Heat" if the body of the robot overheats.

Manages the pressing of the chest button in the following way:

One press: Nao says hello, identifies itself and announces its IP.

Two presses: disables the stiffness of all motors smoothly.

Three presses: shuts down the system properly.

DCM: this module is architecturally below the rest; that is, the rest of the modules use it. It provides a way to communicate with the robot at a lower level. For instance, if one wants to move a joint, instead of using the locomotion module one can send a command to the DCM module ordering it to set a value for the actuator to be moved. The timing information must be added to the order. Therefore, the DCM can be used to have deeper control over the hardware, but the interface is far more complex.
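To make the two most common interactions concrete, the sketch below reads a sensor value from ALMemory and sends one timed actuator command through the DCM, using the generic ALProxy call interface. It is only an illustration under assumptions: the helper function is invented, and the memory keys, the DCM command layout ([actuator key, update mode, [[value, time]]]) and the proxy signatures should be checked against the NaoQi version in use.

```cpp
#include <alcommon/albroker.h>
#include <alcommon/alproxy.h>
#include <alvalue/alvalue.h>
#include <boost/shared_ptr.hpp>
#include <string>

// Sketch: read one sensor value from ALMemory and send one timed joint command
// through the DCM. Key names and exact signatures depend on the NaoQi release.
void memoryAndDcmSketch(boost::shared_ptr<AL::ALBroker> broker) {
  boost::shared_ptr<AL::ALProxy> memory = broker->getProxy("ALMemory");
  boost::shared_ptr<AL::ALProxy> dcm    = broker->getProxy("DCM");

  // ALMemory stores ALValues; here we ask for one (assumed) sensor key.
  float headYaw = memory->call<float>(
      "getData", std::string("Device/SubDeviceList/HeadYaw/Position/Sensor/Value"));

  // A DCM order carries its own timing: [actuator key, update mode, [[target, time]]].
  int when = dcm->call<int>("getTime", 500);  // DCM time 500 ms from now
  AL::ALValue command;
  command.arrayPush(std::string("Device/SubDeviceList/HeadYaw/Position/Actuator/Value"));
  command.arrayPush(std::string("ClearAll"));
  AL::ALValue timedTargets;
  timedTargets.arrayPush(AL::ALValue::array(-headYaw, when));  // mirror the current angle
  command.arrayPush(timedTargets);
  dcm->callVoid("set", command);
}
```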

    1.3.3 Adaptation

In our project, we have adopted a design based on two different configurations for the system. Both rely on a main class, RobotManager, that has instances of all the proxies of the modules that are going to be used (for instance: DCM, ALMemory, ALMotion, ALWatchdog...). The first of these configurations is intended for daily development and for the competition versions. The whole system is compiled as a dynamic library that represents a module associated to the main broker of NaoQi and that is loaded automatically in the booting stage. In this way, when NaoQi boots it loads our module and calls a function StartPlaying that sets the robot in a ready-to-play state (robot standing, head turning and looking for the ball). When a signal is received (either telematically or by pressing the chest button), the robot starts playing.


Figure 1.4: Maximum performance configuration: the ChaosModule is loaded inside NaoQi (main module and ALProxies) running on the Nao.

    Figure 1.5: Remote debugging configuration.


Figure 1.6: RobotManager adapts the ThinkingCap architecture (pam, comms, gm, ctrl, cmd) to the NaoQi modules (DCM, NaoCam, ALWatchDog, ALMemory).

The second of these configurations is meant for deep debugging activities. The system is compiled as an executable that instantiates the class RobotManager, and this class connects to the modules that it needs remotely (through SOAP). This configuration is not adequate for competitions, because SOAP communications are very heavy and make fluid communication between modules impossible. However, this configuration can be used to execute our code on another machine, making the debugging tasks easier. In the same way, if our program goes wrong it does not take the other NaoQi modules with it, which can be important.

    1.4 Timing of periodic functions in OpenNao

Our target is to process as much data as possible from the sensors (especially from the camera) and to generate frequent decisions of as much quality as possible. However, the capacity of the processor is limited, and the code written by TeamChaos must share it with other processes related to NaoQi.

Therefore, it is important to keep the load of the processor at a controlled level, in order to avoid its saturation. The consequences of processor saturation could be dramatic. One of them is the shaking of the robot when it moves: since the locomotion module is designed to generate sequences of movements periodically, if this period is not precise, the movements of the joints of the robot do not match the ones calculated. As a result, the robot is very likely to fall while moving.

Moreover, if the robot shakes, the battery consumption rises. Every time an engine starts a movement there is a high peak of power consumption, and in shaking


situations the engines are starting and stopping in every cycle, instead of rotating in an approximately constant, fluid way. As a result, the life of the battery decreases drastically. Finally, since all the joints receive their movement commands at the same time, all the engines generate a consumption peak at the same instant, which can destabilize the electric system of the robot. If the battery is not very charged or if the consumption peak is very high, the current supplied to the processor can be less than the one it needs. Then, the operating system reboots suddenly, causing an instant blackout of the robot.

One of the measures taken by TeamChaos in order to avoid this problem has been to reduce as much as possible the load of the processor in every cycle. To achieve this target, the modules loaded by NaoQi have been reduced to the bare minimum. Some work is still being done in this field, and the final goal is to use only those modules necessary for NaoQi execution (ALMemory, ALLogger and ALPreferences) and the DCM module, which is used to interact with the robot at a lower level.

However, there is another cause of deterioration of the period of the locomotion activities: concurrency. Since OpenNao is not a real-time operating system, it does not ensure an accurate execution of the timing orders sent to the system. Nevertheless, this lack of accuracy is not very important in single-thread processes. The problem then is to fit the ThinkingCap architecture into a single-thread process. One solution could be to employ a long cycle period that allows all the modules of the ThinkingCap architecture to execute once per cycle.

The problem with this approach is that the duration of the cycle would be too long to generate smooth movements of the robot. For instance, if we only executed five cycles per second, the walking style of the robot would have to be defined by five values of the body joints per second, and this is not enough. The other solution is to decrease the execution period of the main cycle, and to execute only some of the modules together with the locomotion module in every cycle. After a few cycles all the modules have been executed at least once, and then the process continues.
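The interleaving just described (and summarised in Figure 1.7) can be sketched as a single-threaded loop that runs the locomotion-critical calls every cycle and spreads the heavier modules over a supercycle. This is only a sketch: the module types are stubs, and the 20 ms period and four-cycle split are taken from the text and the figure rather than from the actual TeamChaos scheduler.

```cpp
#include <unistd.h>  // usleep

// Stubs standing in for the real modules; only the scheduling pattern matters here.
struct Cmd  { void updateSensors() {} void updateMotion() {} };
struct Pam  { void updateOdometry() {} void updateImage() {} void setNeeds() {} };
struct Ctrl { void process() {} };
struct Gm   { void process() {} };

// Single-threaded supercycle of four 20 ms locomotion cycles: the commander and the
// odometry run every cycle, the image is processed every other cycle, and the
// behaviour control (ctrl) and the global map (gm) run once per supercycle.
void runSupercycle(Cmd& cmd, Pam& pam, Ctrl& ctrl, Gm& gm) {
  const int cyclesPerSupercycle = 4;
  const useconds_t cyclePeriodUs = 20000;  // 20 ms locomotion period
  for (int cycle = 0; ; cycle = (cycle + 1) % cyclesPerSupercycle) {
    cmd.updateSensors();
    pam.updateOdometry();
    if (cycle % 2 == 0) pam.updateImage();             // every other cycle
    if (cycle == 1) { ctrl.process(); pam.setNeeds(); }
    if (cycle == 3) gm.process();
    cmd.updateMotion();
    usleep(cyclePeriodUs);  // a real loop would subtract the time already spent
  }
}
```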

    1.5 Communications

The communication module, or COMMS, is the module responsible for the message exchange between a robot and its environment. One of its tasks is to define the format of the messages that the robots are going to exchange among themselves and with the debugging and monitoring tool (ChaosManager). Another task of this module is to manage the sending and receiving of those messages.

Since the system is based on a Linux platform, the communications employ sockets. These sockets can use either the TCP or the UDP protocol; we choose one or the other according to the type of message.

In general, UDP is used for all the messages that the robots exchange among themselves, while TCP is the protocol employed for the communications between the robots and ChaosManager.

Figure 1.7: Supercycle of global execution. Every row is a locomotion cycle: cmd->updateSensors, pam->updateOdometry and cmd->updateMotion run in every cycle, pam->updateImage runs every other cycle, and ctrl->process, pam->setNeeds and gm->process run once per supercycle.

There are three main classes in the communications module which define its functionality. These classes are CommsModule, CommsManager and Message.

    1.5.1 CommsModule

This class defines the interface that must be implemented by all the classes which can send and receive messages. Basically, this interface includes an identification of the class which will use the interface, a function to assign the main module (CommsManager) and the definition of the prototype of the message reception function.

In order to be able to receive messages, a class must inherit from CommsModule and register in CommsManager.
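A minimal sketch of what such an interface could look like is shown below. The member names are illustrative, not the actual TeamChaos declarations.

```cpp
#include <string>

class Message;        // see section 1.6
class CommsManager;   // see section 1.5.2

// Sketch of the receiver interface: an identifier used for dispatching, a hook to
// attach the manager, and the reception callback every registered class implements.
class CommsModule {
public:
  explicit CommsModule(const std::string& id) : id_(id), manager_(0) {}
  virtual ~CommsModule() {}

  const std::string& id() const { return id_; }
  void setManager(CommsManager* manager) { manager_ = manager; }

  // Called by CommsManager when a complete message for this module has arrived.
  virtual void receiveMessage(const Message& msg) = 0;

protected:
  std::string id_;
  CommsManager* manager_;
};
```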


Figure 1.8: Associations in the communication system: pam, gm, ctrl and cmd inherit from CommsModule and register with CommsManager.

    1.5.2 CommsManager

This is the main class in the communication process. Once instantiated, it creates two threads, one for listening to the UDP messages and the other for the TCP ones. In the same way, it attends the sending requests from the classes which want to send messages.

In order to establish the TCP communication, it is first necessary that a client (ChaosManager) connects, and then the messages are transmitted as a data flow. That is, neither of the two parties knows when a message starts or finishes. To figure it out, the CommsManager class consults the field in the header of the messages that indicates the total size of the message and relies on the TCP property of not altering the order of the bytes.

Therefore, if the messages are big or there is a saturation situation, a message fragment assembling problem is likely to appear. It is the Message class which puts together the fragments and counts the remaining bytes needed to complete the message. We will talk about it in the following section.

    1.6 Message

The Message class defines the way to store the message data and provides methods for message creation and fragment assembling. Each type of message is defined in a different class, and they all must inherit from Message.

There are two different cases of creation of a Message object. During the reception process, a Message is created from the data flow, and during the sending one, the message is created out of a set of fields.

In this way, when a datagram is received, the CommsManager class employs the static method parseMessage of the Message class to get an object of type Message which matches that data flow. In order to interpret the flow of bytes, the Message class analyzes the first bytes of the datagram, which are the header ones, and depending on them the type and size of the message are set. Once the type of message is known, the bytes of the data flow are assigned to the fields of the corresponding message type object.


Type | Module | Content
MsgBallBooking | ctrl | Intention to catch the ball.
MsgBehaviour | ctrl | Odometric and perceptive estimations.
MsgDebugCtrl | ctrl | Debugging state request for the behaviours module.
MsgDebugGm | gm | Debugging state request for the localization module.
MsgFile | ctrl and pam | File.
MsgGetFile | pam | File request.
MsgImage | pam | Formatted image from the camera.
MsgRunKick | ctrl | Name of the behaviour to execute.
MsgTeleoperation | cmd | Movement orders for the head or the body of the robot.
MsgTeleoperationEnable | cmd | Teleoperation enabling or disabling.
MsgVideoConfig | pam | Request for video (sequence of formatted images).
MsgGetImage | pam | Request for a formatted image.
MsgGrid | gm | Localization algorithm details.

Table 1.1: Types of messages.

Thanks to the knowledge of the message size, it can be figured out whether any fragments of the message remain.

Size (Bytes) | Field | Description
4 | MSGID | Message type ID
4 | DESTINYMODULE | Message subtype
4 | SOURCEID | Origin robot ID
4 | DESTINYID | Destiny robot ID
4 | TEAMCOLOUR | Origin and destiny robots' team ID
4 | COMPRESSED | Compression data flag
4 | LENGTHPAYLOAD | Payload size
Variable | PAYLOAD | Payload

Table 1.2: Message format.

If, after the reception of a datagram, the Message object that is created indicates that there are some bytes left, the next datagram is assigned again to that object and the cycle is repeated. Once the complete message is received, the CommsManager class forwards the message to the class which is registered as receiver of this type of messages.

Therefore, it can happen that a data flow contains both the end of one message and the beginning of another. This situation is detected by the Message class during the fragment assembling process. In this case, a new Message object is created to manage the extra bytes.
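The framing just described can be illustrated with a small sketch that reads the fixed-size header of Table 1.2 from the byte stream and accumulates payload bytes until the message is complete. The field sizes follow the table; everything else (names, flat layout, matching endianness) is an assumption for illustration.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Header layout from Table 1.2: seven 4-byte fields followed by the payload.
struct MsgHeader {
  uint32_t msgId, destinyModule, sourceId, destinyId, teamColour, compressed, lengthPayload;
};
const size_t kHeaderSize = 7 * sizeof(uint32_t);

// Sketch of fragment assembling: feed() consumes bytes from the TCP stream and
// returns true once header plus payload have been received; surplus bytes that
// belong to the next message are handed back in 'leftover'.
class MessageAssembler {
public:
  bool feed(const uint8_t* data, size_t len, std::vector<uint8_t>& leftover) {
    buffer_.insert(buffer_.end(), data, data + len);
    if (buffer_.size() < kHeaderSize) return false;           // header not complete yet
    MsgHeader header;
    std::memcpy(&header, &buffer_[0], kHeaderSize);           // assumes matching endianness
    const size_t total = kHeaderSize + header.lengthPayload;
    if (buffer_.size() < total) return false;                 // payload still missing
    leftover.assign(buffer_.begin() + total, buffer_.end());  // start of the next message
    buffer_.resize(total);                                    // keep exactly one message
    return true;
  }
private:
  std::vector<uint8_t> buffer_;
};
```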


    Figure 1.9: TCP messages reception


The message sending process from the robot is not as complex as the receiving one. A Message-derived class is chosen according to the type of information that is going to be sent. Then, the constructor of that class takes care of setting all the message fields using the information provided as parameters. Finally, the sendMessage method is employed in order to send this message to the network.


    Chapter 2

    Locomotion

    2.1 Introduction

In this chapter we describe the locomotion system, that is, the trajectory generation process that makes the robot move.

Biped walking is a research area in which much development has been made in the last three decades. The principles of this research were established by Vukobratović in 1969, who analyzed the walking style and set the criterion that makes a walk stable.

The problem with making a robot walk like a human is that a technological approximation must be made of functions and characteristics typical of human beings. This is a rather complex task and simplifications must be used. The locomotion system is based on a position control system and is integrated in the ThinkingCap architecture [?]. First the perception module analyzes the data from the sensors of the robot, then these data are integrated in a global map scenario, and finally this scenario is analyzed by the behaviour module, which generates the target position, that is, the place where the robot must go.

The goal of the locomotion system that we are going to describe is to take the target position for the robot calculated by the behaviour module and generate the positions for the joints that will move the robot to that position.

In order to prevent the robot from falling, the dynamic stability criterion is employed. This criterion establishes that the ZMP (Zero Moment Point) must fall within the support polygon of the robot. The position of the ZMP is calculated by computing both inertial and gravity forces.

    2.1.1 Information received by the locomotion module

The module that generates the information that the locomotion module employs is the behaviour module. This module specifies the target position for the robot, which contains the following information: the relative position of the robot in the horizontal


plane, the orientation of the robot and the layout of its feet. This information can be modified at any time. In the meantime, the target position and orientation are updated with the odometry information.

The position in the plane is specified in Cartesian coordinates and the orientation with an angle.

The feet layout specifies the position of the feet relative to the destiny point of the robot. For example, in order to kick the ball with the right leg it is necessary to put the left leg next to the ball on the side and the right leg behind it. Every kick type has its own feet layout.

    2.1.2 Information provided by the locomotion module

While the information received from the behaviour system can be updated at any time, the output of the locomotion module must be ready at a fixed frequency. This output is an array with the values that we want to set for the 21 joints of the robot. In our system, we deliver the joint values every 20 ms to the low-level commander.

    2.1.3 Class diagram

Several classes are involved in the locomotion process: Navigator, Locomotion, ZMP, COM, KinematicTransform, MovementSequences, InertialUnit, SwingingFoot and FootTargetFitting.

There are two main classes which rise above the rest: Navigator and Locomotion.

Navigator is the one that takes the orders from the control module and performs the following actions:

1. Calculates which is going to be the next position of the hip (LEFT, CENTER, RIGHT).

    2. Generates the next position of a foot in order to reach the final target position

3. Detects when the robot has reached a position where kicking the ball is possible.

    Locomotion performs several tasks:

1. Checks if the robot has fallen and orders the getUpAction.

    2. Coordinates the moving hip and the moving foot actions.

3. Orders the kicking action when informed that the ball is within reach.

Both Locomotion and Navigator make use of the other classes to perform their actions. Next, brief information about the other classes:


Figure 2.1: Different feet layouts (standard layout, frontal right leg kick, lateral right leg kick) are used to specify the target position of the feet relative to the destiny point. We will usually identify the destiny point with the ball.


2.2 Description of the robot Nao

The mechanical arrangement of the links of the robot is shown in Figure 2.3. Where the links meet you find the joints, all of them rotational. Table 2.1 lists the names and rotation limits of those joints.

    Figure 2.3: Dimensions of the robot.

Other interesting data of the robot are the masses of the links, which will be useful in order to obtain the center of mass of the robot. The masses of the parts of the robot can be consulted in Table 2.2. In the same way, the lengths of the links can be consulted in Table 2.3.

Concerning the sensory equipment, in the Nao you can find:

Four force sensors in each of the feet, which can be used to get the COP.

One inertial unit in the torso of the robot. It contains a processor, two gyrometers and three accelerometers.


Name of kinematic chain | Name of joint | Movement | Range [deg]
Head | HeadYaw | Head joint twist | -120 to 120
Head | HeadPitch | Head joint front back | -45 to 45
Left arm | LShoulderPitch | Left shoulder joint front back | -120 to 120
Left arm | LShoulderRoll | Left shoulder joint right left | 0 to 95
Left arm | LElbowYaw | Left shoulder joint twist | -120 to 120
Left arm | LElbowRoll | Left elbow joint | -90 to 0
Left leg | LHipYawPitch* | Left hip joint twist | -90 to 0
Left leg | LHipRoll | Left hip joint right left | -25 to 45
Left leg | LHipPitch | Left hip joint front and back | -100 to 25
Left leg | LKneePitch | Left knee joint | 0 to 130
Left leg | LAnklePitch | Left ankle joint front back | -75 to 45
Left leg | LAnkleRoll | Left ankle joint right left | -45 to 25
Right leg | RHipYawPitch* | Right hip joint twist | -90 to 0
Right leg | RHipRoll | Right hip joint right left | -45 to 25
Right leg | RHipPitch | Right hip joint front and back | -100 to 25
Right leg | RKneePitch | Right knee joint | 0 to 130
Right leg | RAnklePitch | Right ankle joint front back | -75 to 45
Right leg | RAnkleRoll | Right ankle joint right left | -25 to 45
Right arm | RShoulderPitch | Right shoulder joint front back | -120 to 120
Right arm | RShoulderRoll | Right shoulder joint right left | -95 to 0
Right arm | RElbowYaw | Right shoulder joint twist | -120 to 120
Right arm | RElbowRoll | Right elbow joint | 0 to 90

Table 2.1: Name and range of joints.

Part of the body | Mass [g]
Chest | 1217.1
Head | 401
Upper Arm | 163
Lower Arm | 87
Thigh | 533
Tibia | 423
Foot | 158
Total | 4346.1

Table 2.2: Mass of the parts of the robot.


Link name | Length [mm]
NeckOffsetZ | 126.5
HipOffsetZ | 85
ShoulderOffsetY | 98
HipOffsetY | 50
UpperArmLength | 90
ThighLength | 100
LowerArmLength | 135
TibiaLength | 100
Vladivostoks | 100
FootHeight | 46
Gravesides | 71.7
Comrades | 49.1

Table 2.3: Dimensions of the links of the robot.

Two ultrasound sensors, which can be used to estimate the distance to obstacles in the robot's environment. The detection range goes from 0 to 70 cm, with a dead band under 15 cm.

A 30 frames per second video camera placed on the head of the robot. The resolution of the camera is 640x480.

Apart from these sensors, the robot also has several LEDs of different colours in the eyes, ears, torso and feet. Moreover, there are two microphones and two speakers at the sides of the head.

    2.3 Navigation

With the information obtained from the behaviour module, the final target position of the feet can be found. However, several steps are usually necessary to reach that position. The task of the navigation stage is to find the position of a foot in the next step.

First, all the information of the behaviour module must be combined, that is, the relative target position and orientation of the robot and the feet layout must be taken into account in order to find the goal position of the feet. Once this is accomplished, the position and orientation of the foot which is going to perform the next step are calculated.

    2.3.1 Foot orientation

The first parameter to calculate is the turning angle of the foot. Since we know the target relative position of the foot, we know its target relative angle, and we try to set it in the next step, so that the foot is aligned after the step and the robot movement is as straight as possible afterwards.

However, it is desirable for the robot to turn smoothly, and that is the reason to introduce a maximum turning angle, which limits the value of this turn.
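A minimal sketch of that limitation, with an assumed maximum turn per step:

```cpp
#include <algorithm>

// Clamp the desired foot turning angle to the configured maximum per step, so the
// robot spreads a large turn over several steps (the 0.35 rad limit is an assumption).
double limitFootTurn(double desiredAngleRad, double maxTurnPerStepRad = 0.35) {
  return std::max(-maxTurnPerStepRad, std::min(maxTurnPerStepRad, desiredAngleRad));
}
```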


    2.3.2 Foot position

At this point, the target position of the foot and its orientation are available. Then, a feasible step is calculated to make the moving foot reach a position as near to the target one as possible while respecting the kinematic constraints of the robot.

In order to get a feasible foot destiny, three main restrictions have been considered:

1. The feet cannot collide; a minimum separation between both feet must be kept.

2. There must exist a valid configuration of the joints of the robot that makes the foot reach that destiny.

3. Given a destiny position for a foot, a worst-case movement of the hip to set the CoP on that foot is simulated, and the resulting joint configuration must be valid.

The CPU cost of checking these restrictions and finding a proper foot position is quite high, so it was decided to store this calculation in a look-up table (LUT).

    LUT generation

The generated LUT has the following entries:

    1. Feet angle, that is, orientation of the floating foot relative to the support foot.

    2. Angle of the destiny point relative to the support foot.

    For every entry of such a table the following data are stored:

1. A boolean that indicates if there is any valid destiny point in the direction of that angle.

2. The minimum and maximum valid distances from the support foot to the destiny point.

With the previous information a similar table is generated without the entry of the angle of the destiny point relative to the support foot. This table is employed to store the minimum and maximum angles of the destiny point.
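The layout of such a table and its query could look like the sketch below. The discretisation step and the field names are assumptions; the stored data follow the lists above.

```cpp
#include <cmath>
#include <vector>

// One LUT cell: whether any destiny point is reachable in that direction and, if so,
// the minimum and maximum valid distances from the support foot.
struct StepCell {
  bool valid;
  double minDist;
  double maxDist;
};

// Table indexed by (relative orientation of the floating foot, direction of the
// destiny point), both discretised into a fixed number of angular bins.
class StepLut {
public:
  StepLut(int orientationBins, int directionBins)
      : orientationBins_(orientationBins), directionBins_(directionBins),
        cells_(orientationBins * directionBins) {}

  StepCell& at(double feetAngleRad, double destinyAngleRad) {
    return cells_[bin(feetAngleRad, orientationBins_) * directionBins_ +
                  bin(destinyAngleRad, directionBins_)];
  }

private:
  static int bin(double angleRad, int bins) {
    double normalized = (angleRad + M_PI) / (2.0 * M_PI);  // map [-pi, pi) onto [0, 1)
    int idx = static_cast<int>(normalized * bins);
    return idx < 0 ? 0 : (idx >= bins ? bins - 1 : idx);
  }
  int orientationBins_, directionBins_;
  std::vector<StepCell> cells_;
};
```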

    Fitting process

In the following lines we describe the process of transforming a final foot position into a feasible intermediate step.

The fitting algorithm is illustrated in Figure 2.4.


    Figure 2.4: Fitting algorithm


    2.4.1 State-machine of kinematics

The walking process is just one part of the locomotion of the robot. Other features of the locomotion include kicking the ball or getting up from the floor after falling down. The state machine displayed next shows how these actions are linked.

Figure 2.6: Locomotion state machine, with the states STILL, DOUBLE SUPPORT, SINGLE SUPPORT and PLAY SEQUENCE and the transitions go to target position, target position reached, kick, sequence finished and robot fallen.

At the beginning, the robot is in the STILL state. When a target position is received, the next state is DOUBLE SUPPORT. In this state, the robot moves its hip to prepare the COM to enter the SINGLE SUPPORT state. When that moment arrives, the foot which does not support the robot is free to move to its target position. The time spent in the single support state must be long enough to let the swinging foot reach its destiny. Then the double support state moves the COM of the robot in order to prepare it for the next single support state.

Every time a single support stage reaches its end, the trajectory of the foot that has been supporting the robot is calculated. Once this trajectory is calculated, the time the foot will take to complete it is available. With this information we can generate the trajectory of the hip that will move the support of the robot to the other foot and keep it there for the necessary time.

When the robot has reached its destiny position it stops. If the goal of this position is to kick the ball, the active state becomes the play sequence one. In this state, the joints of the robot take the values of a pre-recorded sequence. There are different sequences for every kick type. The other type of sequences are the getting-up ones. The difference with the previous ones is that they are not necessarily launched when the robot is still, but at any time the robot falls, regardless of the state.
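The transitions of Figure 2.6 can be condensed into a small sketch; the state names follow the figure, while the triggering conditions are placeholders for the real checks.

```cpp
// States of the locomotion state machine of Figure 2.6 (sketch only).
enum KinematicsState { STILL, DOUBLE_SUPPORT, SINGLE_SUPPORT, PLAY_SEQUENCE };

KinematicsState nextState(KinematicsState s, bool targetReceived, bool targetReached,
                          bool kickRequested, bool swingFinished, bool sequenceFinished,
                          bool robotFallen) {
  if (robotFallen) return PLAY_SEQUENCE;  // getting-up sequences fire from any state
  switch (s) {
    case STILL:
      if (kickRequested) return PLAY_SEQUENCE;         // pre-recorded kick sequence
      return targetReceived ? DOUBLE_SUPPORT : STILL;
    case DOUBLE_SUPPORT:                                // COM being shifted onto one foot
      return targetReached ? STILL : SINGLE_SUPPORT;
    case SINGLE_SUPPORT:                                // swinging foot in the air
      return swingFinished ? DOUBLE_SUPPORT : SINGLE_SUPPORT;
    case PLAY_SEQUENCE:
      return sequenceFinished ? STILL : PLAY_SEQUENCE;
  }
  return s;
}
```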


    2.4.2 Feet motion

The feet trajectories should have two main properties: speed and smoothness. The algorithm employed to generate the trajectories has been designed to be very flexible. In this way, the trajectory can be calibrated depending on the floor conditions so that it is as fast and smooth as possible.

    These are the parameters employed to configure the feet trajectory:

    1. STEP-AMPLITUDE: maximum height of the step.

2. SAFETY-FLOOR-DISTANCE: minimum vertical distance to the floor before starting horizontal and turning movements.

3. STEP-TIME-EXPANSION: if a trajectory takes a given time to move the foot, a percentage of that time can be added to the trajectory and employed to keep the foot on the floor at the beginning and at the end of the movement.

4. Maximum and minimum accelerations and decelerations for the rising, landing, turning and horizontal movements of the foot.

The first calculations to be done are the rising and landing trajectories, which take the foot from the floor to the maximum height and back to the floor. With this information, we can get the minimum time spent from the point where the rising trajectory exceeds the safety floor distance to the point where the landing trajectory goes below it again. Then, the horizontal and turning trajectory times are calculated. The maximum of these three amounts of time is the one that will be employed both for the horizontal and the turning movements.

New horizontal and landing accelerations and decelerations are then calculated to employ the defined amount of time.
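The timing rule just described can be sketched as follows: take the longest of the three phase durations and stretch the other movements to it. The helper for recomputing the acceleration assumes a simple symmetric accelerate-then-decelerate profile, which is only one possible choice.

```cpp
#include <algorithm>

// Durations of the three phases computed first from the maximum accelerations.
struct StepTiming {
  double airborneTime;    // from leaving the safety floor distance to re-entering it
  double horizontalTime;  // time needed by the acceleration-limited horizontal profile
  double turningTime;     // time needed by the acceleration-limited turning profile
};

// The common duration used for the horizontal and turning movements.
double commonStepDuration(const StepTiming& t) {
  return std::max(t.airborneTime, std::max(t.horizontalTime, t.turningTime));
}

// For a symmetric accelerate-then-decelerate profile covering 'distance' in 'duration',
// the acceleration that exactly fills the time is a = 4 d / T^2.
double accelerationForDuration(double distance, double duration) {
  return 4.0 * distance / (duration * duration);
}
```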

    2.4.3 Hip motion

The goal of moving the hip is to make one of the feet support the robot for enough time so that it can perform a step with the other foot.

In order to design a trajectory for the hip that moves the support of the robot to one of the feet, we will calculate the trajectory of the COM that places the ZMP on that foot. Then, we will calculate the positions of the joints of the support leg that set the COM in the required positions.

    Trajectory of the COM

First we will design the single support stage that places the ZMP on the desired foot for exactly the time that the other foot takes to complete its trajectory.


Figure 2.7: In straight frontal walking, this is the aspect of the trajectory of the foot in the sagittal plane (x versus z). Every point is separated 20 ms from the next one. It should be noticed that when the foot is rising or landing the points are closer, which indicates slower motion.


Figure 2.8: Vertical and horizontal foot trajectories (vertical foot trajectory, z versus time; frontal foot trajectory, x versus time).


Once the single support trajectory is calculated, we will proceed with the double support one, whose goal is to place the COM in the initial position and speed required by the single support trajectory.

Let us suppose that we are going to move the right foot. That means that the single support stage of the right foot has just finished and that we are entering the double support stage right now. We will first calculate the single support trajectory that will place the ZMP on the left foot for the time necessary to move the right foot (t_r).

As we said before, in order to calculate the position of the ZMP we are going to take into account both the inertial and the gravity forces.

We will employ the projection of the left ankle on the floor as the origin of our reference system. The place where we want to set the ZMP is not necessarily in the middle of the foot, so we will define a parameter called z_off to set the point where the ZMP will lie.

Given the height of the COM above the floor (H) and the distance from the current position of the COM to the origin (d), we are interested in finding the acceleration that we have to apply to the COM to place the ZMP at the point z_off.

If we did not take into consideration the inertial forces, the gravity force would place the ZMP right below the current position of the COM. This point is called the COG (Center Of Gravity). However, the existence of inertial forces modifies the position of the ZMP. We are interested in finding the inertial force that, added to the gravity force, places the ZMP at z_off.

As we know, the inertial force experienced by the COM is I = ma, and the gravity force is G = mg. If we want to place the sum of the forces over the z_off point, the following relation must hold:

\[
\frac{I}{G} = \frac{a}{g} = \frac{d - z_{off}}{H} \tag{2.1}
\]

By developing this equation and combining some design parameters we can obtain the COM trajectory that will move the COP from one foot to the other.
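Making that development explicit (this is the standard linear inverted pendulum solution consistent with equation (2.1); the constants A and B stand for the design parameters the text mentions):

\[
a = g\,\frac{d - z_{off}}{H}
\;\;\Longrightarrow\;\;
\ddot{d}(t) = \frac{g}{H}\bigl(d(t) - z_{off}\bigr),
\qquad
d(t) = z_{off} + A\,e^{t/T_c} + B\,e^{-t/T_c},
\quad T_c = \sqrt{H/g},
\]

where A and B are fixed by the position and speed of the COM at the boundaries of the single support stage.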

    Moving joints to set the COM

In the previous section we saw how to get the sagittal and the frontal expressions for the movement of the COM. Since we will fix its height, we have the three Cartesian coordinates that we need to place it in space.

On the other hand, in section ?? we described the way to get the three Cartesian coordinates that give us the expression of one ankle referred to the other one.

In order to place the COM we need some more information about the configuration of the joints, so we will employ the following simplifications:

The torso of the robot will always be straight.


Figure 2.9: Forward walking (foot height and sagittal-plane position versus time). Yellow for the left leg and red for the right one. In blue the forward evolution of the COM, in green the ZMP. During single support stages the ZMP lies on one of the feet, while in the double support ones it moves between the two feet.


Figure 2.10: Forward walking (foot height and frontal-plane position versus time). Yellow for the left leg and red for the right one. In blue the lateral evolution of the COM, in green the ZMP.


Figure 2.11: Forward walking. Trajectory of the COM in the horizontal plane (frontal versus sagittal). The height of the COM is fixed.


Figure 2.12: Lateral walking towards the right side (foot height and frontal-plane position versus time). Yellow for the left leg and red for the right one. In blue the lateral evolution of the COM, in green the ZMP.



    Figure 2.13: Arc walking. Trajectory of the COM in the horizontal plane.


We will not move the arms of the robot

We will consider the head as still and straight

We will neglect the effect of knees and elbows on the configuration of arms and legs: they will be considered straight.

With the previous assumptions, we can model the upper part of the body of the robot as a block. Our goal will be to get the position of the hip given the position of the COM and a foot.

A first step will be to perform the inverse operation, that is, to find the position of the COM given the position of the joints.

First, we will define the blocks into which we will group the mass segments of the robot:

    1. massLeg = MASS-TIBIA + MASS-THIGH;

    2. massFoot = MASS-FOOT;

3. massChest = MASS-CHEST + MASS-HEAD + MASS-UPPER-ARM + MASS-LOWER-ARM;

4. massTotal = 2*massLeg + massChest + 2*massFoot;

Now, we will suppose that the left leg of the robot is on the floor and supporting the robot. In the following, s stands for the support leg and f for the floating leg; h denotes the hip end of a leg, while a denotes the ankle end; l means left and r right. We can now calculate the position of the mass blocks defined previously from the Cartesian coordinates of the support leg and the relative position of the floating ankle with respect to the support one.

sLegX = xhal/2;    sLegY = yhal/2;    sLegZ = zhal/2;
fLegX = (xafs + xhal)/2;
fLegY = (yafs + cos(hipRot/2)*LH + yhal)/2;
fLegZ = (zafs - sin(hipRot/2)*LH + zhal)/2;
fFootX = xafs;    fFootY = yafs;    fFootZ = zafs;

chestX = xhal + lengthChest/2;
chestY = yhal + cos(hipRot/2)*LH/2;
chestZ = zhal - sin(hipRot/2)*LH/2;


We can combine the positions of the mass blocks with their masses to find the position of the COM:

x_{COM} = \frac{sLegX \cdot M_L + fLegX \cdot M_L + fFootX \cdot M_F + chestX \cdot M_C}{M_T} + L_F    (2.2)

y_{COM} = \frac{sLegY \cdot M_L + fLegY \cdot M_L + fFootY \cdot M_F + chestY \cdot M_C}{M_T}    (2.3)

z_{COM} = \frac{sLegZ \cdot M_L + fLegZ \cdot M_L + fFootZ \cdot M_F + chestZ \cdot M_C}{M_T}    (2.4)

The last step is to solve the previous equation system to find the values of xhas, yhas and zhas (the position of the support hip relative to the support ankle) given xCOM, yCOM and zCOM.

x_{has} = \frac{x_{COM} \cdot M_T - x_{afs} \cdot \frac{M_L}{2} - x_{afs} \cdot M_F - L_C \cdot \frac{M_C}{2} - L_F \cdot M_T}{M_L + M_C}    (2.5)

y_{has} = \frac{y_{COM} \cdot M_T - \left(y_{afs} + \cos(\frac{hipRot}{2}) \cdot L_H\right) \cdot \frac{M_L}{2} - y_{afs} \cdot M_F - \cos(\frac{hipRot}{2}) \cdot \frac{L_H}{2} \cdot M_C}{M_L + M_C}    (2.6)

z_{has} = \frac{z_{COM} \cdot M_T - \left(z_{afs} - \sin(\frac{hipRot}{2}) \cdot L_H\right) \cdot \frac{M_L}{2} - z_{afs} \cdot M_F + \sin(\frac{hipRot}{2}) \cdot \frac{L_H}{2} \cdot M_C}{M_L + M_C}    (2.7)

Here ML, MF, MC and MT are respectively the masses of a leg, a foot, the chest and the whole body; LC is the vertical length of the chest, LH the horizontal length of the hip, and LF the height of the foot.
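A minimal sketch of the forward computation (2.2)-(2.4); the block positions are assumed to have been obtained as in the assignments above, and all names are illustrative rather than the module's actual code.

    def com_from_blocks(sLeg, fLeg, fFoot, chest, ML, MF, MC, LF):
        """COM of the simplified body model, per equations (2.2)-(2.4).

        sLeg, fLeg, fFoot, chest: (x, y, z) positions of the support leg,
        floating leg, floating foot and chest blocks.
        ML, MF, MC: masses of a leg, a foot and the chest; LF: foot height.
        """
        MT = 2 * ML + MC + 2 * MF  # massTotal as defined above

        def avg(i):
            return (sLeg[i] * ML + fLeg[i] * ML + fFoot[i] * MF + chest[i] * MC) / MT

        # The LF offset on the first coordinate follows equation (2.2).
        return (avg(0) + LF, avg(1), avg(2))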

    2.5 Direct kinematic problem

The direct kinematics problem consists in finding the position and orientation of an end effector from the joint angles and the geometric parameters of the kinematic chain. Homogeneous transformations are used to express the position and orientation of the end effector with respect to a base coordinate system. The equations of the direct kinematics problem are:

x = f_x(q_1, q_2, q_3, \ldots, q_n)    (2.8)

y = f_y(q_1, q_2, q_3, \ldots, q_n)    (2.9)

z = f_z(q_1, q_2, q_3, \ldots, q_n)    (2.10)

\alpha = f_\alpha(q_1, q_2, q_3, \ldots, q_n)    (2.11)

\beta = f_\beta(q_1, q_2, q_3, \ldots, q_n)    (2.12)

\gamma = f_\gamma(q_1, q_2, q_3, \ldots, q_n)    (2.13)

The Denavit-Hartenberg (DH) convention has been used to solve the problem. This convention relates the homogeneous transformations, both rotation and translation, in a kinematic


chain of i links. As base coordinate system we use one of the legs of the robot: when the robot moves one leg we use the support leg as a reference, and the other way round. The kinematic chains of the robot are: left leg, right leg, left arm, right arm and head. The placement of the coordinate systems of all these chains can be seen in figure 2.14.

Table 2.4 gives the parameters that transform from the base coordinate system to any other one using the transformation matrix shown in (2.14).

^{i-1}A_i =
\begin{pmatrix}
\cos q_i & -\cos\alpha_i \sin q_i & \sin\alpha_i \sin q_i & a_i \cos q_i \\
\sin q_i & \cos\alpha_i \cos q_i & -\sin\alpha_i \cos q_i & a_i \sin q_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{pmatrix}    (2.14)

Link  Joint               q          d    a    alpha
Left leg chain
1     LeftAnkleRoll       q1ll       0    0    90
2     LeftAnklePitch      q2ll       0    l2   0
3     LeftKneePitch       q3ll       0    l1   -90
4     LeftHipPitch        q4ll       0    0    -45
5     LeftHipRoll         q5ll+90    0    0    -45
6     LeftHipYawPitch     q6ll       0    0    0
Right leg chain
7     RightHipYawPitch    q6rl       0    0    -225
8     RightHipRoll        q5rl+90    0    0    90
9     RightHipPitch       q4rl       0    l1   -90
10    RightKneePitch      q3rl       0    l2   0
11    RightAnklePitch     q2rl       0    0    90
12    RightAnkleRoll      q1rl       0    l3   0
Left arm chain
13    LeftShoulderPitch   q1la       0    0    90
14    LeftShoulderRoll    q2la       0    0    90
15    LeftElbowYaw        q3la       l4   0    -90
16    LeftElbowRoll       q4la       0    l5   0
Right arm chain
17    RightShoulderPitch  q1ra       0    0    90
18    RightShoulderRoll   q2ra       0    0    90
19    RightElbowYaw       q3ra       l4   0    -90
20    RightElbowRoll      q4ra       0    l5   0
Head chain
21    HeadYaw             q1h        0    0    -90
22    HeadPitch           q2h        0    0    0

Table 2.4: DH parameters of the robot.

The values of q, d, a and \alpha are the parameters of the Denavit-Hartenberg convention.
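As a sketch, the transformation (2.14) can be built and composed along a chain as follows; this is a generic DH helper under the assumption that angles are given in radians, not the module's actual code.

    import numpy as np

    def dh_transform(q, d, a, alpha):
        """Link transform i-1A_i of equation (2.14); q and alpha in radians."""
        cq, sq = np.cos(q), np.sin(q)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[cq, -ca * sq,  sa * sq, a * cq],
                         [sq,  ca * cq, -sa * cq, a * sq],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    def chain_transform(rows):
        """Compose the transforms of a kinematic chain (list of (q, d, a, alpha))."""
        T = np.eye(4)
        for q, d, a, alpha in rows:
            T = T @ dh_transform(q, d, a, alpha)
        return T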


    Figure 2.14: Placement of the coordinate systems in the robot.


The transformation from the reference to any other kinematic chain requires an additional transformation for every chain in order to align its reference system with the main one. These transformations are listed in table 2.5 for the left leg; a similar procedure can be followed for the other kinematic chains.

The direct kinematics can also be used to compute the position of the COM.

Coordinate system adaptation transformations

From                         To                   Transformation   Amount
Reference - Left leg start   Left leg end         Rot(x)           45
                                                  Rot(z)           -90
                                                  Rot(x)           -90
Left leg end                 Right leg start      Move(y)          10 cm
                                                  Rot(y)           -90
                                                  Rot(x)           -45
Left leg end                 Left arm start       Move(x)          18.5 cm
                                                  Move(y)          14.8 cm
                                                  Rot(x)           90
                                                  Rot(z)           90
Left leg end                 Right arm start      Move(x)          18.5 cm
                                                  Move(y)          -4.8 cm
                                                  Rot(x)           90
                                                  Rot(z)           90
Left leg end                 Head start           Move(x)          20.65 cm
                                                  Move(y)          5 cm
                                                  Rot(y)           90
                                                  Rot(z)           180
Left leg start               Head end (Camera)    Rot(z)           -90
                                                  Rot(x)           -90
                                                  Move(z)          4.91 cm
                                                  Move(x)          7.17 cm

Table 2.5: Chain transformations.

    2.6 Inverse kinematics

Inverse kinematics is the process of determining the parameters of a jointed flexible object (a kinematic chain) in order to achieve a desired pose. In this section we explain how to solve the inverse kinematics problem for the legs and the hip of a Nao robot. In our case, the pose is specified by the position of the hip relative to the ankles and by the angle made by the feet.

In order to solve this problem more easily, it will be divided into three stages. First, it is supposed that the leg is a telescopic extension of the hip, that is, the leg is


Figure 2.15: Names and directions of rotation of the joints.

straight between the hip and the foot. With this model, we find the hipPitch, hipRoll, anklePitch and ankleRoll contributions to the position of the foot. In the second stage, we find the values of the hipPitch, kneePitch and anklePitch joints necessary to make the leg as long as required by the previous stage. Finally, in the third stage, the hipYawPitch joints are set and the hipPitch and hipRoll values are calculated in order to keep the torso straight. The final position of the joints is obtained by combining the three stages.

    2.6.1 Telescopic leg

We will call legPitch the angle between the leg and the hip in the sagittal plane, and legRoll the angle in the frontal plane.

legPitch = \arctan\left(\frac{z_{ha}}{x_{ha}}\right)    (2.15)

legRoll = \arctan\left(\frac{y_{ha}}{x_{ha}}\right)    (2.16)

    2.6.2 Leg length

The telescopic leg of the previous model and the real leg segments that we will use to realize it are disposed in a triangular shape, as illustrated in figure 2.17. We will employ the law of cosines to obtain the three angles of the triangle.


    Figure 2.16: Relative position between the left ankle and the hip.



Figure 2.17: Joint configuration of a leg to obtain the required leg length (triangle formed by the thigh, the tibia and haDist).

haDist = \sqrt{x_{ha}^2 + y_{ha}^2 + z_{ha}^2}    (2.17)

tibiaAngle = \arccos\left(\frac{Tibia^2 + haDist^2 - Thigh^2}{2 \cdot haDist \cdot Tibia}\right)    (2.18)

hipAngle = \arccos\left(\frac{Thigh^2 + haDist^2 - Tibia^2}{2 \cdot haDist \cdot Thigh}\right)    (2.19)

    2.6.3 Foot orientation

In the last stage of the inverse kinematics problem, the target is to obtain a given angle between the feet. In this stage, the relative position between the feet and the hip is not taken into account, only the angle between the feet, which we will call feetAngle.

In order to produce an angle between the feet, the positions of the hipYawPitch, hipPitch and hipRoll joints must be changed. First we set the hipYawPitch joint to some angle. Once the value of this joint has been changed, we set the torso in a vertical position again with the help of the hipRoll and hipPitch joints. When this process is completed,


Figure 2.18: Position of the left hip joints. Two different reference systems will be employed: R1 and R2.

we evaluate the actual angle between the feet. In this way, we generate a mapping between hipYawPitch and the feet angle.

We complete a table by repeating this process over the whole range of hipYawPitch, writing down the matching values of hipRoll, hipPitch and feetAngle.

However, we are interested in finding the values of hipYawPitch, hipRoll and hipPitch for a given feetAngle. For this reason we create a new table having as input the range of values of feetAngle, and as outputs the matching values of hipYawPitch, hipRoll and hipPitch. For every value of feetAngle, we find the two nearest values in the previous table and interpolate the joint angles between these two values.

In the next lines, we detail the process to find the values of the hipPitch and hipRoll joints for the left leg. First, hipYawPitch is set to the desired value; then the resulting orientation of the torso is analyzed. The torso will have some orientation in the sagittal and frontal planes. The orientation in the frontal plane is corrected with a value of LHipRoll, and the new angle in the sagittal plane is obtained. Finally, the angle in the sagittal plane is corrected with an LHipPitch movement. The orientation of the torso is initially represented as a unitary vector in the direction of x1. The process can be divided into the following steps:

1. Calculate the coordinates of (1, 0, 0)_1 in R2.

a_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}    (2.20)


R_{12} = \begin{pmatrix} \cos 45 & \sin 45 & 0 \\ -\sin 45 & \cos 45 & 0 \\ 0 & 0 & 1 \end{pmatrix}    (2.21)

In order to simplify:

k = \sin 45 = \cos 45    (2.22)

a_2 = R_{12}\, a_1 = \begin{pmatrix} k & k & 0 \\ -k & k & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} k \\ -k \\ 0 \end{pmatrix}    (2.23)

2. Rotate a_2 around x_2 by an angle \alpha.

b_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix} a_2 = \begin{pmatrix} k \\ -k\cos\alpha \\ -k\sin\alpha \end{pmatrix}    (2.24)

3. Calculate the coordinates of b_2 in R1.

b_1 = R_{21}\, b_2 = \begin{pmatrix} k^2 + k^2\cos\alpha \\ k^2 - k^2\cos\alpha \\ -k\sin\alpha \end{pmatrix}    (2.25)

4. Rotate b_1 around the z_1 axis by an angle \gamma.

c_1 = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} k^2 + k^2\cos\alpha \\ k^2 - k^2\cos\alpha \\ -k\sin\alpha \end{pmatrix} = \begin{pmatrix} k^2(\cos\gamma\,(1 + \cos\alpha) - \sin\gamma\,(1 - \cos\alpha)) \\ k^2(\sin\gamma\,(1 + \cos\alpha) + \cos\gamma\,(1 - \cos\alpha)) \\ -k\sin\alpha \end{pmatrix}    (2.26)

5. Find the value of \gamma that makes the y component of c_1 zero.

k^2(\sin\gamma\,(1 + \cos\alpha) + \cos\gamma\,(1 - \cos\alpha)) = 0    (2.27)

\gamma = \arctan\left(\frac{\cos\alpha - 1}{\cos\alpha + 1}\right)    (2.28)

6. Rotate c_1 around the y_2 axis by an angle \beta.

d_1 = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} k^2(\cos\gamma\,(1 + \cos\alpha) - \sin\gamma\,(1 - \cos\alpha)) \\ k^2(\sin\gamma\,(1 + \cos\alpha) + \cos\gamma\,(1 - \cos\alpha)) \\ -k\sin\alpha \end{pmatrix} = \begin{pmatrix} \vdots \\ \vdots \\ -\sin\beta\, k^2(\cos\gamma\,(1 + \cos\alpha) - \sin\gamma\,(1 - \cos\alpha)) - k\cos\beta\sin\alpha \end{pmatrix}    (2.29)


7. Find the value of \beta that makes the z component of d_1 zero.

-\sin\beta\, k^2(\cos\gamma\,(1 + \cos\alpha) - \sin\gamma\,(1 - \cos\alpha)) - k\cos\beta\sin\alpha = 0    (2.30)

\beta = \arctan\left(\frac{-\sin\alpha}{k(\cos\gamma\,(1 + \cos\alpha) - \sin\gamma\,(1 - \cos\alpha))}\right)    (2.31)
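A sketch of steps 5 and 7, i.e. equations (2.28) and (2.31), computing the roll and pitch corrections gamma and beta for a given hipYawPitch value alpha (names illustrative, angles in radians).

    import math

    K = math.sin(math.radians(45))  # k = sin 45 = cos 45, equation (2.22)

    def torso_corrections(alpha):
        """Corrections that keep the torso vertical after setting hipYawPitch = alpha."""
        ca, sa = math.cos(alpha), math.sin(alpha)
        gamma = math.atan2(ca - 1.0, ca + 1.0)                                # (2.28)
        denom = K * (math.cos(gamma) * (1 + ca) - math.sin(gamma) * (1 - ca))
        beta = math.atan2(-sa, denom)                                         # (2.31)
        return gamma, beta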

Now, we have the corresponding values of LHipRoll and LHipPitch for a given value of LHipYawPitch. The last step is to calculate the angle between the feet generated by those values. To obtain it, we repeat the previous process, but this time with the unitary vector along the z axis instead of the x axis.

1. Calculate the coordinates of a_1 = (0, 0, 1)_1 in R2.

a_2 = R_{12}\, a_1 = \begin{pmatrix} k & k & 0 \\ -k & k & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}    (2.32)

2. Rotate a_2 around x_2 by an angle \alpha.

b_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix} a_2 = \begin{pmatrix} 0 \\ -\sin\alpha \\ \cos\alpha \end{pmatrix}    (2.33)

3. Calculate the coordinates of b_2 in R1.

b_1 = R_{21}\, b_2 = \begin{pmatrix} k\sin\alpha \\ -k\sin\alpha \\ \cos\alpha \end{pmatrix}    (2.34)

4. Rotate b_1 around the z_1 axis by an angle \gamma.

c_1 = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} k\sin\alpha \\ -k\sin\alpha \\ \cos\alpha \end{pmatrix} = \begin{pmatrix} k\sin\alpha\,(\cos\gamma + \sin\gamma) \\ k\sin\alpha\,(\sin\gamma - \cos\gamma) \\ \cos\alpha \end{pmatrix}    (2.35)

5. Rotate c_1 around the y_2 axis by an angle \beta.

d_1 = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} k\sin\alpha\,(\cos\gamma + \sin\gamma) \\ k\sin\alpha\,(\sin\gamma - \cos\gamma) \\ \cos\alpha \end{pmatrix} = \begin{pmatrix} \vdots \\ k\sin\alpha\,(\sin\gamma - \cos\gamma) \\ -k\sin\beta\sin\alpha\,(\cos\gamma + \sin\gamma) + \cos\beta\cos\alpha \end{pmatrix}    (2.36)


    For the right leg the process is similar.

We now have enough information to obtain the corresponding value of feetAngle.

feetAngle = 2\arctan\left(\frac{k\sin\alpha\,(\sin\gamma - \cos\gamma)}{-k\sin\beta\sin\alpha\,(\cos\gamma + \sin\gamma) + \cos\beta\cos\alpha}\right)    (2.37)

As we said before, the values of \alpha, \beta and \gamma are stored in a table so that the values corresponding to a given feetAngle can be looked up.
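A possible sketch of this table-and-interpolation scheme, reusing torso_corrections() from the previous sketch; the sampling range, step and function names are illustrative assumptions, not the values used by the team.

    import math

    def feet_angle(alpha, beta, gamma, k=math.sin(math.radians(45))):
        """Angle between the feet generated by alpha, beta, gamma, equation (2.37)."""
        num = k * math.sin(alpha) * (math.sin(gamma) - math.cos(gamma))
        den = (-k * math.sin(beta) * math.sin(alpha) * (math.cos(gamma) + math.sin(gamma))
               + math.cos(beta) * math.cos(alpha))
        return 2 * math.atan2(num, den)

    def build_table(n=91, alpha_max=math.radians(45)):
        """Tabulate (feetAngle, alpha, beta, gamma) over a range of hipYawPitch values."""
        rows = []
        for i in range(n):
            alpha = alpha_max * i / (n - 1)
            gamma, beta = torso_corrections(alpha)   # from the previous sketch
            rows.append((feet_angle(alpha, beta, gamma), alpha, beta, gamma))
        rows.sort()
        return rows

    def joints_for_feet_angle(rows, target):
        """Interpolate alpha, beta, gamma between the two nearest tabulated entries."""
        for (f0, *j0), (f1, *j1) in zip(rows, rows[1:]):
            if f0 <= target <= f1:
                t = 0.0 if f1 == f0 else (target - f0) / (f1 - f0)
                return tuple(a + t * (b - a) for a, b in zip(j0, j1))
        raise ValueError("feetAngle outside the tabulated range")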

    2.6.4 Combining previous results

In order to obtain the joint angles of the robot, the three previous models are combined in the following way:

LHipYawPitch = -\alpha

LHipRoll = lLegRoll - \gamma

LHipPitch = lLegPitch - lThighAngle + \beta

LKneePitch = lTibiaAngle + lThighAngle

LAnklePitch = -(lLegPitch + lTibiaAngle)

LAnkleRoll = -lLegRoll

Here \alpha, \beta and \gamma are specific to every feetAngle.
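Put together, the three stages reduce to the additions above; a sketch for the left leg, with illustrative names and following the sign conventions of this section.

    def left_leg_joints(legPitch, legRoll, thighAngle, tibiaAngle, alpha, beta, gamma):
        """Combine the telescopic-leg, leg-length and foot-orientation stages (section 2.6.4)."""
        return {
            "LHipYawPitch": -alpha,
            "LHipRoll": legRoll - gamma,
            "LHipPitch": legPitch - thighAngle + beta,
            "LKneePitch": tibiaAngle + thighAngle,
            "LAnklePitch": -(legPitch + tibiaAngle),
            "LAnkleRoll": -legRoll,
        }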

2.7 Future work

The locomotion module developed shows an efficient way to generate parametric trajectories for real-time, omnidirectional walking.

The parameters of the trajectories could be tuned in real time to optimize performance, and some artificial intelligence approach could be used to find optimum values for them. However, a main target is to raise these speeds, and the dynamic behavior should be revised.


    Chapter 3

    Perception

    3.1 PAM: Perceptual Anchoring Module

The locus of perception is the PAM, which acts as a short-term memory of the location of the objects around the robot. At every moment, the PAM contains the best available estimates of the positions of these objects. Estimates are updated by a combination of three mechanisms: by perceptual anchoring, whenever the object is detected by vision; by odometry, whenever the robot moves; and by global information, whenever the robot re-localizes. Global information can incorporate information received from other robots (e.g. the ball position).

The PAM also takes care of selective gaze control, moving the camera according to the current perceptual needs, which are communicated by the HBM in the form of a degree of importance attached to each object in the environment. The PAM uses these degrees to guarantee that all currently needed objects are perceptually anchored as often as possible (see [34] for details).

Object recognition in the PAM relies on three techniques: color segmentation based on a fast region-growing algorithm; model-based region fusion to combine color blobs into features; and knowledge-based filters to eliminate false positives. For instance, a yellow blob over a pink one is fused into a landmark; however, this landmark may be rejected if it is, for example, too high or too low relative to the field.

    3.1.1 Standard vision pipeline

Significant improvements have been made to the PAM over the last two years. Previously, seeds for the growing algorithm were obtained by hardware color segmentation on YUV images taken directly from the camera. Seeds are now chosen by performing software thresholding on the images, which are first converted to HSV. This allows for a very portable and robust color segmentation, which works even in the face of changing lighting conditions.


    Figure 3.1: Standard game-playing vision pipeline

The vision pipeline works as follows (see Fig. 3.1). When a new YUV image is captured, it is first converted to HSV. Then it is segmented using software thresholding to obtain a labelled image whose pixels are used as seeds for the seeded region growing method. Next, for each region type all the blobs are processed and merged when appropriate conditions apply (overlapped blobs, adjacent blobs, etc). The resulting blobs are then processed to verify constraints (size, number of pixels, aspect ratio, etc) in order to identify which type of object they represent. Finally, blobs that are classified as given objects are further processed to estimate the distance from the robot to the object. All the information is then aggregated into a robot-centered data structure called the Local Perceptual Space (LPS), which represents the local state of the robot. This data structure is used by the HBM to decide which action the robot should perform.
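As an illustration of the thresholding step that produces the seeds, a minimal numpy sketch; the HSV ranges are made-up values, not the calibrated ones used by the team.

    import numpy as np

    # Illustrative HSV ranges (H, S, V) for two colors of interest.
    HSV_RANGES = {
        "ball_orange": ((5, 120, 120), (20, 255, 255)),
        "goal_yellow": ((25, 100, 100), (35, 255, 255)),
    }

    def threshold_seeds(hsv):
        """Label pixels that fall inside one of the color ranges; 0 means 'no seed'."""
        seeds = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for color_id, (lo, hi) in enumerate(HSV_RANGES.values(), start=1):
            mask = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
            seeds[mask] = color_id
        return seeds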

    3.1.2 Experimental vision pipeline

In addition to the normal objects in the RoboCup domain, we also detect natural features, such as field lines and corners. Currently, we use corners formed by the (white) field lines on the (green) carpet. Corners provide useful information, since they can be classified by their type, and they are relatively easy to track (given the small field of view of the camera). Natural feature detection is performed using three techniques: corner detection based on changes in the direction of the brightness gradient; color-based filtering for discarding corners that are not on the carpet; and corner grouping for associating the detected corners with the natural features we are looking for.

While the corner-based processing is still experimental (it is still a bit too slow to be used for actual game playing), it is fully operational. It works as follows (see Fig. 3.2). In addition to the standard image processing, the Y channel of the captured YUV image is used to detect grey-level image corners. As we are only interested in field lines, the grey-level corners are filtered based on the segmented HSV image: only corners produced by white blobs are considered. The resulting corners are classified depending on whether they are convex or concave. Then these corners are grouped and classified into different feature categories, corresponding to different field line intersection types (C, T, etc). Finally, for each detected feature a distance to the robot is computed and the feature is projected into the LPS.


    Figure 3.2: Corner-based experimental vision pipeline

    3.2 Color Segmentation

Color segmentation is a fundamental part of object recognition in the RoboCup domain. The segmentation algorithm has to be robust, which means that it has to give the right results under nominal conditions and, without retuning, give acceptable results under slightly different conditions, such as blurred images or different lighting. The main methods for color segmentation can be classified into the following approaches, sometimes used in combination [41].

Threshold techniques rely on the assumption that pixels whose color value lies within a certain range belong to the same class or object [38]. Many teams in RoboCup use threshold techniques [16] [8] with slight modifications to increase their performance, e.g., for images that blur at object boundaries. Threshold techniques need a careful tuning of thresholds and are extremely sensitive to lighting conditions.

Edge-based methods rely on the assumption that pixel values change rapidly at the edge between two regions [27]. Edge detectors such as Sobel, Canny and SUSAN are typically used in conjunction with some post-processing in order to obtain closed region boundaries. This tends to make these methods very computationally expensive.

Region-based methods rely on the assumption that adjacent pixels in the same region have similar color values; the similarity depends on the selected homogeneity criterion, usually based on some threshold value. Seeded region growing (SRG) is a region-based technique that takes inspiration from watershed segmentation [43] and is controlled by choosing a (usually small) number of pixels, known as seeds [1]. Unfortunately, the automatic selection of the initial seeds is a rather difficult task [21]. For instance, Fan and Yau [14] use color-edge extraction to obtain these seeds. By doing so, however, they inherit the problems of edge-based methods, including sensitivity to noise and to the blurring of edges, and high computational complexity.

In our current implementation we use a hybrid method for color segmentation that integrates the thresholding and SRG methods. We use a threshold technique to generate


Figure 3.3: Problems of SRG: homogeneity criterion too strict (center) or too liberal (right).

an initial set of seeds for each color of interest, and then use SRG to grow color regions from these seeds. Special provisions are included in our SRG to account for the fact that we can have many seeds inside the same region. The integration of the two methods allows us to use a conservative tuning for both of them, thus improving robustness: robustness with respect to changing lighting conditions is improved because we use a conservative thresholding, and robustness with respect to blurred edges is improved because we use a conservative homogeneity criterion in SRG. The technique, with minor variations, has been used in the 4-legged league since 2001.

    3.2.1 Seed Region Growing

In a nutshell, our color segmentation algorithm works as follows. We fix a set of colors to be recognized. We use thresholds to produce a labelled image where only pixels that belong to one of these colors with high certainty are marked as such. Then, we use each labelled pixel as an initial seed for a region of that color. From each of these seeds, we grow a blob by including adjacent pixels until we find a significant discontinuity. The so-called homogeneity criterion embodies the notion of a significant discontinuity: in our case, this is a change in one of the Y, Cr or Cb values beyond a given threshold.
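A minimal sketch of such a criterion, assuming the image is stored as per-pixel (Y, Cr, Cb) triples and that the per-channel thresholds shown are illustrative.

    def check_homogeneity(img, p, q, thresholds=(3, 3, 3)):
        """True if pixel p may join the blob grown through its neighbour q:
        no channel (Y, Cr, Cb) changes by more than its threshold."""
        return all(abs(int(img[p][c]) - int(img[q][c])) <= t
                   for c, t in enumerate(thresholds))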

The choice of the homogeneity criterion is critical in SRG techniques. If the criterion is too strict, the blob growing can stop prematurely because of local variations, e.g., as produced by shadows or noise. If it is too liberal, the blob can flood into an adjacent region through a weak edge, e.g., as caused by blurring. These two cases are illustrated in Fig. 3.3, where the single initial seed is indicated by a black dot (the ball outline is drawn for clarity); the thresholds used in the homogeneity criterion are 2 and 3, respectively. In our technique, we address this problem by using many seeds for each color region. When growing a blob from a given seed, we use a strict homogeneity criterion, thus reducing the risk of flooding. We then reconstruct the full region by merging all the local blobs that belong to the same region, that is, all the adjacent blobs grown from seeds of the same color. The following procedure constitutes the core of the segmentation algorithm (N is the size of the image).

procedure FindBlobs (seeds)
  reset label[0:N], color[0:N], conn_table
  blob_id = 1
  for each color_id
    for each pixel p in seeds
      if (seeds[p] = color_id) and (label[p] = null)
        queue = {p}
        while queue is not empty
          q = pop(queue)
          label[q] = blob_id
          color[q] = color_id
          for s in 4-neighbors(q)
            if (label[s] = null)
              if CheckHomogeneity(s, q)
                push(s, queue)
            else if (label[s] != blob_id)
              if (color[s] = color_id)
                add (label[s], blob_id) to conn_table
        blob_id = blob_id + 1
  return (label, color, conn_table)

The FindBlobs procedure operates on the following data structures: label[N], an array that associates each pixel with a region ID, or null; color[N], an array that associates each pixel with a color ID, or null; and conn_table, a list of connected regions of the same color. The procedure takes the labelled image and returns a set of blobs plus a connection table. For each desired color, the procedure scans the seeds image to identify labelled pixels (seeds) of that color that have not yet been incorporated in a blob. For each such seed, it starts a standard growing loop based on 4-connectivity. If during the growing we hit a pixel that has already been labelled as belonging to a different blob, we check its color: if it is the same as that of the current blob, we mark the two blobs as connected in the connection table. When we add entries to the connection table, we also check it for cycles and remove them. The growing stops at the blob border, as identified by the homogeneity criterion discussed above. After all the blobs in the original image have been found, a post-processing phase goes through the connection table and merges all the connected blobs of the same color, as discussed above. In our experiments, 5 to 10 adjacent blobs were often generated for the same connected region; a larger number of blobs (up to 30) was occasionally generated for very large regions. We have incorporated in the above algorithm the computation of blob parameters like the number of pixels, center, width and height. These are updated incrementally as the blobs are grown during the main phase and fused during the post-processing phase. These parameters are needed, e.g., to estimate the position of the objects.
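The merging of connected blobs in the post-processing phase can be done with a small union-find structure; a sketch under the assumption that conn_table holds pairs of blob ids of the same color (not the team's actual implementation).

    def merge_connected_blobs(blob_ids, conn_table):
        """Map every blob id to a representative id of its connected region."""
        parent = {b: b for b in blob_ids}

        def find(b):
            while parent[b] != b:
                parent[b] = parent[parent[b]]  # path halving
                b = parent[b]
            return b

        for a, b in conn_table:
            parent[find(a)] = find(b)          # union of the two regions

        return {b: find(b) for b in blob_ids}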

    3.2.2 Experiments

We have performed several tests to assess the quality of the vision processing and measure the performance of the different algorithms. These algorithms depend on lighting conditions, number of objects, distance to objects, etc. In this section we show a series of tests carried out in an office environment with a couple of RoboCup objects: the ball and a fake


yellow net. The main idea is to compare the processing time on the real robot using the different algorithms implemented. Thus, the ball has been placed at different distances from the robot (100 cm, 50 cm, and 15 cm) and very close to the camera (5 cm). The next table shows the abbreviation of the segmentation and b