
A TELEOPERATION AND AUTONOMOUS CAPABLE

MODULAR ROBOT ARCHITECTURE

AND IMPLEMENTATION

by

DAVID CHRISTOPHER SANDEL

KENNETH G. RICKS, COMMITTEE CHAIR

DAVID J. JACKSON

MONICA D. ANDERSON

A THESIS

Submitted in partial fulfillment of the requirements for the degree of Master of Science in the Department of

Electrical and Computer Engineering in the Graduate School of

The University of Alabama

TUSCALOOSA, ALABAMA

2015


Copyright David Christopher Sandel 2015

ALL RIGHTS RESERVED


ABSTRACT

Robots are an integral part of contemporary society with ever-increasing use in resource collection, industrial, military, and exploration-based applications. The degree of human intervention necessary for a robot to successfully operate varies from system to system and depends heavily on the application, resources available, and knowledge of the operational environment. A system that can operate either autonomously or under control of a human operator provides the flexibility to adapt to the requirements of a specific mission and to unexpected situations as they are encountered. Similarly, a platform that can be quickly and easily modified is capable of being utilized in a larger number of applications.

This thesis describes a generalized computational architecture for a robotic platform that allows the system to be either teleoperated by a remote human operator or completely autonomous. In addition, the architecture is designed specifically to support a modular platform upon which various modules can be attached or removed to provide the functionality needed to complete specific tasks. This architecture was implemented on the University of Alabama Modular Autonomous Robotic Terrestrial Explorer (MARTE) platform as part of the 2014 NASA Robotic Mining Competition to ultimately result in a robot capable of collecting and delivering regolith in a simulated lunar or Martian environment. MARTE was successfully used at the competition and demonstrated the capabilities of the architecture both during testing and in the competition environment.


LIST OF ABBREVIATIONS AND SYMBOLS

ADC Analog-to-Digital Converter

AHRS Attitude and Heading Reference System

API Application Programming Interface

ASCII American Standard Code for Information Interchange

ATLANTIS A Three-Layer Architecture for Navigating Through Intricate Situations

ATV All-Terrain Vehicle

AuRA Autonomous Robot Architecture

BLE Bucket Ladder Excavator

CLARAty Coupled Layer Architecture for Robotic Autonomy

CPU Central Processing Unit

DC Direct Current

DS-1 Deep Space One

DSSA Domain Specific Software Architecture

FTDI Future Technology Devices International

GUI Graphical User Interface

I/O Input/Output

IEEE Institute of Electrical and Electronics Engineers

IMU Inertial Measurement Unit

IP Internet Protocol

LED Light Emitting Diode

LiPo Lithium Polymer

MARTE Modular Autonomous Robotic Terrestrial Explorer

MCP MARTE Control Protocol


NASA National Aeronautics and Space Administration

NASREM NASA/NBS Standard Reference Model

NBS National Bureau of Standards

NIST National Institute of Standards and Technology

NMRA New Millennium Remote Agent

OS Operating System

POSIX Portable Operating System Interface

PWM Pulse Width Modulation

RAMP Robotic Architecture for a Modular Platform

RANSAC Random Sample Consensus

RAP Reactive Action Package

RMC Robotic Mining Competition

ROS Robot Operating System

RS Reactive System

SBC Single Board Computer

SCIP Sensor Communication Interface Protocol

SSS Servo Subsumption Symbolic

TCA Task Control Architecture

TCP Transmission Control Protocol

TTL Transistor-Transistor Logic

UDP User Datagram Protocol

USB Universal Serial Bus


ACKNOWLEDGMENTS

First, I’d like to thank everyone involved in the University of Alabama Astrobotics team who took part in the development of MARTE.

1. Caleb Leslie for acting as team lead and for helping design and implement all aspects of the MARTE platform including the power distribution and electronics boxes;

2. Michael Carswell and the Base group for implementing the Base platform, the Base and Autonomy electronics boxes, and the Autonomy sensor mounts;

3. Kellen Schroeter and the Module group for implementing the Bucket Ladder Excavator platform and electronics box;

4. Andrew Faulkner for co-leading the Software group with me, for his AHRS, gimbal, obstacle detection, and path planning algorithms, and for helping set up the communication link;

5. Mitchell Spryn, Derrill Koelz, and the rest of the Software group for their assistance in implementing various components of the Backend, Frontend, and Autonomy subsystems;

6. Andrew Price for collaborating with me to design the original MARTE Control Protocol and for developing the original Frontend application;

7. Justin Headley for leading the past University of Alabama Lunabotics teams and for reviewing my thesis draft.

I’d also like to thank Dr. Ricks for all of his support as my advisor, teacher, and the advisor of the Astrobotics team, Dr. Jackson and Dr. Anderson for the knowledge they have passed down to me in their classes, and my family for all of their support.


CONTENTS

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

LIST OF ABBREVIATIONS AND SYMBOLS . . . . . . . . . . . . . . . . . . . . . iii

ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x

1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1. Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2. Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2. BACKGROUND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2.1. Control Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2.2. Teleoperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.3. Autonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.3.1 Deliberative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.3.2 Reactive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.3.3 Hybrid Deliberative/Reactive . . . . . . . . . . . . . . . . . . . . . . 7

3. RELATED WORKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3.1. Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3.2. Teleoperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3.3. Autonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12


3.3.1 Deliberative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

3.3.2 Reactive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

3.3.3 Hybrid Deliberative/Reactive . . . . . . . . . . . . . . . . . . . . . . 13

4. GENERAL ARCHITECTURE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4.1. System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4.2. Backend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

4.3. Frontend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

4.4. Autonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

4.5. Architecture Paradigm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

4.6. Subsystem Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4.6.1 Request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4.6.2 Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

5. CASE STUDY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

5.1. NASA Robotic Mining Competition . . . . . . . . . . . . . . . . . . . . . . . 30

5.2. MARTE Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

5.2.1 Base Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

5.2.2 Bucket Ladder Excavator Module . . . . . . . . . . . . . . . . . . . . 35

5.3. RAMP Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5.3.1 Subsystem Communication . . . . . . . . . . . . . . . . . . . . . . . . 36

5.3.2 Backend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

5.3.2.1 Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

5.3.2.2 BLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

5.3.3 Frontend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58


5.3.4 Autonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

5.4. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

6. CONCLUSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83


LIST OF TABLES

5.1 MARTE Robot Distribution Layer Requests . . . . . . . . . . . . . . . . . . 48

5.2 MARTE Robot Distribution Layer Module List Response . . . . . . . . . . . 48

5.3 MARTE Base Module Distribution Layer Requests . . . . . . . . . . . . . . 50

5.4 MARTE Base Module Distribution Layer Get Responses . . . . . . . . . . . 51

5.5 MARTE BLE Module Distribution Layer Requests . . . . . . . . . . . . . . 55

5.6 MARTE BLE Module Distribution Layer Get Responses . . . . . . . . . . . 55

5.7 NASA RMC Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


LIST OF FIGURES

2.1 Human Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2.2 Basic Teleoperation Architecture . . . . . . . . . . . . . . . . . . . . . . . . 5

2.3 Deliberative Autonomous Paradigm . . . . . . . . . . . . . . . . . . . . . . . 6

2.4 Reactive Autonomous Paradigm . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.5 Hybrid Autonomous Architecture . . . . . . . . . . . . . . . . . . . . . . . . 8

4.1 System Actors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

4.2 Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

4.3 Backend Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

4.4 Frontend Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

4.5 Autonomy Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

4.6 Teleoperated Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

4.7 Autonomy Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

5.1 RMC Arena Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

5.2 MARTE (Base and BLE Modules) . . . . . . . . . . . . . . . . . . . . . . . 32

5.3 MARTE Modularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

5.4 MARTE Base Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

5.5 MARTE Bucket Ladder Excavator Module . . . . . . . . . . . . . . . . . . . 35

5.6 Subsystem Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5.7 MARTE Control Protocol Request Message . . . . . . . . . . . . . . . . . . 38


5.8 Request Message Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

5.9 MARTE Control Protocol Response Message . . . . . . . . . . . . . . . . . . 39

5.10 Response Message Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

5.11 MARTE Teleoperation Inter-Subsystem Communication . . . . . . . . . . . 40

5.12 MARTE Autonomy Inter-Subsystem Communication . . . . . . . . . . . . . 41

5.13 MARTE Backend Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

5.14 Phidgets PhidgetSBC2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

5.15 Linksys WRT54G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

5.16 MARTE Communication Hardware Layer . . . . . . . . . . . . . . . . . . . 45

5.17 MARTE Server Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

5.18 MARTE Hardware Connections . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.19 MARTE Base Functional Layer: Base Drive . . . . . . . . . . . . . . . . . . 51

5.20 MARTE Base Functional Layer: Wheel Drive . . . . . . . . . . . . . . . . . 52

5.21 MARTE Base Functional Layer: Voltage System . . . . . . . . . . . . . . . . 53

5.22 Sabertooth 2X60 Motor Controller . . . . . . . . . . . . . . . . . . . . . . . 54

5.23 MARTE BLE Functional/Hardware Layers . . . . . . . . . . . . . . . . . . . 57

5.24 MARTE Frontend Functional Layer: Status Update . . . . . . . . . . . . . . 60

5.25 MARTE Frontend Functional Layer: From Mapping . . . . . . . . . . . . . . 60

5.26 Microsoft Xbox 360 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . 61

5.27 Hokuyo UTM-30LX-EW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

5.28 SPT200 Pan & Tilt System . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

5.29 Hitec HS-5465MG Servo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5.30 Phidgets PhidgetSpatial 3/3/3 Sensor . . . . . . . . . . . . . . . . . . . . . . 64


5.31 VIA EPIA-P900 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5.32 MARTE Autonomy Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . 66

5.33 MARTE Autonomy Functional Layer: Autonomy Hardware . . . . . . . . . 68

5.34 MARTE Autonomy Functional Layer: All Hardware . . . . . . . . . . . . . . 69

5.35 MARTE Autonomy State Machine . . . . . . . . . . . . . . . . . . . . . . . 71

5.36 MARTE Final Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 72

5.37 Autonomy Testing on the Test Base Module . . . . . . . . . . . . . . . . . . 73

5.38 Run #5: Localization Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

5.39 Run #5: Localization Result . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

5.40 Run #5: Path Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

5.41 Run #5: Autonomously Crossing . . . . . . . . . . . . . . . . . . . . . . . . 78

5.42 Run #3: Autonomously Digging . . . . . . . . . . . . . . . . . . . . . . . . . 79

5.43 Run #3: Autonomously Unloading . . . . . . . . . . . . . . . . . . . . . . . 80


CHAPTER 1

INTRODUCTION

Robots are used in a wide variety of applications in modern society. They remove the need for human intervention in repetitive tasks and handle processes that require precise actions such as those in industrial manufacturing. They also provide us the chance to observe and explore environments that are risky or dangerous to human health, such as the deep oceans or other planets [1], [2]. In many such applications, it can be beneficial to have a modular platform that can be quickly and easily modified to adapt to one of a multitude of tasks. This is particularly true for robotic systems used in space exploration. The amount of mass and space available aboard rockets carrying equipment from Earth’s surface to orbit and beyond is extremely limited. A modular platform would lessen the amount of mass necessary to transport in the long run. When new tasks arise, instead of requiring a completely new system, only a new module would be needed. This module would then be interfaced to an existing platform to carry out the task.

1.1 Objective

This thesis introduces a general computational architecture for a robotic platform that meets two requirements:

1. The architecture will support implementation on a platform composed of many potential modular attachments that can be added and removed;

2. The architecture will support control of the platform through either a remote operator (teleoperation) or by an optional, fully-autonomous control module.

This scalable architecture allows the functionality of the robotic system to be modified or extended by changing or adding to the attached modules. In addition, by allowing both teleoperated and autonomous control, a teleoperated system can be later automated when such a subsystem is developed, or an autonomous system can be overridden by a human operator when the need arises.

The designed architecture, henceforth called the Robotic Architecture for a Modular Platform (RAMP), was implemented on the University of Alabama Modular Autonomous Robotic Terrestrial Explorer (MARTE) platform to provide a system for the excavation and retrieval of simulated lunar/Martian regolith as part of the 2014 NASA Robotic Mining Competition (RMC). It was implemented using two modules, a drive train module and an excavation module, and can operate autonomously through an optional subsystem.

1.2 Thesis Outline

This thesis is arranged into six chapters. Chapter 2 begins with a summary of relevant background information on teleoperated and autonomous robotic architectures to provide a foundation for the work that will be presented. Chapter 3 describes some existing robotic architectures from related projects and places RAMP in context. Chapter 4 details the generalized architecture, RAMP, and the components that will support both teleoperated and autonomous control of a modular robotic platform. Chapter 5 describes an implementation of RAMP on the University of Alabama MARTE platform as part of the 2014 NASA RMC and includes a discussion of the results of this implementation. Finally, Chapter 6 concludes with a summary of the work presented and a brief discussion of potential future work related to the architecture and implementation.


CHAPTER 2

BACKGROUND

It is beneficial to cover some basic background information on the standard approaches to robotic architectures. This chapter provides a refresher on the various levels of human interaction involved in robot control, followed by the basic teleoperation and autonomous architecture paradigms.

2.1 Control Schemes

The control of a robotic platform can be classified by the degree of human involvement required for the system to carry out its tasks. This can be represented as a scale of levels of interaction, as shown in Figure 2.1.

Human Operated | Semi-Autonomous | Autonomous

Figure 2.1: Human Interaction

At the left end of the scale reside systems that require a human operator to make all high-level decisions about the tasks necessary to complete a goal and, additionally, the specific actions the platform needs to carry out in order to accomplish that task. While generally not considered robots, remote-controlled vehicles are a good example of these types of systems. Take a remote-controlled airplane, for example. The human operator takes in information on the environment and the airplane’s status through his/her senses, makes a decision on what the plane needs to do next to accomplish a goal (such as to gain altitude, land, or avoid a tree), then inputs commands via the joysticks on the controller. The controller transmits those commands to the airplane, where they are used to set the voltages driving the propeller motor and to rotate the servos that control the positions of the elevator, rudder, and ailerons. The system itself makes no decisions; it only acts on the controls provided by the operator.

At the opposing end of the scale are the fully-autonomous robots. These robots do not need a human operator to decide which tasks should be performed to complete a mission or the specifics on how to carry out those tasks; actions are either preprogrammed or determined by the system while operating. For example, a quadcopter platform may carry out a patrol mission completely autonomously. The system determines that, to complete its mission, it needs to take off, rise to a certain altitude above the ground, follow some path while transmitting video to a ground station, then re-land at its original starting point. In addition to these high-level tasks, it also decides what low-level control signals need to be used to perform these actions using feedback from various sensors, such as an altimeter or GPS to determine position and an IMU to determine the platform’s orientation.

There are many systems that do not exist at the extremes of this scale. Most human-operated robotic platforms provide some built-in control abstraction. In the remote-controlled airplane example, instead of the operator controlling the voltage driving the propeller motor, the system could instead take in a desired airspeed from the user and, using an airspeed sensor, control the motor voltage internally to try to maintain that airspeed. As the system gets smarter, it may begin to offload some of the decision making from the operator as well. For example, a lunar rover may not know how to autonomously complete a mission to explore an area, but may be capable of autonomously planning a path, avoiding obstacles, and driving to operator-designated points. These systems are often called semi-autonomous systems.


2.2 Teleoperation

For robotic systems that require human control, input can be provided either directly or remotely. Teleoperation is a type of human control in which the human operator controls the platform from a remote location. A basic teleoperated system consists of two separate subsystems, named in relation to the location of the human operator: the local and remote subsystems [1]. The relationship between these subsystems is shown in Figure 2.2.

Figure 2.2: Basic Teleoperation Architecture

The local subsystem interacts directly with the human operator and is tasked with reading the operator’s input and providing the operator with feedback on the current status of the overall system. The remote subsystem directly interacts with the sensors and actuators that make up the robotic platform. The two subsystems communicate with one another across a communication link, thus allowing the operator’s input commands to affect the actions of the platform and for the platform to return its status and sensory information back to be displayed to the user.
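
To make the division of responsibilities concrete, the following minimal sketch (hypothetical names, not code from any system described here) models the local and remote subsystems as two objects exchanging command and status messages over a stand-in communication link; a real implementation would replace the queues with an actual radio or network link.

    import queue

    class CommunicationLink:
        """Stand-in for the physical link: one message queue per direction."""
        def __init__(self):
            self.to_remote = queue.Queue()
            self.to_local = queue.Queue()

    class RemoteSubsystem:
        """Remote side: drives the actuators and reports sensor status."""
        def __init__(self, link):
            self.link = link
            self.throttle = 0.0                       # actuator state

        def step(self):
            while not self.link.to_remote.empty():
                command = self.link.to_remote.get()
                self.throttle = command["throttle"]   # act on operator input
            self.link.to_local.put({"throttle": self.throttle})  # report status

    class LocalSubsystem:
        """Local side: forwards operator input and displays returned status."""
        def __init__(self, link):
            self.link = link

        def step(self, operator_input):
            self.link.to_remote.put({"throttle": operator_input})
            while not self.link.to_local.empty():
                print("status:", self.link.to_local.get())  # show to operator

    link = CommunicationLink()
    local, remote = LocalSubsystem(link), RemoteSubsystem(link)
    local.step(0.5)   # operator pushes the joystick halfway
    remote.step()     # platform acts and returns its status
    local.step(0.5)   # the next cycle displays the reported status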


2.3 Autonomy

Autonomous architectures generally fall under one of three paradigms. This section summarizes the basic concepts of each. An in-depth analysis of this topic can be found in [1].

2.3.1 Deliberative

Sense → Plan → Act

Figure 2.3: Deliberative Autonomous Paradigm

The first autonomous robots were designed under the deliberative paradigm. These systems act in a sequential manner, as shown in Figure 2.3:

1. The system senses its current state and environment;

2. The system decides upon a plan to follow based on these sensor readings and past knowledge;

3. The system drives its actuators based on its plan before returning to collect new sensor readings and continue the process.

The biggest disadvantage of deliberative systems is the performance bottleneck created by the planning stage. Traditionally, planning takes a relatively large amount of processing time compared to the other aspects of an autonomous system and, therefore, the update rate of the system is limited by the planning stage that occurs in every iteration of the processing loop [1].
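
The sequential nature of the paradigm, and the bottleneck it creates, can be sketched in a few lines of Python; the loop below is a hypothetical illustration in which every iteration pays the full cost of planning before any action is taken.

    import time

    def sense():
        return {"obstacle_ahead": False}    # stand-in sensor reading

    def plan(world_state):
        time.sleep(0.5)                     # planning dominates the loop time
        return "turn" if world_state["obstacle_ahead"] else "forward"

    def act(action):
        print("driving:", action)           # stand-in actuator command

    for _ in range(3):                      # sense -> plan -> act, strictly in sequence
        state = sense()
        action = plan(state)                # the update rate is limited here
        act(action)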


2.3.2 Reactive

Sense → Act
Sense → Act
Sense → Act

Figure 2.4: Reactive Autonomous Paradigm

The reactive paradigm evolved as a reaction to the deliberative approach. It completely forgoes an explicit planning stage, instead acting based on a collection of independent behaviors as shown in Figure 2.4. Each behavior in a reactive system takes in sensor readings and, depending on the behavior it represents, affects the action the robot will carry out. The way these behaviors combine ultimately defines the operation of the overall system [1].

Reactive systems greatly improve the reaction time of autonomous systems. For example, instead of the discrete obstacle detection and path planning steps that would occur during each processing loop in a deliberative system, a reactive system might continuously run an obstacle avoidance behavior that changes the direction the robot is driving whenever it detects an obstacle in its path. Another benefit of reactive systems is that the separation between behaviors more easily accommodates parallel processing. The key disadvantage is the loss of the planning stage. Instead of a subsystem that defines the overall operation of the robot, the behaviors must be designed such that they will cooperate to produce the desired overall operation, often not a simple task [1].
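
One simple way independent behaviors can combine is fixed-priority arbitration, sketched below with invented behaviors: each behavior maps sensor readings directly to a steering preference, and the first behavior with an opinion wins, with no planning stage anywhere.

    def avoid_obstacle(readings):
        # Highest priority: veer away whenever something blocks the path.
        return "turn_left" if readings["obstacle_ahead"] else None

    def seek_goal(readings):
        # Lower priority: steer toward the goal when nothing overrides it.
        return "turn_right" if readings["goal_bearing"] > 0 else "forward"

    behaviors = [avoid_obstacle, seek_goal]          # ordered by priority

    def arbitrate(readings):
        for behavior in behaviors:                   # first non-None output wins
            action = behavior(readings)
            if action is not None:
                return action

    print(arbitrate({"obstacle_ahead": True,  "goal_bearing": 0.2}))  # turn_left
    print(arbitrate({"obstacle_ahead": False, "goal_bearing": 0.2}))  # turn_right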

2.3.3 Hybrid Deliberative/Reactive

The hybrid paradigm attempts to reintegrate an overall planning component into the reactive approach, as shown in Figure 2.5. The hybrid approach uses behaviors for handling reactive control and providing the basic functionality for the system. The planning component handles the overall decision-making process and knowledge of the robot’s status and environment. To achieve the system’s goal, the planning component activates, deactivates, and modifies the behaviors to complete the specific tasks the robot needs to accomplish [1].

Plan
Sense → Act
Sense → Act
Sense → Act

Figure 2.5: Hybrid Autonomous Architecture
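
Extending the previous sketch, a hybrid system can be caricatured as a planner that runs only occasionally and merely edits the set of active behaviors, while the reactive loop continues at full rate; all names here are again hypothetical.

    def avoid_obstacle_behavior(readings):
        return "turn_left" if readings["obstacle_ahead"] else None

    def wander_behavior(readings):
        return "forward"

    def go_home_behavior(readings):
        return "turn_right"

    def planner(mission):
        # Deliberative component: choose which behaviors serve the current task.
        if mission == "explore":
            return [avoid_obstacle_behavior, wander_behavior]
        return [avoid_obstacle_behavior, go_home_behavior]

    active_behaviors = planner("explore")   # planning happens outside the fast loop
    for _ in range(3):                      # reactive loop runs at full rate
        readings = {"obstacle_ahead": False}
        for behavior in active_behaviors:
            action = behavior(readings)
            if action is not None:
                print("acting:", action)
                break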


CHAPTER 3

RELATED WORKS

Many authors over the past decades have designed and documented various robotic architectures and implementations. This chapter will briefly discuss a selection of these architectures and place RAMP into context.

3.1 Frameworks

To begin, it is relevant to mention some well-known robot frameworks that assist developers in implementation and testing of robot algorithms and software. These frameworks provide an array of different development tools, APIs, and libraries with the goal of supporting easier development of robotics-related functionality and code reuse across different systems and robot platforms. In many cases, as with RAMP, robot architectures focus less on providing specific tools and APIs. Instead, focus is directed more toward the system structure and components needed to meet various design requirements.

One popular robot framework is the Player Project [3], [4]. It encompasses three software components: Player, Stage, and Gazebo. Player is defined as a “robot device interface” that defines an abstraction layer around the hardware specifics related to common robot functionality such as movement and sensor readings. In the Player model, an instance of the Player interface is executed as a server. This Player server hides the hardware details from other software and algorithms that act as clients in this model. Stage and Gazebo are both simulators that implement the Player interface. Client algorithms that utilize the Player interface need not know whether their requests and data are being handled through simulation or by a physical platform. The Robot Operating System (ROS) [5], like Player, is a robot framework that is designed to provide an abstract interface around robot hardware. ROS is more complex than Player and implements a message passing system for communication between nodes that provide the various control and sensing capabilities. One benefit of ROS’s messaging system is that it provides better support for distributed systems.

RAMP shares some similarities with the Player Project in that both are structured as client-server systems, but the overall goals of RAMP and these two frameworks differ. The Player Project and ROS both exist to provide the specific APIs and tools necessary to abstract away the details of hardware interaction from algorithm implementations. This benefits developers by allowing code reuse across different hardware and robot platforms. RAMP, instead of defining specific APIs, focuses on what components a robot implementation will require to support both teleoperation and autonomy on a modular platform.

3.2 Teleoperation

One existing teleoperation-based architecture for robotics was derived from domain engineering to create a domain specific software architecture (DSSA) [6]. This DSSA is specifically designed for use with a human operator and does not operate autonomously. This is unlike RAMP, which focuses on providing the capability of either autonomous or remote operation. The DSSA consists of five components: Graphical Representation, Collision Detection, User Interface, Communication, and Controller. RAMP, during teleoperation, consists of similar components but with the User Interface and Graphical Representation considered part of a single layer and the Collision Detection and Controller combined into a single layer. The DSSA also differs in that it has controllers on the local system. These controllers use operator commands to simulate the motion of the platform prior to sending control signals across the communication link. RAMP, instead, places the controllers on the remote side of the communication link. This means latency across the communication link has a larger effect on the DSSA and, as a result, its application focus is on slower-moving systems with soft real-time requirements [6]. The DSSA has been implemented on a variety of platforms including a Remotely Operated Service Arm (ROSA), a Pipe Inspection and Retrieval Vehicle (IRV), and the Teleoperated and Robotized System for Maintenance Operation in Nuclear Power Plant Vessels (TRON) [6], [7].

The NASA/NBS Standard Reference Model (NASREM) architecture [8] is another well-known teleoperation architecture. It is designed for partially autonomous space station robotic systems. NASREM is a hierarchical architecture in which high-level tasks are broken down into low-level control. A remote operator can command the robot at various levels of this hierarchy by sending high-level tasks or, alternatively, intervene by sending low-level commands. It has five major components: Operator Interface, Task Decomposition, World Modeling, Sensory Processing, and a Global Memory. A key difference between RAMP and NASREM is that RAMP divides autonomy-related functionality into a subsystem distinct from the robot’s low-level hardware control functions. NASREM, on the other hand, integrates autonomous functionality together with hardware control in a single system. In RAMP, the separate autonomy subsystem communicates with the control system using the same command interface as a remote operator would, making the autonomy subsystem a replacement for a remote operator. Overall, this means the teleoperator cannot make use of functionality that is part of the RAMP autonomy subsystem. The benefit of this approach is that RAMP provides the capability to completely remove the autonomy subsystem from the overall system without affecting the ability to teleoperate. NASREM’s integrated approach allows a remote operator to interact with the autonomy-like functionality. However, such a design will not allow the system to operate independently of the autonomy system without special consideration during implementation. NASREM has been the foundation for a general proof-of-concept teleoperatable platform [9] and a system to safely manage a landfill [10]. The similar RAYBOT architecture [11] was implemented on a teleoperated all-terrain vehicle (ATV).


3.3 Autonomy

3.3.1 Deliberative

As previously hinted, NASREM also supports autonomous operation. In this situation, a NASREM system accepts mission-level commands from a remote operator to initiate autonomous operation, and the various levels of the hierarchy provide the functionality necessary to decompose this mission into the tasks and control actions to be carried out. When operating autonomously, NASREM is an example of a deliberative autonomous architecture. The Sensory Processing, World Modeling, and Task Decomposition components make up the Sense, Plan, and Act components in the deliberative paradigm [1]. Each level of the hierarchy must go through the sequence of sensing, then planning, then acting. During the World Modeling (Plan) stage, the robot’s understanding of the world is modeled, then actions are simulated and potentially re-planned. Because this occurs between every sensing and acting phase, the timing constraints on World Modeling are extremely important for components with hard real-time requirements. RAMP, when operating autonomously, is a hybrid autonomous architecture. It, along with other hybrid architectures, moves the planning and world modeling-related functionality out from between sensing and acting to allow for easier real-time control loops without sacrificing planning functionality.

3.3.2 Reactive

Subsumption [12], [13] is the most famous of reactive paradigm architectures. It consists of a hierarchy of modular behaviors with increasing levels of competence. Each high-level behavior can be suppressed or modified by more primitive, lower-level behaviors. For example, a robot may contain a high-level behavior for trying to drive in some general direction. This behavior may then be overridden by a low-level obstacle avoidance behavior when forward-facing sensors detect an obstacle. As mentioned in the previous chapter, reactive autonomous designs do not plan but instead are given behaviors that are combined in such a way as to result in some overall desired operation. On the other hand, RAMP and other hybrid architectures do contain a planning component. Subsumption has been implemented on many platforms [13] for applications such as can collection [14] as well as proof-of-concept platforms for the exploration of planetary surfaces [15].

3.3.3 Hybrid Deliberative/Reactive

RAMP, when operating autonomously, acts as a hybrid paradigm architecture. Hybrid architectures are a result of reintroducing the planning aspects of deliberative designs while taking advantage of the behavior-like components of reactive architectures. Most current work on robotic architectures is focused on hybrid designs. One of the first was the Autonomous Robot Architecture (AuRA) [16], [17]. Originally, it was an architecture specifically designed for navigation but has since been generalized for autonomous systems as a whole. It is divided into two major layers. The first contains the deliberative-related components, which include a hierarchical planner and a representation of the world. The second, schema controller, layer acts as the reactive portion of the hybrid design and contains the sensing and actuation controllers. An AuRA-based system is primarily driven by the schema controller, like a reactive system. The planner is used to set up the reactive controllers when a new mission is received or to handle any problems detected by the reactive layer. RAMP, like many later hybrid architectures, adds a new layer between the deliberative and reactive layers to manage their interaction. AuRA has been implemented on various robots at the University of Massachusetts and Georgia Tech, such as a trash-collecting robot [16].

Another early hybrid architecture is ATLANTIS [18]. It is a three-layered design with a Deliberator, Sequencer, and Controller. The first and last relate to the deliberative and reactive parts of the hybrid paradigm, respectively. The Sequencer drives the system and manages the interaction between the two other layers by monitoring the system for problems, invoking the Deliberator when a new plan is needed, and starting/stopping the individual activities in the Controller. This last responsibility is handled based on a list of tasks modeled by the reactive action package (RAP) system [19]. The 3T architecture [20] is a similar architecture with a Planner, Sequencer, and Skill Manager. 3T differs in that it is driven by the upper, Planner, layer whereas ATLANTIS is driven by the middle, Sequencer, layer. 3T has been implemented on various platforms including a robot that can run general errands [20] and a monitoring system on the Space Shuttle’s Remote Manipulator System [21]. Various other architectures have used this general three-tiered design, such as the architecture described in [22] and the CoCo architecture [23].

There are a multitude of other hybrid architectures. NMRA [24] is another, designed for NASA’s Deep Space One (DS-1) project; it heavily emphasizes real-time constraints. CLARAty [25] is a two-tiered design that was implemented on NASA’s K9 and K10 [26] prototype rovers. Other popular hybrid architectures include Reactive System (RS) [27], Task Control Architecture (TCA) [28], Saphira [29], and SSS [30].

While later chapters will break RAMP down into many more than three layers, from a high-level perspective, RAMP, when operating autonomously, is closely related to the three-tiered hybrid architectures such as ATLANTIS and 3T. RAMP contains two layers that are directly related in responsibility to the Deliberator/Planner and Sequencer layers in ATLANTIS and 3T. RAMP also provides the same functionality provided by the Controller/Skill Manager layer but breaks it down into multiple smaller layers and a completely separate subsystem. This is done to support the RAMP design requirements of a modular robotic platform with a removable autonomy control module.

To summarize, RAMP is a robotic architecture specifically designed for a modular robotic platform that can operate either by teleoperation or via a removable, hybrid paradigm autonomy module. There are two key distinctions between RAMP and existing projects. First, most autonomous architectures are capable of supporting teleoperation to some degree so that they can receive high-level mission commands, but they are either incapable of or require implementation-specific details to allow the autonomy components to be removed from the system while preserving teleoperation capabilities. Secondly, most existing architectures do not disallow implementation on a modular platform, but also offer no direct support for modularity. RAMP is specifically designed to support systems that provide removable autonomy and modularity.


CHAPTER 4

GENERAL ARCHITECTURE

This chapter describes the generalized RAMP architecture. It starts with a high-level overview of the system and the major subsystems required to support both teleoperated and autonomous operation, followed by an in-depth description of the components of each subsystem and their requirements in order to support the system’s operation and a modular platform.

4.1 System Architecture

A robotic system capable of operating in either teleoperated or autonomous modes has three potential actors, as shown in Figure 4.1. The robotic platform is the actor common to the system while operating in either of the two modes of operation. The platform’s job is to provide the mechanical and electrical capabilities necessary for the system to complete the overall mission objective. The human operator and autonomous operator are the two decision-making actors in their respective modes of operation, each responsible for deciding how the platform should act to successfully complete the mission.

In both modes of operation, desired actions flow from the controlling operator to the robotic platform, which attempts to carry out those requests. This fits with a typical client-server architecture. Here, the robotic platform takes the role of the server, accepting commands from the clients, in this case, the operators. Information also needs to flow in the opposite direction. To make educated decisions, the operators need to know the status of the platform. Thus, the platform acts as a source for status information that the operators can consume. By requiring the operators to request a status update from the platform, this use case also fits within the client-server architecture.

Figure 4.1: System Actors

The overall computational architecture is designed based on these interactions. First, three subsystems emerge, one for each actor: the Backend, Frontend, and Autonomy. These relationships are summarized in Figure 4.2. Communication between the three subsystems is modeled around a client-server architecture.


Figure 4.2: Subsystems

4.2 Backend

The Backend is the subsystem that manages interactions with the robotic platform. It has three key responsibilities:

1. It must provide the server interface for accepting requests from the clients and returning responses about the status of the system;

2. It must be capable of understanding the requests received and interacting with the platform such that the platform acts appropriately based on the request;

3. It must manage the platform hardware with the necessary controllers to maintain any desired states and avoid dangerous operations.

To meet these responsibilities, the Backend is divided into the eight layers shown in Figure 4.3.


Figure 4.3: Backend Architecture

The first three layers, the Communication Hardware, Communication Hardware Interface, and Server layers, provide the server functionality for meeting the first requirement of the Backend. The Communication Hardware Layer encompasses any hardware necessary for the modulation and demodulation of communication signals transmitted and received. The Communication Hardware Interface Layer contains the drivers and APIs necessary for interfacing with and using the previously mentioned communication devices. Finally, the Server Layer provides the interface for receiving requests and sending responses. Together, these layers determine the communication protocols the Backend will use to format messages exchanged between the clients and Backend. The ultimate responsibility of these three layers is to convert the communication signals received by the Backend into commands understood by the Distribution Layer and, later, to take responses returned by the Distribution Layer and transmit them as signals back to the client.

The Robot and Module Distribution layers are responsible for examining requests received on the server and determining what should be done. It is at these layers that the modularity of the robotic platform becomes important. First, the request must be examined by the Robot Distribution Layer from an overall “robot” point of view. What modules are necessary to carry out the request? Are the necessary modules attached and available? What do the modules need to do to carry out the overall task? This will result in the original request or new requests being passed to the modules involved in completing the task. Next, each involved module’s Module Distribution Layer must examine the request to determine what it needs to do with its functionality by invoking the correct operations in the Functional Layer. This satisfies the second requirement of the Backend.
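
As a purely hypothetical sketch of this two-stage dispatch, the fragment below shows a Robot Distribution Layer that verifies a module is attached before handing the request to that module's own distribution logic; the request fields anticipate Section 4.6, with the Device field omitted for brevity.

    class Module:
        """Module Distribution Layer: routes a request to the module's functionality."""
        def __init__(self, name, functions):
            self.name = name
            self.functions = functions  # maps Function names to Functional Layer operations

        def handle(self, request):
            return self.functions[request["function"]](request["parameter"])

    class RobotDistributionLayer:
        """Robot-level view: verifies the needed module is attached, then forwards."""
        def __init__(self):
            self.modules = {}

        def attach(self, module):
            self.modules[module.name] = module  # modules can be added or removed freely

        def handle(self, request):
            module = self.modules.get(request["module"])
            if module is None:
                return {"error": "module not attached"}
            return module.handle(request)

    robot = RobotDistributionLayer()
    robot.attach(Module("drive", {"set_speed": lambda value: f"speed set to {value}"}))
    print(robot.handle({"module": "drive", "function": "set_speed", "parameter": 0.3}))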

The remaining layers, the Functional, Platform Hardware Interface, and Platform Hardware layers, contain the functionality for interacting with the robotic platform. Like the Module Distribution Layer, each possible module has a separate implementation of these layers. The Platform Hardware Layer is closely related to, and can be considered part of, the robotic platform. It encompasses all of the hardware necessary for actuating the platform, sensing the status of the platform, and sensing the environment. The Platform Hardware Interface Layer provides the drivers and API for allowing the Functional Layer to interact with the Platform Hardware Layer. The Functional Layer provides multiple benefits. First, it provides abstractions around the hardware interfaces for readability and reusability. Whereas the Platform Hardware Interface Layer contains interfaces for low-level hardware interaction, such as setting digital output pins, driving a PWM signal, or reading ADC channels, the Functional Layer provides abstract wrappers, such as turning an indicator light on or off, setting a desired servo position, or determining the position of a linear actuator. A second benefit of the Functional Layer is to fuse the functionality of multiple separate hardware devices, such as combining an encoder interface and wheel interface to result in the functionality of driving a wheel at a specific velocity. These three layers meet the final requirement for the Backend.
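
The wheel-velocity example might look like the sketch below, in which a Functional Layer component fuses a hypothetical encoder interface and motor interface behind a single drive_at_velocity operation using a simple proportional controller; none of these names come from the MARTE implementation.

    class EncoderInterface:
        """Platform Hardware Interface Layer: raw encoder access."""
        def read_velocity(self):
            return 0.8                             # stand-in measured rad/s

    class MotorInterface:
        """Platform Hardware Interface Layer: raw PWM access."""
        def set_pwm(self, duty):
            print(f"PWM duty set to {duty:.2f}")

    class WheelDrive:
        """Functional Layer: fuses encoder and motor into one capability."""
        KP = 0.5                                   # proportional gain

        def __init__(self, encoder, motor):
            self.encoder, self.motor = encoder, motor
            self.duty = 0.0

        def drive_at_velocity(self, target):
            error = target - self.encoder.read_velocity()
            self.duty += self.KP * error           # adjust toward the target
            self.motor.set_pwm(self.duty)

    wheel = WheelDrive(EncoderInterface(), MotorInterface())
    wheel.drive_at_velocity(1.0)                   # one control-loop iteration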

4.3 Frontend

The Frontend manages the interactions of the human operator with the rest of the system during teleoperation. Its responsibilities include:

1. Receiving and interpreting operator input;

2. Communicating the operator’s desired actions to the Backend;

3. Requesting and displaying the status of the system to the operator so he/she can make more informed decisions.

The Frontend is divided into the seven layers shown in Figure 4.4.

The bottom three layers, the Communication Hardware, Communication Hardware Interface, and Client layers, act as the opposing end to the three communication-related layers of the Backend subsystem. They are responsible for exchanging the operator’s actions and the platform’s status with the Backend over the communication link. This involves formatting the requests generated by the Frontend’s Functional Layer into the protocol expected by the Backend, modulating the data across the communication link, demodulating any response signals, and unpacking the data from the response to be passed back to the Functional Layer.

The Functional Layer manages the Frontend’s knowledge of the system and determines when the Client Layer is invoked. This involves determining what functionality the Backend can provide based on the modules attached to the robotic platform. It should also contain a component for ensuring regular status updates are requested on behalf of the operator.

Figure 4.4: Frontend Architecture

The Mapping, Operator Hardware Interface, and Operator Hardware layers exist to interface with the human operator. The Operator Hardware Layer contains two components, the Display component and the Control component. The Display component is the hardware necessary to inform the user of the current status of the system. This can be as simple as LED indicators or as complex as a graphical user interface (GUI). The Control component encompasses the input devices the operator uses to control the system such as joysticks, controllers, or a button on a GUI. The Operator Hardware Interface Layer is composed of the drivers and APIs necessary to write to the Display hardware and read from the Control hardware.

The Mapping Layer decides how operator inputs map to different requests for the Backend subsystem and how status received from the Backend maps to the display. A separate layer for mapping also provides the capability of having different mapping configurations for different situations or users. This is beneficial, for example, in situations where one operator prefers different or more sensitive controls than another.
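
A Mapping Layer with swappable configurations can be as simple as a table per profile, as in the hypothetical sketch below, where each entry maps an input event to a Backend request template and a sensitivity scale.

    PROFILES = {
        # Each profile maps a control input to a request template and a scale.
        "default":   {"left_stick_y": ("drive", "set_speed", 1.0)},
        "sensitive": {"left_stick_y": ("drive", "set_speed", 0.5)},
    }

    def map_input(profile_name, control, raw_value):
        """Translate one operator input into a request for the Backend."""
        module, function, scale = PROFILES[profile_name][control]
        return {"module": module, "function": function,
                "parameter": raw_value * scale}

    # The same joystick deflection yields a gentler command under "sensitive".
    print(map_input("default", "left_stick_y", 0.9))
    print(map_input("sensitive", "left_stick_y", 0.9))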

4.4 Autonomy

The Autonomy subsystem acts as the autonomous actor when the platform is operating in autonomous mode. It is divided into the eight layers shown in Figure 4.5.

The Communication Hardware, Communication Hardware Interface, and Client layers have the same basic responsibilities as the three identically named layers in the Frontend subsystem: to facilitate communication between the Backend and the rest of the Autonomy subsystem. One difference is that, whereas the Frontend client will always utilize the communication link, this is not required for the Autonomy subsystem. The Autonomy subsystem may be placed on the remote side of the communication link, putting the Autonomy subsystem on the robot platform. In many cases, this is desired for the sake of avoiding potential latency issues due to distance between the operator and platform. The alternative would place the Autonomy subsystem on the operator side of the communication link, requiring the link for interaction between the Autonomy and Backend subsystems.

The Autonomy Hardware and Autonomy Hardware Interface layers provide the hardware and software interfaces necessary for retrieving sensor data from the sensors that are part of the Autonomy subsystem. The hardware associated with these layers is that considered part of the removable autonomy module rather than hardware that is part of the robotic platform (and, therefore, provided by the Backend). The Autonomy Hardware sensor data and Backend sensor data are ultimately used to make autonomous decisions.

Figure 4.5: Autonomy Architecture

The Functional Layer provides the functionality available to the upper, decision-making layers. It has three components. The Platform component abstracts the Client Layer from the remainder of the Autonomy subsystem. The Autonomy component provides the basic functionality of the autonomy hardware (via API and controllers) and parallels how the Functional Layer of the Backend subsystem provides the basic functionality of the platform. Finally, the Robot component utilizes the functionality of the Platform and Autonomy components to create new functionality for use by the Executive Layer.

The Planning and Executive layers work together to handle all the high-level decision-making for the Autonomy subsystem. To do this, the Planning Layer determines what tasks are needed to complete a mission and the Executive Layer must invoke the functionality in the Functional Layer to carry out these tasks.
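
The division of labor between the two layers could be sketched as follows, with a Planning Layer that turns a mission into an ordered task list and an Executive Layer that works through the list by invoking Functional Layer operations; the mission and task names are invented for illustration.

    def planning_layer(mission):
        """Decompose a mission into an ordered list of tasks."""
        if mission == "collect_sample":
            return ["drive_to_site", "excavate", "drive_home", "unload"]
        return []

    FUNCTIONAL_LAYER = {
        # Stand-ins for the Robot component's Functional Layer operations.
        task: (lambda t=task: print("executing:", t))
        for task in ["drive_to_site", "excavate", "drive_home", "unload"]
    }

    def executive_layer(tasks):
        """Carry out each planned task via the Functional Layer."""
        for task in tasks:
            FUNCTIONAL_LAYER[task]()

    executive_layer(planning_layer("collect_sample"))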

4.5 Architecture Paradigm

With the general architecture presented, it is now possible to relate RAMP to the teleoperated and autonomous paradigms described in Chapter 2. When under teleoperated control, the subsystems of RAMP interact as shown in Figure 4.6. Comparison with the basic teleoperated architecture in Figure 2.2 shows that the Frontend subsystem acts as the local subsystem, providing the operator an interface to control and examine the remote platform. The Backend acts as the remote subsystem, providing the interface to the actuators and sensors on the platform on the remote side of the communication link.

When operating autonomously, comparison of the RAMP architecture in Figure 4.7 with the hybrid paradigm in Figure 2.5 shows that RAMP is structured as a hybrid architecture. The Planning and Executive layers of the Autonomy subsystem act as the Plan component of the system. The remaining layers of the Autonomy subsystem collaborate with the Backend subsystem to provide the controllers and interfaces that represent the Sense and Act components in the hybrid architecture.


Figure 4.6: Teleoperated Architecture


Figure 4.7: Autonomy Architecture


4.6 Subsystem Communication

Communication with the Backend subsystem through the server requires an application layer protocol understood by both the Backend server and the Autonomy and Frontend clients. The application layer protocol is responsible for carrying two types of data between the subsystems: requests for the Backend and responses from the Backend. This section examines the necessary components for both types of messages.

4.6.1 Request

The first type of message is a request for the Backend. These messages include both requests for the Backend to carry out an action and requests for new status information. Four pieces of information are necessary to communicate a request:

1. Module;

2. Device;

3. Function;

4. Parameter.

Each field is derived from the Distribution and Functional layers of the Backend.

As a request is received and parsed by the Server Layer of the Backend, it is passed

to the Robot Distribution Layer. The Robot Distribution Layer must know whether the

request is intended for the robot as an integrated platform or whether it is destined for a

specific module. This information will be carried in the request’s Module field. When

a request is propagated to the Module Distribution Layer, the Backend must know what

component of the module’s Functional Layer is being queried. This is carried in the Device

field.

Once reaching the Functional Layer, the Backend must determine what type of request

is being made. The type of request is carried in the Function field. This field indicates

whether it is a request for data or a request for an action and, if so, what kind of action.

Finally, clients need to be able to append parameters to the request. The Parameter field


carries this information. With these four fields, the Functional Layer has enough information

to invoke the correct hardware and controllers to carry out a task.
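As an illustration only, such a request might be represented in C as the following structure; the field names mirror the list above, while the fixed sizes are purely hypothetical since the architecture leaves field sizes and formats to the implementation:

    /* Hypothetical sketch of a request as described above; the
       architecture deliberately leaves field sizes and formats to the
       implementation, so the fixed-size strings here are assumptions. */
    struct request {
        char module[32];    /* target module, or the robot as a whole */
        char device[32];    /* component of the module's Functional Layer */
        char function[32];  /* data query or action to perform */
        char params[64];    /* parameters appended by the client */
    };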

4.6.2 Response

A response message is generated based on a request for data and follows the reverse

path of the request that spawned it. When the Backend receives a request for data, it

must first query the hardware or controllers via the Functional Layer to get the desired

information. The type of data and value of the data are stored in a Property and Value

field in the response message. To provide context on the Property, as the request rises back

up through the layers to be returned to the client by the server, the Distribution Layer also

appends the Device and Module fields just as they were originally received. This results in

a response message also with four fields:

1. Module;

2. Device;

3. Property;

4. Value.

The size and format of these fields are not limited by the architecture and are determined based

on implementation-specific requirements.
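A response can be sketched the same way, again with hypothetical field names and sizes:

    /* Hypothetical sketch of a response, mirroring the request above;
       field sizes are again assumptions. */
    struct response {
        char module[32];    /* module that produced the data */
        char device[32];    /* device that produced the data */
        char property[32];  /* type of the returned data */
        char value[64];     /* value of the returned data */
    };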


CHAPTER 5

CASE STUDY

This chapter presents an implementation of RAMP on the University of Alabama

MARTE platform as part of the 2013-2014 NASA Robotic Mining Competition as a case

study.

5.1 NASA Robotic Mining Competition

The 2014 NASA Robotic Mining Competition (RMC) is a competition “for university-

level students to design and build a mining robot that can traverse the simulated Martian

chaotic terrain, excavate Martian regolith and deposit the regolith into a Collector Bin

within 10 minutes” [31]. Teams “can use telerobotic or autonomous operation to excavate

the basaltic regolith simulant" and are awarded additional competition points for the degree

of autonomous capability they are able to demonstrate during a competition run.

The competition takes place in a walled 7.38 x 3.88 meter arena, as shown in Figure

5.1. The arena is divided into three regions: the starting, obstacle, and digging regions. The

starting region is further broken down into two starting boxes. Prior to each competition

run, a competition robot is placed in one of the two starting boxes facing one of four possible

cardinal directions. The starting box and direction are selected randomly prior to entrance

into the arena. The collection bin for depositing the excavated regolith is mounted centered

on the outside of the starting region's 3.88 meter wall. During a run, the robots are

required to excavate and deliver at least 10 kilograms of regolith simulant to qualify. The

rules state that all collected simulant must be excavated from the digging region of the arena,

the region on the side opposite the starting region. To reach the digging region, robots must

navigate and traverse the obstacle region, the center region of the arena. The obstacle region


is populated with three rocks and two craters, each up to 30 centimeters in diameter. The

placement of these obstacles is chosen by the competition judges prior to each round of the

competition.

Figure 5.1: RMC Arena Diagram [31]

Once the robot is placed within the arena, the systems are powered, and connection

is established, operators may only interact with the robot remotely. During the competition

run, the operators are stationed in a control room isolated from the arena. Operators are

provided an Ethernet IEEE 802.3 switch that is wired to the arena where competitors are

expected to bring and connect a wireless access point. Communication with the robot in the

arena occurs by utilizing this access point. The competition places an additional constraint

on the wireless link between the access point and the robot; it must use channel 1 or 11 of

the IEEE 802.11 b/g protocol to allow judges to manage and minimize potential interference

between competing robots.


5.2 MARTE Platform

The University of Alabama’s entry for the 2014 NASA RMC was MARTE [32]. The

MARTE platform and RAMP computational architecture were designed and implemented

concurrently during the 2014 competition cycle. This section presents the MARTE robotic

platform upon which RAMP was realized.

As implied by its name, MARTE is a modular platform. A key design goal for this

platform was to provide a common frame and drive-train upon which an array of possible

payload modules, primarily excavation modules, can be mounted. The common platform is

called the Base module. During the development process, various excavation modules were

designed. One was ultimately chosen for implementation for this competition, the Bucket

Ladder Excavator (BLE) module. Figure 5.3 shows three designed excavation modules along

the top row, followed by the Base module in the center. The bottom image shows the

resulting MARTE platform when the top-center module, the BLE, is mounted atop the

Base module. This case study focuses on the RAMP implementation of the Base and BLE

modules.

Figure 5.2: MARTE (Base and BLE Modules)


Figure 5.3: MARTE Modularity [32]


5.2.1 Base Module

The Base module is the common module that defines the standard mechanical and

electrical interfaces through which all other potential modules attach. Mechanically, it pro-

vides a flat, rectangular, aluminum frame with four mounting holes for attaching a payload

module. Electrically, the Base houses and makes available an unregulated 17 to 21 volt

power supply for use by any attached modules. This rail is powered by two 5000 mAh 18.5

V LiPo batteries wired in parallel. This power system can be disconnected via a toggleable

push button to meet safety requirements defined by the competition rules.

The Base also provides a standard, four-wheel drive train. Each wheel is indepen-

dently controllable by separate channels on two dual-channel Sabertooth 2x60 motor con-

trollers wired to four 343 oz-in motors, each paired with a 256:1 gearbox. The wheels' axles

are centered below the gearboxes and driven via a chain system to provide an extended 7.75

inch ground clearance. The aluminum wheels are each 12 inches in diameter with eight 1

inch grousers for added traction in loose terrain.

Figure 5.4: MARTE Base Module


5.2.2 Bucket Ladder Excavator Module

The BLE module provides the primary digging, collection, and depositing function-

ality for the platform during the competition. The BLE uses a bucket ladder mechanism

consisting of 24 steel buckets attached to a belt driven by two motors, each controllable

through the channels on a 2x60 Sabertooth motor controller. This mechanism is used to

break up and carry the simulant from the surface of the arena to the on-board aluminum

storage bin. The entire storage bin and bucket ladder mechanism pivots on a point at the

upper rear of the module and is actuated by a 500 lb Thompson ball-screw linear actuator

with built-in potentiometer-based feedback. This allows the bucket ladder mechanism to be

lowered into the soil to excavate or to be lifted, raising the front of the storage bin to allow

the collected regolith to slide off the rear of the robot into the competition collection bin.

The actuator is also controllable through a 2x60 Sabertooth motor controller channel.

Figure 5.5: MARTE Bucket Ladder Excavator Module


5.3 RAMP Implementation

As previously described, RAMP consists of three subsystems: the Backend, Frontend,

and Autonomy. On MARTE, these three subsystems are separated into three separate com-

puting systems. The Backend subsystem is a computational system located on the MARTE

platform. Central processing for the Backend subsystem, and the hardware necessary to

operate the Base module, is located on the Base platform while the BLE, and any other

payload module, carries its own devices and hardware. The Frontend is implemented as a

separate, portable system that can be carried to test locations or the command room for

a competition run. The Autonomy subsystem, being a removable subsystem, is a separate

computing platform that can be mounted on the MARTE platform when autonomous con-

trol is desired. This section describes the implementation details of each of these subsystems

as part of RAMP.

5.3.1 Subsystem Communication

First, it is beneficial to examine the implementation of the inter-subsystem commu-

nication as it is heavily related to the rules set by the competition. Of particular importance

for this discussion is the requirement that the communication link between the remote op-

erators and the robot must be implemented using an IEEE 802.3 Ethernet link followed by

a wireless hop using IEEE 802.11 b/g, all using an Internet Protocol (IP) based network.

This led to the following network design for the three subsystems.

Figure 5.6: Subsystem Communication (diagram: the Frontend in the Command Room connects to a router using IP over 802.3; that router bridges to a second router at the arena using IP over 802.11g; the Backend and the Autonomy subsystem on the robot platform connect to the arena router using IP over 802.3 across the Communication Link)


Two Linksys WRT54G routers act as the two wireless communication link end points.

Each is loaded with the Tomato third-party firmware [33] and is configured to establish a

wireless bridge [34] between the two routers. This allows the Frontend subsystem connected

to the local router to address any systems connected to the remote router, including the

Backend and Autonomy (if attached) subsystems, as though they are on the same IP subnet.

IP networks commonly use the Transmission Control Protocol (TCP) or User Datagram

Protocol (UDP) to assist in transportation of application data between programs running

on remote systems. In this implementation, TCP is used for communication between the

Backend server and the Frontend and Autonomy clients because it provides a connection-

based and reliable communication stream.

Communication of request and response data between the subsystems requires an

application protocol understood by both the source and destination of the message. A

custom application layer protocol, named the MARTE Control Protocol (MCP), and based

on the protocol described in Chapter 4, was implemented for this purpose. The protocol

was defined to carry all data in ASCII to ease the development and debugging process. The

format of a request message is shown in Figures 5.7 and 5.8 and the response message format

in Figures 5.9 and 5.10.

The MCP allows multiple requests to be transmitted within a single message sent

from a client to the Backend and, similarly, multiple responses to be returned within a single

message. The supported values for each of these fields are determined by the functionality

provided by the Backend and will be examined in detail in the following section. Finally,

Figures 5.11 and 5.12 summarize the flow of requests and responses between the subsystems

during both teleoperation and autonomous operation, respectively.


Figure 5.7: MARTE Control Protocol Request Message (diagram: an MCP Request is formatted as Module:Device.Function=(Param,Param,...); an MCP Request Message is a sequence of requests terminated by "\n")

message = { request }, "\n" ;

request = module, ":", device, ".", function, "=(", params, ")" ;

module = modulechar, { modulechar } ;

device = devicechar, { devicechar } ;

function = functionchar, { functionchar } ;

params = param, { ",", params } ;

param = paramchar, { paramchar } ;

modulechar = ? all characters except "\n" and ":" ?

devicechar = ? all characters except "\n" and "." ?

functionchar = ? all characters except "\n" and "=" ?

paramchar = ? all characters except "\n" and ")" ?

Figure 5.8: Request Message Format
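For example, a single message carrying two requests (using requests defined later in Table 5.3, with illustrative parameter values) would be serialized as the following line, with the trailing newline marking the end of the message:

    base:base.drive=(100,0.5)base:user_led.on=(1)\n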


Figure 5.9: MARTE Control Protocol Response Message (diagram: an MCP Response is formatted as Module:Device.Property=(Value,Value,...); an MCP Response Message is a sequence of responses terminated by "\n")

message = { response }, "\n" ;

response = module, ":", device, ".", property, "=(", values, ")" ;

module = modulechar, { modulechar } ;

device = devicechar, { devicechar } ;

property = propertychar, { propertychar } ;

values = value, { ",", values } ;

value = valuechar, { valuechar } ;

modulechar = ? all characters except "\n" and ":" ?

devicechar = ? all characters except "\n" and "." ?

propertychar = ? all characters except "\n" and "=" ?

valuechar = ? all characters except "\n" and ")" ?

Figure 5.10: Response Message Format
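As an illustration, a message carrying one response for the Base module's power rail voltage (a property defined later in Table 5.4, with an illustrative value) would be serialized as:

    base:battery.volt=(18.7)\n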


Figure 5.11: MARTE Teleoperation Inter-Subsystem Communication (diagram: MCP request messages travel from the Operator through the Frontend to the local router over 802.3/IP/TCP, across the wireless bridge to the remote router over 802.11g/IP/TCP, and on to the Backend on the platform over 802.3/IP/TCP; MCP response messages follow the reverse path)


Figure 5.12: MARTE Autonomy Inter-Subsystem Communication (diagram: MCP request messages travel from the Autonomy subsystem through the remote router to the Backend on the platform, each hop over 802.3/IP/TCP; MCP response messages follow the reverse path)


5.3.2 Backend

This section begins the subsystem discussion with the Backend implementation for

the MARTE platform. As described in Chapter 4, the RAMP Backend consists of eight

layers (Figure 4.3). These eight layers are organized on the MARTE platform as shown in

Figure 5.13.

Figure 5.13: MARTE Backend Layers (diagram: the module-independent Communication Hardware, Communication Hardware Interface, Server, and Robot Distribution layers sit above per-module Module Distribution, Functional, Platform Hardware Interface, and Platform Hardware layers for the Base module platform and the Bucket Ladder Excavator platform)


The Backend implementation is based around a module-independent, single board

computer (SBC) as opposed to a distributed approach where each module would have sepa-

rate processors. The SBC chosen for the Backend is the Phidgets PhidgetSBC2 [35], shown

in Figure 5.14. It is built around the ARM-based Samsung S3C2440 400 MHz processor

and has 64 MB of SDRAM. For external interfaces, it has one 10/100BASE-T RJ-45 port

for Ethernet support, six USB 1.1 ports, and a built-in Phidgets PhidgetInterfaceKit 8/8/8.

The PhidgetInterfaceKit 8/8/8 provides eight digital input ports, eight digital outputs, and

eight analog-to-digital input channels.

Figure 5.14: Phidgets PhidgetSBC2 [35]

The first layer of the Backend is the Communication Hardware Layer. It must pro-

vide the Backend the capability to communicate with either the Frontend or the Autonomy

subsystem. As previously described in Section 5.3.1, the wireless communication link be-

tween the Frontend and Backend is handled by two Linksys WRT54G routers [36] (Figure

5.15) using 802.11g, one for mounting on the platform, and one for placement in the envi-

ronment around the competition arena. The on-board router is the first component of the

Backend’s Communication Hardware Layer; it is an off-the-shelf device that can manage

transmission and reception of data across the communication link. To connect the SBC to

the communication link, the SBC must interface to the router. The WRT54G provides five


10/100BASE-T RJ-45 Ethernet ports, one of which is utilized by the SBC for this purpose

through a Category 5e Ethernet cable connected to its own RJ-45 port. The PhidgetSBC2

contains the necessary built-in hardware components to make the signals through the Eth-

ernet port available to the on-board CPU. The Communication Hardware Layer for the

Backend is shown in Figure 5.16.

Figure 5.15: Linksys WRT54G [36]

For the SBC to interface to the communication hardware, the Communication Hard-

ware Interface Layer is necessary. In this case, the PhidgetSBC2 comes pre-installed with

the Debian GNU/Linux distribution. This operating system provides the necessary device

drivers and libraries to interface with the Ethernet controller on-board such that it can read

and create network traffic using the standardized POSIX sockets API [37].

Next, the Server Layer interfaces with the POSIX sockets API. It interprets the raw

data received as MCP request messages and, later, serializes MCP responses to be returned

to the client. The sockets library handles part of this process by implementing the IP and

TCP protocols. For example, it handles all the necessary low-level functionality needed by

the operating system to recognize that incoming data is intended for the SBC by examining

the IP header’s IP address field. The OS then uses the port number in the TCP header to

direct the received data to the program that executes the Backend server.

The key component of the Server Layer is a single-connection TCP/IP server written

in C. It manages instantiation, management, and interfacing with a bound TCP/IP POSIX

socket through which clients can attempt to establish a connection for communication with


the Backend.

Figure 5.16: MARTE Communication Hardware Layer (diagram: the PhidgetSBC2's CPU reaches its on-board Ethernet controller, which connects over TCP/IP on 802.3 to the routing function of the Linksys WRT54G; the router carries TCP/IP over 802.11g across the Communication Link)

To successfully communicate with a client, the TCP server must understand

the application layer protocol. While the OS provides implementations for IP and TCP, it

does not understand the custom-designed MCP. The MCP format is used in two ways by

the Server Layer. First, the server makes use of the newline character that indicates the tail

of a request message. This is important because IP-based messages can be fragmented into

separate packets in limited bandwidth situations. Therefore, the server must accumulate

fragmented packets until a full request message is received by the Backend. The end of a

complete message is indicated by receiving the MCP trailing newline. Second, the server

implements a custom parser to convert raw network data into a C structure representing

requests to be used by the two Distribution layers. Similarly, a serializer converts a response


structure back into the raw data to be transmitted to the client. The Server Layer is

summarized by Figure 5.17.

Figure 5.17: MARTE Server Layer (diagram: network data from the Communication Hardware Interface Layer's POSIX sockets reaches the TCP server as fragmented packet data; the MCP parser converts request message data into a list of requests for the Robot Distribution Layer, and the MCP serializer converts the returned list of responses back into response message data)
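A minimal sketch of this accumulation logic, assuming a connected POSIX socket and a hypothetical parse_message() standing in for the MCP parser, might look like:

    /* Minimal sketch of the accumulation loop described above; error
       handling for oversized messages is omitted. */
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    #define BUF_SIZE 4096

    void parse_message(const char *data, size_t len);  /* hypothetical */

    void serve_connection(int client)
    {
        char buf[BUF_SIZE];
        size_t len = 0;

        for (;;) {
            ssize_t n = recv(client, buf + len, sizeof(buf) - len, 0);
            if (n <= 0)
                break;                        /* disconnect or error */
            len += (size_t)n;

            char *nl;
            while ((nl = memchr(buf, '\n', len)) != NULL) {
                size_t msg = (size_t)(nl - buf) + 1;
                parse_message(buf, msg);      /* one complete MCP message */
                memmove(buf, buf + msg, len - msg);
                len -= msg;                   /* keep any partial remainder */
            }
        }
    }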

The list of MCP requests received by the Server Layer is next used by the Robot

Distribution Layer. Each request in the list is examined as a separate entity and is handled

independently of other requests that arrived as part of the same message. The primary MCP

field of importance to the Robot Distribution Layer is the Module field. The Module field

tells the algorithm to handle the request in one of two ways. First, the Module field can

indicate that the request is meant to act on the robot as a full, integrated platform. The

alternative is that the Module field can indicate the request is destined for a specific module.

The former case is indicated by a value of robot in the Module field. When the

field matches this value, the request is passed to a function for further interpretation and

handling. This function analyzes the remaining fields of the request. Three such robot

requests are currently implemented on the MARTE system and the acceptable values for

the other fields are shown in the first three rows of Table 5.1. Two of these requests, the

robot.modules() and watchdog.timeout(int), make requests on the Robot Distribution


Layer itself without directly interacting with a module. The first queries for a response

containing a list of the modules that are attached to the robot platform. This response is

shown in Table 5.2. A client can use this response to determine which modules are currently

available and, therefore, what functionality the system is capable of. The second request

is related to a watchdog timer. This watchdog timer is a constantly decrementing timer

managed by the Robot Distribution Layer. When the timer expires, a stop request is

distributed to all attached modules, ordering the platform to stop all movement. This is a

safety feature for development and demonstration purposes to handle situations in which a

client loses connection to the Backend. The timer is reset every time a new list of requests

is received from the Server Layer. Therefore, as long as the client keeps a constant flow of

requests or a heartbeat message (a request message containing no requests, just the trailing

newline defined by the MCP), the timer will not expire and the platform will continue

to operate normally. The final robot request is a stop command. This demonstrates an

operation for the system as a whole: to stop all current movement. From a client’s point of

view, this request is module-independent. When the Robot Distribution Layer detects this

request, it looks at which modules are currently attached and commands each to stop. In

the MARTE implementation, all modules are required to understand a

stop command.
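The text above describes the watchdog only at the behavioral level; as a rough sketch (all names here are hypothetical), the mechanism could be structured as follows:

    /* Hypothetical watchdog sketch: the timestamp is refreshed for every
       received request list, including empty heartbeat messages, and a
       monitoring thread issues a platform-wide stop on expiry. */
    #include <time.h>

    void distribute_stop_to_all_modules(void);  /* hypothetical helper */

    static time_t last_requests;
    static int timeout_seconds = 2;   /* settable via watchdog.timeout */

    void watchdog_kick(void)          /* called per received request list */
    {
        last_requests = time(NULL);
    }

    void watchdog_poll(void)          /* called periodically by a thread */
    {
        if (time(NULL) - last_requests > timeout_seconds)
            distribute_stop_to_all_modules();
    }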

When the Module field is not robot, it is considered to be the name of a specific

module. In this case, the Robot Distribution Layer does not care what values are in the

remaining fields. Instead, it must determine how to forward the request to the module

specified in the field. To accomplish this, the Robot Distribution Layer manages a list

of modules using a standardized interface. In the MARTE implementation, each module

capable of attaching to the system must provide functions to initialize, shutdown, and receive

a request destined for that module’s Module Distribution Layer. This collection of modules

is first used to determine if the value in the Module field is recognized by the system.

If the Robot Distribution Layer does not recognize the module name, an error response is


returned. If it is recognized, the layer must next determine if the module is currently attached

to the platform. The current implementation of MARTE does not support hot swapping of

modules during operation. Instead, the initialization interface provided by each module is

used during boot. If the module can be initialized, it is considered attached and available for

requests. When attached, the request is passed to the module’s Module Distribution Layer.

Finally, once all requests in the request message are handled, all generated responses

are aggregated together into a single list of responses to be returned to the Server Layer for

serialization and transmission.
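The standardized interface described above lends itself to a table of function pointers. The following sketch uses assumed names and signatures, reusing the request and response structures sketched in Chapter 4, and is illustrative rather than the actual MARTE code:

    /* Sketch of the standardized module interface managed by the Robot
       Distribution Layer; names, signatures, and the table size are
       assumptions. */
    #define MAX_MODULES 8

    struct module_ops {
        const char *name;                 /* e.g. "base" or "ble" */
        int  (*init)(void);               /* called at boot; failure means
                                             the module is not attached */
        void (*shutdown)(void);
        void (*handle_request)(const struct request *req,
                               struct response *resp);
    };

    static struct module_ops *modules[MAX_MODULES];  /* known modules */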

Module   Device     Function   Parameters   Description
robot    robot      modules    null         Respond, listing all attached modules
robot    robot      stop       null         Request all modules to stop moving
robot    watchdog   timeout    int          Set watchdog timeout value (seconds)
base     *          *          *            Forward request to the Base module
ble      *          *          *            Forward request to the BLE module

Table 5.1: MARTE Robot Distribution Layer Requests

Module   Device   Property   Values               Description
robot    robot    modules    string, string, ...  List of names of attached modules

Table 5.2: MARTE Robot Distribution Layer Module List Response

The remaining layers of the Backend subsystem have module dependent implemen-

tations. For the Base and BLE modules, the software for these layers is implemented as

part of the same program executing the module-independent layers previously described.

This is not a requirement for the MARTE system. Alternatively, a module’s software im-

plementation could execute within a separate program or on an entirely separate computing

platform. The only requirement is that the module provide the standard interface to the

Robot Distribution Layer. This interface can mediate the transfer of data between the

Robot and Module Distribution layers.

The remaining hardware components of the two modules, such as motor controllers

and sensors, are mounted on the modules themselves. To be made accessible to the software


executing on the SBC, these components must interface to the SBC. This is done through

the various I/O and USB ports available on the SBC and could also potentially utilize the

on-board network through the four remaining RJ-45 Ethernet ports on the WRT54G router.

The interconnections between the hardware components and the SBC are shown in Figure

5.18.

Figure 5.18: MARTE Hardware Connections (diagram: the module-independent Phidgets PhidgetSBC2 connects to the Linksys WRT54G over TCP/IP on 802.3 and exposes eight digital outputs, eight digital inputs, eight ADC channels, and six USB ports; on the Base module these serve four LEDs, two push buttons, a power rail voltage monitor, and a USB-to-serial link to two motor controllers driving the four wheel motors; on the Bucket Ladder Excavator module they serve an InterfaceKit with actuator feedback and a load cell bridge, plus a USB-to-serial link to motor controllers driving the two chain motors and the actuator motor)


5.3.2.1 Base

When a request is passed from the Robot Distribution Layer to the Base’s Module

Distribution Layer, all fields are analyzed and matched against the functionality the Base

provides. The Device and Function fields of the request are checked against the known

functionality and, if recognized, the relevant functionality in the Functional Layer is invoked

using the values carried in the Parameters field. The available requests understood by the

Base module are listed in Table 5.3.

These requests provide a range of control abstraction. High-level functionality in-

cludes the drive and stop commands which affect the Base as an integrated module. Al-

ternatively, low-level requests include the vel commands for driving each of the four wheels

independently of the others, or the user_led request that can turn an LED on or off. The get

request allows a client to request the status of all aspects of the Base. When this request

is received, a collection of responses, shown in Table 5.4, is returned to be displayed to the

teleoperator or for use in decision-making by the Autonomy subsystem.

Module   Device     Function   Parameters   Description
base     base       get        null         Respond with full status
base     base       drive      int, float   Drive at a velocity in a given direction
base     base       stop       null         Stop all movement
base     lf_wheel   vel        int          Drive the left front wheel at a velocity
base     lb_wheel   vel        int          Drive the left back wheel at a velocity
base     rf_wheel   vel        int          Drive the right front wheel at a velocity
base     rb_wheel   vel        int          Drive the right back wheel at a velocity
base     user_led   on         int          Set an LED on or off

Table 5.3: MARTE Base Module Distribution Layer Requests

The Functional Layer contains the algorithms and controllers for carrying out the re-

quests defined by the Module Distribution Layer. It is hierarchical in design, with high-level

functions utilizing low-level ones to improve code reuse and maintainability. Figures 5.19

and 5.20 show how base.drive(int, float) and rf_wheel.vel(int) commands propagate

through the relevant components in the Functional Layer. The “Base Drive” function, for


Module   Device     Property   Values   Description
base     lf_wheel   vel        int      The current velocity of the left front wheel
base     lb_wheel   vel        int      The current velocity of the left back wheel
base     rf_wheel   vel        int      The current velocity of the right front wheel
base     rb_wheel   vel        int      The current velocity of the right back wheel
base     l_button   press      int      Is the left docking button depressed
base     r_button   press      int      Is the right docking button depressed
base     battery    volt       float    The voltage of the power rail
base     user_led   on         int      Is the LED on or off

Table 5.4: MARTE Base Module Distribution Layer Get Responses

example, is an implementation of a simple trigonometric-based algorithm that converts veloc-

ity and direction-based commands into individual wheel velocity commands. Each “Wheel”

abstracts a “Motor” interface. These “Motor” components provide a configurable interface

that stores and utilizes, for example, the polarity of a motor. This means the “Wheel” and

high-level components can request a specific direction of rotation regardless of how the motor

is wired to the motor controller.
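The exact mixing formula is not given in the text; one plausible trigonometric mapping, with the direction taken in radians (0 = straight ahead) and the wheel handles and wheel_vel() helper assumed, is sketched below:

    /* Illustrative trigonometric drive mixer; the thesis does not give
       the exact formula, so this mapping is an assumption. */
    #include <math.h>

    struct wheel;                                  /* opaque wheel handle */
    extern struct wheel lf_wheel, lb_wheel, rf_wheel, rb_wheel;
    void wheel_vel(struct wheel *w, int vel);      /* hypothetical */

    void base_drive(int velocity, float direction)
    {
        int left  = (int)(velocity * (cosf(direction) + sinf(direction)));
        int right = (int)(velocity * (cosf(direction) - sinf(direction)));

        wheel_vel(&lf_wheel, left);   /* each wheel hides motor polarity */
        wheel_vel(&lb_wheel, left);
        wheel_vel(&rf_wheel, right);
        wheel_vel(&rb_wheel, right);
    }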

Figure 5.19: MARTE Base Functional Layer: Base Drive (diagram: the Module Distribution Layer invokes the Base Drive function, which commands the LF, LB, RF, and RB Wheel components; each Wheel drives its corresponding Motor component in the Platform Hardware Interface Layer)

A more complex example on the Base module is the voltage monitoring system shown

in Figure 5.21. These functions collaborate to both provide the current system voltage to a

client and to display the voltage level on a row of LEDs. At the lowest level is the “Voltage”


Figure 5.20: MARTE Base Functional Layer: Wheel Drive (diagram: the same component structure as Figure 5.19, with the Module Distribution Layer invoking a single Wheel component directly, which drives its Motor in the Platform Hardware Interface Layer)

component, a wrapper around the underlying ADC interface in the Platform Hardware

Interface Layer. It converts the value on the ADC channel to the corresponding voltage of

the power rail. The “Voltage” component is utilized by a “System Voltage” component. This

is a threaded object that continuously accesses the instantaneous “Voltage” and averages the

current value with a collection of previous values to provide a low-pass filter. This helps

hide high-frequency changes on the power rail due to load spikes. The “System Voltage”

component is queried by the Module Distribution Layer when the client makes a get()

request.
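A minimal sketch of such an averaging filter, with an assumed window size and call pattern (invoked periodically by the component's thread), is:

    /* Sketch of the "System Voltage" averaging filter; the window size
       is an assumption. */
    #define WINDOW 16

    static float samples[WINDOW];
    static int   next;

    float system_voltage_update(float instantaneous)
    {
        samples[next] = instantaneous;  /* newest reading from "Voltage" */
        next = (next + 1) % WINDOW;

        float sum = 0.0f;
        for (int i = 0; i < WINDOW; i++)
            sum += samples[i];
        return sum / WINDOW;            /* smoothed rail voltage */
    }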

Components in the Functional Layer do not always map to a specific request. The

Backend provides three LEDs for indicating the battery voltage. This is handled by the

“Voltage Display” component in Figure 5.21. This component is also threaded, constantly

requesting an updated voltage value from the “System Voltage” component. This voltage

is used to control the three LEDs. Each LED also executes as a separate thread, providing

the functionality to request the LED to blink at a desired rate. Overall, this system turns on

all LEDs when the battery is near full charge, then, one-by-one, blinks and extinguishes the

LEDs as the voltage drops to indicate the battery is depleting.


Figure 5.21: MARTE Base Functional Layer: Voltage System (diagram: the Voltage component wraps the Platform Hardware Interface Layer's ADC; the System Voltage component filters it and serves the Module Distribution Layer; the Voltage Display component drives the High, Middle, and Low LEDs)

The Platform Hardware Interface Layer provides the interfaces necessary to facilitate

communication between the PhidgetSBC2’s CPU and the hardware devices that make up

the Base module’s Platform Hardware Layer. Figure 5.18 already briefly touched on the

devices and peripherals that make up the Base's Platform Hardware Layer.

As mentioned at the beginning of the chapter, the Base module consists of a four-

wheeled drive-train and provides a global power rail for any other attached modules, such

as the BLE. The four wheels are driven by four separate motors, each attached to separate

channels on two dual-channel Sabertooth 2x60 motor controllers [38]. These motor con-

trollers provide multiple different interfaces. The Base is configured to use the Sabertooth’s

TTL-level RS-232 packetized serial interface for addressing and communicating with multiple

boards on a common bus. The PhidgetSBC2 does not contain a serial interface; therefore,

the Base’s Platform Hardware Layer also contains an off-the-shelf USB-to-serial converter.

To monitor the voltage on the power rail, a Phidgets Voltage Sensor [39] converts

the voltage of the power rail down to an analog signal usable by the SBC's analog-to-digital

channel. The Base also contains two push buttons mounted on one side of the platform to

provide the ability to “feel” the wall of the arena, each of which is connected to separate


Figure 5.22: Sabertooth 2X60 Motor Controller [38]

digital input channels on the SBC. Finally, the Base has four LEDs interfaced with the

software system through the digital output pins. Three work as the previously discussed

voltage display. The fourth is a general purpose LED settable by the user_led request.

The Platform Hardware Interface Layer interfaces the Functional Layer with this

hardware. The digital and analog channels on the SBC are accessed through the official

Phidgets21 C library [40]. Interfacing with the motor controllers uses both a custom li-

brary for communicating using the Sabertooth’s packetized serial message format and the

GNU/Linux provided termios interface used to communicate with a serial device. In this

case, the termios library is directed to utilize the FTDI driver that can communicate with

the USB-to-serial device as if it is directly interfacing with the serial line.
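As an illustration, one frame of the packetized serial format could be built and written as follows; the four-byte layout with a 7-bit checksum follows the Sabertooth documentation, but the code is a sketch rather than the MARTE library itself:

    /* Sketch of one Sabertooth packetized-serial frame written through
       the termios-managed serial device; treat the details as
       illustrative. */
    #include <unistd.h>

    void sabertooth_send(int fd, unsigned char addr,
                         unsigned char cmd, unsigned char data)
    {
        unsigned char pkt[4];
        pkt[0] = addr;                          /* board address on the bus */
        pkt[1] = cmd;                           /* e.g. a drive command */
        pkt[2] = data;                          /* command value */
        pkt[3] = (addr + cmd + data) & 0x7F;    /* 7-bit checksum */
        write(fd, pkt, sizeof(pkt));            /* fd: open serial device */
    }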


5.3.2.2 BLE

While the BLE module differs completely from the Base operationally, the implemen-

tation of the four module-dependent layers bears many similarities. The Module Distribution

Layer for the BLE is identical in structure to the Base's, but differs in the request fields it

accepts and the responses it returns (Tables 5.5 and 5.6).

Module   Device   Function   Parameters   Description
ble      ble      get        null         Respond with full status
ble      ble      stop       null         Stop all movement
ble      lift     vel        int          Extend the lifting arm at a velocity
ble      lift     extend     null         Extend the lifting arm at full speed
ble      lift     retract    null         Retract the lifting arm at full speed
ble      lift     pos        float        Set the lifting arm to a position
ble      chain    vel        int          Drive the bucket ladder chain at a velocity

Table 5.5: MARTE BLE Module Distribution Layer Requests

Module   Device   Property   Values   Description
ble      lift     vel        int      The current velocity of the lifting arm
ble      lift     pos        float    The current position of the lifting arm
ble      chain    vel        int      The current velocity of the bucket ladder chain
ble      load     kg         float    The weight in the onboard collection bin

Table 5.6: MARTE BLE Module Distribution Layer Get Responses

The remaining three BLE-dependent layers are shown as a whole in Figure 5.23 start-

ing with the Module Distribution Layer at the top and ending with the Platform Hardware

Layer at the bottom.

The BLE module operates through four peripherals. Two motors work together

through a Sabertooth 2x60 motor controller to operate the bucket ladder’s belt mechanism.

This motor controller is configured identically to the Base’s motor controllers, but exists on

a separate USB-to-serial provided bus. This USB-to-serial device is made available to the

SBC when the BLE is attached. A linear actuator with built-in potentiometer feedback is

used to raise and lower the storage and digging mechanism. It is driven by one channel on

another Sabertooth motor controller. The potentiometer feedback is connected to the ADC


channel of a PhidgetInterfaceKit, the same model of device that is built into the SBC. While

ADC channels on the SBC are available, an additional constraint placed on non-Base

modules requires that they interface to the SBC electronically only through USB or the

Linksys router. Finally, a PhidgetBridge for reading Wheatstone bridge circuits is used with

a load cell mounted on the linear actuator's mounting point to provide an estimated on-board

collected load. The PhidgetInterfaceKit has a built-in USB hub that is used to route the

PhidgetBridge, PhidgetInterfaceKit, and USB-to-serial data to and from the SBC when the

BLE is attached. The Platform Hardware Interface Layer uses the same Sabertooth/termios

API and Phidgets21 library used on the Base to interface to this hardware.

Finally, the Functional Layer provides the basic functionality for the BLE. It is imple-

mented in the same fashion as the Base’s Functional Layer but with the specific algorithms

necessary to handle the requests specified in Table 5.5. Most interesting is the group of com-

ponents handling the linear actuator. The “Thompson Actuator” component is threaded to

provide the capability to request that the actuator be set to a specific position. It implements a basic

on-off controller using the feedback from the potentiometer. A more complex controller is

unnecessary because the manufacturer specifies that the actuator only be driven by a DC

signal, not a rapidly changing voltage such as one provided by a PWM-like signal.
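A sketch of such an on-off controller, with an assumed deadband, drive constants, and actuator_drive() wrapper, is:

    /* Sketch of the on-off ("bang-bang") position controller described
       above; the constants and helper are assumptions. */
    #define DEADBAND      0.02f   /* tolerated position error */
    #define FULL_EXTEND   127
    #define FULL_RETRACT -127

    void actuator_drive(int vel);  /* hypothetical motor-channel wrapper */

    void actuator_control_step(float target, float feedback)
    {
        float error = target - feedback;   /* feedback: potentiometer */

        if (error > DEADBAND)
            actuator_drive(FULL_EXTEND);   /* constant DC drive, no PWM */
        else if (error < -DEADBAND)
            actuator_drive(FULL_RETRACT);
        else
            actuator_drive(0);             /* within tolerance: stop */
    }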


Figure 5.23: MARTE BLE Functional/Hardware Layers (diagram: the Functional Layer's Bucket Ladder and Thompson Actuator components, with their Motor, Feedback, and Load Cell sub-components, sit atop Sabertooth bus/channel, ADC, bridge, and InterfaceKit interfaces in the Platform Hardware Interface Layer, which use the termios API, FTDI driver, and Phidget21 library to reach the USB controller, USB-to-serial device, Sabertooth controllers, chain motors, actuator motor with potentiometer, PhidgetInterfaceKit, PhidgetBridge, and load cell in the Platform Hardware Layer)


5.3.3 Frontend

The Frontend subsystem is designed to facilitate communication between the human

operators and the Backend on the remote MARTE platform. As a whole, the Frontend is

designed to be usable on a wide variety of general computing systems. This section covers

the implementation of the seven layers of the Frontend subsystem for use with MARTE.

To manage the local side of the communication link with the Backend subsystem, the

Frontend Communication Hardware Layer uses an identical model Linksys WRT54G router

as used on the Backend Communication Hardware Layer. As previously described, these two

routers are configured through the Tomato firmware to operate as a wireless bridge between

the remote IP subnet and local IP subnet. Communication between the operator computer

executing the Frontend software and the local router is achieved through one of the five

RJ-45 Ethernet ports on the router. Generally this is connected to a similar RJ-45 port on

a network interface card, but the specific interface is left to the administrator.

The Frontend software developed for MARTE is a C# application heavily utilizing

the Microsoft .NET Framework. Interfacing this software with the Communication Hardware

Layer occurs through the System.Net.Sockets API in the .NET Framework class library.

This API utilizes the operating system networking drivers installed. The installation of

these drivers will likely be provided by the operating system or the networking interface

card manufacturer, but is ultimately up to the computer administrator to ensure the .NET

Framework installation can successfully communicate with the IP network.

At the lowest level, the Client Layer is implemented as an instance of the System.Net.

Sockets.TcpClient class in the .NET Framework. This class handles the interaction with

the System.Net.Sockets API to successfully communicate using the TCP/IP protocols as

defined at the beginning of the chapter. Much like the Backend’s Server Layer, the Frontend’s

Client Layer contains a custom C# implementation of the MCP application protocol so


that it can successfully serialize and send request messages and receive and parse response

messages.

The Functional Layer provides the functionality of the Frontend and determines what

the operator can do with the platform. The commands available to the operator are depen-

dent on the modules attached to the platform. The modules attached are requested by

sending a robot:robot.modules=() request to the Backend through the Client Layer. If

the module is returned as attached, the Frontend creates a class instance for that particular

module, containing all the functionality that module provides. The Functional Layer also

provides a thread for regularly querying the status of all attached modules using the relevant

get=() requests. The results are stored as status values within the individual module classes

for use by the Mapping Layer. This regular status request also serves as the heartbeat for

keeping the safety watchdog timer on the Backend from timing out and halting the platform.

Figure 5.24 shows how the status update timer invokes the get request for the active

modules (in this case both the Base and BLE are shown as attached). These, in turn, invoke

requests through the Client Layer. The returned responses are then used by the specific

module class to populate the current known status of that module. Figure 5.25 shows the

Mapping Layer using the known status to be mapped to the display while also allowing the

Mapping Layer to invoke the other supported requests provided by each module.

Before examining the Mapping Layer, it is beneficial to look at the Operator Hardware

layers. To interact with the operator, the Frontend needs both input devices to receive user

input and output devices to provide feedback. User input is detected using three potential

peripherals. The first two are standard devices available on most personal computers: a

keyboard and mouse/touchpad. The third, a Microsoft Xbox 360 controller (Figure 5.26),

provides two analog joysticks, two pressure-sensitive triggers, a directional pad, and a variety

of buttons for a more familiar driving interface.

Connection of these three peripherals to the computer varies from system to system.

For desktop computers, the keyboard and mouse generally connect via PS/2 or USB. The


Figure 5.24: MARTE Frontend Functional Layer: Status Update (diagram: a Status Update Timer invokes get requests on the attached module classes (Base, BLE, ...), which issue requests through the Client Layer and store the returned status; the Mapping Layer issues commands through the same module classes)

Figure 5.25: MARTE Frontend Functional Layer: From Mapping (diagram: the same structure as Figure 5.24, emphasizing the Mapping Layer reading the stored status from the attached module classes and invoking their supported commands through the Client Layer)

Xbox 360 controller generally uses a USB connection. Display of data to the operator is

generally handled through an external or built-in monitor on the computer driven by a

graphics card.

The low-level interfacing of this hardware to the operating system is also left up to

the owner and administrator of the computer executing the Frontend software. Mouse and

keyboard drivers are generally provided by the OS and graphics drivers are provided by

the manufacturer or OS. The rest of the Frontend interfaces with these peripherals through


Figure 5.26: Microsoft Xbox 360 Controller [41]

the System.Windows.Forms .NET library. Communication with the Xbox 360 controller is

handled by the Microsoft XNA Framework for the Microsoft .NET Framework.

The Mapping Layer makes heavy use of the System.Windows.Forms library for han-

dling how the status of the system is displayed and for using and interpreting keyboard

and mouse input from the operator. The display of the Frontend is shown as a Windows

Form GUI. The GUI consists of a main window, or form, that serves as the foundation

for the operator’s display. It contains information that is always displayed, independent of

the modules attached. For example, it has a tree structure containing the list of attached

modules and the devices within each module. It also contains a button for displaying a con-

nection debugging popup window that determines whether Xbox controllers are attached,

if the routers are reachable, and if connection to the Backend is established. Each module

also contains a form specific for displaying the status of that module. Each module form

is selectable through the active module/device tree structure previously mentioned on the

general window. These module specific forms determine how the status information queried

from the Functional Layer is displayed.

The robot can be controlled by the operator using either the Xbox 360 controller or

using mouse and keyboard operations within the GUI. A case of the latter is the Base module


form, which provides text boxes and buttons for manually setting the velocity of each wheel.

The Mapping Layer maps those inputs to the Functional Layer, which will eventually result

in a base:XX_wheel.vel=(int) request. These types of inputs are useful for the operator

to debug or test specific components of the platform through the Frontend.

The primary means of operating the robot is the Xbox 360 controller. The Mapping

Layer contains a method for each module that maps the joystick and button presses to the

specific methods in the Functional Layer. For example, the Base module class maps one

of the joysticks to the function for the base:base.drive=(int, float) command. The BLE

maps a trigger to the velocity of the bucket ladder and a button to set the linear actuator

to a pre-set raised position used while traversing the arena.

5.3.4 Autonomy

The Autonomy subsystem is designed to replace the teleoperator during autonomous

operation. The basic strategy of the Autonomy subsystem’s operation is divided into several

distinct states. First, the system needs to determine its starting position and orientation

(pose) within the starting region since the robot could begin in one of two general locations

facing one of four general directions, further compounded by smaller variations in the exact

pose introduced by the manual placement of the robot in the arena. Once the robot has

determined its initial pose, it needs to determine how to reach the digging region. To do

this, it travels to a central location in front of the obstacle region and scans the arena for

rocks and craters. These obstacles are marked on a map and a path is planned through

the obstacle region to the digging region. The robot then traverses the path and begins

excavation in the digging region. Once excavation has completed, it plans and follows a new

path back to the center of the starting region. To complete the cycle, the robot centers itself

with the bin, docks, and deposits the collected regolith. Finally, the robot decouples from

the bin, re-plans a path to a new location in the digging region, and repeats the general

excavation and delivery process until time expires.


Various sensors and hardware components make up the Autonomy subsystem hard-

ware to implement the capabilities necessary to follow this general approach. To scan the

environment for obstacles and known features, a Hokuyo UTM-30LX-EW (Figure 5.27) LI-

DAR sensor is used. It can collect 1,080 range measurements across a 270° horizontal sweep

up to 30 meters within a single scan. It is accessible through a 100BASE Ethernet inter-

face. The LIDAR is mounted atop a SPT200 pan and tilt platform actuated by two Hitec

HS-5645MG digital servos (Figures 5.28 and 5.29). A Phidgets PhidgetSpatial sensor board

is used to provide a 3-axis gyroscope and 3-axis accelerometer (Figure 5.30).

Figure 5.27: Hokuyo UTM-30LX-EW [42]

Figure 5.28: SPT200 Pan & Tilt System [43]

All sensors involved in the implementation of the autonomy strategy are mounted


Figure 5.29: Hitec HS-5645MG Servo [44]

Figure 5.30: Phidgets PhidgetSpatial 3/3/3 Sensor [45]

on-board the robot platform instead of mounting sensors or beacons in the environment.

The Autonomy subsystem in RAMP is completely removable, meaning these sensors are not

necessary for the platform to operate and can be removed during teleoperated control. The

software for the Autonomy subsystem executes on a separate computing system from that

of the Backend's PhidgetSBC2 so that it can be removed with the sensors during planned

teleoperation-only operation. A VIA EPIA-P900 Pico-ITX form factor SBC (Figure 5.31)

acts as the computing platform for the Autonomy subsystem. It contains a 1.0 GHz VIA

Eden X2 dual-core x86-compatible 64-bit processor and is installed with 4 GB of RAM and


a solid-state hard drive. An extension board provides two USB ports and an RJ-45 Ethernet

port.

Figure 5.31: VIA EPIA-P900 [46]

To query the accelerometer and gyroscope data, the PhidgetSpatial board is connected

to the VIA SBC using USB. To control the SPT200 pan and tilt platform actuated by the two

Hitec servos and carrying the Hokuyo LIDAR, an eight channel PhidgetAdvancedServo servo

controller board [47] is used and interfaced to the VIA SBC also using USB. Communication

between the VIA EPIA SBC and Hokuyo LIDAR is facilitated by two ports on a simple

eight-port IEEE 802.3 Ethernet switch, connected to the LIDAR and SBC's RJ-45 ports.

The Autonomy subsystem also has a Communication Hardware Layer for controlling

and querying the Backend to ultimately drive the platform. This is done by connecting

one port of the Autonomy subsystem Ethernet switch to an open port on the remote/on-

board Linksys WRT54G router. An overview of the hardware connections for the Autonomy

subsystem is shown in Figure 5.32.


Figure 5.32: MARTE Autonomy Hardware (diagram: the VIA EPIA-P900 connects over USB to the PhidgetSpatial IMU and to a servo controller driving the two pan/tilt servos, and over an Ethernet switch (TCP/IP on 802.3) to the LIDAR and to the Linksys WRT54G, which links to the Backend's PhidgetSBC2 over TCP/IP on 802.3)

The VIA EPIA SBC for the Autonomy subsystem is loaded with version 12.04 of the

Ubuntu GNU/Linux operating system. Interfacing the software with the hardware in the

Autonomy Hardware Interface and Communication Hardware Interface layers is comparable

to the Hardware Interface Layer of the Backend subsystem. Communication through USB

and Ethernet are handled at the low-level by the operating system kernel and provided

drivers. Interfacing to the PhidgetSpatial IMU and PhidgetAdvancedServo controller is

provided by the same Phidget21 API. Network traffic is transmitted and received through

the POSIX socket API. Communication with the Hokuyo LIDAR is done using the Scanning

Range Sensor Communication Interface Protocol (SCIP) with TCP/IP over Ethernet [48].

The URG Helper C library [49] is used as the implementation of this protocol.

Communication with the Backend is handled by the Client Layer, much in the same


way as the Frontend subsystem’s Client Layer. But, whereas the Frontend uses the .NET

System.Net.Sockets API, the Autonomy system was developed in C++ and uses the

POSIX sockets API. A C++ implementation of MCP is provided for use by the Client

Layer to communicate with the Backend.

The Functional Layer of the Autonomy subsystem provides all the basic functionality

available to the higher, decision-making layers of the subsystem. Part of this functionality

relates directly to communication with the Backend, to control and access the status of

the platform. The components for these related functions are simply layers of abstraction

wrapped around direct use of the Client Layer, hiding the distributed nature of such op-

erations from other components of the Functional Layer and the Executive Layer. These

include operations such as telling the platform to drive, operating the digging mechanism,

and reading the weight of the load. The availability of some of these functions is dependent

on the modules attached. For example, the BLE module provides a raise and lower function

for actuating the digging mechanism and bucket. The “Excavate” Autonomy functionality

makes use of this ability along with the ability to drive and measure weight. If a different

excavation module were implemented and attached, the “Excavate” functionality would still

be available to higher functions and the Executive Layer, but module-specific low-level

functionality, likely differing from the BLE's raise and lower mechanism, would need to be

implemented in the Functional Layer.

The Functional Layer also has functionality directly related to the Autonomy Hard-

ware layers. An overview is shown in Figure 5.33. At the lowest level are components for

basic functionality of the sensors and actuators. The URG Helper library and a collection of

custom C++ classes provide the functionality for collecting a 2D scan from the Hokuyo LI-

DAR. A custom component was also written for the servos and pan/tilt platform to request

specific degrees of pan and tilt. A threaded attitude and heading reference system (AHRS)

component was written to provide an overall pitch and roll of the platform versus gravity


using periodic updates from the accelerometer and gyroscopes. The AHRS also uses this

data to provide an overall yaw of the platform from a reference orientation.
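The thesis does not specify the AHRS algorithm; a complementary filter is one common approach, sketched here for the pitch axis with an assumed blend factor and axis convention:

    /* One common way to fuse gyro and accelerometer data into a pitch
       estimate; the blend factor and conventions are assumptions. */
    #include <math.h>

    #define ALPHA 0.98f   /* weight on the integrated gyro rate */

    float ahrs_pitch_update(float pitch, float gyro_rate,
                            float ax, float ay, float az, float dt)
    {
        /* long-term pitch reference from the gravity vector */
        float accel_pitch = atan2f(-ax, sqrtf(ay * ay + az * az));

        /* short-term: integrate the gyro; blend to cancel drift */
        return ALPHA * (pitch + gyro_rate * dt)
             + (1.0f - ALPHA) * accel_pitch;
    }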

Figure 5.33: MARTE Autonomy Functional Layer: Autonomy Hardware (diagram: the Localizer and 3D Scanner components build on the AHRS, Pan/Tilt, Gimbal, and LIDAR components, which reach the pan and tilt servos, gyroscopes, and accelerometers through the URG Helper and Phidget21 libraries in the Autonomy Hardware Interface Layer)

These functions are then used to provide more advanced functionality. A gimbal

component utilizes the pan/tilt platform and the AHRS to make the pan/tilt keep the

LIDAR at a constant orientation versus gravity, even as the platform moves across the

uneven terrain. A localizer component utilizes the LIDAR, AHRS and gimbal, along with the

knowledge of the arena walls as features, to track the global location and orientation

of the platform. The localizer uses the gimbal to keep the LIDAR level with the horizon

and, thus, with the walls of the arena. The localizer then uses the LIDAR’s ability to collect

2D scan data to get distance measurements to the walls. Straight lines are extracted from

the distance measurements and are matched to the map of the arena to find potential poses

for the platform. Because the arena is symmetrical, there will always be at least two, and

up to four, potential poses from the scan alone. During initialization of the system during a


competition run, the Autonomy subsystem knows that the platform is in the starting region

of the arena and in an approximate cardinal direction, thus breaking the symmetry. At all

later times, the localizer uses the yaw estimated by the AHRS to break the symmetry and

feeds the yaw component of the estimated pose result back into the AHRS to zero out the

accumulating error. Another function is a threaded component for carrying out a 3D scan

using the LIDAR and the gimbal. When queried, the 3D scanner repeatedly increases the

pitch on the gimbal and takes a 2D scan at each pre-defined angle. This results in a collection

of 2D scans that are then merged into a 3D map of the environment.
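A minimal sketch of this 3D scan procedure is given below, assuming hypothetical Gimbal and Lidar stand-ins rather than the actual MARTE classes: each pre-defined pitch angle tilts the scan plane, and each 2D scan is rotated by that pitch before being merged into one point cloud.

#include <cmath>
#include <utility>
#include <vector>

struct Point3 { double x, y, z; };

// Illustrative stand-ins for the gimbal and LIDAR components.
class Gimbal {
public:
    void setPitch(double radians) { /* command the tilt servo */ }
};

class Lidar {
public:
    // One 2D scan as (bearing, range) pairs in the sensor plane.
    std::vector<std::pair<double, double>> scan2d() { return {}; }
};

// Take a 2D scan at each pre-defined gimbal pitch and merge the scans into
// a single 3D point cloud by rotating each scan plane by its pitch.
std::vector<Point3> scan3d(Gimbal& gimbal, Lidar& lidar,
                           const std::vector<double>& pitchAngles) {
    std::vector<Point3> cloud;
    for (double pitch : pitchAngles) {
        gimbal.setPitch(pitch);  // step the scan plane to the next angle
        for (const auto& [bearing, range] : lidar.scan2d()) {
            // Point in the sensor plane, then rotate that plane by the
            // gimbal pitch to place it in the platform frame.
            double px = range * std::cos(bearing);
            double py = range * std::sin(bearing);
            cloud.push_back({px * std::cos(pitch), py, px * std::sin(pitch)});
        }
    }
    return cloud;
}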

Finally, some Functional Layer functions make use of both the Backend and the

Autonomy hardware. Some examples are shown in Figure 5.34. One is the ability to go to

a point in the arena. This utilizes the drive functionality of the platform along with

the localizer component to keep the platform driving toward the goal and to know when

the platform has arrived. A path following component takes in a list of desired positions

and utilizes the previous functionality to follow a specific path. The previously mentioned

excavate functionality also falls under this category.
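A minimal sketch of this composition, assuming a hypothetical goToPoint primitive built on the drive and localizer components:

#include <vector>

struct Point2 { double x, y; };

// Assumed go-to primitive: drives toward the goal using localizer feedback
// and returns true once the platform arrives (stubbed here so the sketch
// stands alone).
bool goToPoint(const Point2& goal) { return true; }

// Visit each desired position in order; stop early if a leg fails,
// for example because localization is lost.
bool followPath(const std::vector<Point2>& path) {
    for (const Point2& waypoint : path) {
        if (!goToPoint(waypoint)) return false;
    }
    return true;
}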

Figure 5.34: MARTE Autonomy Functional Layer: All Hardware (diagram: Go To, Follow Path, and Excavate in the Functional Layer, composed from lower-level functions such as Localize, Drive, Get Weight, and Raise Digger, sitting between the Executive Layer above and the Client Layer and Autonomy Hardware Interface Layer below)


The Executive and Planning layers make all the high-level decisions on how to utilize

the Functional Layer to complete the mission. In MARTE's case, there is only one mission: completing NASA RMC competition runs. The beginning of the Autonomy subsystem

section already briefly covered the general strategy of the Autonomy system. This strategy

was implemented as a basic state machine (Figure 5.35). In this case, the state machine as

a whole is part of the Planning Layer, and the implementations of the individual states are

contained within the Executive Layer. The overall state machine decides what tasks (states)

should be carried out, and in what order, to complete a run, while each state determines how

to utilize the components of the Functional Layer.

This design is heavily driven by the Executive Layer. Aside from state transitions,

all operation of the Autonomy system is controlled by the states in the Executive Layer.

Other aspects of the Planning Layer, such as a path planning algorithm based on A* and

an obstacle detection algorithm based on RANSAC, do not drive the system but instead are

invoked by the Executive Layer when needed.
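The sketch below illustrates this division of labor, with the run expressed as an ordered sequence of Executive Layer states; the State interface, class names, and stubbed bodies are assumptions rather than the actual implementation.

#include <memory>
#include <vector>

// Each Executive Layer state decides how to use the Functional Layer;
// execute() returns true when its task completes successfully.
class State {
public:
    virtual ~State() = default;
    virtual bool execute() = 0;
};

// Two of the states from Figure 5.35, stubbed for illustration.
class InitialLocalization : public State {
public:
    bool execute() override { /* localize using LIDAR and AHRS */ return true; }
};

class ScanForObstacles : public State {
public:
    bool execute() override { /* run the 3D scanner */ return true; }
};

// The Planning Layer decides what tasks run and in what order; here a run
// is simply an ordered sequence of states.
bool runMission(std::vector<std::unique_ptr<State>>& states) {
    for (auto& state : states) {
        if (!state->execute()) return false;  // e.g., hand control to operators
    }
    return true;
}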


Figure 5.35: MARTE Autonomy State Machine (states in order: Initial Localization, Prepare for Scan, Scan for Obstacles, Detect Obstacles, Plan Path, Cross Obstacle Area, Excavate, Plan Path, Cross Obstacle Area, Dock, Deposit, Decouple)


5.4 Results

RAMP was successfully implemented on the MARTE platform for the 2014 NASA

RMC. The final MARTE platform using the Base and BLE modules and the Autonomy

subsystem is shown in Figure 5.36. The computational hardware for each subsystem and

module was physically mounted and enclosed in separate boxes, while external sensors were

attached directly to their respective module or, in the case of Autonomy, free to be attached

to the platform depending on the modules in use.

Figure 5.36: MARTE Final Implementation

The modularity of the architecture was found to be beneficial, not just from an appli-

cation standpoint, but also during the development cycle of the computational system. Design

and development of the Base module, BLE module, and computation system all took place

concurrently, meaning the finalized Base and BLE modules were not available for use in early

software testing, in particular Autonomy testing. Instead of designing and using simulation


for Autonomy testing, the previous year’s drive train and hardware were used as a test

platform while development of the MARTE Base module was still in progress. An imple-

mentation of the Backend subsystem was used on the test platform. Its Functional Layer contained implementation details specific to the testing platform, but the testing Backend provided the same requests and responses that would be

used on MARTE. Through this, the Autonomy subsystem could talk to the testing Backend

and control the testing platform using the same interface as the finalized MARTE Backend

would provide (Figure 5.37). Once the MARTE Base and Backend were completed, Auton-

omy testing was shifted from the testing platform to the MARTE Base with few integration

issues due to the identical Backend interface.

Figure 5.37: Autonomy Testing on the Test Base Module

Similarly, the ability to remove the Autonomy subsystem was also beneficial. The

autonomous hardware and sensors can be completely removed from the MARTE platform

during bench-top testing without making the system inoperable. This allowed Autonomy

testing to occur concurrently with operator training using the Frontend and MARTE.

The results of the six competition runs at the NASA RMC are shown in Table 5.7.

In every run in the competition arena, the platform collected more than the regolith necessary to qualify, even in the two cases in which control had to be swapped


from autonomous to teleoperated mid-run. In addition, MARTE stood out as the only

competition robot able to successfully detect and avoid obstacles during a fully autonomous

run.

Figure 5.38 shows the 2D LIDAR scan collected for localization just prior to scanning

for obstacles during the fifth run. The data points clearly show the walls of the arena and

the small cloud of distance measurements that captured the BLE frame and dust. Figure

5.39 shows the two best-match poses for the previous scan (in the LIDAR sensor frame of

reference). One is the correct pose while the other is a mirror image due to the arena's symmetry.

The incorrect pose was discarded using additional data provided by the AHRS. Figure 5.40 shows

the result of obstacle detection and path planning following the 3D LIDAR scan, wherein

possible rocks, walls, and craters are marked. In this particular example, the rock and crater

in the “upper” half of the arena were accurately detected and mapped but the “lower” half

detected too much of the terrain as craters: false positives. The small obstacle detected beside the robot is caused by dirt piling up as the robot turns in place. It is not

taken into account by path planning. Figure 5.41 shows the robot midway through execu-

tion of the path previously depicted, neatly driving between the rock and crater detected.

Finally, Figures 5.42 and 5.43 show MARTE during the collection and depositing processes,

respectively, during the third run.


Run #1 | May 19, 2014 | Qualified: Yes | Control: Teleoperation/Autonomy
The run began under autonomous control. The system successfully initialized, scanned for and detected obstacles, and began crossing the obstacle region while avoiding obstacles. Once the platform reached the digging area, the accumulating error of the AHRS led to sufficient drift in the pitch and roll of the LIDAR's gimbal to lose the necessary range measurements to the arena walls, thus losing localization. Control was swapped to teleoperation and the run was successfully completed by the operators. Additional status monitoring and error handling were added after the run to handle similar situations in future runs.

Run #2 | May 20, 2014 | Qualified: Yes | Control: Teleoperation
The entire run was completed successfully under teleoperated control.

Run #3 | May 20, 2014 | Qualified: Yes | Control: Autonomy
The entire run was completed successfully under autonomous control.

Run #4 | May 21, 2014 | Qualified: Yes | Control: Teleoperation/Autonomy
The run successfully reached the digging region autonomously before failing to localize and recover. Control was transferred to the teleoperators and the run was successfully completed. Post-run analysis revealed that the IMU and its vibration-dampening mount had shifted during operation, leading to an invalid gravity vector and, therefore, an invalid gimbal set-point, causing a permanent tilt and roll such that the LIDAR could not detect the arena's walls for localization. The IMU mount was further secured following the run.

Run #5 | May 23, 2014 | Qualified: Yes | Control: Autonomy
The entire run was completed successfully under autonomous control.

Run #6 | May 23, 2014 | Qualified: Yes | Control: Autonomy
The entire run was completed successfully under autonomous control.

Table 5.7: NASA RMC Results


Figure 5.38: Run #5: Localization Scan

Figure 5.39: Run #5: Localization Result


Figure 5.40: Run #5: Path Planning


Figure 5.41: Run #5: Autonomously Crossing


Figure 5.42: Run #3: Autonomously Digging


Figure 5.43: Run #3: Autonomously Unloading


CHAPTER 6

CONCLUSIONS

This thesis presents RAMP, a robotic architecture for a modular plat-

form that supports either teleoperated or autonomous control. RAMP is closely related

to three-tiered hybrid autonomous architectures but places special emphasis on the neces-

sary design requirements and implementation details needed to support a modular robotic

platform and the ability to remove the autonomous components from the system while pre-

serving teleoperation capabilities. RAMP consists of three distinct subsystems: a Backend

for managing the robotic platform, a Frontend for interacting with the human operator dur-

ing teleoperation, and Autonomy for autonomously operating the system. This division of

subsystems allows the Autonomy components to be completely removed from the overall

system without affecting the operator’s ability to control the platform through the Backend.

The subsystems are further broken down into the layers necessary for supporting a platform

of removable and interchangeable modules. The architecture was successfully implemented

on the MARTE platform as part of the 2014 NASA RMC. The modularity of the architec-

ture was found to be very useful during the development process. Additionally, the ability

to swap between teleoperation and autonomous control proved essential. Finally, decoupling teleoperation from autonomy allows MARTE to operate without the Autonomy hardware, reducing weight when autonomous operation is not needed.

Various future modifications could be made to improve RAMP. One beneficial addi-

tion would be the inclusion of a new subsystem and layers to support interaction between a

remote operator and the Autonomy subsystem. This could allow a remote operator to receive

and monitor autonomy-related status. This display subsystem would provide substantial


improvements for operators unfamiliar with the implementation details of the Autonomy

subsystem, allowing them to make better decisions about when to intervene and take

over control of the robot’s operation.

Another improvement would be a standardized interface for communication between

modules within the Backend. The current implementation does not define a specific means

for allowing one module to query the status of another. Such an interface would be beneficial in situations

where, for example, one module’s movement may be restricted by the position of another

module to avoid collisions between onboard hardware.

Another possible extension to RAMP would be the addition of support for multi-

robot applications. In such a system, RAMP might benefit from a new layer or subsystem that communicates with and supervises many autonomous, modular platforms, each platform running the extended version of RAMP mentioned previously that accepts mission-level commands. Alternatively, a supervisor subsystem could directly control the individual robots through the same Backend interface that the Frontend

and Autonomy subsystems use. There would also be the question of how to handle the ability

to remove the autonomy module from platforms in such a system. A heterogeneous system in which some robots have an autonomy module and others do not would complicate the design.

Finally, an excavation rover was presented in this thesis, but other modular platforms are also supported. Further analysis and improvement of RAMP would include new case studies on

additional platforms designed for other applications and environments.
