
Transcript of Robot Building Lab: Localization and Mapping

Robot Building Lab: Localization and Mapping

Collision Detection and Obstacle Avoidance Using Sonar

Have a continuously running task that updates the obstacle state:
– left_close, left_far, right_close, right_far, front_close, front_far, clear

Use close readings as if they were bumper hits and react to collision (bump_left, bump_right, bump_front)

Use far readings as if they were IR proximity detections and avoid collision (obstacle_left, obstacle_right, obstacle_front)

Run 5 parallel tasks (see the sketch after this list):
1. Light sensor state update
2. Sonar state update
3. Light following
4. Collision reaction
5. Obstacle avoidance

Arbitrate within a behavior/priority-based control scheme
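A minimal sketch of the sonar state-update task, assuming a hypothetical read_sonar_cm() call and made-up close/far thresholds (not the course library's actual API). On the Handy Board this loop body would run inside one of the five parallel processes; the collision-reaction and obstacle-avoidance behaviors then read obstacle_state much as they would read a bumper or IR flag.

/* Sketch of the sonar-state task. read_sonar_cm() and the thresholds are
 * placeholders, not the course library API. */
#include <stdio.h>

enum { CLEAR, LEFT_CLOSE, LEFT_FAR, RIGHT_CLOSE, RIGHT_FAR, FRONT_CLOSE, FRONT_FAR };

int obstacle_state = CLEAR;           /* shared with the avoidance behaviors */

#define CLOSE_CM 15                   /* "treat like a bumper hit" range     */
#define FAR_CM   60                   /* "treat like IR proximity" range     */

/* Hypothetical hardware stub: range in cm for 0=left, 1=front, 2=right. */
int read_sonar_cm(int direction) { (void)direction; return 100; }

void update_sonar_state(void)
{
    int left  = read_sonar_cm(0);
    int front = read_sonar_cm(1);
    int right = read_sonar_cm(2);

    if      (front < CLOSE_CM) obstacle_state = FRONT_CLOSE;
    else if (left  < CLOSE_CM) obstacle_state = LEFT_CLOSE;
    else if (right < CLOSE_CM) obstacle_state = RIGHT_CLOSE;
    else if (front < FAR_CM)   obstacle_state = FRONT_FAR;
    else if (left  < FAR_CM)   obstacle_state = LEFT_FAR;
    else if (right < FAR_CM)   obstacle_state = RIGHT_FAR;
    else                       obstacle_state = CLEAR;
}

int main(void)
{
    /* On the robot this would loop inside its own parallel process;
     * here we just call it once and print the result. */
    update_sonar_state();
    printf("obstacle_state = %d\n", obstacle_state);
    return 0;
}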

Robot Building Lab: Localization and Mapping

Inverse Kinematics Demo

Demonstrate the following pattern using inverse kinematics, closed-loop encoder monitoring, and proportional control:
1. Square: 36 inches forward, 90 degree turn, 36 inches forward, 90 degree turn, 36 inches forward, 90 degree turn, 36 inches forward, 90 degree turn

Given:
– Initial x, y, theta
– Target x', y', theta or x, y, theta' or x', y', theta'

Compute (using simple inverse kinematics):
– Target right encoder counts
– Target left encoder counts

Loop (while targets not reached):
– Compute the current ratio and set state = below_target_ratio, above_target_ratio, or at_target_ratio (note: the target ratio is 1:1 if you are always going straight)
– Adjust motor velocities to stay in the at_target_ratio state (with proportional control); see the sketch below
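A minimal sketch of that loop, with simulated encoder/motor stubs so it runs standalone; the gain, nominal power, and function names are made up, not the lab's actual routines.

/* Proportional control on the encoder ratio. For straight motion the target
 * ratio is 1:1, so the "ratio error" reduces to the count difference. */
#include <stdio.h>

/* Crude simulation stubs so the sketch is self-contained; on the robot these
 * would be the real encoder/motor calls (names here are hypothetical). */
static int enc[2];
int  read_encoder(int side)         { return enc[side]; }
void set_motor(int side, int power) { if (power > 0) enc[side] += power / 10; }

void drive_to_targets(int target_left, int target_right)
{
    int base = 50, kp = 2;                       /* nominal power, proportional gain */
    while (read_encoder(0) < target_left || read_encoder(1) < target_right) {
        int error = read_encoder(0) - read_encoder(1); /* >0: left wheel is ahead    */
        set_motor(0, base - kp * error);         /* slow the leading wheel ...       */
        set_motor(1, base + kp * error);         /* ... and speed up the lagging one */
    }
    set_motor(0, 0);
    set_motor(1, 0);
}

int main(void)
{
    drive_to_targets(1000, 1000);                /* 1:1 target ratio: going straight */
    printf("left=%d right=%d\n", read_encoder(0), read_encoder(1));
    return 0;
}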

Robot Building Lab: Localization and Mapping

Lab 8

Localization and Mapping
http://plan.mcs.drexel.edu/courses/robotlab/labs/lab08.pdf

Robot Building Lab: Localization and Mapping

Find Goal, Build Map

Record initial position
Head toward the goal light, avoiding and reacting to obstacles
Take sonar sweeps and odometry readings and build the map as you move
Fix odometry errors with landmarks (using ground sensors)
Output the map to the host; view graphically

Robot Building Lab: Localization and Mapping

Today

Localization re-visited
Simple uncertainty correction with landmarks
Local sonar maps
Sensor fusion and global sonar maps

In Two Weeks: Off-line processing and path planning using maps

Robot Building Lab: Localization and Mapping

Review: Dead Reckoning (deductive reckoning)

Integration of incremental motion over time

Given known start position/orientation (pose)

Given relationship between motor commands and robot displacement (linear and rotational)

Compute current robot pose with simple geometric equations

Provides good short-term relative position accuracy

Accumulation of errors in the long term – wheel slippage, bumps, etc.

(From Borenstein et al.)

Robot Building Lab: Localization and Mapping

Review: Forward Kinematics

Given: starting pose (x, y, theta) and the motor commands
Compute: ending pose (x', y', theta')
Assume encoders are mounted on the drive motors

Let
– Cm = encoder count to linear displacement conversion factor
– Dn = wheel diameter
– Ce = encoder pulses per revolution
– N = gear ratio

Cm = pi * Dn / (N * Ce)

Incremental travel distance for the left wheel = Cm * NL (NL = encoder counts on the left wheel)
Incremental travel distance for the right wheel = Cm * NR (NR = encoder counts on the right wheel)

That's all we need for determining horizontal displacement and rotation from encoder counts
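Plugging those definitions into code, with made-up example values for the wheel and encoder constants:

/* Encoder-to-displacement conversion; the numeric wheel/gear values are
 * example values, not the lab robot's specs. */
#include <stdio.h>

#define PI 3.14159265358979

int main(void)
{
    double Dn = 3.0;     /* wheel diameter, inches (example value)        */
    double Ce = 16.0;    /* encoder pulses per wheel revolution (example) */
    double N  = 1.0;     /* gear ratio between encoder and wheel (example)*/

    double Cm = PI * Dn / (N * Ce);   /* inches of travel per encoder count */

    long NL = 120, NR = 118;          /* example encoder counts this interval */
    double dL = Cm * (double)NL;      /* incremental left-wheel travel  */
    double dR = Cm * (double)NR;      /* incremental right-wheel travel */

    printf("Cm = %.4f in/count, dL = %.2f in, dR = %.2f in\n", Cm, dL, dR);
    return 0;
}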

Robot Building Lab: Localization and Mapping

Review: Differential Drive Odometry/Kinematics

DL = distance traveled by left wheel

DR = distance traveled by right wheel

Distance traveled by center point of robot is then D = (DR+DL)/2

Change in orientation Dtheta is then (DR – DL)/base

New orientation is then theta’=theta + Dtheta

If the old robot position is (x, y), the new robot position is:
x' = x + D cos(theta')
y' = y + D sin(theta')
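Those equations drop straight into a small update function; the example wheel travels and wheel base below are arbitrary.

/* Differential-drive odometry update as written on the slide (D, Dtheta,
 * theta', x', y'); "base" is the wheel separation. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, theta; } Pose;

Pose update_pose(Pose p, double dL, double dR, double base)
{
    double D      = (dR + dL) / 2.0;      /* distance traveled by the center point */
    double dtheta = (dR - dL) / base;     /* change in orientation                 */
    p.theta += dtheta;                    /* theta' = theta + Dtheta               */
    p.x     += D * cos(p.theta);          /* x' = x + D cos(theta')                */
    p.y     += D * sin(p.theta);          /* y' = y + D sin(theta')                */
    return p;
}

int main(void)
{
    Pose p = {0.0, 0.0, 0.0};
    p = update_pose(p, 1.00, 1.10, 8.0);  /* example wheel travels and base (in)  */
    printf("x=%.3f y=%.3f theta=%.4f\n", p.x, p.y, p.theta);
    return 0;
}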

Robot Building Lab: Localization and Mapping

Localization using Kinematics

Issue: We can’t tell direction from encoders alone

Solution: Keep track of forward/backward motor command sent to each wheel

Localization program: build new arrays into the behavior/priority-based controller and use them to continually update the location

Doesn’t solve noise problems, though
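One simple way to supply the missing direction information, as a sketch: sign the raw encoder counts with the last commanded direction of each motor. The variable names here are illustrative, not the controller's actual arrays.

/* Sign the encoder counts with the commanded motor direction so odometry
 * knows forward from backward. Names and values are illustrative only. */
#include <stdio.h>

int left_motor_cmd  = 1;    /* +1 forward, -1 backward: last command sent   */
int right_motor_cmd = -1;

long signed_counts(long raw_counts, int motor_cmd)
{
    /* Encoders only count pulses; the commanded direction supplies the sign. */
    return (motor_cmd < 0) ? -raw_counts : raw_counts;
}

int main(void)
{
    long dNL = signed_counts(40, left_motor_cmd);   /* +40: left moved forward   */
    long dNR = signed_counts(40, right_motor_cmd);  /* -40: right moved backward */
    printf("dNL=%ld dNR=%ld (robot is turning in place)\n", dNL, dNR);
    return 0;
}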

Robot Building Lab: Localization and Mapping

Review: Prioritization Algorithm: Basic Concept

•Each process has:
•a priority level,
•a pair of output values representing its left and right motor commands,
•an enabled/disabled state indicator, and
•a process name character string

•The prioritize() process cycles through the list of enabled processes, finds the one with the highest priority, and copies its motor output commands to the actual motors (a sketch follows below).

(copyright Prentice Hall 2001)
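A minimal sketch of that arbitration loop, using the array names from the slides but with made-up sizes, values, and a placeholder motor() call:

/* Sketch of the prioritize() arbitration described above. */
#include <stdio.h>

#define NUM_PROCESSES 3

char *process_name[NUM_PROCESSES]     = {"touch", "turn", "wander"};
int   process_priority[NUM_PROCESSES] = {3, 2, 1};     /* fixed priorities      */
int   process_enable[NUM_PROCESSES]   = {3, 0, 1};     /* 0 = disabled          */
int   left_motor[NUM_PROCESSES]       = {-50, 80, 30}; /* per-behavior commands */
int   right_motor[NUM_PROCESSES]      = {50, -80, 30};
int   active_process;

void motor(int which, int power) { (void)which; (void)power; } /* placeholder */

void prioritize(void)
{
    int i, best = -1;
    for (i = 0; i < NUM_PROCESSES; i++)            /* find the enabled process  */
        if (process_enable[i] != 0 &&              /* with the highest priority */
            (best == -1 || process_priority[i] > process_priority[best]))
            best = i;

    if (best != -1) {
        active_process = best;
        motor(0, left_motor[best]);                /* copy its commands to the  */
        motor(1, right_motor[best]);               /* actual motors             */
    }
}

int main(void)
{
    prioritize();
    printf("active process: %s\n", process_name[active_process]);
    return 0;
}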

Robot Building Lab: Localization and Mapping

Review: Prioritization Algorithm: Data Structures

Data structures:
– process_name[] holds the process name strings
– process_priority[] holds the fixed priority values assigned to the processes
– process_enable[] holds the dynamic enable values

– Touch is enabled, since its priority level has been copied into the process_enable[] array. Turn is disabled, and Wander is always enabled

Left_motor[] and right_motor[] hold values assigned by the behavior tasks

– Even though Turn is disabled, its entries are still present in the motor arrays from a previous time. No need to clear them out

Active_process, assigned by prioritize(), is zero, indicating that the process with index 0 (the touch sensor process) is active.

(copyright Prentice Hall 2001)

Robot Building Lab: Localization and Mapping

Today

Localization re-visited
Simple uncertainty correction with landmarks
Local sonar maps
Sensor fusion and global sonar maps

Robot Building Lab: Localization and Mapping

Review: Reducing Odometry Error with Absolute Measurements

Uncertainty ellipses change shape based on other sensor information
Artificial/natural landmarks
Active beacons
Model matching – compare sensor-induced features to features of a known map – geometric or topological

(From Borenstein et al.)

Robot Building Lab: Localization and Mapping

Review: Reflective Optosensors

• Active Sensor, includes:

•Transmitter: emits only infrared light (visible light is filtered out)

•Light detector (photodiode or phototransistor)

•Light from emitter LED bounces off of an external object and is reflected into the detector

•Quantity of light is reported by the sensor

•Depending on the reflectivity of the surface, more or less of the transmitted light is reflected into the detector

•Analog sensor; connects to the Handy Board's (HB's) analog ports

(copyright Prentice Hall 2001)

Robot Building Lab: Localization and Mapping

Use of Infrared Ground Sensor

Robot Building Lab: Localization and Mapping

Correcting Localization with Landmarks

Keep track of (x,y,theta) between landmarks

Correct for absolute y (known) when ground sensor triggers landmark

Issues:
– Uncertainty in x and theta is not corrected using this method
– Possible to confuse landmarks
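A minimal sketch of the y-correction step, assuming the landmarks' absolute y positions are stored in a small table (all values are made up). Note that snapping to the nearest landmark is exactly where the "confusing landmarks" issue can bite if odometry has drifted too far.

/* Snap the odometry y estimate to a known landmark line when the ground
 * sensor fires. Landmark table values are examples. */
#include <stdio.h>

double landmark_y[] = {12.0, 36.0, 60.0};   /* known y of each tape line (in) */
int    num_landmarks = 3;

/* Choose the landmark nearest the current estimate, since x and theta
 * (and therefore which line we crossed) are not directly observed. */
double correct_y(double y_estimate)
{
    int i, best = 0;
    for (i = 1; i < num_landmarks; i++)
        if ((landmark_y[i] - y_estimate) * (landmark_y[i] - y_estimate) <
            (landmark_y[best] - y_estimate) * (landmark_y[best] - y_estimate))
            best = i;
    return landmark_y[best];                /* absolute y replaces the estimate */
}

int main(void)
{
    double y = 34.2;                        /* drifting odometry estimate       */
    printf("corrected y = %.1f\n", correct_y(y));
    return 0;
}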

Robot Building Lab: Localization and Mapping

Today

Localization re-visited
Simple uncertainty correction with landmarks
Local sonar maps
Sensor fusion and global sonar maps

Robot Building Lab: Localization and Mapping

Example Sonar Sweep

Distance measurements from circular sonar scan

What is the robot seeing?

Robot Building Lab: Localization and Mapping

Detecting a Wall

Robot Building Lab: Localization and Mapping

Partitioning Space into Regions

Process sweeps to partition space into free space (white), and walls and obstacles (black and grey)

Robot Building Lab: Localization and Mapping

Grid-based Algorithm

Superimpose “grid” on robot field of view

Indicate some measure of “obstacleness” in each grid cell based on sonar readings
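As a sketch of one way to fill such a grid: convert each range reading, taken at the robot's pose plus the sonar's pointing angle, into the grid cell where the echo came from. Grid size and cell resolution here are arbitrary.

/* Map a sonar range reading into grid-cell indices using the robot pose and
 * the sonar's pointing angle. Grid size and resolution are example values. */
#include <math.h>
#include <stdio.h>

#define PI       3.14159265358979
#define GRID_W   32
#define GRID_H   32
#define CELL_IN  6.0                 /* each cell covers 6 x 6 inches (example) */

int grid[GRID_H][GRID_W];            /* "obstacleness" measure per cell         */

void mark_reading(double x, double y, double theta, double sonar_angle, double range_in)
{
    double a  = theta + sonar_angle;              /* world-frame beam direction  */
    double ox = x + range_in * cos(a);            /* point where the beam ended  */
    double oy = y + range_in * sin(a);
    int cx = (int)(ox / CELL_IN), cy = (int)(oy / CELL_IN);
    if (cx >= 0 && cx < GRID_W && cy >= 0 && cy < GRID_H)
        grid[cy][cx]++;                           /* crude: just count hits      */
}

int main(void)
{
    mark_reading(24.0, 24.0, 0.0, PI / 4.0, 30.0);    /* example pose & reading */
    printf("cell (7,7) count = %d\n", grid[7][7]);
    return 0;
}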

Robot Building Lab: Localization and Mapping

So how do we use sonar to create maps?

What should we conclude if this sonar reads 10 feet?

– "there isn't something here": the space along the beam out to about 10 feet is unoccupied
– "there is something somewhere around here": something occupies the region around the 10-foot mark
– beyond the reading: no information

[Figure: local map showing the unoccupied, occupied, and no-information regions along the sonar beam]

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Sonar Modeling: response model (Kuc)

[Figure: sonar reading S of an obstacle o at orthogonal distance z]

Model parameters:
– c = speed of sound
– a = diameter of the sonar element
– t = time
– z = orthogonal distance
– the angle of the environment surface

• Models the response, hR, in terms of these parameters
• Then, add noise to the model to obtain a probability: p( S | o ), the chance that the sonar reading is S, given an obstacle at location o

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Typical Sonar Probability Model

(From Borenstein et al.)
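A minimal sketch of a piecewise model in that spirit (the band width and probability values are illustrative, not the published model's parameters):

/* Illustrative piecewise sonar model: probability that a cell at distance d
 * along the beam is occupied, given a range reading R. Constants are made up. */
#include <stdio.h>

double p_occupied(double d, double R)
{
    double tol = 0.5;                 /* thickness of the "around R" band (ft)  */
    if (d < R - tol)  return 0.05;    /* well short of the reading: likely free */
    if (d <= R + tol) return 0.7;     /* near the reading: likely occupied      */
    return 0.5;                       /* beyond the reading: no information     */
}

int main(void)
{
    double R = 10.0;                  /* sonar reads 10 feet                    */
    double d;
    for (d = 2.0; d <= 12.0; d += 2.0)
        printf("d=%4.1f ft  p(occ)=%.2f\n", d, p_occupied(d, R));
    return 0;
}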

Robot Building Lab: Localization and Mapping

Building a Map

• The key to making accurate maps is combining lots of data.

• But combining these numbers means we have to know what they are!

What should our map contain?

• small cells

• each represents a bit of the robot’s environment

• larger values => obstacle

• smaller values => free

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Today

Localization re-visited
Simple uncertainty correction with landmarks
Local sonar maps
Sensor fusion and global sonar maps

Robot Building Lab: Localization and Mapping

Using sonar to create maps

What should we conclude if this sonar reads 10 feet...

10 feet

and how do we add the information that the next sonar reading (as the robot moves) reads 10 feet, too?

10 feet

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

What is it a map of?

Several answers to this question have been tried:

It's a map of occupied cells:
– o_xy: cell (x,y) is occupied
– ¬o_xy: cell (x,y) is unoccupied

Each cell is either occupied or unoccupied -- this was the approach taken by the Stanford Cart (pre '83).

What information should this map contain, given that it is created with sonar?

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

What is it a map of? (continued)

Several answers to this question have been tried:

It's a map of occupied cells (pre '83):
– o_xy: cell (x,y) is occupied
– ¬o_xy: cell (x,y) is unoccupied

It's a map of probabilities ('83 - '88):
– p( o | S1..i ): the certainty that a cell is occupied, given the sensor readings S1, S2, ..., Si
– p( ¬o | S1..i ): the certainty that a cell is unoccupied, given the sensor readings S1, S2, ..., Si

Maintaining related values separately?
– initialize all certainty values to zero
– contradictory information will lead to both values near 1
– combining them takes some work...

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

What is it a map of? (continued)

Several answers to this question have been tried:

It's a map of occupied cells (pre '83):
– o_xy: cell (x,y) is occupied
– ¬o_xy: cell (x,y) is unoccupied

It's a map of probabilities ('83 - '88):
– p( o | S1..i ) and p( ¬o | S1..i ), the certainties that a cell is occupied or unoccupied, given the sensor readings S1, S2, ..., Si

It's a map of odds. The odds of an event are expressed relative to the complement of that event:

odds( o | S1..i ) = p( o | S1..i ) / p( ¬o | S1..i )

the odds that a cell is occupied, given the sensor readings S1, S2, ..., Si

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

An example map

Evidence grid of a tree-lined outdoor path (units: feet)

lighter areas: lower odds of obstacles being present

darker areas: higher odds of obstacles being present

How to combine them? (Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Conditional probability

Some intuition...

p( o | S ) = the probability of event o, given event S: the probability that a certain cell o is occupied, given that the robot sees the sensor reading S.

p( S | o ) = the probability of event S, given event o: the probability that the robot sees the sensor reading S, given that a certain cell o is occupied.

• What is really meant by conditional probability?

• How are these two probabilities related?

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Bayes Rule

Conditional probabilities (the joint probability of o and S, factored both ways):

p( o S ) = p( o | S ) p( S )
p( o S ) = p( S | o ) p( o )

Bayes rule relates the conditional probabilities:

p( o | S ) = p( S | o ) p( o ) / p( S )

So, what does this say about odds( o | S2 S1 )? Can we update easily?

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Combining evidence

So, how do we combine evidence to create a map?

What we want:
– odds( o | S2 S1 ), the new value of a cell in the map after the sonar reading S2

What we know:
– odds( o | S1 ), the old value of a cell in the map (before sonar reading S2)
– p( Si | o ) and p( Si | ¬o ), the probabilities that a certain obstacle does (or does not) cause the sonar reading Si

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Combining evidence (derivation built up over several slides)

odds( o | S2 S1 )

  = p( o | S2 S1 ) / p( ¬o | S2 S1 )                                            definition of odds

  = [ p( S2 S1 | o ) p( o ) ] / [ p( S2 S1 | ¬o ) p( ¬o ) ]                     Bayes' rule (numerator and denominator)

  = [ p( S2 | o ) p( S1 | o ) p( o ) ] / [ p( S2 | ¬o ) p( S1 | ¬o ) p( ¬o ) ]  conditional independence of S1 and S2

  = [ p( S2 | o ) / p( S2 | ¬o ) ] * [ p( o | S1 ) / p( ¬o | S1 ) ]             Bayes' rule again

  = (precomputed value from the sensor model) * (previous odds)

Update step = multiplying the previous odds by a precomputed weight.

(Courtesy of Dodds)
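That update rule is a one-line multiply per cell; here is a minimal sketch that stores odds directly in the grid (grid size and the sensor-model weights are illustrative):

/* Evidence-grid cell update: multiply the stored odds by the sensor-model
 * weight p(S|o)/p(S|not o) for each new reading. All constants are examples. */
#include <stdio.h>

#define GRID_W 16
#define GRID_H 16

double odds_grid[GRID_H][GRID_W];     /* odds that each cell is occupied */

void init_grid(void)
{
    int r, c;
    for (r = 0; r < GRID_H; r++)
        for (c = 0; c < GRID_W; c++)
            odds_grid[r][c] = 1.0;    /* odds 1:1 = no prior information */
}

/* weight = p(S | o) / p(S | not o) for this cell under the current reading:
 * > 1 if the reading supports "occupied", < 1 if it supports "free". */
void update_cell(int r, int c, double weight)
{
    odds_grid[r][c] *= weight;        /* odds(o|S2 S1) = weight * odds(o|S1) */
}

int main(void)
{
    init_grid();
    update_cell(4, 4, 3.0);           /* reading suggests an obstacle here  */
    update_cell(4, 3, 0.5);           /* reading suggests this cell is free */
    printf("odds(4,4)=%.2f  odds(4,3)=%.2f\n", odds_grid[4][4], odds_grid[4][3]);
    return 0;
}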

Robot Building Lab: Localization and Mapping

Evidence grids

hallway with some open doors

known map and estimated evidence grid

lab space

CMU -- Hans Moravec (Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Path Planning in Evidence Grids

Reduces to a search problem within the graph of cells, and the graph is created on the fly.

Search for a minimum-cost path, depending on

•length

•probability of collision

(Courtesy of Dodds)
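A minimal sketch of such a search on a toy grid, using Dijkstra's algorithm with a step cost of 1 plus a penalty proportional to the cell's obstacle odds (grid contents and the penalty weight are made up):

/* Minimum-cost path search over a small occupancy grid, where each step
 * costs 1 plus a penalty proportional to the cell's obstacle odds. */
#include <stdio.h>

#define W 5
#define H 5
#define BIG 1e9

double occ[H][W] = {                    /* example obstacle odds per cell */
    {0,0,0,0,0},
    {0,9,9,9,0},
    {0,0,0,9,0},
    {0,9,0,0,0},
    {0,9,0,0,0},
};

int main(void)
{
    double dist[H][W];
    int done[H][W] = {{0}};
    int r, c, i;
    for (r = 0; r < H; r++) for (c = 0; c < W; c++) dist[r][c] = BIG;
    dist[0][0] = 0.0;                   /* start in the top-left cell */

    /* Dijkstra with a linear scan instead of a priority queue (fine for 25 cells). */
    for (i = 0; i < W * H; i++) {
        int br = -1, bc = -1;
        for (r = 0; r < H; r++) for (c = 0; c < W; c++)
            if (!done[r][c] && dist[r][c] < BIG &&
                (br < 0 || dist[r][c] < dist[br][bc])) { br = r; bc = c; }
        if (br < 0) break;
        done[br][bc] = 1;
        {
            int dr[4] = {1,-1,0,0}, dc[4] = {0,0,1,-1}, k;
            for (k = 0; k < 4; k++) {
                int nr = br + dr[k], nc = bc + dc[k];
                if (nr >= 0 && nr < H && nc >= 0 && nc < W) {
                    double cost = 1.0 + 0.5 * occ[nr][nc];  /* length + collision penalty */
                    if (dist[br][bc] + cost < dist[nr][nc])
                        dist[nr][nc] = dist[br][bc] + cost;
                }
            }
        }
    }
    printf("cost to reach (4,4): %.1f\n", dist[4][4]);
    return 0;
}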

Robot Building Lab: Localization and Mapping

Robot Mapping

Represent space as a collection of cells, each with the odds (or probability) that it contains an obstacle

[Figure: evidence grid of the lab environment; legend: likely obstacle, likely free space, not sure]

• The relative locations of the robot within the map are assumed known.
• It is important that the robot odometry is correct.
• Equally plausible to consider the converse problem: given a map of the environment, how do I determine where I am? (the "robot localization problem", as opposed to building evidence grids)

(Courtesy of Dodds)

Robot Building Lab: Localization and Mapping

Alternative Mapping Method: Vector field histogram

Faster than using probabilities
Makes assumptions and approximations
Uses certainty values in each grid cell rather than occupancy probabilities
Rather than updating all cells in range, only those on the line of sight of the sensor are updated: increment cells near the range reading, decrement cells below the range reading
Cell values are bounded below and above (e.g., 0 to 10)
Key observation: by taking rapid readings, this method approximates a probability distribution by sampling the real world

Robot Building Lab: Localization and Mapping


Vector Field Histogram

Superimpose a "grid" on the robot's field of view
Indicate some measure of "obstacleness" of each grid cell based on sonar readings
For each reading, update cells in the line of sight of the sonar (see the sketch below):
– increment cells near the reading
– decrement cells below the reading
Cell values are bounded below and above (e.g., 0 to 10)
Re-evaluate position with odometry as the robot moves
Decay values over time
Key observation: by taking rapid readings, this method approximates a probability distribution by sampling the real world
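A minimal sketch of that line-of-sight update, with decay as a separate pass (grid size, cell size, bounds, and step resolution are example values):

/* Increment/decrement update along the sonar's line of sight, with bounds
 * and a simple decay pass. */
#include <math.h>
#include <stdio.h>

#define GRID_W  32
#define GRID_H  32
#define CELL_IN 6.0                 /* inches per cell (example) */
#define MIN_VAL 0
#define MAX_VAL 10

int grid[GRID_H][GRID_W];

void vfh_update(double x, double y, double heading, double range_in)
{
    double d;
    for (d = 0.0; d <= range_in + CELL_IN; d += CELL_IN / 2.0) {
        int cx = (int)((x + d * cos(heading)) / CELL_IN);
        int cy = (int)((y + d * sin(heading)) / CELL_IN);
        if (cx < 0 || cx >= GRID_W || cy < 0 || cy >= GRID_H) continue;
        if (d < range_in - CELL_IN) {
            if (grid[cy][cx] > MIN_VAL) grid[cy][cx]--;   /* below the reading: freer   */
        } else if (d < range_in + CELL_IN) {
            if (grid[cy][cx] < MAX_VAL) grid[cy][cx]++;   /* near the reading: obstacle */
        }
    }
}

void decay(void)                    /* optional: forget stale evidence over time */
{
    int r, c;
    for (r = 0; r < GRID_H; r++)
        for (c = 0; c < GRID_W; c++)
            if (grid[r][c] > MIN_VAL) grid[r][c]--;
}

int main(void)
{
    vfh_update(96.0, 96.0, 0.0, 36.0);   /* robot at (96,96) in, heading 0, range 36 in */
    printf("cell along beam: %d, cell at reading: %d\n", grid[16][18], grid[16][22]);
    return 0;
}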

Robot Building Lab: Localization and Mapping

Building Map With Odometry

(From Thrun et al.)

Robot Building Lab: Localization and Mapping

Building Global Maps

Exploration strategy
– or explore while doing something else, like light following
Combining multiple maps
Odometry error
Decaying confidence over time/movement
Accuracy versus use and timeliness

Robot Building Lab: Localization and Mapping

Lab 8

Localization and Mapping
http://plan.mcs.drexel.edu/courses/robotlab/labs/lab08.pdf

Robot Building Lab: Localization and Mapping

Find Goal, Build Map

Record initial position
Head toward the goal light
Take sonar sweeps and odometry readings and build the map as you move

Fix odometry errors with landmarks (using ground sensors)

Output the map to the host; view graphically

Robot Building Lab: Localization and Mapping

http://plan.mcs.drexel.edu/cgi-bin/legorobots/occgrid.cgi

Robot Building Lab: Localization and Mapping

Lab 8: Localization and Mapping

In-class lab overview:
– Modify the behavior/priority-based controller to continually update the pose (x,y,theta), using the kinematic equations modified to use motor commands; show results on the LCD display to the TA
– Build a local map using sonar sweeps and odometry; move toward a single obstacle for a short period of time
– Use vector field histogram assumptions and a small grid
– Use the combination of the robot's theta and the angle of the sonar to determine the line of sight for each sonar reading
– (Download the serial communications libraries to IC libs; download the terminal program to X:\ and install)
– Dump the local map to the PC; use Joe's tool to display and show to the TA

Robot Building Lab: Localization and Mapping

Take-Home Part of Lab (TWO weeks)

– Modify the behavior/priority-based controller to continually update the pose (x,y,theta)
  • using the kinematic equations, modified to use motor commands
  • using landmark detection (ground sensing) to correct errors in the y-direction
– Follow the light, avoid and react to obstacles, AND build the map:
  1. Light following
  2. Sonar close-up reaction (bumper)
  3. Sonar sample obstacle avoidance and tracking
  4. (Could add real bumper detection and IR now if you want/have ports)
– Dump the local map to the PC; use Joe's tool to display
– Use vector field histogram assumptions, experiment with decaying, and see how large a grid you can handle on the HandyBoard
– Read the on-line documents; answer the questions for Week 8 on Plan
– Grad students need to include a software design document for their final project, code that can be re-used from class, and pseudo-code for new functions

No class Wednesday, BUT we have two Monday sessions prior to the next class
Lab 10: Off-line processing and path planning using maps

Robot Building Lab: Localization and Mapping

Lab 8 Programs

Global variables for: light state, collision state, obstacle state, ground state

Function to determine encoder state (done)
Function to move toward left (done)
Function to move toward right (done)
Function for forward kinematics (done)
Function to move toward target heading (home)
Function to detect sonar close-up state (in-class)
Function to react to collision (done for bumpers)
Function to determine sonar sample state (in-class)
Function to avoid and track obstacles (done for IR)
Function to put a local sonar reading onto the map as a function of location/angle
Function to dump the map to the PC and graph (home)
Function to use the ground state to recognize obstacles (home)

void main(void)
{
    /* calibrate ground state, light state */
    /* input heading */
    /* start servo and sonar */
    /* start processes to keep track of: pose, collision, obstacle, ground states
       (with good use of defer) */
    /* behavior/priority-based control program to switch between:
       light following, collision reaction, obstacle avoidance,
       according to the priority scheme (at home) */
}

Robot Building Lab: Localization and Mapping

References

http://plan.mcs.drexel.edu/courses/robotlab/labs/lab08.pdf

http://plan.mcs.drexel.edu/courses/robotlab/readings/hbmanual.pdf

http://plan.mcs.drexel.edu/courses/readings