
Efficient Scene Simulation for Robust Monte Carlo Localization Using an RGB-D Camera

Maurice Fallon, Hordur Johannsson and John Leonard

RGB-D Vision SLAM

Kinect has fueled new interest in Vision SLAM:

• 3D maps [Henry et al. ISER 2010]
• Kinect Fusion [Newcombe et al. ISMAR 2011]

A full dense model requires:

• Views of every room, in every direction, at less than 5m range
• Storage requirements for a building: 100s of GBs
• The mesh in this video alone: 300MB

Outline

Background:
- Visual SLAM
- Input Map
Kinect Localization:
- Visual Odometry
- Depth Simulation
Results:
- Multi-robot demos
- Future work

Raphael Favier, TU/e

Henry et al. ISER 2010

Newcombe et al. ISMAR 2011

The Case for Visual Localization

(Diagram: current vSLAM capabilities vs. localization, across conditions of extreme 3D motion, low lighting, cheap sensors, and dynamic environments.)

Real-time. 5 m/s. 1000 particles. Only a Kinect.

Other Approaches to Localization


Convert to a simulated ‘LIDAR’ scan:

• Use with LIDAR MCL; limited to 2D with wheel odometry

3D Visual Feature Registration:

• Requires a full SfM solution
• Many failure modes
• Potentially more accurate
• Complementary to our research

H. Johannsson

Simple Input Map

Convert 3D vSLAM output to a Planar Map Representation:

• Large indoor planes don’t often change
• Can simulate views not seen during SLAM

Scales well:

• Low maintenance
• Supports loop-closures
• Sparse, small file size: 2MB for the 9-floor MIT Stata Center


(Figure: convert points to planes.)
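The conversion itself is not spelled out in the slides; a minimal sketch of one standard way to turn points into planes is RANSAC plane fitting, shown below (a NumPy stand-in, not the authors' PCL-based pipeline; the threshold and iteration count are illustrative assumptions).

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """Fit a single plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    Threshold and iteration count are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Points within inlier_thresh of the plane are inliers.
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```

Repeatedly fitting a plane, removing its inliers, and recursing yields a sparse planar map; large indoor planes (walls, floors, ceilings) dominate, which is why the representation stays small.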

Maintaining Particle Diversity


Localization must degrade smoothly when:

• The model is incomplete or has changed
• Sensor imagery is blurred or uninformative
• People are moving and occluding the sensor

Efficient belief propagation is required:

• 1000s of particles @ 10-20Hz

Particle Filter Overview


(Diagram: propagation and likelihood steps of the particle filter.)

Particle Propagation – Visual Odometry

State Vector:

Propagate using FOVIS:

• Fast Odometry from Vision [1]
• 0.08m/s mean velocity error

When VO fails:

• Add extra noise and drift (see the propagation sketch below)
• Future: IMU integration

[1] A. Huang, A. Bachrach, P. Henry, M. Krainin, D. Maturana, D. Fox, N. Roy. Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera, ISRR 2011. http://code.google.com/p/fovis/
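As a concrete illustration of this propagation step, the sketch below advances each particle by the VO increment plus sampled noise, inflating the noise when VO fails. The pose parameterization and noise scales are assumptions for illustration, not values from the paper.

```python
import numpy as np

def propagate(particles, vo_delta, vo_ok, rng):
    """Propagate (N, 4) particles [x, y, z, yaw] by a VO increment.

    vo_delta: [dx, dy, dz, dyaw] in the body frame (e.g. from FOVIS).
    vo_ok: False when visual odometry failed on this frame.
    Noise scales are illustrative assumptions.
    """
    sigma = np.array([0.01, 0.01, 0.005, 0.005])   # per-frame noise
    if not vo_ok:
        vo_delta = np.zeros(4)                      # no motion estimate
        sigma = sigma * 10.0                        # inflate uncertainty
    noise = rng.normal(0.0, sigma, size=particles.shape)
    yaw = particles[:, 3]
    # Rotate the body-frame translation into the world frame per particle.
    dx = vo_delta[0] * np.cos(yaw) - vo_delta[1] * np.sin(yaw)
    dy = vo_delta[0] * np.sin(yaw) + vo_delta[1] * np.cos(yaw)
    particles[:, 0] += dx
    particles[:, 1] += dy
    particles[:, 2] += vo_delta[2]
    particles[:, 3] += vo_delta[3]
    return particles + noise
```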


Particle Propagation – Failed VO

(Images: VO failure modes vs. VO success.)

Particle Filter Overview


(Diagram: propagation and likelihood steps.)

Likelihood Function – Dense Depth

Kinect/RGB-D:

• Dense disparity information
• Uncertainty increases quickly with range

Our approach:

• Efficient depth image simulation using the GPU
• Naturally captures uncertainty
• Uses long-range measurements


(Figure annotation: 20m away.)

Example Depth Image Simulation


Likelihood Function Evaluation


Likelihood for pixel i (evaluated on disparity):
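The expression itself did not survive the transcript. A plausible reconstruction (an assumption in its details, not necessarily the authors' exact formula) is a Gaussian around the simulated disparity mixed with a uniform outlier term:

```latex
% Gaussian around the simulated disparity d_i^sim for pose x_t,
% mixed with a uniform outlier term for robustness.
% The mixing weight w and variance \sigma_d^2 are illustrative.
p(d_i \mid x_t) = w\,\mathcal{N}\!\bigl(d_i;\, d_i^{\mathrm{sim}}(x_t),\, \sigma_d^2\bigr)
                + (1 - w)\,\mathcal{U}(d_{\min}, d_{\max})
```

Evaluating in disparity rather than depth is what lets the model capture range uncertainty naturally: disparity noise is roughly constant, so the implied depth error grows quadratically with range.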

Parallel Depth Image Simulation


• A virtual camera array is created using the particle poses
• Each scene is rendered simultaneously
• The likelihood function is evaluated in parallel on the GPU using GLSL
• Final likelihoods are transferred back to the CPU for reweighting and resampling
• Measurements are decimated to 20x15 (a CPU stand-in is sketched below)
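A CPU stand-in for this GPU evaluation might look like the following: each particle's simulated, decimated disparity image is scored against the measurement under the Gaussian-plus-uniform pixel model above. `render_disparity` is a hypothetical placeholder for the pcl::simulation rendering step, and all constants are assumptions.

```python
import numpy as np

def log_likelihood(measured, simulated, sigma_d=0.5, w=0.9,
                   d_min=0.0, d_max=100.0):
    """Per-particle log-likelihood over a decimated disparity image.

    measured, simulated: (15, 20) disparity arrays (20x15 decimation).
    Gaussian-plus-uniform mixture per pixel; constants are
    illustrative assumptions, not the paper's values.
    """
    gauss = (w / (np.sqrt(2 * np.pi) * sigma_d) *
             np.exp(-0.5 * ((measured - simulated) / sigma_d) ** 2))
    uniform = (1.0 - w) / (d_max - d_min)
    return np.sum(np.log(gauss + uniform))

def reweight(particles, weights, measured, render_disparity):
    """Reweight particles; render_disparity(pose) is a placeholder for
    simulating a depth/disparity image of the map from one pose."""
    log_w = np.array([log_likelihood(measured, render_disparity(p))
                      for p in particles])
    log_w += np.log(weights)
    log_w -= log_w.max()              # stabilize before exponentiating
    new_w = np.exp(log_w)
    return new_w / new_w.sum()
```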

Human Portable


Extensive Testing


(Figure: 1.2m/s, 0.48m; 1.05m/s, 0.66m; 0.47m/s, 0.3m.)

The paper's results are out of date: there has been significant optimization since, and the 2D variant works with 10s of particles.


Fast, Cheap and In Control

(Video at 4x real time.)

Summary

Contribution:

• Efficient simulation of 1000s of model views
• Robust localization using only RGB-D
• Extensive testing, including closed loop

Open Source Code:

• FOVIS VO library
• Integrated within the Point Cloud Library: pcl::simulation

Future Work:

• IMU
• Color [Mason et al., ICRA 2011]
• Bag of Words visual recovery
• Barometry
• Integration with lifelong vSLAM


Real-time. 5 m/s. 1000 particles. Only a Kinect

Bayesian Filters

Grid-Based Methods:

• Cannot scale to high dimensionality

Kalman Filter:

• Optimal for linear systems with Gaussian noise
• Extended Kalman Filter:
  – Handles non-linearity via local linearization (which may be incorrect)
  – Distributions must still be Gaussian
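All of the above approximate the same Bayes filter recursion (grid methods discretize it, the Kalman family assumes Gaussians, particle filters sample it):

```latex
% Bayes filter recursion: predict with the motion model, then
% correct with the measurement likelihood (\eta is a normalizer).
\mathrm{bel}(x_t) = \eta\, p(z_t \mid x_t)
  \int p(x_t \mid x_{t-1}, u_t)\,\mathrm{bel}(x_{t-1})\,\mathrm{d}x_{t-1}
```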


Particle Filter – Overview 2

Implementation Issues:

• Degeneracy (see the resampling sketch below)
• High dimensionality
• Importance Sampling

M. Isard, Microsoft Research
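Degeneracy is commonly monitored with the effective sample size and countered by resampling. The sketch below shows the generic systematic scheme over normalized weights (a standard method, not necessarily the one used in this work).

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights;
    a small value signals degeneracy."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(particles, weights, rng):
    """Low-variance systematic resampling: one random offset, then N
    evenly spaced positions through the cumulative weights."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)      # guard against float round-off
    return particles[idx], np.full(n, 1.0 / n)
```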


Robust Visual Localization

• 2D Laser Localization:
  – Wheel odometry required
  – Flat-world assumptions
• Aim: flexible, mobile, 3D, with recovery mechanisms
• Motivating Applications:


Monte Carlo Integration

A Bayesian numerical integration technique:

• State approximated as a cloud of discrete samples
• Samples weighted according to their likelihoods
• Parameters of the distribution can be estimated from a large set of samples (see the sketch below)
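A minimal illustration of those three points, using generic importance weighting (not code from the talk): samples are drawn, weighted by a likelihood, and the posterior mean and variance are estimated from the weighted set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cloud of discrete samples approximating a 1-D prior distribution.
samples = rng.normal(0.0, 2.0, size=5000)

# Weight each sample by its likelihood under a measurement z = 1.0
# (unit-variance Gaussian likelihood, purely illustrative).
z = 1.0
weights = np.exp(-0.5 * (samples - z) ** 2)
weights /= weights.sum()

# Estimate posterior parameters from the weighted sample set.
mean = np.sum(weights * samples)
var = np.sum(weights * (samples - mean) ** 2)
print(f"posterior mean ~ {mean:.3f}, variance ~ {var:.3f}")
```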


A simplistic solution: simulate a laser


Retain the Laser MCL paradigm [Dellaert, ICRA 1999]:

• Extract the non-floor portion of the measurements
• Discard longer ranges (>5m)
• 2D flat-world approach
• Requires wheel odometry
• Localize against LIDAR maps or floor plans
• How to deal with non-Gaussian noise? (A sketch of the depth-to-scan conversion follows.)
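A sketch of this baseline conversion: keep the non-floor band of the depth image, take the nearest return per column, and drop ranges beyond 5m. The camera intrinsics, floor cutoff, and flat-world geometry here are illustrative assumptions.

```python
import numpy as np

def depth_to_pseudo_scan(depth, fx=570.0, cx=320.0,
                         floor_rows=400, max_range=5.0):
    """Convert an (H, W) metric depth image into a 2D 'laser' scan.

    Rows at and below floor_rows are dropped as a crude floor
    removal; each column keeps its nearest return; ranges beyond
    max_range are discarded. Intrinsics and thresholds are
    illustrative assumptions, not values from the talk.
    Returns (angles, ranges), with NaN for discarded ranges.
    """
    band = depth[:floor_rows, :]                # keep non-floor rows
    band = np.where(band > 0, band, np.inf)     # Kinect uses 0 = invalid
    cols = np.arange(band.shape[1])
    angles = np.arctan2(cols - cx, fx)          # bearing of each column
    ranges = band.min(axis=0)                   # nearest return per column
    ranges = np.where(ranges <= max_range, ranges, np.nan)
    return angles, ranges
```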

Input: Low-Fi Planar Map


Performance Metrics


• Error can be significantly reduced if ICP is run on the most likely particle (sketched below)
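A minimal sketch of that refinement: one point-to-point ICP loop in 2D, matching the scan (already transformed by the most likely particle's pose) to map points via nearest neighbors and re-fitting a rigid transform with SVD. This is generic ICP, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(scan, map_pts, n_iters=20):
    """Refine a 2D pose by point-to-point ICP (generic sketch).

    scan, map_pts: (N, 2) and (M, 2) arrays in the same frame.
    Returns (R, t) such that scan @ R.T + t aligns with the map.
    """
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(map_pts)
    for _ in range(n_iters):
        cur = scan @ R.T + t
        _, idx = tree.query(cur)                 # nearest map point
        matched = map_pts[idx]
        # Rigid fit via SVD of the cross-covariance (Kabsch/Horn).
        mu_s, mu_m = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
        dR = Vt.T @ D @ U.T
        R, t = dR @ R, dR @ (t - mu_s) + mu_m
    return R, t
```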

Summary

Summary:

• Efficient simulation of 1000s of model views
• Robust localization using only RGB-D
• Extensive testing, including closed loop

Open Source Code:

• FOVIS Visual Odometry library
• Simulation library within the Point Cloud Library: pcl::simulation

Alternative Sources of Information:

• IMU
• Color
• Bag of Words visual recovery
• Barometry


Efficient Scene Simulation for Robust Monte Carlo Localization Using an RGB-D Camera

Maurice Fallon, Hordur Johannsson and John Leonard