Development and Implementation of SLAM Algorithms
Kasra Khosoussi
Supervised by: Dr. Hamid D. Taghirad
Advanced Robotics and Automated Systems (ARAS)
Industrial Control Laboratory
K.N. Toosi University of Technology
July 13, 2011
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 1 / 43
Outline
1 Introduction: SLAM Problem, Bayesian Filtering, Particle Filter
2 RBPF-SLAM
3 Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction, LRS, LIS-1, LIS-2
4 Results: Simulation Results, Experiments on Real Data
5 Conclusion
6 Future Work
Introduction SLAM Problem
Autonomous Mobile Robots
The ultimate goal of mobile robotics is to design autonomous mobile robots.
“The ability to simultaneously localize a robot and accurately map its environment is a key prerequisite of truly autonomous robots.”
SLAM stands for Simultaneous Localization and Mapping.
Introduction SLAM Problem
The SLAM Problem
Assumptions
- No a priori knowledge of the environment (i.e., no map)
- No independent position information (e.g., no GPS)
- Static environment

Given
- Observations of the environment
- Control signals

Goal
- Estimate the map of the environment (e.g., the locations of the features)
- Estimate the position and orientation (pose) of the robot
Introduction SLAM Problem
Map Representation
- Topological maps
- Grid maps
- Feature-based maps
Introduction Bayesian Filtering
Probabilistic Methods
Probabilistic methods outperform deterministic algorithms: they describe the uncertainty in data, models, and estimates.

Bayesian estimation
- Robot pose s_t
- The robot pose is assumed to be a Markov process with initial distribution p(s_0)
- Feature location θ_i
- Map θ = {θ_1, . . . , θ_N}
- Observation z_t and control input u_t
- x_t = [s_t θ]^T
- x_{1:t} ≜ {x_1, . . . , x_t}

Filtering distribution: p(s_t, θ | z_{1:t}, u_{1:t})
Smoothing distribution: p(s_{0:t}, θ | z_{1:t}, u_{1:t})
MMSE estimates: x̂_t = E[x_t | z_{1:t}, u_{1:t}] and x̂_{0:t} = E[x_{0:t} | z_{1:t}, u_{1:t}]
Introduction Bayesian Filtering
State-Space Equations
Robot motion equation:

s_t = f(s_{t−1}, u_t, v_t)

Observation equation:

z_t = g(s_t, θ_{n_t}, w_t)

f(·, ·, ·) and g(·, ·, ·) are non-linear functions.
v_t and w_t are zero-mean white Gaussian noises with covariance matrices Q_t and R_t.
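The slides leave f and g abstract; a common concrete instantiation (an assumption here, not specified in the slides) is a planar unicycle motion model with a range–bearing sensor. A minimal sketch, where the timestep `DT` and all names are illustrative:

```python
import numpy as np

DT = 0.1  # sampling time (illustrative; not fixed by the slides)

def motion_model(s_prev, u, v):
    """s_t = f(s_{t-1}, u_t, v_t): planar unicycle, s = [x, y, heading]."""
    x, y, phi = s_prev
    v_lin, omega = u + v              # process noise v_t enters via the controls
    return np.array([x + v_lin * DT * np.cos(phi),
                     y + v_lin * DT * np.sin(phi),
                     phi + omega * DT])

def observation_model(s, theta_n, w):
    """z_t = g(s_t, theta_{n_t}, w_t): range and bearing to feature theta_{n_t}."""
    dx, dy = theta_n[0] - s[0], theta_n[1] - s[1]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - s[2]]) + w
```

With v_t ~ N(0, Q_t) and w_t ~ N(0, R_t) plugged into these two functions, this is exactly the non-linear state-space form above.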
Introduction Bayesian Filtering
Bayes Filter
How to estimate the posterior distribution recursively in time? Bayes filter!
1 Prediction

p(x_t | z_{1:t−1}, u_{1:t}) = ∫ p(x_t | x_{t−1}, u_t) p(x_{t−1} | z_{1:t−1}, u_{1:t−1}) dx_{t−1}   (1)

2 Update

p(x_t | z_{1:t}, u_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t−1}, u_{1:t}) / ∫ p(z_t | x_t) p(x_t | z_{1:t−1}, u_{1:t}) dx_t   (2)

Here p(x_t | x_{t−1}, u_t) is the motion model and p(z_t | x_t) is the observation model.

There is a similar recursive formula for the smoothing density.
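On a discrete state space the integrals in (1) and (2) become sums, so the recursion can be carried out exactly. A minimal sketch on a hypothetical 1-D grid (the kernel and likelihood values are illustrative):

```python
import numpy as np

def bayes_filter_step(prior, motion_kernel, likelihood):
    """One Bayes filter cycle on a discrete 1-D grid.
    Prediction (1): convolve the prior belief with the motion model.
    Update (2): multiply by the observation likelihood and normalize."""
    predicted = np.convolve(prior, motion_kernel, mode="same")
    posterior = likelihood * predicted
    return posterior / posterior.sum()

belief = np.full(5, 0.2)                            # uniform prior over 5 cells
kernel = np.array([0.1, 0.8, 0.1])                  # p(x_t | x_{t-1}): mostly stay put
likelihood = np.array([0.05, 0.1, 0.2, 0.5, 0.15])  # p(z_t | x_t) on the grid
belief = bayes_filter_step(belief, kernel, likelihood)
```

The normalization in the last line plays the role of the denominator of (2).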
Introduction Bayesian Filtering
Bayes Filter Cont’d.
Example
For linear-Gaussian models, the Bayes filter equations simplify to the Kalman filter equations.

But . . .

In general, the exact Bayes filter cannot be implemented, because it requires evaluating complex high-dimensional integrals.

So we have to use approximations . . .
- Extended Kalman Filter (EKF)
- Unscented Kalman Filter (UKF)
- Gaussian-Sum Filter
- Extended Information Filter (EIF)
- Particle Filter (a.k.a. Sequential Monte Carlo methods)
Introduction Particle Filter
Perfect Monte Carlo Sampling
Q. How to compute expected values such as E_{p(x)}[h(x)] = ∫ h(x) p(x) dx for any integrable function h(·)?

Perfect Monte Carlo (a.k.a. Monte Carlo Integration)
1 Generate N i.i.d. samples {x^[i]}_{i=1}^N according to p(x)
2 Estimate the PDF as P_N(x) ≜ (1/N) Σ_{i=1}^N δ(x − x^[i])
3 Estimate E_{p(x)}[h(x)] ≈ ∫ h(x) P_N(x) dx = (1/N) Σ_{i=1}^N h(x^[i])

Convergence theorems for N → ∞ follow from the central limit theorem and the strong law of large numbers.
The error decreases as O(N^{−1/2}), regardless of the dimension of x.

But . . .
It is usually impossible to sample directly from the filtering or smoothing distribution (high-dimensional, non-standard, known only up to a normalizing constant).
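A minimal sketch of the three steps above; the target p = N(0, 1), the function h(x) = x², and the sample size are illustrative choices (E_p[x²] = 1 exactly, so the estimate can be checked):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_expectation(h, sampler, n):
    """Perfect Monte Carlo: E_p[h(x)] ~= (1/N) sum_i h(x^[i]), x^[i] ~ p."""
    samples = sampler(n)        # step 1: N i.i.d. draws from p(x)
    return h(samples).mean()    # steps 2-3: average h over the empirical PDF

# E[x^2] under a standard normal is exactly 1
est = mc_expectation(lambda x: x**2, lambda n: rng.standard_normal(n), 100_000)
```

With N = 10^5 the O(N^{−1/2}) error bound puts the estimate within a few thousandths of 1.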
Introduction Particle Filter
Importance Sampling (IS)
Idea
Generate samples from another distribution π(x), called the importance function, and weight these samples according to w*(x^[i]) = p(x^[i]) / π(x^[i]):

E_{p(x)}[h(x)] = ∫ h(x) (p(x)/π(x)) π(x) dx ≈ (1/N) Σ_{i=1}^N w*(x^[i]) h(x^[i])

But . . .
How to do this recursively in time?
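A minimal numerical sketch of this identity; the target p = N(0, 1), the proposal π = N(1, 2²), and h(x) = x² are all illustrative choices:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(1.0, 2.0, n)                       # samples from proposal pi = N(1, 2^2)
w = normal_pdf(x, 0, 1) / normal_pdf(x, 1, 2)     # w*(x^[i]) = p(x^[i]) / pi(x^[i])
est = np.mean(w * x**2)                           # (1/N) sum_i w*(x^[i]) h(x^[i])
```

Even though no sample was drawn from p, the weighted average converges to E_p[x²] = 1; a proposal with heavier tails than the target (as here) keeps the weights bounded.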
Introduction Particle Filter
Sequential Importance Sampling (SIS)
Sampling from scratch from the importance function π(x_{0:t} | z_{1:t}, u_{1:t}) implies growing computational complexity at each time step.

Q. How to estimate p(x_{0:t} | z_{1:t}, u_{1:t}) using importance sampling recursively?

Sequential Importance Sampling
At time t, generate x_t^[i] according to the proposal distribution π(x_t | x_{0:t−1}^[i], z_{1:t}, u_{1:t}), and merge it with the previous samples x_{0:t−1}^[i] drawn from π(x_{0:t−1} | z_{1:t−1}, u_{1:t−1}):

x_{0:t}^[i] = {x_{0:t−1}^[i], x_t^[i]} ∼ π(x_{0:t} | z_{1:t}, u_{1:t})

w(x_{0:t}^[i]) = w(x_{0:t−1}^[i]) · p(z_t | x_t^[i]) p(x_t^[i] | x_{t−1}^[i], u_t) / π(x_t^[i] | x_{0:t−1}^[i], z_{1:t}, u_{1:t})
Introduction Particle Filter
Degeneracy and Resampling
Degeneracy Problem
After a few steps, all but one of the particles (samples) have negligible normalized weight.

Resampling
Eliminate particles with low normalized weights and multiply those with high normalized weights, in a probabilistic manner.

Resampling will cause sample impoverishment.

The effective sample size (ESS) is a measure of the degeneracy of SIS that can be used to avoid unnecessary resampling steps:

N̂_eff = 1 / Σ_{i=1}^N w̃(x_{0:t}^[i])²

Perform resampling only if N̂_eff is lower than a fixed threshold N_T.
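The ESS test and one common resampling scheme (systematic resampling; the slides do not prescribe a particular one, and the threshold N_T = N/2 is an illustrative choice) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_sample_size(w):
    """N_eff = 1 / sum_i w_i^2 for normalized weights w."""
    return 1.0 / np.sum(w ** 2)

def systematic_resample(particles, w):
    """Systematic resampling: duplicate high-weight particles, drop
    low-weight ones, and reset all weights to 1/N."""
    N = len(w)
    positions = (rng.random() + np.arange(N)) / N   # one stratified draw per slot
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(N, 1.0 / N)

particles = np.arange(5.0)                     # toy particle values
w = np.array([0.7, 0.1, 0.1, 0.05, 0.05])      # one dominant normalized weight
if effective_sample_size(w) < 0.5 * len(w):    # threshold N_T = N/2
    particles, w = systematic_resample(particles, w)
```

Here N̂_eff ≈ 1.94 out of N = 5, so the filter resamples; most surviving particles are copies of the dominant one, which is exactly the impoverishment risk noted above.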
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 14 / 43
![Page 52: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/52.jpg)
Introduction Particle Filter
Proposal Distribution
Selecting an appropriate proposal distribution $\pi(x_t|x_{0:t-1},z_{1:t},u_{1:t})$ plays an important role in the success of the particle filter.

The simplest and most common choice is the motion model (transition density) $p(x_t|x_{t-1},u_t)$.

$p(x_t|x_{t-1},z_t,u_t)$ is known as the optimal proposal distribution; it limits the degeneracy of the particle filter by minimizing the conditional variance of the unnormalized weights.

The importance weights for the optimal proposal distribution can be obtained as:

$$w(x_{0:t}^{[i]}) = w(x_{0:t-1}^{[i]})\, p(z_t|x_{t-1}^{[i]})$$

But . . .

In SLAM, neither $p(x_t|x_{t-1},z_t,u_t)$ nor $p(z_t|x_{t-1}^{[i]})$ can be computed in closed form, so we have to use approximations.
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 15 / 43
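When the motion model itself is used as the proposal, the transition terms in the SIS weight recursion cancel and the incremental weight reduces to the likelihood. A quick numerical check of this cancellation with toy 1-D Gaussian densities (all values illustrative):

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# With pi = p(x_t | x_{t-1}, u_t), the transition density appears in both the
# numerator and the denominator of the weight ratio, so it cancels and the
# incremental weight is just the likelihood p(z_t | x_t).
x_prev, u_t, x_t, z_t = 0.0, 1.0, 0.9, 1.05
transition = gauss_pdf(x_t, x_prev + u_t, 0.5)  # p(x_t | x_{t-1}, u_t)
proposal   = transition                          # motion model as proposal
likelihood = gauss_pdf(z_t, x_t, 0.3)            # p(z_t | x_t)
full_ratio = likelihood * transition / proposal
assert abs(full_ratio - likelihood) < 1e-9
```

The convenience of this cancellation is why the motion model is the most common choice, despite its weights having higher variance than the optimal proposal's.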
![Page 57: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/57.jpg)
RBPF-SLAM
Table of Contents
1 Introduction
   SLAM Problem
   Bayesian Filtering
   Particle Filter

2 RBPF-SLAM

3 Monte Carlo Approximation of the Optimal Proposal Distribution
   Introduction
   LRS
   LIS-1
   LIS-2

4 Results
   Simulation Results
   Experiments On Real Data

5 Conclusion

6 Future Work
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 16 / 43
![Page 58: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/58.jpg)
RBPF-SLAM
Rao-Blackwellized Particle Filter in SLAM
SLAM is a very high-dimensional problem.

Estimating $p(s_{0:t},\boldsymbol{\theta}|z_{1:t},u_{1:t})$ using a particle filter can be very inefficient.

We can factor the smoothing distribution into two parts:

$$p(s_{0:t},\boldsymbol{\theta}|z_{1:t},u_{1:t}) = p(s_{0:t}|z_{1:t},u_{1:t}) \prod_{k=1}^{M} p(\theta_k|s_{0:t},z_{1:t},u_{1:t})$$

The motion model is used as the proposal distribution in FastSLAM 1.0.

FastSLAM 2.0 linearizes the observation equation and approximates the optimal proposal distribution with a Gaussian distribution.

FastSLAM 2.0 outperforms FastSLAM 1.0.
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 17 / 43
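The factorization above is what makes RBPF-SLAM tractable: each particle carries one pose trajectory sample plus one small independent filter per landmark. A minimal sketch with scalar poses and scalar landmark Kalman filters (the 1-D model and the noise variances are illustrative assumptions, not the thesis implementation):

```python
import random

class Particle:
    """One RBPF particle: a pose sample plus an independent
    (mean, variance) filter per landmark, as in the factorization above."""

    def __init__(self, pose=0.0):
        self.pose = pose
        self.landmarks = {}  # landmark id -> (mean, variance)

    def predict(self, u_t, q_var=0.04):
        # Sample the pose from the motion model (the FastSLAM 1.0 proposal)
        self.pose += u_t + random.gauss(0.0, q_var ** 0.5)

    def update_landmark(self, k, z_t, r_var=0.09):
        # Scalar Kalman update of landmark k given a relative observation
        # z_t ~ N(landmark_k - pose, r_var); first sighting initializes it.
        if k not in self.landmarks:
            self.landmarks[k] = (self.pose + z_t, r_var)
            return
        mean, var = self.landmarks[k]
        innov = z_t - (mean - self.pose)      # innovation
        gain = var / (var + r_var)            # Kalman gain
        self.landmarks[k] = (mean + gain * innov, (1 - gain) * var)

random.seed(1)
p = Particle()
p.predict(u_t=1.0)
p.update_landmark(0, z_t=4.0)
p.update_landmark(0, z_t=4.2)
assert 0 in p.landmarks and p.landmarks[0][1] < 0.09  # variance shrinks
```

Because the landmark estimates are conditioned on the sampled trajectory, each landmark filter stays low-dimensional and the map updates cost O(1) per observed feature.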
![Page 64: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/64.jpg)
RBPF-SLAM
FastSLAM 2.0
Linearization error
Gaussian approximation
Motion models are assumed linear with respect to the noise variable $v_t$
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 18 / 43
![Page 67: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/67.jpg)
Monte Carlo Approximation of the Optimal Proposal Distribution
Table of Contents
1 Introduction
   SLAM Problem
   Bayesian Filtering
   Particle Filter

2 RBPF-SLAM

3 Monte Carlo Approximation of the Optimal Proposal Distribution
   Introduction
   LRS
   LIS-1
   LIS-2

4 Results
   Simulation Results
   Experiments On Real Data

5 Conclusion

6 Future Work
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 19 / 43
![Page 68: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/68.jpg)
Monte Carlo Approximation of the Optimal Proposal Distribution Introduction
FastSLAM 2.0 vs. The Proposed Algorithms

FastSLAM 2.0

1 Approximate the optimal proposal distribution with a Gaussian using the linearized observation equation, and sample from this Gaussian
2 Update the landmark EKFs for the observed features
3 Compute the importance weights using the linearized observation equation
4 Perform resampling

Proposed Algorithms

1 Sample from the optimal proposal distribution using Monte Carlo sampling methods
2 Update the landmark EKFs for the observed features
3 Compute the importance weights using Monte Carlo integration
4 Perform resampling only if it is necessary according to the ESS
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 20 / 43
![Page 75: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/75.jpg)
Monte Carlo Approximation of the Optimal Proposal Distribution Introduction
FastSLAM 2.0 v.s. The Proposed Algorithms
FastSLAM 2.0
1 Approximate the optimal proposal distribution with a Gaussian usingthe linearized observation equation. Sample from this Gaussian
2 Update the landmarks EKFs for the observed features
3 Compute the importance weights using the linearized observationequation
4 Perform resampling
Proposed Algorithms
1 Sample from the optimal proposal distribution using Monte Carlosampling methods
2 Update the landmarks EKFs for the observed features
3 Compute the importance weights using Monte Carlo integration
4 Perform resampling only if it is necessary according to ESS
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 20 / 43
![Page 76: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/76.jpg)
Monte Carlo Approximation of the Optimal Proposal Distribution Introduction
MC Approximation of The Optimal Proposal Dist.
MC sampling methods such as importance sampling (IS) and rejection sampling (RS) can be used to sample from the optimal proposal distribution $p(s_t|s_{t-1}^{[i]},z_t,u_t)$.

Instead of sampling directly from the optimal proposal distribution . . .

We can generate $M$ samples $\{s_t^{[i,j]}\}_{j=1}^{M}$ according to another distribution $q(s_t|s_{t-1}^{[i]},z_t,u_t)$ and then weight those samples proportionally to $\dfrac{p(s_t|s_{t-1}^{[i]},z_t,u_t)}{q(s_t|s_{t-1}^{[i]},z_t,u_t)}$: Local Importance Sampling (LIS)

We can generate $M$ samples $\{s_t^{[i,j]}\}_{j=1}^{M}$ according to another distribution $q(s_t|s_{t-1}^{[i]},z_t,u_t)$ and accept some of them as samples of the optimal proposal distribution according to the rejection sampling criterion: Local Rejection Sampling (LRS)

Here the motion model is used as the local proposal:

$$q(s_t|s_{t-1}^{[i]},u_t,z_t) = p(s_t|s_{t-1}^{[i]},u_t)$$
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 21 / 43
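The LIS idea above can be sketched for a toy 1-D model: with the motion model as $q$, the local importance weights reduce to the likelihood $p(z_t|s)$, and one of the $M$ local samples is then drawn in proportion to those weights. The model and noise levels are illustrative assumptions:

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lis_sample(s_prev, u_t, z_t, M=50, q_sigma=0.5, r_sigma=0.3):
    """Local importance sampling: draw M local samples from the motion model
    q = p(s_t | s_{t-1}, u_t), weight them by the likelihood p(z_t | s), and
    pick one in proportion to those local weights, giving an (approximate)
    draw from the optimal proposal."""
    local = [s_prev + u_t + random.gauss(0.0, q_sigma) for _ in range(M)]
    lw = [gauss_pdf(z_t, s, r_sigma) for s in local]  # local importance weights
    return random.choices(local, weights=lw, k=1)[0]

random.seed(2)
s_t = lis_sample(0.0, u_t=1.0, z_t=1.0)
```

As $M$ grows, the selected sample's distribution approaches the optimal proposal itself.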
![Page 82: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/82.jpg)
Monte Carlo Approximation of the Optimal Proposal Distribution LRS
Local Rejection Sampling (LRS)
graphics by M. Jordan
Rejection Sampling
1 Generate $u^{[i]} \sim \mathcal{U}[0,1]$ and $s_t^{[i,j]} \sim p(s_t|s_{t-1}^{[i]},u_t)$

2 Accept $s_t^{[i,j]}$ if

$$u^{[i]} \le \frac{p(s_t^{[i,j]}|s_{t-1}^{[i]},z_t,u_t)}{C \cdot p(s_t^{[i,j]}|s_{t-1}^{[i]},u_t)} = \frac{p(z_t|s_t^{[i,j]})}{\max_j p(z_t|s_t^{[i,j]})}$$
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 22 / 43
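The LRS acceptance test above can be sketched as follows: candidates are drawn from the motion model and each is accepted with probability $p(z_t|s) / \max_j p(z_t|s^{[j]})$. The toy 1-D Gaussian model and its parameters are illustrative assumptions:

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lrs_sample(s_prev, u_t, z_t, M=100, q_sigma=0.5, r_sigma=0.3):
    """Local rejection sampling: propose M candidates from the motion model,
    then accept each with probability p(z_t|s) / max_j p(z_t|s^[j]).

    The maximum over the candidates stands in for the bound C, so the
    best-scoring candidate is always accepted."""
    cand = [s_prev + u_t + random.gauss(0.0, q_sigma) for _ in range(M)]
    lik = [gauss_pdf(z_t, s, r_sigma) for s in cand]
    top = max(lik)
    return [s for s, l in zip(cand, lik) if random.random() <= l / top]

random.seed(3)
accepted = lrs_sample(0.0, u_t=1.0, z_t=1.0)
assert 1 <= len(accepted) <= 100
```

Accepted candidates are exact (conditional on the empirical bound) draws from the optimal proposal; the acceptance rate drops when the observation is far from where the motion model places its mass.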
![Page 86: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/86.jpg)
Monte Carlo Approximation of the Optimal Proposal Distribution LRS
LRS Cont’d.
Now we have to compute the importance weights for the set of accepted samples.

We need to compute $p(z_t|s_{t-1}^{[i]})$. By MC integration:

$$p(z_t|s_{t-1}^{[i]}) = \int p(z_t|s_t)\, p(s_t|s_{t-1}^{[i]},u_t)\, ds_t \approx \frac{1}{M} \sum_{j=1}^{M} p(z_t|s_t^{[i,j]})$$

$p(z_t|s_t^{[i,j]})$ can be approximated by a Gaussian.

MC integration requires a large number of local particles $M$.
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 23 / 43
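The MC integration above can be checked on a toy linear-Gaussian model, where $p(z_t|s_{t-1})$ is also available in closed form; the model and noise levels are illustrative assumptions:

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mc_observation_likelihood(s_prev, u_t, z_t, M=5000, q_sigma=0.5, r_sigma=0.3):
    """MC integration of p(z_t|s_{t-1}) = int p(z_t|s) p(s|s_{t-1},u_t) ds
    using M draws from the motion model."""
    total = 0.0
    for _ in range(M):
        s = s_prev + u_t + random.gauss(0.0, q_sigma)  # s ~ p(s|s_{t-1},u_t)
        total += gauss_pdf(z_t, s, r_sigma)            # p(z_t|s)
    return total / M

# For this linear-Gaussian toy model the integral has a closed form:
# z_t | s_{t-1} ~ N(s_{t-1} + u_t, q_sigma^2 + r_sigma^2), so we can verify.
random.seed(4)
est = mc_observation_likelihood(0.0, 1.0, 1.0)
exact = gauss_pdf(1.0, 1.0, math.sqrt(0.5 ** 2 + 0.3 ** 2))
assert abs(est - exact) / exact < 0.15
```

Tightening the estimate requires growing $M$, which is exactly the cost noted on the slide: the estimator's standard error shrinks only as $1/\sqrt{M}$.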
![Page 92: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/92.jpg)
Monte Carlo Approximation of the Optimal Proposal Distribution LIS-1
Local Importance Sampling (LIS): LIS-1
1 Similar to LRS, we have to generate M random samples {s_t^{[i,j]}}_{j=1}^{M} according to p(s_t | s_{t-1}^{[i]}, u_t)
2 Local IS weights: w^{(LIS)}(s_t^{[i,j]}) = p(z_t | s_t^{[i,j]}) ∝ p(s_t^{[i,j]} | s_{t-1}^{[i]}, z_t, u_t) / p(s_t^{[i,j]} | s_{t-1}^{[i]}, u_t)
3 Local resampling among {s_t^{[i,j]}}_{j=1}^{M} using w^{(LIS)}(s_t^{[i,j]})
4 Main weights: w(s_t^{*[i,j]}) = w(s_{t-1}^{[i]}) p(z_t | s_{t-1}^{[i]})
5 p(z_t | s_{t-1}^{[i]}) ≈ (1/M) Σ_{j=1}^{M} p(z_t | s_t^{[i,j]})   (MC integration)
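One LIS-1 step for a single parent particle can be sketched as follows; the toy 1-D Gaussian models and helper names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

def lis1_step(s_prev, u, z, M=50, sigma_s=0.5, sigma_z=0.3):
    """One LIS-1 step for one parent particle (toy 1-D Gaussian models).

    Local particles come from the motion model, local weights are the
    measurement likelihoods, a local resampling picks the surviving state,
    and the parent's main-weight factor is the MC estimate of p(z_t | s_{t-1}).
    """
    local = s_prev + u + rng.normal(0.0, sigma_s, size=M)    # s_t^{[i,j]}
    w_lis = np.exp(-0.5 * ((z - local) / sigma_z) ** 2)      # ∝ p(z_t | s_t^{[i,j]})
    probs = w_lis / w_lis.sum()                              # normalize local weights
    s_new = local[rng.choice(M, p=probs)]                    # local resampling
    # MC estimate of p(z_t | s_{t-1}): mean of the (normalized-density) likelihoods.
    weight_factor = w_lis.mean() / (sigma_z * np.sqrt(2 * np.pi))
    return s_new, weight_factor

s_new, wf = lis1_step(s_prev=0.0, u=1.0, z=1.2)
```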
Monte Carlo Approximation of the Optimal Proposal Distribution LIS-2
Local Importance Sampling (LIS): LIS-2
Similar to LRS and LIS-1, we have to generate M random samples {s_t^{[i,j]}}_{j=1}^{M} according to p(s_t | s_{t-1}^{[i]}, u_t)
Total weights of the generated particles {s_t^{[i,j]}}_{(i,j)=(1,1)}^{(N,M)} = local weights × SIS weights
Instead of eliminating the local weights through local resamplings (LIS-1), the total weights are computed in LIS-2
It is proved in the thesis that the total weights equal

w(s_{0:t}^{[i,j]}) = w(s_{0:t-1}^{[i]}) p(z_t | s_t^{[i,j]})

Local weights are used in LIS-2 to filter out samples with low p(z_t | s_t^{[i,j]})
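A LIS-2 step for one parent particle can be sketched like this. The toy 1-D Gaussian models are assumptions, and the keep-the-best rule below is only one possible filtering criterion (the thesis's exact rule for discarding low-likelihood samples may differ):

```python
import numpy as np

rng = np.random.default_rng(3)

def lis2_step(s_prev, w_prev, u, z, M=3, sigma_s=0.5, sigma_z=0.3):
    """One LIS-2 step for one parent particle (toy 1-D Gaussian models).

    Each of the M local samples carries the total weight
    w(s_{0:t}^{[i,j]}) = w(s_{0:t-1}^{[i]}) * p(z_t | s_t^{[i,j]});
    local weights are used only to filter out low-likelihood samples,
    here by keeping the single best candidate.
    """
    local = s_prev + u + rng.normal(0.0, sigma_s, size=M)   # s_t^{[i,j]}
    lik = np.exp(-0.5 * ((z - local) / sigma_z) ** 2) / (sigma_z * np.sqrt(2 * np.pi))
    total = w_prev * lik                                    # total weights
    j = int(np.argmax(lik))                                 # discard low p(z_t | s_t^{[i,j]})
    return local[j], total[j]

s_new, w_new = lis2_step(s_prev=0.0, w_prev=1.0, u=1.0, z=1.2)
```

Because no local resampling is needed, M can stay very small (M = 3 in the experiments below).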
Results
Table of Contents
1 Introduction: SLAM Problem; Bayesian Filtering; Particle Filter
2 RBPF-SLAM
3 Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction; LRS; LIS-1; LIS-2
4 Results: Simulation Results; Experiments On Real Data
5 Conclusion
6 Future Work
Results Simulation Results
Simulation
50 Monte Carlo runs with different seeds
200m × 200m simulated environment
[Plot: the 200 m × 200 m simulated environment; both axes in meters]
Results Simulation Results
Number of Resamplings
Table: Average number of resampling steps over 50 MC simulations
  N    FastSLAM 2.0    LRS (M = 50)    LIS-2 (M = 3)
 20          221.01          180.84           188.85
 30          229.79          186              193.13
 40          235.71          186.55           195.33
 50          237.80          188.96           196.32
 60          239.55          189.62           195.99
 70          240.10          189.40           197.85
 80          243.61          190.39           197.96
 90          244.16          190.94           198.73
100          245.70          192.69           199.50
Results Simulation Results
Runtime
Table: Average Runtime over 50 MC simulations
  N    FastSLAM 2.0 (sec)    LRS, M = 50 (sec)    LIS-2, M = 3 (sec)
 20                    36                  318                    38
 30                    52                  480                    56
 40                    68                  635                    75
 50                    84                  794                    93
 60                   103                  953                   112
 70                   117                 1106                   130
 80                   134                 1265                   148
 90                   149                 1413                   165
100                   167                 1588                   183
Results Simulation Results
Mean Square Error
[Plot: average MSE over time (m²) vs. number of particles, for FastSLAM 2.0, LRS, and LIS-2]

Figure: Average MSE of estimated robot pose over time. The parameter M is set to 50 for LRS and 3 for LIS-2.
Results Simulation Results
Mean Square Error Cont’d.
[Plot: average MSE over 50 MC simulations (m²) vs. time (sec), for FastSLAM 2.0, LRS (L = 50), LRS (L = 20), LIS-2 (L = 20), LIS-2 (L = 3)]

Figure: Average MSE of estimated robot position over 50 Monte Carlo simulations.
Results Experiments On Real Data
Victoria Park Dataset
Victoria Park dataset by E. Nebot and J. Guivant
Large environment: 200 m × 300 m
More than 108,000 controls and observations

[Plot: estimated robot path; axes in meters]

Figure: Estimated Robot Path
Conclusion
Table of Contents
1 Introduction: SLAM Problem; Bayesian Filtering; Particle Filter
2 RBPF-SLAM
3 Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction; LRS; LIS-1; LIS-2
4 Results: Simulation Results; Experiments On Real Data
5 Conclusion
6 Future Work
Conclusion
Conclusion
LRS and LIS can describe non-Gaussian (e.g. multi-modal) proposal distributions
Nonlinear motion models
Linearization error
LRS and LIS have control over the accuracy of the approximation through M
Much lower number of resampling steps in LRS and LIS-2 than in FastSLAM 2.0 ⇒ slower rate of degeneracy ⇒ better approximation of the optimal proposal distribution ⇒ less severe sample impoverishment
Monte Carlo integration in LRS and LIS-1 ⇒ large M ⇒ high computational cost
The accurate results of LRS come at the cost of a large runtime
LIS-2 (M = 3) outperforms FastSLAM 2.0 and LRS (M = 50) for a moderate number of particles (N)
Future Work
Table of Contents
1 Introduction: SLAM Problem; Bayesian Filtering; Particle Filter
2 RBPF-SLAM
3 Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction; LRS; LIS-1; LIS-2
4 Results: Simulation Results; Experiments On Real Data
5 Conclusion
6 Future Work
Future Work
Future Work
How to guess an “appropriate” value for M?
More Monte Carlo runs
Hybrid algorithm (linearization + Monte Carlo sampling)
Repeat the simulations for the case of unknown data association
Papers
Papers
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2011: Monte-Carlo Approximation of The Optimal Proposal Distribution in RBPF-SLAM (submitted)
. . .
Thank You
Thank You . . .
Figure: Melon: The Semi-autonomous Mobile Robot of K.N. Toosi Univ. of Tech.
Thank You
Thank you . . .
Thank You
Rao-Blackwellized Particle Filter in SLAM
SLAM is a very high-dimensional problem
Estimating p(s_{0:t}, θ | z_{1:t}, u_{1:t}) using a particle filter can be very inefficient
We can factor the smoothing distribution into two parts as

p(s_{0:t}, θ | z_{1:t}, u_{1:t}) = p(s_{0:t} | z_{1:t}, u_{1:t}) ∏_{k=1}^{M} p(θ_k | s_{0:t}, z_{1:t}, u_{1:t})

We can estimate p(s_{0:t} | z_{1:t}, u_{1:t}) using a particle filter
p(θ_k | s_{0:t}^{[i]}, z_{1:t}, u_{1:t}) can be estimated using an EKF for each robot path particle s_{0:t}^{[i]}
A map (estimated locations of the features) is attached to each robot path particle
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 40 / 43
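The factorization on this slide can be sketched as a data structure: each particle carries one sampled robot path plus a small, independent EKF per landmark, so only the low-dimensional pose is sampled while the map is updated analytically. This is a minimal illustrative sketch, not the thesis implementation; the `Particle` class, the additive motion model, and the identity measurement model (observations measure landmark position directly) are assumptions made for brevity.

```python
import numpy as np

class Particle:
    """One RBPF-SLAM particle: a sampled path s_{0:t} and per-landmark EKFs."""

    def __init__(self, pose):
        self.path = [np.asarray(pose, dtype=float)]  # sampled robot path s_{0:t}
        self.landmarks = {}  # k -> (mean, cov): Gaussian for p(theta_k | s_{0:t}, ...)
        self.weight = 1.0

    def predict(self, motion):
        # Propagate only the robot pose (additive motion model assumed here);
        # the map needs no sampling, which is what makes RBPF efficient.
        self.path.append(self.path[-1] + np.asarray(motion, dtype=float))

    def update_landmark(self, k, z, R):
        # EKF measurement update, assuming z directly observes landmark k's
        # position with noise covariance R (identity measurement Jacobian).
        z = np.asarray(z, dtype=float)
        if k not in self.landmarks:
            self.landmarks[k] = (z, np.asarray(R, dtype=float))
            return
        mean, cov = self.landmarks[k]
        S = cov + R                        # innovation covariance
        K = cov @ np.linalg.inv(S)         # Kalman gain
        mean = mean + K @ (z - mean)
        cov = (np.eye(len(mean)) - K) @ cov
        self.landmarks[k] = (mean, cov)

p = Particle([0.0, 0.0])
p.predict([1.0, 0.0])
p.update_landmark(0, [2.0, 1.0], 0.1 * np.eye(2))
p.update_landmark(0, [2.2, 0.9], 0.1 * np.eye(2))
print(p.landmarks[0][0])  # landmark mean fused from both observations
```

With equal measurement covariances, the second update pulls the landmark mean halfway toward the new observation and halves its covariance, exactly as a per-landmark Kalman filter should.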
![Page 142: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/142.jpg)
Importance Sampling
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 41 / 43
![Page 143: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/143.jpg)
Importance Sampling (IS)
Idea
Generate samples from another distribution π(x), called the importance function, and weight these samples according to w*(x^{[i]}) = p(x^{[i]}) / π(x^{[i]}):

E_{p(x)}[h(x)] = ∫ h(x) (p(x) / π(x)) π(x) dx ≈ (1/N) ∑_{i=1}^{N} w*(x^{[i]}) h(x^{[i]})

In practice we compute importance weights w(x) proportional to p(x) / π(x) and normalize them to estimate the expected value as:

E_{p(x)}[h(x)] ≈ ∑_{i=1}^{N} ( w(x^{[i]}) / ∑_{j=1}^{N} w(x^{[j]}) ) h(x^{[i]})
But . . .
How to do this recursively in time?
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 42 / 43
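The self-normalized estimator above can be sketched concretely. For illustration only (these choices are assumptions, not from the slides): the target p is a standard normal, the importance function π is a wider normal N(0, 2²), and h(x) = x², whose true expectation under p is 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def p(x):
    # target density p(x): standard normal N(0, 1)
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def pi_fn(x):
    # importance function pi(x): N(0, 2^2), wider than the target
    return np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 2.0, size=N)          # samples drawn from pi, NOT from p
w = p(x) / pi_fn(x)                        # unnormalized importance weights w*(x)
estimate = np.sum(w * x**2) / np.sum(w)    # self-normalized IS estimate of E_p[x^2]
print(estimate)                            # close to the true value, 1.0
```

Note that only π is ever sampled; p enters only through the weight ratio, which is exactly what makes importance sampling useful when the target is easy to evaluate but hard to sample.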
![Page 147: Development and Implementation of SLAM Algorithms](https://reader034.fdocuments.net/reader034/viewer/2022042322/625d3111e51620327d743704/html5/thumbnails/147.jpg)
SIR
K. Khosoussi (ARAS) Development of SLAM Algorithms July 13, 2011 43 / 43
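Answering the previous slide's question, SIR makes importance sampling recursive: at each time step, particles drawn from the proposal are weighted and then resampled in proportion to their normalized weights, so that weight degeneracy does not accumulate over time. Below is a minimal sketch of one common low-variance resampling scheme, systematic resampling; the function name and the example weights are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Return indices of the particles that survive resampling.

    A particle with normalized weight w receives at least floor(N * w)
    copies; a single uniform draw is stratified across N equal slots,
    which keeps the variance of the resampling step low.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalize the weights
    N = len(w)
    positions = (rng.random() + np.arange(N)) / N   # one draw, N strata
    return np.searchsorted(np.cumsum(w), positions) # map strata to particles

rng = np.random.default_rng(1)
idx = systematic_resample([0.1, 0.1, 0.7, 0.1], rng)
print(idx)  # the high-weight particle (index 2) appears multiple times
```

After resampling, all weights are reset to 1/N and the filter proceeds to the next prediction step with the surviving particles.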