Mobile Robot Localization (ch. 7)
Transcript of Mobile Robot Localization (ch. 7)
• Mobile robot localization is the problem of determining the pose of a robot relative to a given map of the environment.
• Unfortunately, the pose of a robot cannot be sensed directly, at least for now. The pose has to be inferred from data.
• Is a single sensor measurement enough? Usually not: a single measurement rarely determines the pose uniquely.
• The importance of localization in robotics.
• Mobile robot localization can be seen as a problem of coordinate transformation. (One point of view.)
Mobile Robot Localization
• Localization techniques have been developed for a broad set of map representations.
– Feature-based maps, location-based maps, occupancy grid maps, etc. (What exactly are they? See Figure 7.2.)
– (You can probably guess: what is the mapping problem?)
• Remember: in the localization problem, the map is given, known, available.
• Is it hard? Not really.
Mobile Robot Localization
• Most localization algorithms are variants of the Bayes filter algorithm.
• Different representations of maps, sensor models, motion models, etc. lead to different variants.
• Here is the agenda.
Mobile Robot Localization
• We want to know different kinds of maps.
• We want to know different kinds of localization problems.
• We want to know how to solve localization problems; along the way, we also want to know how to obtain the sensor model, motion model, etc.
Mobile Robot Localization
(We want to know different kinds of maps.)
• Different kinds of maps. At a glance: feature-based, location-based, metric, topological, occupancy grid maps, etc.
• See Figure 7.2 and http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/node23.html
Mobile Robot Localization – A Taxonomy
(We want to know different kinds of localization problems.)
• Different kinds of localization problems.
• A taxonomy in 4 dimensions:
– Local versus global (initial knowledge)
– Static versus dynamic (environment)
– Passive versus active (control of the robot)
– Single-robot versus multi-robot
Mobile Robot Localization
• Solved already by the Bayes filter algorithm. How?
• The straightforward application of Bayes filters to the localization problem is called Markov localization.
• Here is the algorithm (in abstract form).
Mobile Robot Localization
• Algorithm Bayes_filter($bel(x_{t-1})$, $u_t$, $z_t$):
•   for all $x_t$ do
•     $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
•     $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
•   endfor
•   return $bel(x_t)$
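For a finite state space the integral in the prediction step becomes a sum, and the abstract algorithm can be sketched directly. This is only an illustration: the corridor, the motion model, and the door sensor below are made-up toy assumptions, not anything from the text.

```python
# Discrete Bayes filter: the integral over x_{t-1} becomes a sum, and eta
# normalizes the posterior so it remains a probability distribution.

def bayes_filter(bel, states, motion_model, measurement_model, u, z):
    # Prediction: bel_bar(x_t) = sum_{x_{t-1}} p(x_t | u, x_{t-1}) bel(x_{t-1})
    bel_bar = [sum(motion_model(x_t, u, x_prev) * bel[i]
                   for i, x_prev in enumerate(states))
               for x_t in states]
    # Correction: bel(x_t) = eta * p(z | x_t) * bel_bar(x_t)
    unnorm = [measurement_model(z, x_t) * bel_bar[j]
              for j, x_t in enumerate(states)]
    eta = 1.0 / sum(unnorm)
    return [eta * p for p in unnorm]

# Toy example (hypothetical): 5-cell circular corridor; the robot moves one
# cell right with prob. 0.8 or stays put with prob. 0.2; a noisy sensor
# reports "door" at cells 1 and 3.
def motion_model(x_t, u, x_prev):
    if u == "right":
        if x_t == (x_prev + 1) % 5:
            return 0.8
        if x_t == x_prev:
            return 0.2
    return 0.0

def measurement_model(z, x_t):
    at_door = x_t in (1, 3)
    if z == "door":
        return 0.9 if at_door else 0.1
    return 0.1 if at_door else 0.9

states = [0, 1, 2, 3, 4]
bel = [0.2] * 5                      # uniform initial belief (global localization)
bel = bayes_filter(bel, states, motion_model, measurement_model,
                   u="right", z="door")
print(round(sum(bel), 6))            # the belief is still a distribution
```

After one cycle the belief concentrates on the door cells, which is exactly the behavior Figure 7.5 illustrates.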
Mobile Robot Localization
• Algorithm Markov_localization($bel(x_{t-1})$, $u_t$, $z_t$, $m$):
•   for all $x_t$ do
•     $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}, m)\, bel(x_{t-1})\, dx_{t-1}$
•     $bel(x_t) = \eta\, p(z_t \mid x_t, m)\, \overline{bel}(x_t)$
•   endfor
•   return $bel(x_t)$
• The Markov localization algorithm addresses the global localization problem, the position tracking problem, and the kidnapped robot problem in static environments.
Mobile Robot Localization
• Revisit Figure 7.5 to see how the Markov localization algorithm works.
• The algorithm Markov_localization is still very abstract. To put it to work (e.g., in your project), we need a lot more background knowledge to realize the motion model, sensor model, etc.
• We start with Gaussian filters, the best-known of which is the Kalman filter.
Bayes Filter Implementations (1)
Kalman Filter (Gaussian filters)
(back to Ch.3)
Kalman Filter
• The discovery of the KF is one of the greatest discoveries in the history of statistical estimation theory, and possibly the greatest of the 20th century.
• It has been used in many application areas ever since Rudolf Kalman published the idea in 1960.
• The KF has made it possible for humans to do things that would not have been possible without it.
• In modern technology, KFs have become as indispensable as silicon chips.
Bayes Filter Reminder
• Prediction: $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
• Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
Gaussians
• Univariate: $p(x) \sim N(\mu, \sigma^2)$:
$p(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
• Multivariate: $p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:
$p(\mathbf{x}) = \dfrac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}$
• Linear transformation: $X \sim N(\mu, \sigma^2),\; Y = aX + b \;\Rightarrow\; Y \sim N(a\mu + b,\, a^2\sigma^2)$
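The density formula and the linear-transformation property can be checked numerically. A minimal sketch using only the standard library (the sample size, seed, and parameter values are arbitrary choices):

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    # Univariate normal density N(x; mu, sigma^2)
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Empirical check of: X ~ N(mu, sigma^2), Y = aX + b  =>  Y ~ N(a*mu + b, a^2*sigma^2)
random.seed(0)
mu, sigma, a, b = 2.0, 1.5, 3.0, -1.0
ys = [a * random.gauss(mu, sigma) + b for _ in range(200_000)]
mean_y = sum(ys) / len(ys)
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)
# mean_y should land near a*mu + b = 5.0 and var_y near a^2*sigma^2 = 20.25
print(round(mean_y, 2), round(var_y, 2))
```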
Properties of Gaussians
• Product: $X_1 \sim N(\mu_1, \sigma_1^2),\; X_2 \sim N(\mu_2, \sigma_2^2) \;\Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\dfrac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,\mu_1 + \dfrac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,\mu_2,\; \dfrac{1}{\sigma_1^{-2}+\sigma_2^{-2}}\right)$
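The product rule is the fusion step used later in the Kalman filter correction: the mean is a precision-weighted average of the two means, and the variance shrinks below both inputs. A minimal sketch of the formula (the numbers in the example are arbitrary):

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    # Product of N(mu1, var1) and N(mu2, var2), renormalized:
    # mean = precision-weighted average, variance = 1 / (1/var1 + 1/var2)
    mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return mu, var

# Two equally uncertain estimates: the fused mean is halfway, the variance halves.
mu, var = fuse_gaussians(0.0, 4.0, 10.0, 4.0)
print(mu, var)  # 5.0 2.0
```

Note that the fused variance (2.0) is smaller than either input variance (4.0): combining two Gaussian estimates always reduces uncertainty.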
• We stay in the “Gaussian world” as long as we start with Gaussians and perform only linear transformations.
• Review your probability textbook
Multivariate Gaussians
• Linear transformation: $X \sim N(\mu, \Sigma),\; Y = AX + B \;\Rightarrow\; Y \sim N(A\mu + B,\, A\Sigma A^T)$
• Product: $X_1 \sim N(\mu_1, \Sigma_1),\; X_2 \sim N(\mu_2, \Sigma_2) \;\Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\dfrac{\Sigma_2}{\Sigma_1+\Sigma_2}\,\mu_1 + \dfrac{\Sigma_1}{\Sigma_1+\Sigma_2}\,\mu_2,\; \dfrac{1}{\Sigma_1^{-1}+\Sigma_2^{-1}}\right)$ (the matrix "division" is shorthand for multiplication by the inverse)
http://en.wikipedia.org/wiki/Multivariate_normal_distribution
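The transformation rule $Y = AX + B \Rightarrow Y \sim N(A\mu + B,\, A\Sigma A^T)$ maps directly onto matrix code. A minimal NumPy sketch (the particular matrices are arbitrary illustrative values):

```python
import numpy as np

def transform_gaussian(mu, Sigma, A, B):
    # Push N(mu, Sigma) through the linear map Y = A X + B:
    # the mean is mapped by the transform, the covariance by A Sigma A^T.
    return A @ mu + B, A @ Sigma @ A.T

mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.0],
                  [0.0, 4.0]])
A = np.array([[2.0, 0.0],      # stretch the first axis, shrink the second
              [0.0, 0.5]])
B = np.array([1.0, 1.0])

mu_y, Sigma_y = transform_gaussian(mu, Sigma, A, B)
print(mu_y)     # [3. 2.]
print(Sigma_y)  # diag(4.0, 1.0)
```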
Kalman Filter
• Estimates the state $x$ of a discrete-time controlled process that is governed by the linear stochastic difference equation
$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$
with a measurement
$z_t = C_t x_t + \delta_t$
Components of a Kalman Filter
• $A_t$: matrix ($n \times n$) that describes how the state evolves from $t-1$ to $t$ without controls or noise.
• $B_t$: matrix ($n \times l$) that describes how the control $u_t$ changes the state from $t-1$ to $t$.
• $C_t$: matrix ($k \times n$) that describes how to map the state $x_t$ to an observation $z_t$.
• $\varepsilon_t$, $\delta_t$: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances $R_t$ and $Q_t$, respectively.
Kalman Filter Algorithm
1. Algorithm Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$
5. Correction:
6. $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$
7. $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$
8. $\Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t$
9. Return $\mu_t$, $\Sigma_t$
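The algorithm translates almost one-to-one into NumPy. This is a generic sketch, not tied to any particular robot; the toy 1D example at the bottom uses arbitrary values.

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction step: propagate mean and covariance through the dynamics.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction step: Kalman gain, then update mean and covariance.
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Toy 1D example: identity dynamics, unit control gain, direct observation.
mu0, Sigma0 = np.array([0.0]), np.array([[1.0]])
A = B = C = np.array([[1.0]])
R, Q = np.array([[0.25]]), np.array([[1.0]])
mu1, Sigma1 = kalman_filter(mu0, Sigma0, u=np.array([1.0]), z=np.array([2.0]),
                            A=A, B=B, C=C, R=R, Q=Q)
print(mu1, Sigma1)  # posterior mean lies between prediction (1.0) and measurement (2.0)
```

Here the prediction gives $\bar\mu = 1.0$, $\bar\Sigma = 1.25$, and the measurement $z = 2.0$ pulls the mean toward itself while shrinking the covariance.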
Kalman Filter Updates in 1D
• Correction: $bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t \end{cases}$ with $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$
• In 1D: $bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t (z_t - \bar{\mu}_t) \\ \sigma_t^2 = (1 - K_t)\, \bar{\sigma}_t^2 \end{cases}$ with $K_t = \dfrac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}$
Kalman Filter Updates in 1D
• Prediction: $\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t \end{cases}$
• In 1D: $\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t \\ \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2 \end{cases}$
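In 1D the whole cycle is a few arithmetic operations and can be traced by hand. A minimal sketch of the scalar equations (all numeric values in the example are arbitrary toy choices):

```python
def kf_1d(mu, var, u, z, a, b, var_act, var_obs):
    # One prediction-correction cycle of the scalar Kalman filter.
    # Prediction: propagate mean and variance through the 1D dynamics.
    mu_bar = a * mu + b * u
    var_bar = a * a * var + var_act
    # Correction: gain K weighs prediction variance against measurement variance.
    K = var_bar / (var_bar + var_obs)
    mu_new = mu_bar + K * (z - mu_bar)
    var_new = (1 - K) * var_bar
    return mu_new, var_new

# Robot believed at 0 with variance 1, commands one step forward (u = 1),
# then senses position 2 with measurement variance 1.
mu, var = kf_1d(0.0, 1.0, u=1.0, z=2.0, a=1.0, b=1.0, var_act=0.5, var_obs=1.0)
print(round(mu, 3), round(var, 3))  # 1.6 0.6
```

Tracing by hand: the prediction gives $\bar\mu = 1.0$, $\bar\sigma^2 = 1.5$; then $K = 1.5/2.5 = 0.6$, so the mean moves 60% of the way toward the measurement and the variance shrinks to 0.6.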
Kalman Filter Updates
Linear Gaussian Systems: Initialization
• Initial belief is normally distributed:
$bel(x_0) = N(x_0;\, \mu_0, \Sigma_0)$
Linear Gaussian Systems: Dynamics
• Dynamics are a linear function of state and control plus additive noise:
$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$
$p(x_t \mid u_t, x_{t-1}) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$
$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
with $p(x_t \mid u_t, x_{t-1}) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and $bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$
Linear Gaussian Systems: Dynamics
$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
with $p(x_t \mid u_t, x_{t-1}) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and $bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$
$\Downarrow$
$\overline{bel}(x_t) = \eta \int \exp\!\left\{ -\tfrac{1}{2} (x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t) \right\} \exp\!\left\{ -\tfrac{1}{2} (x_{t-1} - \mu_{t-1})^T \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1}) \right\} dx_{t-1}$
$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t \end{cases}$
Linear Gaussian Systems: Observations
• Observations are a linear function of the state plus additive noise:
$z_t = C_t x_t + \delta_t$
$p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, Q_t)$
Linear Gaussian Systems: Observations
$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
with $p(z_t \mid x_t) \sim N(z_t;\, C_t x_t,\, Q_t)$ and $\overline{bel}(x_t) \sim N(x_t;\, \bar{\mu}_t, \bar{\Sigma}_t)$
$\Downarrow$
$bel(x_t) = \eta\, \exp\!\left\{ -\tfrac{1}{2} (z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t) \right\} \exp\!\left\{ -\tfrac{1}{2} (x_t - \bar{\mu}_t)^T \bar{\Sigma}_t^{-1} (x_t - \bar{\mu}_t) \right\}$
$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t \end{cases}$ with $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$
See pages 45-54 for the mathematical derivation.
The Prediction-Correction-Cycle
• Prediction: $\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t \end{cases}$; in 1D: $\begin{cases} \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t \\ \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2 \end{cases}$
• Correction: $bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t \end{cases}$, $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$; in 1D: $\begin{cases} \mu_t = \bar{\mu}_t + K_t (z_t - \bar{\mu}_t) \\ \sigma_t^2 = (1 - K_t)\, \bar{\sigma}_t^2 \end{cases}$, $K_t = \dfrac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}$
• The two steps alternate: each prediction is followed by a correction, and the corrected belief feeds the next prediction.
Kalman Filter Summary
• Highly efficient: polynomial in measurement dimensionality $k$ and state dimensionality $n$: $O(k^{2.376} + n^2)$
• Optimal for linear Gaussian systems!
• However, most robotics systems are nonlinear, unfortunately!