
Page 1

16-311-Q INTRODUCTION TO ROBOTICS FALL’17

LECTURE 20: EXTENDED KALMAN FILTER

INSTRUCTOR: GIANNI A. DI CARO

Page 2


EKF FOR MAP-BASED ROBOT LOCALIZATION

Proprioceptive information

Exteroceptive sensing

1. Action/prediction update: Proprioceptive sensing

2. Perception/measurement update: Exteroceptive sensing

1. Odometry measures: EKF with only motion

2. Detection of landmarks: EKF with motion + observations

Page 3


DISCRETE-TIME MOTION EQUATIONS

\[
\xi_{k+1} =
\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \end{bmatrix}
=
\begin{bmatrix}
x_k + \Delta S_k \cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
y_k + \Delta S_k \sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
\theta_k + \Delta\theta_k
\end{bmatrix}
\]

From Runge-Kutta numeric integration of the pose-evolution kinematic equations. Assume that the odometry model is perfect, based on the measured distance ΔS and heading variation Δθ.

In the absence of specific information, the motion noise is modeled as Gaussian white noise (and the two noise components are assumed to be uncorrelated)

\[
\nu_k = \begin{bmatrix} \nu^s_k & \nu^\theta_k \end{bmatrix}^T \sim N(0, V_k),
\qquad
V_k = \begin{bmatrix} \sigma^2_{ks} & 0 \\ 0 & \sigma^2_{k\theta} \end{bmatrix}
\]

Process noise

Odometry measurements are noisy! ➔ Random noise is added to ΔS and Δθ in the motion kinematics model

\[
\xi_{k+1} =
\begin{bmatrix} x_k \\ y_k \\ \theta_k \end{bmatrix}
+
\begin{bmatrix}
(\Delta S_k + \nu^s_k)\cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
(\Delta S_k + \nu^s_k)\sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
\Delta\theta_k + \nu^\theta_k
\end{bmatrix}
\]

Discrete-time process (motion) equations
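As an illustration (not part of the original slides), a minimal Python sketch of this noisy motion model, assuming numpy and the state ordering xi = [x, y, theta]; the function and parameter names are made up for the example:

import numpy as np

def motion_step(xi, dS, dtheta, sigma_s=0.0, sigma_theta=0.0, rng=None):
    """One step of the discrete-time motion model; sigma_s, sigma_theta > 0 add the process noise."""
    rng = rng or np.random.default_rng()
    nu_s = rng.normal(0.0, sigma_s) if sigma_s > 0 else 0.0
    nu_t = rng.normal(0.0, sigma_theta) if sigma_theta > 0 else 0.0
    x, y, theta = xi
    a = theta + dtheta / 2.0 + nu_t              # heading used over the step
    return np.array([x + (dS + nu_s) * np.cos(a),
                     y + (dS + nu_s) * np.sin(a),
                     theta + dtheta + nu_t])

# e.g., a 0.1 m step with a small heading change, without and with noise
print(motion_step(np.array([0.0, 0.0, 0.0]), dS=0.1, dtheta=0.05))
print(motion_step(np.array([0.0, 0.0, 0.0]), dS=0.1, dtheta=0.05, sigma_s=0.01, sigma_theta=0.01))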

Page 4


NON-LINEARITY OF THE DISCRETE-TIME MOTION EQUATIONS

\[
\xi_{k+1} =
\begin{bmatrix} x_k \\ y_k \\ \theta_k \end{bmatrix}
+
\begin{bmatrix}
(\Delta S_k + \nu^s_k)\cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
(\Delta S_k + \nu^s_k)\sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
\Delta\theta_k + \nu^\theta_k
\end{bmatrix}
\]

Discrete-time process (motion) equations

\[
\xi_{k+1} = f(\xi_k, \Delta S_k, \Delta\theta_k, \nu_k), \qquad
\nu_k = \begin{bmatrix} \nu^s_k & \nu^\theta_k \end{bmatrix}^T \sim N(0, V_k)
\]

The process dynamics function f() is not linear ➜ the process equations do not meet the linearity requirement for using the Kalman filter

Linearize the pose evolution f() in the neighborhood of [ξ̂_k|k, u_k, (ν_k = 0)]: the current state estimate, the controls (ΔS_k and Δθ_k), and the mean of the process noise

\[
f(\xi_k, u_k, \nu_k) = f(\xi, u, \nu)\big|_{\hat{\xi}_{k|k}, u_k, 0}
+ (\xi_k - \hat{\xi}_{k|k})\, F_\xi\big|_{\hat{\xi}_{k|k}, u_k, 0}
+ (\nu_k - 0)\, F_\nu\big|_{\hat{\xi}_{k|k}, u_k, 0}
= f_k(\hat{\xi}_{k|k}, u_k, 0) + (\xi_k - \hat{\xi}_{k|k})\, F_{k\xi} + \nu_k F_{k\nu}
\]
(1st-order Taylor series)

Linear in ξk and 𝝂k
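A quick numeric check (illustrative only, not from the slides) that the 1st-order expansion is accurate near the linearization point. It assumes numpy, approximates the Jacobian with central differences, and uses the column-vector convention F @ delta, which is equivalent to the row form (ξ − ξ̂)F written above:

import numpy as np

def f(xi, dS, dtheta):
    """Noise-free discrete-time motion model from the previous slide."""
    x, y, theta = xi
    a = theta + dtheta / 2.0
    return np.array([x + dS * np.cos(a), y + dS * np.sin(a), theta + dtheta])

def numeric_jacobian(fun, x0, eps=1e-6):
    """Central-difference Jacobian of fun at x0 (rows: outputs, cols: inputs)."""
    cols = []
    for i in range(len(x0)):
        dx = np.zeros(len(x0)); dx[i] = eps
        cols.append((fun(x0 + dx) - fun(x0 - dx)) / (2 * eps))
    return np.column_stack(cols)

xi_hat, dS, dth = np.array([1.0, 2.0, 0.3]), 0.1, 0.05
F_xi = numeric_jacobian(lambda xi: f(xi, dS, dth), xi_hat)
delta = np.array([0.01, -0.02, 0.005])              # small deviation from the estimate
exact = f(xi_hat + delta, dS, dth)
linear = f(xi_hat, dS, dth) + F_xi @ delta           # 1st-order Taylor prediction
print(np.abs(exact - linear).max())                  # small: the error is second order in delta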

Page 5


EXTENDED KALMAN FILTER (EKF): LINEARIZED MOTION MODEL

Linear(ized) discrete-time process (motion) equations

\[
\xi_{k+1} = f_k(\hat{\xi}_{k|k}, u_k, 0) + (\xi_k - \hat{\xi}_{k|k})\, F_{k\xi} + \nu_k F_{k\nu}
\]

Scenario (prediction from motion): the robot moves, but no external observations are made. Proprioceptive measures from the on-board odometry sensors are used to model the robot's motion dynamics, avoiding the need to consider the direct control inputs.

Linearization of the motion dynamics uses the Jacobians F_kξ and F_kν, which have to be evaluated at (ξ_k = ξ̂_k|k, u_k, ν_k = 0)

➔ Rules for linear transformations of mean and (co)variance of Gaussian variables can be applied

Extended Kalman Filter (EKF), motion only. The EKF equations:

Prediction update
\[
\begin{cases}
\hat{\xi}_{k+1|k} = f_k(\hat{\xi}_{k|k}, 0; \Delta S_k, \Delta\theta_k) + (\hat{\xi}_{k|k} - \hat{\xi}_{k|k})\, F_\xi\big|_{\hat{\xi}_{k|k}, u_k, 0} & \text{(State prediction; the second term is } 0\text{)} \\
P_{k+1|k} = F_{k\xi} P_k F_{k\xi}^T + F_{k\nu} V_k F_{k\nu}^T & \text{(Covariance prediction)}
\end{cases}
\]

Measurement correction
\[
\begin{cases}
\hat{\xi}_{k+1} = \hat{\xi}_{k+1|k} + G_{k+1}\big(z_{k+1} - C_{k+1}\hat{\xi}_{k+1|k}\big) & \text{(State update)} \\
P_{k+1} = P_{k+1|k} - G_{k+1} C_{k+1} P_{k+1|k} & \text{(Covariance update)} \\
G_{k+1} = P_{k+1|k} C_{k+1}^T \big(C_{k+1} P_{k+1|k} C_{k+1}^T + W_{k+1}\big)^{-1} & \text{(Kalman gain)}
\end{cases}
\]
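A minimal sketch (assuming numpy and a column-vector convention) of the prediction update above for the motion-only case; the Jacobians F_kξ and F_kν used here are the ones derived on the next slide:

import numpy as np

def ekf_predict(xi_hat, P, dS, dtheta, V):
    """One EKF prediction step: xi_hat (3,) pose estimate, P (3,3) covariance, V (2,2) process noise."""
    x, y, theta = xi_hat
    a = theta + dtheta / 2.0
    xi_pred = np.array([x + dS * np.cos(a), y + dS * np.sin(a), theta + dtheta])
    F_xi = np.array([[1.0, 0.0, -dS * np.sin(a)],
                     [0.0, 1.0,  dS * np.cos(a)],
                     [0.0, 0.0,  1.0]])
    F_nu = np.array([[np.cos(a), -dS * np.sin(a)],
                     [np.sin(a),  dS * np.cos(a)],
                     [0.0,        1.0]])
    P_pred = F_xi @ P @ F_xi.T + F_nu @ V @ F_nu.T   # covariance prediction
    return xi_pred, P_pred

# Example call with made-up numbers
xi_p, P_p = ekf_predict(np.zeros(3), np.eye(3) * 1e-3, dS=0.1, dtheta=0.05, V=np.diag([1e-4, 1e-4]))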

Page 6


EKF JACOBIANS FOR THE LINEARIZED MOTION MODEL

The Jacobian of the non-linear function f() is computed at [ξ̂_k|k, u_k, (ν_k = 0)]: the current state estimate (the mean), the current controls, and the mean of the Gaussian noise

f() is a vector function with three function components:

\[
\begin{aligned}
f_{kx} &= x_k + (\Delta S_k + \nu^s_k)\cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
f_{ky} &= y_k + (\Delta S_k + \nu^s_k)\sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
f_{k\theta} &= \theta_k + \Delta\theta_k + \nu^\theta_k
\end{aligned}
\]

The Jacobian matrix of f:

\[
F_k(x_k, y_k, \theta_k, \nu^s_k, \nu^\theta_k) =
\begin{bmatrix} \nabla f_{kx} & \nabla f_{ky} & \nabla f_{k\theta} \end{bmatrix}^T
=
\begin{bmatrix}
\frac{\partial f_{kx}}{\partial x_k} & \frac{\partial f_{kx}}{\partial y_k} & \frac{\partial f_{kx}}{\partial \theta_k} & \frac{\partial f_{kx}}{\partial \nu^s_k} & \frac{\partial f_{kx}}{\partial \nu^\theta_k} \\
\frac{\partial f_{ky}}{\partial x_k} & \frac{\partial f_{ky}}{\partial y_k} & \frac{\partial f_{ky}}{\partial \theta_k} & \frac{\partial f_{ky}}{\partial \nu^s_k} & \frac{\partial f_{ky}}{\partial \nu^\theta_k} \\
\frac{\partial f_{k\theta}}{\partial x_k} & \frac{\partial f_{k\theta}}{\partial y_k} & \frac{\partial f_{k\theta}}{\partial \theta_k} & \frac{\partial f_{k\theta}}{\partial \nu^s_k} & \frac{\partial f_{k\theta}}{\partial \nu^\theta_k}
\end{bmatrix}
=
\begin{bmatrix} F_{k\xi} & F_{k\nu} \end{bmatrix}
\]

\[
F_{k\xi} =
\begin{bmatrix}
1 & 0 & -\Delta S_k \sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
0 & 1 & \Delta S_k \cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
0 & 0 & 1
\end{bmatrix}_{\hat{\xi}_{k|k},\, u_k,\, \nu=0}
\qquad
F_{k\nu} =
\begin{bmatrix}
\cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) & -\Delta S_k \sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
\sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) & \Delta S_k \cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
0 & 1
\end{bmatrix}_{\hat{\xi}_{k|k},\, u_k,\, \nu=0}
\]
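As a cross-check (an illustration, assuming sympy is available), the two Jacobians can be re-derived symbolically from the noisy motion model and compared against the matrices above:

import sympy as sp

x, y, th, dS, dth, nu_s, nu_t = sp.symbols('x y theta DeltaS Deltatheta nu_s nu_theta')
a = th + dth / 2 + nu_t
f = sp.Matrix([x + (dS + nu_s) * sp.cos(a),
               y + (dS + nu_s) * sp.sin(a),
               th + dth + nu_t])
F_xi = f.jacobian([x, y, th]).subs({nu_s: 0, nu_t: 0})   # Jacobian w.r.t. the state, at nu = 0
F_nu = f.jacobian([nu_s, nu_t]).subs({nu_s: 0, nu_t: 0})  # Jacobian w.r.t. the noise, at nu = 0
sp.pprint(F_xi)   # matches F_kxi above
sp.pprint(F_nu)   # matches F_knu above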


Page 7

RECAP ON DERIVATIVES, GRADIENTS, AND JACOBIANS

Def. Derivative: Given a scalar function f : X ⊆ ℝ → ℝ, if the limit
\[
\lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}
\]
exists and takes a finite value, f is differentiable in x_0 ∈ X and the value of the limit is the derivative of the function in x_0, which is also indicated with f'(x_0) := df/dx (x_0).

Geometric interpretation: the derivative is the slope of the tangent to the graph of f at the point (x_0, f(x_0)). This can be shown by considering the line passing through the two points (x_0, f(x_0)) and (x_0 + h, f(x_0 + h)) belonging to the graph of f, whose slope is m = [f(x_0 + h) − f(x_0)] / [(x_0 + h) − x_0]. If h → 0, the secant to the curve overlaps with the tangent in x_0. That is, the equation of the tangent to f in x_0 is y = f(x_0) + f'(x_0)(x − x_0), which is precisely the first-order Taylor series computed in x_0.

[Figure: graph of f(x) with the secant through (x_0, f(x_0)) and (x_0 + h, f(x_0 + h)); as h → 0 the secant becomes the tangent at x_0]


Page 8

RECAP ON DERIVATIVES, GRADIENTS, AND JACOBIANS

Gradient: the "derivative" for scalar functions of multiple variables → normal to the tangent hyperplane to the function graph. Given a scalar, differentiable, multi-variable function f : ℝⁿ → ℝ, its gradient is the vector of its partial derivatives:

\[
\nabla f(x_1, x_2, \ldots, x_n) \;\overset{\text{def}}{=}\;
\Big(\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n}\Big)
= \frac{\partial f}{\partial x_1}\, e_1 + \frac{\partial f}{\partial x_2}\, e_2 + \ldots + \frac{\partial f}{\partial x_n}\, e_n
\]

For f : X ⊆ ℝⁿ → ℝ, the Taylor series becomes:

\[
f(x)\big|_{x_0} = \sum_{|k| \ge 0} \frac{1}{k!}\, \partial^k\!\big[f(x_0)\big]\,(x - x_0)^k
\]

where k is a multi-index, an integer-valued vector k = (k_1, k_2, ..., k_n), k_i ∈ ℤ⁺, and ∂^k f means \(\partial_1^{k_1} f\, \partial_2^{k_2} f \cdots \partial_n^{k_n} f\), where \(\partial_i^j f = \frac{\partial^j f}{\partial x_i^j}\). The 2nd-order polynomial is:

\[
f(x) = f(x_0) + \nabla f(x_0)^T (x - x_0) + \tfrac{1}{2}\,(x - x_0)^T H\big(f(x_0)\big)\,(x - x_0)
\]

Removing the quadratic part, the linear approximation is obtained, that is, the equation of the tangent hyperplane in x_0, where the gradient is normal to the tangent hyperplane

Jacobian: the "gradient" for vector functions of multiple variables → each function component has a tangent hyperplane to the function graph → a map of tangent hyperplanes
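A small sketch (assuming sympy; the sample function f(x1, x2) = sin(x1)·x2² is an arbitrary choice) of the objects recapped on this slide: gradient, Hessian, and the 2nd-order Taylor polynomial at a point x0:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.sin(x1) * x2**2
grad = sp.Matrix([sp.diff(f, v) for v in (x1, x2)])           # gradient vector
H = sp.hessian(f, (x1, x2))                                    # Hessian matrix
x0 = {x1: 0, x2: 1}                                            # expansion point
dx = sp.Matrix([x1 - 0, x2 - 1])
taylor2 = (f.subs(x0)
           + (grad.subs(x0).T * dx)[0]
           + sp.Rational(1, 2) * (dx.T * H.subs(x0) * dx)[0])  # 2nd-order expansion
print(sp.expand(taylor2))   # the linear part alone gives the tangent (hyper)plane at x0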


Page 9

ABOUT THE LINEARIZATION IN THE EKF: LINEAR CASE


Page 10

ABOUT THE LINEARIZATION IN THE EKF: NON-LINEAR CASE


Page 11


EFFECT OF EKF LINEARIZATION

Page 12

EFFECT OF EKF LINEARIZATION


Page 13

EFFECT OF EKF LINEARIZATION


Page 14

ERROR IN LOCALIZATION KEEPS GROWING

The ellipses in the plot show the error in (x, y); the error in θ (the third component of the covariance matrix) also grows, but usually less than the error in (x, y)


Page 15

UNCERTAINTY VS. MAGNITUDE OF PROCESS COVARIANCE

The magnitude of the total uncertainty, including both position and heading, is quantified by √det(P̂), shown in the plot for different values of V = αV₀, α = {0.5, 1, 2}


Page 16


EKF FOR MAP-BASED ROBOT LOCALIZATION

Proprioceptive information

Exteroceptive sensing

1. Action/prediction update: Proprioceptive sensing

2. Perception/measurement update: Exteroceptive sensing

1. Odometry measures: EKF with only motion

2. Detection of landmarks: EKF with motion + observations

Page 17

USING MAPS TO REDUCE ERRORS

Exteroceptive measures are needed in the filter to reduce pose uncertainty

A map is provided to the robot: a list of objects in the environment along with their properties

Let's consider the case in which the map contains n fixed landmarks with their positions. Each landmark is identifiable by the robot through a set of detectable features


Page 18

LANDMARK DETECTION

The robot is equipped with (range finder) sensors that provide observations of the landmarks with respect to the robot, as described by the observation model:

\[
z_{k+1} = h_k(\xi_k, w_k; \lambda^i_k)
\]

λ^i_k = [λ^i_kx λ^i_ky]^T is the known (from the map) location in the world frame of the landmark observed at time step k, w_k models sensing errors, and ξ_k = [x_k y_k θ_k]^T

Using its range sensor, the robot performs the measure z_{k+1} = [ρ_k φ_k]^T relative to landmark i detected at step k: ρ_k is the range, φ_k is the bearing angle of the landmark with respect to the robot (i.e., the landmark's position expressed in polar coordinates in the robot's local frame). In the considered scenario, an observation also returns the identity i of the sensed landmark.

In more general terms, the observation of the landmarks is performed through the observation of a feature vector (e.g., a set of geometric features like line or arc segments), which in turn needs to be associated to a specific landmark → the data association problem, to distinguish among different landmarks as well as to discard pure noise, which is not considered here. The knowledge of the identity i of the landmark allows the robot to retrieve from the map the Cartesian coordinates (λ^i_kx, λ^i_ky) of the landmark.

In the absence of specific information, the sensor noise is modeled as Gaussian white noise and the two noise components of the sensing are assumed to be uncorrelated:

\[
w_k = \begin{bmatrix} w^\rho_k & w^\phi_k \end{bmatrix}^T \sim N(0, W_k),
\qquad
W_k = \begin{bmatrix} \sigma^2_{k\rho} & 0 \\ 0 & \sigma^2_{k\phi} \end{bmatrix}
\]


Page 19

LANDMARK DETECTION AND OBSERVATION MODEL

The function h_k plays the role of f for the observations: it allows computing the predicted measurement from the predicted state ξ̂_{k+1|k}. It maps the state vector into the observation vector z_{k+1}

At time k, the observation model h_k(ξ_k, w_k; λ) returns the observation z_{k+1} that the robot is expected to make in state ξ_k, accounting for sensor noise

In the scenario, at pose ξ_k the robot is expected to detect landmark i at a defined range ρ and bearing φ, that is, through the measure z_{k+1} = (ρ, φ), possibly corrupted by white Gaussian noise

Since h_k maps the state (robot coordinates in the world reference frame) into the observation vector (polar coordinates of the landmark in the robot's reference frame), the observation model is:

\[
z_{k+1} =
\begin{bmatrix}
\sqrt{(\lambda^i_{kx} - x_k)^2 + (\lambda^i_{ky} - y_k)^2} \\
\arctan\!\big((\lambda^i_{ky} - y_k)/(\lambda^i_{kx} - x_k)\big) - \theta_k
\end{bmatrix}
+
\begin{bmatrix} w^\rho_k \\ w^\phi_k \end{bmatrix}
\]

Measurement noise in the range and the bearing is assumed uncorrelated and Gaussian:

\[
w = \begin{bmatrix} w_\rho & w_\phi \end{bmatrix}^T \sim N(0, W),
\qquad
W = \begin{bmatrix} \sigma^2_\rho & 0 \\ 0 & \sigma^2_\phi \end{bmatrix}
\]

For instance, σ_ρ = 0.1 m, σ_φ = 1°
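A minimal sketch (assuming numpy; lam = (lam_x, lam_y) stands for the landmark's map position λ^i_k) of the range-bearing observation model above; arctan2 is used instead of a plain arctan so the bearing is quadrant-safe:

import numpy as np

def observe(xi, lam, sigma_rho=0.0, sigma_phi=0.0, rng=None):
    """Expected (or, with noise sigmas > 0, simulated) range-bearing measurement of a landmark."""
    rng = rng or np.random.default_rng()
    x, y, theta = xi
    dx, dy = lam[0] - x, lam[1] - y
    rho = np.hypot(dx, dy) + (rng.normal(0.0, sigma_rho) if sigma_rho > 0 else 0.0)
    phi = np.arctan2(dy, dx) - theta + (rng.normal(0.0, sigma_phi) if sigma_phi > 0 else 0.0)
    return np.array([rho, phi])

# Example: expected measurement of a landmark at (3, 3) from pose (2, 2, 0)
print(observe(np.array([2.0, 2.0, 0.0]), (3.0, 3.0)))   # ~[1.414, 0.785]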


Page 20

INTERPRETATION OF THE MEASUREMENTS

h_k potentially changes at each time step, being parametrized by the coordinates λ^i_k = (λ^i_kx, λ^i_ky) of the specific landmark detected, whose identity i is assumed to be known/acquired

Using the observation model h_k, the robot computes the expected range and bearing angle to the detected feature based on its own predicted pose ξ̂_{k+1|k} and the known position of the landmark from the input map

Any difference between the actual observation z_{k+1} = (ρ_k, φ_k) and the estimated observation h_k(ξ̂_{k+1|k}; λ^i_k) indicates an error in the robot's position estimate: the robot isn't where it thought it was!

The difference is quantified in the Kalman filter by the innovation term:

\[
\epsilon_{k+1} = z_{k+1} - h_k(\hat{\xi}_{k+1|k}, 0; \lambda^i_{k+1})
\]

Same problem as before: h is a non-linear function of the state!


Page 21

A NUMERIC EXAMPLE

Example: at step k+1 the robot detects landmark i at a relative range of 2 m and a relative angle of 90°, that is, z_{k+1} = [2 90°]^T; from the input map, the position of landmark i is λ^i = (3, 3); the robot's predicted pose according to the current state of the Kalman filter is ξ̂_{k+1|k} = [2 2 0]^T, while its correct pose is ξ_{k+1} = [3 1 0]^T (i.e., there is no sensing error, as can be seen from the figure)

The innovation is:

\[
\epsilon_{k+1} = z_{k+1} - h_k(\hat{\xi}_{k+1|k}, 0; \lambda^i_{k+1})
= \begin{bmatrix} 2 \\ 90^\circ \end{bmatrix}
- \begin{bmatrix} \sqrt{1^2 + 1^2} \\ \arctan(1/1) - 0 \end{bmatrix}
= \begin{bmatrix} 2 - \sqrt{2} \\ 45^\circ \end{bmatrix}
\]

In [m, rad] units, the innovation is [2 − √2, 0.79]^T, so its Euclidean norm is
\[
\|\epsilon_{k+1}\| = \sqrt{(2 - \sqrt{2})^2 + 0.79^2} \approx 0.98
\]
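A tiny check (illustrative, assuming numpy) that reproduces the numbers of this example: predicted pose (2, 2, 0), landmark at (3, 3), actual measurement (2 m, 90°):

import numpy as np

z = np.array([2.0, np.deg2rad(90.0)])                     # actual measurement [m, rad]
xi_pred, lam = np.array([2.0, 2.0, 0.0]), np.array([3.0, 3.0])
dx, dy = lam - xi_pred[:2]
z_hat = np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - xi_pred[2]])   # expected measurement
eps = z - z_hat                                           # innovation
print(eps, np.linalg.norm(eps))                           # [0.586, 0.785], norm ~0.98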


Page 22

LINEARIZATION OF THE OBSERVATION MODEL

Linearized observation model in the EKF:

The 1st-order Taylor expansion of h_k() in the neighborhood of the current state estimate, parametrized by the coordinates λ_k, results in:

\[
h_k(\xi_k, w_k; \lambda_k) = h_k(\xi, w; \lambda_k)\big|_{\hat{\xi}_{k+1|k}, 0}
+ (\xi_k - \hat{\xi}_{k+1|k})\, H_\xi\big|_{\hat{\xi}_{k+1|k}, 0}
+ (w_k - 0)\, H_w\big|_{\hat{\xi}_{k+1|k}, 0}
= h_k(\hat{\xi}_{k+1|k}, 0; \lambda_k) + (\xi_k - \hat{\xi}_{k+1|k})\, H_{k\xi} + w_k H_{kw}
\]

Therefore, the observation predictions become linear and can be used in the EKF equations below, with H, the Jacobian of h, playing the role of the matrix C

Prediction update
\[
\begin{cases}
\hat{\xi}_{k+1|k} = f_k(\hat{\xi}_{k|k}, u_k, 0) & \text{(State prediction)} \\
P_{k+1|k} = F_{k\xi} P_k F_{k\xi}^T + F_{k\nu} V_k F_{k\nu}^T & \text{(Covariance prediction)}
\end{cases}
\]

Measurement correction
\[
\begin{cases}
\hat{\xi}_{k+1} = \hat{\xi}_{k+1|k} + G_{k+1}\big(z_{k+1} - h_k(\hat{\xi}_{k+1|k}, 0; \lambda^i_k)\big) & \text{(State update)} \\
P_{k+1} = P_{k+1|k} - G_{k+1} H_{k\xi} P_{k+1|k} & \text{(Covariance update)} \\
G_{k+1} = P_{k+1|k} H_{k\xi}^T S_{k+1}^{-1} & \text{(Kalman gain)} \\
S_{k+1} = H_{k\xi} P_{k+1|k} H_{k\xi}^T + H_{kw} W_{k+1} H_{kw}^T &
\end{cases}
\]
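A minimal sketch (assuming numpy and a column-vector convention) of the measurement correction above: innovation, innovation covariance S, Kalman gain G, and the state/covariance update. The Jacobian H_kξ used here is the one derived on the next slide, wrap_angle is a small helper (not from the slides) that keeps the bearing innovation in (−π, π], and lam is the landmark's map position:

import numpy as np

def wrap_angle(a):
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ekf_correct(xi_pred, P_pred, z, lam, W):
    """One EKF measurement correction from a single range-bearing landmark observation."""
    x, y, theta = xi_pred
    dx, dy = lam[0] - x, lam[1] - y
    r = np.hypot(dx, dy)
    z_hat = np.array([r, np.arctan2(dy, dx) - theta])        # expected measurement h(xi_pred)
    H_xi = np.array([[-dx / r,    -dy / r,     0.0],          # Jacobian of h w.r.t. the state
                     [ dy / r**2, -dx / r**2, -1.0]])
    H_w = np.eye(2)                                            # Jacobian of h w.r.t. the noise
    eps = z - z_hat
    eps[1] = wrap_angle(eps[1])                                # innovation
    S = H_xi @ P_pred @ H_xi.T + H_w @ W @ H_w.T               # innovation covariance
    G = P_pred @ H_xi.T @ np.linalg.inv(S)                     # Kalman gain
    xi_new = xi_pred + G @ eps                                 # state update
    P_new = P_pred - G @ H_xi @ P_pred                         # covariance update
    return xi_new, P_new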


Page 23

JACOBIANS FOR THE LINEARIZED OBSERVATION MODEL

The Jacobian of the non-linear function h_k is computed at the mean of the Gaussian measurement noise (w = 0) and at the current state estimate ξ̂_{k+1|k} (which corresponds to the estimated mean of the Gaussian distribution of the state variable). Let's adopt a notation similar to the one used before for f to express the function h_k, defining h_k = [h_kρ h_kφ]^T and including the sensor noise:

\[
\begin{aligned}
h_{k\rho} &= \sqrt{(\lambda^i_{kx} - x_k)^2 + (\lambda^i_{ky} - y_k)^2} + w^\rho_k \\
h_{k\phi} &= \arctan\!\big((\lambda^i_{ky} - y_k)/(\lambda^i_{kx} - x_k)\big) - \theta_k + w^\phi_k
\end{aligned}
\]

The Jacobian matrix of h_k is therefore:

\[
H_k(x_k, y_k, \theta_k, w^\rho_k, w^\phi_k) =
\begin{bmatrix} \nabla h_{k\rho} & \nabla h_{k\phi} \end{bmatrix}^T
=
\begin{bmatrix}
\frac{\partial h_{k\rho}}{\partial x_k} & \frac{\partial h_{k\rho}}{\partial y_k} & \frac{\partial h_{k\rho}}{\partial \theta_k} & \frac{\partial h_{k\rho}}{\partial w^\rho_k} & \frac{\partial h_{k\rho}}{\partial w^\phi_k} \\
\frac{\partial h_{k\phi}}{\partial x_k} & \frac{\partial h_{k\phi}}{\partial y_k} & \frac{\partial h_{k\phi}}{\partial \theta_k} & \frac{\partial h_{k\phi}}{\partial w^\rho_k} & \frac{\partial h_{k\phi}}{\partial w^\phi_k}
\end{bmatrix}
=
\begin{bmatrix} H_{k\xi} & H_{kw} \end{bmatrix}
\]

\[
H_{k\xi} =
\begin{bmatrix}
-\dfrac{\lambda^i_{kx} - x_k}{r^i_k} & -\dfrac{\lambda^i_{ky} - y_k}{r^i_k} & 0 \\[2mm]
\dfrac{\lambda^i_{ky} - y_k}{(r^i_k)^2} & -\dfrac{\lambda^i_{kx} - x_k}{(r^i_k)^2} & -1
\end{bmatrix}_{\hat{\xi}_{k+1|k},\, w=0}
\qquad
H_{kw} =
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\]

where r^i_k is the distance of landmark i from the predicted state: r^i_k = √((λ^i_kx − x_k)² + (λ^i_ky − y_k)²)
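As a cross-check (an illustration, assuming sympy is available), H_kξ and H_kw can be re-derived symbolically from h_k above; lx and ly stand for λ^i_kx and λ^i_ky:

import sympy as sp

x, y, th, lx, ly, w_r, w_p = sp.symbols('x y theta lambda_x lambda_y w_rho w_phi')
r = sp.sqrt((lx - x)**2 + (ly - y)**2)
h = sp.Matrix([r + w_r,
               sp.atan((ly - y) / (lx - x)) - th + w_p])
H_xi = sp.simplify(h.jacobian([x, y, th]).subs({w_r: 0, w_p: 0}))  # equivalent to H_kxi above
H_w = h.jacobian([w_r, w_p])                                        # identity, i.e. H_kw
sp.pprint(H_xi)
sp.pprint(H_w)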



Page 24

EXPERIMENTAL RESULTS: AN ALMOST PERFECT STATE TRACKING

n = 20 landmarks are randomly deployed in a square environment of 20×20 m²; σ_ρ = 0.1 m, σ_φ = 1°. Every n steps, a reading is performed, returning the measured range and bearing to a randomly selected landmark. This is a quite favorable scenario for the EKF.
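A compact, self-contained simulation sketch in the spirit of this experiment (assuming numpy; the robot path, the commanded step, the odometry noise values, and the initial pose are made-up illustrative choices, not the values used for the slides' plots):

import numpy as np

rng = np.random.default_rng(0)
n_lm = 20
landmarks = rng.uniform(0.0, 20.0, size=(n_lm, 2))           # n = 20 random landmarks in 20x20 m^2
sig_s, sig_t = 0.02, np.deg2rad(0.5)                          # odometry noise (assumed values)
sig_r, sig_p = 0.1, np.deg2rad(1.0)                           # sigma_rho = 0.1 m, sigma_phi = 1 deg
V, W = np.diag([sig_s**2, sig_t**2]), np.diag([sig_r**2, sig_p**2])

def wrap(a): return (a + np.pi) % (2 * np.pi) - np.pi

xi_true = np.array([10.0, 10.0, 0.0])                         # true pose
xi_hat, P = xi_true.copy(), np.eye(3) * 1e-4                  # EKF estimate and covariance

for k in range(500):
    dS, dth = 0.1, 0.02                                       # odometry reading at step k
    # true motion = odometry corrupted by the (unknown) process noise
    nu_s, nu_t = rng.normal(0, sig_s), rng.normal(0, sig_t)
    a = xi_true[2] + dth / 2 + nu_t
    xi_true = xi_true + np.array([(dS + nu_s) * np.cos(a), (dS + nu_s) * np.sin(a), dth + nu_t])
    # EKF prediction (motion only)
    ah = xi_hat[2] + dth / 2
    xi_hat = xi_hat + np.array([dS * np.cos(ah), dS * np.sin(ah), dth])
    F_xi = np.array([[1, 0, -dS * np.sin(ah)], [0, 1, dS * np.cos(ah)], [0, 0, 1]])
    F_nu = np.array([[np.cos(ah), -dS * np.sin(ah)], [np.sin(ah), dS * np.cos(ah)], [0, 1]])
    P = F_xi @ P @ F_xi.T + F_nu @ V @ F_nu.T
    # EKF correction every n steps, from one randomly selected landmark
    if k % n_lm == 0:
        lam = landmarks[rng.integers(n_lm)]
        dx, dy = lam - xi_true[:2]
        z = np.array([np.hypot(dx, dy) + rng.normal(0, sig_r),
                      wrap(np.arctan2(dy, dx) - xi_true[2] + rng.normal(0, sig_p))])
        dxh, dyh = lam - xi_hat[:2]
        r = np.hypot(dxh, dyh)
        z_hat = np.array([r, np.arctan2(dyh, dxh) - xi_hat[2]])
        H = np.array([[-dxh / r, -dyh / r, 0], [dyh / r**2, -dxh / r**2, -1]])
        S = H @ P @ H.T + W
        G = P @ H.T @ np.linalg.inv(S)
        eps = z - z_hat
        eps[1] = wrap(eps[1])
        xi_hat, P = xi_hat + G @ eps, P - G @ H @ P

print("final position error [m]:", np.linalg.norm(xi_true[:2] - xi_hat[:2]))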


Page 25

THE EVOLUTION OF THE ERROR: NO SYSTEMATIC GROWTH


Page 26

KALMAN WITH SPORADIC MEASURES: ENVIRONMENT BEACONS

Simulated run with no visible beacons.

The triangles represent the actual robot position and orientation [x(k) y(k) θ(k)]^T, the rectangles represent the estimated robot pose, and the ellipses represent the confidence in the estimates of x(k) and y(k)


Page 27

KALMAN WITH SPORADIC MEASURES: ENVIRONMENT BEACONS

Simulated run taking observations of a single wall beacon using a sonar sensor.

After the wall comes into view, the error ellipse shrinks perpendicular to the wall as the a posteriori confidence in the estimates of x(k) and y(k) increases.

Note that the only part of a smooth wall that can be "seen" by a sonar sensor is the portion of the wall that is perpendicular to the incident sonar beam.


Page 28

KALMAN WITH SPORADIC MEASURES: ENVIRONMENT BEACONS

Simulated run with localization from first one, then two wall beacons.

After the first wall comes into view, the error ellipse shrinks perpendicular to the wall as the a posteriori confidence in the estimates of x(k) and y(k) increases. The same happens with the view of the second wall, overall reducing the estimate uncertainty.


Page 29

KALMAN WITH SPORADIC MEASURES: ENVIRONMENT BEACONS

Simulated run with localization from a sequence of wall beacons

The presence of multiple wall beacons allows the estimation uncertainty to be kept very low at all times.
