Stereo Visual Odometry
Chris Beall, CVPR 2014 Visual SLAM Tutorial

Transcript (26 slides)

Page 1

Stereo Visual Odometry

Chris Beall CVPR 2014 Visual SLAM Tutorial

Page 2

OUTLINE

•  Motivation
•  Algorithm Overview
•  Feature Extraction and Matching
•  Incremental Pose Recovery
•  Practical/Robustness Considerations
•  What else do you get?
•  Results

Page 3

Motivation

•  Why stereo visual odometry?
•  Stereo avoids the scale ambiguity inherent in monocular VO
•  No need for the tricky landmark-depth initialization procedure

Page 4

Algorithm Overview

1. Rectification

2. Feature Extraction

3. Stereo Feature Matching

4. Temporal Feature Matching

5. Incremental Pose Recovery/RANSAC

Page 5

Undistortion and Rectification

Page 6

Feature Extraction

•  Detect local features in each image

•  SIFT gives good results (can also use SURF, Harris, etc.)

Lowe ICCV 1999
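A minimal sketch of this step (not from the slides), assuming an OpenCV build that ships SIFT (OpenCV >= 4.4 or opencv-contrib-python); the function name and parameters are illustrative:

```python
# Illustrative sketch: SIFT keypoint detection and description with OpenCV.
import cv2

def extract_features(gray_image, n_features=2000):
    """Detect SIFT keypoints and compute descriptors on a grayscale image."""
    sift = cv2.SIFT_create(nfeatures=n_features)
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors
```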

Page 7

Stereo Matching

•  Match features between left/right images

•  Since the images are rectified, we can restrict the search to a bounding box on the same scan-line

[Figure: feature correspondences between the left image and the right image.]
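A sketch of how the rectified-scanline constraint can be used during descriptor matching (an illustration, not the tutorial's code); keypoints are assumed to be given as (u, v) pixel arrays, and Lowe's ratio test is added to reject ambiguous matches:

```python
# Illustrative sketch: brute-force stereo matching restricted to features on
# (nearly) the same scan-line with bounded positive disparity.
import numpy as np

def stereo_match(kps_l, desc_l, kps_r, desc_r,
                 max_row_diff=1.0, max_disparity=128.0, ratio=0.8):
    """kps_*: (N, 2) arrays of (u, v) pixel positions; desc_*: (N, D) descriptors."""
    matches = []
    for i, ((u_l, v_l), d_l) in enumerate(zip(kps_l, desc_l)):
        # Rectification: the match must lie on the same scan-line, to the left of u_l
        cand = [j for j, (u_r, v_r) in enumerate(kps_r)
                if abs(v_r - v_l) <= max_row_diff
                and 0.0 < u_l - u_r <= max_disparity]
        if len(cand) < 2:
            continue
        dists = np.linalg.norm(desc_r[cand] - d_l, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:   # Lowe's ratio test
            matches.append((i, cand[best]))
    return matches
```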

Page 8

Temporal Matching

•  Match features temporally between frames t and t-1

Page 9

Relative Pose Estimation/RANSAC

•  Want to recover the incremental camera pose using the tracked features and triangulated landmarks

•  There will be some erroneous stereo and temporal feature associations, so use RANSAC (a sketch of the loop follows below):
•  Select N out of M data items at random (the minimal set here is 3)
•  Estimate the parameter (the incremental pose from t-1 to t)
•  Find the number K of data items that fit the model within a given tolerance (the inliers)
•  Repeat S times
•  Compute a refined model using the full inlier set
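A sketch of this loop (an illustrative implementation, not the tutorial's code); `solve_pose_minimal` and `residuals` are assumed helpers, a 3-point pose solver and a per-correspondence error function:

```python
# Illustrative RANSAC loop for the incremental pose from frame t-1 to frame t.
import numpy as np

def ransac_pose(data, solve_pose_minimal, residuals, threshold, iterations=200, seed=0):
    """data: M correspondences (e.g. tracked landmarks at t-1 and observations at t).
    Returns the best model (R, t) and the boolean inlier mask."""
    rng = np.random.default_rng(seed)
    M = len(data)
    best_model, best_inliers = None, np.zeros(M, dtype=bool)
    for _ in range(iterations):
        sample = rng.choice(M, size=3, replace=False)   # minimal set: 3 items
        model = solve_pose_minimal([data[i] for i in sample])
        errors = residuals(model, data)
        inliers = errors < threshold                    # items that fit the model
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    # Final step (not shown): refine the model on the full inlier set,
    # e.g. by least squares / reprojection-error minimization.
    return best_model, best_inliers
```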

Page 10

Relative Pose Estimation

•  Camera pose can be recovered given at least three known landmarks in a non-degenerate configuration

•  In the case of stereo VO, landmarks can simply be triangulated

•  Two ways to recover pose:
•  Absolute orientation
•  Reprojection error minimization

[Figure: a 6-DOF camera pose observing known 3D landmarks.]
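Because the rig is calibrated and rectified, each stereo match can be triangulated directly. A minimal sketch, assuming the focal length f, principal point (cx, cy) and baseline b are known from calibration:

```python
# Illustrative sketch: triangulating a landmark from a rectified stereo match.
import numpy as np

def triangulate(u_l, v_l, u_r, f, cx, cy, b):
    d = u_l - u_r                 # disparity (pixels), must be > 0
    Z = f * b / d                 # depth along the optical axis
    X = (u_l - cx) * Z / f
    Y = (v_l - cy) * Z / f
    return np.array([X, Y, Z])    # landmark in the left camera frame
```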

Page 11

Absolute Orientation

•  Estimate the relative camera motion by computing the relative transformation between the 3D landmarks that were triangulated from stereo-matched features at times t-1 and t

[Figure: landmark sets T_{t-1} and T_t triangulated from the stereo pairs at poses r0 and r1; the unknown relative motion (R, t) aligns T_{t-1} with T_t.]

Page 12

Absolute Orientation

•  First, in 2D:
•  Given two sets of corresponding points $\{p^l_i\}$ and $\{p^r_i\}$ related by a rigid 2D transformation: $p^l = R^l_r\, p^r + t^l_r$
•  First recover the rotation (2 points), then the translation (1 point)

ABSOLUTE ORIENTATION
FRANK DELLAERT

Suppose we have two sets of corresponding points $\{p^l_i\}$ and $\{p^r_i\}$ in a left and a right coordinate frame, respectively, related by a rigid (2D or 3D) transformation:
$$p^l = R^l_r\, p^r + t^l_r$$
The process of finding the rigid transformation $T^l_r = (R^l_r, t^l_r)$ between them is called absolute orientation. Most algorithms proceed by first finding the rotation $R^l_r$ and then the translation $t^l_r$, as the latter can be recovered from a single point correspondence once $R^l_r$ is known:
$$t^l_r = p^l - R^l_r\, p^r$$

1. Estimating rotations in 2D

[Figure 1.1: Two noisy point clouds, left (red) and right (green), and the noiseless point cloud used to generate them, which can be recovered by SVD decomposition (see Section 3).]

An example of an absolute orientation problem in 2D is shown in Figure 1.1. In 2D, estimating the rotation is very simple: we only need to look at two point correspondences, say $(p^l_1, p^r_1)$ and $(p^l_2, p^r_2)$, and realize that the vector $v$ between them will be rotated by $\theta$, the unknown rotation. Indeed, we have
$$v^l = p^l_2 - p^l_1 = R^l_r\,(p^r_2 - p^r_1) = R^l_r\, v^r$$
The angle $\theta$ is easily recovered as
$$\theta = \arccos\frac{v^l \cdot v^r}{\lVert v^l \rVert\,\lVert v^r \rVert}$$

B. K. P. Horn, JOSA 1987

Page 13

Absolute Orientation

•  Rotation:
•  Given two point correspondences $(p^l_1, p^r_1)$ and $(p^l_2, p^r_2)$, the vector $v$ between the two points will be rotated by the desired angle $\theta$
•  Specifically, the vectors are related by $v^l = p^l_2 - p^l_1 = R^l_r\,(p^r_2 - p^r_1) = R^l_r\, v^r$
•  Finally, recover the angle and the translation: $\theta = \arccos\frac{v^l \cdot v^r}{\lVert v^l \rVert\,\lVert v^r \rVert}$ and $t^l_r = p^l - R^l_r\, p^r$
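A minimal sketch of the 2D construction above (illustrative, not the tutorial's code); inputs are 2-vectors of corresponding points:

```python
# Illustrative sketch: 2D absolute orientation from two point correspondences.
import numpy as np

def absolute_orientation_2d(p_l1, p_l2, p_r1, p_r2):
    """Recover (R, t) such that p_l = R @ p_r + t, from two 2D correspondences."""
    v_l = p_l2 - p_l1
    v_r = p_r2 - p_r1
    # Trig-free recovery of c = cos(theta), s = sin(theta), up to a scale k
    c = float(v_l @ v_r)
    s = float(v_r[0] * v_l[1] - v_r[1] * v_l[0])
    k = 1.0 / np.hypot(c, s)                 # choose k so that c^2 + s^2 = 1
    c, s = k * c, k * s
    R = np.array([[c, -s], [s, c]])
    t = p_l1 - R @ p_r1                      # translation from a single point
    return R, t
```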


Page 14

Absolute Orientation

•  Now in 3D:
•  Create coordinate frames (triads) from three corresponding points
•  Take the x-axis by connecting $p^l_1$ to $p^l_2$: $x^l \propto p^l_2 - p^l_1$
•  Construct the y-axis in the plane formed by the three points, perpendicular to the x-axis: $y^l \propto (p^l_3 - p^l_1) - \left[(p^l_3 - p^l_1)\cdot x^l\right] x^l$
•  Complete the frame with the z-axis: $z^l \propto x^l \times y^l$

ABSOLUTE ORIENTATION 2

The rotation matrix $R^l_r$ can be recovered linearly, without trigonometric functions, using the fact that
$$R^l_r = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \begin{bmatrix} c & -s \\ s & c \end{bmatrix}$$
which yields the following linear system in $c$ and $s$:
$$\begin{bmatrix} v^l_x \\ v^l_y \end{bmatrix} = \begin{bmatrix} c & -s \\ s & c \end{bmatrix}\begin{bmatrix} v^r_x \\ v^r_y \end{bmatrix} = \begin{bmatrix} c\,v^r_x - s\,v^r_y \\ s\,v^r_x + c\,v^r_y \end{bmatrix} = \begin{bmatrix} v^r_x & -v^r_y \\ v^r_y & v^r_x \end{bmatrix}\begin{bmatrix} c \\ s \end{bmatrix}$$
A solution that is symmetric in left and right is
$$c = k\,(v^l_x v^r_x + v^l_y v^r_y) \qquad s = k\,(v^r_x v^l_y - v^r_y v^l_x)$$
with $k$ chosen to make $c^2 + s^2 = 1$.

2. Estimating rotations in 3D

[Figure 2.1: Left and right triads constructed on three points, see text.]

In 3D, we cannot recover the 3-DOF rotation from a single vector, but we can after finding two corresponding "triads", i.e., right-handed coordinate frames. Following Horn (1987), instead of two point correspondences we now need three, which we denote by $(p^l_1, p^r_1)$, $(p^l_2, p^r_2)$ and $(p^l_3, p^r_3)$. We can then create a coordinate frame (say in the left frame) by pointing the x-axis from $p^l_1$ to $p^l_2$,
$$x^l \propto p^l_2 - p^l_1$$
where the proportionality sign indicates that we normalize $x^l$ to unit norm. We construct the y-axis perpendicular to it in the plane of $p^l_1$, $p^l_2$, $p^l_3$,
$$y^l \propto (p^l_3 - p^l_1) - \left[(p^l_3 - p^l_1)\cdot x^l\right] x^l$$
and complete the triad with a z-axis perpendicular to the first two:
$$z^l \propto x^l \times y^l$$
Stacking the normalized axes as columns gives the rotation relating this constructed ("global") frame $g$ to the left frame,
$$R^l_g = [\, x^l \;\; y^l \;\; z^l \,]$$
and after repeating the construction for the three points in the right frame, we recover the unknown rotation by composing the two rotations:
$$R^l_r = R^l_g\, R^g_r = R^l_g\, (R^r_g)^{\mathsf T}$$

Page 15

Absolute Orientation

•  Normalize the axes; stacking them as columns gives the rotation relating the constructed frame to the camera frame: $R^l_g = [\, x^l \;\; y^l \;\; z^l \,]$
•  Repeat for the right frame, and obtain the relative rotation: $R^l_r = R^l_g\, (R^r_g)^{\mathsf T}$
•  As in the 2D case, the translation can then be recovered using a single point: $t^l_r = p^l - R^l_r\, p^r$ (a sketch follows below)
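A minimal sketch of the triad construction (illustrative, not the tutorial's code); inputs are three corresponding 3D landmarks per frame:

```python
# Illustrative sketch: 3D absolute orientation from three correspondences
# via the triad construction described above.
import numpy as np

def triad(p1, p2, p3):
    """Right-handed orthonormal frame built on three 3D points (columns = axes)."""
    x = p2 - p1
    x = x / np.linalg.norm(x)
    v = p3 - p1
    y = v - np.dot(v, x) * x          # in the plane of the points, perpendicular to x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                # completes the right-handed triad
    return np.column_stack((x, y, z))

def absolute_orientation_3d(p_l, p_r):
    """p_l, p_r: (3, 3) arrays, rows are corresponding points. Returns (R, t)."""
    R_lg = triad(p_l[0], p_l[1], p_l[2])   # constructed frame -> left frame
    R_rg = triad(p_r[0], p_r[1], p_r[2])   # constructed frame -> right frame
    R = R_lg @ R_rg.T                      # compose the two triad rotations
    t = p_l[0] - R @ p_r[0]                # translation from a single point
    return R, t
```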


Page 16

Absolute Orientation

•  The Absolute Orientation approach assumes a relatively noiseless case, and does not work well otherwise

•  No simple way to average out noisy points by considering more data:
•  Use an SVD-based method instead (a sketch follows below)
•  Or use a different approach based on projective geometry
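A sketch of the SVD-based alternative (the classical Horn/Arun-style least-squares alignment, shown as an illustration rather than the tutorial's code); it uses all N correspondences and therefore averages out noise:

```python
# Illustrative sketch: least-squares absolute orientation over N points via SVD.
import numpy as np

def absolute_orientation_svd(P_l, P_r):
    """P_l, P_r: (N, 3) corresponding points. Returns (R, t) with P_l ~ R @ P_r + t."""
    mu_l, mu_r = P_l.mean(axis=0), P_r.mean(axis=0)
    H = (P_r - mu_r).T @ (P_l - mu_l)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_l - R @ mu_r
    return R, t
```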


Page 17

Reprojection Error Minimization

•  A better approach is to estimate the relative pose by minimizing the reprojection error of the 3D landmarks in the images at times t and t-1

[Figure: landmarks T_{t-1} and T_t are triangulated and reprojected into the images at t-1 and t; the relative pose is found by minimizing the reprojection error.]
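One way this can be set up (an assumption, not the tutorial's implementation): parameterize the incremental pose by an axis-angle vector and a translation and minimize the pixel residuals with a nonlinear least-squares solver. For brevity the sketch reprojects only into the image at time t, whereas the slide minimizes error in both images; f, cx, cy and the helper names are assumptions:

```python
# Illustrative sketch: reprojection-error minimization for the incremental pose.
import numpy as np
from scipy.optimize import least_squares

def rot_from_rodrigues(w):
    """Rotation matrix from an axis-angle vector w (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def project(X, f, cx, cy):
    """Pinhole projection of camera-frame points X (N, 3) into pixel coordinates."""
    return np.column_stack((f * X[:, 0] / X[:, 2] + cx,
                            f * X[:, 1] / X[:, 2] + cy))

def refine_pose(landmarks_prev, obs_curr, f, cx, cy):
    """landmarks_prev: (N, 3) points in the frame at t-1; obs_curr: (N, 2) pixels at t."""
    def residuals(x):
        R, t = rot_from_rodrigues(x[:3]), x[3:]
        X_curr = landmarks_prev @ R.T + t          # transform into the frame at t
        return (project(X_curr, f, cx, cy) - obs_curr).ravel()
    sol = least_squares(residuals, np.zeros(6))
    return rot_from_rodrigues(sol.x[:3]), sol.x[3:]
```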

Page 18

Practical/Robustness Considerations

•  The presented algorithm works very well in feature-rich, static environments, but…

•  A few tricks for better results in challenging conditions:
•  Feature binning to cope with bias due to uneven feature distributions
•  Keyframing to cope with dynamic scenes, as well as reducing drift of a stationary camera

•  Real-time performance

Page 19

Feature Binning

•  Incremental pose estimation yields poor results when features are concentrated in one area of the image

•  Solution: Draw a grid and keep the k strongest features in each cell (see the sketch below)
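A minimal sketch of the binning step (illustrative; the grid size and k are assumed parameters):

```python
# Illustrative sketch: keep only the k strongest features per grid cell.
import numpy as np

def bin_features(keypoints, responses, img_w, img_h, grid=(8, 6), k=10):
    """keypoints: (N, 2) pixel positions; responses: (N,) detector strengths.
    Returns indices of the retained features."""
    cols, rows = grid
    cell_x = np.clip((keypoints[:, 0] / img_w * cols).astype(int), 0, cols - 1)
    cell_y = np.clip((keypoints[:, 1] / img_h * rows).astype(int), 0, rows - 1)
    cell = cell_x * rows + cell_y
    keep = []
    for c in np.unique(cell):
        idx = np.flatnonzero(cell == c)
        keep.extend(idx[np.argsort(responses[idx])[::-1][:k]].tolist())
    return np.array(sorted(keep))
```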

Page 20

Challenges: Dynamic Scenes

•  Dynamic, crowded scenes present a real challenge

•  Cannot depend on RANSAC to always recover the correct inlier set

•  Example 1: Large van “steals” inlier set in passing

[Figure: feature tracks labeled as inliers and outliers while the van passes.]

Page 21

Challenges: Dynamic Scenes

•  Example 2: Cross-traffic while waiting to turn left at light

Only accept the incremental pose if:
•  translation > 0.5 m
•  the dominant direction of motion is forward

[Figure: estimated trajectories without keyframing vs. with keyframing; without keyframing, the cross-traffic induces incorrect sideways motion.]
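A minimal sketch of this acceptance test (the 0.5 m threshold comes from the slide; the forward-axis convention and function shape are assumptions):

```python
# Illustrative sketch: accept an incremental pose as a new keyframe only if the
# camera moved far enough and predominantly forward.
import numpy as np

def accept_keyframe(t, min_translation=0.5, forward_axis=2):
    """t: estimated incremental translation (3,) in the camera frame.
    forward_axis: index of the camera's forward direction (+z assumed here)."""
    if np.linalg.norm(t) < min_translation:
        return False                     # stationary or creeping: keep the old keyframe
    dominant = np.argmax(np.abs(t))
    return dominant == forward_axis and t[forward_axis] > 0.0
```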

Page 22

Real-time performance

•  Parallelization of feature extraction and stereo matching steps allows real-time performance even in CPU-only implementation

[Figure: pipeline diagram, a mono frame queue feeds several parallel feature extractors; a stereo frame queue feeds parallel stereo matching workers; the results flow into pose recovery/RANSAC.]
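A sketch of how such a pipeline might be wired together with worker threads and queues (an assumed structure for illustration; `extract`, `match` and `recover_pose` are placeholders, and a real implementation must also preserve frame ordering when several workers run in parallel):

```python
# Illustrative sketch: CPU-only pipeline with queues connecting the stages.
import queue
import threading

mono_frames = queue.Queue()      # raw stereo frames in
stereo_frames = queue.Queue()    # frames with features, awaiting stereo matching
matched_frames = queue.Queue()   # stereo-matched frames, ready for pose recovery

def feature_worker(extract):
    while True:
        frame = mono_frames.get()
        frame["features"] = (extract(frame["left"]), extract(frame["right"]))
        stereo_frames.put(frame)

def stereo_worker(match):
    while True:
        frame = stereo_frames.get()
        frame["stereo_matches"] = match(*frame["features"])
        matched_frames.put(frame)

def pose_worker(recover_pose):
    prev = None
    while True:
        frame = matched_frames.get()
        if prev is not None:
            frame["pose"] = recover_pose(prev, frame)   # RANSAC + refinement
        prev = frame

def start(extract, match, recover_pose, n_extractors=3, n_matchers=2):
    for _ in range(n_extractors):
        threading.Thread(target=feature_worker, args=(extract,), daemon=True).start()
    for _ in range(n_matchers):
        threading.Thread(target=stereo_worker, args=(match,), daemon=True).start()
    threading.Thread(target=pose_worker, args=(recover_pose,), daemon=True).start()
```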

Page 23

What else do you get?

•  Stereo Visual Odometry yields more than just a camera trajectory!

•  Tracked landmarks form a sparse 3D point cloud of the environment
•  This point cloud can be used as the basis for localization

Page 24

What else do you get?

•  3D Point cloud on KITTI Benchmark, Sequence 2

http://www.cvlibs.net/datasets/kitti/

Page 25

Results on KITTI Benchmark

•  Representative results on the KITTI VO benchmark
•  Average translational/rotational errors are very small
•  Errors accumulate over time, resulting in drift

Page 26

GTSAM VO Example with Point Cloud

•  StereoVOExample_large.m in GTSAM
•  Takes the VO output and improves the result through bundle adjustment (more on that later!)

tinyurl.com/gtsam