Transcript of Lightseminar: Learned Representation in AI, An Introduction to Locally Linear Embedding

Page 1: Lightseminar: Learned Representation in AI
An Introduction to Locally Linear Embedding

Lawrence K. Saul, Sam T. Roweis

presented by Chan-Su Lee

Page 2: Introduction
● Preprocessing
– Obtain a more useful representation of the information
– Useful for subsequent operations such as classification and pattern recognition
● Unsupervised learning
– Automatic methods to recover hidden structure
– Density estimation
– Dimensionality reduction

Page 3: PCA & MDS
● Principal Component Analysis (PCA)
– Linear projections of greatest variance, from the top eigenvectors of the data covariance matrix (see the sketch below)
● Multidimensional Scaling (MDS)
– Low-dimensional embedding that best preserves pairwise distances between data points
● Both model linear variabilities in high-dimensional data
● No local minima
● Both find a linear subspace and cannot deal properly with data lying on nonlinear manifolds
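
As a concrete illustration of the PCA bullet above, here is a minimal NumPy sketch of projecting data onto the top eigenvectors of the covariance matrix; the function name and example data are illustrative, not from the slides:

    import numpy as np

    def pca(X, d):
        """Project X (N x D) onto its top-d principal components."""
        Xc = X - X.mean(axis=0)                      # center the data
        C = np.cov(Xc, rowvar=False)                 # D x D covariance matrix
        evals, evecs = np.linalg.eigh(C)             # eigenvalues in ascending order
        top = evecs[:, np.argsort(evals)[::-1][:d]]  # top-d eigenvectors
        return Xc @ top                              # N x d linear projection

    # Example: 100 points in 5 dimensions, projected down to 2
    Y = pca(np.random.rand(100, 5), 2)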

Page 4: Problems with PCA and MDS
● They create distortions in local and global geometry
– Faraway data points can be mapped to nearby points

Page 5: LLE (Locally Linear Embedding)
● Property
– Preserves the local configurations of nearest neighbors
● LLE
– Local: only neighbors contribute to each reconstruction
– Linear: reconstructions are confined to linear subspaces
● Assumptions
– Well-sampled data, so that each point and its neighbors lie on a locally linear patch of the manifold
– A d-dimensional manifold calls for about 2d neighbors per point

Page 7: LLE Algorithm
● Step 1: Neighborhood search
– Compute the neighbors of each data point
– K nearest neighbors per data point (see the sketch below)
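
A minimal sketch of Step 1, assuming a brute-force Euclidean neighbor search in NumPy (the slides do not specify a search structure):

    import numpy as np

    def knn(X, K):
        """Return the indices of the K nearest neighbors of each row of X."""
        # Pairwise squared Euclidean distances, N x N
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)             # a point is not its own neighbor
        return np.argsort(d2, axis=1)[:, :K]     # N x K neighbor indices

    X = np.random.rand(100, 3)
    idx = knn(X, K=10)    # idx[i] holds the 10 nearest neighbors of X[i]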

Page 8: LLE Algorithm
● Step 2: Constrained least-squares fits
– Reconstruction error: E(W) = \sum_i | X_i - \sum_j W_{ij} X_j |^2
– Constraint: W_{ij} = 0 if X_j is not a neighbor of X_i
– Constraint: \sum_j W_{ij} = 1 (the weights for each point sum to one)
– These constraints make the weights invariant to rotations, rescalings, and translations of that point and its neighbors (a sketch of the fit follows)
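
A minimal sketch of the constrained fit for a single point, using the standard closed-form solution of the sum-to-one constraint (solve the local Gram system C w = 1, then normalize); the small regularizer is an assumption for the case K > D and is not stated on the slide:

    import numpy as np

    def reconstruction_weights(x, neighbors, reg=1e-3):
        """Solve min |x - sum_j w_j * neighbors[j]|^2  s.t.  sum_j w_j = 1."""
        Z = neighbors - x                          # K x D, shift point to the origin
        C = Z @ Z.T                                # K x K local Gram matrix
        C += reg * np.trace(C) * np.eye(len(C))    # regularize when K > D (assumed)
        w = np.linalg.solve(C, np.ones(len(C)))    # solve C w = 1
        return w / w.sum()                         # enforce the sum-to-one constraint

    # Example: weights reconstructing point 0 from its 10 nearest neighbors
    X = np.random.rand(50, 3)
    nbr_idx = np.argsort(((X - X[0]) ** 2).sum(1))[1:11]
    w = reconstruction_weights(X[0], X[nbr_idx])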

Page 9: LLE Algorithm
● Step 3: Eigenvalue problem
– Embedding cost: \Phi(Y) = \sum_i | Y_i - \sum_j W_{ij} Y_j |^2
– Constraint: \sum_i Y_i = 0 (embedding centered at the origin)
– Constraint: \frac{1}{N} \sum_i Y_i Y_i^T = I (avoids degenerate solutions)
– Minimizing \Phi reduces to finding the bottom d+1 eigenvectors of M = (I - W)^T (I - W) and discarding the bottom, constant one (see the sketch below)
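
A minimal sketch of Step 3, assuming the weights have been assembled into a dense N x N matrix W whose rows sum to one; in practice W is sparse and a sparse eigensolver would be used:

    import numpy as np

    def embed(W, d):
        """Embedding coordinates from the N x N weight matrix W (rows sum to one)."""
        N = W.shape[0]
        I = np.eye(N)
        M = (I - W).T @ (I - W)             # cost matrix of the quadratic form
        evals, evecs = np.linalg.eigh(M)    # eigenvalues in ascending order
        # Skip the bottom eigenvector (constant, eigenvalue ~0 since W1 = 1);
        # the next d eigenvectors give the embedding coordinates Y.
        return evecs[:, 1:d + 1]

    # Toy example with a random row-stochastic weight matrix
    W = np.random.rand(20, 20)
    W /= W.sum(axis=1, keepdims=True)       # make rows sum to one
    Y = embed(W, d=2)                       # 20 x 2 embedding coordinates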

Page 10: Embedding by LLE
Face images map to the corners of their two-dimensional embedding.

Page 11: Examples

Page 12: LLE advantages
● Ability to discover nonlinear manifolds of arbitrary dimension
● Non-iterative
● Global optimality
● Few free parameters: K, d
● O(DN^2) time, and space-efficient due to sparse matrices

Page 13: LLE disadvantages
● Requires a smooth, non-closed, densely sampled manifold
● Quality of the manifold characterization depends on the choice of neighborhood
● Cannot estimate the intrinsic low dimensionality d
● Sensitive to outliers

Page 14: Comparisons: PCA vs. LLE vs. Isomap
● PCA: finds linear subspace projection vectors that fit the data
● LLE: finds embedding coordinate vectors that best fit local neighborhood relationships
● Isomap: finds embedding coordinate vectors that preserve pairwise geodesic (shortest-path) distances
(A usage sketch of an off-the-shelf LLE implementation follows.)
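
For completeness, the full three-step pipeline is available off the shelf; a minimal usage sketch with scikit-learn's LocallyLinearEmbedding (the library choice and parameter values are assumptions of this transcript, not part of the slides):

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    X = np.random.rand(1000, 10)             # N = 1000 points in D = 10 dimensions
    lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)   # K = 12, d = 2
    Y = lle.fit_transform(X)                 # 1000 x 2 embedding coordinates
    print(Y.shape, lle.reconstruction_error_)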