Graph-Cut / Normalized Cut segmentation


Jad Silbak, University of Haifa

What we have, and what we want: Most segmentation methods until now focus on local features (e.g., K-Means).

We would like to extract the global impression of an image.

Since there are many possible partitions of the domain, how do we pick the right one?

Definitions and reminders: An undirected graph is denoted $G = (V, E)$, where: V is a set of nodes, one for each data element (e.g., pixel).

E is a set of edges.

Given an edge $(u, v) \in E$ (the edge between u and v), we define $w(u, v)$ as the affinity between the connected nodes.

Definitions and reminders: What are graph cuts? A cut is a set of edges whose removal makes the graph disconnected. It is the partition of V into A, B such that $A \cup B = V$ and $A \cap B = \emptyset$.

What is a cut's cost? The red edges are a graph cut; the cut's cost is the sum of the weights of all the cut edges.

Sum(red) = 4 = cut's cost.

Min-cut: A graph can be partitioned into two disjoint sets A, B; we define the partition cost as: $\mathrm{cut}(A, B) = \sum_{u \in A,\, v \in B} w(u, v)$

The bipartition of the graph which minimizes the cut value is called the min-cut.

Why not use regular min-cut for the partition? Let's see an example:

Why not use regular min-cut for the partition? Every node is connected to all the other nodes, and edge weights are inversely proportional to the distance between the two nodes. The minimum-cut criterion favors cutting small sets of isolated nodes in the graph.

The ideal cut is not the cut with the minimum weight. The red cut is the cut we want, but min-cut favors small partitions: the weight of a cut is directly proportional to the number of edges in the cut!

* Slide from Khurram Hassan-Shafique, CAP5415. Slide credit: B. Freeman and A. Torralba, Computer Vision 2003.

The solution - normalized cut (Ncut): Instead of looking at the total edge weight connecting the two partitions, we compute the cut cost as a fraction of the total edge connections to all the nodes in the graph. We call this disassociation measure the normalized cut (Ncut).

Normalized cut (Ncut): $\mathrm{cut}(A, B)$ is the sum of the weights of edges with one end in A and one end in B; we want to minimize the cut cost.

$\mathrm{assoc}(A, V) = \sum_{u \in A,\, t \in V} w(u, t)$ is the sum of the weights of all edges with one end in A; we want to maximize the association within each part A, B of the partition.

Normalized cut (Ncut): $\mathrm{Ncut}(A, B) = \dfrac{\mathrm{cut}(A, B)}{\mathrm{assoc}(A, V)} + \dfrac{\mathrm{cut}(A, B)}{\mathrm{assoc}(B, V)}$

Nassoc: In the same spirit, we can define a measure for total normalized association: $\mathrm{Nassoc}(A, B) = \dfrac{\mathrm{assoc}(A, A)}{\mathrm{assoc}(A, V)} + \dfrac{\mathrm{assoc}(B, B)}{\mathrm{assoc}(B, V)}$ where $\mathrm{assoc}(A, A)$ is the total weight of edges connecting nodes within A.

Ncut and Nassoc: Using some mathematical manipulation we get that: $\mathrm{Ncut}(A, B) = 2 - \mathrm{Nassoc}(A, B)$

Minimizing the disassociation between the groups and maximizing the association within the groups are in fact identical goals and can be satisfied simultaneously.
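The manipulation is short; a sketch of the algebra, using the fact that $\mathrm{assoc}(A, V) = \mathrm{assoc}(A, A) + \mathrm{cut}(A, B)$:

\begin{align*}
\mathrm{Ncut}(A,B) &= \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)} + \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)} \\
&= \frac{\mathrm{assoc}(A,V) - \mathrm{assoc}(A,A)}{\mathrm{assoc}(A,V)} + \frac{\mathrm{assoc}(B,V) - \mathrm{assoc}(B,B)}{\mathrm{assoc}(B,V)} \\
&= 2 - \Big(\frac{\mathrm{assoc}(A,A)}{\mathrm{assoc}(A,V)} + \frac{\mathrm{assoc}(B,B)}{\mathrm{assoc}(B,V)}\Big) = 2 - \mathrm{Nassoc}(A,B).
\end{align*}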

Example: cut and Ncut. What is the min cut, and what is the min Ncut?

Example: cut and Ncut. It is easy to see the min-cut; we have efficient algorithms for solving the min-cut problem.

Example: cut and Ncut. What we get is a larger cut but a smaller Ncut. Is this the min-Ncut?

Ncut complexity: Unfortunately, minimizing normalized cut exactly is NP-complete, even for the special case of graphs on grids.

However, we will show that, when we move the normalized cut problem into the real-valued domain, an approximate discrete solution can be found efficiently.

Adjacency matrix: Let W be the adjacency matrix of the graph, where each entry $W(i, j) = w(i, j)$ is the similarity between nodes i and j, and $W(i, i) = 0$.

[Example graph: nodes a-e with edge weights w(a,b)=2, w(b,c)=2, w(c,d)=2, w(c,e)=1, w(d,e)=1]

W:
      a  b  c  d  e
  a   0  2  0  0  0
  b   2  0  2  0  0
  c   0  2  0  2  1
  d   0  0  2  0  1
  e   0  0  1  1  0

Diagonal matrix: Let D be the diagonal matrix with diagonal entries $d(i) = \sum_j w(i, j)$ (D(i) is the sum of the weights of all edges with one end in node i):

D:
      a  b  c  d  e
  a   2  0  0  0  0
  b   0  4  0  0  0
  c   0  0  5  0  0
  d   0  0  0  3  0
  e   0  0  0  0  2

Laplacian matrix: We define the Laplacian matrix $L = D - W$:

L = D - W:
      a   b   c   d   e
  a   2  -2   0   0   0
  b  -2   4  -2   0   0
  c   0  -2   5  -2  -1
  d   0   0  -2   3  -1
  e   0   0  -1  -1   2

Laplacian matrix properties:

All eigenvectors of L are perpendicular to each other (L has a complete set of orthonormal eigenvectors).

L is symmetric positive semi-definite.

L has N non-negative real-valued eigenvalues; the smallest eigenvalue is always 0, with eigenvector $\mathbb{1}$ (the constant all-ones vector).
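These properties can be checked numerically; a minimal numpy sketch on the example graph above (the node ordering a-e and the use of numpy are my assumptions, not from the slides):

```python
import numpy as np

# Adjacency matrix W of the example graph, nodes ordered a, b, c, d, e.
W = np.array([[0, 2, 0, 0, 0],
              [2, 0, 2, 0, 0],
              [0, 2, 0, 2, 1],
              [0, 0, 2, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

D = np.diag(W.sum(axis=1))            # degree matrix: D(i,i) = sum_j w(i,j)
L = D - W                             # graph Laplacian

vals, vecs = np.linalg.eigh(L)        # eigh is valid since L is symmetric
print(np.allclose(L, L.T))            # True: L is symmetric
print(np.all(vals > -1e-12))          # True: positive semi-definite
print(vals[0].round(10), vecs[:, 0])  # smallest eigenvalue 0, constant eigenvector
```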

Slide credit: Vasileios Zografos, Klas Nordberg.

Indicator vector: let x be an N-dimensional indicator vector, with $x_i = 1$ if node $i \in A$, and $x_i = -1$ otherwise.

[Example: for A = {a, b, c}, B = {d, e}: x = (1, 1, 1, -1, -1)]

Normalized cut: Then the minimum normalized cut cost can be written as: $\min_x \mathrm{Ncut}(x) = \min_y \dfrac{y^T (D - W) y}{y^T D y}$

where y is an indicator vector that acts like x with one exception: if $x_i = -1$ then $y_i = -b$.

y, b definition and example: We define b as $b = \dfrac{\sum_{x_i > 0} d_i}{\sum_{x_i < 0} d_i}$.

For the example partition A = {a, b, c}, B = {d, e}: $b = \frac{2 + 4 + 5}{3 + 2} = \frac{11}{5}$, so $y = (1, 1, 1, -\frac{11}{5}, -\frac{11}{5})$.

y constraints: we have two constraints on y. 1) $y^T D \mathbb{1} = 0$: this can be seen as a constraint that forces the y indicator vectors to be perpendicular to each other, and specifically to the vector $\mathbb{1}$.

But we already know that the Laplacian matrix has $\mathbb{1}$ as an eigenvector (with eigenvalue 0), and all its eigenvectors are perpendicular to each other; thus this constraint is automatically satisfied by the solution.

y constraints: 2) y must take one of two discrete values, {1, -b}. Satisfying this constraint is what makes the problem NP-hard. If we relax this constraint to accept real values, we are able to approximate the solution by solving the generalized eigenvalue system: $(D - W) y = \lambda D y$
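A minimal sketch of the relaxed problem with scipy's generalized symmetric eigensolver, restating the example W and D for self-containment (the library choice is my assumption):

```python
import numpy as np
from scipy.linalg import eigh

W = np.array([[0, 2, 0, 0, 0], [2, 0, 2, 0, 0], [0, 2, 0, 2, 1],
              [0, 0, 2, 0, 1], [0, 0, 1, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))

# Generalized eigenvalue system (D - W) y = lambda * D y, eigenvalues ascending.
vals, vecs = eigh(D - W, D)

y = vecs[:, 1]      # second smallest eigenvector: the relaxed Ncut indicator
print(vals[1])      # its eigenvalue, the (relaxed) Ncut cost
print(y)            # real-valued, so it still has to be discretized
```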

Properties of the eigenvalue system: The eigenvector will hopefully have similar values for nodes with high similarity (where w(i, j) is high).

The smallest eigenvalue is always 0, because we can have the partition A = V and B = ∅, and thus Ncut(A, B) = 0.

The second smallest eigenvector is the real-valued y that minimizes Ncut and is the solution for $(D - W) y = \lambda D y$.

What is clustering? A cluster is a collection of data objects: similar to one another within the same cluster, and dissimilar to the objects in other clusters. This definition of clusters corresponds to the definition of Ncut.

Slide credit: Mario Haddad. [Example graph partitioned into A-cluster = {a, b, c} and B-cluster = {d, e}]

Spectral clustering: We want to find clusters that may be hard to find in the original domain (coordinates), such as non-convex data.

We use eigenvectors of matrices derived from the data to map the data to a low-dimensional space where the data is well separated and can be easily clustered (e.g., with K-Means in the new space).

Clustering can then be treated as a graph partitioning problem without making specific assumptions on the form of the clusters (Ncut is an example of spectral clustering).
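A compact sketch of this pipeline on synthetic non-convex data (two concentric rings; the RBF affinity, the $\sigma = 0.3$ bandwidth, and the use of scikit-learn's KMeans are illustrative assumptions, not from the slides):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def ring(radius, n):                    # noisy circle: a non-convex cluster
    t = rng.uniform(0, 2 * np.pi, n)
    pts = np.c_[radius * np.cos(t), radius * np.sin(t)]
    return pts + rng.normal(0, 0.05, (n, 2))

X = np.vstack([ring(1.0, 100), ring(3.0, 100)])

# RBF affinity matrix, then the generalized system (D - W) y = lambda D y.
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.3 ** 2))
np.fill_diagonal(W, 0)
D = np.diag(W.sum(1))
_, vecs = eigh(D - W, D)

# K-Means in the 1-D eigenvector embedding separates the two rings,
# which plain K-Means on the raw coordinates cannot do.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vecs[:, 1:2])
```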

Spectral clustering example:

[Figure: convex vs. non-convex data] Slide credit: Vasileios Zografos, Klas Nordberg.

Why does this work? Ideal case

Why do the eigenvectors of the Laplacian include cluster identification information? [Figure: in the ideal case the affinity matrix is block-diagonal (e.g., two 3x3 blocks of ones), and indicator vectors such as (1, 1, 1, 0, 0, 0) and (0, 0, 0, 1, 1, 1) identify the blocks.]
Source: http://eniac.cs.qc.cuny.edu/andrew/gcml/Lecture21.pptx

Spectral Clustering - Intuition

Slides courtesy: Eric Xing, M. Hein & U.V. Luxburg.

Why does this work? How does this eigenvector decomposition address this?

[Figure: cluster assignment read off from the eigenvectors]

Spectral Clustering & Ncut: Ncut is a type of spectral clustering, in which we use an eigenvector as an indicator for the partition of the graph (data). The eigenvalues represent the Ncut cost for each eigenvector, so minimizing the Ncut means using an eigenvector with a correspondingly small eigenvalue. Since the smallest eigenvalue is 0, with a corresponding eigenvector that partitions the data into a single cluster, the first eigenvector is not used. The second eigenvector minimizes Ncut and is the solution for: $(D - W) y = \lambda D y$

Translating the eigenvector to partitions: What allowed us to approximate the Ncut solution (which is NP-hard) is allowing y to take real values, but we still need discrete values for the partition of the graph.

[Example graph, its Laplacian, and the resulting second eigenvector plotted per node a-e]

The second eigenvector is not discrete. We must choose a threshold so we can return to discrete values: we can try random thresholds and choose the best, or we can check l evenly spaced possible splitting points and take the best (the one giving the smallest Ncut).
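A sketch of the splitting-point search (assuming the `y` and `W` from the snippets above; the Ncut cost is evaluated directly from its definition):

```python
import numpy as np

def ncut_cost(W, in_a):
    """Ncut(A,B) = cut/assoc(A,V) + cut/assoc(B,V); in_a is a boolean mask."""
    cut = W[in_a][:, ~in_a].sum()
    assoc_a, assoc_b = W[in_a].sum(), W[~in_a].sum()
    if assoc_a == 0 or assoc_b == 0:
        return np.inf                          # reject empty partitions
    return cut / assoc_a + cut / assoc_b

def best_threshold(y, W, l=32):
    """Try l evenly spaced splitting points, keep the smallest Ncut."""
    candidates = np.linspace(y.min(), y.max(), l + 2)[1:-1]
    return min(candidates, key=lambda t: ncut_cost(W, y > t))
```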

Translating the eigenvector to partitions: [example graph and its second eigenvector]

With threshold T = 0: a = 1, b = 1, c = -1, d = -1, e = -1.

Some notes: We have seen how the second eigenvector can be used to bipartition the graph, but what about the other eigenvectors?

The 3rd smallest eigenvector can give a sub-partition of the partition we got from the 2nd smallest eigenvector; in general, every eigenvector can sub-partition the result we got from the previous eigenvector.

Some notes: But due to the fact that every time we use an eigenvector to partition the graph we may get errors from the conversion to discrete values, the error accumulates, and the partition becomes less reliable the higher we go.

What do we do to further partition the graph? Recursive Two-Way Ncut: Given an image, compute the weight on each edge and summarize the information into W and D.

Solve $(D - W) y = \lambda D y$ for the eigenvectors with the smallest eigenvalues.

We use the eigenvector with the second smallest eigenvalue to bipartition the graph.

Recursive Two-Way Ncut: Decide whether the current partition should be subdivided by checking: 1) Stability: if the values of the eigenvector vary smoothly (continuously) from one entry to another, we may be trying to sub-partition data where no clear split exists. 2) The cut value (Ncut < T), where T is a prespecified threshold we use to decide when to stop further partitioning a subgraph.

Recursively repartition the segmented parts if necessary.
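Putting the steps together, a compact recursive sketch (reusing the `ncut_cost` and `best_threshold` helpers sketched earlier; the stopping threshold T = 0.04 and the minimum segment size are illustrative assumptions, and the stability test is omitted for brevity):

```python
import numpy as np
from scipy.linalg import eigh

def recursive_ncut(W, idx, T=0.04, min_size=5):
    """Recursively bipartition the nodes in `idx` via the second eigenvector."""
    if len(idx) < min_size:
        return [idx]
    sub = W[np.ix_(idx, idx)]
    D = np.diag(sub.sum(axis=1))
    _, vecs = eigh(D - sub, D)                 # (D - W) y = lambda D y
    y = vecs[:, 1]
    in_a = y > best_threshold(y, sub)
    if ncut_cost(sub, in_a) > T:               # cut too costly: stop here
        return [idx]
    return (recursive_ncut(W, idx[in_a], T, min_size) +
            recursive_ncut(W, idx[~in_a], T, min_size))

# segments = recursive_ncut(W, np.arange(len(W)))
```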

Recursive Two-Way Ncut example :

Slide credit: Denis Hamad, LASL.

How does all this relate to images? Let G be a graph that represents the image I as follows: for every pixel in I there exists a node in G that represents it, and W(i, j) is the similarity between pixels i and j.

Example: brightness images. How do we define similarity?

$w_{ij} = e^{-\|F(i) - F(j)\|_2^2 / \sigma_I^2} \times \begin{cases} e^{-\|X(i) - X(j)\|_2^2 / \sigma_X^2} & \text{if } \|X(i) - X(j)\|_2 < r \\ 0 & \text{otherwise} \end{cases}$

where X(i) is the spatial location of node i, and F(i) is a feature vector based on intensity.

Example: brightness images. We can change the parameters $\sigma_I$ and $\sigma_X$, similar to bilateral filtering.
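A sketch of this weight definition for a grayscale image with intensities in [0, 1] (the parameter values and the sparse-matrix layout are my assumptions; real implementations vectorize the neighbour search instead of looping):

```python
import numpy as np
from scipy.sparse import lil_matrix

def image_affinity(img, r=5, sigma_i=0.1, sigma_x=4.0):
    """w(i,j) = exp(-|F(i)-F(j)|^2 / sigma_i^2) * exp(-|X(i)-X(j)|^2 / sigma_x^2)
    if |X(i)-X(j)| < r, else 0; img is a 2-D float array in [0, 1]."""
    h, w = img.shape
    n = h * w
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.c_[ys.ravel(), xs.ravel()].astype(float)
    feats = img.ravel().astype(float)
    W = lil_matrix((n, n))
    for i in range(n):
        d2 = ((coords - coords[i]) ** 2).sum(axis=1)     # squared distances
        nbrs = np.flatnonzero((d2 < r * r) & (d2 > 0))   # within radius, not self
        W[i, nbrs] = (np.exp(-(feats[nbrs] - feats[i]) ** 2 / sigma_i ** 2)
                      * np.exp(-d2[nbrs] / sigma_x ** 2))
    return W.tocsr()
```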

Examples:

Different definitions of similarity:

where X(i) is the spatial location of node i, and F(i) is a feature vector based on intensity, color, or texture.

For intensity, the feature vector was of dimension 1, but we can have a d-dimensional feature vector, where the weights are the similarity between these vectors.

Color feature vector: We have seen a feature vector based on intensity.

What about a feature vector based on color? $F(i) = [v, v \cdot s \cdot \sin(h), v \cdot s \cdot \cos(h)](i)$, where h, s, v are the HSV values, for color segmentation.

Ncut with color :

Texture feature vector: A feature vector based on texture for segmentation: $F(i) = [|I * f_1|, \ldots, |I * f_n|](i)$, where the $f_k$ are DOOG (difference of offset Gaussian) filters at various scales and orientations.

Ncut with texture

Time complexity: Solving a standard eigenvalue problem for all eigenvectors takes $O(N^3)$; this is impractical!

Fortunately, our graph partitioning has the following properties: the graphs are often only locally connected; only the top few eigenvectors are needed for graph partitioning; and the precision requirement for the eigenvectors is low.

Time complexity: We can remove up to 90% of the total connections within each neighborhood without affecting the eigenvector solution to the system.

Putting everything together, each of the matrix-vector computations costs $O(N)$.
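These three properties are exactly what sparse iterative eigensolvers exploit; a sketch with scipy's Lanczos-based `eigsh` in shift-invert mode (the solver choice, the small negative shift, and the loose tolerance reflecting the low precision requirement are my assumptions):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def second_eigvec_sparse(W):
    """W: sparse (csr) affinity of a locally connected graph.
    Returns the second smallest generalized eigenpair of (D - W) y = lambda D y."""
    d = np.asarray(W.sum(axis=1)).ravel()
    D = diags(d)
    L = D - W
    # sigma targets eigenvalues near zero; a slightly negative shift keeps the
    # factorized matrix L - sigma*D nonsingular (L itself has eigenvalue 0).
    vals, vecs = eigsh(L, k=2, M=D, sigma=-1e-6, which='LM', tol=1e-4)
    return vals[1], vecs[:, 1]
```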

Recursive Two-Way Ncut may take more time, since the eigenvalue system is solved again for each subgraph that is repartitioned.

Summarizing: We have seen why min-cut wasn't a reliable criterion for segmentation and why we need Ncut.

Although the Ncut problem is NP-hard, we can approximate a solution using the generalized eigenvalue system.

How Ncut relates to images, and how we define similarity (intensity, color, and texture).

The Recursive Two-Way Ncut algorithm.

Time complexity.

Interactive graph cuts: Automatic segmentation never seems to be perfect!

What we want is the ability to mark (impose) hard constraints, by indicating certain pixels (seeds) that absolutely have to be part of the object and certain pixels that absolutely have to be part of the background.

[Red seeds mark the object; blue seeds mark the background.]

Boundary and region: Boundary-based methods rely on local information (derivative kernels, Harris, Canny).

Region-based methods: + use statistics inside the region; - often generate irregular boundaries and small holes.

Slide credit: Daniel Heilper, CS Department, Haifa University.

Boundary and region: [figure comparing boundary-based and region-based segmentation]

Cost function: The cost function provides a soft constraint for segmentation and includes both region and boundary properties.

Let $A = (A_1, \ldots, A_{|P|})$ be a binary vector whose components $A_p$ can be either "obj" or "bkg", where P is the set of nodes (pixels).

Cost function: $E(A) = \lambda \cdot R(A) + B(A)$, where $R(A) = \sum_{p \in P} R_p(A_p)$ is the region term and $B(A) = \sum_{\{p,q\} \in N} B_{p,q} \cdot \delta(A_p \neq A_q)$ is the boundary term.

$\lambda \geq 0$ specifies the relative importance of the region properties term versus the boundary properties term.

Intuition: $R_p(\text{obj})$ and $R_p(\text{bkg})$ can be seen as the individual penalties for assigning pixel p to the object and to the background. For example, $R_p$ may reflect how the intensity of pixel p fits into a known intensity model (e.g., a histogram) of the object and background.

$B(A)$ comprises the boundary properties of segmentation A. The coefficient $B_{p,q}$ is interpreted as a penalty for a discontinuity between p and q; $B_{p,q}$ is large when pixels p and q are similar. Costs may be based on the local intensity gradient or Laplacian zero-crossings.
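One concrete choice (the histogram model suggested in the Boykov-Jolly paper) sets $R_p(\text{obj}) = -\ln \Pr(I_p \mid O)$ and $R_p(\text{bkg}) = -\ln \Pr(I_p \mid B)$; a minimal sketch, where the bin count and smoothing constant are my assumptions:

```python
import numpy as np

def region_penalties(intensities, obj_seed_vals, bkg_seed_vals, bins=32):
    """R_p(obj) = -ln Pr(I_p | O), R_p(bkg) = -ln Pr(I_p | B), with the
    probabilities estimated from histograms of the seed pixels (in [0, 1])."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    p_obj = np.histogram(obj_seed_vals, bins=edges)[0] + 1e-6  # avoid log(0)
    p_bkg = np.histogram(bkg_seed_vals, bins=edges)[0] + 1e-6
    p_obj, p_bkg = p_obj / p_obj.sum(), p_bkg / p_bkg.sum()
    idx = np.clip(np.digitize(intensities, edges) - 1, 0, bins - 1)
    return -np.log(p_obj[idx]), -np.log(p_bkg[idx])
```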

Examples:

Figure 2. Synthetic Gestalt example. The segmentation results in (b-d) are shown for various levels of relative importance of region versus boundary in (1). Note that the result in (b) corresponds to a wide range of $\lambda$.

Implementation

Proceedings of the International Conference on Computer Vision, Vancouver, Canada, July 2001.

The general workflow: we create a graph with two terminals (source and sink). The edge weights reflect the parameters in the regional and boundary terms of the cost function, as well as the known positions of seeds in the image. The seeds are O = {v} and B = {p}.
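A toy sketch of that construction using networkx's max-flow/min-cut (the constant K, chosen larger than any sum of n-link weights, makes the seed t-links effectively uncuttable; the function and its inputs are illustrative, not the paper's code):

```python
import networkx as nx

def interactive_cut(n_pixels, nlinks, R_obj, R_bkg, obj_seeds, bkg_seeds, lam=1.0):
    """nlinks: list of (p, q, B_pq) boundary edges. Builds the two-terminal
    graph and returns the (object, background) pixel sets."""
    G = nx.Graph()
    K = 1 + sum(b for _, _, b in nlinks)        # exceeds any cut through n-links
    for p, q, b_pq in nlinks:                   # n-links: boundary term B{p,q}
        G.add_edge(p, q, capacity=b_pq)
    for p in range(n_pixels):                   # t-links: region term + seeds
        G.add_edge('S', p, capacity=K if p in obj_seeds else lam * R_bkg[p])
        G.add_edge(p, 'T', capacity=K if p in bkg_seeds else lam * R_obj[p])
    _, (src_side, sink_side) = nx.minimum_cut(G, 'S', 'T')
    return src_side - {'S'}, sink_side - {'T'}
```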

Interactive graph cuts in medical images.

Credits & References: J. Shi and J. Malik, "Normalized Cuts and Image Segmentation," Proc. CVPR 1997; also IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 888-905, August 2000.

Y. Boykov and M. Jolly, "Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images," International Conference on Computer Vision, Vancouver, BC, 2001, pp. 105-112.

PAMI 2000! Slide credit: S. Lazebnik.

Interactive Image Segmentation, Fahim Mannan (260 266 294).

Slides Courtesy: Eric Xing, M. Hein & U.V. Luxburg.

Daniel Heilper, CS Department, Haifa University.

http://eniac.cs.qc.cuny.edu/andrew/gcml/Lecture21.pptx

Thank you all