Exploring the Parameter Space of Image Segmentation Algorithms
Talk at NCHU 2010 - p 1
Xiaoyi Jiang
Department of Mathematics and Computer Science, University of Münster
Germany
How to deal with parameters?
Typical approaches:
Not consider the problem at all:
“We have experimentally determined the parameter values …”
Supervised: training of parameter values based on training images with (manually specified) ground truth
Unsupervised: based on heuristics to measure segmentation quality
How to deal with parameters?
Drawbacks:
“We have experimentally determined …”: who believes that?
Supervised: training of parameter values based on GT; GT is not always available, and trained parameters are not optimal for a particular image
Unsupervised: based on self-judgement heuristics; still no good solution for self-judgement
How to deal with parameters?
Basic assumption:
Known reasonable range of good values for each parameter
Our intention: explore the parameter subspace without GT
A: investigate local behavior of parameters
B: adaptively compute an “optimal” segmentation within a parameter subspace (construction approach)
C: adaptively select an “optimal” parameter setting within a subspace (selection approach)
Natural landscape
[Figure: quality measure plotted over the (p1, p2) parameter plane; the peak marks the optimal parameters]
A. Investigate local behavior of parameters
Belief:
There is a subspace of good parameter values
Reality:
Yes, but there are local outliers within such a subspace!
A. Investigate local behavior of parameters
Felzenszwalb / Huttenlocher: Efficient graph-based image segmentation. Int. J. Computer Vision 59 (2004) 167–181
A. Investigate local behavior of parameters
Close-up: [Figure: two neighbouring parameter settings, NMI = 0.70 vs. NMI = 0.26]
A. Investigate local behavior of parameters
Deng / Manjunath: Unsupervised segmentation of color-texture regions in images and video. IEEE T-PAMI 23(2001) 800–810 (JSEG)
NMI = 0.61 / NMI = 0.76
A. Investigate local behavior of parameters
Frequency study on the Berkeley image set: strong (weak) outliers = segmentation results with NMI lower than 15% (10%) of the maximum NMI of the current image ensemble (5×5 subspace)
[Figure: outlier frequency histograms for FH and JSEG]
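The outlier definition above can be made concrete in a few lines; the 5×5 NMI grid below is invented for illustration, and the fraction 0.15 corresponds to the slide's "strong" outlier threshold:

```python
def count_outliers(nmi_grid, frac):
    """Flag parameter settings whose NMI falls below
    frac * (maximum NMI in the ensemble)."""
    flat = [v for row in nmi_grid for v in row]
    threshold = frac * max(flat)
    return sum(v < threshold for v in flat)

# Hypothetical 5x5 parameter subspace with two local outliers.
grid = [
    [0.70, 0.71, 0.69, 0.70, 0.72],
    [0.70, 0.26, 0.71, 0.70, 0.69],   # mild outlier (0.26)
    [0.71, 0.70, 0.70, 0.69, 0.70],
    [0.70, 0.71, 0.02, 0.70, 0.71],   # severe outlier (0.02)
    [0.69, 0.70, 0.70, 0.71, 0.70],
]
print(count_outliers(grid, 0.15))  # 0.02 < 0.15 * 0.72 = 0.108 → 1
```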
A. Investigate local behavior of parameters
Danger: There are local outliers (salt-and-pepper noise)!
Solution: similar to median filtering
S = {S1, …, Sn}: segmentations around some parameter setting
d(·,·): distance function between segmentations
Set median: Ŝ = arg min over S ∈ S of Σi d(S, Si)
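The set median can be computed by exhaustive evaluation of the sum of distances. A minimal sketch, assuming segmentations are label tuples and using a toy Hamming distance (any segmentation distance would do):

```python
def set_median(segmentations, dist):
    """Return the member of the ensemble minimizing the sum of
    distances to all other members (set median)."""
    best, best_cost = None, float("inf")
    for s in segmentations:
        cost = sum(dist(s, t) for t in segmentations)
        if cost < best_cost:
            best, best_cost = s, cost
    return best

# Toy example: label tuples compared with Hamming distance.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

ensemble = [(0, 0, 1, 1), (0, 0, 1, 1), (0, 1, 1, 1), (1, 1, 0, 0)]
print(set_median(ensemble, hamming))  # → (0, 0, 1, 1)
```

Like a median filter, this suppresses the isolated outlier (1, 1, 0, 0) because it lies far from the rest of the ensemble.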
A. Investigate local behavior of parameters
FH: best / worst / set median
A. Investigate local behavior of parameters
JSEG:
B: Adaptively compute an “optimal“ segmentation
Belief:
There is a reasonable subspace of good parameter values. Some optimal parameter setting can be determined by experiments or training.
Reality:
Yes, but this parameter setting is not optimal for a particular image!
B: Adaptively compute an “optimal“ segmentation
Exactly the same parameter set applied to two images
B: Adaptively compute an “optimal“ segmentation
Segmentation ensemble technique:
Use a sampled parameter subspace to compute an ensemble S of segmentations
Compute a final segmentation based on S; this combined segmentation tends to be a good one within the explored parameter subspace
B: Adaptively compute an “optimal“ segmentation
L. Grady: Random walks for image segmentation. IEEE T-PAMI 28 (2006) 1768–1783
Excursus: Random walker based segmentation
(a) A two-region image (b) User-defined seeds for each region
(c) A 4-connected lattice topology
seeded (labeled) pixels / unseeded (unlabeled) pixels
edge weight: similarity between two nodes, based on e.g. intensity gradient, color changes
(d) An undirected weighted graph
low-weight edge (sharp color gradient)
The algorithm labels an unseeded pixel in the following steps:
Step 1. Calculate the probability that a random walker starting at an unseeded pixel x first reaches a seed with label s
Excursus: Random walker based segmentation
[Figure: per-pixel probabilities of first reaching the red seed (0.97/0.90/0.85 near the red seed, falling to 0.15/0.10/0.03 near the blue seed) and, symmetrically, of first reaching the blue seed]
Step 2. Label each pixel with the most probable seed destination
Excursus: Random walker based segmentation
[Figure: per-pixel probability pairs (P_red, P_blue), from (0.97, 0.03) at the left boundary through (0.03, 0.97) at the right]
A segmentation corresponding to region boundary is obtained by biasing the random walker to avoid crossing sharp color gradients
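Step 1 amounts to solving a linear system with the graph Laplacian (the combinatorial Dirichlet problem in Grady's formulation). A minimal dense sketch on a toy 4-pixel chain with unit edge weights; the helper name and the tiny example are illustrative assumptions:

```python
import numpy as np

def random_walker_probs(n_nodes, edges, seeds):
    """Probability that a walker starting at each unseeded node first
    reaches a seed of each label. edges: (i, j, weight) triples;
    seeds: {node: label}. Columns follow sorted label order."""
    L = np.zeros((n_nodes, n_nodes))          # graph Laplacian
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    labels = sorted(set(seeds.values()))
    S = sorted(seeds)                          # seeded nodes
    U = [v for v in range(n_nodes) if v not in seeds]
    # One-hot label indicators for the seeded nodes.
    M = np.array([[1.0 if seeds[s] == l else 0.0 for l in labels] for s in S])
    B = L[np.ix_(U, S)]                        # unseeded-seeded block
    L_U = L[np.ix_(U, U)]                      # unseeded-unseeded block
    return np.linalg.solve(L_U, -B @ M)        # one row per unseeded node

# Chain 0-1-2-3, unit weights, seeds at both ends.
probs = random_walker_probs(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1)],
                            {0: "red", 3: "blue"})
print(np.round(probs, 3))  # columns (blue, red): node 1 → [1/3, 2/3]
```

Step 2 is then a row-wise argmax: node 1 is labeled red, node 2 blue.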
Excursus: Random walker based segmentation
Original Seeds indicating four objects Resulting segmentation
Label 1 probabilities Label 2 probabilities Label 3 probabilities Label 4 probabilities
B: Adaptively compute an “optimal“ segmentation
Connection to random walker based segmentation:
The input segmentations provide strong hints about where to automatically place some seeds
Then the situation is the same as image segmentation with manually specified seeds: apply the random walker algorithm to achieve a final segmentation
Random walker based segmentation ensemble technique:
Generate a graph from input segmentations
Extract seed regions
Compute a final combined segmentation result
B: Adaptively compute an “optimal“ segmentation
Graph generation:
Weight wij in G indicates how probably two pixels pi and pj belong to the same image region.
Solution: count the number nij of initial segmentations in which pi and pj share the same region label. Then define the weight function as a Gaussian weighting:
wij = exp[-β (1 - nij / N)]
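The weighting is a one-liner; the value of β and the toy label vectors below are illustrative assumptions:

```python
import math

def edge_weight(labels_i, labels_j, beta=2.0):
    """w_ij = exp[-beta * (1 - n_ij / N)], where n_ij counts the
    input segmentations in which pixels p_i and p_j share a label
    and N is the ensemble size."""
    N = len(labels_i)
    n_ij = sum(a == b for a, b in zip(labels_i, labels_j))
    return math.exp(-beta * (1.0 - n_ij / N))

# p_i and p_j carry the same label in 3 of 4 input segmentations:
print(edge_weight([1, 1, 2, 5], [1, 1, 2, 7]))  # exp(-2 * 0.25) ≈ 0.607
```

Full agreement (n_ij = N) gives the maximum weight 1; the more the ensemble disagrees about a pixel pair, the weaker the edge.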
B: Adaptively compute an “optimal“ segmentation
Candidate seed region extraction:
We build a new graph G* by preserving those edges with weight wij = 1 only (pi and pj have the same label in all initial segmentations) and removing all other edges. Then, all connected subgraphs in G* build the initial seed regions.
Grouping candidate seed regions: A reduction of seed regions is performed by iteratively merging the two closest candidate seed regions until some termination criterion (thresholding) is satisfied.
Optimization of K (number of seed regions): based on an approximation of the generalized median segmentation, investigating only the subspace consisting of the combination segmentations for all possible K ∈ [Kmin, Kmax].
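The candidate seed extraction (keep only edges with w_ij = 1, then take connected components) can be sketched with a union-find over a 4-connected pixel grid. The function name, grid layout, and example labels are assumptions for illustration:

```python
def seed_regions(label_matrix, grid_shape):
    """Connected components of the graph that keeps only 4-neighbour
    pixel pairs labelled identically in ALL input segmentations
    (i.e. w_ij = 1). label_matrix[k][p] = label of pixel p in
    segmentation k; pixels indexed row-major on grid_shape = (h, w)."""
    h, w = grid_shape
    parent = list(range(h * w))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    def agree_everywhere(p, q):
        return all(seg[p] == seg[q] for seg in label_matrix)
    for r in range(h):
        for c in range(w):
            p = r * w + c
            if c + 1 < w and agree_everywhere(p, p + 1):  # right neighbour
                union(p, p + 1)
            if r + 1 < h and agree_everywhere(p, p + w):  # bottom neighbour
                union(p, p + w)
    comps = {}
    for p in range(h * w):
        comps.setdefault(find(p), []).append(p)
    return list(comps.values())

# Two 2x2 segmentations that agree only on the left column:
segs = [[0, 0, 0, 1], [0, 1, 0, 1]]
print(seed_regions(segs, (2, 2)))  # components {0, 2}, {1}, {3}
```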
B: Adaptively compute an “optimal“ segmentation
graph G / initial seeds / final result (optimal K)
B: Adaptively compute an “optimal“ segmentation
worst / median / best input segmentation / combination segmentation
B: Adaptively compute an “optimal“ segmentation
Comparison (per image): Worst / best / average input & combination
B: Adaptively compute an “optimal“ segmentation
f(n): number of images for which the combination result is worse than the best n input segmentations
The ensemble technique outperforms all 24 input segmentations in 78 cases. For 70% (210) of the 300 test images, our solution is beaten by at most 5 input segmentations.
B: Adaptively compute an “optimal“ segmentation
Comparison: average performance for all 300 test images (for each of 24 parameter settings)
B: Adaptively compute an “optimal“ segmentation
The dream must go on!
Additional applications:
2.5D range image segmentation
detect double contours by dynamic programming (layers of intima and adventitia for computing the intima-media thickness)
B: Adaptively compute an “optimal“ segmentation
B: Adaptively compute an “optimal“ segmentation
Segmenter combination: There exists no universal segmentation algorithm that can successfully segment all images, and it is not easy to know the optimal algorithm for one particular image.
Instead of looking for the best segmenter which is hardly possible on a per-image basis, now we look for the best segmenter combiner.
Instead of looking for the best set of features and the best classifier, now we look for the best set of classifiers and then the best combination method.
Ho, 2002
C: Adaptively select an optimal parameter setting
Belief:
There are heuristics to measure segmentation quality
Reality:
Yes, but optimizing such heuristics does not necessarily yield segmentations that match human perception!
C: Adaptively select an optimal parameter setting
Observations:
Different segmenters tend to produce similar good segmentations, but dissimilar bad segmentations
(The subspace of bad segmentations is substantially larger than the subspace of good segmentations)
Idea: compare segmentation results of different segmenters and figure out good segmentations by means of similarity tests
C: Adaptively select an optimal parameter setting
C: Adaptively select an optimal parameter setting
Outline of the framework:
Compute for each segmentation algorithm N segmentations
Compute an N × N similarity matrix by comparing each segmentation of the first algorithm with each segmentation of the second algorithm
Determine the best parameter setting from the similarity matrix
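The three steps above can be sketched as follows. Both the similarity measure (a Rand-index-like count of consistent pixel pairs) and the rule for reading the best setting off the matrix (highest best match) are illustrative assumptions, since the slides do not fix either choice:

```python
def select_best_setting(segs_a, segs_b, similarity):
    """Build the N x N similarity matrix between the segmentations of
    two algorithms, then pick the setting of algorithm A whose best
    match with any segmentation of algorithm B is highest."""
    matrix = [[similarity(a, b) for b in segs_b] for a in segs_a]
    scores = [max(row) for row in matrix]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, matrix

def sim(a, b):
    """Fraction of pixel pairs on which the two label images agree
    about being in the same region (label names may differ)."""
    n = len(a)
    agree = sum((a[i] == a[j]) == (b[i] == b[j])
                for i in range(n) for j in range(i + 1, n))
    return agree / (n * (n - 1) / 2)

segs_a = [[0, 0, 1, 1], [0, 1, 2, 3]]   # two parameter settings of A
segs_b = [[5, 5, 6, 6]]                 # one setting of B
best, M = select_best_setting(segs_a, segs_b, sim)
print(best)  # → 0 (first setting of A agrees best with B)
```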
C: Adaptively select an optimal parameter setting
Weaker segmenter CSC benefits from stronger FH/JSEG
C: Adaptively select an optimal parameter setting
Also FH benefits from weaker CSC
C: Adaptively select an optimal parameter setting
Also JSEG benefits from weaker CSC
Conclusions
Basic assumption:
Known reasonable range of good values for each parameter
Our intention: Explore the parameter subspace without GT
A: investigate local behavior of parameters
B: adaptively compute an “optimal” segmentation within a parameter subspace
C: adaptively select an optimal parameter setting within a subspace on a per-image basis
Conclusions
We could demonstrate:
A: Local outliers can be successfully removed by the set median operator
B: The combination performance tends to reach the best input segmentation; in some cases the combined segmentation even outperforms the entire input ensemble
C: Segmenters can help each other for selecting good parameter values
Conclusions
Combination (ensemble) techniques:
Generalized median: Strings, graphs, clusterings, …
Multiple classifier systems
……
Combining image segmentations
Three cobblers combined equal the mastermind. (Chinese proverb)
Thank you!