CURE: EFFICIENT CLUSTERING ALGORITHM FOR LARGE DATASETS VULAVALA VAMSHI PRIYA.
CURE: EFFICIENT CLUSTERING ALGORITHM FOR LARGE DATASETS
VULAVALA VAMSHI PRIYA
Contents
I. Problems in traditional clustering methods
II. Basic idea of CURE clustering
III. Improved CURE
IV. Summary
Problems in traditional clustering methods
Partitional clustering
This category of clustering methods tries to reduce the data set into k clusters based on some criterion function.
The most common criterion is the square-error criterion.
This method favors clusters whose data points are as compact and as well separated as possible.
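As a concrete sketch (plain Python, with a helper name of my own choosing), the square-error criterion sums the squared distance from each point to its cluster's centroid:

```python
def square_error(clusters):
    """Total square-error: for each cluster, sum the squared Euclidean
    distances from its points to the cluster centroid."""
    total = 0.0
    for pts in clusters:
        dim = len(pts[0])
        centroid = [sum(p[d] for p in pts) / len(pts) for d in range(dim)]
        total += sum(sum((p[d] - centroid[d]) ** 2 for d in range(dim))
                     for p in pts)
    return total

# Two compact, well-separated clusters give a small square-error.
clusters = [[(0, 0), (1, 0), (0, 1)], [(10, 10), (11, 10), (10, 11)]]
print(square_error(clusters))
```

A partitional method such as k-means searches for the assignment of points to k clusters that minimizes this quantity.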
Errors can arise when the square-error is reduced by splitting a large cluster in favor of other groups.
Figure: a partitional method splitting a large cluster.
Hierarchical clustering
This category of clustering methods merges sequences of disjoint clusters into the target k clusters, based on the minimum distance between two clusters.
The distance between clusters can be measured as:
- d_mean: distance between the cluster means (centroids)
- d_ave: average distance between points of the two clusters
- d_min: distance between the two nearest points, one from each cluster
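The three distance measures above can be sketched directly (plain Python; the helper names are mine):

```python
import math

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def d_mean(u, v):
    """Distance between the cluster means (centroids)."""
    mu = [sum(c) / len(u) for c in zip(*u)]
    mv = [sum(c) / len(v) for c in zip(*v)]
    return dist(mu, mv)

def d_ave(u, v):
    """Average pairwise distance between points of the two clusters."""
    return sum(dist(p, q) for p in u for q in v) / (len(u) * len(v))

def d_min(u, v):
    """Distance between the two nearest points, one from each cluster."""
    return min(dist(p, q) for p in u for q in v)

u = [(0.0, 0.0), (2.0, 0.0)]
v = [(5.0, 0.0), (7.0, 0.0)]
print(d_mean(u, v), d_ave(u, v), d_min(u, v))
```

Which measure is used determines which pair of clusters looks "closest" and therefore gets merged next.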
This method favors hyper-spherical shapes and uniformly sized clusters.
Take some elongated data as an example:
Figure: result of d_mean vs. result of d_min on elongated clusters.
Problems summary
1. Traditional clustering mainly favors spherical shapes.
2. Data points within a cluster must be compact.
3. Clusters must be well separated from each other.
4. Cluster sizes must be uniform.
5. Outliers greatly disturb the clustering result.
Basic idea of CURE clustering
General CURE clustering procedure
1. It is similar to the hierarchical clustering approach, but it uses a set of sample points as the cluster representative rather than every point in the cluster.
2. First, set a target sample count c. Then select c well-scattered sample points from the cluster.
3. The chosen scattered points are shrunk toward the centroid by a fraction α, where 0 < α < 1.
4. These points are used as the representatives of the clusters and serve as the points in the d_min cluster-merging approach.
5. After each merge, c sample points are selected from the original representatives of the previous clusters to represent the new cluster.
6. Merging stops once the target of k clusters is reached.
Figure: the nearest pair of clusters is merged at each step.
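Steps 2 and 3 can be sketched as follows (my own Python rendering; the farthest-point heuristic for "well scattered" matches the spirit of the algorithm, but the function name and tie-breaking are assumptions):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cure_representatives(points, c=4, alpha=0.5):
    """Pick up to c well-scattered points, then shrink each toward the
    centroid by fraction alpha (0 < alpha < 1)."""
    centroid = [sum(col) / len(points) for col in zip(*points)]
    # Farthest-point heuristic: start with the point farthest from the
    # centroid, then repeatedly add the point farthest from the chosen set.
    scattered = [max(points, key=lambda p: dist(p, centroid))]
    while len(scattered) < min(c, len(points)):
        scattered.append(max(points,
                             key=lambda p: min(dist(p, s) for s in scattered)))
    # Shrinking pulls the representatives inward, damping the effect of
    # outliers on the cluster boundary.
    return [tuple(x + alpha * (m - x) for x, m in zip(p, centroid))
            for p in scattered]

print(cure_representatives([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]))
```

With α = 0.5 the four corners of the square are pulled halfway toward the centroid (2, 2).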
Pseudo function of CURE
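The procedure above can be rendered end to end as a naive Python sketch. This is my own simplification, not the paper's pseudocode: there is no heap or k-d tree (so it runs in roughly O(n³)), and it recomputes representatives from all merged points rather than selecting them from the previous representatives as step 5 describes.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def d_min(reps_u, reps_v):
    return min(dist(p, q) for p in reps_u for q in reps_v)

def cure_cluster(points, k, c=4, alpha=0.5):
    """Simplified CURE loop: every point starts as its own cluster;
    repeatedly merge the two clusters whose shrunken representatives
    are closest, until k clusters remain."""
    def reps(pts):
        centroid = [sum(col) / len(pts) for col in zip(*pts)]
        scattered = [max(pts, key=lambda p: dist(p, centroid))]
        while len(scattered) < min(c, len(pts)):
            scattered.append(max(pts,
                                 key=lambda p: min(dist(p, s) for s in scattered)))
        return [tuple(x + alpha * (m - x) for x, m in zip(p, centroid))
                for p in scattered]

    clusters = [[p] for p in points]
    rep = [reps(cl) for cl in clusters]
    while len(clusters) > k:
        # Find the closest pair of clusters by representative distance.
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: d_min(rep[ab[0]], rep[ab[1]]))
        merged = clusters[i] + clusters[j]
        clusters = [cl for t, cl in enumerate(clusters) if t not in (i, j)]
        clusters.append(merged)
        rep = [reps(cl) for cl in clusters]
    return clusters
```

On two well-separated groups of three points each, `cure_cluster(points, k=2)` recovers the two groups.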
Efficiency of CURE
The worst-case time complexity is O(n² log n). The space complexity is O(n), thanks to the use of a k-d tree and a heap.
Improved CURE
Random Sampling
When dealing with a large database, we cannot store every data point in memory, and merging over the full database takes a very long time. Random sampling reduces both the time complexity and the memory usage. Suppose that to detect a cluster u we must capture at least a fraction f of its points, i.e. f|u| points. The required minimum sample size s_min can be expressed as:

s_min = f·n + (n/|u|)·log(1/δ) + (n/|u|)·√( (log(1/δ))² + 2·f·|u|·log(1/δ) )

See reference (i) for the proof. The point here is that we can determine a sample size s_min such that the probability of drawing enough samples from every cluster u is at least 1 − δ.
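The bound is easy to evaluate numerically; a small sketch (the parameter values below are illustrative, not from the slides):

```python
import math

def s_min(n, u_size, f, delta):
    """Minimum sample size so that, with probability at least 1 - delta,
    a uniform random sample contains at least f * u_size points from a
    cluster of size u_size (Chernoff-bound result cited above)."""
    log_term = math.log(1.0 / delta)
    return (f * n
            + (n / u_size) * log_term
            + (n / u_size) * math.sqrt(log_term ** 2
                                       + 2 * f * u_size * log_term))

# e.g. n = 100,000 points, smallest cluster 5,000 points,
# want 10% of it, with failure probability 0.1%:
print(s_min(100_000, 5_000, 0.1, 0.001))
```

Note that s_min grows with n but shrinks as the smallest cluster |u| grows, which is why sampling works well when clusters are not vanishingly small.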
Partitioning and two-pass clustering
In addition, we use a two-pass approach to reduce the computation time.
First, we divide the n data points into p partitions, each containing n/p points.
We then pre-cluster each partition until n/(pq) clusters remain in it, for some q > 1.
Each cluster from the first-pass result is then used as input to the second-pass clustering to form the final clusters.
The overall improvement: the first pass costs p · O((n/p)² log(n/p)) = O((n²/p) log(n/p)), and the second pass operates on only the n/q pre-clusters, costing O((n/q)² log(n/q)); both terms are far below the single-pass O(n² log n).
Also, to maintain clustering quality, n/(pq) should be 2 to 3 times k.
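The bookkeeping of the two passes can be sketched as follows (`two_pass_plan` is a hypothetical helper of mine that just checks the arithmetic and the quality constraint, not an implementation of the clustering itself):

```python
def two_pass_plan(n, p, q, k):
    """Plan the two-pass scheme: p partitions of n/p points, each
    pre-clustered down to n/(p*q) clusters, leaving p * n/(p*q) = n/q
    partial clusters as input for the second (final) pass."""
    per_partition_points = n // p
    per_partition_clusters = n // (p * q)
    second_pass_input = p * per_partition_clusters  # == n // q
    # Quality guard from the slides: keep n/(p*q) at 2-3x the target k.
    assert per_partition_clusters >= 2 * k, "n/(p*q) should be 2-3x k"
    return per_partition_points, per_partition_clusters, second_pass_input

print(two_pass_plan(60_000, 10, 3, 100))
```

Here 60,000 points split into 10 partitions of 6,000 points, each reduced to 2,000 partial clusters, so the second pass starts from 20,000 inputs instead of 60,000.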
Outlier elimination
We can introduce outlier elimination by two methods.
1. Random sampling: with random sampling, most outlier points are filtered out.
2. Outlier elimination: since outliers do not form compact groups, their clusters grow very slowly during the merge stage. We therefore trigger an elimination procedure during merging that removes clusters with only 1 to 2 data points from the cluster list.
To prevent these outliers from merging into proper clusters, the procedure must be triggered at the right stage. In general, we trigger it when the number of clusters has dropped to 1/3 of the original count.
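The elimination step itself is a simple filter (my own sketch; the `min_size` threshold of 3 encodes the "1 to 2 points" rule above):

```python
def eliminate_outliers(clusters, min_size=3):
    """Partway through merging, drop clusters that are still tiny
    (fewer than min_size points): slow-growing clusters are treated
    as outliers."""
    kept = [cl for cl in clusters if len(cl) >= min_size]
    removed = [cl for cl in clusters if len(cl) < min_size]
    return kept, removed

clusters = [[(0, 0), (1, 1), (2, 2), (3, 3)],  # a real cluster
            [(99, 99)],                        # an isolated outlier
            [(50, 50), (51, 51)]]              # a slow-growing pair
print(eliminate_outliers(clusters))
```

Triggering this too early risks deleting genuine small clusters; too late, and outliers have already been absorbed, hence the 1/3 heuristic above.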
Data labeling
Because only a random sample is clustered, every remaining data point must be labeled with its proper cluster.
Each data point is assigned to the cluster that has the representative point nearest to it.
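The labeling rule can be sketched as a nearest-representative lookup (my own helper; `representatives` maps a cluster id to its list of representative points):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def label(point, representatives):
    """Assign a point to the cluster whose nearest representative point
    is closest to it."""
    return min(representatives,
               key=lambda cid: min(dist(point, r) for r in representatives[cid]))

reps = {0: [(0.0, 0.0), (1.0, 1.0)], 1: [(10.0, 10.0)]}
print(label((2.0, 2.0), reps))
```

Using multiple representatives per cluster (rather than a single centroid) lets this pass assign points correctly even for elongated or non-spherical clusters.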
Final overview of the CURE flow
Data → Draw random sample → Partition sample → Partially cluster partitions → Eliminate outliers → Cluster partial clusters → Label data on disk
Sample results with different parameters
Figure: results with different shrinking factors α and different numbers of representatives c.
Figure: relation of execution time to the number of partitions p and the number of sample points s.
Summary
CURE can effectively detect clusters of non-spherical shape with the help of scattered representative points and centroid shrinking.
CURE reduces computation time and memory load with random sampling and two-pass clustering.
CURE can effectively remove outliers.
The quality and effectiveness of CURE can be tuned by varying s, p, and c to adapt to different input data sets.