Transcript of DataMining.ppt

Page 1: DataMining.ppt

B. Data Mining
Objective: Provide a quick overview of data mining

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 2: DataMining.ppt

B. Data Mining: What is Data Mining?

Extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) patterns or knowledge from huge amounts of data.

Types of problems:
  Supervised (learning): Classification, Regression
  Unsupervised (learning), i.e., clustering
  Association Rules
  Time Series Analysis

Page 3: DataMining.ppt

B. Data Mining: Classification

Find ways to separate data items into pre-defined groups.

We know X and Y belong together; find other things in the same group.

Requires "training data": data items where the group is known.

Uses: Profiling
Technologies: decision trees (results are human-understandable), neural nets

Example: "Route documents to most likely interested parties", e.g., English or non-English? Sports or politics?

[Figure: training data with known groups is fed to a tool, which produces a classifier.]

Page 4: DataMining.ppt

B. Data Mining: Clustering

Find groups of similar data items.

Statistical techniques require some definition of "distance" (e.g., between travel profiles), while conceptual techniques use background concepts and logical descriptions.

Uses: Demographic analysis
Technologies: Self-Organizing Maps, Probability Densities, Conceptual Clustering

Example: "Group people with similar purchasing profiles"

[Figure: example clusters, e.g., {George, Patricia}, {Jeff, Evelyn, Chris}, {Rob}.]

Page 5: DataMining.ppt

B. Data Mining: Association Rules

Identify dependencies in the data:
  X makes Y likely
  Indicate the significance of each dependency
  Bayesian methods

Uses: Targeted marketing
Technologies: AIS, SETM, Hugin, TETRAD II

Example: "Find groups of items commonly purchased together"
  People who purchase fish are extraordinarily likely to purchase wine.
  People who purchase turkey are extraordinarily likely to purchase cranberries.

Date/Time   Register  Fish  Turkey  Cranberries  Wine
12/6 13:15     2       N      Y         Y         Y
12/6 13:16     3       Y      N         N         Y
...
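To make support and confidence concrete, here is a minimal Python sketch (not from the slides; the two real rows above are padded with two hypothetical transactions) that scores the rule "fish => wine":

```python
# A minimal sketch: support and confidence of "fish => wine" over toy
# set-valued transactions (the last two transactions are made up).
transactions = [
    {"turkey", "cranberries", "wine"},   # 12/6 13:15, register 2
    {"fish", "wine"},                    # 12/6 13:16, register 3
    {"fish", "wine", "cranberries"},     # hypothetical extra row
    {"turkey", "cranberries"},           # hypothetical extra row
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    """Estimate of P(rhs | lhs): support of the union over support of lhs."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

print(support({"fish", "wine"}, transactions))       # 0.5
print(confidence({"fish"}, {"wine"}, transactions))  # 1.0
```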

Page 6: DataMining.ppt

B. Data Mining: Time Series Analysis

A value (or set of values) that changes in time; we want to find the pattern.

Uses: Stock Market Analysis
Technologies: Statistics, '(Stock) Technical Analysis', Dynamic Programming


Page 7: DataMining.ppt

B. Data Mining: Relation with Statistics, Machine Learning, etc.

[Venn-style diagram: Data Mining / Knowledge Discovery overlaps with:
  Statistics and Pattern Recognition (classification, regression, clustering, time series);
  Machine Learning (reinforcement learning, inductive logic programming);
  Database work (data warehousing, OLAP concepts, association rules);
  Others: intelligent data analysis, discovery science, etc.]

Page 8: DataMining.ppt

B. Data Mining: Process of Data Mining

Adapted from: U. Fayyad et al. (1995), "From Knowledge Discovery to Data Mining: An Overview," Advances in Knowledge Discovery and Data Mining, U. Fayyad et al. (Eds.), AAAI/MIT Press.

[Process diagram:]
Data -> (Selection) -> Target Data -> (Preprocessing) -> Preprocessed Data -> (Data Mining) -> Patterns -> (Interpretation/Evaluation) -> Knowledge

Page 9: DataMining.ppt

B. Data Mining: Issue 1: Data Selection

Source of data (which source do you think is more reliable?). Could be from a database or supplementary data.

Is the data clean? Does it make sense? How many instances? What sort of attributes does the data have? What sort of class labels does the data have?

Page 10: DataMining.ppt

B. Data Mining: Issue 2: Data Preparation

Data cleaning
  Preprocess data in order to reduce noise and handle missing values
Relevance analysis (feature selection)
  Remove irrelevant or redundant attributes (curse of dimensionality)
Data transformation
  Generalize and/or normalize data

Page 11: DataMining.ppt

B. Data Mining: Issue 3: Evaluating Classification Methods

Predictive accuracy
Speed and scalability
  time to construct the model
  time to use the model
Robustness
  handling noise and missing values
Interpretability
  understanding and insight provided by the model
Goodness of rules
  decision tree size
  compactness of classification rules

Page 12: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
      B.2.1.1.1. Decision Tree
      B.2.1.1.2. Neural Network
      B.2.1.1.3. Support Vector Machine
      B.2.1.1.4. Instance-Based Learning
      B.2.1.1.5. Bayes Learning
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 13: DataMining.ppt

B.2.1. Supervised Learning: Classification vs. Regression

Classification:
  predicts categorical class labels (discrete or nominal)
  classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses it in classifying new data

Regression:
  models continuous-valued functions

Page 14: DataMining.ppt

B.2.1.1. Classification: Training Dataset

age      income  student  credit_rating  buys_computer
<=30     high    no       fair           no
<=30     high    no       excellent      no
31...40  high    no       fair           yes
>40      medium  no       fair           yes
>40      low     yes      fair           yes
>40      low     yes      excellent      no
31...40  low     yes      excellent      yes
<=30     medium  no       fair           no
<=30     low     yes      fair           yes
>40      medium  yes      fair           yes
<=30     medium  yes      excellent      yes
31...40  medium  no       excellent      yes
31...40  high    yes      fair           yes
>40      medium  no       excellent      no

This follows an example from Quinlan's ID3.
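As an illustration of how ID3 chooses a split (a sketch, not code from the slides), the information gain of the age attribute can be computed directly from the table:

```python
# ID3-style information gain of `age` on the buys_computer table above.
import math
from collections import Counter

data = [  # (age, buys_computer) pairs, row by row from the table
    ("<=30", "no"), ("<=30", "no"), ("31...40", "yes"), (">40", "yes"),
    (">40", "yes"), (">40", "no"), ("31...40", "yes"), ("<=30", "no"),
    ("<=30", "yes"), (">40", "yes"), ("<=30", "yes"), ("31...40", "yes"),
    ("31...40", "yes"), (">40", "no"),
]

def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

base = entropy([y for _, y in data])   # ~0.940 bits before splitting

# Expected entropy after splitting on age, weighted by branch size.
split = 0.0
for value in ("<=30", "31...40", ">40"):
    branch = [y for a, y in data if a == value]
    split += len(branch) / len(data) * entropy(branch)

print(f"gain(age) = {base - split:.3f}")   # ~0.246
```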

Page 15: DataMining.ppt

B.2.1.1. Classification: Output a Decision Tree for "buys_computer"

age?
  <=30   -> student?
              no  -> no
              yes -> yes
  31..40 -> yes
  >40    -> credit rating?
              excellent -> no
              fair      -> yes

Page 16: DataMining.ppt

B.2.1.1.1 Decision Tree: Another Example

[Decision tree diagram: root "weather" with branches sunny, cloudy, and rainy; internal tests "windy" and "Temp > 75" with yes/no branches; leaves "BBQ" and "Eat in".]

Page 17: DataMining.ppt

B.2.1.1.1 Decision Tree: Avoid Overfitting in Classification

Overfitting: an induced tree may overfit the training data
  Too many branches, some of which may reflect anomalies due to noise or outliers
  Poor accuracy for unseen samples

Two approaches to avoid overfitting
  Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
    Difficult to choose an appropriate threshold
  Postpruning: remove branches from a "fully grown" tree to get a sequence of progressively pruned trees
    Use a set of data different from the training data to decide which is the "best pruned tree"

Page 18: DataMining.ppt

B.2.1.1.1 Decision Tree: Approaches to Determine the Final Tree Size

Separate training (2/3) and testing (1/3) sets
Use cross-validation, e.g., 10-fold cross-validation (sketched below)
  Partition the data into 10 subsets
  Run the training 10 times, each time using a different subset as the test set and the rest as training
Use all the data for training, but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node improves the entire distribution
Use the minimum description length (MDL) principle: halt growth of the tree when the encoding is minimized
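A minimal sketch of the k-fold procedure (the `train` and `accuracy` callables are hypothetical stand-ins for any classifier's fit/score functions, not names from the slides):

```python
# k-fold cross-validation: every instance is tested exactly once.
import random

def k_fold_cv(data, k, train, accuracy, seed=0):
    """Return the average test accuracy over k folds."""
    data = data[:]                           # don't shuffle the caller's list
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]   # k roughly equal subsets
    scores = []
    for i in range(k):
        test = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        scores.append(accuracy(train(training), test))
    return sum(scores) / k
```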

Page 19: DataMining.ppt

B.2.1.1.1 Decision Tree: Enhancements to Basic Decision Tree Induction

Allow for continuous-valued attributes
  Dynamically define new discrete-valued attributes that partition the continuous attribute values into a discrete set of intervals
Handle missing attribute values
  Assign the most common value of the attribute
  Assign a probability to each of the possible values
Attribute construction
  Create new attributes based on existing ones that are sparsely represented
  This reduces fragmentation, repetition, and replication

Page 20: DataMining.ppt

B.2.1.1.1 Decision Tree: Classification in Large Databases

Classification: a classical problem extensively studied by statisticians and machine learning researchers

Scalability: classifying data sets with millions of examples and hundreds of attributes with reasonable speed

Why decision tree induction in data mining?
  relatively fast learning speed (compared with other classification methods)
  convertible to simple and easy-to-understand classification rules
  classification accuracy comparable with other methods

Page 21: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
      B.2.1.1.1. Decision Tree
      B.2.1.1.2. Neural Network
      B.2.1.1.3. Support Vector Machine
      B.2.1.1.4. Instance-Based Learning
      B.2.1.1.5. Bayes Learning
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 22: DataMining.ppt

B.2.1.1.2 Neural Network: A Neuron

The n-dimensional input vector x is mapped into the variable y by means of a scalar product and a nonlinear function mapping.

[Diagram: inputs x0, x1, ..., xn feed a weighted sum through weight vector w (w0, w1, ..., wn); the weighted sum (minus a bias term) passes through activation function f to give output y.]

Page 23: DataMining.ppt

B.2.1.1.2 Neural Network: A Neuron

[Same diagram as the previous slide: input vector x, weight vector w, weighted sum, activation function f, output y.]

$y = \mathrm{sign}\!\left(\sum_{i=1}^{n} w_i x_i + w_0\right)$

(Equation reconstructed from the garbled transcript; $w_0$ is the bias weight shown in the diagram.)

Page 24: DataMining.ppt

B.2.1.1.2 Neural Network: A Neuron

[Same diagram with concrete numbers: example inputs and weights such as 0.1, 0.5, 0.2, 0.6, 0.7, and -1.3 feeding the weighted sum.]

$y = \mathrm{sign}\!\left(\sum_{i=1}^{n} w_i x_i + w_0\right)$

Need to learn this (the weight vector w).
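The slides leave the learning procedure to a later course; as a hedged illustration, the classic perceptron rule learns the weight vector for the sign unit above (assumptions: labels are +/-1 and the bias weight w0 is folded in, details the slide does not specify):

```python
# Perceptron learning rule for y = sign(sum_i w_i * x_i + w_0).
def sign(v):
    return 1 if v >= 0 else -1

def perceptron(samples, n_features, lr=0.1, epochs=100):
    w = [0.0] * (n_features + 1)          # w[0] is the bias weight w0
    for _ in range(epochs):
        errors = 0
        for x, y in samples:              # x: feature list, y: +1 or -1
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            if sign(z) != y:              # misclassified: nudge w toward y
                w[0] += lr * y
                for i, xi in enumerate(x):
                    w[i + 1] += lr * y * xi
                errors += 1
        if errors == 0:                   # perfectly separates training set
            break
    return w

# Learns an AND-like separation of four points in a few epochs.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
print(perceptron(data, n_features=2))
```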

Page 25: DataMining.ppt

B.2.1.1.2 Neural Network: Linear Classification

Binary classification problem; earlier known as the linear discriminant.

[Figure: 2-D scatter of 'x' and 'o' points. The data above the green line belongs to class 'x'; the data below the green line belongs to class 'o'.]

Examples: SVM, Perceptron, Probabilistic Classifiers

$y = \mathrm{sign}\!\left(\sum_{i=1}^{n} w_i x_i\right)$

Page 26: DataMining.ppt

B.2.1.1.2 Neural Network: Multi-Layer Perceptron

[Diagram: input vector x_i feeds input nodes; weights w_ij connect them to hidden nodes (hidden layer), which connect to output nodes (output layer) producing the output vector.]

Page 27: DataMining.ppt

B.2.1.1.2 Neural Network: Points to Be Aware Of

Can further generalize to more layers, but more layers can be bad; typically two layers are good enough.

The idea of back-propagation is based on gradient descent (covered in greater detail in a machine learning course, I believe).

Most of the time, we get to a local minimum.

[Figure: training error plotted over weight space, showing local minima.]

Page 28: DataMining.ppt

B.2.1.1.2 Neural Network: Discriminative Classifiers

Advantages
  prediction accuracy is generally high
  robust: works when training examples contain errors
  fast evaluation of the learned target function

Criticism
  long training time
  difficult to understand the learned function (weights); decision trees, by contrast, can be converted to a set of rules
  not easy to incorporate domain knowledge

Page 29: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
      B.2.1.1.1. Decision Tree
      B.2.1.1.2. Neural Network
      B.2.1.1.3. Support Vector Machine
      B.2.1.1.4. Instance-Based Learning
      B.2.1.1.5. Bayes Learning
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 30: DataMining.ppt

B.2.1.1.3 Support Vector Machines

[Figure: two candidate separating lines for the same data, with the support vectors highlighted. One line gives a small margin, the other a large margin.]

Page 31: DataMining.ppt

B.2.1.1.3 Support Vector Machines: Support Vector Machine (SVM)

Classification is essentially finding the best boundary between classes.

A support vector machine finds the best boundary points, called support vectors, and builds a classifier on top of them.

There are linear and non-linear support vector machines.
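The slides name no software; purely as an illustration (assuming scikit-learn is available), a linear SVM can be fit and its support vectors inspected like this:

```python
# A minimal sketch: fit a linear SVM on toy 2-D points and show which
# training points become support vectors.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [1, 0], [3, 3], [4, 3], [3, 4]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear", C=1.0)   # kernel="rbf" would give a non-linear SVM
clf.fit(X, y)

print(clf.support_vectors_)         # the boundary-defining points
print(clf.predict([[2, 2]]))
```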

Page 32: DataMining.ppt

B.2.1.1.3 Support Vector Machines: Example of a General SVM

[Figure:] The dots with a shadow around them are support vectors. Clearly they are the best data points to represent the boundary. The curve is the separating boundary.

Page 33: DataMining.ppt

B.2.1.1.3 Support Vector Machines: Optimal Hyperplane, Separable Case

In this case, class 1 and class 2 are separable.

The representing points are selected such that the margin between the two classes is maximized.

Crossed points are support vectors.

[Figure: separating hyperplane (of the form $\beta^T x + \beta_0 = 0$) with margin C; support vectors marked X.]

Page 34: DataMining.ppt

B.2.1.1.3 Support Vector Machines: Non-Separable Case

When the data set is non-separable, as shown in the figure, we assign a weight to each support vector, which appears in the constraint.

[Figure: non-separable data with hyperplane $\beta^T x + \beta_0 = 0$, margin C, and slack term $\xi^*$; support vectors marked X.]

Page 35: DataMining.ppt

B.2.1.1.3 Support Vector Machines: SVM vs. Neural Network

SVM
  Relatively new concept
  Nice generalization properties
  Hard to learn: learned in batch mode using quadratic programming techniques
  Using kernels, can learn very complex functions

Neural Network
  Quite old
  Generalizes well, but doesn't have a strong mathematical foundation
  Can easily be learned in incremental fashion
  To learn complex functions, use a multilayer perceptron (not that trivial)

Page 36: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
      B.2.1.1.1. Decision Tree
      B.2.1.1.2. Neural Network
      B.2.1.1.3. Support Vector Machine
      B.2.1.1.4. Instance-Based Learning
      B.2.1.1.5. Bayes Learning
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 37: DataMining.ppt

B.2.1.1.4. Instance-Based Methods

Instance-based learning: store training examples and delay the processing ("lazy evaluation") until a new instance must be classified.

Typical approaches
  k-nearest neighbor approach
    Instances represented as points in a Euclidean space
  Locally weighted regression
    Constructs a local approximation
  Case-based reasoning
    Uses symbolic representations and knowledge-based inference

In biology: simple BLASTing = 1-nearest neighbor

Page 38: DataMining.ppt

B.2.1.1.4. The k-Nearest Neighbor Algorithm

All instances correspond to points in the n-D space.
The nearest neighbors are defined in terms of Euclidean distance.
The target function can be discrete- or real-valued.
For discrete-valued functions, k-NN returns the most common value among the k training examples nearest to xq.
Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples.

[Figure: query point xq among '+' and '-' training points, and the Voronoi diagram they induce.]

Page 39: DataMining.ppt

B.2.1.1.4. Discussion on the k-NN Algorithm

The k-NN algorithm for continuous-valued target functions
  Calculate the mean value of the k nearest neighbors

Distance-weighted nearest neighbor algorithm
  Weight the contribution of each of the k neighbors according to its distance to the query point xq, giving greater weight to closer neighbors:

  $w \equiv \frac{1}{d(x_q, x_i)^2}$

  Similarly for real-valued target functions

Robust to noisy data by averaging the k nearest neighbors.

Curse of dimensionality: the distance between neighbors can be dominated by irrelevant attributes. To overcome it, stretch the axes or eliminate the least relevant attributes.
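A small sketch of the distance-weighted vote with the $w = 1/d(x_q, x_i)^2$ weighting above (toy data; a tiny epsilon guards against division by zero, a detail the slide does not address):

```python
# Distance-weighted k-NN: closer neighbors get larger votes.
import math
from collections import defaultdict

def knn_predict(query, examples, k=3):
    """examples: list of (point, label); returns the weighted-vote winner."""
    nearest = sorted((math.dist(query, x), label) for x, label in examples)[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d * d + 1e-12)   # w = 1 / d(xq, xi)^2
    return max(votes, key=votes.get)

examples = [((0, 0), "o"), ((1, 0), "o"), ((5, 5), "x"), ((6, 5), "x")]
print(knn_predict((0.5, 0.5), examples))  # "o": the nearby cluster dominates
```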

Page 40: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
      B.2.1.1.1. Decision Tree
      B.2.1.1.2. Neural Network
      B.2.1.1.3. Support Vector Machine
      B.2.1.1.4. Instance-Based Learning
      B.2.1.1.5. Bayesian Learning
        B.2.1.1.5.1. Naïve Bayes
        B.2.1.1.5.2. Bayesian Network
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 41: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Why?

Probabilistic learning: calculate explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems.

Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct. Prior knowledge can be combined with observed data.

Probabilistic prediction: predict multiple hypotheses, weighted by their probabilities.

Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured.

Page 42: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Bayes' Theorem: Basics

Let X be a data sample whose class label is unknown.
Let H be the hypothesis that X belongs to class C.
For classification problems, determine P(H|X): the probability that the hypothesis holds given the observed data sample X.
P(H): prior probability of hypothesis H (i.e., the initial probability before we observe any data; reflects the background knowledge).
P(X): probability that the sample data is observed.
P(X|H): probability of observing the sample X, given that the hypothesis holds.

Page 43: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Bayes' Theorem

Given training data X, the posterior probability of a hypothesis H, P(H|X), follows Bayes' theorem:

  $P(H|X) = \frac{P(X|H)\,P(H)}{P(X)}$

Informally, this can be written as: posterior = likelihood x prior / evidence.

MAP (maximum a posteriori) hypothesis:

  $h_{MAP} = \arg\max_{h \in H} P(h \mid D) = \arg\max_{h \in H} P(D \mid h)\,P(h)$

Practical difficulty: requires initial knowledge of many probabilities, and significant computational cost.

Page 44: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Naïve Bayes Classifier

A simplifying assumption: attributes are conditionally independent:

  $P(X \mid C_i) = \prod_{k=1}^{n} P(x_k \mid C_i)$

The probability of observing, say, two elements y1 and y2 together, given the current class C, is the product of the probabilities of each element taken separately, given the same class: P([y1, y2] | C) = P(y1 | C) * P(y2 | C).

No dependence relation between attributes.
Greatly reduces the computation cost: only count the class distribution.
Once the probability P(X|Ci) is known, assign X to the class with maximum P(X|Ci) * P(Ci).

Page 45: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Training Dataset

age      income  student  credit_rating  buys_computer
<=30     high    no       fair           no
<=30     high    no       excellent      no
31...40  high    no       fair           yes
>40      medium  no       fair           yes
>40      low     yes      fair           yes
>40      low     yes      excellent      no
31...40  low     yes      excellent      yes
<=30     medium  no       fair           no
<=30     low     yes      fair           yes
>40      medium  yes      fair           yes
<=30     medium  yes      excellent      yes
31...40  medium  no       excellent      yes
31...40  high    yes      fair           yes
>40      medium  no       excellent      no

Classes:
  C1: buys_computer = 'yes'
  C2: buys_computer = 'no'

Data sample X = (age <= 30, income = medium, student = yes, credit_rating = fair)

Page 46: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Naïve Bayesian Classifier: Example

Compute P(X|Ci) for each class:
  P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
  P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
  P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
  P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
  P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
  P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
  P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
  P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4

X = (age <= 30, income = medium, student = yes, credit_rating = fair)

P(X|Ci):
  P(X | buys_computer = "yes") = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
  P(X | buys_computer = "no") = 0.6 x 0.4 x 0.2 x 0.4 = 0.019

P(X|Ci) * P(Ci):
  P(X | buys_computer = "yes") * P(buys_computer = "yes") = 0.028
  P(X | buys_computer = "no") * P(buys_computer = "no") = 0.007

X belongs to class "buys_computer = yes".
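The hand computation above can be replayed in a few lines (nothing here beyond the slide's own numbers):

```python
# Replaying the naive Bayes arithmetic for X = (<=30, medium, yes, fair).
p_yes, p_no = 9 / 14, 5 / 14                     # class priors (9 yes, 5 no)

likelihood_yes = (2/9) * (4/9) * (6/9) * (6/9)   # age, income, student, credit
likelihood_no = (3/5) * (2/5) * (1/5) * (2/5)

print(round(likelihood_yes, 3))           # 0.044
print(round(likelihood_no, 3))            # 0.019
print(round(likelihood_yes * p_yes, 3))   # 0.028 -> classify as "yes"
print(round(likelihood_no * p_no, 3))     # 0.007
```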

Page 47: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Naïve Bayesian Classifier: Comments

Advantages
  Easy to implement
  Good results obtained in most cases

Disadvantages
  Assumption of class-conditional independence, therefore loss of accuracy
  Practically, dependencies exist among variables
    E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
    Dependencies among these cannot be modeled by the Naïve Bayesian Classifier

How to deal with these dependencies? Bayesian Belief Networks.

Page 48: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Bayesian Networks

A Bayesian belief network allows a subset of the variables to be conditionally independent.

A graphical model of causal relationships
  Represents dependency among the variables
  Gives a specification of the joint probability distribution

[Diagram: nodes X, Y, Z, P with links X -> Z, Y -> Z, Y -> P.]
  Nodes: random variables
  Links: dependency
  X and Y are the parents of Z, and Y is the parent of P
  No dependency between Z and P
  Has no loops or cycles

Page 49: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Bayesian Belief Network: An Example

[Network diagram: FamilyHistory and Smoker are parents of LungCancer; Smoker is also a parent of Emphysema; LungCancer is a parent of PositiveXRay and Dyspnea.]

The conditional probability table for the variable LungCancer shows the conditional probability for each possible combination of its parents:

       (FH, S)  (FH, ~S)  (~FH, S)  (~FH, ~S)
LC       0.8      0.5       0.7       0.1
~LC      0.2      0.5       0.3       0.9

$P(z_1, \ldots, z_n) = \prod_{i=1}^{n} P(z_i \mid \mathrm{Parents}(Z_i))$
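A sketch of evaluating that factored joint distribution with the LungCancer CPT above; the priors for FamilyHistory and Smoker are invented for illustration, since the slide gives only the LungCancer table:

```python
# Joint probability via P(z1,...,zn) = prod_i P(zi | Parents(Zi)).
p_fh = {True: 0.1, False: 0.9}   # hypothetical prior P(FamilyHistory)
p_s = {True: 0.4, False: 0.6}    # hypothetical prior P(Smoker)
p_lc = {                         # P(LungCancer | FH, S), from the table
    (True, True): 0.8, (True, False): 0.5,
    (False, True): 0.7, (False, False): 0.1,
}

def joint(fh, s, lc):
    p = p_fh[fh] * p_s[s]
    p_lc_true = p_lc[(fh, s)]
    return p * (p_lc_true if lc else 1 - p_lc_true)

print(joint(fh=True, s=True, lc=True))   # 0.1 * 0.4 * 0.8 = 0.032
```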

Page 50: DataMining.ppt

B.2.1.1.5. Bayesian Classification: Learning Bayesian Networks

Several cases:
  Given both the network structure and all variables observable: learn only the CPTs
  Network structure known, some hidden variables: method of gradient descent, analogous to neural network learning
  Network structure unknown, all variables observable: search through the model space to reconstruct the graph topology
  Unknown structure, all hidden variables: no good algorithms known for this purpose

Reference: D. Heckerman, Bayesian Networks for Data Mining

Page 51: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 52: DataMining.ppt

B.2.1.2 Regression

Instead of predicting class labels (A, B, or C), we want to output a numeric value.

Methods:
  Neural networks
  A version of decision trees: regression trees
  Linear regression (from your undergraduate statistics class)
  Instance-based learning can be used, but SVM and Bayesian learning cannot be extended directly

Most bioinformatics problems are classification or clustering problems; hence we skip the details.

Page 53: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
    B.2.1.2. Regression
  B.2.2. Clustering
    B.2.2.1. Hierarchical Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 54: DataMining.ppt

B.2.2. Clustering: What is Cluster Analysis?

Cluster: a collection of data objects
  Similar to one another within the same cluster
  Dissimilar to the objects in other clusters

Cluster analysis: grouping a set of data objects into clusters

Clustering is unsupervised classification: no predefined classes

Typical applications
  As a stand-alone tool to get insight into the data distribution
  As a preprocessing step for other algorithms

Page 55: DataMining.ppt

B.2.2. Clustering: General Applications of Clustering

Pattern recognition
Spatial data analysis
  create thematic maps in GIS by clustering feature spaces
  detect spatial clusters and explain them in spatial data mining
Image processing
Economic science (especially market research)
WWW
  Document classification
  Cluster weblog data to discover groups of similar access patterns

Page 56: DataMining.ppt

B.2.2. Clustering: Examples of Clustering Applications

Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs

Land use: identification of areas of similar land use in an earth observation database

Insurance: identifying groups of motor insurance policy holders with a high average claim cost

City planning: identifying groups of houses according to their house type, value, and geographical location

Earthquake studies: observed earthquake epicenters should be clustered along continental faults

Page 57: DataMining.ppt

B.2.2. Clustering: What Is Good Clustering?

A good clustering method will produce high-quality clusters with
  high intra-class similarity
  low inter-class similarity

The quality of a clustering result depends on both the similarity measure used by the method and its implementation.

The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.

Page 58: DataMining.ppt

B.2.2. Clustering: Classification is 'More' Objective

Can compare various algorithms.

[Figure: scatter of labeled 'x' and 'o' points separated by a decision boundary; with labels available, competing classifiers can be scored objectively.]

Page 59: DataMining.ppt

B.2.2. Clustering: Clustering is Very Subjective

Cluster the following animals: sheep, lizard, cat, dog, sparrow, blue shark, viper, seagull, goldfish, frog, red mullet

1. By the way they bear their progeny
2. By the existence of lungs
3. By the environment they live in
4. By the way they bear their progeny and the existence of their lungs

Which way is correct? It depends.

Page 60: DataMining.ppt

B.2.2. Clustering: Distance Measure

Dissimilarity/similarity metric: similarity is expressed in terms of a distance function, which is typically a metric: d(i, j)

Similarity measure: large means close
Dissimilarity measure: small means close

Ex: sequence dissimilarity. Aligning TATATAAGTTCCA with TATATAAGGCTAAT:

  TATATAAGT-TCCA
  TATATAAGGCTAAT

  Distance = cost of insertion + 2 * cost of changing C to A + cost of changing A to T

Page 61: DataMining.ppt

B.2.2. Clustering: Data Structures

Data matrix (n objects x p attributes):

  $\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$

Dissimilarity matrix (symmetrical distances, zero diagonal):

  $\begin{bmatrix} 0 \\ d(2,1) & 0 \\ d(3,1) & d(3,2) & 0 \\ \vdots & \vdots & \vdots \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$

Page 62: DataMining.ppt

B.2.2. Clustering: Measure the Quality of Clustering

There is a separate "quality" function that measures the "goodness" of a cluster.

The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal, and ratio variables.

Weights should be associated with different variables based on applications and data semantics.

It is hard to define "similar enough" or "good enough"; the answer is typically highly subjective.

Page 63: DataMining.ppt

B.2.2.1. Hierarchical Clustering

There are 5 main classes of clustering algorithms:

1. Partitioning Methods
2. Hierarchical Methods
3. Density-Based Methods
4. Grid-Based Methods
5. Model-Based Clustering Methods

However, we concentrate only on hierarchical clustering, which is more popular in bioinformatics.

Page 64: DataMining.ppt

B.2.2.1. Hierarchical Clustering

Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but it needs a termination condition.

[Figure: objects a, b, c, d, e. Agglomerative clustering (AGNES) proceeds Step 0 -> Step 4, merging {a, b}, then {d, e}, then {c, d, e}, and finally {a, b, c, d, e}. Divisive clustering (DIANA) runs the same steps in reverse, splitting the single cluster back into individual objects.]

Page 65: DataMining.ppt

B.2.2.1 Hierarchical Clustering: AGNES (Agglomerative Nesting)

Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., S-Plus
Uses the single-link method and the dissimilarity matrix
Merges the nodes that have the least dissimilarity
Goes on in a non-descending fashion
Eventually all nodes belong to the same cluster

[Figure: three scatter plots (axes 0-10) showing points being merged into progressively larger clusters.]
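The slides mention S-Plus; as a modern stand-in (an assumption, not from the slides), SciPy performs the same single-link merging:

```python
# AGNES-style single-link clustering, then cut the dendrogram at 2 clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[1, 1], [1.5, 1], [5, 5], [5, 5.5], [9, 1]])

Z = linkage(points, method="single")             # merge least-dissimilar first
labels = fcluster(Z, t=2, criterion="maxclust")  # cut at the desired level
print(labels)  # e.g. [1 1 1 1 2]: the lone point at (9, 1) is its own cluster
```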

Page 66: DataMining.ppt

B.2.2.1 Hierarchical Clustering: A Dendrogram Shows How the Clusters are Merged Hierarchically

Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram.

A clustering of the data objects is obtained by cutting the dendrogram at the desired level; then each connected component forms a cluster.

Page 67: DataMining.ppt

B.2.2.1 Hierarchical Clustering: DIANA (Divisive Analysis)

Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., S-Plus
Inverse order of AGNES
Eventually each node forms a cluster on its own

[Figure: three scatter plots (axes 0-10) showing one cluster being split into progressively smaller clusters.]

Page 68: DataMining.ppt

B.2.2.1 Hierarchical Clustering: More on Hierarchical Clustering Methods

Major weaknesses of agglomerative clustering methods
  do not scale well: time complexity of at least O(n^2), where n is the total number of objects
  can never undo what was done previously

Integration of hierarchical with distance-based clustering
  BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
  CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction
  CHAMELEON (1999): hierarchical clustering using dynamic modeling

Page 69: DataMining.ppt

B.2. Data Mining Tasks

B.1. Introduction
B.2. Data Mining Tasks:
  B.2.1. Supervised Learning
    B.2.1.1. Classification
    B.2.1.2. Regression
  B.2.2. Clustering
  B.2.3. Association Rule and Time Series
B.3. Data Selection and Preprocessing
B.4. Evaluation and Classification

Page 70: DataMining.ppt

Chapter 3: Data Preprocessing

Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary

Page 71: DataMining.ppt

Why Data Preprocessing?

Data in the real world is dirty
  incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
    e.g., occupation = ""
  noisy: containing errors or outliers
    e.g., Salary = "-10"
  inconsistent: containing discrepancies in codes or names
    e.g., Age = "42", Birthday = "03/07/1997"
    e.g., was rating "1, 2, 3", now rating "A, B, C"
    e.g., discrepancy between duplicate records

Page 72: DataMining.ppt

Why Is Data Dirty?

Incomplete data comes from
  n/a data values when collected
  different considerations between the time when the data was collected and when it is analyzed
  human/hardware/software problems

Noisy data comes from the process of data
  collection
  entry
  transmission

Inconsistent data comes from
  different data sources
  functional dependency violations

Page 73: DataMining.ppt

Why Is Data Preprocessing Important?

No quality data, no quality mining results!
  Quality decisions must be based on quality data
    e.g., duplicate or missing data may cause incorrect or even misleading statistics
  A data warehouse needs consistent integration of quality data

"Data extraction, cleaning, and transformation comprises the majority of the work of building a data warehouse." (Bill Inmon)

Page 74: DataMining.ppt

Major Tasks in Data Preprocessing

Data cleaning
  Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
  Integration of multiple databases, data cubes, or files
Data transformation
  Normalization and aggregation
Data reduction
  Obtains a reduced representation in volume that produces the same or similar analytical results
Data discretization
  Part of data reduction, but with particular importance, especially for numerical data

Page 75: DataMining.ppt

Forms of data preprocessing

Page 76: DataMining.ppt

Data Preprocessing

Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary

Page 77: DataMining.ppt

Data Cleaning

Importance: can't mine from lousy data

Data cleaning tasks
  Fill in missing values
  Identify outliers and smooth out noisy data
  Correct inconsistent data
  Resolve redundancy caused by data integration

Page 78: DataMining.ppt

Missing Data

Data is not always available
  E.g., many instances have no recorded value for several attributes, such as customer income in sales data

Missing data may be due to
  equipment malfunction
  data being inconsistent with other recorded data and thus deleted
  data not entered due to misunderstanding
  certain data not being considered important at the time of entry
  failure to register the history or changes of the data

Missing data may need to be inferred.

Page 79: DataMining.ppt

How to Handle Missing Data?

Ignore the instance: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably.

Fill in the missing value manually: tedious + infeasible?

Fill it in automatically with
  a global constant: e.g., "unknown", a new class?!
  the attribute mean
  the attribute mean for all samples belonging to the same class: smarter (see the sketch below)
  the most probable value: inference-based, such as a Bayesian formula or decision tree
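A sketch of the per-class mean strategy, assuming pandas (the slides name no tool):

```python
# Fill each missing income with the mean income of its own class.
import pandas as pd

df = pd.DataFrame({
    "income": [30000, None, 52000, None, 41000],
    "buys_computer": ["no", "no", "yes", "yes", "yes"],
})

class_means = df.groupby("buys_computer")["income"].transform("mean")
df["income"] = df["income"].fillna(class_means)
print(df)   # the missing "no" row gets 30000; the missing "yes" row, 46500
```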

Page 80: DataMining.ppt

Noisy Data

Noise: random error or variance in a measured variable

Incorrect attribute values may be due to
  faulty data collection instruments
  data entry problems
  data transmission problems
  technology limitations
  inconsistency in naming conventions

Other data problems that require data cleaning
  duplicate records
  incomplete data
  inconsistent data

Page 81: DataMining.ppt

How to Handle Noisy Data?

Binning method (sketched below):
  first sort the data and partition it into (equi-depth) bins
  then smooth by bin means, bin medians, bin boundaries, etc.
Regression
  smooth by fitting the data to regression functions
Clustering
  detect and remove outliers
Combined computer and human inspection
  detect suspicious values and check them by hand (e.g., deal with possible outliers)

Page 82: DataMining.ppt

Data Integration

Data integration: combines data from multiple sources into a coherent store

Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id vs. B.cust-#

Detecting and resolving data value conflicts
  for the same real-world entity, attribute values from different sources differ
  possible reasons: different representations, different scales, e.g., metric vs. British units

Page 83: DataMining.ppt

Handling Redundancy in Data Integration

Redundant data occur often when integrating multiple databases
  The same attribute may have different names in different databases
  One attribute may be a "derived" attribute in another table, e.g., annual revenue

Redundant data may be detectable by correlation analysis

Careful integration of data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality

Page 84: DataMining.ppt

Data Transformation for Bioinformatics Data

Sequence data, e.g.

  ACTGGAACCTTAATTAATTTTGGGCCCCAAATT  ->  <0.7, 0.6, 0.8, -0.1>

  - Count the frequency of each nucleotide, dinucleotide, ...
  - Convert to some chemical property index: hydrophilic, hydrophobic
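A sketch of the frequency-count transformation (the slide's example vector is illustrative, not produced by these particular counts):

```python
# Turn a DNA sequence into a fixed-length vector of k-mer frequencies.
from collections import Counter

def nucleotide_freqs(seq, alphabet="ACGT"):
    counts = Counter(seq)
    return [counts[ch] / len(seq) for ch in alphabet]

def dinucleotide_freqs(seq, alphabet="ACGT"):
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = max(len(seq) - 1, 1)
    return [pairs[a + b] / total for a in alphabet for b in alphabet]

seq = "ACTGGAACCTTAATTAATTTTGGGCCCCAAATT"
print(nucleotide_freqs(seq))          # 4 features: fractions of A, C, G, T
print(len(dinucleotide_freqs(seq)))   # 16 dinucleotide features
```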

Page 85: DataMining.ppt

Data Transformation: Normalization

min-max normalization:

  $v' = \frac{v - \min_A}{\max_A - \min_A}\,(\text{new\_max}_A - \text{new\_min}_A) + \text{new\_min}_A$

z-score normalization:

  $v' = \frac{v - \text{mean}_A}{\text{stand\_dev}_A}$

normalization by decimal scaling:

  $v' = \frac{v}{10^j}$, where $j$ is the smallest integer such that $\max(|v'|) < 1$
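A sketch applying all three normalizations to a toy income column (note: the population standard deviation is used here, a detail the slide leaves open):

```python
# The three normalizations above, applied to toy incomes.
import math

def min_max(v, lo, hi, new_lo=0.0, new_hi=1.0):
    return (v - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

def z_score(v, mean, std):
    return (v - mean) / std

def decimal_scaling(v, max_abs):
    # Smallest j with max(|v'|) < 1 (ties at exact powers of 10 aside).
    j = math.ceil(math.log10(max_abs))
    return v / 10 ** j

incomes = [12000, 54000, 73600, 98000]
mean = sum(incomes) / len(incomes)
std = math.sqrt(sum((v - mean) ** 2 for v in incomes) / len(incomes))

print([round(min_max(v, min(incomes), max(incomes)), 3) for v in incomes])
print([round(z_score(v, mean, std), 3) for v in incomes])
print([decimal_scaling(v, max(map(abs, incomes))) for v in incomes])
```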

Page 86: DataMining.ppt

Data Reduction Strategies

A data warehouse may store terabytes of data
  Complex data analysis/mining may take a very long time to run on the complete data set

Data reduction
  Obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results

Data reduction strategies
  Dimensionality reduction: remove unimportant attributes
  Data compression
  Numerosity reduction: fit the data into models
  Discretization and concept hierarchy generation

Page 87: DataMining.ppt

Dimensionality Reduction

Feature selection (i.e., attribute subset selection):
  Select a minimum set of features such that the probability distribution of the different classes given the values of those features is as close as possible to the original distribution given the values of all features
  Reduces the number of attributes in the discovered patterns, making them easier to understand

Heuristic methods (due to the exponential number of choices):
  step-wise forward selection
  step-wise backward elimination
  combining forward selection and backward elimination
  decision-tree induction

Page 88: DataMining.ppt

Summary

Data preparation is a big issue for data mining.

Data preparation includes
  Data cleaning and data integration
  Data reduction and feature selection
  Discretization

A lot of methods have been developed, but this is still an active area of research.