    Naive Bayes classifier

    From Wikipedia, the free encyclopedia


In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.

Naive Bayes models are also known under a variety of names in the literature, including simple Bayes and independence Bayes.[1] All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method;[1] Russell and Norvig note that "[naive Bayes] is sometimes called a Bayesian classifier, a somewhat careless usage that has prompted true Bayesians to call it the idiot Bayes model."[2]:482

Naive Bayes has been studied extensively since the 1950s. It was introduced under a different name into the text retrieval community in the early 1960s,[2]:488 and remains a popular (baseline) method for text categorization, the problem of judging documents as belonging to one category or the other (such as spam or legitimate, sports or politics, etc.) with word frequencies as the features. With appropriate preprocessing, it is competitive in this domain with more advanced methods including support vector machines.[3] It also finds application in automatic medical diagnosis.[4]

Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression,[2]:718 which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers.

    Contents


    1 Introduction

    2 Probabilistic model

    o 2.1 Constructing a classifier from the probability model

    3 Parameter estimation and event models

    o 3.1 Gaussian naive Bayes

o 3.2 Multinomial naive Bayes

o 3.3 Bernoulli naive Bayes

    o 3.4 Semi-supervised parameter estimation

    4 Discussion

    o 4.1 Relation to logistic regression

    5 Examples

    o 5.1 Sex classification

    5.1.1 Training

    5.1.2 Testing

    o 5.2 Document classification

    6 See also

7 References

o 7.1 Further reading

    8 External links

    Introduction

In simple terms, a naive Bayes classifier assumes that the value of a particular feature is unrelated to the presence or absence of any other feature, given the class variable. For example, a fruit may be considered to be an apple if it is red, round, and about 3" in diameter. A naive Bayes classifier considers each of these features to contribute independently to the probability that this fruit is an apple, regardless of the presence or absence of the other features.

For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods.

Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers.[5] Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests.[6]

An advantage of naive Bayes is that it only requires a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification. Because independent variables are assumed, only the variances of the variables for each class need to be determined and not the entire covariance matrix.

    Probabilistic model


Abstractly, the probability model for a classifier is a conditional model

p(C | F_1, ..., F_n)

over a dependent class variable C with a small number of outcomes or classes, conditional on several feature variables F_1 through F_n. The problem is that if the number of features n is large or if a feature can take on a large number of values, then basing such a model on probability tables is infeasible. We therefore reformulate the model to make it more tractable.

Using Bayes' theorem, this can be written

p(C | F_1, ..., F_n) = p(C) p(F_1, ..., F_n | C) / p(F_1, ..., F_n)

In plain English, using Bayesian probability terminology, the above equation can be written as

posterior = (prior × likelihood) / evidence

In practice, there is interest only in the numerator of that fraction, because the denominator does not depend on C and the values of the features F_i are given, so that the denominator is effectively constant. The numerator is equivalent to the joint probability model

p(C, F_1, ..., F_n)

which can be rewritten as follows, using the chain rule for repeated applications of the definition of conditional probability:

p(C, F_1, ..., F_n) = p(C) p(F_1 | C) p(F_2 | C, F_1) ... p(F_n | C, F_1, ..., F_{n-1})

Now the "naive" conditional independence assumptions come into play: assume that each feature F_i is conditionally independent of every other feature F_j for j ≠ i, given the category C. This means that

p(F_i | C, F_j) = p(F_i | C),

p(F_i | C, F_j, F_k) = p(F_i | C),

p(F_i | C, F_j, F_k, F_l) = p(F_i | C),

and so on, for i ≠ j, k, l. Thus, the joint model can be expressed as

p(C | F_1, ..., F_n) ∝ p(C, F_1, ..., F_n) ∝ p(C) prod_{i=1}^{n} p(F_i | C)


This means that under the above independence assumptions, the conditional distribution over the class variable C is:

p(C | F_1, ..., F_n) = (1 / Z) p(C) prod_{i=1}^{n} p(F_i | C)

where the evidence Z = p(F_1, ..., F_n) is a scaling factor dependent only on F_1, ..., F_n, that is, a constant if the values of the feature variables are known.

    Constructing a classifier from the probability model

The discussion so far has derived the independent feature model, that is, the naive Bayes probability model. The naive Bayes classifier combines this model with a decision rule. One common rule is to pick the hypothesis that is most probable; this is known as the maximum a posteriori or MAP decision rule. The corresponding classifier, a Bayes classifier, is the function classify defined as follows:

classify(f_1, ..., f_n) = argmax_c p(C = c) prod_{i=1}^{n} p(F_i = f_i | C = c)
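As a concrete illustration, here is a minimal sketch of the MAP decision rule in Python. It assumes the priors p(C = c) and the conditional probabilities p(F_i = f_i | C = c) have already been estimated and are supplied as plain dictionaries; the names priors and cond_prob are illustrative, not part of the original article.

    import math

    def classify(features, priors, cond_prob):
        """Pick the class c maximising log p(C=c) + sum_i log p(F_i=f_i | C=c)."""
        best_class, best_score = None, float("-inf")
        for c, prior in priors.items():
            score = math.log(prior)
            for i, f in enumerate(features):
                score += math.log(cond_prob[c][i][f])  # p(F_i = f | C = c)
            if score > best_score:
                best_class, best_score = c, score
        return best_class

Working in log-space is numerically safer than multiplying many small probabilities directly, and it does not change the argmax.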

    Parameter estimation and event models

A class's prior may be calculated by assuming equiprobable classes (i.e., priors = 1 / (number of classes)), or by calculating an estimate for the class probability from the training set (i.e., (prior for a given class) = (number of samples in the class) / (total number of samples)). To estimate the parameters for a feature's distribution, one must assume a distribution or generate nonparametric models for the features from the training set.[7]

The assumptions on distributions of features are called the event model of the naive Bayes classifier. For discrete features like the ones encountered in document classification (including spam filtering), multinomial and Bernoulli distributions are popular. These assumptions lead to two distinct models, which are often confused.[8][9]

    Gaussian naive Bayes

When dealing with continuous data, a typical assumption is that the continuous values associated with each class are distributed according to a Gaussian distribution. For example, suppose the training data contain a continuous attribute, x. We first segment the data by the class, and then compute the mean and variance of x in each class. Let μ_c be the mean of the values in x associated with class c, and let σ²_c be the variance of the values in x associated with class c. Then, the probability density of some value given a class, p(x = v | c), can be computed by plugging v into the equation for a normal distribution parameterized by μ_c and σ²_c. That is,

p(x = v | c) = (1 / sqrt(2π σ²_c)) exp(-(v - μ_c)² / (2 σ²_c))
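As a minimal sketch, the density above can be computed in Python as follows (the function name gaussian_density is illustrative):

    import math

    def gaussian_density(v, mean, var):
        """Normal probability density of value v, given the per-class
        mean and variance of the feature."""
        return math.exp(-(v - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

Plugged into the MAP rule above in place of a probability table entry, this handles continuous features directly.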

Another common technique for handling continuous values is to use binning to discretize the feature values, to obtain a new set of Bernoulli-distributed features; some literature in fact suggests that this is necessary to apply naive Bayes, but it is not, and the discretization may throw away discriminative information.[1]
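For illustration, a simple binning scheme might look like the following sketch; the cut points here are arbitrary placeholders, not values from the article.

    def discretize(v, cut_points):
        """Map a continuous value to the index of the first bin whose
        upper cut point it does not exceed."""
        for i, cut in enumerate(cut_points):
            if v <= cut:
                return i
        return len(cut_points)

    # e.g. heights (feet) binned into three coarse categories
    bin_index = discretize(5.9, cut_points=[5.4, 5.8])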

    Multinomial naive Bayes

With a multinomial event model, samples (feature vectors) represent the frequencies with which certain events have been generated by a multinomial (p_1, ..., p_n), where p_i is the probability that event i occurs (or k such multinomials in the multiclass case). This is the event model typically used for document classification; the feature values are then term frequencies, generated by a multinomial that produces some number of words (see bag of words assumption). The likelihood of observing a feature vector (histogram) F is given by

p(F | C) = ((sum_i F_i)! / prod_i F_i!) prod_i p_i^{F_i}

The multinomial naive Bayes classifier becomes a linear classifier when expressed in log-space:[3]

log p(C | F) ∝ log (p(C) prod_{i=1}^{n} p_i^{F_i})
            = log p(C) + sum_{i=1}^{n} F_i log p_i
            = b + w · F

where b = log p(C) and w_i = log p_i.

If a given class and feature value never occur together in the training data, then the frequency-based probability estimate will be zero. This is problematic because it will wipe out all information in the other probabilities when they are multiplied. Therefore, it is often desirable to incorporate a small-sample correction, called pseudocount, in all probability estimates such that no probability is ever set to be exactly zero. This way of regularizing naive Bayes is called Laplace smoothing when the pseudocount is one, and Lidstone smoothing in the general case.
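A minimal sketch of multinomial naive Bayes training with Lidstone smoothing, assuming documents arrive as term-count vectors; all names here are illustrative:

    import math

    def train_multinomial_nb(count_vectors, labels, alpha=1.0):
        """Estimate log priors and smoothed log term probabilities;
        alpha is the pseudocount (alpha = 1 gives Laplace smoothing)."""
        classes = set(labels)
        n_terms = len(count_vectors[0])
        log_prior, log_prob = {}, {}
        for c in classes:
            docs = [v for v, y in zip(count_vectors, labels) if y == c]
            log_prior[c] = math.log(len(docs) / len(labels))
            # the pseudocount keeps every term probability strictly positive
            counts = [sum(v[i] for v in docs) + alpha for i in range(n_terms)]
            total = sum(counts)
            log_prob[c] = [math.log(t / total) for t in counts]
        return log_prior, log_prob

    def predict(counts, log_prior, log_prob):
        # the linear classifier in log-space: b + w . F
        return max(log_prior, key=lambda c: log_prior[c] +
                   sum(f * w for f, w in zip(counts, log_prob[c])))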

    http://en.wikipedia.org/wiki/Normal_distributionhttp://en.wikipedia.org/wiki/Normal_distributionhttp://en.wikipedia.org/wiki/Normal_distributionhttp://en.wikipedia.org/wiki/Discretization_of_continuous_featureshttp://en.wikipedia.org/wiki/Discretization_of_continuous_featureshttp://en.wikipedia.org/wiki/Discretization_of_continuous_featureshttp://en.wikipedia.org/wiki/Discretization_errorhttp://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-idiots-1http://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-idiots-1http://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-idiots-1http://en.wikipedia.org/wiki/Multinomial_distributionhttp://en.wikipedia.org/wiki/Multinomial_distributionhttp://en.wikipedia.org/wiki/Multinomial_distributionhttp://en.wikipedia.org/wiki/Bag_of_wordshttp://en.wikipedia.org/wiki/Bag_of_wordshttp://en.wikipedia.org/wiki/Bag_of_wordshttp://en.wikipedia.org/wiki/Bag_of_wordshttp://en.wikipedia.org/wiki/Linear_classifierhttp://en.wikipedia.org/wiki/Linear_classifierhttp://en.wikipedia.org/wiki/Linear_classifierhttp://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-rennie-3http://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-rennie-3http://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-rennie-3http://en.wikipedia.org/wiki/Pseudocounthttp://en.wikipedia.org/wiki/Pseudocounthttp://en.wikipedia.org/wiki/Pseudocounthttp://en.wikipedia.org/wiki/Regularization_%28mathematics%29http://en.wikipedia.org/wiki/Regularization_%28mathematics%29http://en.wikipedia.org/wiki/Regularization_%28mathematics%29http://en.wikipedia.org/wiki/Laplace_smoothinghttp://en.wikipedia.org/wiki/Laplace_smoothinghttp://en.wikipedia.org/wiki/Laplace_smoothinghttp://en.wikipedia.org/wiki/Lidstone_smoothinghttp://en.wikipedia.org/wiki/Lidstone_smoothinghttp://en.wikipedia.org/wiki/Lidstone_smoothinghttp://en.wikipedia.org/wiki/Lidstone_smoothinghttp://en.wikipedia.org/wiki/Lidstone_smoothinghttp://en.wikipedia.org/wiki/Lidstone_smoothinghttp://en.wikipedia.org/wiki/Laplace_smoothinghttp://en.wikipedia.org/wiki/Regularization_%28mathematics%29http://en.wikipedia.org/wiki/Pseudocounthttp://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-rennie-3http://en.wikipedia.org/wiki/Linear_classifierhttp://en.wikipedia.org/wiki/Bag_of_wordshttp://en.wikipedia.org/wiki/Bag_of_wordshttp://en.wikipedia.org/wiki/Multinomial_distributionhttp://en.wikipedia.org/wiki/Naive_Bayes_classifier#cite_note-idiots-1http://en.wikipedia.org/wiki/Discretization_errorhttp://en.wikipedia.org/wiki/Discretization_of_continuous_featureshttp://en.wikipedia.org/wiki/Normal_distribution

Rennie et al. discuss problems with the multinomial assumption in the context of document classification and possible ways to alleviate those problems, including the use of tf-idf weights instead of raw term frequencies and document length normalization, to produce a naive Bayes classifier that is competitive with support vector machines.[3]
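For instance, a tf-idf style reweighting of raw counts might look like this sketch; this is one common (smoothed) variant, and the transforms in Rennie et al. differ in their details:

    import math

    def tfidf_normalize(count_vectors):
        """Replace raw term counts with smoothed tf-idf weights and
        normalize each document vector to unit length."""
        n_docs, n_terms = len(count_vectors), len(count_vectors[0])
        df = [sum(1 for v in count_vectors if v[i] > 0) for i in range(n_terms)]
        out = []
        for v in count_vectors:
            w = [c * (math.log((1 + n_docs) / (1 + df[i])) + 1)
                 for i, c in enumerate(v)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            out.append([x / norm for x in w])
        return out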

    Bernoulli naive Bayes

In the multivariate Bernoulli event model, features are independent Booleans (binary variables) describing inputs. This model is also popular for document classification tasks,[8] where binary term occurrence features are used rather than term frequencies. If x_i is a Boolean expressing the occurrence or absence of the i'th term from the vocabulary, then the likelihood of a document given a class C is given by[8]

p(x_1, ..., x_n | C) = prod_{i=1}^{n} p_i^{x_i} (1 - p_i)^{(1 - x_i)}

where p_i is the probability of class C generating the term x_i. This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one.
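A sketch of this likelihood in Python, assuming the per-class term probabilities p_i have already been estimated (the names are illustrative):

    import math

    def bernoulli_log_likelihood(x, term_probs):
        """log p(x | C) for a binary occurrence vector x, where term_probs[i]
        is p_i, the probability that class C generates term i. Absent terms
        (x_i = 0) contribute log(1 - p_i): absence is modelled explicitly."""
        return sum(math.log(p) if x_i else math.log(1.0 - p)
                   for x_i, p in zip(x, term_probs))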

    Semi-supervised parameter estimation

Given a way to train a naive Bayes classifier from labeled data, it's possible to construct a semi-supervised training algorithm that can learn from a combination of labeled and unlabeled data by running the supervised learning algorithm in a loop, as sketched after this outline:[10]

Given a collection D = L ∪ U of labeled samples L and unlabeled samples U, start by training a naive Bayes classifier on L.

Until convergence, do:

Predict class probabilities P(C | x) for all examples x in D.

Re-train the model based on the probabilities (not the labels) predicted in the previous step.

Convergence is determined based on improvement to the model likelihood P(D | θ), where θ denotes the parameters of the naive Bayes model.

This training algorithm is an instance of the more general expectation-maximization algorithm (EM): the prediction step inside the loop is the E-step of EM, while the re-training of naive Bayes is the M-step. The algorithm is formally justified by the assumption that the data are generated by a mixture model, and the components of this mixture model are exactly the classes of the classification problem.[10]
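A sketch of this loop, assuming train fits a naive Bayes model from examples paired with class probabilities and predict_proba returns P(C | x) for one example; both are illustrative stand-ins, and a fixed iteration count stands in for the likelihood-based convergence test:

    def semi_supervised_nb(labeled, unlabeled, train, predict_proba, n_iter=20):
        """EM-style loop: the E-step predicts soft labels for all data,
        the M-step re-trains naive Bayes on those probabilities."""
        # start from hard labels on L (probability 1 for the known class)
        model = train([(x, {y: 1.0}) for x, y in labeled])
        data = [x for x, _ in labeled] + list(unlabeled)
        for _ in range(n_iter):  # ideally: until P(D | theta) stops improving
            soft = [(x, predict_proba(model, x)) for x in data]  # E-step
            model = train(soft)                                  # M-step
        return model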

    Discussion

Despite the fact that the far-reaching independence assumptions are often inaccurate, the naive Bayes classifier has several properties that make it surprisingly useful in practice. In particular, the decoupling of the class conditional feature distributions means that each distribution can be independently estimated as a one-dimensional distribution. This helps alleviate problems stemming from the curse of dimensionality, such as the need for data sets that scale exponentially with the number of features. While naive Bayes often fails to produce a good estimate for the correct class probabilities,[11] this may not be a requirement for many applications. For example, the naive Bayes classifier will make the correct MAP decision rule classification so long as the correct class is more probable than any other class. This is true regardless of whether the probability estimate is slightly, or even grossly, inaccurate. In this manner, the overall classifier can be robust enough to ignore serious deficiencies in its underlying naive probability model.[4] Other reasons for the observed success of the naive Bayes classifier are discussed in the literature cited below.

    Relation to logistic regression


In the case of discrete inputs (indicator or frequency features for discrete events), naive Bayes classifiers form a generative-discriminative pair with (multinomial) logistic regression classifiers: each naive Bayes classifier can be considered a way of fitting a probability model that optimizes the joint likelihood p(x, y), while logistic regression fits the same probability model to optimize the conditional p(y | x).[12]

The link between the two can be seen by observing that the decision function for naive Bayes (in binary classification) can be rewritten as "predict class 1 if the odds of p(y = 1 | x) are higher than those of p(y = 0 | x)". Expressing this in log-space gives:

log (p(y = 1 | x) / p(y = 0 | x)) = log p(y = 1 | x) - log p(y = 0 | x) > 0

The left-hand side of this equation is the log-odds, or logit, the quantity predicted by the linear model that underlies logistic regression. Since naive Bayes is also a linear model in the discrete case, it can be reparametrised as a linear function b + w · x > 0. Obtaining the probabilities is then a matter of applying the logistic function to b + w · x, or in the multiclass case, the softmax function.
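As a sketch, recovering p(y = 1 | x) from the naive Bayes log-odds is a one-liner once b and w have been derived from the naive Bayes estimates (the names are illustrative):

    import math

    def prob_class_1(x, w, b):
        """Apply the logistic function to the log-odds b + w . x."""
        log_odds = b + sum(w_i * x_i for w_i, x_i in zip(w, x))
        return 1.0 / (1.0 + math.exp(-log_odds))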

Discriminative classifiers have lower asymptotic error than generative ones; however, research by Ng and Jordan has shown that in some practical cases naive Bayes can outperform logistic regression because it reaches its asymptotic error faster.[12]

    Examples

    Sex classification

    Problem: classify whether a given person is a male or a female based on the measured

    features. The features include height, weight, and foot size.

    Training


Example training set below.

sex      height (feet)   weight (lbs)   foot size (inches)
male     6.00            180            12
male     5.92 (5'11")    190            11
male     5.58 (5'7")     170            12
male     5.92 (5'11")    165            10
female   5.00            100            6
female   5.50 (5'6")     150            8
female   5.42 (5'5")     130            7
female   5.75 (5'9")     150            9

The classifier created from the training set using a Gaussian distribution assumption would be (given variances are sample variances):

sex      mean (height)   variance (height)   mean (weight)   variance (weight)   mean (foot size)   variance (foot size)
male     5.855           3.5033e-02          176.25          1.2292e+02          11.25              9.1667e-01
female   5.4175          9.7225e-02          132.5           5.5833e+02          7.5                1.6667e+00

Let's say we have equiprobable classes, so P(male) = P(female) = 0.5. This prior probability distribution might be based on our knowledge of frequencies in the larger population, or on frequency in the training set.

    Testing

Below is a sample to be classified as male or female.

sex      height (feet)   weight (lbs)   foot size (inches)
sample   6               130            8

We wish to determine which posterior is greater, male or female. For the classification as male the posterior is given by

posterior(male) = P(male) p(height | male) p(weight | male) p(foot size | male) / evidence

For the classification as female the posterior is given by

posterior(female) = P(female) p(height | female) p(weight | female) p(foot size | female) / evidence

The evidence (also termed normalizing constant) may be calculated:

evidence = P(male) p(height | male) p(weight | male) p(foot size | male)
         + P(female) p(height | female) p(weight | female) p(foot size | female)


However, given the sample the evidence is a constant and thus scales both posteriors equally. It therefore does not affect classification and can be ignored. We now determine the probability distribution for the sex of the sample:

P(male) = 0.5

p(height | male) = (1 / sqrt(2π σ²)) exp(-(6 - μ)² / (2σ²)) ≈ 1.5789,

where μ = 5.855 and σ² = 3.5033e-02 are the parameters of the normal distribution which have been previously determined from the training set. Note that a value greater than 1 is OK here; it is a probability density rather than a probability, because height is a continuous variable.

p(weight | male) = 5.9881e-06
p(foot size | male) = 1.3112e-03

posterior numerator (male) = the product of the terms above = 6.1984e-09

P(female) = 0.5
p(height | female) = 2.2346e-01
p(weight | female) = 1.6789e-02
p(foot size | female) = 2.8669e-01

posterior numerator (female) = the product of the terms above = 5.3778e-04

Since the posterior numerator is greater in the female case, we predict the sample is female.
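The whole worked example can be reproduced in a few lines of Python, using the parameter table above; the dictionary layout is illustrative:

    import math

    def gaussian_density(v, mean, var):
        # as in the Gaussian naive Bayes sketch above
        return math.exp(-(v - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    params = {
        "male":   {"height": (5.855, 3.5033e-02), "weight": (176.25, 1.2292e+02),
                   "foot size": (11.25, 9.1667e-01)},
        "female": {"height": (5.4175, 9.7225e-02), "weight": (132.5, 5.5833e+02),
                   "foot size": (7.5, 1.6667e+00)},
    }
    prior = {"male": 0.5, "female": 0.5}
    sample = {"height": 6.0, "weight": 130.0, "foot size": 8.0}

    numerators = {
        sex: prior[sex] * math.prod(gaussian_density(sample[f], *params[sex][f])
                                    for f in sample)
        for sex in params
    }
    print(max(numerators, key=numerators.get))  # prints: female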

    Document classification

Here is a worked example of naive Bayesian classification applied to the document classification problem. Consider the problem of classifying documents by their content, for example into spam and non-spam e-mails. Imagine that documents are drawn from a number of classes of documents which can be modelled as sets of words where the (independent) probability that the i-th word of a given document occurs in a document from class C can be written as

p(w_i | C)

(For this treatment, we simplify things further by assuming that words are randomly distributed in the document; that is, words are not dependent on the length of the document, position within the document with relation to other words, or other document context.)

Then the probability that a given document D contains all of the words w_i, given a class C, is

p(D | C) = prod_i p(w_i | C)

The question that we desire to answer is: "What is the probability that a given document D belongs to a given class C?" In other words, what is p(C | D)?


Now by definition

p(D | C) = p(D ∩ C) / p(C)

and

p(C | D) = p(D ∩ C) / p(D)

Bayes' theorem manipulates these into a statement of probability in terms of likelihood:

p(C | D) = p(C) p(D | C) / p(D)

Assume for the moment that there are only two mutually exclusive classes, S and ¬S (e.g. spam and not spam), such that every element (email) is in either one or the other;

p(D | S) = prod_i p(w_i | S)

and

p(D | ¬S) = prod_i p(w_i | ¬S)

Using the Bayesian result above, we can write:

p(S | D) = p(S) prod_i p(w_i | S) / p(D)

p(¬S | D) = p(¬S) prod_i p(w_i | ¬S) / p(D)

Dividing one by the other gives:

p(S | D) / p(¬S | D) = (p(S) prod_i p(w_i | S)) / (p(¬S) prod_i p(w_i | ¬S))

which can be re-factored as:

p(S | D) / p(¬S | D) = (p(S) / p(¬S)) prod_i (p(w_i | S) / p(w_i | ¬S))


Thus, the probability ratio p(S | D) / p(¬S | D) can be expressed in terms of a series of likelihood ratios. The actual probability p(S | D) can be easily computed from log(p(S | D) / p(¬S | D)) based on the observation that p(S | D) + p(¬S | D) = 1.

Taking the logarithm of all these ratios, we have:

ln (p(S | D) / p(¬S | D)) = ln (p(S) / p(¬S)) + sum_i ln (p(w_i | S) / p(w_i | ¬S))

(This technique of "log-likelihood ratios" is a common technique in statistics. In the case of two mutually exclusive alternatives (such as this example), the conversion of a log-likelihood ratio to a probability takes the form of a sigmoid curve: see logit for details.)

Finally, the document can be classified as follows. It is spam if p(S | D) > p(¬S | D) (i.e., ln (p(S | D) / p(¬S | D)) > 0), otherwise it is not spam.
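A sketch of this decision rule in Python, assuming the per-word probabilities p(w | S) and p(w | ¬S) have already been estimated (the dictionary names are illustrative):

    import math

    def is_spam(words, p_word_spam, p_word_ham, p_spam=0.5):
        """Classify a document as spam when the log-likelihood ratio
        ln(p(S | D) / p(not-S | D)) is positive."""
        llr = math.log(p_spam / (1.0 - p_spam))  # ln(p(S) / p(not-S))
        for w in words:
            llr += math.log(p_word_spam[w] / p_word_ham[w])
        return llr > 0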
