Text Classification & Naive Bayes

Introduction to Information Retrieval

Overview
❶ Recap  ❷ Text classification  ❸ Naive Bayes  ❹ NB theory  ❺ Evaluation of TC

Looking vs. Clicking
(figure)

Pivot normalization
(figure; source: Lillian Lee)

Use a min heap for selecting the top k out of N
• Use a binary min heap.
• A binary min heap is a binary tree in which each node's value is less than the values of its children.
• It takes O(N log k) operations to construct the k-heap containing the k largest values (where N is the number of documents).
• Essentially linear in N for small k and large N.

Binary min heap
(figure)

Heuristics for finding the top k most relevant
• Document-at-a-time processing
    – We complete the computation of the query-document similarity score of document d_i before starting to compute the query-document similarity score of d_{i+1}.
    – Requires a consistent ordering of documents in the postings lists.
• Term-at-a-time processing
    – We complete processing the postings list of query term t_i before starting to process the postings list of t_{i+1}.
    – Requires an accumulator for each document "still in the running".
• The most effective heuristics switch back and forth between term-at-a-time and document-at-a-time processing.

Tiered index
(figure)

Complete search system
(figure)

Takeaway today
• Text classification: definition & relevance to information retrieval
• Naive Bayes: simple baseline text classifier
• Theory: derivation of the Naive Bayes classification rule & analysis
• Evaluation of text classification: how do we know it worked / didn't work?

Outline
❶ Recap  ❷ Text classification  ❸ Naive Bayes  ❹ NB theory  ❺ Evaluation of TC

A text classification task: Email spam filtering

    From: ‘‘’’ takworlld@hotmail.com
    Subject: real estate is the only way... gem oalvgkay

    Anyone can buy real estate with no money down
    Stop paying rent TODAY
    There is no need to spend hundreds or even thousands for similar courses
    I am 22 years old and I have already purchased 6 properties using the
    methods outlined in this truly INCREDIBLE ebook.
    Change your life NOW
    =================================================
    Click Below to order: http://www.wholesaledaily.com/sales/nmd.htm
    =================================================

How would you write a program that would automatically detect and delete this type of message?
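One naive first attempt (not part of the lecture) is a hand-written keyword rule. The sketch below is a hypothetical illustration; the trigger words, threshold, and function name are all invented for the example.

```python
# A minimal, hypothetical keyword rule for a message like the one above.
# The trigger words, threshold, and function name are invented for illustration.
SPAM_TRIGGERS = {"buy", "order", "money", "rent", "incredible", "click"}

def looks_like_spam(text: str, threshold: int = 3) -> bool:
    """Flag a message if it contains at least `threshold` trigger words."""
    tokens = text.lower().split()
    hits = sum(1 for tok in tokens if tok.strip(".,!?:") in SPAM_TRIGGERS)
    return hits >= threshold

message = "Anyone can buy real estate with no money down. Stop paying rent TODAY. Click below to order."
print(looks_like_spam(message))  # True: five trigger words match
```

Every new spamming trick requires another manual tweak of the rule, which is exactly the motivation for the learned classifiers defined next.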
Formal definition of TC: Training
Given:
• A document space X
    – Documents are represented in this space – typically some type of high-dimensional space.
• A fixed set of classes C = {c_1, c_2, . . . , c_J}
    – The classes are human-defined for the needs of an application (e.g., relevant vs. non-relevant).
• A training set D of labeled documents, each labeled document ⟨d, c⟩ ∈ X × C
Using a learning method or learning algorithm, we then wish to learn a classifier ϒ that maps documents to classes:
    ϒ : X → C

Formal definition of TC: Application/Testing
Given: a description d ∈ X of a document
Determine: ϒ(d) ∈ C, that is, the class that is most appropriate for d

Topic classification
(figure)

Exercise
• Find examples of uses of text classification in information retrieval.

Examples of how search engines use classification
• Language identification (classes: English vs. French etc.)
• The automatic detection of spam pages (spam vs. non-spam)
• The automatic detection of sexually explicit content (sexually explicit vs. not)
• Topic-specific or vertical search – restrict search to a "vertical" like "related to health" (relevant to vertical vs. not)
• Standing queries (e.g., Google Alerts)
• Sentiment detection: is a movie or product review positive or negative? (positive vs. negative)

Classification methods: 1. Manual
• Manual classification was used by Yahoo in the beginning of the web. Also: ODP, PubMed.
• Very accurate if the job is done by experts.
• Consistent when the problem size and team are small.
• Scaling manual classification is difficult and expensive.
• → We need automatic methods for classification.

Classification methods: 2. Rule-based
• Our Google Alerts example was rule-based classification.
• There are IDE-type development environments for writing very complex rules efficiently (e.g., Verity).
• Often: Boolean combinations (as in Google Alerts)
• Accuracy is very high if a rule has been carefully refined over time by a subject expert.
• Building and maintaining rule-based classification systems is cumbersome and expensive.

A Verity topic (a complex classification rule)
(figure)

Classification methods: 3. Statistical/Probabilistic
• This was our definition of the classification problem – text classification as a learning problem.
• (i) Supervised learning of the classification function ϒ and (ii) its application to classifying new documents.
• We will look at a couple of methods for doing this: Naive Bayes, Rocchio, kNN, SVMs.
• No free lunch: requires hand-classified training data.
• But this manual classification can be done by non-experts.

Outline
❶ Recap  ❷ Text classification  ❸ Naive Bayes  ❹ NB theory  ❺ Evaluation of TC

The Naive Bayes classifier
• The Naive Bayes classifier is a probabilistic classifier.
• We compute the probability of a document d being in a class c as follows:
    P(c|d) ∝ P(c) · ∏_{1 ≤ k ≤ n_d} P(t_k|c)
• n_d is the length of the document (number of tokens).
• P(t_k|c) is the conditional probability of term t_k occurring in a document of class c.
• We interpret P(t_k|c) as a measure of how much evidence t_k contributes that c is the correct class.
• P(c) is the prior probability of c.
• If a document's terms do not provide clear evidence for one class vs. another, we choose the c with the highest P(c).
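To make the scoring rule concrete, here is a minimal sketch (not from the slides) that applies P(c|d) ∝ P(c) · ∏ P(t_k|c) to a toy document; the class names, priors, and conditional probabilities are made-up numbers standing in for parameters we will estimate later.

```python
# Minimal sketch (not from the slides) of the scoring rule P(c|d) ∝ P(c) · ∏ P(t_k|c).
# The priors and conditional probabilities are made-up toy numbers.
prior = {"China": 0.75, "not-China": 0.25}
cond = {
    "China":     {"CHINESE": 0.4, "TOKYO": 0.1, "JAPAN": 0.1},
    "not-China": {"CHINESE": 0.2, "TOKYO": 0.3, "JAPAN": 0.3},
}

def score(tokens, c):
    """Unnormalized P(c|d): prior times one conditional factor per token."""
    p = prior[c]
    for t in tokens:
        p *= cond[c][t]
    return p

doc = ["CHINESE", "CHINESE", "TOKYO"]
print({c: score(doc, c) for c in prior})          # unnormalized scores per class
print(max(prior, key=lambda c: score(doc, c)))    # class with the highest score
```

Multiplying many small probabilities like this underflows for long documents, which is why the following slides move to sums of log probabilities.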
Maximum a posteriori class
• Our goal in Naive Bayes classification is to find the "best" class.
• The best class is the most likely or maximum a posteriori (MAP) class c_map:
    c_map = arg max_{c ∈ C} P̂(c|d) = arg max_{c ∈ C} P̂(c) · ∏_{1 ≤ k ≤ n_d} P̂(t_k|c)
  (P̂ denotes parameters estimated from the training set.)

Taking the log
• Multiplying lots of small probabilities can result in floating point underflow.
• Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities.
• Since log is a monotonic function, the class with the highest score does not change.
• So what we usually compute in practice is:
    c_map = arg max_{c ∈ C} [ log P̂(c) + Σ_{1 ≤ k ≤ n_d} log P̂(t_k|c) ]

Naive Bayes classifier
• Classification rule:
    c_map = arg max_{c ∈ C} [ log P̂(c) + Σ_{1 ≤ k ≤ n_d} log P̂(t_k|c) ]
• Simple interpretation:
    – Each conditional parameter log P̂(t_k|c) is a weight that indicates how good an indicator t_k is for c.
    – The prior log P̂(c) is a weight that indicates the relative frequency of c.
    – The sum of the log prior and the term weights is then a measure of how much evidence there is for the document being in the class.
    – We select the class with the most evidence.

Parameter estimation take 1: Maximum likelihood
• Estimate the parameters P̂(c) and P̂(t_k|c) from training data. How?
• Prior: P̂(c) = N_c / N
    – N_c: number of docs in class c; N: total number of docs
• Conditional probabilities: P̂(t|c) = T_ct / Σ_{t' ∈ V} T_ct'
    – T_ct is the number of tokens of t in training documents from class c (includes multiple occurrences).
• We've made a Naive Bayes independence assumption here (made explicit in the theory section below).

The problem with maximum likelihood estimates: Zeros
    P(China|d) ∝ P(China) · P(BEIJING|China) · P(AND|China) · P(TAIPEI|China) · P(JOIN|China) · P(WTO|China)
• If WTO never occurs in class China in the training set:
    P̂(WTO|China) = T_China,WTO / Σ_{t' ∈ V} T_China,t' = 0

The problem with maximum likelihood estimates: Zeros (cont.)
• If there were no occurrences of WTO in documents in class China, we'd get a zero estimate:
    P̂(WTO|China) = T_China,WTO / Σ_{t' ∈ V} T_China,t' = 0
• → We will get P(China|d) = 0 for any document that contains WTO.
• Zero probabilities cannot be conditioned away.

To avoid zeros: Add-one smoothing
• Before: P̂(t|c) = T_ct / Σ_{t' ∈ V} T_ct'
• Now: add one to each count to avoid zeros:
    P̂(t|c) = (T_ct + 1) / Σ_{t' ∈ V} (T_ct' + 1) = (T_ct + 1) / (Σ_{t' ∈ V} T_ct' + B)
• B is the number of different words (in this case the size of the vocabulary: |V| = M).

To avoid zeros: Add-one smoothing (cont.)
• Estimate parameters from the training corpus using add-one smoothing.
• For a new document, for each class, compute the sum of (i) the log of the prior and (ii) the logs of the conditional probabilities of the terms.
• Assign the document to the class with the largest score.

Naive Bayes: Training
(algorithm)

Naive Bayes: Testing
(algorithm)

Exercise
• Estimate the parameters of the Naive Bayes classifier.
• Classify the test document.
(The training and test documents are the standard worked example from IIR Chapter 13; see the sketch below.)

Example: Parameter estimates
    P̂(c) = 3/4,  P̂(c̄) = 1/4
    P̂(CHINESE|c) = (5 + 1) / (8 + 6) = 6/14 = 3/7
    P̂(TOKYO|c) = P̂(JAPAN|c) = (0 + 1) / (8 + 6) = 1/14
    P̂(CHINESE|c̄) = (1 + 1) / (3 + 6) = 2/9
    P̂(TOKYO|c̄) = P̂(JAPAN|c̄) = (1 + 1) / (3 + 6) = 2/9
The denominators are (8 + 6) and (3 + 6) because the lengths of text_c and text_c̄ are 8 and 3, respectively, and because the constant B is 6 as the vocabulary consists of six terms.

Example: Classification
    P̂(c|d_5) ∝ 3/4 · (3/7)^3 · 1/14 · 1/14 ≈ 0.0003
    P̂(c̄|d_5) ∝ 1/4 · (2/9)^3 · 2/9 · 2/9 ≈ 0.0001
Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator CHINESE in d_5 outweigh the occurrences of the two negative indicators JAPAN and TOKYO.
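Putting the estimation and classification slides together, the sketch below is a minimal add-one-smoothed multinomial Naive Bayes in log space. The slide's own table of training documents is not reproduced here, so the data is the standard worked example from IIR Chapter 13 (three China documents, one Japan document, and the test document d_5).

```python
import math
from collections import Counter, defaultdict

# Minimal multinomial Naive Bayes with add-one smoothing, scored in log space.
# Training data is the standard example from IIR Chapter 13 (Example 13.1).
train = [
    ("Chinese Beijing Chinese",  "China"),
    ("Chinese Chinese Shanghai", "China"),
    ("Chinese Macao",            "China"),
    ("Tokyo Japan Chinese",      "not-China"),
]
test_doc = "Chinese Chinese Chinese Tokyo Japan"

# --- Training: priors and add-one-smoothed conditional probabilities ---
docs_per_class = Counter(c for _, c in train)
token_counts = defaultdict(Counter)                 # T_ct: token counts per class
for text, c in train:
    token_counts[c].update(text.split())
vocab = {t for text, _ in train for t in text.split()}
B = len(vocab)                                      # B = |V| = 6 here

prior = {c: n / len(train) for c, n in docs_per_class.items()}
cond = {
    c: {t: (token_counts[c][t] + 1) / (sum(token_counts[c].values()) + B) for t in vocab}
    for c in docs_per_class
}

# --- Testing: sum of the log prior and the log conditional probabilities ---
def log_score(text, c):
    return math.log(prior[c]) + sum(math.log(cond[c][t]) for t in text.split() if t in vocab)

scores = {c: log_score(test_doc, c) for c in prior}
print(scores)                       # China comes out ahead (≈ log 0.0003 vs ≈ log 0.0001)
print(max(scores, key=scores.get))  # -> 'China'
```

The smoothed estimates match the slide above: the denominators are 8 + 6 and 3 + 6, and the three occurrences of CHINESE in the test document outweigh the single negative indicators TOKYO and JAPAN.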
Time complexity of Naive Bayes
• Notation – L_ave: average length of a training doc; L_a: length of the test doc; M_a: number of distinct terms in the test doc; D: training set; V: vocabulary; C: set of classes.
• Training is Θ(|D| · L_ave + |C| · |V|): Θ(|D| · L_ave) is the time it takes to compute all counts, and Θ(|C| · |V|) is the time it takes to compute the parameters from the counts.
• Generally: |C| · |V| < |D| · L_ave
• Test time is Θ(L_a + |C| · M_a), i.e., also linear (in the length of the test document).
• Thus: Naive Bayes is linear in the size of the training set (training) and the test document (testing). This is optimal.

Outline
❶ Recap  ❷ Text classification  ❸ Naive Bayes  ❹ NB theory  ❺ Evaluation of TC

Naive Bayes: Analysis
• Now we want to gain a better understanding of the properties of Naive Bayes.
• We will formally derive the classification rule . . .
• . . . and state the assumptions we make in that derivation explicitly.

Derivation of Naive Bayes rule
• We want to find the class that is most likely given the document:
    c_map = arg max_{c ∈ C} P(c|d)
• Apply Bayes' rule P(c|d) = P(d|c) P(c) / P(d):
    c_map = arg max_{c ∈ C} P(d|c) P(c) / P(d)
• Drop the denominator since P(d) is the same for all classes:
    c_map = arg max_{c ∈ C} P(d|c) P(c)

Too many parameters / sparseness
• There are too many parameters P(⟨t_1, . . . , t_{n_d}⟩|c), one for each unique combination of a class and a sequence of words.
• We would need a very, very large number of training examples to estimate that many parameters.
• This is the problem of data sparseness.

Naive Bayes conditional independence assumption
• To reduce the number of parameters to a manageable size, we make the Naive Bayes conditional independence assumption:
    P(d|c) = P(⟨t_1, . . . , t_{n_d}⟩|c) = ∏_{1 ≤ k ≤ n_d} P(X_k = t_k|c)
• We assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(X_k = t_k|c).
• Recall from earlier the estimates for these priors and conditional probabilities:
    P̂(c) = N_c / N,    P̂(t|c) = (T_ct + 1) / (Σ_{t' ∈ V} T_ct' + B)

Generative model
• Generate a class with probability P(c).
• Generate each of the words (in their respective positions), conditional on the class, but independent of each other, with probability P(t_k|c).
• To classify docs, we "reengineer" this process and find the class that is most likely to have generated the doc.

Second independence assumption
• Positional independence:
    P̂(X_k1 = t|c) = P̂(X_k2 = t|c)
• For example, for a document in the class UK, the probability of generating QUEEN in the first position of the document is the same as generating it in the last position.
• The two independence assumptions amount to the bag of words model.

A different Naive Bayes model: Bernoulli model
(figure)

Violation of Naive Bayes independence assumption
• The independence assumptions do not really hold for documents written in natural language.
• Conditional independence: P(⟨t_1, . . . , t_{n_d}⟩|c) = ∏_{1 ≤ k ≤ n_d} P(X_k = t_k|c)
• Positional independence: P̂(X_k1 = t|c) = P̂(X_k2 = t|c)
• Exercise
    – Examples for why the conditional independence assumption is not really true?
    – Examples for why the positional independence assumption is not really true?
    – How can Naive Bayes work if it makes such inappropriate assumptions?

Why does Naive Bayes work?
• Naive Bayes can work well even though the conditional independence assumptions are badly violated.
• Example:
    – Double counting of evidence causes underestimation (0.01) and overestimation (0.99).
• Classification is about predicting the correct class and not about accurately estimating probabilities.
• Correct estimation ⇒ accurate prediction.
• But not vice versa.
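The double-counting point can be made concrete with a toy demonstration (hypothetical numbers, not from the lecture): if the same piece of evidence enters the product several times because features are correlated, the estimated posterior is pushed toward 0 or 1, yet the predicted class is often still the right one.

```python
# Toy demonstration (hypothetical numbers) of how double-counted evidence distorts
# Naive Bayes posteriors while often leaving the argmax unchanged.
def nb_posterior(features, cond, prior=(0.5, 0.5)):
    """Return (P(class0|features), P(class1|features)) under a Naive Bayes model."""
    p0, p1 = prior
    for f in features:
        p0 *= cond[f][0]
        p1 *= cond[f][1]
    z = p0 + p1
    return p0 / z, p1 / z

# One informative feature, e.g. the word "excellent" in a positive review.
cond = {"excellent": (0.8, 0.2)}
print(nb_posterior(["excellent"], cond))                     # (0.8, 0.2): well calibrated

# The same evidence shows up as three perfectly correlated features;
# Naive Bayes multiplies it in three times and becomes overconfident.
cond = {"excellent": (0.8, 0.2), "great": (0.8, 0.2), "superb": (0.8, 0.2)}
print(nb_posterior(["excellent", "great", "superb"], cond))  # ≈ (0.98, 0.02), argmax unchanged
```

Calibration suffers, but the argmax, which is all classification needs, stays the same.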
Naive Bayes is not so naive
• Naive Bayes has won some bake-offs (e.g., KDD-CUP 97).
• More robust to non-relevant features than some more complex learning methods.
• More robust to concept drift (changing definition of a class over time) than some more complex learning methods.
• Better than methods like decision trees when we have many equally important features.
• A good dependable baseline for text classification (but not the best).
• Optimal if the independence assumptions hold (never true for text, but true for some domains).
• Very fast.
• Low storage requirements.

Outline
❶ Recap  ❷ Text classification  ❸ Naive Bayes  ❹ NB theory  ❺ Evaluation of TC

Evaluation on Reuters
(figure)

Example: The Reuters collection
(figure)

A Reuters document
(figure)

Evaluating classification
• Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances).
• It's easy to get good performance on a test set that was available to the learner during training (e.g., just memorize the test set).
• Measures: precision, recall, F_1, classification accuracy

Precision P and recall R
    P = TP / (TP + FP)
    R = TP / (TP + FN)
(TP: true positives, FP: false positives, FN: false negatives, counted from the contingency table of predicted vs. true class membership.)

A combined measure: F
• F_1 allows us to trade off precision against recall.
    F_1 = 2PR / (P + R)
• This is the harmonic mean of P and R:
    1/F_1 = 1/2 · (1/P + 1/R)

Averaging: Micro vs. Macro
• We now have an evaluation measure (F_1) for one class.
• But we also want a single number that measures the aggregate performance over all classes in the collection.
• Macroaveraging
    – Compute F_1 for each of the C classes.
    – Average these C numbers.
• Microaveraging
    – Compute TP, FP, FN for each of the C classes.
    – Sum these C numbers (e.g., all TP to get aggregate TP).
    – Compute F_1 for aggregate TP, FP, FN.
(A small numerical sketch of the two averages appears at the end of these notes.)

Naive Bayes vs. other methods
(figure)

Takeaway today
• Text classification: definition & relevance to information retrieval
• Naive Bayes: simple baseline text classifier
• Theory: derivation of the Naive Bayes classification rule & analysis
• Evaluation of text classification: how do we know it worked / didn't work?

Resources
• Chapter 13 of IIR
• Resources at http://ifnlp.org/ir
• Weka: a data mining software package that includes an implementation of Naive Bayes
• Reuters-21578 – the most famous text classification evaluation set (but now it's too small for realistic experiments)
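As referenced in the micro vs. macro averaging slide above, the following minimal sketch contrasts the two ways of aggregating F_1 over classes; the per-class counts are invented toy numbers, with one frequent and one rare class.

```python
# Minimal sketch of macro- vs. micro-averaged F1 (toy per-class counts, invented for illustration).
def f1(tp, fp, fn):
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 2 * p * r / (p + r)

# (TP, FP, FN) for each class; the second class is rare, with poor performance.
per_class = {"earn": (90, 10, 10), "wheat": (1, 5, 9)}

macro_f1 = sum(f1(*counts) for counts in per_class.values()) / len(per_class)
agg_tp = sum(tp for tp, _, _ in per_class.values())
agg_fp = sum(fp for _, fp, _ in per_class.values())
agg_fn = sum(fn for _, _, fn in per_class.values())
micro_f1 = f1(agg_tp, agg_fp, agg_fn)

print(round(macro_f1, 2), round(micro_f1, 2))  # macro is pulled down by the rare class; micro is not
```

Macro-averaging gives each class equal weight and is pulled down by the rare class, whereas micro-averaging is dominated by the frequent class, which is the behavior described in the slide above.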