Introduction to Information Retrieval
Text Classification & Naive Bayes
Use min heap for selecting top k out of N
Use a binary min heap
A binary min heap is a binary tree in which each node’s value is
less than the values of its children.
It takes O(N log k) operations to construct the k-heap containing
the k largest values (where N is the number of documents).
Essentially linear in N for small k and large N.
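The selection procedure above can be sketched in Python using the standard library's heapq module (the function and variable names are illustrative): keep a min-heap of size k, and replace its root whenever a new score beats the current k-th best. Each of the N items costs at most O(log k), giving the O(N log k) bound.

```python
import heapq

def top_k(scores, k):
    """Select the k largest scores from N (doc_id, score) pairs in
    O(N log k) time using a binary min-heap of size k (a sketch)."""
    heap = []  # min-heap: the smallest of the current top k sits at the root
    for doc_id, score in scores:
        if len(heap) < k:
            heapq.heappush(heap, (score, doc_id))
        elif score > heap[0][0]:
            # New score beats the smallest of the current top k: replace it.
            heapq.heapreplace(heap, (score, doc_id))
    # Sort the k survivors highest-first for output.
    return sorted(heap, reverse=True)
```

For example, top_k([("d1", 0.3), ("d2", 0.9), ("d3", 0.5)], 2) returns [(0.9, "d2"), (0.5, "d3")]: only the heap of size k is ever sorted, never all N scores.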
Binary min heap
Heuristics for finding the top k most relevant
We complete computation of the query–document similarity score of document d_i before starting to compute the query–document similarity score of d_{i+1} (document-at-a-time processing).
Requires a consistent ordering of documents in the postings lists.
We complete processing the postings list of query term t_i before starting to process the postings list of t_{i+1} (term-at-a-time processing).
Requires an accumulator for each document “still in the running”.
The most effective heuristics switch back and forth between
term-at-a-time and document-at-a-time processing.
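Term-at-a-time processing with accumulators can be sketched as follows (a minimal illustration; the postings representation and all names are assumptions for the example): each query term's postings list is consumed completely before the next term's, and one accumulator per candidate document collects partial scores.

```python
from collections import defaultdict

def term_at_a_time(query_terms, postings, k):
    """Term-at-a-time scoring sketch. `postings` maps a term to a list of
    (doc_id, weight) pairs; one accumulator per document still in the
    running collects its partial similarity score."""
    accumulators = defaultdict(float)
    for t in query_terms:
        # Finish this term's entire postings list before moving on.
        for doc_id, weight in postings.get(t, []):
            accumulators[doc_id] += weight
    # Return the k highest-scoring documents.
    return sorted(accumulators.items(), key=lambda x: -x[1])[:k]
```

Document-at-a-time processing would instead traverse all the postings lists in parallel in a shared document order, finishing document d_i's full score before touching d_{i+1}.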
Complete search system
Text classification: definition & relevance to information retrieval
Naive Bayes: simple baseline text classifier
Theory: derivation of Naive Bayes classification rule & analysis
Evaluation of text classification: how do we know whether it worked?
A text classification task: Email spam filtering
From: ‘‘’’ <takworlld@hotmail.com>
Subject: real estate is the only way... gem oalvgkay
Anyone can buy real estate with no money down
Stop paying rent TODAY
There is no need to spend hundreds or even thousands for similar courses
I am 22 years old and I have already purchased 6 properties using the
methods outlined in this truly INCREDIBLE ebook.
Change your life NOW
Click Below to order:
How would you write a program that would automatically detect
and delete this type of message?
Formal definition of TC: Training
A document space X
Documents are represented in this space – typically some type
of high-dimensional space.
A fixed set of classes C = {c_1, c_2, . . . , c_J}
The classes are human-defined for the needs of an application
(e.g., relevant vs. nonrelevant).
A training set D of labeled documents with each labeled
document ⟨d, c⟩ ∈ X × C
Using a learning method or learning algorithm, we then wish to
learn a classifier ϒ that maps documents to classes:
ϒ : X → C
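To make the interface concrete, here is a deliberately trivial learner in Python (all names and the toy data are made up for the example): it takes a training set D of (document, class) pairs and returns a classifier ϒ that maps any document to a class, here simply the majority class of D.

```python
from collections import Counter

def train_majority(D):
    """Learn a trivially simple classifier from a training set D of
    (document, class) pairs: always predict the most frequent class.
    Purely illustrative of the learning interface, not a useful learner."""
    majority = Counter(c for _, c in D).most_common(1)[0][0]
    def gamma(d):
        # gamma : X -> C maps any document to a class in C
        return majority
    return gamma

# Toy training set: three labeled documents, two classes.
D = [("cheap meds now", "spam"),
     ("meeting at noon", "ham"),
     ("win money fast", "spam")]
gamma = train_majority(D)
```

Here gamma("any new document") returns "spam". Real learning methods (Naive Bayes, Rocchio, kNN, SVMs) have the same shape: training data in, a function from X to C out.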
Formal definition of TC: Application/Testing
Given: a description d ∈ X of a document
Determine: ϒ(d) ∈ C, that is, the class that is most appropriate for d
Examples of how search engines use classification
Language identification (classes: English vs. French etc.)
The automatic detection of spam pages (spam vs. nonspam)
The automatic detection of sexually explicit content (sexually
explicit vs. not)
Topic-specific or vertical search – restrict search to a
“vertical” like “related to health” (relevant to vertical vs. not)
Standing queries (e.g., Google Alerts)
Sentiment detection: is a movie or product review positive or
negative (positive vs. negative)
Classification methods: 1. Manual
Manual classification was used by Yahoo in the beginning of
the web. Also: ODP, PubMed
Very accurate if job is done by experts
Consistent when the problem size and team is small
Scaling manual classification is difficult and expensive.
→ We need automatic methods for classification.
Classification methods: 2. Rule-based
Our Google Alerts example was rule-based classification.
There are IDE-type development environments for writing very
complex rules efficiently (e.g., Verity).
Often: Boolean combinations (as in Google Alerts)
Accuracy is very high if a rule has been carefully refined over
time by a subject expert.
Building and maintaining rule-based classification systems is
cumbersome and expensive.
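A hand-written rule of this kind can be sketched in a few lines of Python (the rule itself is an illustrative example in the spirit of a standing query, not one taken from a deployed system): a Boolean combination of term tests, refined by hand rather than learned from data.

```python
def rule_classifier(doc):
    """A hand-crafted Boolean classification rule (illustrative example):
    classify a document as relevant if it matches the rule
    (multicore OR multi-core) AND (chip OR processor)."""
    terms = set(doc.lower().split())
    return (("multicore" in terms or "multi-core" in terms)
            and ("chip" in terms or "processor" in terms))
```

Maintaining accuracy means a subject expert revisiting such rules by hand whenever the vocabulary of relevant documents drifts, which is exactly the maintenance cost noted above.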
A Verity topic (a complex classification rule)
Classification methods: 3. Statistical/Probabilistic
This was our definition of the classification problem – text
classification as a learning problem
(i) Supervised learning of the classification function ϒ and
(ii) its application to classifying new documents
We will look at a couple of methods for doing this: Naive
Bayes, Rocchio, kNN, SVMs
No free lunch: requires hand-classified training data
But this manual classification can be done by non-experts.
The Naive Bayes classifier
The Naive Bayes classifier is a probabilistic classifier.
We compute the probability of a document d being in a class c as follows:
P(c|d) ∝ P(c) ∏_{1 ≤ k ≤ n_d} P(t_k|c)
n_d is the length of the document (number of tokens).
P(t_k|c) is the conditional probability of term t_k occurring in a
document of class c.
We interpret P(t_k|c) as a measure of how much evidence t_k contributes
that c is the correct class.
P(c) is the prior probability of c.
If a document’s terms do not provide clear evidence for one
class vs. another, we choose the c with highest P(c).
Maximum a posteriori class
Our goal in Naive Bayes classification is to find the “best” class for the document.
The best class is the most likely or maximum a posteriori
(MAP) class c_map:
c_map = argmax_{c ∈ C} P̂(c|d) = argmax_{c ∈ C} P̂(c) ∏_{1 ≤ k ≤ n_d} P̂(t_k|c)
(We write P̂ rather than P because these probabilities are estimates from the training set.)
Taking the log
Multiplying lots of small probabilities can result in floating point underflow.
Since log(xy) = log(x) + log(y), we can sum log probabilities
instead of multiplying probabilities.
Since log is a monotonic function, the class with the highest
score does not change.
So what we usually compute in practice is:
c_map = argmax_{c ∈ C} [ log P̂(c) + ∑_{1 ≤ k ≤ n_d} log P̂(t_k|c) ]
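The log-space scoring rule can be sketched as follows (a minimal illustration: the prior and conditional probability tables are assumed to be already estimated, and the toy numbers in the usage note are made up):

```python
import math

def nb_log_score(doc_tokens, prior, cond_prob):
    """Log-space Naive Bayes MAP classification sketch: score each class c
    by log P(c) + sum over tokens t_k of log P(t_k|c), then take the
    argmax. Summing logs avoids floating point underflow."""
    scores = {}
    for c in prior:
        score = math.log(prior[c])
        for t in doc_tokens:
            score += math.log(cond_prob[c][t])
        scores[c] = score
    # argmax over classes: the MAP class
    return max(scores, key=scores.get)
```

For example, with prior = {"spam": 0.5, "ham": 0.5} and conditional probabilities that make "cheap" and "meds" far more likely under "spam", nb_log_score(["cheap", "meds"], prior, cond_prob) returns "spam"; the ranking is identical to the product form because log is monotonic.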