Question? Leave a message!




Scoring, Term Weighting, The Vector Space Model

Scoring, Term Weighting, The Vector Space Model 6
Introduction to Information Retrieval Introduction to Information Retrieval : Scoring, Term Weighting, The Vector Space Model 1Introduction to Information Retrieval Overview ❶ Recap ❷ Why ranked retrieval ❸ Term frequency ❹ tfidf weighting ❺ The vector space model 2Introduction to Information Retrieval Outline ❶ Recap ❷ Why ranked retrieval ❸ Term frequency ❹ tfidf weighting ❺ The vector space model 3Introduction to Information Retrieval Heaps’ law Vocabulary size M as a function of collection size T (number of tokens) for ReutersRCV1. For these data, the dashed line log M = 10 0.49 ∗ log T + 1.64 is the 10 best least squares fit. 1.64 0.49 Thus, M = 10 T 1.64 and k = 10 ≈ 44 and b = 0.49. 4 4Introduction to Information Retrieval Zipf’s law The most frequent term (the) occurs cf times, the 1 second most frequent term (of) occurs times, the third most frequent term (and) occurs times etc. 5 5Introduction to Information Retrieval Dictionary as a string 6 6Introduction to Information Retrieval Gap encoding 7 7Introduction to Information Retrieval Variable byte (VB) code  Dedicate 1 bit (high bit) to be a continuation bit c.  If the gap G fits within 7 bits, binaryencode it in the 7 available bits and set c = 1.  Else: set c = 0, encode highorder 7 bits and then use one or more additional bytes to encode the lower order bits using the same algorithm. 8 8Introduction to Information Retrieval Gamma codes for gap encoding  Represent a gap G as a pair of length and offset.  Offset is the gap in binary, with the leading bit chopped off.  Length is the length of offset.  Encode length in unary code  The Gamma code is the concatenation of length and offset. 9 9Introduction to Information Retrieval Compression of Reuters data structure size in MB dictionary, fixedwidth 11.2 dictionary, term pointers into string 7.6 ∼, with blocking, k = 4 7.1 ∼, with blocking front coding 5.9 collection (text, xml markup etc) 3600.0 collection (text) 960.0 T/D incidence matrix 40,000.0 postings, uncompressed (32bit words) 400.0 postings, uncompressed (20 bits) 250.0 postings, variable byte encoded 116.0 postings, γ encoded 101.0 10 10Introduction to Information Retrieval Takeaway today  Ranking search results: why it is important (as opposed to just presenting a set of unordered Boolean results)  Term frequency: This is a key ingredient for ranking.  Tfidf ranking: best known traditional ranking scheme  Vector space model: One of the most important formal models for information retrieval (along with Boolean and probabilistic models) 11 11Introduction to Information Retrieval Outline ❶ Recap ❷ Why ranked retrieval ❸ Term frequency ❹ tfidf weighting ❺ The vector space model 12Introduction to Information Retrieval Ranked retrieval  Thus far, our queries have all been Boolean.  Documents either match or don’t.  Good for expert users with precise understanding of their needs and of the collection.  Also good for applications: Applications can easily consum 1000s of results.  Not good for the majority of users  Most users are not capable of writing Boolean queries . . .  . . . or they are, but they think it’s too much work.  Most users don’t want to wade through 1000s of results.  This is particularly true of web search. 13 13Introduction to Information Retrieval Problem with Boolean search: Feast or famine  Boolean queries often result in either too few (=0) or too many (1000s) results.  Query 1 (boolean conjunction): standard user dlink 650  → 200,000 hits – feast  Query 2 (boolean conjunction): standard user dlink 650 no card found  → 0 hits – famine  In Boolean retrieval, it takes a lot of skill to come up with a query that produces a manageable number of hits. 14 14Introduction to Information Retrieval Feast or famine: No problem in ranked retrieval  With ranking, large result sets are not an issue.  Just show the top 10 results  Doesn’t overwhelm the user  Premise: the ranking algorithm works: More relevant results are ranked higher than less relevant results. 15 15Introduction to Information Retrieval Scoring as the basis of ranked retrieval  We wish to rank documents that are more relevant higher than documents that are less relevant.  How can we accomplish such a ranking of the documents in the collection with respect to a query  Assign a score to each querydocument pair, say in 0, 1.  This score measures how well document and query “match”. 16 16Introduction to Information Retrieval Querydocument matching scores  How do we compute the score of a querydocument pair  Let’s start with a oneterm query.  If the query term does not occur in the document: score should be 0.  The more frequent the query term in the document, the higher the score  We will look at a number of alternatives for doing this. 17 17Introduction to Information Retrieval Take 1: Jaccard coefficient  A commonly used measure of overlap of two sets  Let A and B be two sets  Jaccard coefficient:  JACCARD (A, A) = 1  JACCARD (A, B) = 0 if A ∩ B = 0  A and B don’t have to be the same size.  Always assigns a number between 0 and 1. 18 18Introduction to Information Retrieval Jaccard coefficient: Example  What is the querydocument match score that the Jaccard coefficient computes for:  Query: “ides of March”  Document “Caesar died in March”  JACCARD(q, d) = 1/6 19 19Introduction to Information Retrieval What’s wrong with Jaccard  It doesn’t consider term frequency (how many occurrences a term has).  Rare terms are more informative than frequent terms. Jaccard does not consider this information.  We need a more sophisticated way of normalizing for the length of a document.  Later in this lecture, we’ll use (cosine) . . .  . . . instead of A ∩ B/A∪ B (Jaccard) for length normalization. 20 20Introduction to Information Retrieval Outline ❶ Recap ❷ Why ranked retrieval ❸ Term frequency ❹ tfidf weighting ❺ The vector space model 21Introduction to Information Retrieval Binary incidence matrix Anthony Julius The Hamlet Othello Macbeth and Caesar Tempest . . . Cleopatra ANTHONY 1 1 0 0 0 1 BRUTUS 1 1 0 1 0 0 CAESAR 1 1 0 1 1 1 CALPURNIA 0 1 0 0 0 0 CLEOPATRA 1 0 0 0 0 0 MERCY 1 0 1 1 1 1 WORSER 1 0 1 1 1 0 . . . V. Each document is represented as a binary vector ∈ 0, 1 22 22Introduction to Information Retrieval Binary incidence matrix Anthony Julius The Hamlet Othello Macbeth and Caesar Tempest . . . Cleopatra ANTHONY 157 73 0 0 0 1 BRUTUS 4 157 0 2 0 0 CAESAR 232 227 0 2 1 0 CALPURNIA 0 10 0 0 0 0 CLEOPATRA 57 0 0 0 0 0 MERCY 2 0 3 8 5 8 WORSER 2 0 1 1 1 5 . . . V. Each document is now represented as a count vector ∈ N 23 23Introduction to Information Retrieval Bag of words model  We do not consider the order of words in a document.  John is quicker than Mary and Mary is quicker than John are represented the same way.  This is called a bag of words model.  In a sense, this is a step back: The positional index was able to distinguish these two documents.  We will look at “recovering” positional information later in this course.  For now: bag of words model 24 24Introduction to Information Retrieval Term frequency tf  The term frequency tf of term t in document d is defined t,d as the number of times that t occurs in d.  We want to use tf when computing querydocument match scores.  But how  Raw term frequency is not what we want because:  A document with tf = 10 occurrences of the term is more relevant than a document with tf = 1 occurrence of the term.  But not 10 times more relevant.  Relevance does not increase proportionally with term frequency. 25 25Introduction to Information Retrieval Instead of raw frequency: Log frequency weighting  The log frequency weight of term t in d is defined as follows  tf → w : t,d t,d 0 → 0, 1 → 1, 2 → 1.3, 10 → 2, 1000 → 4, etc.  Score for a documentquery pair: sum over terms t in both q and d: tfmatchingscore(q, d) = (1 + log tf )  t∈q∩d t,d  The score is 0 if none of the query terms is present in the document. 26 26Introduction to Information Retrieval Exercise  Compute the Jaccard matching score and the tf matching score for the following querydocument pairs.  q: information on cars+ d: “all you’ve ever wanted to know about cars”  q: information on cars+ d: “information on trucks, information on planes, information on trains”  q: red cars and red trucks+ d: “cops stop red cars more often” 27 27Introduction to Information Retrieval Outline ❶ Recap ❷ Why ranked retrieval ❸ Term frequency ❹ tfidf weighting ❺ The vector space model 28Introduction to Information Retrieval Frequency in document vs. frequency in collection  In addition, to term frequency (the frequency of the term in the document) . . .  . . .we also want to use the frequency of the term in the collection for weighting and ranking. 29 29Introduction to Information Retrieval Desired weight for rare terms  Rare terms are more informative than frequent terms.  Consider a term in the query that is rare in the collection (e.g., ARACHNOCENTRIC).  A document containing this term is very likely to be relevant.  → We want high weights for rare terms like ARACHNOCENTRIC. 30 30Introduction to Information Retrieval Desired weight for frequent terms  Frequent terms are less informative than rare terms.  Consider a term in the query that is frequent in the collection (e.g., GOOD, INCREASE, LINE).  A document containing this term is more likely to be relevant than a document that doesn’t . . .  . . . but words like GOOD, INCREASE and LINE are not sure indicators of relevance.  → For frequent terms like GOOD, INCREASE and LINE, we want positive weights . . .  . . . but lower weights than for rare terms. 31 31Introduction to Information Retrieval Document frequency  We want high weights for rare terms like ARACHNOCENTRIC.  We want low (positive) weights for frequent words like GOOD, INCREASE and LINE.  We will use document frequency to factor this into computing the matching score.  The document frequency is the number of documents in the collection that the term occurs in. 32 32Introduction to Information Retrieval idf weight  df is the document frequency, the number of documents t that t occurs in.  df is an inverse measure of the informativeness of term t. t  We define the idf weight of term t as follows: (N is the number of documents in the collection.)  idf is a measure of the informativeness of the term. t  log N/df instead of N/df + to “dampen” the effect of idf t t  Note that we use the log transformation for both term frequency and document frequency. 33 33Introduction to Information Retrieval Examples for idf  Compute idf using the formula: t term df idf t t calpurnia 1 6 animal 100 4 sunday 1000 3 fly 10,000 2 under 100,000 1 the 1,000,000 0 34 34Introduction to Information Retrieval Effect of idf on ranking  idf affects the ranking of documents for queries with at least two terms.  For example, in the query “arachnocentric line”, idf weighting increases the relative weight of ARACHNOCENTRIC and decreases the relative weight of LINE.  idf has little effect on ranking for oneterm queries. 35 35Introduction to Information Retrieval Collection frequency vs. Document frequency word collection frequency document frequency INSURANCE 10440 3997 TRY 10422 8760  Collection frequency of t: number of tokens of t in the collection  Document frequency of t: number of documents t occurs in  Why these numbers  Which word is a better search term (and should get a higher weight)  This example suggests that df (and idf) is better for weighting than cf (and “icf”). 36 36Introduction to Information Retrieval tfidf weighting  The tfidf weight of a term is the product of its tf weight and its idf weight.  tfweight  idfweight  Best known weighting scheme in information retrieval  Note: the “” in tfidf is a hyphen, not a minus sign  Alternative names: tf.idf, tf x idf 37 37Introduction to Information Retrieval Summary: tfidf  Assign a tfidf weight for each term t in each document d:  The tfidf weight . . .  . . . increases with the number of occurrences within a document. (term frequency)  . . . increases with the rarity of the term in the collection. (inverse document frequency) 38 38Introduction to Information Retrieval Exercise: Term, collection and document frequency Quantity Symbol Definition term frequency tf number of occurrences of t in t,d d document frequency df number of documents in the t collection that t occurs in collection frequency cf total number of occurrences of t t in the collection  Relationship between df and cf  Relationship between tf and cf  Relationship between tf and df 39 39Introduction to Information Retrieval Outline ❶ Recap ❷ Why ranked retrieval ❸ Term frequency ❹ tfidf weighting ❺ The vector space model 40Introduction to Information Retrieval Binary incidence matrix Anthony Julius The Hamlet Othello Macbeth and Caesar Tempest . . . Cleopatra ANTHONY 1 1 0 0 0 1 BRUTUS 1 1 0 1 0 0 CAESAR 1 1 0 1 1 1 CALPURNIA 0 1 0 0 0 0 CLEOPATRA 1 0 0 0 0 0 MERCY 1 0 1 1 1 1 WORSER 1 0 1 1 1 0 . . . V. Each document is represented as a binary vector ∈ 0, 1 41 41Introduction to Information Retrieval Count matrix Anthony Julius The Hamlet Othello Macbeth and Caesar Tempest . . . Cleopatra ANTHONY 157 73 0 0 0 1 BRUTUS 4 157 0 2 0 0 CAESAR 232 227 0 2 1 0 CALPURNIA 0 10 0 0 0 0 CLEOPATRA 57 0 0 0 0 0 MERCY 2 0 3 8 5 8 WORSER 2 0 1 1 1 5 . . . V. Each document is now represented as a count vector ∈ N 42 42Introduction to Information Retrieval Binary → count → weight matrix Anthony Julius The Hamlet Othello Macbeth and Caesar Tempest . . . Cleopatra ANTHONY 5.25 3.18 0.0 0.0 0.0 0.35 BRUTUS 1.21 6.10 0.0 1.0 0.0 0.0 CAESAR 8.59 2.54 0.0 1.51 0.25 0.0 CALPURNIA 0.0 1.54 0.0 0.0 0.0 0.0 CLEOPATRA 2.85 0.0 0.0 0.0 0.0 0.0 MERCY 1.51 0.0 1.90 0.12 5.25 0.88 WORSER 1.37 0.0 0.11 4.15 0.25 1.95 . . . Each document is now represented as a realvalued vector of tf V. idf weights∈ R 43 43Introduction to Information Retrieval Documents as vectors  Each document is now represented as a realvalued vector V. of tfidf weights∈ R  So we have a Vdimensional realvalued vector space.  Terms are axes of the space.  Documents are points or vectors in this space.  Very highdimensional: tens of millions of dimensions when you apply this to web search engines  Each vector is very sparse most entries are zero. 44 44Introduction to Information Retrieval Queries as vectors  Key idea 1: do the same for queries: represent them as vectors in the highdimensional space  Key idea 2: Rank documents according to their proximity to the query  proximity = similarity  proximity ≈ negative distance  Recall: We’re doing this because we want to get away from the you’reeitherinorout, feastorfamine Boolean model.  Instead: rank relevant documents higher than nonrelevant documents 45 45Introduction to Information Retrieval How do we formalize vector space similarity  First cut: (negative) distance between two points  ( = distance between the end points of the two vectors)  Euclidean distance  Euclidean distance is a bad idea . . .  . . . because Euclidean distance is large for vectors of different lengths. 46 46Introduction to Information Retrieval Why distance is a bad idea The Euclidean distance of and is large although the distribution of terms in the query q and the distribution of terms in the document d are very similar. 2 Questions about basic vector space setup 47 47Introduction to Information Retrieval Use angle instead of distance  Rank documents according to angle with query  Thought experiment: take a document d and append it to itself. Call this document d′. d′ is twice as long as d.  “Semantically” d and d′ have the same content.  The angle between the two documents is 0, corresponding to maximal similarity . . .  . . . even though the Euclidean distance between the two documents can be quite large. 48 48Introduction to Information Retrieval From angles to cosines  The following two notions are equivalent.  Rank documents according to the angle between query and document in decreasing order  Rank documents according to cosine(query,document) in increasing order  Cosine is a monotonically decreasing function of the angle ◦◦ for the interval 0 , 180 49 49Introduction to Information Retrieval Cosine 50 50Introduction to Information Retrieval Length normalization  How do we compute the cosine  A vector can be (length) normalized by dividing each of its components by its length – here we use the L norm: 2  This maps vectors onto the unit sphere . . .  . . . since after normalization:  As a result, longer documents and shorter documents have weights of the same order of magnitude.  Effect on the two documents d and d′ (d appended to itself) from earlier slide: they have identical vectors after length normalization. 51 51Introduction to Information Retrieval Cosine similarity between query and document  q is the tfidf weight of term i in the query. i  d is the tfidf weight of term i in the document. i  and are the lengths of and  This is the cosine similarity of and . . . . . . or, equivalently, the cosine of the angle between and 52 52Introduction to Information Retrieval Cosine for normalized vectors  For normalized vectors, the cosine is equivalent to the dot product or scalar product.  (if and are lengthnormalized). 53 53Introduction to Information Retrieval Cosine similarity illustrated 54 54Introduction to Information Retrieval Cosine: Example term frequencies (counts) term SaS PaP WH How similar are these novels SaS: AFFECTION 115 58 20 Sense and JEALOUS 10 7 11 GOSSIP Sensibility PaP: 2 0 6 WUTHERING Pride and 0 0 38 Prejudice WH: Wuthering Heights 55 55Introduction to Information Retrieval Cosine: Example term frequencies (counts) log frequency weighting term SaS PaP WH term SaS PaP WH AFFECTION AFFECTION 3.06 2.76 2.30 115 58 20 JEALOUS JEALOUS 2.0 1.85 2.04 10 7 11 GOSSIP GOSSIP 1.30 0 1.78 2 0 6 WUTHERING WUTHERING 0 0 38 0 0 2.58 ' (To simplify this example, we don t do idf weighting.) 56 56Introduction to Information Retrieval Cosine: Example log frequency weighting log frequency weighting cosine normalization term SaS PaP WH term SaS PaP WH AFFECTION 3.06 2.76 2.30 AFFECTION 0.789 0.832 0.524 JEALOUS 2.0 1.85 2.04 JEALOUS 0.515 0.555 0.465 GOSSIP 1.30 0 1.78 GOSSIP 0.335 0.0 0.405 WUTHERING 0 0 2.58 WUTHERING 0.0 0.0 0.588  cos(SaS,PaP) ≈ 0.789 ∗ 0.832 + 0.515 ∗ 0.555 + 0.335 ∗ 0.0 + 0.0 ∗ 0.0 ≈ 0.94.  cos(SaS,WH) ≈ 0.79  cos(PaP,WH) ≈ 0.69  Why do we have cos(SaS,PaP) cos(SAS,WH) 57 57Introduction to Information Retrieval Computing the cosine score 58 58Introduction to Information Retrieval Components of tfidf weighting 59 59Introduction to Information Retrieval tfidf example  We often use different weightings for queries and documents.  Notation: ddd.qqq  Example: lnc.ltn  document: logarithmic tf, no df weighting, cosine normalization  query: logarithmic tf, idf, no normalization  Isn’t it bad to not idfweight the document  Example query: “best car insurance”  Example document: “car insurance auto insurance” 60 60Introduction to Information Retrieval tfidf example: Inc.Itn Query: “best car insurance”. Document: “car insurance auto insurance”. Key to columns: tfraw: raw (unweighted) term frequency, tfwght: logarithmically weighted term frequency, df: document frequency, idf: inverse document frequency, weight: the final weight of the term in the query or document, n’lized: document weights after cosine normalization, product: the product of final query weight and final document weight 1/1.92 ≈ 0.52 1.3/1.92 ≈ 0.68 Final similarity score between query and  document: w · w = 0 + 0 + 1.04 + 2.04 = 3.08 Questions i qi di 61 61Introduction to Information Retrieval Summary: Ranked retrieval in the vector space model  Represent the query as a weighted tfidf vector  Represent each document as a weighted tfidf vector  Compute the cosine similarity between the query vector and each document vector  Rank documents with respect to the query  Return the top K (e.g., K = 10) to the user 62 62Introduction to Information Retrieval Takeaway today  Ranking search results: why it is important (as opposed to just presenting a set of unordered Boolean results)  Term frequency: This is a key ingredient for ranking.  Tfidf ranking: best known traditional ranking scheme  Vector space model: One of the most important formal models for information retrieval (along with Boolean and probabilistic models) 63 63Introduction to Information Retrieval Resources  Chapters 6 and 7 of IIR  Resources at http://ifnlp.org/ir  Vector space for dummies  Exploring the similarity space (Moffat and Zobel, 2005)  Okapi BM25 (a stateoftheart weighting method, 11.4.3 of IIR) 64 64