Introduction to Information Retrieval
Latent Semantic Indexing
Overview
❶ Latent semantic indexing
❷ Dimensionality reduction
❸ LSI in information retrieval


❶ Latent semantic indexing

Recall: Term-document matrix

             Anthony and  Julius  The      Hamlet  Othello  Macbeth
             Cleopatra    Caesar  Tempest
anthony      5.25         3.18    0.0      0.0     0.0      0.35
brutus       1.21         6.10    0.0      1.0     0.0      0.0
caesar       8.59         2.54    0.0      1.51    0.25     0.0
calpurnia    0.0          1.54    0.0      0.0     0.0      0.0
cleopatra    2.85         0.0     0.0      0.0     0.0      0.0
mercy        1.51         0.0     1.90     0.12    5.25     0.88
worser       1.37         0.0     0.11     4.15    0.25     1.95
...

This matrix is the basis for computing the similarity between documents
and queries. Today: can we transform this matrix so that we get a better
measure of similarity between documents and queries?

Latent semantic indexing: Overview
• We will decompose the term-document matrix into a product of matrices.
• The particular decomposition we'll use: singular value decomposition (SVD).
• SVD: C = UΣV^T (where C = term-document matrix).
• We will then use the SVD to compute a new, improved term-document matrix C′.
• We'll get better similarity values out of C′ (compared to C).
• Using SVD for this purpose is called latent semantic indexing, or LSI.

Example of C = UΣV^T: The matrix C
This is a standard term-document matrix. Actually, we use an unweighted
matrix here to simplify the example.

Example of C = UΣV^T: The matrix U
One row per term, one column per semantic dimension; there are min(M, N)
dimensions, where M is the number of terms and N is the number of
documents. The columns of U are orthonormal: (i) each column vector has
unit length, and (ii) any two distinct column vectors are orthogonal to
each other. Think of the dimensions as "semantic" dimensions that capture
distinct topics like politics, sports, and economics. Each number u_ij in
the matrix indicates how strongly related term i is to the topic
represented by semantic dimension j.

Example of C = UΣV^T: The matrix Σ
This is a square, diagonal matrix of dimensionality min(M, N) × min(M, N).
The diagonal consists of the singular values of C. The magnitude of a
singular value measures the importance of the corresponding semantic
dimension. We'll make use of this by omitting unimportant dimensions.

Example of C = UΣV^T: The matrix V^T
One column per document, one row per semantic dimension (min(M, N) rows in
total). This matrix is again orthonormal, this time in its rows: (i) each
row vector (a column of V) has unit length, and (ii) any two distinct row
vectors are orthogonal to each other. These are the same semantic
dimensions as in the term matrix U, capturing distinct topics like
politics, sports, and economics. Each number v_ij indicates how strongly
related document i is to the topic represented by semantic dimension j.

Example of C = UΣV^T: All four matrices
[The worked example showing C, U, Σ, and V^T side by side is a figure in
the original slides.]

LSI: Summary
We've decomposed the term-document matrix C into a product of three
matrices:
• The term matrix U consists of one (row) vector for each term.
• The document matrix V^T consists of one (column) vector for each document.
• The singular value matrix Σ is a diagonal matrix of singular values,
  reflecting the importance of each dimension.
Next: why are we doing this?
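As a concrete illustration (not part of the original slides), here is a
minimal NumPy sketch of the decomposition. The matrix values follow the
small ship/boat/ocean/wood/tree example used in IIR chapter 18; treat them
as illustrative. With a different matrix, the shapes and sanity checks
carry over unchanged.

```python
import numpy as np

# Small unweighted term-document matrix C (terms x documents).
# Rows: ship, boat, ocean, wood, tree; columns: documents d1..d6.
# (Values follow the ship/boat example in IIR ch. 18; illustrative only.)
C = np.array([
    [1, 0, 1, 0, 0, 0],  # ship
    [0, 1, 0, 0, 0, 0],  # boat
    [1, 1, 0, 0, 0, 0],  # ocean
    [1, 0, 0, 1, 1, 0],  # wood
    [0, 0, 0, 1, 0, 1],  # tree
], dtype=float)

# Thin SVD: U is M x min(M, N), s holds the min(M, N) singular values
# (largest first), Vt is min(M, N) x N.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
print(U.shape, s.shape, Vt.shape)  # (5, 5) (5,) (5, 6)

# Sanity checks: the product reconstructs C exactly, and the columns
# of U are orthonormal (unit length, pairwise orthogonal).
assert np.allclose(U @ np.diag(s) @ Vt, C)
assert np.allclose(U.T @ U, np.eye(U.shape[1]))
```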
❷ Dimensionality reduction

How we use the SVD in LSI
• Key property: each singular value tells us how important its dimension is.
• By setting less important dimensions to zero, we keep the important
  information, but get rid of the "details".
• These details may
  – be noise – in that case, reduced LSI is a better representation,
    because it is less noisy;
  – make things dissimilar that should be similar – again, reduced LSI is
    a better representation, because it represents similarity better.
• Analogy for "fewer details is better":
  – an image of a bright red flower,
  – an image of a black and white flower:
  – omitting color makes it easier to see the similarity.

Reducing the dimensionality to 2
Actually, we only zero out singular values in Σ. This has the effect of
setting the corresponding dimensions in U and V^T to zero when computing
the product C_2 = UΣ_2V^T. [The original slides show the unreduced
decomposition C = UΣV^T and the reduced decomposition side by side.]

Original matrix C vs. reduced C_2 = UΣ_2V^T
We can view C_2 as a two-dimensional representation of the matrix: we have
performed a dimensionality reduction to two dimensions.

Why the reduced matrix is "better"
Similarity of d2 and d3 in the original space: 0.
Similarity of d2 and d3 in the reduced space:
0.52 × 0.28 + 0.36 × 0.16 + 0.72 × 0.36 + 0.12 × 0.20 + 0.39 × 0.08 ≈ 0.52
"boat" and "ship" are semantically similar; the "reduced" similarity
measure reflects this. What property of the SVD reduction is responsible
for the improved similarity? (A NumPy sketch of this computation follows.)
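Continuing the NumPy sketch from above (again, not part of the original
slides): we truncate Σ to the two largest singular values and compare d2
and d3 in the reduced matrix. The exact printed values depend on the input
matrix, so treat them as illustrative. The last lines preview the query
mapping discussed under "Implementation" in the next section; the query
shown is a hypothetical "boat ocean" query.

```python
# Zero out all but the k = 2 largest singular values.
k = 2
s2 = s.copy()
s2[k:] = 0.0

# Reduced matrix C_2 = U Sigma_2 V^T: same shape as C, but rank 2.
C2 = U @ np.diag(s2) @ Vt

# Document similarity as a dot product of column vectors, as on the slide.
print(C[:, 1] @ C[:, 2])    # d2 . d3 in the original space: 0.0
print(C2[:, 1] @ C2[:, 2])  # d2 . d3 in the reduced space: > 0 (about 0.52)

# Preview of retrieval in the reduced space (see "Implementation" below):
# map a query q into the 2-dimensional space via q_2 = Sigma_2^{-1} U_2^T q,
# then rank documents by cosine similarity against the columns of V_2^T.
U2, Vt2 = U[:, :k], Vt[:k, :]
q = np.array([0, 1, 1, 0, 0], dtype=float)  # hypothetical query "boat ocean"
q2 = np.diag(1.0 / s[:k]) @ U2.T @ q        # query in the reduced space
scores = (q2 @ Vt2) / (np.linalg.norm(q2) * np.linalg.norm(Vt2, axis=0))
print(scores)                               # one cosine score per document
```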
❸ LSI in information retrieval

Why we use LSI in information retrieval
• LSI takes documents that are semantically similar (= talk about the same
  topics), ...
• ... but are not similar in the vector space (because they use different
  words), ...
• ... and re-represents them in a reduced vector space ...
• ... in which they have higher similarity.
• Thus, LSI addresses the problems of synonymy and semantic relatedness.
• Standard vector space: synonyms contribute nothing to document similarity.
• Desired effect of LSI: synonyms contribute strongly to document similarity.

How LSI addresses synonymy and semantic relatedness
• The dimensionality reduction forces us to omit a lot of "detail".
• We have to map different words (= different dimensions of the full
  space) to the same dimension in the reduced space.
• The "cost" of mapping synonyms to the same dimension is much less than
  the cost of collapsing unrelated words.
• SVD selects the "least costly" mapping (see the optimality discussion
  below).
• Thus, it will map synonyms to the same dimension, but it will avoid
  doing that for unrelated words.

LSI: Comparison to other approaches
• Recap: relevance feedback and query expansion are used to increase
  recall in information retrieval – if query and documents have (in the
  extreme case) no terms in common.
• LSI increases recall and hurts precision.
• Thus, it addresses the same problems as (pseudo) relevance feedback and
  query expansion ...
• ... and it has the same problems.

Implementation
• Compute the SVD of the term-document matrix.
• Reduce the space and compute reduced document representations.
• Map the query into the reduced space: q_2 = Σ_2^{-1} U_2^T q.
  This follows from C_2 = U_2 Σ_2 V_2^T ⇒ Σ_2^{-1} U_2^T C_2 = V_2^T.
• Compute the similarity of q_2 with all reduced documents in V_2.
• Output a ranked list of documents as usual.
• Exercise: what is the fundamental problem with this approach?

Optimality
• SVD is optimal in the following sense: keeping the k largest singular
  values and setting all others to zero gives you the optimal
  approximation of the original matrix C (Eckart–Young theorem).
• Optimal: no other matrix of the same rank (= with the same underlying
  dimensionality) approximates C better.
• The measure of approximation is the Frobenius norm:
  ‖C‖_F = √(Σ_i Σ_j c_ij²)
• So LSI uses the "best possible" matrix.
• Caveat: there is only a tenuous relationship between the Frobenius norm
  and cosine similarity between documents.

Resources
• Chapter 18 of IIR
• Resources at http://ifnlp.org/ir
• Original paper on latent semantic indexing by Deerwester et al.
• Paper on probabilistic LSI by Thomas Hofmann
• Word space: LSI for words