Introduction to Information Retrieval: Latent Semantic Indexing

Overview
❶ Latent semantic indexing
❷ Dimensionality reduction
❸ LSI in information retrieval

❶ Latent semantic indexing

Recall: Term-document matrix

             Anthony and   Julius   The       Hamlet   Othello   Macbeth
             Cleopatra     Caesar   Tempest
anthony      5.25          3.18     0.0       0.0      0.0       0.35
brutus       1.21          6.10     0.0       1.0      0.0       0.0
caesar       8.59          2.54     0.0       1.51     0.25      0.0
calpurnia    0.0           1.54     0.0       0.0      0.0       0.0
cleopatra    2.85          0.0      0.0       0.0      0.0       0.0
mercy        1.51          0.0      1.90      0.12     5.25      0.88
worser       1.37          0.0      0.11      4.15     0.25      1.95
. . .

This matrix is the basis for computing the similarity between documents and queries. Today: Can we transform this matrix so that we get a better measure of similarity between documents and queries?

Latent semantic indexing: Overview
• We will decompose the term-document matrix into a product of matrices.
• The particular decomposition we'll use: singular value decomposition (SVD).
• SVD: C = UΣV^T (where C is the term-document matrix).
• We will then use the SVD to compute a new, improved term-document matrix C′.
• We'll get better similarity values out of C′ (compared to C).
• Using SVD for this purpose is called latent semantic indexing or LSI.

Example of C = UΣV^T: The matrix C
This is a standard term-document matrix. Actually, we use a non-weighted matrix here to simplify the example.

Example of C = UΣV^T: The matrix U
One row per term, one column per semantic dimension; there are min(M, N) dimensions, where M is the number of terms and N is the number of documents. This is an orthonormal matrix: (i) Row vectors have unit length. (ii) Any two distinct row vectors are orthogonal to each other. Think of the dimensions as "semantic" dimensions that capture distinct topics like politics, sports, economics. Each number u_ij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j.

Example of C = UΣV^T: The matrix Σ
This is a square, diagonal matrix of dimensionality min(M, N) × min(M, N). The diagonal consists of the singular values of C. The magnitude of a singular value measures the importance of the corresponding semantic dimension. We'll make use of this by omitting unimportant dimensions.

Example of C = UΣV^T: The matrix V^T
One column per document, one row per semantic dimension; again there are min(M, N) of them. Again: This is an orthonormal matrix: (i) Column vectors have unit length. (ii) Any two distinct column vectors are orthogonal to each other. These are again the semantic dimensions from the term matrix U that capture distinct topics like politics, sports, economics. Each number v_ij in the matrix indicates how strongly related document i is to the topic represented by semantic dimension j.

Example of C = UΣV^T: All four matrices
[The slide shows the example matrices C, U, Σ and V^T side by side.]
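The slides' worked example matrix appears only as a figure, but the decomposition itself is easy to reproduce. Below is a minimal sketch, assuming NumPy and using the weighted Shakespeare matrix from the recall slide as C (the slides use a different, non-weighted example, so the printed values will not match the figures); numpy.linalg.svd returns U, the singular values, and V^T directly.

```python
import numpy as np

# Term-document matrix C from the recall slide:
# rows = terms, columns = plays (Anthony and Cleopatra, Julius Caesar,
# The Tempest, Hamlet, Othello, Macbeth).
C = np.array([
    [5.25, 3.18, 0.00, 0.00, 0.00, 0.35],   # anthony
    [1.21, 6.10, 0.00, 1.00, 0.00, 0.00],   # brutus
    [8.59, 2.54, 0.00, 1.51, 0.25, 0.00],   # caesar
    [0.00, 1.54, 0.00, 0.00, 0.00, 0.00],   # calpurnia
    [2.85, 0.00, 0.00, 0.00, 0.00, 0.00],   # cleopatra
    [1.51, 0.00, 1.90, 0.12, 5.25, 0.88],   # mercy
    [1.37, 0.00, 0.11, 4.15, 0.25, 1.95],   # worser
])

# SVD: C = U @ diag(sigma) @ Vt
U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

print("U shape:", U.shape)                     # (7, 6): one row per term
print("singular values:", np.round(sigma, 2))  # importance of each semantic dimension
print("Vt shape:", Vt.shape)                   # (6, 6): one column per document

# Sanity check: the product reconstructs C (up to floating-point error).
assert np.allclose(U @ np.diag(sigma) @ Vt, C)
```

With full_matrices=False, U gets one column and V^T one row per semantic dimension, i.e. min(M, N) of them, matching the shapes described above.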
LSI: Summary
• We've decomposed the term-document matrix C into a product of three matrices:
  • The term matrix U – consists of one (row) vector for each term
  • The document matrix V^T – consists of one (column) vector for each document
  • The singular value matrix Σ – a diagonal matrix of singular values, reflecting the importance of each dimension
• Next: Why are we doing this?

❷ Dimensionality reduction

How we use the SVD in LSI
• Key property: Each singular value tells us how important its dimension is.
• By setting less important dimensions to zero, we keep the important information but get rid of the "details".
• These details may
  • be noise – in that case, reduced LSI is a better representation because it is less noisy
  • make things dissimilar that should be similar – again, reduced LSI is a better representation because it represents similarity better
• Analogy for "fewer details is better":
  • Image of a bright red flower
  • Image of a black and white flower
  • Omitting color makes it easier to see the similarity

Reducing the dimensionality to 2
Actually, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and V^T to zero when computing the product C_2 = UΣ_2V^T.

Recall unreduced decomposition C = UΣV^T

Original matrix C vs. reduced C_2 = UΣ_2V^T
We can view C_2 as a two-dimensional representation of the matrix. We have performed a dimensionality reduction to two dimensions.

Why the reduced matrix is "better"
Similarity of d2 and d3 in the original space: 0.
Similarity of d2 and d3 in the reduced space:
0.52 × 0.28 + 0.36 × 0.16 + 0.72 × 0.36 + 0.12 × 0.20 + (−0.39) × (−0.08) ≈ 0.52

Why the reduced matrix is "better"
"boat" and "ship" are semantically similar. The "reduced" similarity measure reflects this. What property of the SVD reduction is responsible for the improved similarity?
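The boat/ship numbers above come from the slides' own example matrix, which is shown only as a figure, so they are not reproduced here. The sketch below applies the same procedure to the Shakespeare matrix from the earlier snippet: the helper reduce_to_k (a hypothetical name, not a library function) zeroes out all but the k largest singular values and recomputes C_k = UΣ_kV^T, and document similarities are plain dot products of column vectors, as in the slide's calculation.

```python
import numpy as np

# Same term-document matrix as in the previous snippet
# (rows = terms, columns = documents d1..d6).
C = np.array([
    [5.25, 3.18, 0.00, 0.00, 0.00, 0.35],
    [1.21, 6.10, 0.00, 1.00, 0.00, 0.00],
    [8.59, 2.54, 0.00, 1.51, 0.25, 0.00],
    [0.00, 1.54, 0.00, 0.00, 0.00, 0.00],
    [2.85, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.51, 0.00, 1.90, 0.12, 5.25, 0.88],
    [1.37, 0.00, 0.11, 4.15, 0.25, 1.95],
])

def reduce_to_k(C, k):
    """Zero out all but the k largest singular values, then recompute C_k = U @ Sigma_k @ V^T."""
    U, sigma, Vt = np.linalg.svd(C, full_matrices=False)
    sigma[k:] = 0.0                      # drop the less important semantic dimensions
    return U @ np.diag(sigma) @ Vt

C2 = reduce_to_k(C, k=2)                 # two-dimensional LSI representation of C

# Similarity of documents d2 and d3 before and after the reduction
# (dot product of their column vectors, as in the slide's calculation).
d2, d3 = 1, 2                            # 0-based column indices
print("original space:", round(float(C[:, d2] @ C[:, d3]), 2))
print("reduced space: ", round(float(C2[:, d2] @ C2[:, d3]), 2))
```

Because NumPy returns the singular values in decreasing order, zeroing out the tail keeps exactly the k most important semantic dimensions.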