Clustering in IR

Introduction to Information Retrieval: Flat Clustering

Overview
❶ Recap
❷ Clustering: Introduction
❸ Clustering in IR
❹ K-means
❺ Evaluation
❻ How many clusters?

❶ Recap

MI example for poultry/EXPORT in Reuters (figure)

Linear classifiers
- Linear classifiers compute a linear combination or weighted sum of the feature values.
- Classification decision: assign $d$ to class $c$ iff $\sum_i w_i d_i \ge \theta$.
- Geometrically, the equation $\sum_i w_i d_i = \theta$ defines a line (2D), a plane (3D) or a hyperplane (higher dimensionalities).
- Assumption: the classes are linearly separable.
- Methods for finding a linear separator: Perceptron, Rocchio, Naive Bayes, linear support vector machines, many others.

A linear classifier in 1D
- A linear classifier in 1D is a point described by the equation $w_1 d_1 = \theta$.
- The point is at $\theta / w_1$.
- Points $(d_1)$ with $w_1 d_1 \ge \theta$ are in the class $c$.
- Points $(d_1)$ with $w_1 d_1 < \theta$ are in the complement class.

A linear classifier in 2D
- A linear classifier in 2D is a line described by the equation $w_1 d_1 + w_2 d_2 = \theta$.
- Example for a 2D linear classifier (figure).
- Points $(d_1, d_2)$ with $w_1 d_1 + w_2 d_2 \ge \theta$ are in the class $c$.
- Points $(d_1, d_2)$ with $w_1 d_1 + w_2 d_2 < \theta$ are in the complement class.

A linear classifier in 3D
- A linear classifier in 3D is a plane described by the equation $w_1 d_1 + w_2 d_2 + w_3 d_3 = \theta$.
- Example for a 3D linear classifier (figure).
- Points $(d_1, d_2, d_3)$ with $w_1 d_1 + w_2 d_2 + w_3 d_3 \ge \theta$ are in the class $c$.
- Points $(d_1, d_2, d_3)$ with $w_1 d_1 + w_2 d_2 + w_3 d_3 < \theta$ are in the complement class.

Rocchio as a linear classifier
- Rocchio is a linear classifier defined by $\vec{w} \cdot \vec{d} = \theta$, where $\vec{w} = \vec{\mu}(c_1) - \vec{\mu}(c_2)$ is the normal vector and $\theta = 0.5 \cdot (|\vec{\mu}(c_1)|^2 - |\vec{\mu}(c_2)|^2)$.

Naive Bayes as a linear classifier
- Naive Bayes is a linear classifier (in log space) defined by $\sum_{i=1}^{M} w_i d_i = \theta$, where $w_i = \log \frac{\hat{P}(t_i \mid c)}{\hat{P}(t_i \mid \bar{c})}$, $d_i$ = number of occurrences of $t_i$ in $d$, and $\theta = -\log \frac{\hat{P}(c)}{\hat{P}(\bar{c})}$.
- Here, the index $i$, $1 \le i \le M$, refers to terms of the vocabulary (not to positions in $d$ as $k$ did in our original definition of Naive Bayes).

kNN is not a linear classifier
- The decision boundaries between classes are piecewise linear . . .
- . . . but they are in general not linear classifiers that can be described as $\sum_{i=1}^{M} w_i d_i = \theta$.
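A minimal sketch of the linear decision rule from the recap, in Python with NumPy; the weight vector, threshold and document vectors below are illustrative values, not taken from the slides:

import numpy as np

def linear_classify(d, w, theta):
    """Assign d to class c iff the weighted sum of its feature values reaches theta."""
    return np.dot(w, d) >= theta

# Hypothetical 2D example: the decision boundary is the line w1*d1 + w2*d2 = theta.
w = np.array([0.7, 0.3])
theta = 1.0
print(linear_classify(np.array([1.5, 0.5]), w, theta))  # True  -> in class c
print(linear_classify(np.array([0.5, 0.5]), w, theta))  # False -> in the complement class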
Takeaway today
- What is clustering?
- Applications of clustering in information retrieval
- K-means algorithm
- Evaluation of clustering
- How many clusters?

❷ Clustering: Introduction

Clustering: Definition
- (Document) clustering is the process of grouping a set of documents into clusters of similar documents.
- Documents within a cluster should be similar.
- Documents from different clusters should be dissimilar.
- Clustering is the most common form of unsupervised learning.
- Unsupervised = there are no labeled or annotated data.

Data set with clear cluster structure (figure)
- Exercise: Propose an algorithm for finding the cluster structure in this example.

Classification vs. Clustering
- Classification: supervised learning
- Clustering: unsupervised learning
- Classification: classes are human-defined and part of the input to the learning algorithm.
- Clustering: clusters are inferred from the data without human input.
- However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .

❸ Clustering in IR

The cluster hypothesis
- Cluster hypothesis: Documents in the same cluster behave similarly with respect to relevance to information needs.
- All applications of clustering in IR are based (directly or indirectly) on the cluster hypothesis.
- Van Rijsbergen's original wording: "closely associated documents tend to be relevant to the same requests".

Applications of clustering in IR
- Search result clustering: what is clustered = search results; benefit = more effective information presentation to the user
- Scatter-Gather: what is clustered = (subsets of) the collection; benefit = alternative user interface, "search without typing"
- Collection clustering: what is clustered = the collection; benefit = effective information presentation for exploratory browsing
- Cluster-based retrieval: what is clustered = the collection; benefit = higher efficiency, faster search

Search result clustering for better navigation (figure)

Scatter-Gather (figure)

Global navigation: Yahoo (figure)

Global navigation: MeSH, upper level (figure)

Global navigation: MeSH, lower level (figure)

Navigational hierarchies: manual vs. automatic creation
- Note: Yahoo/MeSH are not examples of clustering.
- But they are well-known examples of using a global hierarchy for navigation.
- Some examples of global navigation/exploration based on clustering: Cartia, Themescapes, Google News

Global navigation combined with visualization (figures 1 and 2)

Global clustering for navigation: Google News (http://news.google.com)

Clustering for improving recall
- To improve search recall:
  - Cluster docs in the collection a priori.
  - When a query matches a doc d, also return other docs in the cluster containing d (see the sketch below).
- Hope: if we do this, the query "car" will also return docs containing "automobile",
- because the clustering algorithm groups together docs containing "car" with those containing "automobile":
- both types of documents contain words like "parts", "dealer", "mercedes", "road trip".
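A minimal sketch of this idea, assuming the collection has already been clustered and we have a mapping from each document to its cluster; the function name and data structures are illustrative, not from the slides:

def expand_with_clusters(matching_docs, doc_to_cluster, cluster_to_docs):
    """Return the matching docs plus all other docs from the clusters they belong to."""
    expanded = set(matching_docs)
    for d in matching_docs:
        expanded.update(cluster_to_docs[doc_to_cluster[d]])
    return expanded

# Hypothetical toy collection: docs 0-2 talk about "car", docs 3-4 about "automobile",
# and the a priori clustering has put all five into cluster 0.
doc_to_cluster = {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 1}
cluster_to_docs = {0: {0, 1, 2, 3, 4}, 1: {5}}
print(expand_with_clusters({0, 2}, doc_to_cluster, cluster_to_docs))
# {0, 1, 2, 3, 4}: the "automobile" docs 3 and 4 are returned although the query matched only "car" docs.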
Data set with clear cluster structure (figure)
- Exercise: Propose an algorithm for finding the cluster structure in this example.

Desiderata for clustering
- General goal: put related docs in the same cluster, put unrelated docs in different clusters.
  - How do we formalize this?
- The number of clusters should be appropriate for the data set we are clustering.
  - Initially, we will assume the number of clusters K is given.
  - Later: semi-automatic methods for determining K
- Secondary goals in clustering:
  - Avoid very small and very large clusters.
  - Define clusters that are easy to explain to the user.
  - Many others . . .

Flat vs. hierarchical clustering
- Flat algorithms
  - Usually start with a random (partial) partitioning of docs into groups
  - Refine iteratively
  - Main algorithm: K-means
- Hierarchical algorithms
  - Create a hierarchy
  - Bottom-up, agglomerative
  - Top-down, divisive

Hard vs. soft clustering
- Hard clustering: each document belongs to exactly one cluster.
  - More common and easier to do
- Soft clustering: a document can belong to more than one cluster.
  - Makes more sense for applications like creating browsable hierarchies
  - You may want to put sneakers in two clusters: sports apparel and shoes.
  - You can only do that with a soft clustering approach.
- We will do flat, hard clustering only in this class.
- See IIR 16.5, IIR 17, IIR 18 for soft clustering and hierarchical clustering.

Flat algorithms
- Flat algorithms compute a partition of N documents into a set of K clusters.
- Given: a set of documents and the number K
- Find: a partition into K clusters that optimizes the chosen partitioning criterion
- Global optimization: exhaustively enumerate partitions, pick the optimal one
  - Not tractable
- Effective heuristic method: K-means algorithm

❹ K-means

K-means
- Perhaps the best known clustering algorithm
- Simple, works well in many cases
- Use as default / baseline for clustering documents

Document representations in clustering
- Vector space model
- As in vector space classification, we measure relatedness between vectors by Euclidean distance . . .
- . . . which is almost equivalent to cosine similarity.
- Almost: centroids are not length-normalized.

K-means
- Each cluster in K-means is defined by a centroid.
- Objective/partitioning criterion: minimize the average squared difference from the centroid.
- Recall the definition of the centroid: $\vec{\mu}(\omega) = \frac{1}{|\omega|} \sum_{\vec{x} \in \omega} \vec{x}$, where we use ω to denote a cluster.
- We try to find the minimum average squared difference by iterating two steps:
  - reassignment: assign each vector to its closest centroid
  - recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment
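A minimal sketch of the two quantities involved, the centroid of a cluster and the residual sum of squares (RSS) used as the objective; it assumes documents are represented as NumPy vectors and is illustrative code, not from the slides:

import numpy as np

def centroid(vectors):
    """Centroid of a cluster: the component-wise average of its vectors."""
    return np.mean(vectors, axis=0)

def rss(vectors, mu):
    """Residual sum of squares: sum of squared distances of the vectors to mu."""
    return float(sum(np.sum((x - mu) ** 2) for x in vectors))

cluster = [np.array([1.0, 1.0]), np.array([2.0, 1.0]), np.array([1.5, 2.5])]
mu = centroid(cluster)
print(mu, rss(cluster, mu))  # [1.5 1.5] 2.0; any other choice of mu gives a larger RSS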
K-means algorithm (pseudocode figure: starting from K seeds, alternate the reassignment and recomputation steps described above until a stopping criterion is met)

Worked example (figure sequence)
- Set of points to be clustered
- Random selection of initial centroids
  - Exercise: (i) Guess what the optimal clustering into two clusters is in this case; (ii) compute the centroids of the clusters.
- The algorithm then repeatedly assigns each point to its closest centroid and recomputes the cluster centroids; after several such iterations, the centroids and assignments no longer change (convergence).

K-means is guaranteed to converge: proof
- RSS = sum of all squared distances between each document vector and its closest centroid.
- RSS decreases during each reassignment step, because each vector is moved to a closer centroid.
- RSS decreases during each recomputation step (see below).
- There is only a finite number of clusterings.
- Thus: we must reach a fixed point.
- Assumption: ties are broken consistently.

Recomputation decreases average distance (RSS, the "goodness" measure)
- $\text{RSS}_k(\vec{v}) = \sum_{\vec{x} \in \omega_k} \|\vec{v} - \vec{x}\|^2 = \sum_{\vec{x} \in \omega_k} \sum_m (v_m - x_m)^2$
- Setting $\frac{\partial \text{RSS}_k(\vec{v})}{\partial v_m} = \sum_{\vec{x} \in \omega_k} 2(v_m - x_m) = 0$ gives $v_m = \frac{1}{|\omega_k|} \sum_{\vec{x} \in \omega_k} x_m$.
- The last line is the component-wise definition of the centroid.
- We minimize $\text{RSS}_k$ when the old centroid is replaced with the new centroid. RSS, the sum of the $\text{RSS}_k$, must then also decrease during recomputation.

K-means is guaranteed to converge
- But we don't know how long convergence will take!
- If we don't care about a few docs switching back and forth, then convergence is usually fast (10-20 iterations).
- However, complete convergence can take many more iterations.
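A minimal K-means sketch in Python/NumPy corresponding to the reassignment/recomputation loop above; it stops when the assignment no longer changes or after max_iter iterations, and uses random seed selection, which is only one of several possible initializations (see below):

import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    """Flat, hard clustering of the rows of X into K clusters (a teaching sketch)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]  # random seed selection
    assignment = None
    for _ in range(max_iter):
        # Reassignment: assign each vector to its closest centroid (squared Euclidean distance).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        new_assignment = dists.argmin(axis=1)
        if assignment is not None and np.array_equal(new_assignment, assignment):
            break  # fixed point: assignments (and hence centroids) no longer change
        assignment = new_assignment
        # Recomputation: each centroid becomes the mean of the vectors assigned to it.
        for k in range(K):
            members = X[assignment == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    return centroids, assignment

# Illustrative use: two well-separated blobs that K = 2 should recover.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, size=(10, 2)),
               rng.normal(3.0, 0.1, size=(10, 2))])
centroids, labels = kmeans(X, K=2)
print(centroids)
print(labels)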
Optimality of K-means
- Convergence does not mean that we converge to the optimal clustering!
- This is the great weakness of K-means.
- If we start with a bad set of seeds, the resulting clustering can be horrible.

Convergence exercise: suboptimal clustering (figure)
- What is the optimal clustering for K = 2?
- Do we converge to this clustering for arbitrary seeds $d_i$, $d_j$?

Initialization of K-means
- Random seed selection is just one of many ways K-means can be initialized.
- Random seed selection is not very robust: it is easy to get a suboptimal clustering.
- Better ways of computing initial centroids:
  - Select seeds not randomly, but using some heuristic (e.g., filter out outliers or find a set of seeds that has "good coverage" of the document space).
  - Use hierarchical clustering to find good seeds.
  - Select i (e.g., i = 10) different random sets of seeds, do a K-means clustering for each, select the clustering with the lowest RSS.

Time complexity of K-means
- Computing one distance between two vectors is O(M).
- Reassignment step: O(KNM) (we need to compute KN document-centroid distances)
- Recomputation step: O(NM) (we need to add each of the document's M values to one of the centroids)
- Assume the number of iterations is bounded by I.
- Overall complexity: O(IKNM), linear in all important dimensions.
- However: this is not a real worst-case analysis. In pathological cases, complexity can be worse than linear.

❺ Evaluation

What is a good clustering?
- Internal criteria
  - Example of an internal criterion: RSS in K-means
  - But an internal criterion often does not evaluate the actual utility of a clustering in the application.
- Alternative: external criteria
  - Evaluate with respect to a human-defined classification.

External criteria for clustering quality
- Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification.
- Goal: the clustering should reproduce the classes in the gold standard.
- (But we only want to reproduce how documents are divided into groups, not the class labels.)
- First measure for how well we were able to reproduce the classes: purity

External criterion: Purity
- $\Omega = \{\omega_1, \omega_2, \ldots, \omega_K\}$ is the set of clusters and $C = \{c_1, c_2, \ldots, c_J\}$ is the set of classes.
- For each cluster $\omega_k$: find the class $c_j$ with the most members $n_{kj}$ in $\omega_k$.
- Sum all $n_{kj}$ and divide by the total number of points: $\text{purity}(\Omega, C) = \frac{1}{N} \sum_k \max_j |\omega_k \cap c_j|$

Example for computing purity (figure)
- To compute purity: $5 = \max_j |\omega_1 \cap c_j|$ (class x, cluster 1); $4 = \max_j |\omega_2 \cap c_j|$ (class o, cluster 2); and $3 = \max_j |\omega_3 \cap c_j|$ (class ⋄, cluster 3).
- Purity is $(1/17) \times (5 + 4 + 3) \approx 0.71$.

Rand index
- Definition: $\text{RI} = \frac{TP + TN}{TP + FP + FN + TN}$
- Based on a 2x2 contingency table of all pairs of documents: TP + FN + FP + TN is the total number of pairs.
- There are $\binom{N}{2}$ pairs for N documents. Example: $\binom{17}{2} = 136$ in the o/⋄/x example.
- Each pair is either positive or negative (the clustering puts the two documents in the same or in different clusters) . . .
- . . . and either "true" (correct) or "false" (incorrect): the clustering decision is correct or incorrect.
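A small sketch computing purity and the Rand index for the o/⋄/x example; the exact label lists below are reconstructed from the counts given in the slides (cluster 1: 5 x and 1 o; cluster 2: 1 x, 4 o and 1 ⋄; cluster 3: 2 x and 3 ⋄), so treat them as an assumption:

from collections import Counter
from itertools import combinations

def purity(clusters, classes):
    """Fraction of points that belong to the majority class of their cluster."""
    per_cluster = {}
    for omega, c in zip(clusters, classes):
        per_cluster.setdefault(omega, Counter())[c] += 1
    return sum(counts.most_common(1)[0][1] for counts in per_cluster.values()) / len(clusters)

def rand_index(clusters, classes):
    """(TP + TN) divided by the total number of document pairs."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(clusters)), 2):
        same_cluster = clusters[i] == clusters[j]
        same_class = classes[i] == classes[j]
        if same_cluster and same_class:
            tp += 1
        elif same_cluster:
            fp += 1
        elif same_class:
            fn += 1
        else:
            tn += 1
    return (tp + tn) / (tp + fp + fn + tn)

# Reconstructed o/⋄/x example (17 points, 3 clusters); 'd' stands for the ⋄ class.
clusters = [1] * 6 + [2] * 6 + [3] * 5
classes = ['x'] * 5 + ['o'] + ['x'] + ['o'] * 4 + ['d'] + ['x'] * 2 + ['d'] * 3
print(round(purity(clusters, classes), 2))      # 0.71
print(round(rand_index(clusters, classes), 2))  # 0.68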
Rand index: Example
- As an example, we compute RI for the o/⋄/x example. We first compute TP + FP.
- The three clusters contain 6, 6, and 5 points, respectively, so the total number of "positives" (pairs of documents in the same cluster) is: $\binom{6}{2} + \binom{6}{2} + \binom{5}{2} = 15 + 15 + 10 = 40$
- Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives: $\binom{5}{2} + \binom{4}{2} + \binom{3}{2} + \binom{2}{2} = 10 + 6 + 3 + 1 = 20$
- Thus, FP = 40 − 20 = 20.
- FN and TN are computed similarly.

Rand measure for the o/⋄/x example
- RI = (20 + 72)/(20 + 20 + 24 + 72) = 92/136 ≈ 0.68

Two other external evaluation measures
- Normalized mutual information (NMI)
  - How much information does the clustering contain about the classification?
  - Singleton clusters (number of clusters = number of docs) have maximum MI.
  - Therefore: normalize by the entropy of clusters and classes.
- F measure
  - Like Rand, but "precision" and "recall" can be weighted.

Evaluation results for the o/⋄/x example (table figure)
- All four measures range from 0 (really bad clustering) to 1 (perfect clustering).

❻ How many clusters?

How many clusters?
- The number of clusters K is given in many applications.
  - E.g., there may be an external constraint on K. Example: in the case of Scatter-Gather, it was hard to show more than 10-20 clusters on a monitor in the 90s.
- What if there is no external constraint? Is there a "right" number of clusters?
- One way to go: define an optimization criterion.
  - Given docs, find the K for which the optimum is reached.
  - What optimization criterion can we use?
  - We can't use RSS or average squared distance from the centroid as the criterion: it would always choose K = N clusters.

Exercise
- Your job is to develop the clustering algorithms for a competitor to news.google.com.
- You want to use K-means clustering.
- How would you determine K?

Simple objective function for K (1)
- Basic idea:
  - Start with 1 cluster (K = 1).
  - Keep adding clusters (= keep increasing K).
  - Add a penalty for each new cluster.
- Trade off cluster penalties against average squared distance from the centroid.
- Choose the value of K with the best tradeoff.

Simple objective function for K (2)
- Given a clustering, define the cost for a document as its (squared) distance to the centroid.
- Define the total distortion RSS(K) as the sum of all individual document costs (corresponds to average distance).
- Then: penalize each cluster with a cost λ.
- Thus for a clustering with K clusters, the total cluster penalty is Kλ.
- Define the total cost of a clustering as distortion plus total cluster penalty: RSS(K) + Kλ.
- Select the K that minimizes RSS(K) + Kλ.
- Still need to determine a good value for λ . . .

Finding the "knee" in the curve (figure)
- Pick the number of clusters where the curve "flattens". Here: 4 or 9.
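A minimal sketch of the RSS(K) + Kλ criterion, reusing a kmeans function like the one sketched earlier; the value of λ and the data are illustrative assumptions, and λ itself would still have to be tuned:

import numpy as np

def total_rss(X, centroids, labels):
    """RSS(K): sum of squared distances of each document to its assigned centroid."""
    return float(((X - centroids[labels]) ** 2).sum())

def choose_k(X, k_max, lam, kmeans):
    """Pick the K in 1..k_max that minimizes RSS(K) + K * lambda."""
    best_k, best_cost = None, np.inf
    for k in range(1, k_max + 1):
        centroids, labels = kmeans(X, K=k)
        cost = total_rss(X, centroids, labels) + k * lam
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Illustrative call (lam = 2.0 is an arbitrary choice):
# best = choose_k(X, k_max=10, lam=2.0, kmeans=kmeans)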
Takeaway today
- What is clustering?
- Applications of clustering in information retrieval
- K-means algorithm
- Evaluation of clustering
- How many clusters?

Resources
- Chapter 16 of IIR
- Resources at http://ifnlp.org/ir
  - K-means example
  - Keith van Rijsbergen on the cluster hypothesis (he was one of the originators)
  - Bing/Carrot2/Clusty: search result clustering