Index construction in MapReduce

Introduction to Information Retrieval
Index Construction

Overview
❶ Recap ❷ Introduction ❸ BSBI algorithm ❹ SPIMI algorithm ❺ Distributed indexing ❻ Dynamic indexing

❶ Recap

Dictionary as array of fixed-width entries
- Space needed per entry: 20 bytes (term) + 4 bytes (document frequency) + 4 bytes (pointer to postings list)

B-tree for looking up entries in the array (figure)

Wildcard queries using a permuterm index
- For X, look up X$
- For X*, look up $X*
- For *X, look up X$*
- For *X*, look up X*
- For X*Y, look up Y$X*

k-gram indexes for spelling correction: bordroom (figure)

Levenshtein distance for spelling correction (figure)

Exercise: Understand Peter Norvig's spelling corrector

Takeaway
- Two index construction algorithms: BSBI (simple) and SPIMI (more realistic)
- Distributed index construction: MapReduce
- Dynamic index construction: how to keep the index up to date as the collection changes

❷ Introduction

Hardware basics
- Many design decisions in information retrieval are based on hardware constraints.
- We begin by reviewing the hardware basics that we'll need in this course.
- Access to data is much faster in memory than on disk (roughly a factor of 10).
- Disk seeks are "idle" time: no data is transferred from disk while the disk head is being positioned.
- To optimize transfer time from disk to memory: one large chunk is faster than many small chunks.
- Disk I/O is block-based: reading and writing of entire blocks (as opposed to smaller chunks). Block sizes: 8 KB to 256 KB.
- Servers used in IR systems typically have several GB of main memory, sometimes tens of GB, and TBs or 100s of GB of disk space.
- Fault tolerance is expensive: it's cheaper to use many regular machines than one fault-tolerant machine.

Some stats (ca. 2008)
  symbol   statistic                                           value
  s        average seek time                                   5 ms = 5 × 10^-3 s
  b        transfer time per byte                              0.02 μs = 2 × 10^-8 s
           processor's clock rate                              10^9 s^-1
  p        low-level operation (e.g., compare & swap a word)   0.01 μs = 10^-8 s
           size of main memory                                 several GB
           size of disk space                                  1 TB or more

RCV1 collection
- Shakespeare's collected works are not large enough for demonstrating many of the points in this course.
- As an example for applying scalable index construction algorithms, we will use the Reuters RCV1 collection.
- English newswire articles sent over the wire in 1995 and 1996 (one year).

A Reuters RCV1 document (figure)

Reuters RCV1 statistics
  N   documents                                 800,000
  L   tokens per document                       200
  M   terms (= word types)                      400,000
      bytes per token (incl. spaces/punct.)     6
      bytes per token (without spaces/punct.)   4.5
      bytes per term (= word type)              7.5
  T   non-positional postings                   100,000,000

Exercise: What is the average frequency of a term (how many tokens)? 4.5 bytes per word token vs.
7.5 bytes per word type: why the difference? How many positional postings?

❸ BSBI algorithm

Goal: construct the inverted index
- dictionary → postings

Index construction in IIR 1: sort postings in memory (figure)

Sort-based index construction
- As we build the index, we parse docs one at a time.
- The final postings for any term are incomplete until the end.
- Can we keep all postings in memory and then do the sort in memory at the end? No, not for large collections.
- At 10–12 bytes per postings entry, we need a lot of space for large collections.
- T = 100,000,000 in the case of RCV1: we can do this in memory on a typical machine in 2010.
- But in-memory index construction does not scale for large collections.
- Thus: we need to store intermediate results on disk.

Same algorithm for disk?
- Can we use the same index construction algorithm for larger collections by using disk instead of memory? No: sorting T = 100,000,000 records on disk is too slow – too many disk seeks.
- We need an external sorting algorithm.

"External" sorting algorithm (using few disk seeks)
- We must sort T = 100,000,000 non-positional postings.
- Each posting has size 12 bytes (4 + 4 + 4: termID, docID, document frequency).
- Define a block to consist of 10,000,000 such postings; we can easily fit that many postings into memory.
- We will have 10 such blocks for RCV1.
- Basic idea of the algorithm:
  - For each block: (i) accumulate postings, (ii) sort in memory, (iii) write to disk.
  - Then merge the blocks into one long sorted order.
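The block-level steps above can be sketched in a few lines of Python. This is a minimal illustration, not the book's BSBIndexConstruction pseudocode: postings are plain (termID, docID) tuples, sorted blocks are kept as in-memory lists standing in for sorted runs on disk, and `heapq.merge` plays the role of the n-way merge of the block files.

```python
import heapq
from itertools import islice

def bsbi_index(postings_stream, block_size):
    """Blocked sort-based indexing (sketch).

    postings_stream yields (termID, docID) pairs in collection order.
    Each block of block_size postings is sorted in memory; the sorted
    blocks (here: lists, in reality files on disk) are then merged
    into one long sorted postings sequence.
    """
    blocks = []
    it = iter(postings_stream)
    while True:
        block = list(islice(it, block_size))  # (i) accumulate postings
        if not block:
            break
        block.sort()                          # (ii) sort in memory
        blocks.append(block)                  # (iii) "write to disk"
    # n-way merge of the sorted runs (few seeks: sequential reads only)
    return list(heapq.merge(*blocks))

postings = [(3, 1), (1, 1), (2, 2), (1, 2), (3, 3), (2, 1)]
merged = bsbi_index(postings, block_size=2)
# merged is fully sorted by (termID, docID):
# [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 3)]
```

Because each run is already sorted, the final merge reads every block sequentially exactly once, which is what makes the disk-based version feasible.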
Merging two blocks (figure)

Blocked Sort-Based Indexing (BSBI)
- Key decision: what is the size of one block?

❹ SPIMI algorithm

Problem with the sort-based algorithm
- Our assumption was: we can keep the dictionary in memory.
- We need the dictionary (which grows dynamically) in order to implement a term-to-termID mapping.
- Actually, we could work with (term, docID) postings instead of (termID, docID) postings...
- ...but then intermediate files become very large. (We would end up with a scalable, but very slow, index construction method.)

Single-pass in-memory indexing
- Abbreviation: SPIMI
- Key idea 1: generate separate dictionaries for each block – no need to maintain the term–termID mapping across blocks.
- Key idea 2: don't sort; accumulate postings in postings lists as they occur.
- With these two ideas we can generate a complete inverted index for each block.
- These separate indexes can then be merged into one big index.

SPIMI-Invert (figure)

SPIMI: Compression
- Compression makes SPIMI even more efficient: compression of terms and compression of postings (see next lecture).

Exercise: Time one machine needs for a Google-size collection

❺ Distributed indexing

Distributed indexing
- For web-scale indexing (don't try this at home!): must use a distributed computer cluster.
- Individual machines are fault-prone: they can unpredictably slow down or fail.
- How do we exploit such a pool of machines?

Google data centers (2007 estimates; Gartner)
- Google data centers mainly contain commodity machines.
- Data centers are distributed all over the world.
- 1 million servers, 3 million processors/cores
- Google installs 100,000 servers each quarter, based on expenditures of 200–250 million dollars per year.
- This would be 10% of the computing capacity of the world.
- If each node of a non-fault-tolerant system with 1000 nodes has 99.9% uptime, what is the uptime of the system (assuming it does not tolerate failures)? Answer: 0.999^1000 ≈ 37% (i.e., at least one node is down 63% of the time).
- Suppose a server fails after 3 years. For an installation of 1 million servers, what is the interval between machine failures? Answer: less than two minutes.

Distributed indexing
- Maintain a master machine directing the indexing job – considered "safe".
- Break up indexing into sets of parallel tasks.
- The master machine assigns each task to an idle machine from a pool.

Parallel tasks
- We will define two sets of parallel tasks and deploy two types of machines to solve them: parsers and inverters.
- Break the input document collection into splits (corresponding to blocks in BSBI/SPIMI); each split is a subset of documents.

Parsers
- The master assigns a split to an idle parser machine.
- The parser reads one document at a time and emits (term, docID) pairs.
- The parser writes the pairs into j term-partitions, each covering a range of terms' first letters, e.g., a–f, g–p, q–z (here: j = 3).

Inverters
- An inverter collects all (term, docID) pairs (= postings) for one term-partition (e.g., for a–f).
- It sorts them and writes them to postings lists.

Data flow (figure)

MapReduce
- The index construction algorithm we just described is an instance of MapReduce.
- MapReduce is a robust and conceptually simple framework for distributed computing...
- ...without having to write code for the distribution part.
- The Google indexing system (ca. 2002) consisted of a number of phases, each implemented in MapReduce; index construction was just one phase.
- Another phase: transform the term-partitioned index into a document-partitioned index.

Index construction in MapReduce (figure)

Exercise
- What information does the task description that the master gives to a parser contain?
- What information does the parser report back to the master upon completion of the task?
- What information does the task description that the master gives to an inverter contain?
- What information does the inverter report back to the master upon completion of the task?

❻ Dynamic indexing

Dynamic indexing
- Up to now, we have assumed that collections are static.
- They rarely are: documents are inserted, deleted and modified.
- This means that the dictionary and postings lists have to be modified dynamically.

Dynamic indexing: simplest approach
- Maintain a big main index on disk.
- New docs go into a small auxiliary index in memory.
- Search across both and merge the results.
- Periodically merge the auxiliary index into the big index.
- Deletions: keep an invalidation bit vector for deleted docs and filter the docs returned by the index using this bit vector.

Issues with auxiliary and main index
- Frequent merges
- Poor search performance during an index merge
- Actually, merging the auxiliary index into the main index is not that costly if we keep a separate file for each postings list: the merge is the same as a simple append.
- But then we would need a lot of files – inefficient.
- Assumption for the rest of the lecture: the index is one big file.
- In reality: use a scheme somewhere in between (e.g., split very large postings lists into several files, collect small postings lists in one file, etc.).

Logarithmic merge
- Logarithmic merging amortizes the cost of merging indexes over time. → Users see a smaller effect on response times.
- Maintain a series of indexes, each twice as large as the previous one.
- Keep the smallest (Z0) in memory.
- Keep the larger ones (I0, I1, ...) on disk.
- If Z0 gets too big (≥ n), write it to disk as I0...
- ...or merge it with I0 (if I0 already exists) and write the merger to I1, etc.

Binary numbers: I3 I2 I1 I0 = 2^3 2^2 2^1 2^0
- 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100

Logarithmic merge
- The number of indexes is bounded by O(log T) (T is the total number of postings read so far).
- So query processing requires the merging of O(log T) indexes.
- The time complexity of index construction is O(T log T)...
- ...because each of the T postings is merged O(log T) times.
- Auxiliary index: index construction time is O(T^2), as each posting is touched in each merge.
- Suppose the auxiliary index has size a: the merges cost a + 2a + 3a + ... + T = O(T^2/a).
- So logarithmic merging is an order of magnitude more efficient.
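The logarithmic-merge scheme above can be sketched as a small Python class. This is an illustrative sketch, not the book's LMergeAddToken pseudocode: indexes are represented as plain sorted lists of postings, the class name and the flush rule (write out Z0 as soon as it holds n postings) are assumptions, and merging is done with `sorted()` rather than a streaming disk merge.

```python
class LogarithmicMerger:
    """Series of indexes I0, I1, ... on "disk", where level i holds
    n * 2^i postings, plus an in-memory index Z0 of size < n.
    Flushing Z0 works like binary addition: the carry is merged into
    successive levels until an empty slot is found."""

    def __init__(self, n):
        self.n = n
        self.z0 = []    # in-memory index Z0
        self.disk = []  # disk[i] is I_i, or None if that slot is empty

    def add(self, posting):
        self.z0.append(posting)
        if len(self.z0) >= self.n:       # Z0 got too big: write to disk...
            carry = sorted(self.z0)
            self.z0 = []
            i = 0
            # ...or merge with I0, I1, ... until a free level is found.
            while i < len(self.disk) and self.disk[i] is not None:
                carry = sorted(carry + self.disk[i])
                self.disk[i] = None
                i += 1
            if i == len(self.disk):
                self.disk.append(None)
            self.disk[i] = carry

    def search_all(self):
        """Query processing merges Z0 and all O(log T) disk indexes."""
        result = list(self.z0)
        for index in self.disk:
            if index is not None:
                result.extend(index)
        return sorted(result)

merger = LogarithmicMerger(n=2)
for posting in [5, 3, 8, 1, 9, 2, 7, 4]:
    merger.add(posting)
# After 8 postings (1000 in binary), only level I2 holds data.
```

The occupied levels mirror the binary-numbers slide: after T = k·n postings, the pattern of full and empty slots spells out k in binary, which is why at most O(log T) indexes ever coexist.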