Supermarket shelf management – Market-basket model
Big Data Analytics CSCI 4030

Goal: Identify items that are bought together by sufficiently many customers.
Approach: Process the sales data collected with barcode scanners to find dependencies among items.
A classic rule:
- If someone buys diapers and milk, then he/she is likely to buy beer.
- Don't be surprised if you find six-packs next to diapers!

The Market-Basket Model

Input:
- A large set of items, e.g., things sold in a supermarket.
- A large set of baskets, where each basket is a small subset of items, e.g., the things one customer buys on one day.

  TID | Items
  1   | Bread, Coke, Milk
  2   | Beer, Bread
  3   | Beer, Coke, Diaper, Milk
  4   | Beer, Bread, Diaper, Milk
  5   | Coke, Diaper, Milk

Output: rules discovered, e.g.,
- {Milk} → {Coke}
- {Diaper, Milk} → {Beer}
We want to discover association rules: people who bought x, y, z tend to buy v, w (e.g., Amazon).

Applications

- Items = products; baskets = sets of products someone bought in one trip to the store.
- Real market baskets: chain stores keep data about what customers buy together.
  - Tells how typical customers navigate stores; lets them position tempting items.
  - Suggests tie-in "tricks", e.g., run a sale on diapers and raise the price of beer.
  - Need the rule to occur frequently, or there is no money in it.
- Amazon's "people who bought X also bought Y".
- Baskets = sentences; items = documents containing those sentences.
  - Items that appear together too often could represent plagiarism.
- Baskets = patients; items = drugs and side effects.
  - Has been used to detect combinations of drugs that result in particular side effects.

A More General Setting

- A general many-to-many mapping (association) between two kinds of things, but we ask about connections among "items", not "baskets".
- For example, finding communities in graphs (e.g., Twitter):
  - Baskets = nodes; items = outgoing neighbors.
  - Searching for complete bipartite subgraphs K_{s,t} of a big graph: a dense 2-layer graph with s nodes on one side, each pointing to the same t nodes on the other.
  - How? View each node i as a basket B_i of the nodes it points to. A K_{s,t} is then a set Y of t items that occurs in s baskets B_i. So set the support threshold to s and look at all frequent itemsets of size t.

Outline

- First, definitions:
  - Frequent itemsets
  - Association rules: confidence, support, interestingness
- Then, algorithms for finding frequent itemsets:
  - Finding frequent pairs
  - A-Priori algorithm
  - PCY algorithm + 2 refinements

Frequent Itemsets

- Simplest question: find sets of items that appear together "frequently" in baskets.
- Support for itemset I: the number of baskets containing all items in I. (Often expressed as a fraction of the total number of baskets.)
  - In the table above, the support of {Beer, Bread} = 2.
- Given a support threshold s, the sets of items that appear in at least s baskets are called frequent itemsets.

Example:
- Items = {milk, coke, pepsi, beer, juice}; support threshold = 3 baskets.

  B1 = {m, c, b}    B2 = {m, p, j}
  B3 = {m, b}       B4 = {c, j}
  B5 = {m, p, b}    B6 = {m, c, b, j}
  B7 = {c, b, j}    B8 = {b, c}

- Frequent itemsets: {m}, {c}, {b}, {j}, {m,b}, {b,c}, {c,j}.
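To make the definitions concrete, here is a minimal brute-force sketch (not from the slides; the single-letter item names follow the example above, everything else is an illustrative choice) that recovers the supports and frequent itemsets just listed:

    from itertools import combinations

    baskets = [
        {"m", "c", "b"}, {"m", "p", "j"},
        {"m", "b"},      {"c", "j"},
        {"m", "p", "b"}, {"m", "c", "b", "j"},
        {"c", "b", "j"}, {"b", "c"},
    ]
    s = 3  # support threshold

    def support(itemset):
        # Number of baskets containing ALL items of the itemset
        return sum(1 for basket in baskets if itemset <= basket)

    items = sorted(set().union(*baskets))
    for k in (1, 2):
        frequent = [c for c in combinations(items, k) if support(set(c)) >= s]
        print(f"frequent itemsets of size {k}:", frequent)

Brute force enumerates every candidate itemset, which is exactly the cost that the algorithms later in this lecture are designed to avoid.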
Association Rules

- Association rules are if-then rules about the contents of baskets.
- {i_1, i_2, ..., i_k} → j means: "if a basket contains all of i_1, ..., i_k, then it is likely to contain j."
- In practice there are many rules; we want to find the significant/interesting ones!
- The confidence of this association rule is the probability of j given I = {i_1, ..., i_k}:

      conf(I → j) = support(I ∪ {j}) / support(I)

Interesting Association Rules

- Not all high-confidence rules are interesting.
  - The rule X → milk may have high confidence for many itemsets X, because milk is simply purchased very often (independent of X), so the confidence will be high.
- Interest of an association rule I → j: the difference between its confidence and the fraction of baskets that contain j:

      Interest(I → j) = conf(I → j) − Pr[j]

- Interesting rules are those with high positive or negative interest values (usually above 0.5).

Example:

  B1 = {m, c, b}    B2 = {m, p, j}
  B3 = {m, b}       B4 = {c, j}
  B5 = {m, p, b}    B6 = {m, c, b, j}
  B7 = {c, b, j}    B8 = {b, c}

- Association rule: {m, b} → c
- Confidence = 2/4 = 0.5
- Interest = |0.5 − 5/8| = 1/8
  - Item c appears in 5/8 of the baskets, so the rule is not very interesting!

Finding Association Rules

- Problem: find all association rules with support ≥ s and confidence ≥ c.
  - Note: the support of an association rule is the support of the set of items on its left side.
- Hard part: finding the frequent itemsets!
  - If {i_1, i_2, ..., i_k} → j has high support and confidence, then both {i_1, i_2, ..., i_k} and {i_1, i_2, ..., i_k, j} will be "frequent".
- Step 1: Find all frequent itemsets I. (We will explain this next.)
- Step 2: Rule generation.
  - For every subset A of I, generate a rule A → I \ A. (Since I is frequent, A is also frequent.)
  - Variant 1: a single pass to compute the rule confidences: confidence(A,B → C,D) = support(A,B,C,D) / support(A,B).
  - Variant 2: observation: if A,B,C → D is below confidence, so is A,B → C,D, so we can generate "bigger" rules from smaller ones.
  - Output the rules above the confidence threshold.

Example:

  B1 = {m, c, b}    B2 = {m, p, j}
  B3 = {m, c, b, n} B4 = {c, j}
  B5 = {m, p, b}    B6 = {m, c, b, j}
  B7 = {c, b, j}    B8 = {b, c}

- Support threshold s = 3, confidence threshold c = 0.75.
- 1) Frequent itemsets: {b,m}, {b,c}, {c,m}, {c,j}, {m,c,b}.
- 2) Generate rules:
  - b → m:   conf = 4/6     b → c:   conf = 5/6     b,c → m: conf = 3/5
  - m → b:   conf = 4/5     ...                     b,m → c: conf = 3/4
  - b → c,m: conf = 3/6

Compacting the Output

To reduce the number of rules, we can post-process them and only output:
- Maximal frequent itemsets: no immediate superset is frequent. (Gives more pruning.)
or
- Closed itemsets: no immediate superset has the same count (> 0). (Stores not only frequent information, but exact counts.)

Example (support threshold s = 3):

  Itemset  Count  Maximal (s=3)  Closed  Why
  A        4      No             No      Superset AB is frequent and has the same count.
  B        5      No             Yes     Frequent, but superset BC is also frequent.
  C        3      No             No      Superset BC has the same count.
  AB       4      Yes            Yes     Its only superset, ABC, is not frequent and has a smaller count.
  AC       2      No             No      Not frequent.
  BC       3      Yes            Yes     Its only superset, ABC, is not frequent and has a smaller count.
  ABC      2      No             Yes     Not frequent, and no superset has the same count.

Finding Frequent Itemsets

- Back to finding frequent itemsets.
- The data is often kept in flat files rather than in a database system:
  - Stored on disk, basket-by-basket.
  - Items are represented by positive integers, and boundaries between baskets by −1.
- Baskets are small, but we have many baskets and many items.
  - Expand baskets into pairs, triples, etc. as you read the baskets.
  - Use k nested loops to generate all sets of size k.
- Note: We want to find frequent itemsets. To find them, we have to count them. To count them, we have to generate them.
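A minimal sketch (not from the slides; the file name and streaming style are illustrative) of reading baskets in this flat-file format and expanding each basket into its pairs:

    from itertools import combinations

    def read_baskets(path):
        """Yield baskets from a flat file of whitespace-separated positive
        integers, where -1 marks the boundary between baskets."""
        basket = []
        with open(path) as f:
            for line in f:
                for token in line.split():
                    item = int(token)
                    if item == -1:        # end of the current basket
                        if basket:
                            yield basket
                        basket = []
                    else:
                        basket.append(item)
        if basket:                        # last basket may lack a trailing -1
            yield basket

    # Usage (with a hypothetical file "baskets.txt"): expand each basket
    # into its pairs as it is read, via two nested loops / combinations:
    #
    # for basket in read_baskets("baskets.txt"):
    #     for pair in combinations(sorted(set(basket)), 2):
    #         ...  # count the pair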
Computation Model

- The true cost of mining disk-resident data is usually the number of disk I/Os.
- In practice, association-rule algorithms read the data in passes: all baskets are read in turn.
- We measure the cost by the number of passes an algorithm makes over the data.

Main-Memory Bottleneck

- For many frequent-itemset algorithms, main memory is the critical resource.
- As we read baskets, we need to count something, e.g., occurrences of pairs of items.
- The number of different things we can count is limited by main memory.
- Swapping counts in and out of memory is a disaster. (Why?)

Finding Frequent Pairs

- The hardest problem often turns out to be finding the frequent pairs of items {i_1, i_2}.
  - Why? Frequent pairs are common; frequent triples are rare.
  - Why? The probability of being frequent drops exponentially with size, while the number of sets grows more slowly with size.
- Let's first concentrate on pairs, then extend to larger sets.
- The approach: we always need to generate all the itemsets, but we would only like to count (keep track of) those itemsets that in the end turn out to be frequent.

Naïve Algorithm

- Read the file once, counting in main memory the occurrences of each pair:
  - From each basket of n items, generate its n(n−1)/2 pairs by two nested loops.
- Fails if (#items)² exceeds main memory.
  - Remember: #items can be 100K (Wal-Mart) or 10B (Web pages).
  - Suppose 10^5 items and counts are 4-byte integers.
  - The number of pairs of items is 10^5 · (10^5 − 1)/2 ≈ 5·10^9.
  - Therefore, 2·10^10 bytes (20 gigabytes) of memory are needed.

Counting Pairs in Memory

Two approaches:
- Approach 1: Count all pairs using a triangular matrix. Requires 4 bytes per pair, for all possible pairs.
- Approach 2: Keep a table of triples [i, j, c] = "the count of the pair of items {i, j} is c."
  - If integers and item ids are 4 bytes, we need approximately 12 bytes per pair, but only for pairs with count > 0.
  - Plus some additional overhead for the hash table.

Approach 1: Triangular Matrix
- n = total number of items.
- Count the pair of items {i, j} only if i < j.
- Keep pair counts in lexicographic order: {1,2}, {1,3}, ..., {1,n}, {2,3}, {2,4}, ..., {2,n}, {3,4}, ...
- Pair {i, j} is then at position (i − 1)(n − i/2) + j − i.
- Total number of pairs: n(n − 1)/2; total bytes ≈ 2n².
- Approach 2 uses 12 bytes per occurring pair (but only for pairs with count > 0), so it beats Approach 1 if fewer than 1/3 of the possible pairs actually occur.
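A small sketch of the triangular-matrix layout (not from the slides; the class and method names are illustrative, and items are assumed to be numbered 1..n as above):

    class TriangularMatrix:
        """Counts for all pairs {i, j}, 1 <= i < j <= n, in one flat array."""

        def __init__(self, n):
            self.n = n
            self.counts = [0] * (n * (n - 1) // 2)

        def _pos(self, i, j):
            # Position of pair {i, j} in lexicographic order, using the
            # slide's formula (i - 1)(n - i/2) + j - i (1-indexed), shifted
            # to a 0-indexed array. The value is always an integer.
            if i > j:
                i, j = j, i
            return int((i - 1) * (self.n - i / 2) + j - i) - 1

        def add(self, i, j):
            self.counts[self._pos(i, j)] += 1

        def count(self, i, j):
            return self.counts[self._pos(i, j)]

Note that this costs space proportional to n² regardless of how many pairs actually occur, which is why the triples approach wins when pairs are sparse. (A Python list stores more than 4 bytes per count; an array('I', ...) would be closer to the slide's accounting.)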
- Problem: if we have too many items, the pair counts do not fit into memory under either approach. Can we do better?

A-Priori Algorithm

- A two-pass approach called A-Priori limits the need for main memory.
- Key idea: monotonicity. If a set of items I appears at least s times, so does every subset J of I.
- Contrapositive for pairs: if item i does not appear in s baskets, then no pair including i can appear in s baskets.
- So, how does A-Priori find frequent pairs?
  - Pass 1: Read baskets and count in main memory the occurrences of each individual item.
    - Requires only memory proportional to the number of items.
    - Items that appear ≥ s times are the frequent items.
  - Pass 2: Read baskets again and count in main memory only those pairs where both elements are frequent (from Pass 1).
    - Requires memory proportional to the square of the number of frequent items only (for counts), plus a list of the frequent items (so you know what must be counted).

[Figure: main-memory layout. Pass 1 holds the item counts; Pass 2 holds the frequent items plus the counts of pairs of frequent items (candidate pairs).]

- You can use the triangular matrix method with n = number of frequent items.
  - May save space compared with storing triples.
  - Trick: re-number the frequent items 1, 2, ... and keep a table relating the new numbers to the original item numbers.

Frequent Triples, Etc.

- For each k, we construct two sets of k-tuples (sets of size k):
  - C_k = candidate k-tuples = those that might be frequent sets (support ≥ s), based on information from the pass for k − 1.
  - L_k = the set of truly frequent k-tuples.

[Figure: the A-Priori pipeline. C_1 = all items → count → filter → L_1 = frequent items → construct → C_2 = all pairs of items from L_1 → count → filter → L_2 → construct → C_3 → ...]

- Note: here we generate new candidates C_k from L_{k−1} and L_1, but one can be more careful with candidate generation. For example, in C_3 below we would know that {b, m, j} cannot be frequent, since {m, j} is not frequent.
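Putting the two passes together for pairs, a minimal sketch (not the slides' code; the function name and the use of Counter are illustrative, and `baskets` is assumed to be readable twice, e.g., a list):

    from collections import Counter
    from itertools import combinations

    def apriori_pairs(baskets, s):
        """Two-pass A-Priori for frequent pairs."""
        # Pass 1: count individual items
        item_counts = Counter()
        for basket in baskets:
            item_counts.update(set(basket))
        frequent_items = {i for i, c in item_counts.items() if c >= s}

        # Pass 2: count only pairs BOTH of whose elements are frequent
        pair_counts = Counter()
        for basket in baskets:
            kept = sorted(i for i in set(basket) if i in frequent_items)
            pair_counts.update(combinations(kept, 2))
        return {p: c for p, c in pair_counts.items() if c >= s}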
Hypothetical steps of the A-Priori algorithm:
- C_1 = { {b} {c} {j} {m} {n} {p} }
- Count the support of the itemsets in C_1.
- Prune the non-frequent: L_1 = { b, c, j, m }.
- Generate C_2 = { {b,c} {b,j} {b,m} {c,j} {c,m} {j,m} }.
- Count the support of the itemsets in C_2.
- Prune the non-frequent: L_2 = { {b,m} {b,c} {c,m} {c,j} }.
- Generate C_3 = { {b,c,m} {b,c,j} {b,m,j} {c,m,j} }.
- Count the support of the itemsets in C_3.
- Prune the non-frequent: L_3 = { {b,c,m} }.

A-Priori for All Frequent Itemsets

- One pass for each k (itemset size).
- Needs room in main memory to count each candidate k-tuple.
- For typical market-basket data and a reasonable support (e.g., 1%), k = 2 requires the most memory.
- Many possible extensions:
  - Association rules with intervals: for example, "men over 65 have 2 cars".
  - Association rules when items are in a taxonomy:
    - Bread, Butter → FruitJam
    - BakedGoods, MilkProduct → PreservedGoods
  - Lower the support s as the itemsets get bigger.

PCY (Park-Chen-Yu) Algorithm

- Observation: in Pass 1 of A-Priori, most memory is idle; we store only the individual item counts.
- Can we use the idle memory to reduce the memory required in Pass 2?
- Pass 1 of PCY: in addition to item counts, maintain a hash table with as many buckets as fit in memory.
  - Keep a count for each bucket into which pairs of items are hashed.
  - For each bucket, keep just the count, not the actual pairs that hash to the bucket!

    FOR (each basket):
        FOR (each item in the basket):
            add 1 to item's count;
        FOR (each pair of items):            # new in PCY
            hash the pair to a bucket;
            add 1 to the count for that bucket;

- A few things to note:
  - Pairs of items need to be generated from the input file; they are not present in the file.
  - We are not just interested in the presence of a pair; we need to see whether it is present at least s (support) times.

Observations about Buckets

- If a bucket contains a frequent pair, then the bucket is surely frequent.
  - However, even without any frequent pair, a bucket can still be frequent. So we cannot use the hash to eliminate any member (pair) of a "frequent" bucket.
- But for a bucket with total count less than s, none of its pairs can be frequent.
  - Pairs that hash to this bucket can be eliminated as candidates, even if the pair consists of 2 frequent items.
- Pass 2: only count the pairs that hash to frequent buckets.

PCY Algorithm – Between Passes

- Replace the buckets by a bit-vector:
  - 1 means the bucket count exceeded the support s (call it a frequent bucket); 0 means it did not.
- 4-byte integer counts are replaced by bits, so the bit-vector requires 1/32 of the memory.
- Also, decide which items are frequent and list them for the second pass.

PCY Algorithm – Pass 2

- Count all pairs {i, j} that meet the conditions for being a candidate pair:
  1. Both i and j are frequent items.
  2. The pair {i, j} hashes to a bucket whose bit in the bit-vector is 1 (i.e., a frequent bucket).
- Both conditions are necessary for the pair to have a chance of being frequent.

[Figure: main-memory layout. Pass 1 holds the item counts plus the hash table for pairs; Pass 2 holds the frequent items, the bitmap derived from the hash table, and the counts of candidate pairs.]

Main-Memory Details

- Buckets require a few bytes each. (Note: we do not have to count past s.)
- #buckets is O(main-memory size).
- On the second pass, a table of (item, item, count) triples is essential: the candidate pairs are scattered among all possible pairs, so the triangular-matrix layout no longer applies.
- Thus, the hash table must eliminate approximately 2/3 of the candidate pairs for PCY to beat A-Priori.
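Putting the PCY passes together, a minimal sketch (not the slides' code; Python's hash(pair) % n_buckets stands in for the bucket hash function, and a list of booleans stands in for the bit-vector, which a real implementation would pack into bits):

    from collections import Counter
    from itertools import combinations

    def pcy_pairs(baskets, s, n_buckets):
        """PCY for frequent pairs: PCY Pass 1, bitmap, PCY Pass 2."""
        # Pass 1: item counts, plus one count per bucket for hashed pairs
        item_counts = Counter()
        bucket_counts = [0] * n_buckets
        for basket in baskets:
            item_counts.update(set(basket))
            for pair in combinations(sorted(set(basket)), 2):
                bucket_counts[hash(pair) % n_buckets] += 1

        # Between passes: compress bucket counts into a frequent-bucket bitmap
        bitmap = [c >= s for c in bucket_counts]
        frequent_items = {i for i, c in item_counts.items() if c >= s}

        # Pass 2: count a pair only if both items are frequent AND the pair
        # hashes to a frequent bucket
        pair_counts = Counter()
        for basket in baskets:
            kept = sorted(i for i in set(basket) if i in frequent_items)
            for pair in combinations(kept, 2):
                if bitmap[hash(pair) % n_buckets]:
                    pair_counts[pair] += 1
        return {p: c for p, c in pair_counts.items() if c >= s}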
Multistage Algorithm

- Goal: limit the number of candidates to be counted. (Remember: memory is the bottleneck.)
  - We still need to generate all the itemsets, but we only want to count/keep track of the ones that are frequent.
- Key idea: after Pass 1 of PCY, rehash into a second, independent hash table only those pairs that qualify for Pass 2 of PCY:
  - i and j are frequent, and
  - {i, j} hashes to a frequent bucket from Pass 1.
- On this middle pass, fewer pairs contribute to the buckets, so there are fewer false positives.
- Requires 3 passes over the data.

[Figure: main-memory layout over three passes. Pass 1: item counts plus the first hash table (count items; hash all pairs {i,j}). Pass 2: frequent items, Bitmap 1, and a second hash table; hash pairs {i,j} into it only if i and j are frequent and {i,j} hashed to a frequent bucket in Bitmap 1. Pass 3: frequent items, Bitmap 1, Bitmap 2, and counts of candidate pairs; count {i,j} only if i and j are frequent, {i,j} hashes to a frequent bucket in Bitmap 1, and {i,j} hashes to a frequent bucket in Bitmap 2.]

Multistage – Pass 3

- Count only those pairs {i, j} that satisfy these candidate pair conditions:
  1. Both i and j are frequent items.
  2. Using the first hash function, the pair hashes to a bucket whose bit in the first bit-vector is 1.
  3. Using the second hash function, the pair hashes to a bucket whose bit in the second bit-vector is 1.
- Important points:
  1. The two hash functions have to be independent.
  2. We need to check both hashes on the third pass.
     - If not, we would end up counting pairs of frequent items that hashed first to an infrequent bucket but happened to hash second to a frequent bucket.

Multihash Algorithm

- Key idea: use several independent hash tables on the first pass.
- Risk: halving the number of buckets doubles the average count.
  - We have to be sure most buckets will still not reach count s.
- If so, we can get a benefit like multistage, but in only 2 passes!
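A sketch of the multihash idea (an assumption-laden illustration, not the slides' code: salting Python's hash() stands in for independent hash functions, and each table here gets n_buckets buckets, whereas a real implementation would split the available memory, giving each of the k tables 1/k of the bucket space):

    from collections import Counter
    from itertools import combinations

    def multihash_pairs(baskets, s, n_buckets, n_tables=2):
        """Multihash PCY: several independent hash tables in Pass 1."""
        def bucket(pair, t):
            # t-th "independent" hash function, via salting
            return hash((t, pair)) % n_buckets

        # Pass 1: item counts plus all hash tables, maintained at once
        item_counts = Counter()
        tables = [[0] * n_buckets for _ in range(n_tables)]
        for basket in baskets:
            item_counts.update(set(basket))
            for pair in combinations(sorted(set(basket)), 2):
                for t in range(n_tables):
                    tables[t][bucket(pair, t)] += 1

        bitmaps = [[c >= s for c in table] for table in tables]
        frequent_items = {i for i, c in item_counts.items() if c >= s}

        # Pass 2: candidate only if frequent bucket in EVERY hash table
        pair_counts = Counter()
        for basket in baskets:
            kept = sorted(i for i in set(basket) if i in frequent_items)
            for pair in combinations(kept, 2):
                if all(bitmaps[t][bucket(pair, t)] for t in range(n_tables)):
                    pair_counts[pair] += 1
        return {p: c for p, c in pair_counts.items() if c >= s}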
[Figure: main-memory layout for Multihash. Pass 1: item counts plus two hash tables (first and second). Pass 2: frequent items, Bitmap 1, Bitmap 2, and counts of candidate pairs.]

- Either multistage or multihash can use more than two hash functions.
- In multistage, there is a point of diminishing returns, since the bit-vectors eventually consume all of main memory.
- For multihash, the bit-vectors occupy exactly what one PCY bitmap does, but too many hash functions make all counts exceed s.

Frequent Itemsets in ≤ 2 Passes

- A-Priori, PCY, etc., take k passes to find frequent itemsets of size k.
- Can we use fewer passes?
- Yes: use 2 or fewer passes for all sizes, at the risk of missing some frequent itemsets:
  - Random sampling
  - SON (Savasere, Omiecinski, and Navathe)
  - Toivonen (see the textbook)

Random Sampling

- Take a random sample of the market baskets and keep a copy of the sample baskets in main memory.
- Run A-Priori or one of its improvements in main memory, using the remaining space for the counts.
  - So we don't pay for disk I/O each time we increase the size of the itemsets.
- Reduce the support threshold proportionally to match the sample size.
- Optionally, verify that the candidate itemsets are truly frequent in the entire data set by a second pass (to avoid false positives).
  - But you don't catch sets that are frequent in the whole data but not in the sample.
  - A smaller threshold, e.g., s/125 instead of s/100 for a 1% sample, helps catch more truly frequent itemsets, but requires more space.

SON Algorithm

- Repeatedly read small subsets of the baskets into main memory and run an in-memory algorithm to find all frequent itemsets.
  - Note: we are not sampling, but processing the entire file in memory-sized chunks.
- An itemset becomes a candidate if it is found to be frequent in any one or more subsets of the baskets.
- On a second pass, count all the candidate itemsets and determine which are frequent in the entire set.
- Key "monotonicity" idea: an itemset cannot be frequent in the entire set of baskets unless it is frequent in at least one subset.
- SON lends itself to distributed data mining:
  - Baskets are distributed among many nodes.
  - Compute frequent itemsets at each node.
  - Distribute the candidates to all nodes.
  - Accumulate the counts of all candidates.
- A sketch of the two passes follows below.
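A minimal SON sketch (not the slides' code; it assumes `chunks` is a re-iterable sequence of lists of baskets and `in_memory_miner(baskets, threshold)` is any in-memory frequent-itemset algorithm, e.g., the A-Priori sketch above):

    def son_frequent_itemsets(chunks, s, total_baskets, in_memory_miner):
        """SON: mine each chunk in memory, then verify candidates globally."""
        # Pass 1: mine each memory-sized chunk with a scaled-down threshold
        candidates = set()
        for chunk in chunks:
            scaled = max(1, s * len(chunk) // total_baskets)
            candidates |= {frozenset(i) for i in in_memory_miner(chunk, scaled)}

        # Pass 2: count every candidate over the entire data set
        counts = {c: 0 for c in candidates}
        for chunk in chunks:
            for basket in chunk:
                basket = set(basket)
                for c in candidates:
                    if c <= basket:
                        counts[c] += 1
        return {c for c, n in counts.items() if n >= s}

By monotonicity, no false negatives are possible: a globally frequent itemset must be frequent (at the scaled threshold) in at least one chunk, so it always enters the candidate set.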
Exercises

1) Consider the following set of baskets. Assume we set our threshold at s = 3. Compute the frequent pairs of items.

  1. {Cat, and, dog, bites}
  2. {Yahoo, news, claims, a, cat, mated, with, a, dog, and, produced, viable, offspring}
  3. {Cat, killer, likely, is, a, big, dog}
  4. {Professional, free, advice, on, dog, training, puppy, training}
  5. {Cat, and, kitten, training, and, behavior}
  6. {Dog, &, Cat, provides, dog, training, in, Eugene, Oregon}
  7. {Dog, and, cat, is, a, slang, term, used, by, police, officers, for, a, male-female, relationship}
  8. {Shop, for, your, show, dog, grooming, and, pet, supplies}

2) For the same baskets: are there any frequent triples or quadruples?

3) For the same baskets:
  - What is the confidence of the rule {cat, dog} → and?
  - What is the confidence of the rule {cat} → kitten?

4) For the same baskets:
  - What is the interest of the rule {dog} → cat?
  - What is the interest of the rule {cat} → kitten?
  - Are these rules "interesting"?

5) Compute the frequent itemsets for the baskets below with the A-Priori algorithm. Assume threshold s = 3.
  a) {1, 2, 4, 5, 8, 9}
  b) {1, 4, 7, 8, 9}
  c) {1, 2, 5, 9}
  d) {1, 2, 5, 7, 8}

6) Describe how the bitmap is used in the PCY algorithm. Why is the hash table in main memory from Pass 1 transformed into a bitmap in the PCY algorithm?

7) Describe the key idea behind the multistage algorithm.

8) What can go wrong if, instead of 3 passes, one uses 100 passes in the multistage algorithm?

Summary

- The market-basket model of data assumes there are two kinds of entities: items and baskets. There is a many-many relationship between items and baskets.
  - Typically, baskets are related to small sets of items, while items may be related to many baskets.
- The support for a set of items is the number of baskets containing all those items. Itemsets with support at least some threshold are called frequent itemsets.
- Association rules are implications that if a basket contains a certain set of items I, then it is likely to contain another particular item j as well.
  - The probability that j is also in a basket containing I is called the confidence of the rule.
  - The interest of the rule is the amount by which the confidence deviates from the fraction of all baskets that contain j.
- Monotonicity of frequent itemsets: if a set of items is frequent, then so are all its subsets. We exploit this property to eliminate the need to count certain itemsets, by using its contrapositive.
- The A-Priori algorithm allows us to find frequent itemsets larger than pairs, if we make one pass over the baskets for each itemset size, up to some limit.
  - To find the frequent itemsets of size k, monotonicity lets us restrict our attention to only those itemsets all of whose subsets of size k − 1 have already been found frequent.
- The Multistage algorithm: we can insert additional passes between the first and second pass of the PCY algorithm to hash pairs to other, independent hash tables. At each intermediate pass, we only have to hash pairs of frequent items that have hashed to frequent buckets on all previous passes.
- The Multihash algorithm: we can modify the first pass of the PCY algorithm to divide the available main memory into several hash tables. On the second pass, we only have to count a pair of frequent items if they hashed to frequent buckets in all hash tables.
- Alternatives: randomized algorithms (sampling) and the SON algorithm (segments).

To Do

- Review the slides.
- Read Chapter 6 from the course book. You can find an electronic version of the book on Blackboard.