Association Rules in Data Mining with Examples

Data Mining Association Rules: Advanced Concepts and Algorithms
Dr. Jake Finlay, Germany, Teacher. Published: 22-07-2017
Slides © Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004.

Continuous and Categorical Attributes

How can the association analysis formulation be applied to non-asymmetric binary variables? Consider web session data:

  Session Id  Country    Session Length (sec)  Web Pages Viewed  Gender  Browser Type  Buy
  1           USA        982                   8                 Male    IE            No
  2           China      811                   10                Female  Netscape      No
  3           USA        2125                  45                Female  Mozilla       Yes
  4           Germany    596                   4                 Male    IE            Yes
  5           Australia  123                   9                 Male    Mozilla       No
  ...

Example of an association rule:

  {Number of Pages ∈ [5,10), Browser = Mozilla} → {Buy = No}

Handling Categorical Attributes

- Transform each categorical attribute into a set of asymmetric binary variables.
- Introduce a new "item" for each distinct attribute-value pair. Example: replace the Browser Type attribute with the items
  - Browser Type = Internet Explorer
  - Browser Type = Mozilla
  - Browser Type = Netscape

Potential issues:
- An attribute may have many possible values. Example: the Country attribute has more than 200 possible values, many of which may have very low support. Potential solution: aggregate the low-support attribute values.
- The distribution of attribute values may be highly skewed. Example: if 95% of the visitors have Buy = No, most items will be associated with the (Buy = No) item. Potential solution: drop such highly frequent items.

Handling Continuous Attributes

Different kinds of rules:
- Age ∈ [21,35) ∧ Salary ∈ [70k,120k) → Buy
- Salary ∈ [70k,120k) ∧ Buy → Age: μ = 28, σ = 4

Different methods:
- Discretization-based
- Statistics-based
- Non-discretization based (Min-Apriori)

Discretization may be unsupervised (equal-width binning, equal-depth binning, clustering) or supervised. In the supervised example below, attribute values with similar class distributions are grouped together (bin1 = {v1, v2}, bin2 = {v3, v4, v5}, bin3 = {v6, ..., v9}):

  Class      v1   v2   v3  v4  v5  v6   v7   v8   v9
  Anomalous  0    0    20  10  20  0    0    0    0
  Normal     150  100  0   0   0   100  100  150  100

Discretization Issues

The size of the discretized intervals affects support and confidence:

  {Refund = No, (Income = $51,250)} → {Cheat = No}
  {Refund = No, (60K ≤ Income ≤ 80K)} → {Cheat = No}
  {Refund = No, (0K ≤ Income ≤ 1B)} → {Cheat = No}

If intervals are too small, a rule may not have enough support; if intervals are too large, a rule may not have enough confidence. A potential solution is to use all possible intervals, but this raises two further issues:
- Execution time: if an attribute contains n values, there are on average O(n²) possible ranges.
- Too many rules, many of them redundant:

  {Refund = No, (Income = $51,250)} → {Cheat = No}
  {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}
  {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}

Approach by Srikant & Agrawal

- Preprocess the data: discretize each attribute using equi-depth partitioning.
  - Use the partial completeness measure to determine the number of partitions.
  - Merge adjacent intervals as long as their support is less than max-support.
- Apply existing association rule mining algorithms.
- Determine the interesting rules in the output.

Code sketches of the binarization and equi-depth partitioning steps follow below.
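To make the attribute-value binarization from "Handling Categorical Attributes" concrete, here is a minimal Python sketch. The slides give no code, so the `binarize` helper and the record layout are our own illustration:

```python
def binarize(records, attributes):
    """Turn each (attribute, value) pair into a new asymmetric binary item.

    records: list of dicts, e.g. {"Browser": "Mozilla", "Buy": "No"}.
    Returns one set of items per record, ready for itemset mining.
    """
    return [{f"{attr}={rec[attr]}" for attr in attributes} for rec in records]

sessions = [
    {"Browser": "IE",      "Gender": "Male",   "Buy": "No"},
    {"Browser": "Mozilla", "Gender": "Female", "Buy": "Yes"},
]
print(binarize(sessions, ["Browser", "Buy"]))
# e.g. [{'Browser=IE', 'Buy=No'}, {'Browser=Mozilla', 'Buy=Yes'}]
```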
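And a sketch of the equi-depth partitioning and interval-merging preprocessing. This is a simplified reading of the slide's description, not Srikant & Agrawal's actual implementation; the `max_support` merge rule in particular is our interpretation:

```python
def equi_depth_bins(values, num_bins):
    """Split the sorted values into num_bins partitions of (roughly) equal size."""
    values = sorted(values)
    n = len(values)
    return [values[i * n // num_bins:(i + 1) * n // num_bins]
            for i in range(num_bins)]

def merge_adjacent(bins, total, max_support):
    """Merge adjacent intervals while the merged support stays below max_support."""
    merged = [list(bins[0])]
    for b in bins[1:]:
        if (len(merged[-1]) + len(b)) / total < max_support:
            merged[-1].extend(b)      # grow the previous interval
        else:
            merged.append(list(b))    # start a new interval
    return merged

salaries = [30, 35, 40, 52, 55, 58, 70, 75, 90, 120]
bins = equi_depth_bins(salaries, 5)                # 5 partitions, 2 values each
merged = merge_adjacent(bins, len(salaries), 0.5)
print([(b[0], b[-1]) for b in merged])             # [(30, 52), (55, 75), (90, 120)]
```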
Partial Completeness Measure

Discretization loses information: the original attribute values are only approximated by the chosen intervals. The partial completeness measure quantifies how much information is lost. Let C be the set of frequent itemsets obtained by considering all ranges of attribute values, and let P be the set of frequent itemsets obtained by considering only ranges over the partitions. P is K-complete with respect to C if P ⊆ C and, for every X ∈ C, there exists an X′ ∈ P such that:

1. X′ is a generalization of X and support(X′) ≤ K × support(X), where K ≥ 1;
2. for every Y ⊆ X, there exists a Y′ ⊆ X′ such that support(Y′) ≤ K × support(Y).

Given a partial completeness level K, the required number of intervals N can be determined.

Interestingness Measure

Consider again rules that describe the same pattern at different granularities:

  {Refund = No, (Income = $51,250)} → {Cheat = No}
  {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}
  {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}

A specialized rule is worth reporting only if it tells us more than its generalization. Given an itemset Z = {z_1, z_2, ..., z_k} and its generalization Z′ = {z_1′, z_2′, ..., z_k′}, let P(Z) be the support of Z and E_{Z′}(Z) the expected support of Z based on Z′:

$$E_{Z'}(Z) = \frac{P(z_1)}{P(z_1')} \times \frac{P(z_2)}{P(z_2')} \times \cdots \times \frac{P(z_k)}{P(z_k')} \times P(Z')$$

Z is R-interesting with respect to Z′ if P(Z) ≥ R × E_{Z′}(Z).

Likewise, for a rule S: X → Y and its generalization S′: X′ → Y′, let P(Y|X) be the confidence of X → Y, P(Y′|X′) the confidence of X′ → Y′, and E_{S′}(Y|X) the expected confidence of S based on S′:

$$E_{S'}(Y \mid X) = P(Y' \mid X') \times \frac{P(y_1)}{P(y_1')} \times \cdots \times \frac{P(y_k)}{P(y_k')}$$

Rule S is R-interesting with respect to its ancestor rule S′ if
- Support: P(S) ≥ R × E_{S′}(S), or
- Confidence: P(Y|X) ≥ R × E_{S′}(Y|X).

Statistics-based Methods

Example: {Browser = Mozilla, Buy = Yes} → Age: μ = 23

Here the rule consequent is a continuous variable, characterized by its statistics (mean, median, standard deviation, etc.). The approach:

- Withhold the target variable from the rest of the data.
- Apply existing frequent itemset generation to the remaining data.
- For each frequent itemset, compute the descriptive statistics of the corresponding target variable; the frequent itemset becomes a rule by introducing the target variable as the rule consequent.
- Apply a statistical test to determine the interestingness of the rule.

How do we determine whether such a rule is interesting? Compare the statistics for the segment of the population covered by the rule against the segment not covered by it:

  A → B: μ    versus    ¬A → B: μ′

and apply statistical hypothesis testing:
- Null hypothesis H0: μ′ = μ + Δ
- Alternative hypothesis H1: μ′ > μ + Δ

$$Z = \frac{\mu' - \mu - \Delta}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$$

Z has zero mean and unit variance under the null hypothesis.

Example: r: {Browser = Mozilla, Buy = Yes} → Age: μ = 23
- The rule is considered interesting only if the difference between μ and μ′ is greater than 5 years (i.e., Δ = 5).
- For r, suppose n1 = 50 and s1 = 3.5.
- For r′ (the complement), n2 = 250, s2 = 6.5, and μ′ = 30.

$$Z = \frac{30 - 23 - 5}{\sqrt{\dfrac{3.5^2}{50} + \dfrac{6.5^2}{250}}} = 3.11$$

For a one-sided test at the 95% confidence level, the critical Z-value for rejecting the null hypothesis is 1.64. Since Z = 3.11 > 1.64, r is an interesting rule.
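The hypothesis test above is easy to verify in code; this short sketch (variable and function names are ours) reproduces the worked example:

```python
import math

def z_statistic(mu, mu_prime, delta, s1, n1, s2, n2):
    """Z = (mu' - mu - delta) / sqrt(s1^2/n1 + s2^2/n2)."""
    return (mu_prime - mu - delta) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Numbers from the example: the rule's segment has mean age 23 (n1 = 50),
# the complement has mean age 30 (n2 = 250), and delta = 5 years.
z = z_statistic(mu=23, mu_prime=30, delta=5, s1=3.5, n1=50, s2=6.5, n2=250)
print(round(z, 2))   # 3.11
print(z > 1.64)      # True -> reject H0 at the 95% level (one-sided test)
```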
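Similarly, the R-interestingness check from the Interestingness Measure section reduces to a few lines. The supports below are hypothetical, chosen only to illustrate the computation:

```python
def expected_support(support_gen, item_probs, gen_item_probs):
    """E_{Z'}(Z): the generalization's support P(Z') scaled by the
    per-item ratios P(z_i) / P(z_i')."""
    e = support_gen
    for p, p_gen in zip(item_probs, gen_item_probs):
        e *= p / p_gen
    return e

def is_r_interesting(support_z, support_gen, item_probs, gen_item_probs, r):
    """Z is R-interesting w.r.t. Z' if P(Z) >= R * E_{Z'}(Z)."""
    return support_z >= r * expected_support(support_gen, item_probs, gen_item_probs)

# E = 0.20 * (0.3/0.6) * (0.4/0.8) = 0.05, so the threshold is 1.5 * 0.05 = 0.075.
print(is_r_interesting(0.08, 0.20, [0.3, 0.4], [0.6, 0.8], r=1.5))  # True
```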
Min-Apriori (Han et al.)

Min-Apriori handles data that contains only continuous attributes of the same "type", e.g., the frequency of words in documents. Consider the document-term matrix:

  TID  W1  W2  W3  W4  W5
  D1   2   2   0   0   1
  D2   0   0   1   2   2
  D3   2   3   0   0   0
  D4   0   0   1   0   1
  D5   1   1   1   0   2

Example pattern: W1 and W2 tend to appear together in the same documents.

Potential solutions and their drawbacks:
- Convert the matrix into a 0/1 matrix and apply existing algorithms; this loses the word frequency information.
- Discretization does not apply, because users want associations among words, not among ranges of word counts.

How should the support of a word be defined? If we simply sum up its frequencies, the support count can exceed the total number of documents. Instead, normalize the word vectors, e.g., using the L1 norm, so that each individual word has a support of exactly 1.0:

  TID  W1    W2    W3    W4    W5
  D1   0.40  0.33  0.00  0.00  0.17
  D2   0.00  0.00  0.33  1.00  0.33
  D3   0.40  0.50  0.00  0.00  0.00
  D4   0.00  0.00  0.33  0.00  0.17
  D5   0.20  0.17  0.33  0.00  0.33

New definition of support:

$$\mathrm{sup}(C) = \sum_{i \in T} \min_{j \in C} D(i, j)$$

Example:

  Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17

Anti-monotone Property of Support

On the normalized matrix above:

  Sup(W1) = 0.4 + 0 + 0.4 + 0 + 0.2 = 1.0
  Sup(W1, W2) = 0.33 + 0 + 0.4 + 0 + 0.17 = 0.9
  Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17
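A minimal pure-Python sketch of the Min-Apriori support computation, reproducing the numbers above (the helper names are ours; any numerical library would do equally well):

```python
def l1_normalize(matrix):
    """Normalize each word (column) so that its values sum to 1.0."""
    col_sums = [sum(row[j] for row in matrix) for j in range(len(matrix[0]))]
    return [[v / col_sums[j] for j, v in enumerate(row)] for row in matrix]

def min_support(norm, cols):
    """sup(C) = sum over documents of the min over the words in C."""
    return sum(min(row[j] for j in cols) for row in norm)

# Document-term matrix from the example (rows D1..D5, columns W1..W5).
dtm = [[2, 2, 0, 0, 1],
       [0, 0, 1, 2, 2],
       [2, 3, 0, 0, 0],
       [0, 0, 1, 0, 1],
       [1, 1, 1, 0, 2]]
norm = l1_normalize(dtm)
print(round(min_support(norm, [0]), 2))        # 1.0  -> Sup(W1)
print(round(min_support(norm, [0, 1]), 2))     # 0.9  -> Sup(W1, W2)
print(round(min_support(norm, [0, 1, 2]), 2))  # 0.17 -> Sup(W1, W2, W3)
```

The decreasing values illustrate the anti-monotone property: adding a word to an itemset can never increase the min-based support, so Apriori-style pruning still applies.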