Association Rule Mining
• Given a set of transactions, find rules that will predict
the occurrence of an item based on the occurrences of
other items in the transaction
Market-Basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Example of Association Rules:
{Diaper} → {Beer},
{Milk, Bread} → {Eggs, Coke},
{Beer, Bread} → {Milk}
Definition: Frequent Itemset
• Itemset
  – A collection of one or more items
    • Example: {Milk, Bread, Diaper}
  – k-itemset
    • An itemset that contains k items
• Support count (σ)
  – Frequency of occurrence of an itemset
  – E.g. σ({Milk, Bread, Diaper}) = 2
• Support (s)
  – Fraction of transactions that contain an itemset
  – E.g. s({Milk, Bread, Diaper}) = 2/5
• Frequent Itemset
  – An itemset whose support is greater than or equal to a minsup threshold
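Both quantities can be computed directly over the transactions above; a minimal sketch (the function names are ours, not the slides'):

    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diaper", "Beer", "Eggs"},
        {"Milk", "Diaper", "Beer", "Coke"},
        {"Bread", "Milk", "Diaper", "Beer"},
        {"Bread", "Milk", "Diaper", "Coke"},
    ]

    def support_count(itemset):           # σ(X): transactions containing all of X
        return sum(1 for t in transactions if itemset <= t)

    def support(itemset):                 # s(X): fraction of transactions containing X
        return support_count(itemset) / len(transactions)

    print(support_count({"Milk", "Bread", "Diaper"}))   # 2
    print(support({"Milk", "Bread", "Diaper"}))         # 0.4 = 2/5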
Definition: Association Rule
• Association Rule
  – An implication expression of the form X → Y, where X and Y are itemsets
  – Example: {Milk, Diaper} → {Beer}
• Rule Evaluation Metrics
  – Support (s)
    • Fraction of transactions that contain both X and Y
  – Confidence (c)
    • Measures how often items in Y appear in transactions that contain X

Example: {Milk, Diaper} ⇒ Beer
  s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
  c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 = 0.67
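The same counts give both rule metrics; a standalone sketch (helpers repeated from the previous sketch so it runs on its own):

    transactions = [{"Bread", "Milk"}, {"Bread", "Diaper", "Beer", "Eggs"},
                    {"Milk", "Diaper", "Beer", "Coke"}, {"Bread", "Milk", "Diaper", "Beer"},
                    {"Bread", "Milk", "Diaper", "Coke"}]

    def support_count(itemset):
        return sum(1 for t in transactions if itemset <= t)

    X, Y = {"Milk", "Diaper"}, {"Beer"}
    s = support_count(X | Y) / len(transactions)   # 2/5 = 0.4
    c = support_count(X | Y) / support_count(X)    # 2/3 ≈ 0.67
    print(s, c)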
Association Rule Mining Task
• Given a set of transactions T, the goal of association rule
mining is to find all rules having
– support ≥ minsup threshold
– confidence ≥ minconf threshold
• Brute-force approach:
– List all possible association rules
– Compute the support and confidence for each rule
– Prune rules that fail the minsup and minconf thresholds
⇒ Computationally prohibitive!
Mining Association Rules
Example of Rules:
{Milk,Diaper} → {Beer} (s=0.4, c=0.67)
{Milk,Beer} → {Diaper} (s=0.4, c=1.0)
{Diaper,Beer} → {Milk} (s=0.4, c=0.67)
{Beer} → {Milk,Diaper} (s=0.4, c=0.67)
{Diaper} → {Milk,Beer} (s=0.4, c=0.5)
{Milk} → {Diaper,Beer} (s=0.4, c=0.5)
Observations:
• All the above rules are binary partitions of the same itemset:
{Milk, Diaper, Beer}
• Rules originating from the same itemset have identical support but
can have different confidence
• Thus, we may decouple the support and confidence requirements
Mining Association Rules
• Two-step approach:
1. Frequent Itemset Generation
– Generate all itemsets whose support ≥ minsup
2. Rule Generation
– Generate high confidence rules from each frequent
itemset, where each rule is a binary partitioning of a
frequent itemset
• Frequent itemset generation is still
computationally expensive
Frequent Itemset Generation
[Figure: itemset lattice over the items {A, B, C, D, E}; every one of the 2^5 = 32 subsets, from the null set up to ABCDE, is a candidate frequent itemset]

In the brute-force approach, each candidate in the lattice is matched against every transaction in the database:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
Computational Complexity
• Given d unique items:
– Total number of itemsets = 2^d
– Total number of possible association rules:

  R = Σ_{k=1}^{d−1} [ C(d, k) × Σ_{j=1}^{d−k} C(d−k, j) ] = 3^d − 2^(d+1) + 1
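A quick numeric check of this identity (a sketch): for d = 6 the formula gives R = 602 rules.

    from math import comb

    d = 6
    R = sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
            for k in range(1, d))
    assert R == 3**d - 2**(d + 1) + 1 == 602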
Frequent Itemset Generation Strategies
• Reduce the number of candidates (M)
– Complete search: M = 2^d
– Use pruning techniques to reduce M
Reducing Number of Candidates
• Apriori principle:
– If an itemset is frequent, then all of its subsets
must also be frequent
• Apriori principle holds due to the following
property of the support measure:
∀X , Y : ( X ⊆ Y ) ⇒ s( X ) ≥ s(Y )
• Support of an itemset never exceeds the
support of its subsets
– This is known as the anti-monotone property of
support
Illustrating Apriori Principle
[Figure: itemset lattice over {A, B, C, D, E}; once an itemset (here AB) is found to be infrequent, all of its supersets are pruned from the search space]
Illustrating Apriori Principle
Minimum Support = 3

Items (1-itemsets):
Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Pairs (2-itemsets), with no need to generate candidates involving Coke or Eggs:
Itemset         Count
{Bread,Milk}    3
{Bread,Beer}    2
{Bread,Diaper}  3
{Milk,Beer}     2
{Milk,Diaper}   3
{Beer,Diaper}   3

Triplets (3-itemsets): the only candidate whose 2-subsets are all frequent is {Bread, Milk, Diaper}, with support count 2, below minsup.
Apriori Algorithm
• Method:
Let k=1
– Generate frequent itemsets of length 1
– Repeat until no new frequent itemsets are
identified
• Generate length (k+1) candidate itemsets from
length k frequent itemsets
• Prune candidate itemsets containing subsets of
length k that are infrequent
• Count the support of each candidate by scanning
the DB
• Eliminate candidates that are infrequent, leaving
only those that are frequent
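A minimal sketch of this loop, assuming transactions are given as sets and minsup is a raw count (the data structures are illustrative, not the slides' code):

    from itertools import combinations

    def apriori(transactions, minsup_count):
        transactions = [set(t) for t in transactions]
        # frequent 1-itemsets
        counts = {}
        for t in transactions:
            for item in t:
                key = frozenset([item])
                counts[key] = counts.get(key, 0) + 1
        frequent = {s: c for s, c in counts.items() if c >= minsup_count}
        result = dict(frequent)
        k = 1
        while frequent:
            # generate length (k+1) candidates by joining frequent k-itemsets,
            # pruning any candidate with an infrequent k-subset (Apriori principle)
            prev = list(frequent)
            candidates = set()
            for i in range(len(prev)):
                for j in range(i + 1, len(prev)):
                    union = prev[i] | prev[j]
                    if len(union) == k + 1 and all(
                            frozenset(s) in frequent for s in combinations(union, k)):
                        candidates.add(union)
            # count support by scanning the DB, then eliminate infrequent candidates
            counts = {c: 0 for c in candidates}
            for t in transactions:
                for c in candidates:
                    if c <= t:
                        counts[c] += 1
            frequent = {s: c for s, c in counts.items() if c >= minsup_count}
            result.update(frequent)
            k += 1
        return result

On the market-basket data with a minsup count of 3, this returns the frequent itemsets of the previous slides, e.g. {Beer, Diaper} with count 3 and no frequent triplet.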
Reducing Number of Comparisons
• Candidate counting:
– Scan the database of transactions to determine the support
of each candidate itemset
– To reduce the number of comparisons, store the candidates
in a hash structure
• Instead of matching each transaction against every candidate, match
it against candidates contained in the hashed buckets
[Figure: the N transactions are matched only against the candidate itemsets hashing to the same buckets, instead of against all candidates]
Generate Hash Tree
Suppose you have 15 candidate itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5},
{3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}
You need:
• Hash function
• Max leaf size: max number of itemsets stored in a leaf node (if number of
candidate itemsets exceeds max leaf size, split the node)
[Figure: the resulting candidate hash tree. At the root, itemsets are hashed on their first item into the buckets {1,4,7}, {2,5,8}, {3,6,9}; interior nodes hash on the next item; leaves store the candidate itemsets, e.g. {1 4 5}, {1 3 6}, {2 3 4}]
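A minimal sketch of such a hash tree, assuming the slide's hash function (items 1,4,7 / 2,5,8 / 3,6,9 collide, i.e. i mod 3), a max leaf size of 3, and a simplified split policy:

    MAX_LEAF_SIZE = 3

    class Node:
        def __init__(self):
            self.children = {}   # bucket -> Node, for interior nodes
            self.itemsets = []   # stored candidates, for leaves

    def insert(node, itemset, depth=0):
        if node.children:                        # interior node: hash on next item
            child = node.children.setdefault(itemset[depth] % 3, Node())
            insert(child, itemset, depth + 1)
        else:
            node.itemsets.append(itemset)
            if len(node.itemsets) > MAX_LEAF_SIZE and depth < len(itemset):
                stored, node.itemsets = node.itemsets, []
                for s in stored:                 # split: redistribute by next item
                    child = node.children.setdefault(s[depth] % 3, Node())
                    insert(child, s, depth + 1)

    root = Node()
    for c in [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9), (1,3,6), (2,3,4),
              (5,6,7), (3,4,5), (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)]:
        insert(root, c)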
Association Rule Discovery: Hash Tree
[Figures: the same candidate hash tree, highlighting in turn the root buckets for items hashing to {2, 5, 8} and to {3, 6, 9}]
Subset Operation
Given a transaction t = {1, 2, 3, 5, 6}, what are the possible subsets of size 3?

[Figure: systematic enumeration: level 1 fixes the first item (1, 2 or 3), level 2 the second, yielding the ten 3-subsets 123, 125, 126, 135, 136, 156, 235, 236, 256, 356]
Subset Operation Using Hash Tree
[Figure: the transaction {1 2 3 5 6} is matched against the hash tree by recursively splitting it into 1+ 2356, 2+ 356, 3+ 56 at the root, then 12+ 356, 13+ 56, 15+ 6 at the next level, and so on; only leaves reachable along these hash paths are visited]
Match transaction against 11 out of 15 candidates
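The hash tree thus visits leaves holding 11 of the 15 candidates; a brute-force check (a sketch, not the hash-tree walk) confirms which of those are actually contained in t:

    from itertools import combinations

    candidates = {(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9), (1,3,6),
                  (2,3,4), (5,6,7), (3,4,5), (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)}
    t = (1, 2, 3, 5, 6)
    contained = [s for s in combinations(t, 3) if s in candidates]
    print(contained)   # [(1, 2, 5), (1, 3, 6), (3, 5, 6)]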
Factors Affecting Complexity
• Choice of minimum support threshold
– lowering support threshold results in more frequent itemsets
– this may increase number of candidates and max length of
frequent itemsets
• Dimensionality (number of items) of the data set
– more space is needed to store support count of each item
– if number of frequent items also increases, both computation and
I/O costs may also increase
• Size of database
– since Apriori makes multiple passes, run time of algorithm may
increase with number of transactions
• Average transaction width
– transaction width increases with denser data sets
– This may increase max length of frequent itemsets and
traversals of hash tree (number of subsets in a
transaction increases with its width)
Compact Representation of Frequent Itemsets
• Some itemsets are redundant because they have identical support as their supersets

[Figure: a transaction database in which blocks of 10 items always co-occur; the number of frequent itemsets is 3 × Σ_{k=1}^{10} C(10, k)]

• Need a compact representation

Maximal Frequent Itemset
• An itemset is maximal frequent if none of its immediate supersets is frequent

[Figure: itemset lattice with the border separating frequent from infrequent itemsets; the maximal itemsets sit immediately below the border]
Closed Itemset
• An itemset is closed if none of its immediate
supersets has the same support as the itemset
TID  Items
1    {A,B,C}
2    {A,B,C,D}
3    {B,C,E}
4    {A,C,D,E}
5    {D,E}

[Figure: itemset lattice annotated with the TIDs supporting each itemset; itemsets such as ABCDE are not supported by any transaction]
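Closedness can be checked directly from the definition; a minimal sketch over the slide's five-transaction example (minsup = 2, matching the next slide):

    from itertools import combinations

    db = {1: set("ABC"), 2: set("ABCD"), 3: set("BCE"), 4: set("ACDE"), 5: set("DE")}
    items = set("ABCDE")
    minsup = 2

    def support(itemset):
        return sum(1 for t in db.values() if itemset <= t)

    closed = []
    for k in range(1, len(items) + 1):
        for combo in combinations(sorted(items), k):
            s = frozenset(combo)
            sup = support(s)
            # closed: every immediate superset has strictly smaller support
            if sup >= minsup and all(support(s | {x}) < sup for x in items - s):
                closed.append(("".join(sorted(s)), sup))
    print(len(closed), closed)   # 9 closed frequent itemsets, as on the next slide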
Maximal vs Closed Frequent Itemsets
Minimum support = 2

[Figure: the same TID-annotated lattice, distinguishing closed-but-not-maximal itemsets from closed-and-maximal ones]

# Closed = 9
# Maximal = 4
Maximal vs Closed Itemsets
[Figure: nested sets: Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets]
Alternative Methods for Frequent Itemset Generation
• Traversal of the itemset lattice
[Figure: the frequent itemset border in the lattice between the null set and {a1, a2, ..., an} can be approached (a) general-to-specific, (b) specific-to-general, or (c) bidirectionally]
Alternative Methods for Frequent Itemset Generation
• Equivalence classes in the itemset lattice
[Figure: the lattice over {A, B, C, D} partitioned (a) as a prefix tree and (b) as a suffix tree]
FP-tree construction
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}

After reading TID=1:  [Figure: null → A:1 → B:1]
After reading TID=2:  [Figure: null → A:1 → B:1, plus a second branch null → B:1 → C:1 → D:1]
FP-Tree Construction
[Figure: the complete FP-tree for the transaction database above; the null root has children A:7 and B:3, with B:5 and C:1 under A:7, C:3 under B:5, and C:1/D:1 nodes further down]
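A minimal construction sketch, assuming the fixed insertion order A..E used on the slide (the node layout is illustrative; header-table links and the mining step are omitted):

    class FPNode:
        def __init__(self, item, parent):
            self.item, self.parent, self.count, self.children = item, parent, 0, {}

    def build_fptree(transactions, order):
        root = FPNode(None, None)
        for t in transactions:
            node = root
            for item in sorted(t, key=order.index):   # fixed global item order
                child = node.children.get(item)
                if child is None:
                    child = FPNode(item, node)
                    node.children[item] = child
                child.count += 1
                node = child
        return root

    db = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
          {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"}]
    tree = build_fptree(db, order="ABCDE")
    print(tree.children["A"].count, tree.children["B"].count)   # 7 3, as in the figure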
Tree Projection
Set enumeration tree:
[Figure: the itemset lattice over {A, B, C, D, E} viewed as a set enumeration tree; possible extensions E(A) = {B,C,D,E} and E(ABC) = {D,E}]
Tree Projection
• Items are listed in lexicographic order
• Each node P stores the following information:
– Itemset for node P
– List of possible lexicographic extensions of P: E(P)
– Pointer to projected database of its ancestor node
– Bitvector containing information about which
transactions in the projected database contain the
itemset
Projected Database
Original Database:          Projected Database for node A:
TID  Items                  TID  Items
1    {A,B}                  1    {B}
2    {B,C,D}                2    {}
3    {A,C,D,E}              3    {C,D,E}
4    {A,D,E}                4    {D,E}
5    {A,B,C}                5    {B,C}
6    {A,B,C,D}              6    {B,C,D}
7    {B,C}                  7    {}
8    {A,B,C}                8    {B,C}
9    {A,B,D}                9    {B,D}
10   {B,C,E}                10   {}

For each transaction T containing A, the projected transaction at node A is T ∩ E(A); transactions without A project to the empty set.
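The projection is a per-transaction intersection (a sketch, with E(A) = {B, C, D, E} as above):

    db = {1: {"A","B"}, 2: {"B","C","D"}, 3: {"A","C","D","E"}, 4: {"A","D","E"},
          5: {"A","B","C"}, 6: {"A","B","C","D"}, 7: {"B","C"}, 8: {"A","B","C"},
          9: {"A","B","D"}, 10: {"B","C","E"}}
    EA = {"B", "C", "D", "E"}
    projected = {tid: (t & EA if "A" in t else set()) for tid, t in db.items()}
    print(projected[3], projected[2])   # {'C','D','E'} and set()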
ECLAT
• For each item, store a list of transaction ids (tids)
Horizontal Data Layout:        Vertical Data Layout (TID-lists):
TID  Items                     A: 1,4,5,6,7,8,9
1    A,B,E                     B: 1,2,5,7,8,10
2    B,C,D                     C: 2,3,4,5,8,9
3    C,E                       D: 2,4,5,9
4    A,C,D                     E: 1,3,6
5    A,B,C,D
6    A,E
7    A,B
8    A,B,C
9    A,C,D
10   B
ECLAT
• Determine the support of any k-itemset by intersecting the TID-lists of two of its (k−1)-subsets:

  A: 1,4,5,6,7,8,9  ∧  B: 1,2,5,7,8,10  →  AB: 1,5,7,8

• 3 traversal approaches:
– top-down, bottom-up and hybrid
• Advantage: very fast support counting
• Disadvantage: intermediate tid-lists may become too
large for memory
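The intersection itself is a single set operation (a sketch over the TID-lists above):

    tidlists = {"A": {1, 4, 5, 6, 7, 8, 9}, "B": {1, 2, 5, 7, 8, 10}}
    ab = tidlists["A"] & tidlists["B"]     # TID-list of the 2-itemset {A,B}
    print(sorted(ab), len(ab))             # [1, 5, 7, 8] -> support count 4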
Rule Generation
• Given a frequent itemset L, find all non-empty
subsets f ⊂ L such that f → L – f satisfies the
minimum confidence requirement
– If {A,B,C,D} is a frequent itemset, candidate rules:
ABC →D, ABD →C, ACD →B, BCD →A,
A →BCD, B →ACD, C →ABD, D →ABC
AB →CD, AC → BD, AD → BC, BC →AD,
BD →AC, CD →AB,
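Enumerating the candidate rules of one frequent itemset is a direct subset walk; for |L| = k there are 2^k − 2 non-trivial candidates (the 14 rules above for k = 4). A minimal sketch:

    from itertools import combinations

    L = frozenset("ABCD")
    for k in range(1, len(L)):
        for antecedent in combinations(sorted(L), k):
            f = frozenset(antecedent)
            print("".join(sorted(f)), "->", "".join(sorted(L - f)))
    # prints the 14 candidate rules listed above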
Rule Generation
• How to efficiently generate rules from frequent
itemsets?
– In general, confidence does not have an anti-
monotone property
c(ABC →D) can be larger or smaller than c(AB →D)
– But confidence of rules generated from the same
itemset has an anti-monotone property
– e.g., L = {A,B,C,D}:
c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
• Confidence is anti-monotone w.r.t. number of items on the
RHS of the rule
Rule Generation for Apriori Algorithm

[Figure: lattice of rules generated from {A,B,C,D}, from ABCD⇒{} at the top down to A⇒BCD; if BCD⇒A is found to be a low-confidence rule, all rules below it whose consequent contains A (CD⇒AB, BD⇒AC, BC⇒AD, D⇒ABC, C⇒ABD, B⇒ACD, A⇒BCD) are pruned]
Rule Generation for Apriori Algorithm
• A candidate rule is generated by merging two rules that share the same prefix in the rule consequent
• join(CD⇒AB, BD⇒AC) would produce the candidate rule D⇒ABC
• Prune rule D⇒ABC if its subset AD⇒BC does not have high confidence
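Putting the two slides together, a minimal sketch that generates and confidence-filters the rules of one frequent itemset of the market-basket data (minconf = 0.6 is an assumed threshold; the anti-monotone pruning of the lattice is not implemented here):

    from itertools import combinations

    transactions = [{"Bread", "Milk"}, {"Bread", "Diaper", "Beer", "Eggs"},
                    {"Milk", "Diaper", "Beer", "Coke"}, {"Bread", "Milk", "Diaper", "Beer"},
                    {"Bread", "Milk", "Diaper", "Coke"}]

    def support_count(itemset):
        return sum(1 for t in transactions if itemset <= t)

    L, minconf = frozenset({"Milk", "Diaper", "Beer"}), 0.6
    for k in range(len(L) - 1, 0, -1):          # large antecedents first
        for antecedent in combinations(sorted(L), k):
            f = frozenset(antecedent)
            c = support_count(L) / support_count(f)
            if c >= minconf:
                print(set(f), "->", set(L - f), round(c, 2))
    # keeps {Milk,Diaper}->{Beer} (0.67), {Milk,Beer}->{Diaper} (1.0), etc.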
Effect of Support Distribution
• Many real data sets have skewed support
distribution
[Figure: support distribution of a retail data set: most items have very low support, while a few are extremely frequent]
Effect of Support Distribution
• How to set the appropriate minsup threshold?
  – If minsup is set too high, we could miss itemsets involving interesting rare items (e.g., expensive products)
  – If minsup is set too low, the number of frequent itemsets explodes and mining becomes computationally expensive
Multiple Minimum Support
• How to apply multiple minimum supports?
– MS(i): minimum support for item i
– e.g.: MS(Milk)=5%, MS(Coke) = 3%,
MS(Broccoli)=0.1%, MS(Salmon)=0.5%
– MS({Milk, Broccoli}) = min (MS(Milk), MS(Broccoli))
= 0.1%
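This definition is a one-liner (a sketch using the slide's percentages):

    MS = {"Milk": 0.05, "Coke": 0.03, "Broccoli": 0.001, "Salmon": 0.005}

    def ms(itemset):
        # MS of an itemset = minimum of its items' thresholds
        return min(MS[i] for i in itemset)

    print(ms({"Milk", "Broccoli"}))   # 0.001, i.e. 0.1%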
Multiple Minimum Support

Item  MS(I)   Sup(I)
A     0.10%   0.25%
B     0.20%   0.26%
C     0.30%   0.29%
D     0.50%   0.05%
E     3%      4.20%

[Figure: lattice of 2- and 3-itemsets over A..E (AB, AC, ..., DE; ABC, ..., CDE), showing which itemsets survive under the item-specific thresholds]
Multiple Minimum Support (Liu 1999)
• Order the items according to their minimum support (in ascending order)
  – e.g.: MS(Milk)=5%, MS(Coke)=3%, MS(Broccoli)=0.1%, MS(Salmon)=0.5%
  – Ordering: Broccoli, Salmon, Coke, Milk
Pattern Evaluation
• Association rule algorithms tend to produce too
many rules
– many of them are uninteresting or redundant
– Redundant if {A,B,C} → {D} and {A,B} → {D}
have same support & confidence
Application of Interestingness Measures

[Figure: the knowledge discovery pipeline: feature selection and preprocessing turn raw product/feature data into preprocessed data, mining turns it into patterns, and interestingness measures are applied in postprocessing to distill knowledge]
Computing Interestingness Measure
• Given a rule X → Y, the information needed to compute rule interestingness can be obtained from a contingency table:

        Y      ¬Y
  X     f11    f10    f1+
  ¬X    f01    f00    f0+
        f+1    f+0    |T|

  f11: support of X and Y;  f10: support of X and ¬Y;  f01: support of ¬X and Y;  f00: support of ¬X and ¬Y
Statistical-based Measures
• Measures that take into account statistical dependence:

  Lift = P(Y|X) / P(Y)

  Interest = P(X,Y) / (P(X) P(Y))

  PS = P(X,Y) − P(X) P(Y)

  φ-coefficient = (P(X,Y) − P(X) P(Y)) / √( P(X)[1 − P(X)] P(Y)[1 − P(Y)] )
Example: Lift/Interest
         Coffee  ¬Coffee
Tea      15      5        20
¬Tea     75      5        80
         90      10       100

Association rule: Tea → Coffee
Confidence = P(Coffee|Tea) = 15/20 = 0.75, but P(Coffee) = 0.9
⇒ Lift = 0.75/0.9 = 0.83 (< 1, so Tea and Coffee are negatively associated)
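A quick computation of these measures on the table above (a sketch; variable names follow the f11/f10/f01/f00 notation used later):

    f11, f10, f01, f00 = 15, 5, 75, 5          # Tea/Coffee contingency table
    n = f11 + f10 + f01 + f00
    p_xy = f11 / n                             # P(Tea, Coffee)
    p_x = (f11 + f10) / n                      # P(Tea)
    p_y = (f11 + f01) / n                      # P(Coffee)
    confidence = p_xy / p_x                    # P(Coffee | Tea) = 0.75
    lift = confidence / p_y                    # 0.75 / 0.9 ≈ 0.83 < 1
    ps = p_xy - p_x * p_y                      # 0.15 - 0.18 = -0.03, also negative
    print(confidence, lift, ps)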
Drawback of Lift & Interest
      Y    ¬Y               Y    ¬Y
X     10   0     10    X    90   0     90
¬X    0    90    90    ¬X   0    10    10
      10   90    100        90   10    100

Lift = 0.1 / (0.1 × 0.1) = 10        Lift = 0.9 / (0.9 × 0.9) = 1.11

Statistical independence:
If P(X,Y) = P(X)P(Y) ⇒ Lift = 1
Properties of A Good Measure
• Piatetsky-Shapiro:
  3 properties a good measure M must satisfy:
  – M(A,B) = 0 if A and B are statistically independent
  – M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged
  – M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged
Comparing Different Measures

10 examples of contingency tables:

Example  f11   f10   f01   f00
E1       8123  83    424   1370
E2       8330  2     622   1046
E3       9481  94    127   298
E4       3954  3080  5     2961
E5       2886  1363  1320  4431
E6       1500  2000  500   6000
E7       4000  2000  1000  3000
E8       4000  2000  2000  2000
E9       1720  7121  5     1154
E10      61    2483  4     7452

Rankings of contingency tables using various measures:
[Figure: rankings of the ten tables under each measure]
Property under Variable Permutation

      B    ¬B              A    ¬A
A     p    q         B     p    r
¬A    r    s         ¬B    q    s

Does M(A,B) = M(B,A)?

Symmetric measures:
◆ support, lift, collective strength, cosine, Jaccard, etc.
Asymmetric measures:
◆ confidence, conviction, Laplace, J-measure, etc.
Property under Row/Column Scaling

Grade-Gender Example (Mosteller, 1968):

       Male  Female             Male  Female
High   2     3       5    High  4     30      34
Low    1     4       5    Low   2     40      42
       3     7       10         6     70      76
                                (2x)  (10x)

Mosteller: the underlying association should be independent of the relative number of male and female students in the samples
Property under Inversion Operation

[Figure: transaction bit-vectors A–F over N transactions; inversion swaps every 0 with 1 and vice versa]
Invariant measures:
◆ support, cosine, Jaccard, etc
Non-invariant measures:
◆ correlation, Gini, mutual information, odds ratio, etc
Different Measures have Different Properties

Symbol  Measure              Range              P1    P2   P3   O1     O2   O3    O3'
Φ       Correlation          −1 … 0 … 1         Yes   Yes  Yes  Yes    No   Yes   Yes
λ       Lambda               0 … 1              Yes   No   No   Yes    No   No*   Yes
α       Odds ratio           0 … 1 … ∞          Yes*  Yes  Yes  Yes    Yes  Yes*  Yes
Q       Yule's Q             −1 … 0 … 1         Yes   Yes  Yes  Yes    Yes  Yes   Yes
Y       Yule's Y             −1 … 0 … 1         Yes   Yes  Yes  Yes    Yes  Yes   Yes
κ       Cohen's              −1 … 0 … 1         Yes   Yes  Yes  Yes    No   No    Yes
M       Mutual Information   0 … 1              Yes   Yes  Yes  Yes    No   No*   Yes
J       J-Measure            0 … 1              Yes   No   No   No     No   No    No
G       Gini Index           0 … 1              Yes   No   No   No     No   No*   Yes
s       Support              0 … 1              No    Yes  No   Yes    No   No    No
c       Confidence           0 … 1              No    Yes  No   Yes    No   No    No
L       Laplace              0 … 1              No    Yes  No   Yes    No   No    No
V       Conviction           0.5 … 1 … ∞        No    Yes  No   Yes**  No   No    Yes
I       Interest             0 … 1 … ∞          Yes*  Yes  Yes  Yes    No   No    No
IS      IS (cosine)          0 … 1              No    Yes  Yes  Yes    No   No    No
PS      Piatetsky-Shapiro's  −0.25 … 0 … 0.25   Yes   Yes  Yes  Yes    No   Yes   Yes
F       Certainty factor     −1 … 0 … 1         Yes   Yes  Yes  No     No   No    Yes
AV      Added value          0.5 … 1 … 1        Yes   Yes  Yes  No     No   No    No
S       Collective strength  0 … 1 … ∞          No    Yes  Yes  Yes    No   Yes*  Yes
ζ       Jaccard              0 … 1              No    Yes  Yes  Yes    No   No    No
Support-based Pruning
• Most of the association rule mining algorithms use
support measure to prune rules and itemsets
Effect of Support-based Pruning

[Figure: histogram of the pairwise correlation over all item pairs; correlations span the full −1 … 1 range]
Effect of Support-based Pruning

[Figures: the same correlation histograms restricted to item pairs with support < 0.01, support < 0.03, and support < 0.05]
Effect of Support-based Pruning
• Investigate how support-based pruning affects other
measures
• Steps:
– Generate 10000 contingency tables
– Rank each table according to the different measures
– Compute the pair-wise correlation between the measures
Effect of Support-based Pruning

◆ Without support pruning (all pairs):
[Figure: pairwise correlation matrix of the 21 interestingness measures (Odds ratio, Conviction, Collective Strength, Correlation, Interest, PS, CF, Yule Y, Reliability, Kappa, Klosgen, Yule Q, Confidence, Laplace, IS, Support, Jaccard, Lambda, Gini, J-measure, Mutual Info) and a scatter plot between Correlation and Jaccard; 40.14% of measure pairs have correlation > 0.85]

◆ With support-based pruning (0.5% ≤ support ≤ 50%):
[Figure: the same matrix and scatter plot; 61.45% of measure pairs have correlation > 0.85]
Effect of Support-based Pruning

◆ 0.5% ≤ support ≤ 30%
[Figure: pairwise correlation matrix of the 21 measures and the Correlation vs. Jaccard scatter plot; 76.42% of measure pairs have correlation > 0.85]
Subjective Interestingness Measure
• Objective measure:
– Rank patterns based on statistics computed from data
– e.g., 21 measures of association (support, confidence, Laplace,
Gini, mutual information, Jaccard, etc).
• Subjective measure:
– Rank patterns according to user’s interpretation
• A pattern is subjectively interesting if it contradicts the
expectation of a user (Silberschatz & Tuzhilin)
• A pattern is subjectively interesting if it is actionable
(Silberschatz & Tuzhilin)
Interestingness via Unexpectedness
• Need to model expectation of users (domain knowledge)
[Figure: patterns marked + (expected to be frequent) or − (expected to be infrequent); patterns whose observed frequency matches the expectation are expected patterns, mismatches are unexpected patterns]
Interestingness via Unexpectedness
• Web Data (Cooley et al 2001)
– Domain knowledge in the form of site structure
– Given an itemset F = {X1, X2, …, Xk} (Xi : Web pages)
• L: number of links connecting the pages
• lfactor = L / (k × (k−1))
• cfactor = 1 (if graph is connected), 0 (disconnected graph)
– Structure evidence = cfactor × lfactor
– Usage evidence = P(X1 ∩ X2 ∩ … ∩ Xk) / P(X1 ∪ X2 ∪ … ∪ Xk)