
Multivariate interdependent discretization in discovering the best correlated attribute

S. Chao & Y. P. Li
Faculty of Science and Technology, University of Macau, Macau, China.

Abstract
Decision tree induction is one of the most widely used and practical methods in data mining. However, many discretization algorithms developed in this field are univariate only: they discretize continuous-valued attributes independently, without considering the interdependent relationships with other attributes, at most taking the class attribute into account. Such univariate discretization is inadequate for handling critical problems, especially those found in the medical domain. In this paper, we propose a new multivariate discretization method called Multivariate Interdependent Discretization for Continuous Attributes (MIDCA). The method incorporates normalized relief and information measures to discover the best correlated attribute with respect to each continuous-valued attribute being discretized, and uses this attribute as the interdependent attribute for carrying out the multivariate discretization. We believe that a good multivariate discretization scheme for continuous-valued attributes should rely heavily on their respective best correlated attributes, since within an attribute space each attribute should have at least one most relevant attribute, which may differ from attribute to attribute. Our multivariate discretization algorithm minimizes the uncertainty between the interdependent attribute and the continuous-valued attribute being discretized while at the same time maximizing their correlation. The method can be used as a preprocessing step for learning algorithms. The empirical results compare the performance of MIDCA with various discretization methods for two decision tree algorithms, ID3 and C4.5, on twelve real-life datasets from the UCI repository.
Keywords: multivariate discretization, interdependent feature, correlated attribute, data mining, machine learning.

1 Introduction

Decision tree induction is one of the most widely used and practical methods for inductive inference in the data mining and machine learning disciplines (Han and Kamber [1]). Most decision tree learning algorithms are limited to handling attributes with discrete values only, whereas real datasets are usually a mix of discrete-valued and continuous-valued attributes. The common way to handle continuous-valued attributes is to discretize them by dividing their ranges into intervals. Moreover, even if a learning algorithm is able to deal with continuous-valued attributes directly, it is still better to carry out discretization prior to learning, so as to minimize information loss and increase classification accuracy.

Many discretization algorithms developed in data mining are univariate: they discretize each continuous-valued attribute independently, without considering the interdependent relationships with other attributes, at most taking the relationship with the class attribute into account. The simplest discretization method is equal-width interval binning (Dougherty et al [2]), which divides the range of a continuous-valued attribute into several equally sized bins. It makes no use of the class attribute and is therefore an unsupervised discretization method. The better-performing discretization algorithms are supervised, taking the class attribute information into consideration. One is entropy based: it recursively partitions a continuous-valued attribute to obtain the minimal entropy measure (Fayyad and Irani [3]) and uses the minimum description length principle as the stopping criterion. Another is based on the chi-square statistic (Liu and Setiono [4]), which aims at keeping the discretized data distributed as similarly as possible to the original data. Evaluations and comparisons of some supervised and unsupervised univariate discretization methods can be found in [2, 5].

As Bay [6, 7] indicated, the discretized intervals should make sense to a human expert. For example, when learning from medical data on hypertensive patients, we know that a person's blood pressure increases with age. It is therefore improper to set 140 mmHg and 90 mmHg as the systolic and diastolic cut-off points for all patients, since the standard for diagnosing hypertension differs slightly between young people (orthoarteriotony is 120-130 mmHg/80 mmHg) and old people (orthoarteriotony is 140 mmHg/90 mmHg) [8]. If the blood pressure of a person aged 20 is 139 mmHg/89 mmHg, he or she might be considered a potential hypertensive; in contrast, if a person aged 65 has the same blood pressure readings, he or she is definitely considered normotensive. Obviously, to discretize the continuous-valued attribute blood pressure, at least the attribute age must be taken into consideration, whereas discretizing other continuous-valued attributes may not need to take age into account. The only solution to this problem is to use multivariate interdependent discretization in place of univariate discretization. Multivariate interdependent discretization concerns the correlation between the attribute being discretized and the other potential interdependent attributes in addition to the class attribute. Few works in the literature discuss multivariate interdependent discretization methods. In this paper, we propose a new multivariate interdependent discretization method that can be used as a preprocessing step for learning algorithms, called Multivariate Interdependent Discretization for Continuous Attributes (MIDCA). The method uses normalized relief and information measures to discover the best correlated attribute for each continuous-valued attribute being discretized, and uses it as the interdependent attribute for carrying out the multivariate discretization. In the next section, we describe our discretization method in detail. The evaluation of the proposed algorithm on some real datasets is presented in section 3. Finally, we discuss the limitations of the method and present directions for further research in section 4.

2 MIDCA algorithm

To obtain good quality in a multivariate discretization, discovering the best interdependent attribute with respect to each continuous-valued attribute being discretized is the primary task. To measure the correlation between attributes, the entropy measure [3, 9, 10] and relief theory [11, 12] are adopted. Relief is a feature weighting algorithm for estimating the quality of attributes; it is able to discover the interdependencies among attributes. Entropy, from information theory, is a measure of the uncertainty of an arbitrary variable. In this section, we first recall entropy information and the theory of relief, and then describe our discretization method in detail.

2.1 Entropy information

Entropy specifies the minimum number of bits of information needed to encode the classification of an arbitrary member of a collection of instances [9, 10]. Given a collection of instances S containing C classes of a target attribute, the entropy of S relative to this C-way classification is defined as

Entropy(S) = -\sum_{i \in C} p(S_i) \log p(S_i).    (1)

where p(S_i) is the proportion of S belonging to class i. Based on this measure, we can find the most informative attribute A relative to a collection of examples S by defining the measure called information gain

Gain(S, A) = Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} Entropy(S_v).    (2)

where Values(A) is the set of all distinct values of attribute A, and S_v is the subset of S for which attribute A has value v, that is S_v = \{s \in S \mid A(s) = v\}.
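For concreteness, the following is a minimal Python sketch of eqns (1) and (2); the function names and the plain-list data representation are our own illustrative choices, not part of the original paper.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a collection of class labels, eqn (1)."""
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def information_gain(values, labels):
    """Information gain of a discrete-valued attribute with respect to the
    class labels, eqn (2). `values` and `labels` are parallel lists."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(values):
        subset = [c for x, c in zip(values, labels) if x == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# toy usage example
if __name__ == "__main__":
    attr = ["low", "low", "high", "high"]
    cls  = ["no",  "no",  "yes",  "yes"]
    print(information_gain(attr, cls))  # 1.0: the attribute separates the classes perfectly
```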
2.2 Relief

The key idea of relief (Kira and Rendell [11, 12]) is to estimate the quality of an attribute by calculating how well its values distinguish among instances from the same class and from different classes. A good attribute should have the same value for instances from the same class and should differentiate between instances from different classes. Kononenko [13] notes that relief attempts to approximate the following difference of probabilities for the weight of an attribute A

Relief_A = P(\text{different value of } A \mid \text{different class}) - P(\text{different value of } A \mid \text{same class}).    (3)

which can be reformulated as

Relief_A = \frac{Gini'(A) \sum_{x \in X} p(x)^2}{\left(1 - \sum_{c \in C} p(c)^2\right) \sum_{c \in C} p(c)^2}.    (4)

where C is the class attribute and

Gini'(A) = \sum_{c \in C} p(c)\bigl(1 - p(c)\bigr) - \sum_{x \in X} \frac{p(x)^2}{\sum_{x \in X} p(x)^2} \sum_{c \in C} p(c \mid x)\bigl(1 - p(c \mid x)\bigr).    (5)

Gini' is a variant of another attribute quality measure, the Gini-index of Breiman [14].
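The relief estimate of eqns (4)-(5) can be sketched in Python as follows. This is again an illustrative sketch with our own names; it assumes both the attribute and the class are discrete-valued and that more than one class is present, so the denominator of eqn (4) is non-zero.

```python
from collections import Counter

def relief_weight(values, labels):
    """Relief estimate of the quality of a discrete attribute via Gini',
    eqns (4)-(5). `values` and `labels` are parallel lists."""
    n = len(labels)
    p_x = {v: k / n for v, k in Counter(values).items()}
    p_c = {c: k / n for c, k in Counter(labels).items()}
    sum_px2 = sum(p * p for p in p_x.values())
    sum_pc2 = sum(p * p for p in p_c.values())

    # Gini'(A): prior class impurity minus the p(x)^2-weighted conditional impurity, eqn (5)
    gini = sum(p * (1 - p) for p in p_c.values())
    for v, pv in p_x.items():
        subset = [c for x, c in zip(values, labels) if x == v]
        p_c_x = {c: k / len(subset) for c, k in Counter(subset).items()}
        gini -= (pv * pv / sum_px2) * sum(p * (1 - p) for p in p_c_x.values())

    # eqn (4): scale by the attribute-value and class-distribution terms
    return gini * sum_px2 / ((1 - sum_pc2) * sum_pc2)
```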

2.3 MIDCA

Our proposed multivariate discretization method MIDCA is mainly concerned with discovering the best interdependent attribute relative to the continuous-valued attribute being discretized. Within an attribute space, attributes have a certain relevancy to one another. No matter how loose or tight these relationships are, there must exist at least one interdependent attribute that correlates best with the continuous-valued attribute being discretized, and we believe that a good multivariate discretization scheme for continuous-valued attributes should rely heavily on these respective best correlated attributes.

We assume that a dataset S = {s_1, s_2, ..., s_N} contains N instances. Each instance s \in S is defined over a set of M attributes (features) A = {a_1, a_2, ..., a_M} and a class attribute c \in C. For each continuous-valued attribute a_i \in A, there exists at least one a_j \in A such that a_j is the most correlated attribute with respect to a_i, and vice versa, since the interdependent weight is measured symmetrically. To find such a best interdependent attribute a_j for each continuous-valued attribute a_i, both entropy information and the relief measure are taken into account to capture the interactions within the attribute space A. First, for each attribute pair (a_i, a_j) \in A with i \neq j, we calculate the correlation weights using both symmetric relief and symmetric information gain. We then normalize the two measures and finally select the best result as our target. The weight is defined as

InterdependentWeight(a_i, a_j) = \left[ \frac{SymGain(a_i, a_j)}{\sum_{M=1, M \neq i}^{|A|} SymGain(a_i, a_M)} + \frac{SymRelief(a_i, a_j)}{\sum_{M=1, M \neq i}^{|A|} SymRelief(a_i, a_M)} \right] / 2.    (6)

SymGain(a_i, a_j) and SymRelief(a_i, a_j) are symmetric forms of the information gain and relief measures respectively, which treat either a_i or a_j in turn as the class attribute C in the corresponding formula. That is

SymGain(A, B) = \bigl[ Gain(A, B) + Gain(B, A) \bigr] / 2.    (7)

and
SymRelief(A, B) = \left[ \frac{Gini'(A) \sum_{x \in X} p(x)^2}{\left(1 - \sum_{b \in B} p(b)^2\right) \sum_{b \in B} p(b)^2} + \frac{Gini'(B) \sum_{y \in Y} p(y)^2}{\left(1 - \sum_{a \in A} p(a)^2\right) \sum_{a \in A} p(a)^2} \right] / 2.    (8)

The advantage of incorporating the information gain and relief measures in our multivariate interdependent discretization algorithm is to minimize the uncertainty between the continuous-valued attribute being discretized and its interdependent attribute, and at the same time to maximize their correlation. The measures output by eqns (7) and (8) are on different scales; the way to balance them is to normalize each result by using proportions in place of the raw values. Thus, the best interdependent attribute with respect to the continuous-valued attribute being discretized is the one whose averaged normalized proportions, i.e. its interdependent weight, is the largest amongst all the potential interdependent attributes. However, if a potential interdependent attribute is itself continuous-valued, it is first discretized with the entropy-based method [2, 3]. This is important and may reduce the bias in favor of attributes with more values. Furthermore, our method selects an interdependent attribute for each continuous-valued attribute in a dataset rather than using a single one for all continuous-valued attributes; this is also the main factor in improving the final classification accuracy.
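Putting eqns (6)-(8) together, a sketch of the interdependent-weight computation might look like the following. It reuses the `entropy`, `information_gain` and `relief_weight` helpers sketched earlier, assumes every attribute passed in has already been discretized where necessary, and all names are our own rather than the authors' implementation.

```python
def sym_gain(a, b):
    """Symmetric information gain, eqn (7): each attribute plays the class role in turn."""
    return (information_gain(a, b) + information_gain(b, a)) / 2.0

def sym_relief(a, b):
    """Symmetric relief, eqn (8): average of the two directed relief estimates."""
    return (relief_weight(a, b) + relief_weight(b, a)) / 2.0

def best_interdependent_attribute(attrs, i):
    """Return the attribute j maximizing InterdependentWeight(a_i, a_j), eqn (6).
    `attrs` maps attribute names to equal-length columns of discrete values."""
    others = [m for m in attrs if m != i]
    gain_total = sum(sym_gain(attrs[i], attrs[m]) for m in others)
    relief_total = sum(sym_relief(attrs[i], attrs[m]) for m in others)

    def weight(j):
        g = sym_gain(attrs[i], attrs[j]) / gain_total if gain_total else 0.0
        r = sym_relief(attrs[i], attrs[j]) / relief_total if relief_total else 0.0
        return (g + r) / 2.0  # average of the two normalized proportions

    return max(others, key=weight)
```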

Once the best interdependent attribute has been discovered, the multivariate interdependent discretization is carried out by adopting the well-known supervised discretization algorithm of minimal entropy partitioning with the MDLP stopping criterion, Fayyad and Irani [3]. Nevertheless, our method differs from the original in several respects. First, it ensures at least a binary discretization for each continuous-valued attribute, unlike the original method, where a continuous-valued attribute is sometimes left with the single interval [-\infty, +\infty]. We realize that if a continuous-valued attribute generates no cut point, the attribute is useless and will be ignored during the learning process. This conflicts with our belief that most continuous-valued attributes in the medical domain have specific meanings; most such figures express a degree of illness, such as blood pressure, heart rate or cholesterol, so their discretization cannot be ignored. Second, our discretization is carried out with respect to the best interdependent attribute discovered by eqn (6), in addition to the class attribute. Moreover, we assume that the interdependent attribute INT has T discrete values; each of its distinct values identifies a subset of the original dataset, and the probabilities should be computed relative to that subset in place of the whole dataset. Therefore, the combined probability distribution over the attribute space {C} \cup A is redefined, as is the information gain measure:

MIDCAInfoGain(A, P; INT_T, S) = Entropy(S \mid INT_T) - \sum_{v \in Values(A) \mid INT_T} \frac{|S_v|}{|S|} Entropy(S_v).    (9)

where the measure defines the class information entropy of the partition induced by P, a collection of candidate cut points for attribute A, under the projection of value T of the interdependent attribute INT. We replace Entropy(S) with the conditional entropy Entropy(S | INT_T) to emphasize the interaction with the interdependent attribute INT. Consequently, Values(A)|INT_T becomes the set of all distinct values of attribute A within the cluster induced by value T of the interdependent attribute INT, and S_v is the subset of S for which attribute A has value v under the projection of T for INT, that is S_v = \{s \in S \mid A(s) = v \wedge INT(s) = T\}.
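As an illustration of eqn (9), the sketch below evaluates the class-information entropy of a candidate cut-point set inside the cluster induced by one value t of the interdependent attribute. It reuses the `entropy` helper from section 2.1, and the names and data layout are again our own illustrative assumptions.

```python
def midca_info_gain(attr_values, labels, int_values, t, cut_points):
    """MIDCAInfoGain(A, P; INT_t, S), eqn (9): information gain of the partition
    induced by `cut_points` on the continuous attribute, restricted to the
    subset of instances whose interdependent attribute INT takes value t."""
    cluster = [(a, c) for a, c, iv in zip(attr_values, labels, int_values) if iv == t]
    if not cluster:
        return 0.0
    cluster_labels = [c for _, c in cluster]
    n = len(cluster)
    cuts = sorted(cut_points)

    def interval(a):
        # index of the interval (0 .. len(cuts)) into which value a falls
        return sum(a > cp for cp in cuts)

    gain = entropy(cluster_labels)  # Entropy(S | INT_t)
    for v in set(interval(a) for a, _ in cluster):
        subset = [c for a, c in cluster if interval(a) == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain
```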

2.4 MIDCA high-level description

We now present the high-level descriptions of our MIDCA algorithm and of the algorithm INTDDiscovery, which discovers the best correlated interdependent attribute:

Algorithm MIDCA
  For each continuous-valued attribute A
    Sort A in ascending order;
    Discover the best interdependent attribute of A by INTDDiscovery;
    Repeat
      Discover the best cut points by the MIDCAInfoGain measure;
    Until MDLP = pass;
    Regenerate the dataset according to the obtained cut points;
End MIDCA.

Algorithm INTDDiscovery
  For each attribute atr other than A
    If atr is a continuous-valued attribute
      Discretize atr using the entropy-based method;
    Calculate the symmetric entropy SymGain for A and atr;
    Calculate the symmetric relief SymRelief for A and atr;
    Normalize SymGain and SymRelief;
    Average SymGain and SymRelief;
  Output the attribute with the highest average measure;
End INTDDiscovery.
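A sketch of one partitioning step, including the "at least binary" modification described in section 2.3, is given below. Here `mdlp_accepts` is the standard Fayyad-Irani stopping test written out explicitly, the `entropy` and `midca_info_gain` helpers are those sketched earlier, and the overall structure and all names are our own reading of the pseudocode above rather than the authors' implementation.

```python
import math

def mdlp_accepts(labels, left, right):
    """Fayyad & Irani MDLP acceptance test [3] for a binary split of `labels`
    into `left` and `right` (standard closed form)."""
    n = len(labels)
    gain = entropy(labels) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)
    k, k1, k2 = len(set(labels)), len(set(left)), len(set(right))
    delta = math.log2(3 ** k - 2) - (k * entropy(labels)
                                     - k1 * entropy(left) - k2 * entropy(right))
    return gain > (math.log2(n - 1) + delta) / n

def midca_split(values, labels, int_values, t, top_level=True):
    """Recursive entropy partitioning of the cluster INT = t. Unlike plain
    Fayyad-Irani, the best top-level cut is kept even if MDLP rejects it, so
    every continuous attribute receives at least a binary discretization."""
    pairs = sorted((v, c) for v, c, iv in zip(values, labels, int_values) if iv == t)
    labs = [c for _, c in pairs]
    candidates = [(a + b) / 2 for (a, _), (b, _) in zip(pairs, pairs[1:]) if a != b]
    if not candidates:
        return []
    best = max(candidates,
               key=lambda cp: midca_info_gain(values, labels, int_values, t, [cp]))
    left = [c for v, c in pairs if v <= best]
    right = [c for v, c in pairs if v > best]
    if not mdlp_accepts(labs, left, right):
        return [best] if top_level else []  # binary floor at the top level only
    # recurse into the two halves; the interdependent value t stays fixed
    lv = [(v, c) for v, c in pairs if v <= best]
    rv = [(v, c) for v, c in pairs if v > best]
    return (midca_split([v for v, _ in lv], [c for _, c in lv], [t] * len(lv), t, False)
            + [best]
            + midca_split([v for v, _ in rv], [c for _, c in rv], [t] * len(rv), t, False))
```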

3 Experiments

In this section, our empirical evaluation results are presented. We have tested our method MIDCA on twelve real-life datasets from the UCI repository, Blake and Merz [15], which contain a mixture of continuous and discrete attributes. The details of each dataset are listed in table 1. In order to compare the MIDCA algorithm against different discretization methods, we simulated the univariate and multivariate discretization methods, where the interdependent attributes for the multivariate discretizations are obtained using Relief and Gain Ratio respectively. In the experiment, MIDCA and the various discretization methods are used as preprocessing steps for two learning algorithms: ID3, Quinlan [16], and C4.5, Quinlan [17, 18].

Table 1: Twelve real-life datasets from the UCI repository
No.  Dataset          Continuous  Discrete  Training set  Testing set  Classes
                      features    features  (instances)   (instances)
 1   Cleve                 6          7          202           101         2
 2   Hepatitis             6         13          103            52         2
 3   Hypothyroid           7         18         2108          1055         2
 4   Heart                13          0          180            90         2
 5   Sick-euthyroid        7         18         2108          1055         2
 6   Iris                  4          0          100            50         3
 7   Australian            6          8          460           230         2
 8   Auto                 15         11          136            69         7
 9   Breast               10          0          466           233         2
10   Crx                   6          9          490           200         2
11   Diabetes              8          0          512           256         2
12   Horse-colic           7         15          300            68         2
     Average             7.92       8.25        597.08        288.25       2.5

Table 2: Comparison of classification error rates of decision tree algorithm ID3 with/without discretization algorithms
ID3 classification error rate (%)

No.  No discretization  Univariate       Multivariate discretization,   MIDCA
                        discretization   Average(Relief, GainRatio)
 1     35.64 ± 4.79      26.73 ± 4.43      27.70 ± 4.47                 17.82 ± 3.83
 2     21.15 ± 5.72      19.23 ± 5.52      21.15 ± 5.72                  9.62 ± 4.13
 3      0.95 ± 0.30       1.23 ± 0.34       0.19 ± 0.13                  0.00 ± 0.00
 4     23.33 ± 4.48      15.56 ± 3.84      18.34 ± 4.10                 17.78 ± 4.05
 5      3.79 ± 0.59       3.70 ± 0.58       0.00 ± 0.00                  0.00 ± 0.00
 6      6.00 ± 3.39       4.00 ± 2.80       6.00 ± 3.39                  4.00 ± 2.80
 7     18.70 ± 2.58      20.87 ± 2.69      17.39 ± 2.51                 20.44 ± 2.67
 8     26.09 ± 5.33      26.09 ± 5.33      26.81 ± 5.33                 23.88 ± 5.25
 9      5.58 ± 1.51       3.86 ± 1.27       4.29 ± 1.33                  4.29 ± 1.33
10     27.50 ± 3.17      21.50 ± 2.91      17.00 ± 2.66                 20.50 ± 2.86
11     error             25.78 ± 2.74      33.98 ± 2.97                 32.03 ± 2.92
12     26.47 ± 5.39      32.35 ± 5.72       4.41 ± 2.51                  8.82 ± 3.47
Avg    17.75 ± 3.39      16.74 ± 3.18      14.77 ± 2.93                 13.27 ± 2.78

The experimental results summarized in tables 2 and 3 reveal that MIDCA improves the classification accuracy on average. In table 2, although MIDCA increases the error rate on three datasets compared with ID3 with univariate and multivariate discretization respectively, it decreases the error rate on all but one dataset compared with ID3 without discretization. In table 3, it improves the performance on all but one dataset compared with C4.5 with and without univariate discretization, and on all but two datasets compared with C4.5 with multivariate discretization. For the remaining datasets, MIDCA provides a significant increase in classification accuracy, especially on the two datasets Hypothyroid and Sick-euthyroid, which approach zero error rates for both learning algorithms. As observed from table 2, MIDCA slightly decreases the performance on three datasets compared with ID3 with univariate discretization; similarly, MIDCA increases the error rate on one dataset for C4.5 with univariate discretization in table 3. We found that all of these downgraded datasets contain only continuous attributes. This worsens the classification performance because, whenever an interdependent attribute is itself continuous-valued, the MIDCA algorithm must carry out a univariate discretization of it first, prior to the multivariate discretization. This extra step increases the uncertainty in the attribute being discretized, and hence increases the error rate accordingly.

Table 3: Comparison of classification error rates of decision tree algorithm C4.5 with/without discretization algorithms
C4.5 classification error rate (%)

No.  No discretization  Univariate       Multivariate discretization,   MIDCA
                        discretization   Average(Relief, GainRatio)
 1        24.8              21.8              28.7                       17.8
 2        17.3              13.5              17.3                        9.6
 3         0.9               0.3               0.2                        0.0
 4        17.8              16.7              16.7                       21.1
 5         3.1               0.0               0.0                        0.0
 6         8.0              18.0               6.0                        6.0
 7        13.9              13.5              19.6                       13.5
 8        29.0              31.9              28.2                       23.9
 9         5.6               4.3               3.9                        3.9
10        17.5              15.0              16.5                       15.0
11        30.9              30.5              30.5                       30.1
12        19.1              14.7               2.9                        5.9
Avg      15.66             15.02             14.21                      12.33

Moreover, considering the average error rates in tables 2 and 3, MIDCA decreases the classification error rate from 17.75%, 16.74% and 14.77% down to 13.27% for the ID3 algorithm, and from 15.66%, 15.02% and 14.21% down to 12.33% for the C4.5 algorithm, although several individual datasets obtained higher error rates than under the multivariate discretizations with relief and gain ratio. The relative improvements over the two algorithms without discretization, with univariate discretization and with multivariate discretization reach approximately 25.2% and 21.3%, 20.7% and 17.9%, and 10.2% and 13.2% respectively. The smallest improvement is over 10%, which verifies that MIDCA, by incorporating Relief and Gain Ratio, outperforms the multivariate discretizations based on relief or gain ratio alone, and of course performs better than the univariate discretization method and no discretization at all.

4 Conclusions and future research

In this paper, we have proposed a novel method for multivariate interdependent discretization that focuses on discovering the best interdependent attribute for each continuous-valued attribute. The method can be used as a preprocessing tool for any learning algorithm, and it ensures at least a binary discretization, which minimizes information loss and maximizes classification accuracy. The empirical evaluation results presented in this paper give significant evidence that our method MIDCA can appropriately discretize a continuous-valued attribute with respect to a specific interdependent attribute and thus improve the final classification performance. However, the method has a limitation in handling datasets that contain only continuous-valued attributes. In that case, the complexity and cost of discovering an interdependent attribute increase and the performance of MIDCA decreases, since a perfect matching of an interdependent attribute to a continuous-valued attribute is the key success factor in multivariate interdependent discretization. Our experiments were performed with the ID3 and C4.5 learning algorithms; for further comparisons, we plan to perform experiments with other learning algorithms, such as naive Bayes, Langley et al [19], or clustering methods. Further research should also investigate the complexity and efficiency of the algorithm, and may extend the discretization to more than two attributes. Finally, the above limitation should be resolved so that continuous-valued interdependent attributes can be handled efficiently and effectively. These research directions may finally guide us to a more valuable algorithm.

References
[1] Han J. & Kamber M., Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, 2000.
[2] Dougherty J., Kohavi R. & Sahami M., Supervised and unsupervised discretization of continuous features. Proceedings of the Twelfth International Conference on Machine Learning, Morgan Kaufmann Publishers, San Francisco, CA, 1995.
[3] Fayyad U. M. & Irani K. B., Multi-interval discretization of continuous-valued attributes for classification learning. Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, pp. 1022-1027, 1993.
[4] Liu H. & Setiono R., Feature selection via discretization. Technical report, Dept. of Information Systems and Computer Science, Singapore, 1997.
[5] Liu H., Hussain F., Tan C. & Dash M., Discretization: an enabling technique. Data Mining and Knowledge Discovery, pp. 393-423, 2002.
[6] Bay S. D., Multivariate discretization of continuous variables for set mining. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 315-319, 2000.
[7] Bay S. D., Multivariate discretization for set mining. Knowledge and Information Systems, 3(4), pp. 491-512, 2001.
[8] 1998.
[9] Mitchell T. M., Machine Learning, McGraw-Hill Companies, Inc., 1997.
[10] 2000.
[11] Kira K. & Rendell L., A practical approach to feature selection. Proceedings of the International Conference on Machine Learning, Aberdeen, Morgan Kaufmann, pp. 249-256, 1992.
[12] Kira K. & Rendell L., The feature selection problem: traditional methods and a new algorithm. Proceedings of AAAI-92, San Jose, CA, 1992.
[13] Kononenko I., On biases in estimating multi-valued attributes. Proceedings of IJCAI-95, pp. 1034-1040, 1995.
[14] Breiman L., Technical note: some properties of splitting criteria. Machine Learning, 24, pp. 41-47, 1996.
[15] Blake C. L. & Merz C. J., UCI Repository of Machine Learning Databases. Irvine, CA: University of California, Department of Information and Computer Science, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html
[16] Quinlan J. R., Induction of decision trees. Machine Learning, 1(1), pp. 81-106, 1986.
[17] Quinlan J. R., C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA, 1993.
[18] Quinlan J. R., Improved use of continuous attributes in C4.5. Journal of Artificial Intelligence Research, 4, pp. 77-90, 1996.
[19] Langley P., Iba W. & Thompson K., An analysis of Bayesian classifiers. Proceedings of the Tenth National Conference on Artificial Intelligence, AAAI Press and MIT Press, pp. 223-228, 1992.
