
International Journal of Advances in Science and Technology, Vol. 3, No. 3, 2011

Incomplete Data with Multi-Valued Information Systems

M. Shokry¹ and Assem Elshenawy²


¹ Physics and Engineering Mathematics Department, Faculty of Engineering, Tanta University, Tanta, Egypt, mohnayle@yahoo.com
² Physics and Engineering Mathematics Department, Faculty of Engineering, Tanta University, Tanta, Egypt, dod_assem@yahoo.com

Abstract
Incomplete decision tables are described by characteristic relations in the same way that complete decision tables are described by indiscernibility relations. A multi-valued information system (MVIS) is a generalization of a single-valued information system (SVIS): in an MVIS, attribute functions are allowed to map elements to sets of attribute values. Data sets, described by decision tables, are incomplete when for some cases (examples, objects) the corresponding attribute values are missing, e.g., are lost or represent "do not care" conditions. We discuss a technique for working with incomplete decision tables using blocks of attribute-value pairs. Characteristic relations are conveniently determined by blocks of attribute-value pairs. Three different kinds of lower and upper approximations for incomplete decision tables may be easily computed from characteristic relations. All three definitions reduce to the same definition, based on the indiscernibility relation, when the decision table is complete.

Keywords: Rough set, incomplete data, multi-valued systems.

1. Introduction


The approximation space [7, 8, 10, 13] is a pair (U, R), where U is a non-empty finite set of objects (states, patients, digits, cars, students, etc.) called a universe, and R is an equivalence relation over U which makes a partition of U, i.e. a family C = {X1, X2, ..., Xn} such that Xi ⊆ U, Xi ≠ ∅, Xi ∩ Xj = ∅ for i ≠ j, i, j = 1, 2, ..., n, and ∪i Xi = U. The family C is called the knowledge base of (U, R). The universe U of objects together with the relation R plays an important role in converting data into knowledge, with R serving as the tool of a mathematical model for dealing with members and subsets of U [9]. We will assume that data sets are presented as decision tables. In such a table, columns are labeled by variables and rows by case names. In the simplest case such case names, also called cases, are numbers. Variables are categorized as either independent, also called attributes, or dependent, called decisions [4, 6]. Usually only one decision is given in a decision table.


The set of all cases that correspond to the same decision value is called a concept (or a class). In most articles on rough set theory it is assumed that for all variables and all cases the corresponding values are specified. For such tables the indiscernibility relation, one of the most fundamental ideas of rough set theory, describes cases that cannot be distinguished from one another [4, 5, 20]. For a set of elements U, called the universe, and a non-empty finite set of attributes A, every attribute is a function a: U → Va, where the set Va is called the range of the attribute a. For an element x ∈ U and an attribute a ∈ A, a(x) = v indicates that the pair (x, a) has the attribute value v ∈ Va. The pair (U, A) is called an information system and is often referred to as a single-valued information system (SVIS) [14]. In a SVIS, attributes map elements to a single attribute value a(x) = v in the range Va. The MVIS (multi-valued information system) is a generalization of the idea of a SVIS [14]. In a MVIS, attribute functions are allowed to map elements to sets of attribute values; more formally, we allow multi-valued attributes a such that a: U → P(Va). A subset V ⊆ Va may also be referred to as an attribute value, a SVIS being a particular case of a MVIS [9, 14].

However, in many real-life applications, data sets have missing attribute values or, in other words, the corresponding decision tables are incompletely specified. For simplicity, incompletely specified decision tables will be called incomplete decision tables [2, 4]. Generally there are two reasons for decision tables to be incomplete. The first reason is that an attribute value, for a specific case, is lost: originally the attribute value was known, but due to a variety of reasons it is currently not recorded (perhaps it was recorded and then erased). The second possibility is that an attribute value was not relevant: the case was decided to be a member of some concept, i.e., was classified or diagnosed, in spite of the fact that some attribute values were not known. For example, it was feasible to diagnose a patient regardless of the fact that some test results were not taken (here attributes correspond to tests, so attribute values are test results). Since such missing attribute values do not matter for the final outcome, we call them "do not care" conditions. The main objective of this paper is to study incomplete decision tables, i.e., incomplete data sets or, in different words, data sets with missing attribute values. We will assume that in the same decision table some attribute values may be lost and some may be "do not care" conditions [3, 4, 15]. Incomplete decision tables in which all missing attribute values are lost were, from the viewpoint of rough set theory, studied for the first time in [8]; incomplete decision tables in which all missing attribute values are "do not care" conditions were, from the viewpoint of rough set theory, studied for the first time in [2].
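To make the distinction concrete, here is a minimal sketch of how the two kinds of systems might be encoded (a hypothetical representation, not taken from the paper):

```python
# SVIS: each (case, attribute) pair carries exactly one value from Va.
svis = {(1, "Temperature"): "high",
        (2, "Temperature"): "very high"}

# MVIS: each (case, attribute) pair carries a subset of Va.
mvis = {("Ali", "Sports"): {"Tennis", "Football"},
        ("Basem", "Languages"): {"English"}}
```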

2. Preliminaries
2.1 Blocks of Attribute-value Pairs and Characteristic Relations
The input data sets are presented in the form of a decision table; an example is shown in Table 1. Rows of the decision table represent cases, while columns are labeled by variables. The set of all cases will be denoted by U. In Table 1, U = {1, 2, ..., 8}. Independent variables are called attributes and a dependent variable is called a decision, denoted by d. The set of all attributes will be denoted by A. In Table 1, A = {Temperature, Headache, Nausea}. Any decision table defines a function ρ that maps the direct product of U and A into the set of all values. For example, in Table 1, ρ(1, Temperature) = high. The function ρ describing Table 1 is completely specified (total). A decision table with a completely specified function ρ will be called completely specified or, for the sake of simplicity, complete.

Rough set theory [11, 12] is based on the idea of an indiscernibility relation, defined for complete decision tables. Let B be a nonempty subset of the set A of all attributes. The indiscernibility relation IND(B) is a relation on U defined for x, y ∈ U as follows: (x, y) ∈ IND(B) if and only if ρ(x, a) = ρ(y, a) for all a ∈ B. Thus the elementary sets of IND(A) are {1}, {2}, {3}, {4, 5}, {6, 8} and {7}. For complete decision tables, if t = (a, v) is an attribute-value pair, then the block of t, denoted [t], is the set of all cases from U that have value v for attribute a. Then we have:

[(Temperature, high)] = {1, 3, 4, 5},
[(Temperature, very high)] = {2},
[(Temperature, normal)] = {6, 7, 8},
[(Headache, yes)] = {1, 2, 4, 5, 6, 8},
[(Headache, no)] = {3, 7},
[(Nausea, no)] = {1, 3, 6},
[(Nausea, yes)] = {2, 4, 5, 7}.

The indiscernibility relation IND(B) is known when all elementary blocks of IND(B) are known. Such elementary blocks of B are intersections of the corresponding attribute-value pair blocks, i.e., for any case x ∈ U,

[x]B = ∩{[(a, v)] | a ∈ B, ρ(x, a) = v}.

Then we have:
[1]A = [(Temperature, high)] ∩ [(Headache, yes)] ∩ [(Nausea, no)] = {1},
[2]A = [(Temperature, very high)] ∩ [(Headache, yes)] ∩ [(Nausea, yes)] = {2},
[3]A = [(Temperature, high)] ∩ [(Headache, no)] ∩ [(Nausea, no)] = {3},
[4]A = [5]A = [(Temperature, high)] ∩ [(Headache, yes)] ∩ [(Nausea, yes)] = {4, 5},
[6]A = [8]A = [(Temperature, normal)] ∩ [(Headache, yes)] ∩ [(Nausea, no)] = {6, 8},
[7]A = [(Temperature, normal)] ∩ [(Headache, no)] ∩ [(Nausea, yes)] = {7}.

Table 1. A complete decision table (Temperature, Headache and Nausea are attributes; Flu is the decision)

Case   Temperature   Headache   Nausea   Flu
1      high          yes        no       yes
2      very high     yes        yes      yes
3      high          no         no       no
4      high          yes        yes      yes
5      high          yes        yes      no
6      normal        yes        no       no
7      normal        no         yes      no
8      normal        yes        no       yes

Incomplete decision tables are described by characteristic relations instead of indiscernibility relations, and elementary blocks are replaced by characteristic sets. An example of an incomplete table is presented in Table 2.

Table 2. An incomplete decision table (? denotes a lost value, * a "do not care" condition)

Case   Temperature   Headache   Nausea   Flu
1      high          ?          no       yes
2      very high     yes        yes      yes
3      ?             no         no       no
4      high          yes        yes      yes
5      high          ?          yes      no
6      normal        yes        no       no
7      normal        no         yes      no
8      *             yes        *        yes
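As a minimal sketch of these computations for the complete Table 1 (the representation and helper names below are ours, not the paper's):

```python
U = [1, 2, 3, 4, 5, 6, 7, 8]
rho = {  # rho(x, a) for Table 1
    1: {"Temperature": "high",      "Headache": "yes", "Nausea": "no"},
    2: {"Temperature": "very high", "Headache": "yes", "Nausea": "yes"},
    3: {"Temperature": "high",      "Headache": "no",  "Nausea": "no"},
    4: {"Temperature": "high",      "Headache": "yes", "Nausea": "yes"},
    5: {"Temperature": "high",      "Headache": "yes", "Nausea": "yes"},
    6: {"Temperature": "normal",    "Headache": "yes", "Nausea": "no"},
    7: {"Temperature": "normal",    "Headache": "no",  "Nausea": "yes"},
    8: {"Temperature": "normal",    "Headache": "yes", "Nausea": "no"},
}
A = ["Temperature", "Headache", "Nausea"]

def block(a, v):
    """[(a, v)]: all cases whose value of attribute a is v."""
    return {x for x in U if rho[x][a] == v}

def elementary_set(x, B):
    """[x]_B: intersect the blocks [(a, rho(x, a))] over a in B."""
    result = set(U)
    for a in B:
        result &= block(a, rho[x][a])
    return result

print(block("Temperature", "high"))              # {1, 3, 4, 5}
print({frozenset(elementary_set(x, A)) for x in U})
# elementary sets of IND(A): {1}, {2}, {3}, {4, 5}, {6, 8}, {7}
```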

Definition 2.1.1
The characteristic set KB(x) is the intersection of blocks of attribute-value pairs (a, v) for all attributes a from B for which ρ(x, a) is specified and ρ(x, a) = v. (Here the blocks are computed with the modified definition for incomplete tables given in Example 3.1 below: a case with a lost value ? is excluded from all blocks of that attribute, while a case with a "do not care" value * is included in all of them.) Then we have, from Table 2:
KA(1) = {1, 4, 5, 8} ∩ {1, 3, 6, 8} = {1, 8},
KA(2) = {2, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {2, 8},
KA(3) = {3, 7} ∩ {1, 3, 6, 8} = {3},
KA(4) = {1, 4, 5, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {4, 8}, and so on.
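A sketch of the same machinery for the incomplete Table 2, applying the stated treatment of ? (excluded from every block) and * (included in every block); the representation and names are again ours:

```python
U = [1, 2, 3, 4, 5, 6, 7, 8]
rho = {  # Table 2; '?' = lost value, '*' = do-not-care condition
    1: {"Temperature": "high",      "Headache": "?",   "Nausea": "no"},
    2: {"Temperature": "very high", "Headache": "yes", "Nausea": "yes"},
    3: {"Temperature": "?",         "Headache": "no",  "Nausea": "no"},
    4: {"Temperature": "high",      "Headache": "yes", "Nausea": "yes"},
    5: {"Temperature": "high",      "Headache": "?",   "Nausea": "yes"},
    6: {"Temperature": "normal",    "Headache": "yes", "Nausea": "no"},
    7: {"Temperature": "normal",    "Headache": "no",  "Nausea": "yes"},
    8: {"Temperature": "*",         "Headache": "yes", "Nausea": "*"},
}
A = ["Temperature", "Headache", "Nausea"]

def block(a, v):
    """[(a, v)]: cases with value v, plus every '*' case; '?' never joins."""
    return {x for x in U if rho[x][a] == v or rho[x][a] == "*"}

def K(x, B):
    """K_B(x): intersect the blocks of the attributes specified for x."""
    result = set(U)
    for a in B:
        if rho[x][a] not in ("?", "*"):
            result &= block(a, rho[x][a])
    return result

print(K(1, A))                               # {1, 8}
R = {(x, y) for x in U for y in K(x, A)}     # characteristic relation R(A)
```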


Note: The characteristic set KB(x) may be interpreted as the smallest set of cases that are indistinguishable from x using all attributes from B, under a given interpretation of missing attribute values. Thus, KA(x) is the set of all cases that cannot be distinguished from x using all attributes.

Definition 2.1.2
The characteristic relation R(B) is a relation on U defined for x, y ∈ U as follows: (x, y) ∈ R(B) if and only if y ∈ KB(x).

The characteristic relation R(B) is known if we know the characteristic sets KB(x) for all x ∈ U. For the sets KB(x) generated from Table 2,
R(A) = {(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8), (6, 6), (6, 8), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)}.

2.2 Lower and Upper Approximations:


For completely specified decision tables, lower and upper approximations are defined on the basis of the indiscernibility relation; for incompletely specified decision tables, lower and upper approximations may be defined in a few different ways. Let X be a concept, let B be a subset of the set A of all attributes, and let R(B) be the characteristic relation of the incomplete decision table with characteristic sets KB(x), where x ∈ U. In the first definition, the B-lower approximation of X is defined as {x ∈ U | KB(x) ⊆ X}, and the B-upper approximation of X is {x ∈ U | KB(x) ∩ X ≠ ∅}.
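A sketch of this first (singleton) definition, reusing U and K from the previous sketch; X = {1, 2, 4, 8} is the concept [(Flu, yes)] of Table 2:

```python
def lower_singleton(X, B):
    """First definition: cases whose K_B(x) lies entirely inside X."""
    return {x for x in U if K(x, B) <= X}

def upper_singleton(X, B):
    """First definition: cases whose K_B(x) meets X."""
    return {x for x in U if K(x, B) & X}

X = {1, 2, 4, 8}                  # the concept [(Flu, yes)]
print(lower_singleton(X, A))      # {1, 2, 4}
print(upper_singleton(X, A))      # {1, 2, 4, 5, 6, 8}
```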

The second method of defining lower and upper approximations for complete decision tables uses another idea: lower and upper approximations are unions of elementary sets, which are subsets of U. Therefore we may define lower and upper approximations for incomplete decision tables by analogy with this second method, using characteristic sets instead of elementary sets. The subset B-lower approximation of X is defined as ∪{KB(x) | x ∈ U, KB(x) ⊆ X}, and the subset B-upper approximation of X is ∪{KB(x) | x ∈ U, KB(x) ∩ X ≠ ∅}.

The third concept modifies the subset definition of lower and upper approximation by replacing the universe U in the subset definition by the concept X. The concept B-lower approximation of X is defined as ∪{KB(x) | x ∈ X, KB(x) ⊆ X}, and the concept B-upper approximation of X is ∪{KB(x) | x ∈ X, KB(x) ∩ X ≠ ∅}.

Note: So far we have used two approaches to missing attribute values: in the first one a missing attribute value is interpreted as lost, in the second as a "do not care" condition. There are many other possible approaches to missing attribute values; for some discussion of this topic see [7].
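The second and third definitions take unions of characteristic sets, the third one restricting x to the concept X; a sketch continuing the previous one:

```python
def lower_subset(X, B):
    return set().union(*(K(x, B) for x in U if K(x, B) <= X))

def upper_subset(X, B):
    return set().union(*(K(x, B) for x in U if K(x, B) & X))

def lower_concept(X, B):  # same, but x ranges over X only
    return set().union(*(K(x, B) for x in X if K(x, B) <= X))

def upper_concept(X, B):
    return set().union(*(K(x, B) for x in X if K(x, B) & X))
```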

3. New result
Definition 3.1 Incomplete MVIS
Missing attribute values are denoted either by ? or by *: lost values are denoted by ? and "do not care" conditions are denoted by *. In a MVIS (U, A), each attribute may assign several values of the same attribute to a case at a time.



Definition 3.2
The characteristic set KB(x) in a MVIS is the intersection of the blocks [(a, v)] of attribute-value pairs (a, v) with v = ρ(x, a), taken over all attributes a from B for which ρ(x, a) is specified; here a case y belongs to the block [(a, v)] if v ⊆ ρ(y, a).

Definition 3.3
The indiscernibility relation IND(B) is known when all elementary blocks of IND(B) are known; such elementary blocks of B are intersections of the corresponding attribute-value pair blocks, i.e., for any case x ∈ U we can define [x]B = ∩{[(a, v)] | a ∈ B, v = ρ(x, a), ρ(x, a) specified}.

Example 3.1
Let U = {a, b, c, d} be the students Ali, Basem, Charles and Denis, and let A = {A1, A2, A3} be the attributes languages, sports and skills, as shown in Table 3, where A1: Languages = {English, French, Arabic} = {E, F, A}, A2: Sports = {Tennis, Football, Basketball} = {T, F, B} and A3: Skills = {Swimming, Running, Fishing} = {S, R, F}.

Table 3. Incomplete MVIS

Student       A1       A2       A3
a (Ali)       *        {T, F}   {S}
b (Basem)     {E}      *        ?
c (Charles)   {E, A}   {T}      {R, F}
d (Denis)     ?        {T, B}   {S, F}

To show how to deal with incomplete data given as ? or "do not care", note first how * is treated: ρ(x, a) = * means that x may be assumed to take any set of values of a, i.e. {E, F}, {E}, {E, A}, {A} and so on. With ? (a lost value), on the other hand, we do not know and cannot suggest a value. Consequently, for incomplete decision tables the definition of a block of an attribute-value pair must be modified. If for an attribute a there exists a case x such that ρ(x, a) = ?, i.e., the corresponding value is lost, then the case x is not included in the block [(a, v)] for any value v of attribute a. If for an attribute a there exists a case x whose corresponding value is a "do not care" condition, i.e., ρ(x, a) = *, then the case x is included in the blocks [(a, v)] for all values v of attribute a. In our example we then find:

[(A1, {E})] = {a, b, c}, [(A1, {A})] = {a, c}, [(A2, {T})] = {a, b, c, d} = U, [(A3, {F})] = {c, d}, [(A2, {T, B})] = {b, d}, and so on.

For B = A we find that
[a]A = {a, b} ∩ {a, d} = {a},
[b]A = {a, b, c},
[c]A = {a, c} ∩ U ∩ {c, d} = {c},
[d]A = {b, d} ∩ {d} = {d}.

We can also compute the characteristic sets KB(x) for this incomplete data table according to the new definition, with B = A:
KA(a) = {a, b} ∩ {a, d} = {a},
KA(b) = {a, b, c},
KA(c) = {a, c} ∩ U ∩ {c, d} = {c},
KA(d) = {b, d} ∩ {d} = {d}.

The characteristic relation R(A) can then be constructed as follows:
R(A) = {(a, a), (b, a), (b, b), (b, c), (c, c), (d, d)}.
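A sketch reproducing these computations under the reading of Definition 3.2 reconstructed above (block membership by v ⊆ ρ(y, a)); the encoding and helper names are ours:

```python
U = ["a", "b", "c", "d"]  # Ali, Basem, Charles, Denis
rho = {  # Table 3
    "a": {"A1": "*",        "A2": {"T", "F"}, "A3": {"S"}},
    "b": {"A1": {"E"},      "A2": "*",        "A3": "?"},
    "c": {"A1": {"E", "A"}, "A2": {"T"},      "A3": {"R", "F"}},
    "d": {"A1": "?",        "A2": {"T", "B"}, "A3": {"S", "F"}},
}
A = ["A1", "A2", "A3"]

def block(a, v):
    """[(a, v)] under the subset condition: v <= rho(y, a);
    '*' joins every block, '?' joins none."""
    out = set()
    for y in U:
        val = rho[y][a]
        if val == "*" or (val != "?" and v <= val):
            out.add(y)
    return out

def K(x, B):
    """K_B(x): intersect the blocks of the specified values of x."""
    result = set(U)
    for a in B:
        if rho[x][a] not in ("?", "*"):
            result &= block(a, rho[x][a])
    return result

print([sorted(K(x, A)) for x in U])
# [['a'], ['a', 'b', 'c'], ['c'], ['d']]
```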


Note: The characteristic relation R(B) may be defined independently of the characteristic sets in the following way: (x, y) ∈ R(B) if and only if ρ(x, a) ⊆ ρ(y, a) or ρ(x, a) = * or ρ(y, a) = *, for all a ∈ B such that ρ(x, a) ≠ ?.

To calculate lower and upper approximations of any set we can use the three definitions given above. Let X = {a, b, d}. By the first definition we have the lower approximation {x ∈ U | KB(x) ⊆ X} = {a, d} and the upper approximation {x ∈ U | KB(x) ∩ X ≠ ∅} = {a, b, d}; the accuracy, defined as the cardinality of the lower approximation divided by the cardinality of the upper approximation, is 2/3 = 66.66%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is {a, b, c}, so the accuracy is 3/3 = 100%.

By the second (subset) definition, the lower approximation ∪{KB(x) | x ∈ U, KB(x) ⊆ X} for X = {a, b, d} is {a} ∪ {d} = {a, d}, and the upper approximation ∪{KB(x) | x ∈ U, KB(x) ∩ X ≠ ∅} is {a} ∪ {a, b, c} ∪ {d} = U; the accuracy is 2/4 = 50%. For X = {a, b, c} the lower approximation is {a} ∪ {a, b, c} ∪ {c} = {a, b, c} and the upper approximation is {a, b, c}, so the accuracy is 3/3 = 100%.

By the third (concept) definition, the lower approximation ∪{KB(x) | x ∈ X, KB(x) ⊆ X} for X = {a, b, d} is {a} ∪ {d} = {a, d}, and the upper approximation ∪{KB(x) | x ∈ X, KB(x) ∩ X ≠ ∅} is {a} ∪ {a, b, c} ∪ {d} = U; the accuracy is 2/4 = 50%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is {a, b, c}, so the accuracy is 3/3 = 100%.

Note: All three definitions coincide in the case of complete data tables. The upper approximation of the third (concept) definition is a subset of that of the second (subset) definition, and here all three definitions of the lower approximation give the same result.

4. Second point of view:


We now introduce another way to calculate the characteristic set, which of course affects the definition of the characteristic relation and also changes the lower and upper approximations of incomplete data tables.

Definition 4.1
The characteristic set KB(x) in a MVIS is the intersection of the blocks [(a, v)] of attribute-value pairs (a, v) with v = ρ(x, a), taken over all attributes a from B for which ρ(x, a) is specified; here a case y belongs to the block [(a, v)] if v ∩ ρ(y, a) ≠ ∅.

Definition 4.2
The indiscernibility relation IND(B) is known when all elementary blocks of IND(B) are known; such elementary blocks of B are intersections of the corresponding attribute-value pair blocks, i.e., for any case x ∈ U we can define [x]B = ∩{[(a, v)] | a ∈ B, v = ρ(x, a), ρ(x, a) specified}, with block membership as in Definition 4.1.

Example 4.1
Continuing Example 3.1, according to the new definition of KB(x) we find:
KA(a) = U ∩ {a, d} = {a, d},
KA(b) = {a, b, c},
KA(c) = {a, b, c} ∩ U ∩ {c, d} = {c},
KA(d) = U ∩ {a, c, d} = {a, c, d}.
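For this second point of view, a sketch needs only a different membership test in the block helper of the previous sketch, replacing the subset check v <= val with a non-empty overlap v & val (our names, under the reconstructed reading of Definition 4.1):

```python
def block_overlap(a, v):
    """[(a, v)] under the overlap condition: v and rho(y, a) intersect."""
    out = set()
    for y in U:
        val = rho[y][a]
        if val == "*" or (val != "?" and v & val):
            out.add(y)
    return out
```

Re-running K with block_overlap in place of block reproduces KA(a) = {a, d}, KA(b) = {a, b, c}, KA(c) = {c} and KA(d) = {a, c, d}.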


The characteristic relation R(A) can then be constructed as follows:
R(A) = {(a, a), (a, d), (b, a), (b, b), (b, c), (c, c), (d, a), (d, c), (d, d)}.
Of course the characteristic sets and the characteristic relation differ from those of the first point of view. We can again calculate the lower and upper approximations of any set by the three definitions, according to the new definition of the characteristic set KB(x), as follows.

First definition: the B-lower approximation of X is {x ∈ U | KB(x) ⊆ X} and the B-upper approximation of X is {x ∈ U | KB(x) ∩ X ≠ ∅}. For X = {a, b, d} the lower approximation is {a} and the upper approximation is {a, b, d}; the accuracy is 1/3 = 33.33%. For X = {a, b, c} the lower approximation is {b, c} and the upper approximation is U, so the accuracy is 2/4 = 50%.

Second definition: the subset B-lower approximation of X is ∪{KB(x) | x ∈ U, KB(x) ⊆ X} and the subset B-upper approximation of X is ∪{KB(x) | x ∈ U, KB(x) ∩ X ≠ ∅}. For X = {a, b, d} the lower approximation is {a, d} and the upper approximation is U; the accuracy is 2/4 = 50%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is U, so the accuracy is 3/4 = 75%.

Third definition: the concept B-lower approximation of X is ∪{KB(x) | x ∈ X, KB(x) ⊆ X} and the concept B-upper approximation of X is ∪{KB(x) | x ∈ X, KB(x) ∩ X ≠ ∅}. For X = {a, b, d} the lower approximation is {a, d} and the upper approximation is U; the accuracy is 2/4 = 50%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is U, so the accuracy is 3/4 = 75%.

Note: In this case there is a difference between the three definitions of the lower approximation, unlike with the first way of defining the characteristic set KB(x) (Definition 3.2).

5. Third point of view:


We now have another way to calculate the characteristic set, which again affects the definition of the characteristic relation and changes the lower and upper approximations of incomplete data tables; we can consider this view as the general one.

Definition 5.1
The characteristic set KB(x) in a MVIS is the union of the blocks [(a, v)] of attribute-value pairs (a, v) with v = ρ(x, a), taken over all attributes a from B for which ρ(x, a) is specified; here a case y belongs to the block [(a, v)] if v ⊆ ρ(y, a), as in Definition 3.2.
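Under Definition 5.1 the subset-condition block helper of Example 3.1's sketch is kept, but the blocks are united rather than intersected; a minimal sketch:

```python
def K_union(x, B):
    """Union-based characteristic set: unite the blocks of the
    specified values of x (subset condition, as in Definition 3.2)."""
    sets = [block(a, rho[x][a]) for a in B if rho[x][a] not in ("?", "*")]
    return set().union(*sets) if sets else set(U)
```

Applied to Table 3 this reproduces the characteristic sets computed in Example 5.1 below.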

Example 5.1
Continuing Example 3.1, according to the new definition of KB(x) we find:
KA(a) = {a, b} ∪ {a, d} = {a, b, d},
KA(b) = {a, b, c},
KA(c) = {a, c} ∪ U ∪ {c, d} = U,
KA(d) = {b, d} ∪ {d} = {b, d}.
The characteristic relation R(A) can then be constructed as follows:
R(A) = {(a, a), (a, b), (a, d), (b, a), (b, b), (b, c), (c, a), (c, b), (c, c), (c, d), (d, b), (d, d)}.

Note: Every characteristic set obtained in the intersection case is a subset of the corresponding set in the union case, and the intersection-based characteristic relation is included in the union-based one.

We can again calculate the lower and upper approximations of any set by the three definitions, according to the new definition of the characteristic set KB(x), as follows. First definition: for X = {a, b, d} the lower approximation {x ∈ U | KB(x) ⊆ X} is {a, d} and the upper approximation {x ∈ U | KB(x) ∩ X ≠ ∅} is U; the accuracy is 2/4 = 50%. For X = {a, b, c} the lower approximation is {b} and the upper approximation is U, so the accuracy is 1/4 = 25%.


Second definition: for X = {a, b, d} the subset lower approximation ∪{KB(x) | x ∈ U, KB(x) ⊆ X} is {a, b, d} and the subset upper approximation ∪{KB(x) | x ∈ U, KB(x) ∩ X ≠ ∅} is U; the accuracy is 3/4 = 75%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is U, so the accuracy is 3/4 = 75%.

Third definition: for X = {a, b, d} the concept lower approximation ∪{KB(x) | x ∈ X, KB(x) ⊆ X} is {a, b, d} and the concept upper approximation ∪{KB(x) | x ∈ X, KB(x) ∩ X ≠ ∅} is U; the accuracy is 3/4 = 75%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is U, so the accuracy is 3/4 = 75%.

6. Fourth point of view:


Definition 6.1
The characteristic set KB(x) in a MVIS is the union of the blocks [(a, v)] of attribute-value pairs (a, v) with v = ρ(x, a), taken over all attributes a from B for which ρ(x, a) is specified; here a case y belongs to the block [(a, v)] if v ∩ ρ(y, a) ≠ ∅, as in Definition 4.1.

Example 6.1
Continuing Example 3.1, according to the new definition of KB(x) we find:
KA(a) = U ∪ {a, d} = U,
KA(b) = {a, b, c},
KA(c) = {a, b, c} ∪ U ∪ {c, d} = U,
KA(d) = U ∪ {a, c, d} = U.
The characteristic relation R(A) can then be constructed as follows:
R(A) = {(a, a), (a, b), (a, c), (a, d), (b, a), (b, b), (b, c), (c, a), (c, b), (c, c), (c, d), (d, a), (d, b), (d, c), (d, d)}.

Note: Again, every characteristic set obtained in the intersection case is a subset of the corresponding set in the union case, and the intersection-based characteristic relation is included in the union-based one.

We can again calculate the lower and upper approximations of any set by the three definitions, according to the new definition of the characteristic set KB(x), as follows.

First definition: for X = {a, b, d} the lower approximation {x ∈ U | KB(x) ⊆ X} is ∅ and the upper approximation {x ∈ U | KB(x) ∩ X ≠ ∅} is U; the accuracy is 0/4 = 0%. For X = {a, b, c} the lower approximation is {b} and the upper approximation is U, so the accuracy is 1/4 = 25%.

Second definition: for X = {a, b, d} the subset lower approximation ∪{KB(x) | x ∈ U, KB(x) ⊆ X} is ∅ and the subset upper approximation ∪{KB(x) | x ∈ U, KB(x) ∩ X ≠ ∅} is U; the accuracy is 0/4 = 0%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is U, so the accuracy is 3/4 = 75%.

Third definition: for X = {a, b, d} the concept lower approximation ∪{KB(x) | x ∈ X, KB(x) ⊆ X} is ∅ and the concept upper approximation ∪{KB(x) | x ∈ X, KB(x) ∩ X ≠ ∅} is U; the accuracy is 0/4 = 0%. For X = {a, b, c} the lower approximation is {a, b, c} and the upper approximation is U, so the accuracy is 3/4 = 75%.
7. General view
Definition 7.1
The miss-approximation space is a space (U, G), where G = {G1, G2, ...} is the family of sets induced by the characteristic sets.

Definition 7.2
The space (U, G) is a sub-approximation space of (U, G*) if for every Gi ∈ G there is a Gj ∈ G* such that Gi ⊆ Gj, where i = 1, 2, 3, 4, ... and j = 1, 2, 3, 4, ....



Proposition 7.1
The miss-approximation space formed by KA(x), in both cases of intersection of the characteristic-set blocks (the first and second points of view), is a sub-approximation space of the miss-approximation space formed by KA*(x) in the corresponding cases of union of the blocks (the third and fourth points of view).

Proposition 7.2
The miss-approximation space formed by KA(x), in both the intersection and the union cases, with block membership defined by v ⊆ ρ(x, a), is a sub-approximation space of the miss-approximation space formed by KA*(x) with block membership defined by v ∩ ρ(x, a) ≠ ∅.
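Under the reconstructed definitions, both propositions can be checked numerically on Table 3 with the characteristic() helper above; this is a sanity check of the stated inclusions, not a proof:

```python
for x in U:
    for ov in (False, True):   # Proposition 7.1: intersection within union
        assert characteristic(x, A, overlap=ov, union=False) \
               <= characteristic(x, A, overlap=ov, union=True)
    for un in (False, True):   # Proposition 7.2: subset within overlap condition
        assert characteristic(x, A, overlap=False, union=un) \
               <= characteristic(x, A, overlap=True, union=un)
```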

8. Conclusion
Multi-valued information systems are a very useful way to express many real-life examples, and it is clear that the concept of attribute-value pair blocks is an extremely useful tool. That concept may be used for computing characteristic relations for incomplete multi-valued decision tables; characteristic sets are used for determining lower and upper approximations, and the same idea of attribute-value pair blocks may also be used for rule induction. Two concepts of how to deal with incomplete multi-valued information systems (intersection and union of attribute-value pair blocks, each under two block-membership conditions) were introduced, and their results differ in terms of the characteristic sets and relations and, of course, the lower and upper approximations.

References
[1] Grzymala-Busse, J.W. and Hu, M.: A comparison of several approaches to missing attribute values in data mining. Proceedings of the Second International Conference on Rough Sets and Current Trends in Computing, RSCTC 2000, Banff, Canada, October 16-19, 340-347, 2000.
[2] Grzymala-Busse, J.W.: On the unknown attribute values in learning from examples. Proceedings of ISMIS-91, 6th International Symposium on Methodologies for Intelligent Systems, Charlotte, North Carolina, October 16-19, 1991. Lecture Notes in Artificial Intelligence, vol. 542, Springer-Verlag, Berlin, Heidelberg, New York, 368-377, 1991.
[3] Grzymala-Busse, J.W.: Rough set strategies to data with missing attribute values. Workshop Notes, Foundations and New Directions of Data Mining, the 3rd International Conference on Data Mining, Melbourne, FL, USA, November 19-22, 56-63, 2003.
[4] Peters, J.F. et al. (Eds.): Transactions on Rough Sets I, LNCS 3100, 78-95, 2004.
[5] Kryszkiewicz, M.: Rough set approach to incomplete information systems. Proceedings of the Second Annual Joint Conference on Information Sciences, Wrightsville Beach, NC, September 28-October 1, 194-197, 1995.
[6] Kryszkiewicz, M.: Rules in incomplete information systems. Information Sciences 113, 271-292, 1999.
[7] Lashin, E.F. and Medhat, T.: Topological reduction of information systems. Chaos, Solitons and Fractals 25, 277-286, 2005.
[8] Lashin, E.F., Kozae, A.M., Abo Khadra, A.A. and Medhat, T.: Rough set theory for topological spaces. International Journal of Approximate Reasoning 40/1-2, 35-43, 2005.


[9] Medhat, T.: Supra Topological Approach for Decision Making via Granular Computing. PhD thesis, Tanta University, Faculty of Engineering, Egypt, 2007.
[10] Medhat, T.: Topological applications on information analysis by rough sets. Master thesis, Tanta University, Faculty of Engineering, Egypt, 2004.
[11] Pawlak, Z.: Decision rules, Bayes' rule and rough sets. In: New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, N. Zhong, A. Skowron, and S. Ohsuga (Eds.), Springer, 1-9, 1999.
[12] Pawlak, Z.: Rough sets. International Journal of Information and Computer Science 11(5), 341-356, 1982.
[13] Pawlak, Z.: Rough Sets. Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht, Boston, London, 1991.
[14] Oukbir, K.: Indiscernibility and Vagueness in Spatial Information Systems. Doctoral dissertation, Royal Institute of Technology, Department of Numerical Analysis and Computer Science, Stockholm, 2003.
[15] Shen, Q. and Jiang, Y.-L.: Attribute reduction of multi-valued information system based on conditional information entropy. Granular Computing, IEEE International Conference, Hangzhou, 26-28 August, 562-565, 2008.

[16] Stefanowski, J. and Tsoukias, A.: Incomplete information tables and rough classification. Computational Intelligence 17, 545-566, 2001.
[17] Stefanowski, J. and Tsoukias, A.: On the extension of rough sets under incomplete information. Proceedings of the 7th International Workshop on New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, RSFDGrC 1999, Ube, Yamaguchi, Japan, November 8-10, 73-81, 1999.
[18] Stefanowski, J.: Algorithms of Decision Rule Induction in Data Mining. Poznan University of Technology Press, Poznan, Poland, 2001.
[19] Yao, Y.Y.: On generalizing rough set theory. Proceedings of the 9th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing, RSFDGrC 2003, Chongqing, China, October 19-22, 44-51, 2003.
[20] Yao, Y.Y.: Two views of the theory of rough sets in finite universes. International Journal of Approximate Reasoning 15, 291-317, 1996.
[21] Yao, Y.Y. and Zhong, N.: Granular computing using information tables. In: Data Mining, Rough Sets and Granular Computing, Lin, T.Y., Yao, Y.Y., Zadeh, L.A. (Eds.), Physica-Verlag, Heidelberg, 102-124, 2002.

