Rough set
In computer science, a rough set, first described by Polish computer scientist Zdzisław I. Pawlak, is a formal approximation of a crisp set (i.e., conventional set) in terms of a pair of sets which give the ''lower'' and the ''upper'' approximation of the original set. In the standard version of rough set theory (Pawlak 1991), the lower- and upper-approximation sets are crisp sets, but in other variations, the approximating sets may be fuzzy sets.


Definitions

The following section contains an overview of the basic framework of rough set theory, as originally proposed by Zdzisław I. Pawlak, along with some of the key definitions. More formal properties and boundaries of rough sets can be found in Pawlak (1991) and cited references. The initial and basic theory of rough sets is sometimes referred to as ''"Pawlak rough sets"'' or ''"classical rough sets"'', as a means of distinguishing it from more recent extensions and generalizations.


Information system framework

Let I = (\mathbb{U}, \mathbb{A}) be an information system (attribute–value system), where \mathbb{U} is a non-empty, finite set of objects (the universe) and \mathbb{A} is a non-empty, finite set of attributes such that a : \mathbb{U} \rightarrow V_a for every a \in \mathbb{A}. V_a is the set of values that attribute a may take. The information table assigns a value a(x) from V_a to each attribute a and object x in the universe \mathbb{U}. With any P \subseteq \mathbb{A} there is an associated equivalence relation \mathrm{IND}(P):

: \mathrm{IND}(P) = \left\{ (x, y) \in \mathbb{U}^2 \mid \forall a \in P, \ a(x) = a(y) \right\}

The relation \mathrm{IND}(P) is called a P''-indiscernibility relation''. The partition of \mathbb{U} is a family of all equivalence classes of \mathrm{IND}(P) and is denoted by \mathbb{U}/\mathrm{IND}(P) (or \mathbb{U}/P). If (x, y) \in \mathrm{IND}(P), then x and y are ''indiscernible'' (or indistinguishable) by attributes from P. The equivalence classes of the P-indiscernibility relation are denoted [x]_P.
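As an illustration, the indiscernibility partition can be computed directly from an attribute–value table. The following is a minimal Python sketch; it assumes the table is encoded as a dictionary mapping each object name to a tuple of attribute values, with attributes addressed by index (the encoding and names are illustrative, not part of the theory):

```python
from collections import defaultdict

def partition(table, attrs):
    """Return the equivalence classes of IND(attrs): two objects fall in
    the same class iff they agree on every attribute index in attrs."""
    classes = defaultdict(set)
    for obj, values in table.items():
        # the projection of the object onto attrs decides its class
        classes[tuple(values[a] for a in attrs)].add(obj)
    return list(classes.values())
```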


Example: equivalence-class structure

For example, consider the following information table:

:  Object    P_1    P_2    P_3    P_4    P_5
:  O_1        1      2      0      1      1
:  O_2        1      2      0      1      1
:  O_3        2      0      0      1      0
:  O_4        0      0      1      2      1
:  O_5        2      1      0      2      1
:  O_6        0      0      1      2      2
:  O_7        2      0      0      1      0
:  O_8        0      1      2      2      1
:  O_9        2      1      0      2      2
:  O_10       2      0      0      1      0

When the full set of attributes P = \{P_1, P_2, P_3, P_4, P_5\} is considered, we see that we have the following seven equivalence classes:

: \begin{cases} \{O_1, O_2\} \\ \{O_3, O_7, O_{10}\} \\ \{O_4\} \\ \{O_5\} \\ \{O_6\} \\ \{O_8\} \\ \{O_9\} \end{cases}

Thus, the two objects within the first equivalence class, \{O_1, O_2\}, cannot be distinguished from each other based on the available attributes, and the three objects within the second equivalence class, \{O_3, O_7, O_{10}\}, cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects. It is apparent that different attribute subset selections will in general lead to different indiscernibility classes. For example, if attribute P = \{P_1\} alone is selected, we obtain the following, much coarser, equivalence-class structure:

: \begin{cases} \{O_1, O_2\} \\ \{O_3, O_5, O_7, O_9, O_{10}\} \\ \{O_4, O_6, O_8\} \end{cases}
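Both partitions can be reproduced mechanically under the same assumptions as the sketch above; here the table encoding (a dict of value tuples, indices 0–4 standing for P_1–P_5) is again an illustrative choice:

```python
from collections import defaultdict

# The example information table, encoded as object -> (P1, P2, P3, P4, P5).
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def partition(table, attrs):
    classes = defaultdict(set)
    for obj, values in table.items():
        classes[tuple(values[a] for a in attrs)].add(obj)
    return list(classes.values())

print(partition(TABLE, [0, 1, 2, 3, 4]))  # the seven classes listed above
print(partition(TABLE, [0]))              # the three coarser classes under P1 alone
```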


Definition of a ''rough set''

Let X \subseteq \mathbb{U} be a target set that we wish to represent using attribute subset P; that is, we are told that an arbitrary set of objects X comprises a single class, and we wish to express this class (i.e., this subset) using the equivalence classes induced by attribute subset P. In general, X cannot be expressed exactly, because the set may include and exclude objects which are indistinguishable on the basis of attributes P.

For example, consider the target set X = \{O_1, O_2, O_3, O_4\}, and let attribute subset P = \{P_1, P_2, P_3, P_4, P_5\}, the full available set of features. The set X cannot be expressed exactly, because in [x]_P, objects \{O_3, O_7, O_{10}\} are indiscernible. Thus, there is no way to represent any set X which ''includes'' O_3 but ''excludes'' objects O_7 and O_{10}.

However, the target set X can be ''approximated'' using only the information contained within P by constructing the P-lower and P-upper approximations of X:

: \underline{P}X = \{x \mid [x]_P \subseteq X\}
: \overline{P}X = \{x \mid [x]_P \cap X \neq \emptyset\}
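A short sketch of both approximations, with the equivalence classes of the running example written out literally rather than recomputed (the set-of-sets encoding is an assumption carried over from the earlier sketches):

```python
def lower_approx(classes, X):
    # union of the equivalence classes wholly contained in X
    return {o for c in classes if c <= X for o in c}

def upper_approx(classes, X):
    # union of the equivalence classes that intersect X
    return {o for c in classes if c & X for o in c}

classes = [{"O1", "O2"}, {"O3", "O7", "O10"}, {"O4"},
           {"O5"}, {"O6"}, {"O8"}, {"O9"}]
X = {"O1", "O2", "O3", "O4"}

key = lambda o: int(o[1:])  # numeric object order for readable output
print(sorted(lower_approx(classes, X), key=key))  # ['O1', 'O2', 'O4']
print(sorted(upper_approx(classes, X), key=key))  # ['O1', 'O2', 'O3', 'O4', 'O7', 'O10']
```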


Lower approximation and positive region

The P''-lower approximation'', or ''positive region'', is the union of all equivalence classes in [x]_P which are contained by (i.e., are subsets of) the target set – in the example, \underline{P}X = \{O_1, O_2\} \cup \{O_4\}, the union of the two equivalence classes in [x]_P which are contained in the target set. The lower approximation is the complete set of objects in \mathbb{U}/P that can be ''positively'' (i.e., unambiguously) classified as belonging to target set X.


Upper approximation and negative region

The P''-upper approximation'' is the union of all equivalence classes in [x]_P which have non-empty intersection with the target set – in the example, \overline{P}X = \{O_1, O_2\} \cup \{O_4\} \cup \{O_3, O_7, O_{10}\}, the union of the three equivalence classes in [x]_P that have non-empty intersection with the target set. The upper approximation is the complete set of objects in \mathbb{U}/P that ''cannot'' be positively (i.e., unambiguously) classified as belonging to the ''complement'' (\overline{X}) of the target set X. In other words, the upper approximation is the complete set of objects that are ''possibly'' members of the target set X. The set \mathbb{U} - \overline{P}X therefore represents the ''negative region'', containing the set of objects that can be definitely ruled out as members of the target set.


Boundary region

The ''boundary region'', given by set difference \overline{P}X - \underline{P}X, consists of those objects that can neither be ruled in nor ruled out as members of the target set X.

In summary, the lower approximation of a target set is a ''conservative'' approximation consisting of only those objects which can positively be identified as members of the set. (These objects have no indiscernible "clones" which are excluded by the target set.) The upper approximation is a ''liberal'' approximation which includes all objects that might be members of the target set. (Some objects in the upper approximation may not be members of the target set.) From the perspective of \mathbb{U}/P, the lower approximation contains objects that are members of the target set with certainty (probability = 1), while the upper approximation contains objects that are members of the target set with non-zero probability (probability > 0).
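Continuing the example, the boundary and negative regions are plain set differences over the approximations computed above; this is an illustrative sketch with the sets written out literally:

```python
universe = {f"O{i}" for i in range(1, 11)}
lower = {"O1", "O2", "O4"}
upper = {"O1", "O2", "O3", "O4", "O7", "O10"}

boundary = upper - lower     # neither ruled in nor ruled out: {'O3', 'O7', 'O10'}
negative = universe - upper  # definitely not members of X: {'O5', 'O6', 'O8', 'O9'}
```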


The rough set

The tuple \langle \underline{P}X, \overline{P}X \rangle composed of the lower and upper approximation is called a ''rough set''; thus, a rough set is composed of two crisp sets, one representing a ''lower boundary'' of the target set X, and the other representing an ''upper boundary'' of the target set X.

The ''accuracy'' of the rough-set representation of the set X can be given (Pawlak 1991) by the following:

: \alpha_P(X) = \frac{\left| \underline{P}X \right|}{\left| \overline{P}X \right|}

That is, the accuracy of the rough set representation of X, \alpha_P(X), 0 \leq \alpha_P(X) \leq 1, is the ratio of the number of objects which can ''positively'' be placed in X to the number of objects that can ''possibly'' be placed in X – this provides a measure of how closely the rough set is approximating the target set. Clearly, when the upper and lower approximations are equal (i.e., boundary region empty), then \alpha_P(X) = 1, and the approximation is perfect; at the other extreme, whenever the lower approximation is empty, the accuracy is zero (regardless of the size of the upper approximation).
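The accuracy measure for the running example, again with the assumed literal sets:

```python
lower = {"O1", "O2", "O4"}
upper = {"O1", "O2", "O3", "O4", "O7", "O10"}

alpha = len(lower) / len(upper)  # |lower approximation| / |upper approximation|
print(alpha)  # 0.5: three certain members out of six possible members
```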


Objective analysis

Rough set theory is one of many methods that can be employed to analyse uncertain (including vague) systems, although less common than more traditional methods of probability, statistics, entropy and Dempster–Shafer theory. However, a key difference, and a unique strength, of using classical rough set theory is that it provides an objective form of analysis (Pawlak et al. 1995). Unlike other methods, such as those given above, classical rough set analysis requires no additional information, external parameters, models, functions, grades or subjective interpretations to determine set membership – instead, it only uses the information presented within the given data (Düntsch and Gediga 1995). More recent adaptations of rough set theory, such as dominance-based, decision-theoretic and fuzzy rough sets, have introduced more subjectivity to the analysis.


Definability

In general, the upper and lower approximations are not equal; in such cases, we say that target set X is ''undefinable'' or ''roughly definable'' on attribute set P. When the upper and lower approximations are equal (i.e., the boundary is empty), \overline{P}X = \underline{P}X, then the target set X is ''definable'' on attribute set P. We can distinguish the following special cases of undefinability:

* Set X is ''internally undefinable'' if \underline{P}X = \emptyset and \overline{P}X \neq \mathbb{U}. This means that on attribute set P, there are ''no'' objects which we can be certain belong to target set X, but there ''are'' objects which we can definitively exclude from set X.
* Set X is ''externally undefinable'' if \underline{P}X \neq \emptyset and \overline{P}X = \mathbb{U}. This means that on attribute set P, there ''are'' objects which we can be certain belong to target set X, but there are ''no'' objects which we can definitively exclude from set X.
* Set X is ''totally undefinable'' if \underline{P}X = \emptyset and \overline{P}X = \mathbb{U}. This means that on attribute set P, there are ''no'' objects which we can be certain belong to target set X, and there are ''no'' objects which we can definitively exclude from set X. Thus, on attribute set P, we cannot decide whether any object is, or is not, a member of X.
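These cases can be told apart mechanically from the two approximations. A small sketch (the function and argument names are illustrative assumptions):

```python
def definability(lower, upper, universe):
    """Classify a target set by comparing its P-approximations."""
    if lower == upper:
        return "definable"
    if not lower and upper != universe:
        return "internally undefinable"
    if lower and upper == universe:
        return "externally undefinable"
    if not lower and upper == universe:
        return "totally undefinable"
    return "roughly definable"  # non-empty lower, upper a proper subset

print(definability({"O1", "O2", "O4"},
                   {"O1", "O2", "O3", "O4", "O7", "O10"},
                   {f"O{i}" for i in range(1, 11)}))  # roughly definable
```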


Reduct and core

An interesting question is whether there are attributes in the information system (attribute–value table) which are more important to the knowledge represented in the equivalence class structure than other attributes. Often, we wonder whether there is a subset of attributes which can, by itself, fully characterize the knowledge in the database; such an attribute set is called a ''reduct''.

Formally, a reduct is a subset of attributes \mathrm{RED} \subseteq P such that

* [x]_{\mathrm{RED}} = [x]_P, that is, the equivalence classes induced by the reduced attribute set \mathrm{RED} are the same as the equivalence class structure induced by the full attribute set P.
* the attribute set \mathrm{RED} is ''minimal'', in the sense that [x]_{(\mathrm{RED} - \{a\})} \neq [x]_P for any attribute a \in \mathrm{RED}; in other words, no attribute can be removed from set \mathrm{RED} without changing the equivalence classes [x]_P.

A reduct can be thought of as a ''sufficient'' set of features – sufficient, that is, to represent the category structure. In the example table above, attribute set \{P_3, P_4, P_5\} is a reduct – the information system projected on just these attributes possesses the same equivalence class structure as that expressed by the full attribute set:

: \begin{cases} \{O_1, O_2\} \\ \{O_3, O_7, O_{10}\} \\ \{O_4\} \\ \{O_5\} \\ \{O_6\} \\ \{O_8\} \\ \{O_9\} \end{cases}

Attribute set \{P_3, P_4, P_5\} is a reduct because eliminating any of these attributes causes a collapse of the equivalence-class structure, with the result that [x]_{\mathrm{RED}} \neq [x]_P.

The reduct of an information system is ''not unique'': there may be many subsets of attributes which preserve the equivalence-class structure (i.e., the knowledge) expressed in the information system. In the example information system above, another reduct is \{P_1, P_2, P_5\}, producing the same equivalence-class structure as [x]_P.

The set of attributes which is common to all reducts is called the ''core'': the core is the set of attributes which is possessed by ''every'' reduct, and therefore consists of attributes which cannot be removed from the information system without causing collapse of the equivalence-class structure. The core may be thought of as the set of ''necessary'' attributes – necessary, that is, for the category structure to be represented. In the example, the only such attribute is \{P_5\}; any one of the other attributes can be removed singly without damaging the equivalence-class structure, and hence these are all ''dispensable''. However, removing \{P_5\} by itself ''does'' change the equivalence-class structure, and thus \{P_5\} is the ''indispensable'' attribute of this information system, and hence the core.

It is possible for the core to be empty, which means that there is no indispensable attribute: any single attribute in such an information system can be deleted without altering the equivalence-class structure. In such cases, there is no ''essential'' or necessary attribute which is required for the class structure to be represented.
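Because the example universe is tiny, reducts and the core can be found by exhaustive search over attribute subsets. A sketch under the same illustrative table encoding as earlier (indices 0–4 for P_1–P_5); practical systems use far less brute-force algorithms:

```python
from collections import defaultdict
from itertools import combinations

TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def partition(attrs):
    classes = defaultdict(set)
    for obj, vals in TABLE.items():
        classes[tuple(vals[a] for a in attrs)].add(obj)
    return frozenset(frozenset(c) for c in classes.values())

full = partition(range(5))
reducts = []
for size in range(1, 6):  # smallest subsets first, so minimality is automatic
    for attrs in combinations(range(5), size):
        # a reduct preserves the full partition and contains no smaller reduct
        if partition(attrs) == full and not any(set(r) < set(attrs) for r in reducts):
            reducts.append(attrs)

print(reducts)  # includes (2, 3, 4) = {P3,P4,P5} and (0, 1, 4) = {P1,P2,P5}
print(set.intersection(*map(set, reducts)))  # {4}: the core is {P5}
```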


Attribute dependency

One of the most important aspects of database analysis or data acquisition is the discovery of attribute dependencies; that is, we wish to discover which variables are strongly related to which other variables. Generally, it is these strong relationships that will warrant further investigation, and that will ultimately be of use in predictive modeling.

In rough set theory, the notion of dependency is defined very simply. Let us take two (disjoint) sets of attributes, set P and set Q, and inquire what degree of dependency obtains between them. Each attribute set induces an (indiscernibility) equivalence class structure, the equivalence classes induced by P given by [x]_P, and the equivalence classes induced by Q given by [x]_Q. Let [x]_Q = \{Q_1, Q_2, Q_3, \ldots, Q_N\}, where Q_i is a given equivalence class from the equivalence-class structure induced by attribute set Q. Then, the ''dependency'' of attribute set Q on attribute set P, \gamma_P(Q), is given by

: \gamma_P(Q) = \frac{\sum_{i=1}^N \left| \underline{P}Q_i \right|}{\left| \mathbb{U} \right|} \leq 1

That is, for each equivalence class Q_i in [x]_Q, we add up the size of its lower approximation by the attributes in P, i.e., \left| \underline{P}Q_i \right|. This size is the number of objects which, on attribute set P, can be positively identified as belonging to target set Q_i. Added across all equivalence classes in [x]_Q, the numerator above represents the total number of objects which – based on attribute set P – can be positively categorized according to the classification induced by attributes Q. The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects. The dependency \gamma_P(Q) "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in P to determine the values of attributes in Q".

Another, intuitive, way to consider dependency is to take the partition induced by Q as the target class C, and consider P as the attribute set we wish to use in order to "re-construct" the target class C. If P can completely reconstruct C, then Q depends totally upon P; if P results in a poor and perhaps random reconstruction of C, then Q does not depend upon P at all. Thus, this measure of dependency expresses the degree of ''functional'' (i.e., deterministic) dependency of attribute set Q on attribute set P; it is ''not'' symmetric. The relationship of this notion of attribute dependency to more traditional information-theoretic (i.e., entropic) notions of attribute dependence has been discussed in a number of sources (e.g., Pawlak, Wong, & Ziarko 1988; Yao & Yao 2002; Wong, Ziarko, & Ye 1986; Quafafou & Boussouf 2000).
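A sketch of the dependency computation for the example table, taking P = \{P_1, P_2, P_3\} and Q = \{P_4\} (the same split used in the rule-extraction section below); the table encoding is the illustrative one used in the earlier sketches:

```python
from collections import defaultdict

TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def partition(attrs):
    classes = defaultdict(set)
    for obj, vals in TABLE.items():
        classes[tuple(vals[a] for a in attrs)].add(obj)
    return list(classes.values())

def gamma(p_attrs, q_attrs):
    p_classes = partition(p_attrs)
    # count objects whose P-class sits wholly inside some Q-class
    positive = sum(len(c) for q in partition(q_attrs)
                   for c in p_classes if c <= q)
    return positive / len(TABLE)

print(gamma([0, 1, 2], [3]))  # 1.0: P1, P2, P3 fully determine P4 here
```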


Rule extraction

The category representations discussed above are all ''extensional'' in nature; that is, a category or complex class is simply the sum of all its members. To represent a category is, then, just to be able to list or identify all the objects belonging to that category. However, extensional category representations have very limited practical use, because they provide no insight for deciding whether novel (never-before-seen) objects are members of the category. What is generally desired is an ''intensional'' description of the category, a representation of the category based on a set of ''rules'' that describe the scope of the category. The choice of such rules is not unique, and therein lies the issue of inductive bias. See Version space and Model selection for more about this issue. There are a few rule-extraction methods. We will start from a rule-extraction procedure based on Ziarko & Shan (1995).


Decision matrices

Let us say that we wish to find the minimal set of consistent rules (logical implications) that characterize our sample system. For a set of ''condition'' attributes \mathcal{P} = \{P_1, P_2, \ldots, P_n\} and a decision attribute Q, Q \notin \mathcal{P}, these rules should have the form P_i^a P_j^b \dots P_k^c \to Q^d, or, spelled out,

: (P_i = a) \land (P_j = b) \land \dots \land (P_k = c) \to (Q = d)

where \{a, b, \ldots, c, d\} are legitimate values from the domains of their respective attributes. This is a form typical of association rules, and the number of items in \mathbb{U} which match the condition/antecedent is called the ''support'' for the rule. The method for extracting such rules given in Ziarko & Shan (1995) is to form a ''decision matrix'' corresponding to each individual value d of decision attribute Q. Informally, the decision matrix for value d of decision attribute Q lists all attribute–value pairs that ''differ'' between objects having Q = d and Q \ne d.

This is best explained by example (which also avoids a lot of notation). Consider the table above, and let P_4 be the decision variable (i.e., the variable on the right side of the implications) and let \{P_1, P_2, P_3\} be the condition variables (on the left side of the implication). We note that the decision variable P_4 takes on two different values, namely \{1, 2\}. We treat each case separately.

First, we look at the case P_4 = 1, and we divide up \mathbb{U} into objects that have P_4 = 1 and those that have P_4 \ne 1. (Note that objects with P_4 \ne 1 in this case are simply the objects that have P_4 = 2, but in general, P_4 \ne 1 would include all objects having any value for P_4 ''other than'' 1, and there may be several such classes of objects (for example, those having P_4 = 2, 3, 4, etc.).) In this case, the objects having P_4 = 1 are \{O_1, O_2, O_3, O_7, O_{10}\} while the objects which have P_4 \ne 1 are \{O_4, O_5, O_6, O_8, O_9\}. The decision matrix for P_4 = 1 lists all the differences between the objects having P_4 = 1 and those having P_4 \ne 1; that is, the decision matrix lists all the differences between \{O_1, O_2, O_3, O_7, O_{10}\} and \{O_4, O_5, O_6, O_8, O_9\}. We put the "positive" objects (P_4 = 1) as the rows, and the "negative" objects (P_4 \ne 1) as the columns:

:           O_4                  O_5            O_6                  O_8                      O_9
:  O_1      P_1^1,P_2^2,P_3^0    P_1^1,P_2^2    P_1^1,P_2^2,P_3^0    P_1^1,P_2^2,P_3^0        P_1^1,P_2^2
:  O_2      P_1^1,P_2^2,P_3^0    P_1^1,P_2^2    P_1^1,P_2^2,P_3^0    P_1^1,P_2^2,P_3^0        P_1^1,P_2^2
:  O_3      P_1^2,P_3^0          P_2^0          P_1^2,P_3^0          P_1^2,P_2^0,P_3^0        P_2^0
:  O_7      P_1^2,P_3^0          P_2^0          P_1^2,P_3^0          P_1^2,P_2^0,P_3^0        P_2^0
:  O_10     P_1^2,P_3^0          P_2^0          P_1^2,P_3^0          P_1^2,P_2^0,P_3^0        P_2^0

To read this decision matrix, look, for example, at the intersection of row O_3 and column O_6, showing P_1^2, P_3^0 in the cell. This means that ''with regard to'' decision value P_4 = 1, object O_3 differs from object O_6 on attributes P_1 and P_3, and the particular values on these attributes for the positive object O_3 are P_1 = 2 and P_3 = 0. This tells us that the correct classification of O_3 as belonging to decision class P_4 = 1 rests on attributes P_1 and P_3; although one or the other might be dispensable, we know that ''at least one'' of these attributes is ''in''dispensable.

Next, from each decision matrix we form a set of Boolean expressions, one expression for each row of the matrix. The items within each cell are aggregated disjunctively, and the individual cells are then aggregated conjunctively. Thus, for the above table we have the following five Boolean expressions:

: \begin{cases}
(P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2) \land (P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2) \\
(P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2) \land (P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2) \\
(P_1^2 \lor P_3^0) \land (P_2^0) \land (P_1^2 \lor P_3^0) \land (P_1^2 \lor P_2^0 \lor P_3^0) \land (P_2^0) \\
(P_1^2 \lor P_3^0) \land (P_2^0) \land (P_1^2 \lor P_3^0) \land (P_1^2 \lor P_2^0 \lor P_3^0) \land (P_2^0) \\
(P_1^2 \lor P_3^0) \land (P_2^0) \land (P_1^2 \lor P_3^0) \land (P_1^2 \lor P_2^0 \lor P_3^0) \land (P_2^0)
\end{cases}

Each statement here is essentially a highly specific (probably ''too'' specific) rule governing the membership in class P_4 = 1 of the corresponding object. For example, the last statement, corresponding to object O_{10}, states that all the following must be satisfied:

# Either P_1 must have value 2, or P_3 must have value 0, or both.
# P_2 must have value 0.
# Either P_1 must have value 2, or P_3 must have value 0, or both.
# Either P_1 must have value 2, or P_2 must have value 0, or P_3 must have value 0, or any combination thereof.
# P_2 must have value 0.

It is clear that there is a large amount of redundancy here, and the next step is to simplify using traditional Boolean algebra. The statement (P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2) \land (P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2 \lor P_3^0) \land (P_1^1 \lor P_2^2) corresponding to objects \{O_1, O_2\} simplifies to P_1^1 \lor P_2^2, which yields the implication

: (P_1 = 1) \lor (P_2 = 2) \to (P_4 = 1)

Likewise, the statement (P_1^2 \lor P_3^0) \land (P_2^0) \land (P_1^2 \lor P_3^0) \land (P_1^2 \lor P_2^0 \lor P_3^0) \land (P_2^0) corresponding to objects \{O_3, O_7, O_{10}\} simplifies to P_1^2 P_2^0 \lor P_3^0 P_2^0. This gives us the implication

: (P_1 = 2 \land P_2 = 0) \lor (P_3 = 0 \land P_2 = 0) \to (P_4 = 1)

The above implications can also be written as the following rule set:

: \begin{cases} (P_1 = 1) \to (P_4 = 1) \\ (P_2 = 2) \to (P_4 = 1) \\ (P_1 = 2) \land (P_2 = 0) \to (P_4 = 1) \\ (P_3 = 0) \land (P_2 = 0) \to (P_4 = 1) \end{cases}

It can be noted that each of the first two rules has a support of 2 (the antecedent matches the two objects O_1 and O_2), while each of the last two rules has a support of 3 (matching O_3, O_7 and O_{10}). To finish writing the rule set for this knowledge system, the same procedure as above (starting with writing a new decision matrix) should be followed for the case of P_4 = 2, thus yielding a new set of implications for that decision value (i.e., a set of implications with P_4 = 2 as the consequent). In general, the procedure will be repeated for each possible value of the decision variable.
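The decision-matrix construction itself is mechanical. Below is a minimal Python sketch under the same illustrative table encoding used earlier (decision attribute P_4 at index 3, condition attributes P_1–P_3 at indices 0–2); the names are assumptions, not Ziarko & Shan's notation:

```python
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

DECISION, CONDITIONS = 3, (0, 1, 2)
positive = [o for o, v in TABLE.items() if v[DECISION] == 1]
negative = [o for o, v in TABLE.items() if v[DECISION] != 1]

matrix = {}
for p in positive:
    for n in negative:
        # record each condition attribute (with the positive object's
        # value) on which the two objects differ
        matrix[(p, n)] = {(a, TABLE[p][a]) for a in CONDITIONS
                          if TABLE[p][a] != TABLE[n][a]}

print(matrix[("O3", "O6")])  # {(0, 2), (2, 0)}: the cell P_1^2, P_3^0 read above
```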


LERS rule induction system

The data system LERS (Learning from Examples based on Rough Sets) (Grzymala-Busse 1997) may induce rules from inconsistent data, i.e., data with conflicting objects. Two objects are conflicting when they are characterized by the same values of all attributes, but they belong to different concepts (classes). LERS uses rough set theory to compute lower and upper approximations for concepts involved in conflicts with other concepts.

Rules induced from the lower approximation of the concept ''certainly'' describe the concept, hence such rules are called ''certain''. On the other hand, rules induced from the upper approximation of the concept describe the concept ''possibly'', so these rules are called ''possible''. For rule induction LERS uses three algorithms: LEM1, LEM2, and IRIM. The LEM2 algorithm of LERS is frequently used for rule induction and is used not only in LERS but also in other systems, e.g., in RSES (Bazan et al. 2004). LEM2 explores the search space of attribute–value pairs. Its input data set is a lower or upper approximation of a concept, so its input data set is always consistent. In general, LEM2 computes a local covering and then converts it into a rule set. We will quote a few definitions to describe the LEM2 algorithm.

The LEM2 algorithm is based on the idea of an attribute–value pair block: for an attribute–value pair t = (a, v), the block [t] is the set of all objects for which attribute a has value v. Let X be a nonempty lower or upper approximation of a concept represented by a decision-value pair (d, w). Set X ''depends'' on a set T of attribute–value pairs t = (a, v) if and only if

: \emptyset \neq [T] = \bigcap_{t \in T} [t] \subseteq X.

Set T is a ''minimal complex'' of X if and only if X depends on T and no proper subset S of T exists such that X depends on S. Let \mathbb{T} be a nonempty collection of nonempty sets of attribute–value pairs. Then \mathbb{T} is a ''local covering'' of X if and only if the following three conditions are satisfied:

* each member T of \mathbb{T} is a minimal complex of X,
* \bigcup_{T \in \mathbb{T}} [T] = X,
* \mathbb{T} is minimal, i.e., \mathbb{T} has the smallest possible number of members.

For our sample information system, LEM2 will induce the following rules:

: \begin{cases} (P_1, 1) \to (P_4, 1) \\ (P_5, 0) \to (P_4, 1) \\ (P_1, 0) \to (P_4, 2) \\ (P_2, 1) \to (P_4, 2) \end{cases}

Other rule-learning methods can be found, e.g., in Pawlak (1991), Stefanowski (1998) and Bazan et al. (2004).
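The attribute–value pair blocks that LEM2 manipulates are straightforward to compute. The following sketch only illustrates the block notion and checks two of the induced rules against the example table; it is not an implementation of the full LEM2 search, and the table encoding is the illustrative one used earlier:

```python
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def block(attr, value):
    """[t] for t = (attr, value): objects whose attribute has that value."""
    return {o for o, vals in TABLE.items() if vals[attr] == value}

def block_of_set(pairs):
    """[T]: intersection of the blocks of all pairs in T."""
    result = set(TABLE)
    for attr, value in pairs:
        result &= block(attr, value)
    return result

concept = block(3, 1)                     # the concept (P4, 1)
print(block_of_set({(0, 1)}) <= concept)  # True: (P1, 1) -> (P4, 1) is certain
print(block_of_set({(4, 0)}) <= concept)  # True: (P5, 0) -> (P4, 1) is certain
```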


Incomplete data

Rough set theory is useful for rule induction from incomplete data sets. Using this approach we can distinguish between three types of missing attribute values: ''lost values'' (the values that were recorded but currently are unavailable), ''attribute-concept values'' (these missing attribute values may be replaced by any attribute value limited to the same concept), and ''"do not care" conditions'' (the original values were irrelevant). A ''concept'' (''class'') is a set of all objects classified (or diagnosed) the same way.

Two special data sets with missing attribute values were extensively studied: in the first case, all missing attribute values were lost (Stefanowski and Tsoukias, 2001); in the second case, all missing attribute values were "do not care" conditions (Kryszkiewicz, 1999). In the attribute-concept value interpretation of a missing attribute value, the missing attribute value may be replaced by any value of the attribute domain restricted to the concept to which the object with the missing attribute value belongs (Grzymala-Busse and Grzymala-Busse, 2007). For example, if the value of the attribute Temperature is missing for a patient, this patient is sick with flu, and all remaining patients sick with flu have values high or very-high for Temperature, then, using the attribute-concept value interpretation, we will replace the missing attribute value with high and very-high. Additionally, the ''characteristic relation'' (see, e.g., Grzymala-Busse and Grzymala-Busse, 2007) makes it possible to process data sets with all three kinds of missing attribute values at the same time: lost values, "do not care" conditions, and attribute-concept values.


Applications

Rough set methods can be applied as a component of hybrid solutions in machine learning and data mining. They have been found to be particularly useful for rule induction and feature selection (semantics-preserving dimensionality reduction). Rough set-based data analysis methods have been successfully applied in bioinformatics, economics and finance, medicine, multimedia, web and text mining, signal and image processing, software engineering, robotics, and engineering (e.g. power systems and control engineering). Recently the three regions of rough sets have been interpreted as regions of acceptance, rejection and deferment. This leads to a three-way decision-making approach, which can potentially lead to interesting future applications.


History

The idea of rough set was proposed by Pawlak (1981) as a new mathematical tool to deal with vague concepts. Comer, Grzymala-Busse, Iwinski, Nieminen, Novotny, Pawlak, Obtulowicz, and Pomykala have studied algebraic properties of rough sets. Different algebraic semantics have been developed by P. Pagliani, I. Düntsch, M. K. Chakraborty, M. Banerjee and A. Mani; these have been extended to more generalized rough sets by G. Cattaneo and A. Mani, in particular. Rough sets can be used to represent ambiguity, vagueness and general uncertainty.


Extensions and generalizations

Since the development of rough sets, extensions and generalizations have continued to evolve. Initial developments focused on the relationship, both similarities and differences, with fuzzy sets. While some literature contends these concepts are different, other literature considers that rough sets are a generalization of fuzzy sets, as represented through either fuzzy rough sets or rough fuzzy sets. Pawlak (1995) considered that fuzzy and rough sets should be treated as being complementary to each other, addressing different aspects of uncertainty and vagueness. Three notable extensions of classical rough sets are:

* The ''dominance-based rough set approach'' (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński (2001). The main change in this extension of classical rough sets is the substitution of the indiscernibility relation by a ''dominance'' relation, which permits the formalism to deal with inconsistencies typical in consideration of criteria and preference-ordered decision classes.
* ''Decision-theoretic rough sets'' (DTRS) is a probabilistic extension of rough set theory introduced by Yao, Wong, and Lingras (1990). It utilizes a Bayesian decision procedure for minimum-risk decision making. Elements are included in the lower and upper approximations based on whether their conditional probability is above thresholds \alpha and \beta. These upper and lower thresholds determine region inclusion for elements. This model is unique and powerful since the thresholds themselves are calculated from a set of six loss functions representing classification risks.
* ''Game-theoretic rough sets'' (GTRS) is a game theory-based extension of rough sets introduced by Herbert and Yao (2011). It utilizes a game-theoretic environment to optimize certain criteria of rough set-based classification or decision making in order to obtain effective region sizes.


Rough membership

Rough sets can also be defined, as a generalisation, by employing a rough membership function instead of objective approximation. The rough membership function expresses a conditional probability that x belongs to X given the indiscernibility relation \mathrm{R}. This can be interpreted as a degree that x belongs to X in terms of information about x expressed by \mathrm{R}.

Rough membership primarily differs from fuzzy membership in that the membership of the union and intersection of sets cannot, in general, be computed from their constituent memberships, as is the case for fuzzy sets. In this, rough membership is a generalization of fuzzy membership. Furthermore, the rough membership function is grounded more in probability than the conventionally held concepts of the fuzzy membership function.
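A sketch of one common form of the rough membership function, the fraction of x's equivalence class that lies inside X; the sets reuse the running example and the names are illustrative assumptions:

```python
def rough_membership(x, X, classes):
    cls = next(c for c in classes if x in c)  # the equivalence class of x
    return len(cls & X) / len(cls)            # conditional probability estimate

classes = [{"O1", "O2"}, {"O3", "O7", "O10"}, {"O4"},
           {"O5"}, {"O6"}, {"O8"}, {"O9"}]
X = {"O1", "O2", "O3", "O4"}
print(rough_membership("O3", X, classes))  # 0.333...: possibly, not certainly, in X
```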


Other generalizations

Several generalizations of rough sets have been introduced, studied and applied to solving problems. Here are some of these generalizations:

* rough multisets (Grzymala-Busse, 1987)
* fuzzy rough sets, which extend the rough set concept through the use of fuzzy equivalence classes (Nakamura, 1988)
* alpha rough set theory (α-RST), a generalization of rough set theory that allows approximation using fuzzy concepts (Quafafou, 2000)
* intuitionistic fuzzy rough sets (Cornelis, De Cock and Kerre, 2003)
* generalized rough fuzzy sets (Feng, 2010)
* rough intuitionistic fuzzy sets (Thomas and Nair, 2011)
* soft rough fuzzy sets and soft fuzzy rough sets (Meng, Zhang and Qin, 2011)
* composite rough sets (Zhang, Li and Chen, 2014)


See also

* Algebraic semantics
* Alternative set theory
* Analog computer
* Description logic
* Fuzzy logic
* Fuzzy set theory
* Granular computing
* Near sets
* Rough fuzzy hybridization
* Type-2 fuzzy sets and systems
* Decision-theoretic rough sets
* Version space
* Dominance-based rough set approach

References

* Pawlak, Zdzisław (1981). ''Rough Sets''. Research Report PAS 431, Institute of Computer Science, Polish Academy of Sciences.
* Burgin, M. (1990). Theory of Named Sets as a Foundational Basis for Mathematics. In ''Structures in Mathematical Theories: Reports of the San Sebastian International Symposium'', September 25–29, 1990. (http://www.blogg.org/blog-30140-date-2005-10-26.html)
* Burgin, M. (2004). Unified Foundations of Mathematics. Preprint Mathematics LO/0403186, p. 39. (electronic edition: https://arxiv.org/ftp/math/papers/0403/0403186.pdf)
* Burgin, M. (2011). ''Theory of Named Sets''. Mathematics Research Developments, Nova Science Pub Inc.
* Cornelis, C., De Cock, M. and Kerre, E. (2003). Intuitionistic fuzzy rough sets: at the crossroads of imperfect knowledge. ''Expert Systems'', 20:5, pp. 260–270.
* Düntsch, I. and Gediga, G. (1995). Rough Set Dependency Analysis in Evaluation Studies – An Application in the Study of Repeated Heart Attacks. University of Ulster, Informatics Research Reports No. 10.
* Feng, F. (2010). Generalized Rough Fuzzy Sets Based on Soft Sets. ''Soft Computing'', 14:9, pp. 899–911.
* Grzymala-Busse, J. (1987). Learning from examples based on rough multisets. In ''Proceedings of the 2nd International Symposium on Methodologies for Intelligent Systems'', pp. 325–332. Charlotte, NC, USA.
* Meng, D., Zhang, X. and Qin, K. (2011). Soft rough fuzzy sets and soft fuzzy rough sets. ''Computers & Mathematics with Applications'', 62:12, pp. 4635–4645.
* Quafafou, M. (2000). α-RST: a generalization of rough set theory. ''Information Sciences'', 124:1–4, pp. 301–316.
* Quafafou, M. and Boussouf, M. (2000). Generalized rough sets based feature selection. ''Intelligent Data Analysis'', 4:1, pp. 3–17.
* Nakamura, A. (1988). Fuzzy rough sets. ''Notes on Multiple-valued Logic in Japan'', 9:1, pp. 1–8.
* Pawlak, Z., Grzymala-Busse, J., Slowinski, R. and Ziarko, W. (1995). Rough Sets. ''Communications of the ACM'', 38:11, pp. 88–95.
* Thomas, K. and Nair, L. (2011). Rough intuitionistic fuzzy sets in a lattice. ''International Mathematical Forum'', 6:27, pp. 1327–1335.
* Zhang, J., Li, T. and Chen, H. (2014). Composite rough sets for dynamic data mining. ''Information Sciences'', 257, pp. 81–100.
* Zhang, J., Wong, J-S., Pan, Y. and Li, T. (2015). A parallel matrix-based method for computing approximations in incomplete information systems. ''IEEE Transactions on Knowledge and Data Engineering'', 27(2): 326–339.
* Chen, H., Li, T., Luo, C., Horng, S-J. and Wang, G. (2015). A decision-theoretic rough set approach for dynamic data mining. ''IEEE Transactions on Fuzzy Systems'', 23(6): 1958–1970.
* Chen, H., Li, T., Luo, C., Horng, S-J. and Wang, G. (2014). A rough set-based method for updating decision rules on attribute values' coarsening and refining. ''IEEE Transactions on Knowledge and Data Engineering'', 26(12): 2886–2899.
* Chen, H., Li, T., Ruan, D., Lin, J. and Hu, C. (2013). A rough-set based incremental approach for updating approximations under dynamic maintenance environments. ''IEEE Transactions on Knowledge and Data Engineering'', 25(2): 274–284.


Further reading

* Gianpiero Cattaneo and Davide Ciucci, "Heyting Wajsberg Algebras as an Abstract Environment Linking Fuzzy and Rough Sets" in J.J. Alpigini et al. (Eds.): RSCTC 2002, LNAI 2475, pp. 77–84, 2002.


External links


* The International Rough Set Society
* Rough set tutorial
* Rough Set Exploration System
* Rough Sets in Data Warehousing