Results 1–10 of 33
The Rough Set Exploration System
Transactions on Rough Sets III, 2005
Cited by 25 (3 self)
Abstract:
This article gives an overview of the Rough Set Exploration System (RSES). RSES is a freely available software toolset for data exploration, classification support and knowledge discovery. The main functionalities of this software are presented, along with a brief explanation of the algorithmic methods used by RSES. Many of the RSES methods originate from rough set theory, introduced by Zdzisław Pawlak in the early 1980s.
RSES and RSESlib – A Collection of Tools for Rough Set Computations
Proc. of RSCTC’2000, LNAI 2005, 2001
Cited by 25 (3 self)
Abstract:
The Rough Set Exploration System, a set of software tools featuring a library of methods and a graphical user interface, is presented. The methods, features and abilities of the implemented software are discussed and illustrated with a case study in data analysis.
A New Version of Rough Set Exploration System (RSES)
Rough Sets and Current Trends in Computing: Proc. of the 3rd International Conference RSCTC 2002, 2002
Cited by 23 (3 self)
Abstract:
We introduce a new version of the Rough Set Exploration System – a software tool featuring a library of methods and a graphical user interface supporting a variety of rough-set-based computations. The methods, features and abilities of the implemented software are discussed and illustrated with a case study in data analysis.
Various Approaches to Reasoning With Frequency Based Decision Reducts: A Survey
2000
Cited by 17 (1 self)
Abstract:
Different aspects of reduct approximations are discussed. In particular, we show how to use them to develop flexible tools for the analysis of strongly inconsistent and/or noisy data tables. Special attention is paid to the notion of a rough membership decision reduct – a feature subset (almost) preserving the frequency-based information about condition–decision dependencies. Approximate criteria for preserving this kind of information under attribute reduction are considered. These criteria are specified using distances between frequency distributions and information measures related to different ways of interpreting rough-membership-based knowledge.
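The frequency-based information mentioned above can be made concrete with a toy computation (an illustrative sketch, not code from the paper; the table, attribute indices and helper name are invented): for each indiscernibility class induced by an attribute subset, collect the frequency distribution of decisions. A rough membership decision reduct is a subset that (almost) preserves these distributions under attribute reduction.

```python
from collections import Counter, defaultdict

# Toy decision table (invented): condition-attribute values plus a decision.
rows = [
    (("sunny", "hot"), "no"),
    (("sunny", "hot"), "yes"),   # inconsistent with the row above
    (("rainy", "mild"), "yes"),
    (("rainy", "mild"), "yes"),
    (("sunny", "mild"), "no"),
]

def decision_distribution(rows, attrs):
    """Frequency distribution of decisions within each indiscernibility
    class induced by the attribute subset `attrs` (tuple of indices)."""
    classes = defaultdict(Counter)
    for cond, dec in rows:
        classes[tuple(cond[a] for a in attrs)][dec] += 1
    return {key: {d: n / sum(cnt.values()) for d, n in cnt.items()}
            for key, cnt in classes.items()}

full = decision_distribution(rows, attrs=(0, 1))    # all attributes
reduced = decision_distribution(rows, attrs=(0,))   # attribute 1 dropped
# Comparing `full` and `reduced` class by class (e.g. by total variation
# distance) measures how much frequency information the reduction loses.
```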
Approximate Reducts and Association Rules – Correspondence and Complexity Results
1999
Cited by 15 (2 self)
Abstract:
We consider approximate versions of fundamental notions of the theories of rough sets and association rules. We analyze the complexity of searching for α-reducts, understood as subsets discerning "α-almost" all objects from different decision classes, in decision tables. We show how optimal approximate association rules can be derived from data by using heuristics for searching for minimal α-reducts. NP-hardness of the problem of finding optimal approximate association rules is shown as well. This makes the results, which enable the use of rough set algorithms in the search for association rules, extremely important in view of applications.

1 Introduction
The theory of rough sets ([5]) provides efficient tools for dealing with fundamental data mining challenges, like data representation and classification, or knowledge description (see e.g. [2], [3], [4], [8]). Based on the notions of information system and decision table, the language of reducts and rules was proposed for expressing ...
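Under this reading of the definition, α-reducts can be found by exhaustive search on a small table (a minimal sketch with an invented table; real algorithms rely on heuristics precisely because this enumeration is exponential in the number of attributes):

```python
from itertools import combinations

# Toy decision table (invented): condition-attribute values and a decision.
TABLE = [((0, 1, 0), "A"), ((0, 1, 1), "A"), ((1, 0, 1), "B"),
         ((1, 1, 1), "B"), ((0, 0, 0), "A")]
N_ATTRS = 3

def discerned(attrs):
    """Pairs of objects from different decision classes that the
    attribute subset `attrs` tells apart."""
    pairs = set()
    for i in range(len(TABLE)):
        for j in range(i + 1, len(TABLE)):
            (ci, di), (cj, dj) = TABLE[i], TABLE[j]
            if di != dj and any(ci[a] != cj[a] for a in attrs):
                pairs.add((i, j))
    return pairs

def alpha_reducts(alpha):
    """Minimal attribute subsets discerning at least a (1 - alpha)
    fraction of the pairs that the full attribute set discerns."""
    total = len(discerned(range(N_ATTRS)))
    good = [set(c) for r in range(1, N_ATTRS + 1)
            for c in combinations(range(N_ATTRS), r)
            if len(discerned(c)) >= (1 - alpha) * total]
    # Keep only subsets with no proper subset that also qualifies.
    return [s for s in good if not any(g < s for g in good)]
```

With α = 0 this degenerates to classical reducts; larger α trades discernibility for smaller attribute subsets.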
A Rough Set Perspective on Data and Knowledge
1999
Cited by 15 (2 self)
Abstract:
Rough set theory was proposed by Zdzisław Pawlak [24, 25] in the early 1980s. Since then we have witnessed a systematic, worldwide growth of interest in rough set theory and its applications. Rough set theory deals with the analysis of the classificatory properties of data tables. Data represented in the tables can be acquired from measurements or from human experts; although in principle the data must be discrete, there now exist methods that allow the processing of continuous values. The main goal of rough set analysis is the synthesis of approximations of concepts. The most important issues in the synthesis process are:
- construction of relevant primitive concepts from which approximations of more complex concepts are assembled,
- similarity (closeness) measures between concepts,
- construction of operations producing compound concepts from the primitive ones.
This presentation shows how several aspects of the above problems are solved by the classical rough set approach and ...
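As a concrete instance of "approximations of concepts", the classical lower and upper approximations of a set of objects follow directly from indiscernibility classes (a minimal sketch; the objects and their descriptions are invented):

```python
from collections import defaultdict

# Objects (invented) described by attribute-value tuples; X is the concept
# we want to approximate.
objects = {1: ("a", 0), 2: ("a", 0), 3: ("b", 1), 4: ("b", 1), 5: ("a", 1)}
X = {1, 2, 3}

# Indiscernibility classes: objects with identical descriptions.
classes = defaultdict(set)
for obj, desc in objects.items():
    classes[desc].add(obj)

# Lower approximation: classes entirely inside X (certainly in the concept).
lower = {o for c in classes.values() if c <= X for o in c}
# Upper approximation: classes overlapping X (possibly in the concept).
upper = {o for c in classes.values() if c & X for o in c}
# The difference upper - lower is the boundary region of the rough set.
```

Here object 3 is indiscernible from object 4, which lies outside X, so 3 falls into the boundary region rather than the lower approximation.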
Pattern Extraction From Data
1998
Cited by 14 (2 self)
Abstract:
Searching for patterns is one of the main goals in data mining. Patterns have important applications in many KDD domains, like rule extraction or classification. In this paper we present some methods of rule extraction that generalize the existing approaches to the pattern problem. These methods, called partition of attribute values or grouping of attribute values, can be applied to decision tables with symbolic-value attributes. If data tables contain symbolic and numeric attributes, some of the proposed methods can be used jointly with discretization methods. Moreover, these methods are applicable to incomplete data. The optimization problems for grouping of attribute values are either NP-complete or NP-hard. Hence we propose some heuristics returning approximate solutions for such problems.

1. Introduction
We consider decision tables containing objects represented by vectors of attribute values. Formally, attributes are defined as functions from the set of objects into a corresp...
Genetic Algorithms in Decomposition and Classification Problems
1998
Cited by 13 (3 self)
Abstract:
Introduction
Some combinatorial problems concerned with using rough set theory in knowledge discovery (KD) and data analysis can be successfully solved using genetic algorithms (GA) – a sophisticated, adaptive search method based on the Darwinian principle of natural selection (see [4], [6]). These problems are frequently NP-hard, as in the case of finding reducts or templates (see [12]), and there is no fast and reliable way to solve them deterministically. Genetic algorithms are flexible and universal – they can be used in various situations. On the other hand, approximate but fast heuristics are known for many of the considered tasks. They are designed and tuned specifically for a problem, and are often more efficient than a simple genetic algorithm. Unfortunately, they are often suboptimal and cannot avoid local optima. Moreover, if they are deterministic, there is no hope for improvement even if one can spend more time on computations. The advantages of both genetic and heur...
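A bare-bones version of the genetic approach, applied to reduct search on a toy table, might look as follows (a sketch with invented data, parameters and operator choices, not the authors' implementation; the fitness rewards discernibility and lightly penalizes subset size):

```python
import random

random.seed(0)  # deterministic run for the example

# Toy decision table (invented): condition attributes plus a decision.
TABLE = [((0, 1, 0, 1), "A"), ((1, 1, 0, 0), "A"),
         ((1, 0, 1, 0), "B"), ((0, 0, 1, 1), "B")]
N_ATTRS = 4

def coverage(mask):
    """Fraction of object pairs with different decisions discerned by
    the attribute subset encoded as a 0/1 mask."""
    hit = tot = 0
    for i in range(len(TABLE)):
        for j in range(i + 1, len(TABLE)):
            (ci, di), (cj, dj) = TABLE[i], TABLE[j]
            if di != dj:
                tot += 1
                hit += any(m and a != b for m, a, b in zip(mask, ci, cj))
    return hit / tot

def fitness(mask):
    return coverage(mask) - 0.01 * sum(mask)  # prefer small reducts

def evolve(pop_size=20, gens=30):
    pop = [[random.randint(0, 1) for _ in range(N_ATTRS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_ATTRS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # point mutation
                child[random.randrange(N_ATTRS)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # a small attribute subset discerning all decision pairs
```

Problem-specific heuristics can be hybridized with such a loop, e.g. by seeding the initial population or repairing offspring, which is the direction the text argues for.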
Hyperrelations in Version Space
2004
Cited by 7 (3 self)
Abstract:
A version space is the set of all hypotheses consistent with a given set of training examples, delimited by the specific boundary and the general boundary. In existing studies [5, 6, 8] a hypothesis is a conjunction of attribute-value pairs, which is shown to have limited expressive power [9]. In a more expressive hypothesis space, e.g., disjunctions of conjunctions of attribute-value pairs, a general version space becomes uninteresting unless some restriction (inductive bias) is imposed [9]. In this paper we investigate the version space in a hypothesis space where a hypothesis is a hyperrelation, which is in effect a disjunction of conjunctions of disjunctions of attribute-value pairs. Such a hypothesis space is more expressive than conjunctions of attribute-value pairs and disjunctions of conjunctions of attribute-value pairs. However, given a dataset, we focus our attention only on those hypotheses which are consistent with the given data and are maximal in the sense that the elements in a hypothesis cannot be merged further. Such a hypothesis is called an E-set for the given data, and the set of all E-sets is the version space, which is delimited by the least E-set (specific boundary) and the greatest E-set (general boundary). Based on this version space we propose three classification rules for use in different situations. The first two are based on E-sets, and the third is based on "degraded" E-sets called weak hypotheses, where the maximality constraint is relaxed. We present an algorithm to calculate E-sets, though it is computationally expensive in the worst case. We also present an efficient algorithm to calculate weak hypotheses. The third rule is evaluated on public datasets, and the results compare well with the C5.0 decision tree classifier.