Results 1 - 10 of 87
The art of granular computing:
- Proceedings of the International Conference on Rough Sets and Emerging Intelligent Systems Paradigms, 2007
Cited by 74 (20 self)
Abstract: This paper has two purposes. One is to present a critical examination of the rise of granular computing and the other is to suggest a triarchic theory of granular computing. By examining the reasons, justifications, and motivations for the rise of granular computing, we may be able to fully appreciate its scope, goals, and potential value. The results enable us to formulate a triarchic theory in the light of research results from many disciplines. The three components of the theory are labeled the philosophy, the methodology, and the computation. Their integration offers a unified view of granular computing as a way of structured thinking, a method of structured problem solving, and a paradigm of structured information processing, focusing on hierarchical granular structures. The triarchic theory is an important effort in synthesizing the various theories and models of granular computing. Key words: Triarchic theory of granular computing; systems theory; structured thinking, problem solving and information processing.
Attribute Reduction in Decision-Theoretic Rough Set Models
- Information Sciences, 178(17), 3356-3373, Elsevier B.V., 2008
Cited by 29 (2 self)
Rough set theory can be applied to rule induction. There are two different types of classification rules, positive and boundary rules, leading to different decisions and consequences. They can be distinguished not only by syntactic measures such as confidence, coverage, and generality, but also by semantic measures such as decision-monotonicity, cost, and risk. The classification rules can be evaluated locally for each individual rule, or globally for a set of rules. Both types of classification rules can be generated from, and interpreted by, a decision-theoretic model, which is a probabilistic extension of the Pawlak rough set model. As an important concept of rough set theory, an attribute reduct is a subset of attributes that are jointly sufficient and individually necessary for preserving a particular property of the given information table. This paper addresses attribute reduction in decision-theoretic rough set models with respect to different classification properties, such as decision-monotonicity, confidence, coverage, generality, and cost. It is important to note that many of these properties can be faithfully reflected by a single measure γ in the Pawlak rough set model, whereas they need to be considered separately in probabilistic models. A straightforward extension of the γ measure is unable to evaluate these properties. This study provides a new insight into the problem of attribute reduction.
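The γ measure mentioned in this abstract (the fraction of objects in the positive region of a classification) can be sketched in a few lines. This is a minimal illustration only, assuming a toy information table with made-up attribute names; it is not code from the paper.

```python
def partition(rows, attrs):
    """Group row indices into equivalence classes by their values on attrs."""
    blocks = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, []).append(i)
    return list(blocks.values())

def gamma(rows, cond_attrs, dec_attr):
    """Pawlak dependency measure: fraction of objects whose condition class
    falls entirely inside a single decision class (the positive region)."""
    dec_blocks = [set(b) for b in partition(rows, [dec_attr])]
    pos = 0
    for block in partition(rows, cond_attrs):
        if any(set(block) <= d for d in dec_blocks):
            pos += len(block)
    return pos / len(rows)

# Illustrative table: condition attributes a, b and decision attribute d.
table = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
    {"a": 1, "b": 0, "d": "yes"},
]
print(gamma(table, ["a", "b"], "d"))  # full attribute set
print(gamma(table, ["a"], "d"))       # candidate subset {a}
```

A reduct search would then look for the smallest attribute subsets preserving the chosen property; as the abstract notes, in probabilistic models γ alone no longer captures all the properties of interest.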
Granular Computing: An Emerging Paradigm
2001
Cited by 28 (0 self)
We provide an overview of Granular Computing, a rapidly growing area of information processing aimed at the construction of intelligent systems. We highlight the main features of Granular Computing, elaborate on the underlying formalisms of information granulation, and discuss ways of their development. We also discuss the concept of granular modeling and present the issues of communication between formal frameworks of Granular Computing.
Y.H.: Rough Set Approximations in Formal Concept Analysis
- Transactions on Rough Sets, LNCS, 2006
Cited by 15 (5 self)
Abstract. This paper proposes a generalized definition of rough set approximations, based on a subsystem of subsets of a universe. The subsystem is not assumed to be closed under set complement, union, and intersection. The lower or upper approximation is no longer a single set but a family of several sets. As special cases, approximations in formal concept analysis and knowledge spaces are examined. The results provide a better understanding of rough set approximations.
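The generalized approximations this abstract describes can be sketched directly: given a subsystem of subsets (with no closure assumptions), one common reading takes the lower approximation of a set X to be the maximal subsystem members contained in X, and the upper approximation the minimal members containing X, so each approximation is a family of sets rather than a single set. The concrete subsystem and target set below are illustrative assumptions, not from the paper.

```python
def lower_family(subsystem, x):
    """Maximal members of the subsystem that are contained in x."""
    inside = [s for s in subsystem if s <= x]
    return [s for s in inside if not any(s < t for t in inside)]

def upper_family(subsystem, x):
    """Minimal members of the subsystem that contain x."""
    covering = [s for s in subsystem if x <= s]
    return [s for s in covering if not any(t < s for t in covering)]

# A subsystem of subsets of {1, 2, 3, 4}; note it is not closed under
# union, intersection, or complement.
subsystem = [frozenset({1}), frozenset({1, 2}), frozenset({2, 3}),
             frozenset({1, 2, 3, 4})]
x = frozenset({1, 2, 3})
print(lower_family(subsystem, x))  # two maximal inner sets: {1,2} and {2,3}
print(upper_family(subsystem, x))  # one minimal covering set: {1,2,3,4}
```

Because neither approximation family needs to have a unique union or intersection inside the subsystem, returning the whole family preserves more information than forcing a single set.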
A note on definability and approximations
- Transactions on Rough Sets VII, 2007
Cited by 13 (9 self)
Abstract. Definability and approximations are two important notions of the theory of rough sets. In many studies, one is used to define the other, but an explicit interpretation of the physical meaning of definability is lacking. In this paper, definability is used as the more primitive notion, interpreted in terms of formulas of a logic language. A set is definable if there is a formula that defines the set, i.e., the set consists of all those elements satisfying the formula. As a derived notion, the lower and upper approximations of a set are two definable sets that approximate the set from below and above, respectively. This formulation may be more natural, bringing new insights into our understanding of rough set approximations.
An information granulation based data mining approach for classifying imbalanced data
2008
On the feasibility of Description Logic knowledge bases with rough concepts and vague instances
Cited by 7 (4 self)
Abstract. A usage scenario of bio-ontologies is hypothesis testing, such as finding relationships or new subconcepts in the data linked to the ontology. While a hypothesis is being validated, such knowledge is uncertain or vague and the data is often incomplete, which DL knowledge bases do not take into account. In addition, this scenario requires scalability to large amounts of data. To address these requirements, we take the SROIQ(D) and DL-Lite families of languages and their application infrastructures, augmented with notions of rough sets. Although only little of rough concepts can be represented in DL-Lite, useful aspects can be dealt with in the mapping layer that links the concepts in the ontology to queries over the data source. We discuss the trade-offs and validate the theoretical assessment with the HGT application ontology about horizontal gene transfer and its 17 GB database, taking advantage of the Ontology-Based Data Access framework. However, the prospects for comprehensive and usable rough DL knowledge bases are not good, and may require both sophisticated modularization and scientific workflows to achieve systematic use of rough ontologies.
Interpreting Low and High Order Rules: A Granular Computing Approach. LNAI 4585
2007
Cited by 5 (3 self)
Abstract. The main objective of this paper is to provide a granular computing based interpretation of rules representing two levels of knowledge. This is done by adopting and adapting the decision logic language for granular computing. The language provides a formal method for describing and interpreting conditions in rules as granules and rules as relationships between granules. An information table is used to construct a concrete granular computing model. Two types of granules are constructed from an information table. They lead to two types of rules called low order and high order rules. As examples, we examine rules in the standard rough set analysis and dominance-based rough set analysis.
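The core construction in this abstract, conditions in rules read as granules of an information table and rules read as relationships between granules, can be sketched briefly. The table contents and attribute names below are illustrative assumptions, not from the paper.

```python
def granule(table, attr, value):
    """Meaning set m(attr = value): the objects satisfying the atomic formula."""
    return {i for i, row in enumerate(table) if row[attr] == value}

def confidence(table, cond, dec):
    """Confidence of the rule 'cond => dec', measured as the overlap of the
    two granules relative to the condition granule."""
    g_cond = granule(table, *cond)
    g_dec = granule(table, *dec)
    return len(g_cond & g_dec) / len(g_cond)

# Illustrative information table.
table = [
    {"shape": "round", "label": "ball"},
    {"shape": "round", "label": "ball"},
    {"shape": "round", "label": "plate"},
    {"shape": "square", "label": "box"},
]
print(granule(table, "shape", "round"))                           # {0, 1, 2}
print(confidence(table, ("shape", "round"), ("label", "ball")))   # 2/3
```

In this reading, a low order rule relates granules of objects as above, while a high order rule would relate granules drawn from a second, derived table whose "objects" are themselves groups; the same meaning-set machinery applies at both levels.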
Activity Generator for Informal Learning in Museum
Cited by 5 (5 self)
decide activities for themselves. Museums were chosen for our research because museum visits are a form of informal learning covering a variety of domain knowledge. Furthermore, we want to make museum learning more efficient. M-learning was chosen because it can lead students to construct their own knowledge in a variety of contexts. A context-awareness knowledge structure is used to manage knowledge such as learning objects and characteristics: learning objects correspond to the antiques, and characteristics to the antiques' features such as "color" and "shape". This knowledge is embedded in each activity. To make the activities attractive to students, elements of game-based learning such as "challenge", "fantasy", and "control" are added to them. Key-Words: museum learning, game-based learning, context awareness, mobile device, personalized
A Knowledge Mining Model for Ranking Institutions using Rough Computing with Ordering Rules and Formal Concept Analysis
Cited by 4 (3 self)
The emergence of computers and the information technology revolution have made tremendous changes in the real world and provide a new dimension for intelligent data analysis. Well-formed facts, with the right information at the right time and place, yield better knowledge. However, the challenge arises when a large volume of inconsistent data is given for decision making and knowledge extraction. To handle such imprecise data, researchers have in recent years developed mathematical tools of great importance, namely fuzzy sets, intuitionistic fuzzy sets, rough sets, formal concept analysis, and ordering rules. It is also observed that many information systems contain numerical attribute values, which are therefore almost, rather than exactly, similar. To handle such information systems, this paper uses two processes: a pre-process and a post-process. In the pre-process we use rough sets on intuitionistic fuzzy approximation spaces with ordering rules to find knowledge, whereas in the post-process we use formal concept analysis to explore better knowledge and the vital factors affecting decisions.