Results 1–10 of 87
Numerical Uncertainty Management in User and Student Modeling: An Overview of Systems and Issues
, 1996
Abstract

Cited by 118 (10 self)
A rapidly growing number of user and student modeling systems have employed numerical techniques for uncertainty management. The three major paradigms are those of Bayesian networks, the Dempster-Shafer theory of evidence, and fuzzy logic. In this overview, each of the first three main sections focuses on one of these paradigms. It first introduces the basic concepts by showing how they can be applied to a relatively simple user modeling problem. It then surveys systems that have applied techniques from the paradigm to user or student modeling, characterizing each system within a common framework. The final main section discusses several aspects of the usability of these techniques for user and student modeling, such as their knowledge engineering requirements, their need for computational resources, and the communicability of their results. Key words: numerical uncertainty management, Bayesian networks, Dempster-Shafer theory, fuzzy logic, user modeling, student modeling
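A hypothetical sketch (not taken from the survey) of how the same user-modeling question, "does the student know skill S?", is expressed in each of the three paradigms the abstract names. All rates, masses, and membership degrees below are illustrative assumptions.

```python
# Illustrative sketch of the three uncertainty paradigms; all numbers
# are assumptions, not values from the surveyed systems.

def bayes_update(p_knows, p_slip=0.1, p_guess=0.2):
    """Bayesian: posterior P(knows) after observing one correct answer,
    given assumed slip and guess probabilities."""
    p_correct = p_knows * (1 - p_slip) + (1 - p_knows) * p_guess
    return p_knows * (1 - p_slip) / p_correct

def belief_plausibility(m_knows, m_not, m_theta):
    """Dempster-Shafer: Bel and Pl of {knows} from a mass function over
    {knows}, {not knows}, and the whole frame (residual ignorance)."""
    assert abs(m_knows + m_not + m_theta - 1.0) < 1e-9
    return m_knows, m_knows + m_theta

def fuzzy_and(a, b):
    """Fuzzy: truth-functional conjunction of membership degrees."""
    return min(a, b)

posterior = bayes_update(0.5)                  # about 0.818
bel, pl = belief_plausibility(0.5, 0.2, 0.3)   # Bel = 0.5, Pl = 0.8
mastery = fuzzy_and(0.7, 0.4)                  # 0.4
```

The Dempster-Shafer pair illustrates the characteristic difference from the Bayesian account: ignorance widens the Bel/Pl interval instead of being forced into a single probability.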
Current Approaches to Handling Imperfect Information in Data and Knowledge Bases
, 1996
Abstract

Cited by 70 (1 self)
This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.
The Paradoxical Success of Aspect-Oriented Programming
 ACM SIGPLAN NOTICES
, 2006
Abstract

Cited by 69 (3 self)
Aspect-oriented programming is considered a promising new technology. As object-oriented programming did before, it is beginning to pervade all areas of software engineering. With its growing popularity, practitioners and academics alike are wondering whether they should start looking into it, or otherwise risk having missed an important development. The author of this essay finds that much of aspect-oriented programming’s success seems to be based on the conception that it improves both modularity and the structure of code, while in fact it works against the primary purposes of the two, namely independent development and understandability of programs. Not seeing any way of fixing this situation, he considers the success of aspect-oriented programming to be paradoxical.
What is a Forest? On the vagueness of certain geographic concepts
 Topoi
, 2002
Abstract

Cited by 39 (3 self)
The paper examines ways in which the meanings of geographical concepts are affected by the phenomenon of vagueness. A logical analysis based on the theory of supervaluation semantics is developed and employed to describe differences and logical dependencies between different senses of vague concepts. Particular attention is given to analysing the concept of `forest' which exhibits many kinds of vagueness.
Condensing Uncertainty via Incremental Treatment Learning
 ANNALS OF SOFTWARE ENGINEERING, SPECIAL ISSUE ON COMPUTATIONAL INTELLIGENCE. TO APPEAR.
, 2002
Abstract

Cited by 24 (20 self)
Models constrain the range of possible behaviors defined for a domain. When parts of a model are uncertain, the possible behaviors may be a data cloud: i.e. an overwhelming range of possibilities that bewilder an analyst. Faced with large data clouds, it is hard to demonstrate that any particular decision leads to a particular outcome. Even if we can’t make definite decisions from such models, it is possible to find decisions that reduce the variance of values within a data cloud. Also, it is possible to change the range of these future behaviors such that the cloud condenses to some improved mode. Our approach uses two tools. Firstly, a model simulator is constructed that knows the range of possible values for uncertain parameters. Secondly, the TAR2 treatment learner uses the output from the simulator to incrementally learn better constraints. In our incremental treatment learning cycle, users review newly discovered treatments before they are added to a growing pool of constraints used by the model simulator.
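One pass of the simulate / learn-constraint / re-simulate cycle the abstract describes can be sketched as follows. This is a hypothetical toy, not TAR2 itself: the "treatment learner" here is a crude stand-in that simply halves the range of the most influential input, and the model is an invented two-parameter formula.

```python
# Toy sketch of incremental treatment learning; the model, parameter
# names, and "learner" are illustrative assumptions, not TAR2.
import random
import statistics

def simulate(constraints, n=2000):
    """Monte Carlo run of a stand-in model with two uncertain inputs;
    constraints pin an input to a narrower range."""
    outs = []
    for _ in range(n):
        x = random.uniform(*constraints.get("x", (0.0, 1.0)))
        y = random.uniform(*constraints.get("y", (0.0, 1.0)))
        outs.append(x * 2 + y)   # invented model: x matters more than y
    return outs

def learn_treatment(constraints):
    """Stand-in learner: halve the range of the input whose variation
    contributes most to output variance (here, x)."""
    lo, hi = constraints.get("x", (0.0, 1.0))
    return "x", (lo, (lo + hi) / 2)

random.seed(0)
constraints = {}
before = statistics.pstdev(simulate(constraints))
key, rng = learn_treatment(constraints)   # a user would review this step
constraints[key] = rng
after = statistics.pstdev(simulate(constraints))
assert after < before   # the data cloud has condensed
```

The assertion at the end is the point of the cycle: even without a definite decision, each accepted treatment shrinks the spread of the simulated outcomes.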
Can we enforce full compositionality in uncertainty calculi?
 In: Proc of the 11th nat conf on artificial intelligence (AAAI94). AAAI Press/MIT Press, Menlo Park/Cambridge, pp 149–154
, 1994
Abstract

Cited by 24 (2 self)
At AAAI’93, Elkan claimed to have a result trivializing fuzzy logic. This trivialization is based on too strong a view of equivalence in fuzzy logic and relates to a fully compositional treatment of uncertainty. Such a treatment is shown to be impossible in this paper. We emphasize the distinction between i) degrees of partial truth, which are allowed to be truth functional and which pertain to gradual (or fuzzy) propositions, and ii) degrees of uncertainty, which cannot be compositional with respect to all the connectives when attached to classical propositions. This distinction is exemplified by the difference between fuzzy logic and possibilistic logic. We also investigate an almost compositional uncertainty calculus, but it is shown to lack expressiveness.
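The distinction the abstract draws can be made concrete with the standard min/max fuzzy connectives (an assumption here; other t-norms exist):

```python
# Degrees of partial truth vs. degrees of uncertainty, using the
# standard min/max connectives (an illustrative choice of t-norm).
def t_and(a, b):
    return min(a, b)   # fuzzy conjunction

def t_not(a):
    return 1 - a       # fuzzy negation

p = 0.5
# As a degree of partial truth of a *gradual* proposition,
# "A and not A" can legitimately be half-true:
assert t_and(p, t_not(p)) == 0.5
# As a degree of uncertainty about a *classical* proposition, the same
# combination must equal 0, since P(A and not A) = 0 regardless of P(A).
# No single truth-functional rule applied to P(A) alone can deliver
# both answers, which is why degrees of uncertainty cannot be fully
# compositional over all connectives.
```

This is exactly the gap Elkan's argument exploits when the fuzzy degrees are misread as uncertainties about classical propositions.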
Is The Success Of Fuzzy Logic Really Paradoxical? Or: Towards The . . .
 INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
, 1994
Abstract

Cited by 16 (10 self)
The formal concept of logical equivalence in fuzzy logic, while theoretically sound, seems impractical. The misinterpretation of this concept has led to some pessimistic conclusions. Motivated by practical interpretation of truth values for fuzzy propositions, we take the class (lattice) of all subintervals of the unit interval [0,1] as the truth value space for fuzzy logic, subsuming the traditional class of numerical truth values from [0,1]. The associated concept of logical equivalence is stronger than the traditional one. Technically, we are dealing with a much smaller set of pairs of equivalent formulas, so that we are able to check equivalence algorithmically. The checking is done by showing that our strong equivalence notion coincides with the equivalence in logic programming.
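A minimal sketch of the interval-valued truth space the abstract proposes, under the assumption that connectives act endpoint-wise (one natural choice, not necessarily the paper's exact definitions):

```python
# Truth values as subintervals [lo, hi] of [0, 1]; a classical numeric
# truth value is the degenerate case lo == hi. Endpoint-wise connectives
# are an illustrative assumption.
def i_and(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]))

def i_or(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]))

def i_not(a):
    # 1 - [lo, hi] = [1 - hi, 1 - lo]
    return (1 - a[1], 1 - a[0])

a = (0.25, 0.5)    # "true to a degree somewhere between 0.25 and 0.5"
b = (0.5, 0.5)     # an ordinary numeric truth value, embedded
assert i_and(a, b) == (0.25, 0.5)
assert i_not(a) == (0.5, 0.75)
```

Because intervals carry strictly more structure than points, two formulas equivalent under all point valuations need not agree under all interval valuations, which is the sense in which the interval-based equivalence is stronger.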
Constructing a Logic of Plausible Inference: a Guide To Cox's Theorem
 International Journal of Approximate Reasoning
, 2003
Abstract

Cited by 14 (0 self)
Cox's Theorem provides a theoretical basis for using probability theory as a general logic of plausible inference. The theorem states that any system for plausible reasoning that satisfies certain qualitative requirements intended to ensure consistency with classical deductive logic and correspondence with commonsense reasoning is isomorphic to probability theory. However, the requirements used to obtain this result have been the subject of much debate. We review Cox's Theorem, discussing its requirements, the intuition and reasoning behind these, and the most important objections, and finish with an abbreviated proof of the theorem.
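The functional-equation core of the argument the abstract summarizes can be sketched as follows; the notation is illustrative, not the paper's own:

```latex
% Assume the plausibility of a conjunction depends only on the
% plausibilities of its parts:
(A \wedge B \mid C) = F\big((A \mid C),\, (B \mid A \wedge C)\big)
% Associativity of conjunction forces the associativity equation
F\big(x, F(y, z)\big) = F\big(F(x, y), z\big)
% whose regular solutions are, up to a monotone rescaling $p(\cdot)$,
% ordinary multiplication, giving the product rule:
p(A \wedge B \mid C) = p(A \mid C)\, p(B \mid A \wedge C)
% An analogous functional equation for negation yields the sum rule:
p(A \mid C) + p(\neg A \mid C) = 1
```

Together the two rules mean any plausibility calculus satisfying the qualitative requirements is, after rescaling, probability theory; the debated assumptions the abstract mentions concern exactly the regularity conditions needed to solve these equations.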
Modal Semantics for Knowledge Bases Dealing with Vague Concepts
 Principles of Knowledge Representation and Reasoning: Proceedings of the 6th International Conference (KR98
, 1998
Abstract

Cited by 14 (7 self)
The paper investigates the characterisation of vague concepts within the framework of modal logic. This work builds on the supervaluation approach of Fine and exploits the idea of a precisification space. A simple language is presented with two modalities: a necessity operator and an operator `it is unequivocal that' which is used to articulate the logic of vagueness. Both these operators obey the schemas of the logic S5. I show how this language can be used to represent logical properties of vague predicates which have a variety of possible precise interpretations. I consider the use within KR systems of a number of different entailment relations that can be specified for this language. Certain vague predicates (such as `tall') may be indefinite even when there is no ambiguity in meaning. These can be accounted for by means of a three-valued logic, incorporating a definiteness operator. I also show the relationship between observable quantities (such as height) and vague predicates (su...
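The three-valued treatment of a vague predicate with a definiteness operator can be sketched as below. The predicate name, thresholds, and truth-value encoding are all illustrative assumptions, not the paper's formalism.

```python
# Hypothetical three-valued treatment of the vague predicate `tall':
# heights at or above / below assumed thresholds are definitely tall /
# definitely not tall; the borderline region is indefinite.
TRUE, FALSE, INDEF = "true", "false", "indefinite"

def tall(height_cm, lower=170, upper=185):  # thresholds are assumptions
    if height_cm >= upper:
        return TRUE
    if height_cm < lower:
        return FALSE
    return INDEF

def definitely(v):
    """Definiteness operator D: true only when its argument is true."""
    return TRUE if v == TRUE else FALSE

assert tall(190) == TRUE
assert tall(175) == INDEF
assert definitely(tall(175)) == FALSE   # borderline, so not definitely tall
```

In the supervaluation reading, the indefinite region corresponds to heights classified differently by different admissible precisifications of `tall'.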
Focused Web Crawling: A Generic Framework for Specifying the User Interest and for Adaptive Crawling Strategies
, 2001
Abstract

Cited by 12 (0 self)
Compared to the standard web search engines, focused crawlers yield good recall as well as good precision by restricting themselves to a limited domain. In this paper, we do not introduce another focused crawler, but we introduce a generic framework for focused crawling consisting of two major components: (1) Specification of the user interest and measuring the resulting relevance of a given web page. The proposed method of specifying the user interest by a formula combining atomic topics significantly improves the expressive power available to the user. (2) Crawling strategy. Ordering the links at the crawl frontier is a challenging task since pages of a low relevance may be on a path to highly relevant pages. Thus, tunneling may be necessary. The explicit specification of the user interest allows us to define topic-specific strategies for tunneling. Our system Ariadne is a prototype implementation of the proposed framework. An experimental evaluation of different crawling strategies demonstrates the performance gain obtained by focusing a crawl and by dynamically adapting the focus.
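The frontier-ordering and tunneling ideas can be sketched as below. This is a hypothetical toy, not the Ariadne system: the relevance function, threshold, and tunneling budget are invented, and real crawlers would fetch pages rather than read a dictionary.

```python
# Toy focused-crawl frontier with tunneling; all names and numbers are
# illustrative assumptions, not the paper's system.
import heapq

def crawl(seed_urls, relevance, links, max_pages=10, tunnel_depth=2):
    """relevance: url -> score in [0,1]; links: url -> outgoing urls.
    The frontier is a max-heap on relevance; low-relevance links are
    kept alive for up to tunnel_depth consecutive hops (tunneling)."""
    frontier = [(-relevance(u), 0, u) for u in seed_urls]
    heapq.heapify(frontier)
    visited, order = set(), []
    while frontier and len(order) < max_pages:
        neg_rel, depth, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for nxt in links(url):
            r = relevance(nxt)
            # relevant pages reset the tunnel; irrelevant ones burn budget
            d = 0 if r >= 0.5 else depth + 1
            if d <= tunnel_depth and nxt not in visited:
                heapq.heappush(frontier, (-r, d, nxt))
    return order

# Toy graph: the highly relevant page "c" is reachable only through the
# low-relevance page "b" -- tunneling keeps "b" on the frontier.
rel = {"a": 0.9, "b": 0.1, "c": 0.8}
graph = {"a": ["b"], "b": ["c"], "c": []}
order = crawl(["a"], rel.__getitem__, graph.__getitem__)
```

Without the tunneling allowance (tunnel_depth=0 and a hard relevance cutoff), "b" would be pruned and "c" never found, which is the motivation the abstract gives for topic-specific tunneling strategies.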