Results 11 - 20 of 23
A Method for Managing Evidential Reasoning in a Hierarchical Hypothesis Space
Abstract. Although informal models of evidential reasoning have been successfully applied in automated reasoning systems, it is generally difficult to define the range of their applicability. In addition, they have not provided a basis for consistent management of evidence bearing on hypotheses that are related hierarchically. The Dempster–Shafer (D-S) theory of evidence is appealing because it does suggest a coherent approach for dealing with such relationships. However, the theory's complexity and potential for computational inefficiency have tended to discourage its use in reasoning systems. In this paper we describe the central elements of the D-S theory, basing our exposition on simple examples drawn from the field of medicine. We then demonstrate the relevance of the D-S theory to a familiar expert-system domain, namely the bacterial-organism identification problem that lies at the heart of the MYCIN system. Finally, we present a new adaptation of the D-S approach that achieves computational efficiency while permitting the management of evidential reasoning within an abstraction hierarchy.
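The core D-S operation this abstract builds on, Dempster's rule of combination, can be sketched in a few lines. This is a minimal generic sketch, not the paper's hierarchical adaptation; the medical frame, hypothesis names, and mass values are invented for illustration.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments,
    each a dict mapping frozenset hypotheses to mass."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    k = 1.0 - conflict  # renormalize away the conflicting mass
    return {s: v / k for s, v in combined.items()}

# Illustrative medical frame of discernment (hypothetical values):
FLU, COLD = frozenset({"flu"}), frozenset({"cold"})
THETA = FLU | COLD  # the whole frame: total ignorance

m1 = {FLU: 0.6, THETA: 0.4}             # evidence from one finding
m2 = {FLU: 0.5, COLD: 0.3, THETA: 0.2}  # evidence from another

m = combine(m1, m2)  # mass concentrates on "flu"
```

The renormalization step (dividing by `1 - conflict`) is exactly the computational cost the paper's abstraction-hierarchy adaptation is concerned with controlling.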
The Knowledge Engineering Review An Al view of the treatment of uncertainty
This paper reviews many of the very varied concepts of uncertainty used in AI. Because of their great popularity and generality, the so-called "parallel certainty inference" techniques are prominently in the foreground. We illustrate and comment in detail on three of these techniques: Bayes' theory (section 2), Dempster-Shafer theory (section 3), and Cohen's model of endorsements (section 4), and give an account of the debate that has arisen around each of them. Techniques of a different kind (such as Zadeh's fuzzy-set and fuzzy-logic theory, the use of non-standard logics, and methods that manage uncertainty without explicitly dealing with it) may be seen in the background (section 5). The discussion of technicalities is accompanied by a historical and philosophical excursion on the nature and the use of uncertainty (section 1), and by a brief discussion of the problem of choosing an adequate AI approach to the treatment of uncertainty (section 6). The aim of the paper is to highlight the complex nature of uncertainty and to argue for an open-minded attitude towards its representation and use. In this spirit, the pros and cons of uncertainty-treatment techniques are presented in order to reflect the various uncertainty types. A guide to the literature in the field and an extensive bibliography are appended.
© 1996 Kluwer Academic Publishers. Manufactured in The Netherlands.
Knowledge Integration in a Multiple Classifier System
, 1993
Abstract. This paper introduces a knowledge integration framework based on Dempster-Shafer's mathematical theory of evidence for integrating classification results derived from multiple classifiers. This framework enables us to understand in which situations the classifiers give uncertain responses and to interpret classification evidence, and it allows the classifiers to compensate for their individual deficiencies. Under this framework, we developed algorithms to model classification evidence and to combine classification evidence from different classifiers, and we derived inference rules from evidential intervals for reasoning about classification results. The algorithms have been implemented and tested. Implementation issues, performance analysis, and experimental results are presented.
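The evidential intervals mentioned here can be sketched generically: given a (combined) mass function over class labels, each hypothesis receives a belief/plausibility interval [Bel, Pl]. The labels and masses below are invented for illustration and do not reproduce the paper's own inference rules.

```python
def belief(m, h):
    """Mass committed to subsets of hypothesis h (certain support)."""
    return sum(v for s, v in m.items() if s <= h)

def plausibility(m, h):
    """Mass not contradicting h (possible support)."""
    return sum(v for s, v in m.items() if s & h)

# Hypothetical combined evidence from multiple classifiers:
A, B = frozenset({"class_a"}), frozenset({"class_b"})
m = {A: 0.55, B: 0.15, A | B: 0.30}  # residual mass on A|B = shared uncertainty

interval_a = (belief(m, A), plausibility(m, A))  # wide gap => uncertain response
interval_b = (belief(m, B), plausibility(m, B))
```

The width Pl - Bel is one natural way to detect the "uncertain responses" the framework is designed to surface: mass left on non-singleton sets widens the interval.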
A Proposal for Computing With Imprecise Probabilities: A Framework for Multiple Representations of Uncertainty in Simulation Software
, 2007
We propose the design and construction of a programming language for the formal representation of uncertainty in modeling and simulation. Modeling under uncertainty has been of paramount importance in the past half century, as quantitative methods of analysis have been developed to take advantage of computational resources. Simulation is gaining prominence as the proper tool of scientific analysis under circumstances where it is infeasible or impractical to directly study the system in question. This programming language will be built as an extension to the Modelica programming language, which is an acausal object-oriented language for hybrid continuous and discrete-event simulations [22]. Our language extensions will serve as a platform for research into the representation and calibration of imprecise probabilities in quantitative risk analysis simulations. Imprecise probability is used as a generic term for any mathematical model which measures chance or uncertainty without crisp numerical probabilities. The explicit representation of imprecise probability theories in a domain-specific programming language will facilitate the development of efficient algorithms for expressing, computing, and calibrating imprecise probability structures. Computation with imprecise probability structures will lead to quantitative risk analyses that are more informative than analyses using traditional probability theory. We have three primary research objectives: (i) the exploration of efficient representational structures and computational algorithms of Dempster-Shafer belief structures; (ii) the application of the imprecise
Constructing Probability Boxes and . . .
, 2003
This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin, probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources.
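The kinship between Dempster-Shafer structures and probability boxes can be sketched concretely: a D-S structure whose focal elements are closed real intervals induces lower and upper bounds on the CDF. The intervals and masses below are invented for illustration, not drawn from the report.

```python
def cdf_bounds(structure, x):
    """(lower, upper) bounds on P(X <= x) for a D-S structure given as
    a list of ((lo, hi), mass) pairs with interval focal elements."""
    lower = sum(m for (lo, hi), m in structure if hi <= x)  # surely  <= x
    upper = sum(m for (lo, hi), m in structure if lo <= x)  # possibly <= x
    return lower, upper

# Hypothetical D-S structure with three interval focal elements:
ds = [((0.0, 1.0), 0.5), ((0.5, 2.0), 0.3), ((1.5, 3.0), 0.2)]
lo, hi = cdf_bounds(ds, 1.0)  # the p-box envelope evaluated at x = 1.0
```

Sweeping `x` over the real line traces the two bounding CDFs, which together form the probability box associated with the structure.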
Order in DSmT; definition of continuous DSm models
, 2006
Abstract. When implementing DSmT, a difficulty may arise from the possibly huge dimension of hyper-power sets, which are indeed free structures. However, it is possible to reduce the dimension of these structures by introducing logical constraints. In this paper, the logical constraints will be related to a predefined order over the logical propositions. The use of such orders and their resulting logical constraints ensures a great reduction of the model's complexity. These results are applied to the definition of continuous DSm models. In particular, a simplified description of continuous imprecision is considered, based on the imprecision intervals of the sensors. From this viewpoint, it is possible to manage the contradictions between continuous sensors in a DSmT manner, while the complexity of the model remains tractable.
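The "huge dimension" of unconstrained hyper-power sets can be made concrete. The hyper-power set D^Θ is the free distributive lattice generated by the n elements of Θ, and (under the usual DSmT convention that includes the empty proposition) its size follows the Dedekind numbers minus one. A brute-force count over monotone Boolean functions reproduces this for small frames; this enumeration is a generic illustration, not the paper's order-based reduction.

```python
from itertools import product

def hyper_power_set_size(n):
    """Size of the hyper-power set D^Theta for |Theta| = n: count the
    monotone Boolean functions on n variables, skipping constant-true,
    which is not generated from the theta_i by union and intersection."""
    points = list(product([0, 1], repeat=n))
    count = 0
    for bits in product([0, 1], repeat=2 ** n):
        if all(bits):
            continue  # constant-true excluded
        f = dict(zip(points, bits))
        if all(f[p] <= f[q] for p in points for q in points
               if all(a <= b for a, b in zip(p, q))):
            count += 1  # monotone => expressible via unions/intersections
    return count

# Already steep for tiny frames; the known next values are 167 and 7580.
sizes = [hyper_power_set_size(n) for n in (1, 2, 3)]
```

This doubly exponential growth is precisely why constraining D^Θ with a predefined order over propositions, as the paper proposes, matters in practice.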