Results 11–20 of 153
Constructing Bayesian Networks for Medical Diagnosis from Incomplete and Partially Correct Statistics
IEEE Transactions on Knowledge and Data Engineering, 2000
Abstract

Cited by 31 (0 self)
The paper discusses several knowledge engineering techniques for the construction of Bayesian networks for medical diagnostics when the available numerical probabilistic information is incomplete or partially correct. This situation often occurs when epidemiological studies publish only indirect statistics and when significant unmodeled conditional dependence exists in the problem domain. While nothing can replace precise and complete probabilistic information, a useful diagnostic system can still be built from imperfect data by introducing domain-dependent constraints. We propose a solution to the problem of determining the combined influence of several diseases on a single test result from specificity and sensitivity data for the individual diseases. We also demonstrate two techniques for dealing with unmodeled conditional dependencies in a diagnostic network. These techniques are discussed in the context of an effort to design a portable device for cardiac diagnosis and monitoring from...
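One standard way to combine per-disease sensitivities and a single specificity into a joint test distribution is a noisy-OR model. The sketch below is illustrative only; it is not necessarily the construction the paper proposes, and the sensitivity/specificity numbers are invented.

```python
# Hypothetical sketch: combining per-disease test behaviour with a noisy-OR
# model. Each present disease independently fails to trigger the test with
# probability (1 - sensitivity); the "leak" is the false-positive rate,
# i.e. 1 - specificity. All numbers below are made up for illustration.

def noisy_or_positive(present, sensitivities, leak=0.0):
    """P(test positive | set of present diseases) under a noisy-OR model."""
    p_all_fail = 1.0 - leak
    for d in present:
        p_all_fail *= 1.0 - sensitivities[d]
    return 1.0 - p_all_fail

sens = {"A": 0.9, "B": 0.7}   # per-disease sensitivities (invented)
leak = 1.0 - 0.95             # false-positive rate = 1 - specificity

print(noisy_or_positive(set(), sens, leak))        # no disease present
print(noisy_or_positive({"A"}, sens, leak))        # one disease present
print(noisy_or_positive({"A", "B"}, sens, leak))   # both diseases present
```

The appeal of this construction is exactly the one the abstract points at: it needs only per-disease numbers, yet yields a full conditional distribution over every combination of diseases.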
Learning from sparse data by exploiting monotonicity constraints
Conf. Uncertainty in Artificial Intelligence, 2005
Abstract

Cited by 31 (4 self)
When training data is sparse, more domain knowledge must be incorporated into the learning algorithm in order to reduce the effective size of the hypothesis space. This paper builds on previous work in which knowledge about qualitative monotonicities was formally represented and incorporated into learning algorithms (e.g., Clark & Matwin’s work with the CN2 rule learning algorithm). We show how to interpret knowledge of qualitative influences, and in particular of monotonicities, as constraints on probability distributions, and how to incorporate this knowledge into Bayesian network learning algorithms. We show that this yields improved accuracy, particularly with very small training sets.
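A concrete way to impose a monotonicity constraint on estimated CPT parameters is isotonic regression via the pool-adjacent-violators algorithm (PAVA). This is a generic sketch of the idea, not the paper's algorithm, and the counts are invented.

```python
# Illustrative sketch (not the paper's method): enforcing the qualitative
# constraint "P(Y=1 | X=x) is non-decreasing in x" on sparse-data CPT
# estimates using the pool-adjacent-violators algorithm (PAVA).

def pava(values, weights):
    """Weighted isotonic regression: merge adjacent blocks whose means
    violate the non-decreasing constraint, averaging by weight."""
    out = []  # blocks of (mean, weight, size)
    for v, w in zip(values, weights):
        out.append([v, w, 1])
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2, n2 = out.pop()
            m1, w1, n1 = out.pop()
            w = w1 + w2
            out.append([(m1 * w1 + m2 * w2) / w, w, n1 + n2])
    fitted = []
    for m, w, n in out:
        fitted.extend([m] * n)
    return fitted

# Invented sparse counts for Y=1 at three ordered levels of X:
counts_y1 = [2, 1, 8]
counts_n = [5, 5, 10]
raw = [a / b for a, b in zip(counts_y1, counts_n)]  # 0.4, 0.2, 0.8 (violates)
fit = pava(raw, counts_n)                           # monotone estimates
print(fit)
```

With only 20 samples the raw estimates violate the known monotonicity; PAVA pools the first two cells, which is exactly the kind of effective-hypothesis-space reduction the abstract describes.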
Intercausal Reasoning with Uninstantiated Ancestor Nodes
In Proceedings of the Ninth Annual Conference on Uncertainty in Artificial Intelligence (UAI-93), 1993
Abstract

Cited by 31 (13 self)
Intercausal reasoning is a common inference pattern involving probabilistic dependence among causes of an observed common effect. The sign of this dependence is captured by a qualitative property called product synergy. The current definition of product synergy is insufficient for intercausal reasoning when there are additional uninstantiated causes of the common effect. We propose a new definition of product synergy and prove its adequacy for intercausal reasoning with direct and indirect evidence for the common effect. The new definition is based on a new property, matrix half-positive semidefiniteness, a weakened form of matrix positive semidefiniteness.
Probabilistic Reasoning in Decision Support Systems: From Computation to Common Sense
1993
Abstract

Cited by 31 (14 self)
Most areas of engineering, science, and management use important tools based on probabilistic methods. The common thread running through the entire spectrum of these tools is aiding decision making under uncertainty: the choice of an interpretation of reality or the choice of a course of action. Although the importance of dealing with uncertainty in decision making is widely acknowledged, dissemination of probabilistic and decision-theoretic methods in Artificial Intelligence has been surprisingly slow. Opponents of probability theory have pointed out three major obstacles to applying it in computerized decision aids: (1) the counterintuitiveness of probabilistic inference, which makes it hard for system builders, experts, and users to translate knowledge into probabilistic form, to create knowledge bases, and to interpret results; (2) the quantitative character of probability theory, which implies the collection or assessment of vast quantities of numbers and, since these are not always readily available, raises questions about their quality; and (3) closely related to its quantitative character, the computational complexity of probabilistic inference. Its proponents, on the other hand, point
Graphoid properties of epistemic irrelevance and independence
2005
Abstract

Cited by 29 (5 self)
This paper investigates Walley’s concepts of epistemic irrelevance and epistemic independence for imprecise probability models. We study the mathematical properties of irrelevance and independence, and their relation to the graphoid axioms. Examples are given to show that epistemic irrelevance can violate the symmetry, contraction and intersection axioms, that epistemic independence can violate contraction and intersection, and that this accords with informal notions of irrelevance and independence.
Argumentation as a General Framework for Uncertain Reasoning
In Proceedings of the 9th Conference on Uncertainty in Artificial Intelligence, 1996
Abstract

Cited by 29 (8 self)
Argumentation is the process of constructing arguments about propositions, and of assigning statements of confidence to those propositions based on the nature and relative strength of their supporting arguments. The process is modelled as a labelled deductive system, in which propositions are doubly labelled with the grounds on which they are based and a representation of the confidence attached to the argument. Argument construction is captured by a generalised argument consequence relation based on the → fragment of minimal logic. Arguments can be aggregated by a variety of numeric and symbolic flattening functions. This approach appears to shed light on the common logical structure of a variety of quantitative, qualitative and defeasible uncertainty calculi.
Introduction
Probability theory is the most widely accepted mathematical framework for reasoning under uncertainty. However, questions about its universal applicability have often been raised. Before 1606, when Pascal...
Explaining "Explaining Away"
1994
Abstract

Cited by 26 (4 self)
Explaining away is a common pattern of reasoning in which the confirmation of one cause of an observed or believed event reduces the need to invoke alternative causes. The opposite of explaining away can also occur, in which the confirmation of one cause increases belief in another. We provide a general qualitative probabilistic analysis of intercausal reasoning, and identify the property of the interaction among the causes, product synergy, that determines which form of reasoning is appropriate. Product synergy extends the qualitative probabilistic network (QPN) formalism to support qualitative intercausal inference about the directions of change in probabilistic belief. The intercausal relation also justifies Occam's razor, facilitating pruning in the search for likely diagnoses. Appeared as a correspondence in IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(3):287–292, 1993. Portions of this paper originally appeared in Proceedings of the Second International Confe...
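The explaining-away pattern is easy to reproduce numerically on a small v-structure (two independent causes of one effect). The toy model below, with invented probabilities and a noisy-OR-style CPT, is only an illustration of the pattern the abstract analyses, not the paper's formalism.

```python
# Toy illustration of "explaining away" (all numbers invented).
# A and B are independent causes of effect E; observing E raises belief
# in A, while additionally observing B=1 lowers it back down.

P_A, P_B = 0.1, 0.1
p_e_table = {(0, 0): 0.01, (0, 1): 0.8, (1, 0): 0.8, (1, 1): 0.96}

def joint(a, b, e):
    """Full joint P(A=a, B=b, E=e) for the three binary variables."""
    pe = p_e_table[(a, b)]
    return ((P_A if a else 1 - P_A) * (P_B if b else 1 - P_B)
            * (pe if e else 1 - pe))

def p_a_given(**obs):
    """P(A=1 | obs) by brute-force enumeration; obs keys are 'b', 'e'."""
    num = den = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for e in (0, 1):
                world = {"a": a, "b": b, "e": e}
                if any(world[k] != v for k, v in obs.items()):
                    continue
                p = joint(a, b, e)
                den += p
                if a == 1:
                    num += p
    return num / den

print(p_a_given(e=1))        # E observed: belief in A rises above its prior
print(p_a_given(e=1, b=1))   # B also observed: A is "explained away"
```

With these numbers the posterior on A jumps from 0.1 to about 0.50 on seeing E, then falls to about 0.12 once B is also confirmed, which is the direction-of-change behaviour that product synergy characterises qualitatively.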
Complexity in Manufacturing Systems, Part 1: Analysis of Static Complexity
IIE Transactions, 1998
Abstract

Cited by 22 (0 self)
This paper studies static complexity in manufacturing systems. We enumerate factors influencing static complexity, and define a static complexity measure in terms of the processing requirements of the parts to be produced and of machine capabilities. The suggested measure of static complexity needs only the information available from production orders and process plans. The variation in static complexity is studied with respect to part similarity, system size, and product design changes. Finally, we present relationships between the static complexity measure and system performance.
1 Introduction
Manufacturing systems are often described as being complex [Pritsker, 1990, Lin, 1993]. The dynamic nature of the manufacturing environment greatly increases the number of decisions that need to be made, and system integration makes it difficult to predict the effect of a decision on future system performance. In fact, Upton [Upton, 1988] observes that many integrate...
Decision-theoretic specification of credal networks: a unified language for uncertain modeling with sets of Bayesian networks
 International Journal of Approximate Reasoning
Abstract

Cited by 22 (10 self)
Credal networks are models that extend Bayesian nets to deal with imprecision in probability, and can actually be regarded as sets of Bayesian nets. Credal nets appear to be a powerful means to represent and deal with many important and challenging problems in uncertain reasoning. We give examples to show that some of these problems can only be modeled by credal nets called non-separately specified. These, however, still lack a graphical representation language and updating algorithms. The situation is quite the opposite with separately specified credal nets, which have been the subject of much study and algorithmic development. This paper makes two major contributions. First, it delivers a new graphical language in which to formulate any type of credal network, both separately and non-separately specified. Second, it shows that any non-separately specified net represented in the new language can easily be transformed into an equivalent separately specified net defined over a larger domain. This result opens up a number of new prospects and concrete outcomes: first of all, it immediately enables the existing algorithms for separately specified credal nets to be applied to non-separately specified ones. We explore this possibility for the 2U algorithm, an algorithm for exact updating of singly connected credal nets, which our results extend to a class of non-separately specified models. We also consider the problem of inference in Bayesian networks when the reason that prevents some of the variables from being observed is unknown. The problem is first reformulated in the new graphical language, and then mapped into an equivalent problem on a separately specified net. This provides a first algorithmic approach to this kind of inference, which is also proved to be NP-hard by similar transformations based on our formalism.
Improving Big Plans
In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), 1998
Abstract

Cited by 19 (6 self)
Past research on assessing and improving plans in domains that contain uncertainty has focused on analytic techniques that are exponential in the length of the plan. Little work has been done on choosing from among the many ways in which a plan can be improved. We present the Improve algorithm, which simulates the execution of large, probabilistic plans. Improve runs a data mining algorithm on the execution traces to pinpoint the defects in the plan that most often lead to plan failure. Finally, Improve applies qualitative reasoning and plan adaptation algorithms to modify the plan to correct these defects. We have tested Improve on plans containing over 250 steps in an evacuation domain, produced by a domain-specific scheduling routine. In these experiments, the modified plans have over a 15% higher probability of achieving their goal than the original plan.
Introduction
Large, complex domains call for large, robust plans. However, today's state-of-the-art planning algorithms cannot eff...