Results 1-10 of 15
On nonspecific evidence
International Journal of Intelligent Systems, 1993
Abstract
Cited by 29 (24 self)
When simultaneously reasoning with evidences about several different events it is necessary to separate the evidence according to event. These events should then be handled independently. However, when propositions of evidences are weakly specified in the sense that it may not be certain to which event they are referring, this may not be directly possible. In this paper a criterion for partitioning evidences into subsets representing events is established. This criterion, derived from the conflict within each subset, involves minimising a criterion function for the overall conflict of the partition. An algorithm based on characteristics of the criterion function and an iterative optimisation among partitionings of evidences is proposed.
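The criterion builds on Dempster's notion of conflict between pieces of evidence. As a minimal sketch (ours, not the paper's algorithm), the degree of conflict K between two basic probability assignments, represented here as dicts from focal elements (frozensets) to masses, is the total mass committed to contradictory pairs:

```python
from itertools import product

def conflict(m1, m2):
    """Dempster's degree of conflict K between two basic probability
    assignments over a common frame: the total mass assigned to pairs
    of focal elements with empty intersection."""
    return sum(a * b
               for (s1, a), (s2, b) in product(m1.items(), m2.items())
               if not (s1 & s2))

# Evidence 1 mostly supports A; evidence 2 mostly supports B.
m1 = {frozenset("A"): 0.8, frozenset("AB"): 0.2}
m2 = {frozenset("B"): 0.6, frozenset("AB"): 0.4}
k = conflict(m1, m2)  # only the (A, B) pair conflicts: 0.8 * 0.6
```

High conflict within a subset suggests its evidences refer to different events, which is what a criterion function over the partition's overall conflict penalizes.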
Cluster-based specification techniques in Dempster-Shafer theory
Symbolic and Quantitative Approaches to Reasoning and Uncertainty, C. Froidevaux and J. Kohlas, Eds., Proceedings of the European Conference (ECSQARU'95), 1995
Abstract
Cited by 25 (12 self)
When reasoning with uncertainty there are many situations where evidences are not only uncertain but their propositions may also be weakly specified in the sense that it may not be certain to which event a proposition is referring. It is then crucial not to combine such evidences in the mistaken belief that they are referring to the same event. This situation would become manageable if the evidences could be clustered into subsets representing events that should be handled separately. In an earlier article we established within Dempster-Shafer theory a criterion function called the metaconflict function. With this criterion we can partition a set of evidences into subsets, each subset representing a separate event. In this article we not only find the most plausible subset for each piece of evidence, we also find, for every subset, the plausibility that the evidence belongs to that subset. Also, when the number of subsets is uncertain, we aim to find a posterior probability distribution regarding the number of subsets.
Dempster-Shafer clustering using Potts spin mean field theory
Soft Computing, 2001
Abstract
Cited by 18 (14 self)
In this article we investigate a problem within Dempster-Shafer theory where 2^q − 1 pieces of evidence are clustered into q clusters by minimizing a metaconflict function, or equivalently, by minimizing the sum of weight of conflict over all clusters. Previously one of us developed a method based on a Hopfield and Tank model. However, for very large problems we need a method with lower computational complexity. We demonstrate that the weight of conflict of evidence can, as an approximation, be linearized and mapped to an antiferromagnetic Potts spin model. This facilitates efficient numerical solution, even for large problem sizes. Optimal or nearly optimal solutions are found for Dempster-Shafer clustering benchmark tests with a time complexity of approximately O(N^2 log^2 N). Furthermore, an isomorphism between the antiferromagnetic Potts spin model and a graph optimization problem is shown. The graph model has dynamic variables living on the links, with a priori probabilities that are directly related to the pairwise conflict between pieces of evidence. Hence, the relations between three different models are shown.
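The mapping can be illustrated with a toy version: treat each piece of evidence as a Potts spin whose state is its cluster, couple spins antiferromagnetically with strength equal to their pairwise conflict, and minimize the energy. This sketch uses simulated annealing with a schedule of our own choosing, not the paper's mean-field method:

```python
import math
import random

def potts_cluster(conf, q, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    """Toy antiferromagnetic Potts model: spin s[i] in {0..q-1} is the
    cluster of evidence i; the energy is the sum of pairwise conflicts
    conf[i][j] inside each cluster (a linearized stand-in for the
    metaconflict function). Minimized by simulated annealing."""
    rng = random.Random(seed)
    n = len(conf)
    s = [rng.randrange(q) for _ in range(n)]
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        new = rng.randrange(q)
        # energy change of moving evidence i to cluster `new`
        d = sum(conf[i][j] * ((s[j] == new) - (s[j] == s[i]))
                for j in range(n) if j != i)
        if d <= 0 or rng.random() < math.exp(-d / t):
            s[i] = new
        t *= cooling
    return s

# Evidences 0/1 conflict strongly, as do 2/3: a good partition separates them.
conf = [[0.0, 0.9, 0.0, 0.0],
        [0.9, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.9],
        [0.0, 0.0, 0.9, 0.0]]
spins = potts_cluster(conf, q=2)
```

At low temperature any move that splits a conflicting pair is accepted, so conflicting evidences end up in different clusters.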
Fast Dempster-Shafer clustering using a neural network structure
Proceedings of the Seventh International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'98), 1998
Abstract
Cited by 12 (10 self)
In this paper we study a problem within Dempster-Shafer theory where 2^n − 1 pieces of evidence are clustered by a neural structure into n clusters. The clustering is done by minimizing a metaconflict function. Previously we developed a method based on iterative optimization. However, for large-scale problems we need a method with lower computational complexity. The neural structure was found to be effective and much faster than iterative optimization for larger problems. While the growth in metaconflict was faster for the neural structure than for iterative optimization in medium-sized problems, the metaconflict per cluster and evidence was moderate. The neural structure was able to find a global minimum over ten runs for problem sizes up to six clusters.
Applying data mining and machine learning techniques to submarine intelligence analysis
Proceedings of the Third International Conference on Knowledge Discovery and Data Mining (KDD'97), 1997
Abstract
Cited by 12 (6 self)
We describe how specialized database technology and data analysis methods were applied by the Swedish defense to help deal with the violation of Swedish marine territory by foreign submarine intruders during the Eighties and early Nineties. Among the several approaches tried, some yielded interesting information, although most of the key questions remain unanswered. We conclude with a survey of belief-function- and genetic-algorithm-based methods which were proposed to support interpretation of intelligence reports and prediction of future submarine positions, respectively.
Finding a posterior domain probability distribution by specifying nonspecific evidence
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1995
Abstract
Cited by 11 (11 self)
This article is an extension of the results of two earlier articles. In [J. Schubert, "On nonspecific evidence", Int. J. Intell. Syst. 8 (1993) 711-725] we established within Dempster-Shafer theory a criterion function called the metaconflict function. With this criterion we can partition into subsets a set of several pieces of evidence with propositions that are weakly specified in the sense that it may be uncertain to which event a proposition is referring. In a second article [J. Schubert, "Specifying nonspecific evidence", in "Cluster-based specification techniques in Dempster-Shafer theory for an evidential intelligence analysis of multiple target tracks", Ph.D. Thesis, TRITA-NA-9410, Royal Institute of Technology, Stockholm, 1994, ISBN 91-7170-801-4] we not only found the most plausible subset for each piece of evidence, we also found the plausibility for every subset that this piece of evidence belongs to the subset. In this article we aim to find a posterior probability distribution regarding the number of subsets. We use the idea that each piece of evidence in a subset supports the existence of that subset to the degree that this piece of evidence supports anything at all. From this we can derive a bpa that is concerned with the question of how many subsets we have. That bpa can then be combined with a given prior domain probability distribution in order to obtain the sought-after posterior domain distribution.
A Realistic (Non-Associative) Logic and a Possible Explanation of the 7±2 Law
2000
Abstract
Cited by 11 (9 self)
When we know the subjective probabilities (degrees of belief) p1 and p2 of two statements S1 and S2, and we have no information about the relationship between these statements, then the probability of S1 & S2 can take any value from the interval [max(p1 + p2 − 1, 0), min(p1, p2)]. If we must select a single number from this interval, the natural idea is to take its midpoint. The corresponding "and" operation p1 & p2 = (1/2) · (max(p1 + p2 − 1, 0) + min(p1, p2)) is not associative. However, since the largest possible non-associativity degree |(a & b) & c − a & (b & c)| is equal to 1/9, this non-associativity is negligible if the realistic "granular" degrees of belief have granules of width 1/9. This may explain why humans are most comfortable with 9 items to choose from (the famous "7 plus minus 2" law). We also show that the use of interval computations can simplify the (rather complicated) proofs.
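The midpoint "and" operation is a two-liner, and its non-associativity is easy to exhibit numerically (the input values below are our own illustration; the abstract only states the maximal gap of 1/9):

```python
def and_op(p1, p2):
    """Midpoint of the interval of possible values of P(S1 & S2)
    given only the marginal degrees of belief p1 and p2."""
    return 0.5 * (max(p1 + p2 - 1.0, 0.0) + min(p1, p2))

# Grouping matters: (0.9 & 0.5) & 0.5 differs from 0.9 & (0.5 & 0.5).
left = and_op(and_op(0.9, 0.5), 0.5)   # approx. 0.225
right = and_op(0.9, and_op(0.5, 0.5))  # approx. 0.2
```

The gap here is 0.025, comfortably below the 1/9 bound quoted in the abstract.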
Decision Making Under Interval Probabilities
1998
Abstract
Cited by 11 (3 self)
If we know the probabilities p1, …, pn of different situations s1, …, sn, then we can choose a decision A_i for which the expected benefit C_i = p1 · c_i1 + … + pn · c_in takes the largest possible value, where c_ij denotes the benefit of decision A_i in situation s_j. In many real-life situations, however, we do not know the exact values of the probabilities p_j; we only know the intervals p_j = [p_j^−, p_j^+] of possible values of these probabilities. In order to make decisions under such interval probabilities, we would like to generalize the notion of expected benefit to interval probabilities. In this paper, we show that natural requirements lead to a unique (and easily computable) generalization. Thus, we have a natural way of decision making under interval probabilities.
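The setting is easy to make concrete. The paper derives a specific axiomatic generalization; as a simpler illustration of what interval probabilities leave undetermined, here are the naive bounds on the expected benefit over all probability vectors compatible with the intervals, computed greedily:

```python
def expected_benefit_bounds(c, lo, hi):
    """Bounds on the expected benefit sum(p_j * c_j) over all probability
    vectors p with lo[j] <= p[j] <= hi[j] and sum(p) == 1.
    Assumes the box is feasible: sum(lo) <= 1 <= sum(hi)."""
    def extreme(sign):
        p = list(lo)
        slack = 1.0 - sum(lo)
        # pour the remaining probability mass into the best (sign=+1)
        # or worst (sign=-1) outcomes first
        for j in sorted(range(len(c)), key=lambda j: sign * c[j], reverse=True):
            add = min(hi[j] - lo[j], slack)
            p[j] += add
            slack -= add
        return sum(pj * cj for pj, cj in zip(p, c))
    return extreme(-1), extreme(+1)  # (min, max)

# Benefit 10 in situation 1, 0 in situation 2; each p_j in [0.2, 0.8].
bounds = expected_benefit_bounds([10.0, 0.0], [0.2, 0.2], [0.8, 0.8])  # (2.0, 8.0)
```

Any single-valued generalization, including the paper's, must pick a point inside this interval.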
On ρ in a decision-theoretic apparatus of Dempster-Shafer theory
International Journal of Approximate Reasoning, 1995
Abstract
Cited by 9 (1 self)
Thomas M. Strat has developed a decision-theoretic apparatus for Dempster-Shafer theory (Decision analysis using belief functions, Intern. J. Approx. Reason. 4(5/6), 391-417, 1990). In this apparatus, expected utility intervals are constructed for different choices. The choice with the highest expected utility is preferable to others. However, to find the preferred choice when the expected utility interval of one choice is included in that of another, it is necessary to interpolate a discerning point in the intervals. This is done by the parameter ρ, defined as the probability that the ambiguity about the utility of every nonsingleton focal element will turn out as favorable as possible. If there are several different decision makers, we might sometimes be more interested in having the highest expected utility among the decision makers than in only maximizing our own expected utility regardless of the choices made by other decision makers. The preference of each choice is then determined by the probability of yielding the highest expected utility. This probability is equal to the maximal interval length of ρ under which an alternative is preferred. We must here take into account not only the choices already made by other decision makers but also the rational choices we can assume to be made by later decision makers. In Strat's apparatus, an assumption, unwarranted by the evidence at hand, has to be made about the value of ρ. We demonstrate that no such assumption is necessary. It is sufficient to assume a uniform probability distribution for ρ to be able to discern the most preferable choice. We discuss when this approach is justifiable.
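The interval-length interpretation can be sketched numerically. Assuming each alternative's discerning point is E_lower + ρ·(E_upper − E_lower) with the same ρ applied to both intervals (a simplifying assumption of ours; it is one reading of the abstract, not the paper's full apparatus), the probability under a uniform ρ that alternative 1 beats alternative 2 is the length of the ρ-range where its point is larger:

```python
def preference_probability(i1, i2):
    """P(alternative 1's discerning point exceeds alternative 2's) when
    rho ~ Uniform(0, 1) is applied to both expected utility intervals,
    each given as (E_lower, E_upper)."""
    (a1, b1), (a2, b2) = i1, i2
    d = (b1 - a1) - (b2 - a2)  # difference of interval lengths (slope in rho)
    c = a2 - a1                # offset: how far behind alternative 1 starts
    if d == 0:
        # equal-length intervals never cross; ties count against alt 1
        return 1.0 if c < 0 else 0.0
    r = c / d                  # the rho at which the two points cross
    share = 1.0 - r if d > 0 else r
    return max(0.0, min(1.0, share))

# A wide interval [0, 1] vs a narrow one [0.4, 0.6]: the points cross
# at rho = 0.5, so each alternative is preferred on half the rho-range.
p = preference_probability((0.0, 1.0), (0.4, 0.6))  # 0.5
```

No point value of ρ has to be assumed; the preference is read off as a length, which is the thrust of the abstract's argument.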
Belief logic programming: Uncertainty reasoning with correlation of evidence
In Intl. Conf. on Logic Programming and Nonmonotonic Reasoning (LPNMR), 2009
Abstract
Cited by 2 (2 self)
Belief Logic Programming (BLP) is a novel form of quantitative logic programming in the presence of uncertain and inconsistent information, designed to combine and correlate evidence obtained from non-independent information sources. BLP has a non-monotonic semantics based on the concept of belief combination functions and is inspired by Dempster-Shafer theory of evidence. Most importantly, unlike previous efforts to integrate uncertainty and logic programming, BLP can correlate structural information contained in rules and provides more accurate certainty estimates. The results are illustrated via simple, yet realistic examples of rule-based Web service integration.