Results 1–10 of 25
Data Fusion in the Transferable Belief Model.
2000
Abstract

Cited by 52 (0 self)
When Shafer introduced his theory of evidence based on the use of belief functions, he proposed a rule to combine belief functions induced by distinct pieces of evidence. Since then, theoretical justifications of this so-called Dempster's rule of combination have been produced and the meaning of distinctness has been assessed. We will present practical applications where the fusion of uncertain data is well achieved by Dempster's rule of combination. It is essential that the meaning of the belief functions used to represent uncertainty be well fixed, as the adequacy of the rule depends strongly on a correct understanding of the context in which they are applied. Failing to distinguish between the upper and lower probabilities theory and the transferable belief model can lead to serious confusion, as Dempster's rule of combination is central in the transferable belief model, whereas it hardly fits with the upper and lower probabilities theory. Keywords: belief function, transferable beli...
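Dempster's rule itself is easy to state concretely. A minimal sketch in Python, representing mass functions as dicts from frozensets (focal sets) to masses; the two-element frame {A, B} and the particular mass values are invented for illustration:

```python
from itertools import product

def combine_dempster(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) with
    Dempster's rule: intersect focal sets, then renormalize by 1 - K,
    where K is the mass falling on the empty set (the conflict)."""
    raw = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        raw[a & b] = raw.get(a & b, 0.0) + wa * wb
    k = raw.pop(frozenset(), 0.0)  # conflict between the two sources
    if k >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / (1.0 - k) for s, w in raw.items()}

# Two distinct pieces of evidence over the (hypothetical) frame {A, B}
m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}
m2 = {frozenset("B"): 0.7, frozenset("AB"): 0.3}
m12 = combine_dempster(m1, m2)
```

Here the conflict K = 0.6 · 0.7 = 0.42 is discarded and the remaining mass renormalized, which is exactly the step whose adequacy the abstract says depends on the interpretation of the belief functions.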
The Dempster-Shafer theory of evidence: An alternative approach to multicriteria decision modeling
Omega, 2000
Abstract

Cited by 39 (2 self)
The objective of this paper is to describe the potential offered by the Dempster-Shafer theory (DST) of evidence as a promising improvement on "traditional" approaches to decision analysis. Dempster-Shafer techniques originated in the work of Dempster on the use of probabilities with upper and lower bounds. They have subsequently been popularised in the literature on Artificial Intelligence (AI) and Expert Systems, with particular emphasis placed on combining evidence from different sources. In the paper we introduce the basic concepts of the DST of evidence, briefly mentioning its origins and comparisons with the more traditional Bayesian theory. Following this we discuss recent developments of this theory, including analytical and application areas of interest. Finally we discuss developments via the use of an example incorporating DST with the Analytic Hierarchy Process.
On nonspecific evidence
International Journal of Intelligent Systems, 1993
Abstract

Cited by 29 (24 self)
When simultaneously reasoning with evidences about several different events, it is necessary to separate the evidence according to event. These events should then be handled independently. However, when propositions of evidences are weakly specified, in the sense that it may not be certain to which event they refer, this may not be directly possible. In this paper a criterion for partitioning evidences into subsets representing events is established. This criterion, derived from the conflict within each subset, involves minimising a criterion function for the overall conflict of the partition. An algorithm based on characteristics of the criterion function and an iterative optimisation among partitionings of evidences is proposed.
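The partitioning criterion can be illustrated with a toy version of the metaconflict idea: take the metaconflict of a partition as one minus the product of one minus each cluster's conflict, where a cluster's conflict is the mass an unnormalized conjunctive combination puts on the empty set. The exhaustive search and the four pieces of evidence below are invented for the sketch, not the paper's iterative algorithm:

```python
from itertools import product

def conj(m1, m2):
    """Unnormalized conjunctive combination: intersect focal sets and
    keep any mass landing on the empty set as conflict."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + wa * wb
    return out

def cluster_conflict(ms):
    acc = ms[0]
    for m in ms[1:]:
        acc = conj(acc, m)
    return acc.get(frozenset(), 0.0)  # mass on the empty set

def metaconflict(clusters):
    """1 - prod_i (1 - c_i), where c_i is the conflict within cluster i."""
    prod = 1.0
    for cluster in clusters:
        if cluster:
            prod *= 1.0 - cluster_conflict(cluster)
    return 1.0 - prod

def best_partition(evidence, q):
    """Try every assignment of evidence to q clusters and keep the one
    minimizing the metaconflict (only viable for small n)."""
    best, best_mcf = None, float("inf")
    for labels in product(range(q), repeat=len(evidence)):
        clusters = [[e for e, lab in zip(evidence, labels) if lab == c]
                    for c in range(q)]
        mcf = metaconflict(clusters)
        if mcf < best_mcf:
            best, best_mcf = labels, mcf
    return best, best_mcf

# Two pieces of evidence point at event A, two at event B (frame {A, B})
eA = {frozenset("A"): 0.8, frozenset("AB"): 0.2}
eB = {frozenset("B"): 0.8, frozenset("AB"): 0.2}
labels, mcf = best_partition([eA, eA, eB, eB], q=2)
```

The minimizing partition separates the A-evidence from the B-evidence, since mixing them creates intra-cluster conflict.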
Data Association in Multi-Target Detection Using the Transferable Belief Model.
International Journal of Intelligent Systems, 2000
Abstract

Cited by 24 (0 self)
In the transferable belief model, a model for the quantified representation of beliefs, some masses can be allocated to the empty set. This reflects the conflict between the sources of information. This quantified conflict can be used to solve the problem of data association in a multitarget detection problem. We present and illustrate the procedure by studying an example based on the detection of submarines. Their number and the association of each sensor to a particular source are determined by the procedure. Keywords: Transferable belief model, belief functions, data association, Dempster-Shafer theory, conflict of beliefs, fusion. 1 Introduction. Multisensor data fusion is the data processing function that combines data collected from systems comprising several sensors. These multisensor systems are characterized by the following features that must be taken into account: the different sensors observe the same scene, or at least partially (overlapping fields ...
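The conflict-based association can be sketched as follows: pair reports from two sensors so that the total mass a conjunctive combination would place on the empty set is minimal. The target classes {sub, surface} and the mass values below are invented, and the exhaustive pairing search is only an illustration of the criterion:

```python
from itertools import product, permutations

def conflict(m1, m2):
    """Mass that the conjunctive combination puts on the empty set."""
    return sum(wa * wb
               for (a, wa), (b, wb) in product(m1.items(), m2.items())
               if not (a & b))

def associate(reports_s1, reports_s2):
    """Try every one-to-one pairing of sensor-1 reports with sensor-2
    reports and keep the pairing with the least total conflict."""
    n = len(reports_s1)
    best, best_c = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(conflict(reports_s1[i], reports_s2[perm[i]]) for i in range(n))
        if c < best_c:
            best, best_c = perm, c
    return best, best_c

# Hypothetical reports over target classes {sub, surface}
s1 = [{frozenset({"sub"}): 0.9, frozenset({"sub", "surface"}): 0.1},
      {frozenset({"surface"}): 0.8, frozenset({"sub", "surface"}): 0.2}]
s2 = [{frozenset({"surface"}): 0.7, frozenset({"sub", "surface"}): 0.3},
      {frozenset({"sub"}): 0.85, frozenset({"sub", "surface"}): 0.15}]
pairing, total_conflict = associate(s1, s2)
```

Reports about the same target combine with little or no conflict, so the minimum-conflict pairing matches s1's sub report with s2's sub report.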
Dempster-Shafer clustering using Potts spin mean field theory
Soft Computing, 2001
Abstract

Cited by 18 (14 self)
In this article we investigate a problem within Dempster-Shafer theory where 2^q − 1 pieces of evidence are clustered into q clusters by minimizing a metaconflict function, or equivalently, by minimizing the sum of weights of conflict over all clusters. Previously one of us developed a method based on a Hopfield and Tank model. However, for very large problems we need a method with lower computational complexity. We demonstrate that the weight of conflict of evidence can, as an approximation, be linearized and mapped to an antiferromagnetic Potts spin model. This facilitates efficient numerical solution, even for large problem sizes. Optimal or nearly optimal solutions are found for Dempster-Shafer clustering benchmark tests with a time complexity of approximately O(N^2 log^2 N). Furthermore, an isomorphism between the antiferromagnetic Potts spin model and a graph optimization problem is shown. The graph model has dynamic variables living on the links, which have a priori probabilities that are directly related to the pairwise conflict between pieces of evidence. Hence, the relations between three different models are shown.
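The Potts mapping can be illustrated with a greedy, zero-temperature stand-in for the paper's mean-field annealing: minimize the antiferromagnetic energy E = Σ_{i<j} J_ij δ(s_i, s_j), where J_ij is the pairwise conflict between pieces of evidence i and j. The conflict matrix below is made up for the sketch:

```python
def potts_cluster(J, q, sweeps=100):
    """Zero-temperature coordinate descent on an antiferromagnetic Potts
    energy E = sum_{i<j} J[i][j] * [s_i == s_j]: each item moves to the
    cluster where its summed pairwise conflict is smallest. (The paper
    uses mean-field annealing; this greedy version only illustrates the
    energy being minimized.)"""
    n = len(J)
    s = [0] * n
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            costs = [sum(J[i][j] for j in range(n) if j != i and s[j] == c)
                     for c in range(q)]
            best = min(range(q), key=costs.__getitem__)
            if best != s[i]:
                s[i], changed = best, True
        if not changed:
            break
    return s

# Pairwise conflicts: items 0,1 agree with each other, as do 2,3;
# the two groups conflict strongly (values invented for illustration)
J = [[0.0, 0.1, 0.9, 0.9],
     [0.1, 0.0, 0.9, 0.9],
     [0.9, 0.9, 0.0, 0.1],
     [0.9, 0.9, 0.1, 0.0]]
labels = potts_cluster(J, q=2)
```

High-conflict pairs end up in different clusters, which is exactly the antiferromagnetic ("unlike spins preferred") behavior the mapping exploits.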
Conflict-based Force Aggregation
Proceedings of the Sixth International Command and Control Research and Technology Symposium (6th ICCRTS), 2001
Abstract

Cited by 18 (7 self)
In this paper we present an application where we put together two methods for clustering and classification into a force aggregation method. Both methods are based on conflicts between elements. These methods work with different types of elements (intelligence reports, vehicles, military units) on different hierarchical levels, using specific conflict assessment methods on each level. We use Dempster-Shafer theory for conflict calculation between elements, Dempster-Shafer clustering for clustering these elements, and templates for classification. The result of these processes is a complete force aggregation on all handled levels.
Managing inconsistent intelligence
Proceedings of the Third International Conference on Information Fusion (FUSION 2000), 2000
Abstract

Cited by 13 (12 self)
In this paper we demonstrate that it is possible to manage intelligence in constant time, as a preprocess to information fusion, through a series of processes dealing with issues such as clustering reports, ranking reports with respect to importance, extraction of prototypes from clusters, and immediate classification of newly arriving intelligence reports. These methods are used when intelligence reports arrive that concern different events that should be handled independently, and it is not known a priori to which event each intelligence report is related. We use clustering that runs as a back-end process to partition the intelligence into subsets representing the events, and in parallel, a fast classification that runs as a front-end process in order to put the newly arriving intelligence into its correct information fusion process.
Fast Dempster-Shafer clustering using a neural network structure
Proceedings of the Seventh International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU '98), 1998
Abstract

Cited by 12 (10 self)
In this paper we study a problem within Dempster-Shafer theory where 2^n − 1 pieces of evidence are clustered by a neural structure into n clusters. The clustering is done by minimizing a metaconflict function. Previously we developed a method based on iterative optimization. However, for large-scale problems we need a method with lower computational complexity. The neural structure was found to be effective and much faster than iterative optimization for larger problems. While the growth in metaconflict was faster for the neural structure compared with iterative optimization in medium-sized problems, the metaconflict per cluster and evidence was moderate. The neural structure was able to find a global minimum over ten runs for problem sizes up to six clusters.
Finding a posterior domain probability distribution by specifying nonspecific evidence
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1995
Abstract

Cited by 11 (11 self)
This article is an extension of the results of two earlier articles. In [J. Schubert, "On nonspecific evidence", Int. J. Intell. Syst. 8 (1993) 711-725] we established within Dempster-Shafer theory a criterion function called the metaconflict function. With this criterion we can partition into subsets a set of several pieces of evidence with propositions that are weakly specified in the sense that it may be uncertain to which event a proposition is referring. In a second article [J. Schubert, "Specifying nonspecific evidence", in "Cluster-based specification techniques in Dempster-Shafer theory for an evidential intelligence analysis of multiple target tracks", Ph.D. Thesis, TRITA-NA-9410, Royal Institute of Technology, Stockholm, 1994, ISBN 91-7170-801-4] we not only found the most plausible subset for each piece of evidence, we also found the plausibility for every subset that this piece of evidence belongs to the subset. In this article we aim to find a posterior probability distribution regarding the number of subsets. We use the idea that each piece of evidence in a subset supports the existence of that subset to the degree that this piece of evidence supports anything at all. From this we can derive a bpa (basic probability assignment) that is concerned with the question of how many subsets we have. That bpa can then be combined with a given prior domain probability distribution in order to obtain the sought-after posterior domain distribution.
A Realistic (Non-Associative) Logic and a Possible Explanation of the 7±2 Law
2000
Abstract

Cited by 11 (9 self)
When we know the subjective probabilities (degrees of belief) p1 and p2 of two statements S1 and S2, and we have no information about the relationship between these statements, then the probability of S1 & S2 can take any value from the interval [max(p1 + p2 − 1, 0), min(p1, p2)]. If we must select a single number from this interval, the natural idea is to take its midpoint. The corresponding "and" operation p1 & p2 := (1/2) · (max(p1 + p2 − 1, 0) + min(p1, p2)) is not associative. However, since the largest possible non-associativity degree |(a & b) & c − a & (b & c)| is equal to 1/9, this non-associativity is negligible if realistic "granular" degrees of belief have granules of width 1/9. This may explain why humans are most comfortable with 9 items to choose from (the famous "7 plus minus 2" law). We also show that the use of interval computations can simplify the (rather complicated) proofs. 1 In Expert Systems, We Need Estimates for the Degree of...
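The midpoint operation and the 1/9 bound are simple to probe numerically; a sketch (the 1/30 grid resolution is an arbitrary choice, and the grid search only bounds the non-associativity from below):

```python
def and_mid(p1, p2):
    """Midpoint of the Frechet interval [max(p1+p2-1, 0), min(p1, p2)]
    for Prob(S1 & S2) when the dependence between S1 and S2 is unknown."""
    return 0.5 * (max(p1 + p2 - 1.0, 0.0) + min(p1, p2))

# Grid-search the non-associativity degree |(a & b) & c - a & (b & c)|;
# by the paper's bound it can never exceed 1/9.
grid = [i / 30.0 for i in range(31)]
worst = max(abs(and_mid(and_mid(a, b), c) - and_mid(a, and_mid(b, c)))
            for a in grid for b in grid for c in grid)
```

For example, at (a, b, c) = (2/3, 2/3, 1/3) the two groupings give 1/6 and 1/12, so the grid already exhibits a non-associativity of 1/12, while staying under the 1/9 bound.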