Results 1–10 of 925
Loopy belief propagation for approximate inference: An empirical study. In:
 Proceedings of Uncertainty in AI
, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" the use of Pearl's polytree algorithm in a Bayesian network with loops can perform well in the context of errorcorrecting codes. The most dramatic instance of this is the near Shannonlimit performanc ..."
Cited by 676 (15 self)
the parents of each symptom were a random subset of the diseases. ... posterior marginals of all other nodes. Again we found that loopy belief propagation always converged, with the average number of iterations equal to 14.55. The results presented up until now show that loopy propagation performs well for a variety
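The behaviour the snippet reports (message passing iterated to convergence on a graph with loops) can be illustrated with a minimal sum-product sketch. The 3-node cycle, the pairwise potential, and the convergence threshold below are all made up for illustration; this is not the paper's experimental setup:

```python
import numpy as np

# Minimal sum-product loopy BP on a pairwise binary MRF with a single loop
# (a 3-cycle). The edge potential mildly favours agreement between neighbours.
edges = [(0, 1), (1, 2), (2, 0)]
psi = np.array([[1.2, 0.8],
                [0.8, 1.2]])          # same pairwise potential on every edge

# messages[(i, j)] is the message from node i to node j over states {0, 1}
messages = {(i, j): np.ones(2) for a, b in edges for i, j in [(a, b), (b, a)]}

def neighbours(n):
    return [b for a, b in edges if a == n] + [a for a, b in edges if b == n]

for it in range(100):
    new = {}
    for (i, j) in messages:
        # product of incoming messages to i, excluding the one coming from j
        incoming = np.ones(2)
        for k in neighbours(i):
            if k != j:
                incoming *= messages[(k, i)]
        msg = psi.T @ incoming         # sum over the states of node i
        new[(i, j)] = msg / msg.sum()  # normalise for numerical stability
    delta = max(np.abs(new[k] - messages[k]).max() for k in messages)
    messages = new
    if delta < 1e-8:                   # converged
        break

# belief at node 0: product of all incoming messages, normalised
belief = np.prod([messages[(k, 0)] for k in neighbours(0)], axis=0)
belief /= belief.sum()
```

With these symmetric potentials the beliefs settle to the uniform distribution; on less symmetric loopy graphs the fixed point is only an approximation of the true marginals, which is the phenomenon the study evaluates.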
Dynamics of Directed Boolean Networks under Generalized Elementary Cellular Automata Rules, with Power-Law Distributions and Popularity Assignment of Parent Nodes
"... Abstract: This study provides an analysis of the dynamics of fixedsize directed Boolean networks governed by generalizations of elementary cellular automata rules 22 and 126, under a powerlaw distribution of parent nodes and a popularity parent assignment. The analysis shows the existence of a two ..."
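A sketch of the kind of update the abstract describes. The generalization of the two rules to arbitrary in-degree is an assumption made here for illustration (rule 22 as "exactly one parent ON", rule 126 as "parents not all in the same state", which matches their behaviour on 3-cell neighbourhoods); the paper's power-law, popularity-based parent assignment is replaced by a small hand-picked network:

```python
# Directed Boolean network update under assumed generalizations of ECA rules
# 22 and 126 to arbitrary in-degree (not necessarily the authors' definitions).
RULES = {
    22:  lambda parents: sum(parents) == 1,                 # exactly one ON
    126: lambda parents: 0 < sum(parents) < len(parents),   # not all equal
}

def step(state, parent_map, rule):
    """Synchronously update every node from the states of its parents."""
    fire = RULES[rule]
    return {n: int(fire([state[p] for p in parents]))
            for n, parents in parent_map.items()}

# small hand-picked directed network: node -> list of its parent nodes
parent_map = {0: [1, 2], 1: [0], 2: [0, 1, 3], 3: [2]}
state = {0: 1, 1: 0, 2: 0, 3: 0}
for _ in range(4):
    state = step(state, parent_map, 126)
```

On this tiny network the rule-126 dynamics fall into a period-2 cycle; the study's interest is in how such attractor structure changes with the parent-degree distribution.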
Hierarchical edge bundles: Visualization of adjacency relations in hierarchical data
 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
, 2006
"... A compound graph is a frequently encountered type of data set. Relations are given between items, and a hierarchy is defined on the items as well. We present a new method for visualizing such compound graphs. Our approach is based on visually bundling the adjacency edges, i.e., nonhierarchical edge ..."
Cited by 271 (12 self)
bundling reduces visual clutter and also visualizes implicit adjacency edges between parent nodes that are the result of explicit adjacency edges between their respective child nodes. Furthermore, hierarchical edge bundling is a generic method which can be used in conjunction with existing tree
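The core routing idea behind the bundling can be sketched as follows. This is an illustration under assumed data structures, not the paper's implementation: an adjacency edge (u, v) is routed along the hierarchy path u → LCA(u, v) → v, and the tree nodes on that path become the control points of a spline (which the paper then straightens by a bundling-strength parameter):

```python
# child -> parent pointers of a tiny hand-made hierarchy rooted at "root"
parent = {"a": "root", "b": "root", "a1": "a", "a2": "a", "b1": "b"}

def ancestors(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path                                  # node, parent, ..., root

def control_path(u, v):
    """Tree nodes an adjacency edge (u, v) is routed along: u ... LCA ... v."""
    up, vp = ancestors(u), ancestors(v)
    vset = set(vp)
    lca = next(n for n in up if n in vset)       # lowest common ancestor
    head = up[:up.index(lca) + 1]                # u ... lca
    tail = vp[:vp.index(lca)][::-1]              # lca excluded, down to v
    return head + tail
```

Because every leaf-to-leaf edge shares the ancestor segments of its path with related edges, drawing splines over these control polygons makes the edges visually bundle together.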
Learning Bayesian Networks is NP-Complete
, 1996
"... Algorithms for learning Bayesian networks from data havetwo components: a scoring metric and a search procedure. The scoring metric computes a score reflecting the goodnessoffit of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman e ..."
Cited by 228 (8 self)
et al. (1995) introduce a Bayesian metric, called the BDe metric, that computes the relative posterior probability of a network structure given data. In this paper, we show that the search problem of identifying a Bayesian network, among those where each node has at most K parents, that has a
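The size of the search space behind this hardness result is easy to make concrete: with n variables and at most K parents per node, each node alone has sum over i ≤ K of C(n-1, i) candidate parent sets, before any interaction between nodes (e.g. acyclicity) is even considered. A quick illustrative calculation, with n = 20 chosen arbitrarily:

```python
from math import comb

def parent_sets_per_node(n, k):
    """Number of candidate parent sets for one node: subsets of the other
    n-1 variables of size at most k."""
    return sum(comb(n - 1, i) for i in range(k + 1))

# growth with the parent bound K for a modest n = 20 variables
counts = {k: parent_sets_per_node(20, k) for k in (1, 2, 3)}
```

Even at K = 3 a single node already has over a thousand candidate parent sets, which is why scoring-metric-plus-search approaches rely on heuristic search rather than enumeration.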
TL. Decision making with the analytic hierarchy process
 International Journal of Services Sciences (IJSSCI)
"... Abstract: Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measureme ..."
Cited by 166 (0 self)
to a given attribute. The judgements may be inconsistent; how to measure inconsistency and improve the judgements, when possible, to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding
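The synthesis step the snippet describes (multiply each local priority by the priority of its parent node, then add) can be sketched with made-up numbers for a two-criterion, two-alternative hierarchy; the weights below are illustrative, not derived from pairwise-comparison matrices as the full AHP would require:

```python
# priorities of the parent criteria (assumed, would come from judgements)
criteria_weights = {"cost": 0.6, "quality": 0.4}

# local priorities of the alternatives with respect to each criterion
local = {
    "cost":    {"A": 0.8, "B": 0.2},
    "quality": {"A": 0.2, "B": 0.8},
}

# synthesis: weight each local priority by its parent's priority and add
global_priority = {
    alt: sum(criteria_weights[c] * local[c][alt] for c in criteria_weights)
    for alt in ("A", "B")
}
```

Here alternative A ends up with global priority 0.56 and B with 0.44, and the global priorities still sum to one because the criteria weights and each local vector do.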
parents
, 2013
"... A Bayesian network (BN) [14, 19] is a combination of: • directed graph (DAG) G = (V, E), in which each node vi ∈ V corresponds to a random variable Xi (a gene, a trait, an environmental factor, etc.); • a global probability distribution over X = {Xi}, which can be split into simpler local probabilit ..."
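The definition quoted above, made concrete: the global distribution factorises into local conditional distributions, one per node given its parents. A toy three-node network X → Z ← Y over binary variables, with all probabilities made up for illustration:

```python
# local distributions of a tiny BN: X and Y have no parents, Z has both
P_X = {0: 0.6, 1: 0.4}
P_Y = {0: 0.7, 1: 0.3}
P_Z_given = {                       # P(Z = 1 | X = x, Y = y)
    (0, 0): 0.1, (0, 1): 0.5,
    (1, 0): 0.6, (1, 1): 0.9,
}

def joint(x, y, z):
    """Global distribution as the product of the local ones:
    P(x, y, z) = P(x) * P(y) * P(z | x, y)."""
    pz1 = P_Z_given[(x, y)]
    return P_X[x] * P_Y[y] * (pz1 if z == 1 else 1 - pz1)

# sanity check: the factorised joint sums to one over all assignments
total = sum(joint(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1))
```

The point of the factorisation is exactly this split: three small local tables stand in for a full joint table over 2³ assignments.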
A New Evolutionary System for Evolving Artificial Neural Networks
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 1996
"... This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP) [1], [2], [3]. Unlike most previous studies on evolving ANNs, this paper puts its emphasis on ev ..."
Cited by 202 (35 self)
on evolving ANN's behaviours. This is one of the primary reasons why EP is adopted. Five mutation operators proposed in EPNet reflect such an emphasis on evolving behaviours. Close behavioural links between parents and their offspring are maintained by various mutations, such as partial training and node
Bayesian networks: the parental synergy
 Proceedings of the Fourth European Workshop on Probabilistic Graphical Models
, 2008
"... In a Bayesian network, for any node its conditional probabilities given all possible combinations of values for its parent nodes are specified. In this paper a new notion, the parental synergy, is introduced which can be computed from these probabilities. This paper also conjectures a general expre ..."
Cited by 1 (1 self)
Learning Bayesian Networks is NP-Hard
, 1994
"... Algorithms for learning Bayesian networks from data have two components: a scoring metric and a search procedure. The scoring metric computes a score reflecting the goodnessoffit of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman et ..."
Cited by 194 (2 self)
there is a Bayesian network, among those where each node has at most k parents, that has a relative posterior probability greater than a given constant is NP-complete, when the BDe metric is used. 1 Introduction Recently, many researchers have begun to investigate methods for learning Bayesian networks
Node Deletion in the . . .
, 1994
"... The problem of node deletion in the hB \Pi tree, a multiattribute point data indexing method, is addressed. The hB \Pi tree is a modified hBtree [LS90] that provides concurrency and recovery and can be integrated in a general purpose Database Management System. First, we describe the hB \Pi ..."
parent node and a node that absorbs the contents of the sparse node), and one is deallocated (the sparse node). Keywords: indexing, multi-attribute access methods, B-trees, deletion algorithms, concurrency 1 Introduction The contents of a database can change over time not only in terms of new data
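The deletion pattern the snippet describes is the classic sparse-node merge of B-tree-family indexes. A generic one-level sketch (explicitly not the hBΠ-tree algorithm, whose node splitting, concurrency, and recovery details are far more involved): when a node drops below a minimum occupancy, a sibling absorbs its contents, the parent drops its reference, and the sparse node is deallocated:

```python
MIN_OCCUPANCY = 2   # assumed threshold below which a node counts as sparse

def delete_key(children, node, key):
    """children: list of sorted key-lists, the child nodes under one parent.
    Deletes key from children[node]; if that node becomes sparse, a sibling
    absorbs its contents and the sparse node is removed from the parent."""
    children[node].remove(key)
    if len(children[node]) < MIN_OCCUPANCY and len(children) > 1:
        sibling = node - 1 if node > 0 else node + 1
        children[sibling].extend(children[node])   # sibling absorbs contents
        children[sibling].sort()
        del children[node]                         # deallocate sparse node
    return children

children = [[1, 3], [5, 7, 9]]
children = delete_key(children, 0, 1)              # node 0 becomes sparse
```

A real index would also rebalance instead of merging when the sibling is full, and update separator keys in the parent; those steps are omitted to keep the sketch short.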