Results 1–9 of 9
Efficient Belief Propagation for Utility Maximization and Repeated Inference
Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), 2010
Cited by 14 (0 self)
Abstract
Many problems require repeated inference on probabilistic graphical models, with different values for evidence variables or other changes. Examples of such problems include utility maximization, MAP inference, online and interactive inference, parameter and structure learning, and dynamic inference. Since small changes to the evidence typically only affect a small region of the network, repeatedly performing inference from scratch can be massively redundant. In this paper, we propose expanding frontier belief propagation (EFBP), an efficient approximate algorithm for probabilistic inference with incremental changes to the evidence (or model). EFBP is an extension of loopy belief propagation (BP) where each run of inference reuses results from the previous ones, instead of starting from scratch with the new evidence; messages are only propagated in regions of the network affected by the changes. We provide theoretical guarantees bounding the difference in beliefs generated by EFBP and standard BP, and apply EFBP to the problem of expected utility maximization in influence diagrams. Experiments on viral marketing and combinatorial auction problems show that EFBP can converge much faster than BP without significantly affecting the quality of the solutions.
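The incremental idea is easy to sketch in isolation. The toy code below (a hypothetical illustration, not the authors' EFBP implementation) runs sum-product BP on a small chain MRF, then handles an evidence change by restarting from the previous message fixed point and re-propagating only from the changed node, expanding the frontier whenever a message moves by more than a tolerance:

```python
import numpy as np

def bp_sweep(unary, pair, msgs, active):
    """Recompute outgoing messages of the 'active' nodes on a chain MRF.
    Returns the set of neighbors whose incoming messages changed
    (the expanding frontier)."""
    n = len(unary)
    frontier = set()
    for i in sorted(active):
        for j in (i - 1, i + 1):
            if not 0 <= j < n:
                continue
            b = unary[i].copy()
            for k in (i - 1, i + 1):           # incoming messages except from j
                if 0 <= k < n and k != j:
                    b = b * msgs[(k, i)]
            m = pair @ b                        # marginalize node i out
            m = m / m.sum()
            if np.abs(m - msgs[(i, j)]).max() > 1e-9:
                msgs[(i, j)] = m
                frontier.add(j)
    return frontier

def run_bp(unary, pair, msgs, active):
    while active:                               # propagate until no message moves
        active = bp_sweep(unary, pair, msgs, active)

def beliefs(unary, pair, msgs):
    n = len(unary)
    out = []
    for i in range(n):
        b = unary[i].copy()
        for k in (i - 1, i + 1):
            if 0 <= k < n:
                b = b * msgs[(k, i)]
        out.append(b / b.sum())
    return out

n = 6
unary = [np.array([0.6, 0.4]) for _ in range(n)]
pair = np.array([[0.8, 0.2], [0.2, 0.8]])
msgs = {(i, j): np.full(2, 0.5)
        for i in range(n) for j in (i - 1, i + 1) if 0 <= j < n}
run_bp(unary, pair, msgs, set(range(n)))        # initial inference from scratch

unary[0] = np.array([0.95, 0.05])               # evidence change at node 0
run_bp(unary, pair, msgs, {0})                  # incremental: frontier starts at 0
incremental = beliefs(unary, pair, msgs)

fresh = {(i, j): np.full(2, 0.5)
         for i in range(n) for j in (i - 1, i + 1) if 0 <= j < n}
run_bp(unary, pair, fresh, set(range(n)))       # from-scratch rerun, for comparison
scratch = beliefs(unary, pair, fresh)
```

On a chain the fixed point is unique, so the incremental restart reaches the same beliefs as a from-scratch run while touching far fewer messages; the paper's contribution is bounding the belief difference in the loopy case, where this is not automatic.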
Efficient Lifting for Online Probabilistic Inference
Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), 2010
Cited by 8 (1 self)
Abstract
Lifting can greatly reduce the cost of inference on first-order probabilistic graphical models, but constructing the lifted network can itself be quite costly. In online applications (e.g., video segmentation), repeatedly constructing the lifted network for each new inference can be extremely wasteful, because the evidence typically changes little from one inference to the next. The same is true in many other problems that require repeated inference, such as utility maximization, MAP inference, interactive inference, and parameter and structure learning. In this paper, we propose an efficient algorithm for updating the structure of an existing lifted network with incremental changes to the evidence. This allows us to construct the lifted network once for the initial inference
Incremental knowledge base construction using DeepDive
Proceedings of the VLDB Endowment (PVLDB), 2015
Cited by 7 (3 self)
Abstract
Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names used for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the trade-off space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality.
Lifted Belief Propagation: Pairwise Marginals and Beyond
Cited by 5 (5 self)
Abstract
Lifted belief propagation (LBP) can be extremely fast at computing approximate marginal probability distributions over single ground atoms and neighboring ones in the underlying graphical model. It does not, however, prescribe a way to compute joint distributions over pairs, triples, or k-tuples of distant ground atoms. In this paper, we present an algorithm, called conditioned LBP, for approximating these distributions. Essentially, we select variables one at a time for conditioning, running lifted belief propagation after each selection. This naive solution, however, recomputes the lifted network in each step from scratch, often canceling the benefits of lifted inference. We show how to avoid this by efficiently computing the lifted network for each conditioning directly from the one already known for the single-node marginals. Our experimental results validate that significant efficiency gains are possible and illustrate the potential for second-order parameter estimation of Markov logic networks.
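The conditioning step rests on the chain rule: a pairwise joint factors as P(X=x, Y=y) = P(X=x) · P(Y=y | X=x), where the conditional comes from clamping X to x and re-running marginal inference. A minimal sketch of that outer loop, with a brute-force marginal oracle standing in for (lifted) BP — all names here are illustrative, not from the paper:

```python
def pair_joint(marginal, var_x, var_y, states):
    """Joint over (var_x, var_y) by clamping var_x one state at a time.
    'marginal(var, evidence)' returns {state: prob} for one variable."""
    p_x = marginal(var_x, {})
    joint = {}
    for x in states:
        p_y_given_x = marginal(var_y, {var_x: x})   # one inference run per clamp
        for y in states:
            joint[(x, y)] = p_x[x] * p_y_given_x[y]
    return joint

# Brute-force oracle over a tiny two-variable table, just to check the loop.
P = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def marginal(var, evidence):
    acc = {}
    for (a, b), p in P.items():
        assign = {'A': a, 'B': b}
        if all(assign[v] == s for v, s in evidence.items()):
            acc[assign[var]] = acc.get(assign[var], 0.0) + p
    z = sum(acc.values())
    return {s: p / z for s, p in acc.items()}

joint = pair_joint(marginal, 'A', 'B', (0, 1))      # recovers P exactly here
```

Each clamp triggers a fresh inference run; the paper's point is that when that run is lifted BP, the lifted network for the clamped model can be derived from the unclamped one instead of being rebuilt from scratch.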
Efficient Sequential Clamping for Lifted Message Passing
Cited by 3 (2 self)
Abstract
Lifted message passing approaches can be extremely fast at computing approximate marginal probability distributions over single variables and neighboring ones in the underlying graphical model. They do not, however, prescribe a way to solve more complex inference tasks such as computing joint marginals for k-tuples of distant random variables or satisfying assignments of CNFs. A popular solution in these cases is to turn the complex inference task into a sequence of simpler ones by selecting and clamping variables one at a time and running lifted message passing again after each selection. This naive solution, however, recomputes the lifted network in each step from scratch, often canceling the benefits of lifted inference. We show how to avoid this by efficiently computing the lifted network for each conditioning directly from the one already known for the single-node marginals. Our experiments show that significant efficiency gains are possible for decimation guided by lifted message passing, both for SAT and for sampling.
Tractable Operations for Arithmetic Circuits of Probabilistic Models
Abstract
We consider tractable representations of probability distributions and the polytime operations they support. In particular, we consider a recently proposed arithmetic circuit representation, the Probabilistic Sentential Decision Diagram (PSDD). We show that PSDDs support a polytime multiplication operator, while they do not support a polytime operator for summing out variables. A polytime multiplication operator makes PSDDs suitable for a broader class of applications compared to classes of arithmetic circuits that do not support multiplication. As one example, we show that PSDD multiplication leads to a very simple but effective compilation algorithm for probabilistic graphical models: represent each model factor as a PSDD, and then multiply them.
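What the multiplication operator computes can be illustrated with dense factor tables (real PSDDs encode the same product as a structured circuit; tables are exponential in the number of variables and serve here only to show the semantics):

```python
import numpy as np

# Factor over (A, B) and factor over (B, C); their product is an
# unnormalized function over (A, B, C) -- the object a PSDD multiply
# would represent compactly as a circuit.
phi_ab = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
phi_bc = np.array([[0.5, 1.0],
                   [2.0, 0.1]])
product = np.einsum('ab,bc->abc', phi_ab, phi_bc)

# Entry-wise: product[a, b, c] == phi_ab[a, b] * phi_bc[b, c]
```

Compiling a graphical model then amounts to folding this product over all its factors; the PSDD result in the abstract is that the circuit form of this operation stays polytime, while summing a variable out of the result does not.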
A Factor Graph Based Dynamic Spectrum Allocation Approach for Cognitive Network
IEEE WCNC 2011
Abstract
Cognitive radios share the white space in the absence of licensed users. For spectrum sharing, avoiding interference among multiple secondary users is a fundamental problem, known as the dynamic spectrum allocation problem. This paper proposes a factor graph based approach to solve the dynamic spectrum allocation problem efficiently. With the proposed DWA-Tree (Distributed Wave Algorithm), the problem can be solved with 2n messages for tree-structured graphs. This paper also proposes another novel algorithm, called DWA-Cycle, to solve the problem for general graphs with cycles. Simulation results show that both DWA-Tree and DWA-Cycle consistently improve global link quality over naive local optimization approaches.
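The 2n message count for trees follows from a two-phase "wave": one message per edge toward the root, then one back out, i.e. 2(n−1) directed messages for n nodes. A schedule-only sketch (a hypothetical illustration, not the paper's DWA-Tree code):

```python
from collections import defaultdict, deque

def wave_schedule(edges, root):
    """Directed message order of an up-then-down wave on a tree."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, order, q = {root: None}, [root], deque([root])
    while q:                                    # BFS for parent pointers
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
                q.append(v)
    # Upward phase: leaves to root (children always before their parents).
    up = [(v, parent[v]) for v in reversed(order) if parent[v] is not None]
    # Downward phase: root back out to the leaves.
    down = [(parent[v], v) for v in order if parent[v] is not None]
    return up + down

# 5-node tree: 2 * (5 - 1) = 8 directed messages, each edge used once per direction.
sched = wave_schedule([(0, 1), (0, 2), (1, 3), (1, 4)], root=0)
```

After the two phases, every node has received a message from each neighbor, which is exactly what a max-sum or sum-product pass on a tree needs to compute all local solutions.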
DeepDive: A Data Management System for Automatic Knowledge Base Construction
2015
Abstract
 Add to MetaCart
ACKNOWLEDGMENTS I owe Christopher Ré my career as a researcher, the greatest dream of my life. Since the day I first met Chris and told him about my dream, he has done everything he could, as a scientist, an educator, and a friend, to help me. I am forever indebted to him for his completely honest criticisms and feedback, the most valuable gifts an advisor can give. His training equipped me with confidence and pride that I will carry for the rest of my career. He is the role model that I will follow. If my whole future career achieves an approximation of what he has done so far in his, I will be proud and happy. I am also indebted to Jude Shavlik and Miron Livny, who, after Chris left for Stanford, kindly helped me through all the paperwork and payments at Wisconsin. If it were not for their help, I would not have been able to continue my PhD studies. I am profoundly grateful to Jude for being the chair of my committee. I am likewise grateful to Jeffrey Naughton, David Page, and Shanan Peters for serving on my committee, and to Thomas Reps for his feedback during the defense. DeepDive would not have been possible without all its users. Shanan Peters was the first user, working with it before it even got its name. He spent three years going through a painful process with us before we understood the current abstraction of DeepDive. I am grateful to him for sticking with