Results 1–10 of 385
Markov Logic Networks
 Machine Learning
, 2006
Abstract
Cited by 816 (39 self)
We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
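The grounding construction this abstract describes can be illustrated with a tiny self-contained sketch (not the authors' implementation): one weighted formula, two constants, and brute-force enumeration of worlds in place of MCMC. The predicate names, weight, and constants below are invented for illustration.

```python
from itertools import product
import math

# Hypothetical toy MLN: one formula Smokes(x) => Cancer(x) with weight 1.5,
# over constants {"Anna", "Bob"}. A "world" assigns a truth value to every
# ground atom.
constants = ["Anna", "Bob"]
weight = 1.5

def formula_true(world, person):
    # One grounding of Smokes(x) => Cancer(x).
    return (not world[("Smokes", person)]) or world[("Cancer", person)]

def unnormalized_prob(world):
    # exp(weight * number of satisfied groundings).
    n_true = sum(formula_true(world, p) for p in constants)
    return math.exp(weight * n_true)

# Enumerate all worlds over the 4 ground atoms.
atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in constants]
worlds = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))]

# P(Cancer(Anna) | Smokes(Anna)): condition by filtering worlds.
num = sum(unnormalized_prob(w) for w in worlds
          if w[("Smokes", "Anna")] and w[("Cancer", "Anna")])
den = sum(unnormalized_prob(w) for w in worlds if w[("Smokes", "Anna")])
p = num / den
print(round(p, 3))
```

With only one formula touching Anna's atoms, the conditional reduces to the logistic form exp(w)/(1 + exp(w)); the exhaustive enumeration is what MCMC over the relevant ground subnetwork replaces at scale.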
Logit models and logistic regressions for social networks. II . . .
, 1999
Abstract
Cited by 321 (7 self)
The research described here builds on our previous work by generalizing the univariate models described there to models for multivariate relations. This family, labelled p*, generalizes the Markov random graphs of Frank and Strauss, which were further developed by them and others, building on Besag’s ideas on estimation. These models were first used to model random variables embedded in lattices by Ising, and have been quite common in the study of spatial data. Here, they are applied to the statistical analysis of multigraphs, in general, and the analysis of multivariate social networks, in particular. In this paper, we show how to formulate models for multivariate social networks by considering a range of theoretical claims about social structure. We illustrate the models by developing structural models for several multivariate networks.
Dependency networks for inference, collaborative filtering, and data visualization
 Journal of Machine Learning Research
Abstract
Cited by 208 (12 self)
We describe a graphical model for probabilistic relationships, an alternative to the Bayesian network, called a dependency network. The graph of a dependency network, unlike a Bayesian network, is potentially cyclic. The probability component of a dependency network, like a Bayesian network, is a set of conditional distributions, one for each node given its parents. We identify several basic properties of this representation and describe a computationally efficient procedure for learning the graph and probability components from data. We describe the application of this representation to probabilistic inference, collaborative filtering (the task of predicting preferences), and the visualization of acausal predictive relationships.
Spatstat: An R package for analyzing spatial point patterns
 Journal of Statistical Software
, 2005
Abstract
Cited by 206 (3 self)
spatstat is a package for analyzing spatial point pattern data. Its functionality includes exploratory data analysis, model-fitting, and simulation. It is designed to handle realistic datasets, including inhomogeneous point patterns, spatial sampling regions of arbitrary shape, extra covariate data, and ‘marks’ attached to the points of the point pattern. A unique feature of spatstat is its generic algorithm for fitting point process models to point pattern data. The interface to this algorithm is a function ppm that is strongly analogous to lm and glm. This paper is a general description of spatstat and an introduction for new users.
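spatstat itself is an R package; as a language-neutral point of reference, the simplest model ppm can fit, a homogeneous Poisson process on a window W, has the closed-form maximum-likelihood intensity estimate (number of points) / area(W). A hedged Python sketch of that formula, with simulated data:

```python
import random

# Simulate a homogeneous Poisson point pattern on a 2 x 1 rectangle by
# drawing the point count from Poisson(lambda * area) via exponential
# inter-arrival gaps, then placing points uniformly. All numbers are
# illustrative.
random.seed(42)
true_lambda, width, height = 50.0, 2.0, 1.0
area = width * height

n = 0
t = random.expovariate(true_lambda * area)
while t < 1.0:
    n += 1
    t += random.expovariate(true_lambda * area)
points = [(random.uniform(0, width), random.uniform(0, height))
          for _ in range(n)]

# MLE of the intensity for the homogeneous Poisson model.
lambda_hat = len(points) / area
print(lambda_hat)
```

Fitting inhomogeneous or interacting models, which is where ppm's generic algorithm earns its keep, has no such closed form.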
Classification in Networked Data: A toolkit and a univariate case study
, 2006
Abstract
Cited by 200 (10 self)
This paper is about classifying entities that are interlinked with entities for which the class is known. After surveying prior work, we present NetKit, a modular toolkit for classification in networked data, and a case study of its application to networked data used in prior machine learning research. NetKit is based on a node-centric framework in which classifiers comprise a local classifier, a relational classifier, and a collective inference procedure. Various existing node-centric relational learning algorithms can be instantiated with appropriate choices for these components, and new combinations of components realize new algorithms. The case study focuses on univariate network classification, for which the only information used is the structure of class linkage in the network (i.e., only links and some class labels). To our knowledge, no previous work has systematically evaluated the power of class linkage alone for classification in machine learning benchmark data sets. The results demonstrate that very simple network-classification models perform quite well, well enough that they should be used regularly as baseline classifiers for studies of learning with networked data. The simplest method (which performs remarkably well) highlights the close correspondence between several existing methods introduced for different purposes, i.e., Gaussian-field classifiers, Hopfield networks, and relational-neighbor classifiers. The case study also shows that there are two sets of techniques that are preferable in different situations, namely when few versus many labels are known initially. We also demonstrate that link selection plays an important role similar to traditional feature selection.
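The relational-neighbor idea the abstract singles out, predicting a node's label from the labels of its linked neighbors alone, can be sketched in a few lines. The toy graph and seed labels below are invented, not from the paper's case study:

```python
# Adjacency list of a small hypothetical network.
graph = {
    "a": ["b", "c", "x"],
    "b": ["a", "c"],
    "c": ["a", "b"],
    "x": ["a", "y"],
    "y": ["x"],
}
known = {"b": "red", "c": "red", "y": "blue"}  # initially labeled nodes

def relational_neighbor(node):
    # Majority vote over the labels of already-labeled neighbors,
    # using only link structure plus seed labels.
    votes = {}
    for nb in graph[node]:
        if nb in known:
            lab = known[nb]
            votes[lab] = votes.get(lab, 0) + 1
    if not votes:
        return None
    return max(sorted(votes), key=votes.get)  # ties broken alphabetically

print(relational_neighbor("a"))  # → red (neighbors b, c are red)
print(relational_neighbor("x"))  # → blue (only labeled neighbor is y)
```

In the paper's framework this plays the role of the relational classifier; the collective inference component would feed newly predicted labels back in and iterate until they stabilize, whereas this sketch does a single pass.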
An introduction to exponential random graph (p*) models for social networks.
 Social Networks,
, 2007
Abstract
Cited by 195 (4 self)
This article provides an introductory summary of the formulation and application of exponential random graph models for social networks. The possible ties among nodes of a network are regarded as random variables, and assumptions about dependencies among these random tie variables determine the general form of the exponential random graph model for the network. Examples of different dependence assumptions and their associated models are given, including Bernoulli, dyad-independent and Markov random graph models. The incorporation of actor attributes in social selection models is also reviewed. Newer, more complex dependence assumptions are briefly outlined.
Markov Chain Monte Carlo Estimation of Exponential Random Graph Models
 Journal of Social Structure
, 2002
Abstract
Cited by 185 (19 self)
This paper is about estimating the parameters of the exponential random graph model, also known as the p* model, using frequentist Markov chain Monte Carlo (MCMC) methods. The exponential random graph model is simulated using Gibbs or Metropolis-Hastings sampling. The estimation procedures considered are based on the Robbins-Monro algorithm for approximating a solution to the likelihood equation. A major problem with exponential random graph models resides in the fact that such models can have, for certain parameter values, bimodal (or multimodal) distributions for the sufficient statistics such as the number of ties. The bimodality of the exponential graph distribution for certain parameter values seems a severe limitation to its practical usefulness. The possibility of bi- or multimodality is reflected in the possibility that the outcome space is divided into two (or more) regions such that the more usual type of MCMC algorithms, updating only single relations, dyads, or triplets, have extremely long sojourn times within such regions, and a negligible probability to move from one region to another. In such situations, convergence to the target distribution is extremely slow. To be useful, MCMC algorithms must be able to make transitions from a given graph to a very different graph. It is proposed to include transitions to the graph complement as updating steps to improve the speed of convergence to the target distribution. Estimation procedures implementing these ideas work satisfactorily for some data sets and model specifications, but not for all.
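The single-dyad Metropolis-Hastings updates the abstract discusses can be sketched for a small ERGM with edge and triangle statistics, P(g) proportional to exp(θ_e · edges(g) + θ_t · triangles(g)). The graph size and parameter values are illustrative, and this toy sampler covers only the simulation step, not the Robbins-Monro estimation loop:

```python
import math
import random
from itertools import combinations

N = 6                                  # illustrative graph size
THETA_EDGE, THETA_TRI = -1.0, 0.2      # illustrative parameters

def stats(adj):
    # Sufficient statistics: edge count and triangle count.
    edges = sum(adj[i][j] for i, j in combinations(range(N), 2))
    tris = sum(adj[i][j] and adj[j][k] and adj[i][k]
               for i, j, k in combinations(range(N), 3))
    return edges, tris

def log_weight(adj):
    e, t = stats(adj)
    return THETA_EDGE * e + THETA_TRI * t

random.seed(1)
adj = [[0] * N for _ in range(N)]      # start from the empty graph
edge_trace = []
for _ in range(5000):
    i, j = random.sample(range(N), 2)
    old = log_weight(adj)
    adj[i][j] = adj[j][i] = 1 - adj[i][j]        # propose toggling one dyad
    accept_prob = math.exp(min(0.0, log_weight(adj) - old))
    if random.random() >= accept_prob:
        adj[i][j] = adj[j][i] = 1 - adj[i][j]    # reject: undo the toggle
    edge_trace.append(stats(adj)[0])

mean_edges = sum(edge_trace[1000:]) / len(edge_trace[1000:])
print(mean_edges)  # average edge count after burn-in
```

The slow-mixing pathology the paper describes shows up when such single-dyad moves cannot cross between regions of the outcome space, which is what motivates the proposed graph-complement updating steps.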
Representation learning: A review and new perspectives.
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2013
Abstract
Cited by 173 (4 self)
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.
Estimators for Stochastic "Unification-Based" Grammars
, 1999
Abstract
Cited by 154 (21 self)
Log-linear models provide a statistically sound framework for Stochastic "Unification-Based" Grammars (SUBGs) and stochastic versions of other kinds of grammars. We describe two computationally tractable ways of estimating the parameters of such grammars from a training corpus of syntactic analyses, and apply these to estimate a stochastic version of Lexical-Functional Grammar.
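The conditional log-linear setup behind such grammars assigns each candidate analysis y of a string x a score exp(w · f(x, y)) and normalizes over the candidates; training maximizes the conditional likelihood of the observed analyses. A hedged sketch with invented feature vectors standing in for parse features:

```python
import math

# Each item: (feature vectors of the candidate parses, index of the
# correct parse). Features and data are invented for illustration.
data = [
    ([[1.0, 0.0], [0.0, 1.0]], 0),
    ([[1.0, 1.0], [0.0, 2.0]], 0),
    ([[0.0, 1.0], [1.0, 0.0]], 1),
]
w = [0.0, 0.0]

def cond_probs(cands):
    # Softmax over candidate analyses: P(y | x) = exp(w.f) / Z(x).
    scores = [math.exp(sum(wi * fi for wi, fi in zip(w, f))) for f in cands]
    z = sum(scores)
    return [s / z for s in scores]

for _ in range(200):  # gradient ascent on conditional log-likelihood
    grad = [0.0, 0.0]
    for cands, gold in data:
        probs = cond_probs(cands)
        for k in range(2):
            # Gradient: observed feature minus model expectation.
            grad[k] += cands[gold][k] - sum(p * f[k]
                                            for p, f in zip(probs, cands))
    w = [wi + 0.1 * g for wi, g in zip(w, grad)]

preds = [max(range(len(c)), key=lambda i: cond_probs(c)[i]) for c, g in data]
print(preds)  # → [0, 0, 1], matching the gold parses
```

Normalizing over the candidate analyses of each string, rather than over all strings, is what keeps this kind of estimation computationally tractable.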
Recovering Intrinsic Images from a Single Image
, 2002
Abstract
Cited by 144 (8 self)
We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize grayscale patterns, each image derivative is classified as being caused by shading or a change in the surface's reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images.