Results 1–8 of 8
Identification of Gaussian Process State-Space Models with Particle Stochastic Approximation EM
Abstract

Cited by 3 (1 self)
Abstract: Gaussian process state-space models (GP-SSMs) are a very flexible family of models of nonlinear dynamical systems. They comprise a Bayesian nonparametric representation of the dynamics of the system and additional (hyper)parameters governing the properties of this nonparametric representation. The Bayesian formalism enables systematic reasoning about the uncertainty in the system dynamics. We present an approach to maximum likelihood identification of the parameters in GP-SSMs, while retaining the full nonparametric description of the dynamics. The method is based on a stochastic approximation version of the EM algorithm that employs recent developments in particle Markov chain Monte Carlo for efficient identification.
CS761 Spring 2013 Advanced Machine Learning Graphical Models
Abstract
In directed graphs, a directed cycle is a sequence s1 → s2 → ... → sk → s1. A directed acyclic graph (DAG) is a graph in which every edge is directed and there is no directed cycle. A directed graphical model (aka Bayesian network) on a DAG is a family of probability distributions that factorize as follows:

p(x1, ..., xm) = ∏_{s∈V} p(xs | xπ(s)), (1)

where V is the vertex set of the DAG and π(s) is the parent set of s in the DAG. The conditionals p(xs | xπ(s)) are also known as the conditional probability distributions (CPDs) or conditional probability tables (CPTs, for discrete random variables). In undirected graphs, a clique C is a fully connected subset of V: s, t ∈ C ⇒ (s, t) ∈ E, where E is the edge set. Let ψC be a nonnegative potential function defined over C. An undirected graphical model (aka Markov random field) on the graph is a family of probability distributions that factorize as follows:

p(x1, ..., xm) = (1/Z) ∏_C ψC(xC), (2)
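The factorization in Eq. (1) can be sketched directly: the joint probability of a full assignment is the product of per-node conditionals p(xs | xπ(s)). The data structures below (dicts of parents and CPTs) are illustrative, not from the abstract.

```python
# Minimal sketch of Eq. (1): a Bayesian network's joint distribution
# factorizes into one conditional per node, given that node's parents.

def joint_probability(assignment, parents, cpds):
    """Product over nodes s of p(x_s | x_parents(s))."""
    p = 1.0
    for s in assignment:
        parent_vals = tuple(assignment[t] for t in parents[s])
        p *= cpds[s][(assignment[s], parent_vals)]
    return p

# Toy two-node DAG: a -> b, binary variables.
parents = {"a": (), "b": ("a",)}
cpds = {
    "a": {(0, ()): 0.6, (1, ()): 0.4},
    "b": {(0, (0,)): 0.9, (1, (0,)): 0.1,
          (0, (1,)): 0.2, (1, (1,)): 0.8},
}

print(joint_probability({"a": 1, "b": 1}, parents, cpds))  # 0.4 * 0.8
```

For discrete variables each CPT is just a lookup table keyed by (own value, parent values), which is exactly how Eq. (1) is evaluated in practice.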
CONTENTS
Abstract
The effect of a series of photographic developers on the final silver-staining picture has been investigated. Ten common developers were used, but of these only hydroquinone, chloroquinol, pyrogallol, and p-aminophenol were found to be of general use. The other developers were either so weak in their action that the final staining was light and incomplete, or so powerful that a differentiated nerve staining was not produced. For silver staining to be effected, nuclei of reduced silver should be present in the section. These nuclei act as centres for the deposition of additional silver reduced by the developer; the additional silver may either be derived from that combined with the sections during impregnation or from the developing solution itself. Whether or not the additional silver is deposited in such a way as to produce differentiated nerve staining depends on the properties of the developer and on the composition of the developing solution. The redox and 'bromide' potentials, the sulphite and hydrogen ion concentrations in the developing solution, and the protective action of the tissue components of the section all play a part in determining the final staining picture.
A Network Model characterized by a Latent Attribute Structure with Competition
, 2014
Abstract
The quest for a model that is able to explain, describe, analyze and simulate real-world complex networks is of utmost practical, as well as theoretical, interest. In this paper we introduce and study a network model that is based on a latent attribute structure: each node is characterized by a number of features, and the probability of the existence of an edge between two nodes depends on the features they share. Features are chosen according to a process of Indian-Buffet type, but with an additional random "fitness" parameter attached to each node that determines its ability to transmit its own features to other nodes. As a consequence, a node's connectivity does not depend on its age alone, so "young" nodes are also able to compete and succeed in acquiring links. One of the advantages of our model for the latent bipartite "node-attribute" network is that it depends on few parameters with a straightforward interpretation. We provide some theoretical, as well as experimental, results regarding the power-law behavior of the model and the estimation of the parameters. Using experimental data, we also show how the proposed model for the attribute structure naturally captures most local and global properties (e.g., degree distributions, connectivity and distance distributions) that real networks exhibit.
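The generative process described above can be sketched as follows. This is a hedged, illustrative reading of the abstract, not the paper's exact construction: the inheritance weight (fitness-weighted popularity of a feature) and the Poisson rate for brand-new features are assumptions, and all names (`sample_attribute_network`, `fitness`, `owners`) are hypothetical.

```python
import math
import random

def poisson(rng, lam):
    """Sample Poisson(lam) via Knuth's multiplication algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_attribute_network(n_nodes, alpha=2.0, seed=0):
    """Indian-Buffet-type feature allocation with per-node fitness.

    Node n inherits an existing feature with probability growing in the
    fitness-weighted popularity of that feature, so a "young" node held
    by high-fitness owners can still spread; n then adds Poisson(alpha/n)
    brand-new features.
    """
    rng = random.Random(seed)
    fitness = [rng.uniform(0.5, 2.0) for _ in range(n_nodes)]
    owners = []    # owners[k] = list of nodes carrying feature k
    features = []  # features[n] = set of feature indices of node n
    for n in range(n_nodes):
        mine = set()
        for k, holders in enumerate(owners):
            weight = sum(fitness[h] for h in holders)  # fitness-weighted popularity
            if rng.random() < weight / (weight + n + 1.0):
                mine.add(k)
        for _ in range(poisson(rng, alpha / (n + 1.0))):  # brand-new features
            owners.append([])
            mine.add(len(owners) - 1)
        for k in mine:
            owners[k].append(n)
        features.append(mine)
    return features

feats = sample_attribute_network(30)
```

An edge probability between nodes i and j would then be some increasing function of the features `feats[i] & feats[j]` share, completing the latent bipartite node-attribute construction.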
Big Learning with Bayesian Methods
, 2007
Abstract
Explosive growth in data and availability of cheap computing resources have sparked increasing interest in Big learning, an emerging subfield that studies scalable machine learning algorithms, systems, and applications with Big Data. Bayesian methods represent one important class of statistical methods for machine learning, with substantial recent developments on adaptive, flexible and scalable Bayesian learning. This article provides a survey of the recent advances in Big learning with Bayesian methods, termed Big Bayesian Learning, including nonparametric Bayesian methods for adaptively inferring model complexity, regularized Bayesian inference for improving the flexibility via posterior regularization, and scalable algorithms and systems based on stochastic subsampling and distributed computing for dealing with large-scale applications.
GPatt: Fast Multidimensional Pattern Extrapolation with Gaussian Processes
Abstract
Gaussian processes are typically used for smoothing and interpolation on small datasets. We introduce a new Bayesian nonparametric framework – GPatt – enabling automatic pattern extrapolation with Gaussian processes on large multidimensional datasets. GPatt unifies and extends highly expressive kernels and fast exact inference techniques. Without human intervention – no hand-crafting of kernel features, and no sophisticated initialisation procedures – we show that GPatt can solve large-scale pattern extrapolation, inpainting, and kernel discovery problems, including a problem with 383,400 training points. We find that GPatt significantly outperforms popular alternative scalable Gaussian process methods in speed and accuracy. Moreover, we discover profound differences between each of these methods, suggesting that expressive kernels, nonparametric representations, and exact inference are useful for modelling large-scale multidimensional patterns.
Scaling Nonparametric Bayesian Inference via Subsample Annealing
Abstract
We describe an adaptation of the simulated annealing algorithm to nonparametric clustering and related probabilistic models. This new algorithm learns nonparametric latent structure over a growing and constantly churning subsample of training data, where the portion of data subsampled can be interpreted as the inverse temperature β(t) in an annealing schedule. Gibbs sampling at high temperature (i.e., with a very small subsample) can more quickly explore sketches of the final latent state by (a) making longer jumps around latent space (as in block Gibbs) and (b) lowering energy barriers (as in simulated annealing). We prove subsample annealing speeds up mixing time N² → N in a simple clustering model and exp(N) → N in another class of models, where N is data size. Empirically, subsample annealing outperforms naive Gibbs sampling in accuracy per wall-clock time, and can scale to larger datasets and deeper hierarchical models. We demonstrate improved inference on million-row subsamples of US Census data and network log data and a 307-row hospital rating dataset, using a Pitman-Yor generalization of the Cross Categorization model.
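The schedule described above, where the subsample fraction plays the role of inverse temperature β(t), can be sketched as follows. The linear schedule and the one-item-per-step churn rule are illustrative assumptions; the paper's actual schedule and the Gibbs sweeps run on each subsample are not reproduced here.

```python
import random

def subsample_schedule(data, T, seed=0):
    """Yield (beta, subsample) pairs for a subsample-annealing run.

    beta(t) = t/T grows linearly from near 0 to 1; the working
    subsample grows to match beta * len(data) and "churns" by
    swapping one held-out item in per step while any remain.
    """
    rng = random.Random(seed)
    pool = list(data)          # items not yet in the subsample
    rng.shuffle(pool)
    sub = []
    for t in range(1, T + 1):
        beta = t / T                       # inverse temperature in [0, 1]
        target = max(1, int(beta * len(data)))
        while len(sub) < target and pool:  # grow subsample as beta rises
            sub.append(pool.pop())
        if pool:                           # churn: swap one item in/out
            i = rng.randrange(len(sub))
            sub[i], pool[-1] = pool[-1], sub[i]
        yield beta, list(sub)              # run Gibbs sweeps on `sub` here

betas = [b for b, s in subsample_schedule(range(100), T=10)]
print(betas[-1])  # 1.0
```

At β = 1 the subsample is the full dataset, so the final Gibbs sweeps target the ordinary posterior, exactly as a standard annealing schedule ends at the target distribution.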
Review Article: An Overview of Bayesian Methods for Neural Spike Train Analysis
Abstract
Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multi-electrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both the single-neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.