Results 1 – 7 of 7
Building Blocks For Variational Bayesian Learning Of Latent Variable Models
Journal of Machine Learning Research, 2006
Abstract

Cited by 12 (8 self)
We introduce standardised building blocks designed to be used with variational Bayesian learning. The blocks include Gaussian variables, summation, multiplication, nonlinearity, and delay. A large variety of latent variable models can be constructed from these blocks, including variance models and nonlinear modelling, which are lacking from most existing variational systems. The introduced blocks are designed to fit together and to yield efficient update rules. Practical implementation of various models is easy thanks to an associated software package which derives the learning formulas automatically once a specific model structure has been fixed. Variational Bayesian learning provides a cost function which is used both for updating the variables of the model and for optimising the model structure. All the computations can be carried out locally, resulting in linear computational complexity. We present
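To make the locality claim concrete, here is a minimal sketch of a variational update for a single Gaussian variable with known noise variance. This is not the paper's software package or its API; it is a hypothetical illustration. Because the model is conjugate, the coordinate-wise variational update coincides with the exact posterior, and it depends only on local sufficient statistics of the data, which is what yields linear computational complexity when many such updates are chained:

```python
import numpy as np

def vb_gaussian_mean(x, noise_var=1.0, prior_mean=0.0, prior_var=1.0):
    """Variational (here exact, conjugate) posterior for the mean of
    Gaussian observations with known noise variance.

    The update only needs n and sum(x): purely local information,
    independent of the rest of a larger model graph.
    """
    n = len(x)
    post_precision = 1.0 / prior_var + n / noise_var
    post_var = 1.0 / post_precision
    post_mean = post_var * (prior_mean / prior_var + np.sum(x) / noise_var)
    return post_mean, post_var
```

With data [1, 2, 3], a unit-variance prior at 0, and unit noise variance, the posterior precision is 1 + 3 = 4, so the posterior is N(1.5, 0.25).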
Bayes Blocks: An implementation of the variational Bayesian building blocks framework
In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI 2005), 2005
Abstract

Cited by 7 (5 self)
A software library for constructing and learning probabilistic models is presented. The library offers a set of building blocks from which a large variety of static and dynamic models can be built. These include hierarchical models for variances of other variables and many nonlinear models. The underlying variational Bayesian machinery, providing for fast and robust estimation but being mathematically rather involved, is almost completely hidden from the user, thus making the library very easy to use. The building blocks include Gaussian, rectified Gaussian and mixture-of-Gaussians variables and computational nodes which can be combined rather freely.
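The idea of freely combinable variable and computation nodes can be sketched as follows. The class names and interface below are hypothetical, not the Bayes Blocks API; the sketch only shows the compositional structure (a node's parent may itself be another node, which is how hierarchical models are stacked), using forward sampling rather than the library's variational machinery:

```python
import random

class Constant:
    """A fixed-value node, usable wherever a parent node is expected."""
    def __init__(self, value):
        self.value = value
    def sample(self):
        return self.value

class Gaussian:
    """Gaussian variable whose mean is itself a node, so variables
    can be layered into hierarchical models."""
    def __init__(self, mean_node, std):
        self.mean_node, self.std = mean_node, std
    def sample(self):
        return random.gauss(self.mean_node.sample(), self.std)

class Sum:
    """Computational node: adds the outputs of its parent nodes."""
    def __init__(self, *parents):
        self.parents = parents
    def sample(self):
        return sum(p.sample() for p in self.parents)

# Compose a tiny model: observation = latent source + fixed offset.
source = Gaussian(Constant(0.0), 1.0)
offset = Constant(5.0)
obs = Sum(source, offset)
```

Averaging many samples of `obs` recovers the offset of 5, since the latent source has zero mean.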
Blind Separation of Nonlinear Mixtures by Variational Bayesian Learning
Abstract
Blind separation of sources from nonlinear mixtures is a challenging and often ill-posed problem. We present three methods for solving this problem: an improved nonlinear factor analysis (NFA) method using a multilayer perceptron (MLP) network to model the nonlinearity, a hierarchical NFA (HNFA) method suitable for larger problems, and a post-nonlinear NFA (PNFA) method for more restricted post-nonlinear mixtures. The methods are based on variational Bayesian learning, which provides the needed regularisation and allows for easy handling of missing data. While the basic methods are incapable of recovering the correct rotation of the source space, they can discover the underlying nonlinear manifold and allow reconstruction of the original sources using standard linear independent component analysis (ICA) techniques. Key words: Bayesian learning, blind source separation, nonlinear mixtures, post-nonlinear mixtures, variational Bayes
unknown title
Abstract
0.1 Bayesian modeling and variational learning: introduction Unsupervised learning methods are often based on a generative approach where the goal is to find a model which explains how the observations were generated. It is assumed that there exist certain source signals (also called factors, latent or hidden variables, or hidden causes) which have generated the observed data through an unknown mapping. The goal of generative learning is to identify both the source signals and the unknown generative mapping. The success of a specific model depends on how well it captures the structure of the phenomena underlying the observations. Various linear models have been popular, because their mathematical treatment is fairly easy. However, in many realistic cases the observations have been generated by a nonlinear process. Unsupervised learning of a nonlinear model is a challenging task, because it is typically computationally much more demanding than for linear models, and flexible models require strong regularization. In Bayesian data analysis and estimation methods, all the uncertain quantities are modeled in terms of their joint probability distribution. The key principle is to construct
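The generative setting described above can be illustrated with a small linear example. The sketch below is an illustration under simple assumptions, not a method from the cited works: unknown sources S are mixed by an unknown linear mapping A into observations X, and the source subspace is then recovered with PCA via SVD. As with the nonlinear methods discussed elsewhere in these abstracts, only the subspace is identifiable this way; the rotation within it remains ambiguous:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_obs, n_samples = 2, 5, 1000

# Latent sources and an unknown linear generative mapping.
S = rng.normal(size=(n_sources, n_samples))
A = rng.normal(size=(n_obs, n_sources))
X = A @ S + 0.01 * rng.normal(size=(n_obs, n_samples))  # small noise

# Recover the source subspace from the observations alone (PCA/SVD).
U, _, _ = np.linalg.svd(X, full_matrices=False)
subspace = U[:, :n_sources]  # orthonormal basis for the estimated subspace
```

Projecting the true mixing matrix A onto the estimated subspace leaves only a small residual, confirming the subspace (though not the rotation) was identified.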
Title in English: Hierarchical Variance Models of Image Sequences. Code and name of professorship: T61 Informaatiotekniikka (Computer and Information Science)
, 2004
Abstract
Abstract (translated from Finnish): Models of image sequences based on unsupervised learning typically produce simple features such as edge filters. These simple features do not provide very high-level information about the image sequence. By combining the information they produce, however, it is possible to extract more meaningful features from the data. The values predicted by statistical models are usually the expected values of the underlying probability distributions; higher-order statistics are ignored. Variance describes the spread of a probability distribution around its mean. Estimating variances together with expected values is difficult and is usually not done. It is well known, however, that in many data sets the variance contains a great deal of information that cannot be extracted by modelling the means alone. The essential question in this work is whether modelling variances in image sequences yields anything useful compared to conventional models. The work shows that this is indeed the case and constructs a hierarchical model that exploits variances.
Chapter 4 Variational Bayesian learning of generative models
Abstract
4.1 Bayesian modeling and variational learning: introduction Unsupervised learning methods are often based on a generative approach where the goal is to find a model which explains how the observations were generated. It is assumed that there exist certain source signals (also called factors, latent or hidden variables, or hidden causes) which have generated the observed data through an unknown mapping. The goal of generative learning is to identify both the source signals and the unknown generative mapping. The success of a specific model depends on how well it captures the structure of the phenomena underlying the observations. Various linear models have been popular, because their mathematical treatment is fairly easy. However, in many realistic cases the observations have been generated by a nonlinear process. Unsupervised learning of a nonlinear model is a challenging task, because it is typically computationally much more demanding than for linear models, and flexible models require strong regularization. In Bayesian data analysis and estimation methods, all the uncertain quantities are