Results 1–10 of 67
Bayesian Interpolation
 Neural Computation
, 1991
Cited by 728 (17 self)
Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularisation and model comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other problems. Regularising constants are set by examining their posterior probability distribution. Alternative regularisers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. 'Occam's razor' is automatically embodied by this framework. The way in which Bayes infers the values of regularising constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
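A minimal sketch of the evidence-based model comparison described above, using made-up data and a toy one-parameter Gaussian model (not the paper's interpolation setup): the evidence p(D|M) is the likelihood integrated over the prior, and a needlessly diffuse prior is automatically penalised, which is the 'Occam's razor' effect.

```python
import math

def log_gauss(x, mu, var):
    # Log-density of a univariate Gaussian.
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def log_evidence(data, prior_var, noise_var=1.0, grid=4001, lim=30.0):
    # Evidence p(D|M) = integral of p(D|w) p(w|M) dw, approximated on a
    # grid of w values, summed with the log-sum-exp trick for stability.
    dw = 2 * lim / (grid - 1)
    logs = []
    for i in range(grid):
        w = -lim + i * dw
        ll = sum(log_gauss(y, w, noise_var) for y in data)
        logs.append(ll + log_gauss(w, 0.0, prior_var))
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs) * dw)

data = [0.9, 1.1, 1.0, 0.8, 1.2]               # observations centred near w = 1
narrow = log_evidence(data, prior_var=4.0)     # prior just wide enough
diffuse = log_evidence(data, prior_var=100.0)  # needlessly diffuse prior
print(narrow > diffuse)  # True: the diffuse model pays an Occam penalty
```

Both models achieve the same best fit; the diffuse prior loses because it spreads its probability mass over parameter values the data rule out.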
Information-Based Objective Functions for Active Data Selection
 Neural Computation
Cited by 428 (4 self)
Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed which measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for data selection. All these criteria depend on the assumption that the hypothesis space is correct, which may prove to be their main weakness.
1 Introduction
Theories for data modelling often assume that the data is provided by a source that we do not control. However, there are two scenarios in which we are able to actively select training data. In the first, data measurements are relatively expensive or slow, and we want to know where to look next so as to learn as much as possible. According to Jaynes (1986), Bayesian reasoning was first applied to this problem two centuries ago by Laplace, who in consequence made more important discoveries...
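One such data-selection criterion, choosing the measurement that minimises the expected posterior entropy, can be sketched on a toy discrete hypothesis space of my own devising (not the paper's actual setup):

```python
import math

def entropy(p):
    # Shannon entropy (nats) of a discrete distribution.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Toy setup (illustrative only): four hypotheses about a hidden integer
# threshold t; querying a point x returns y = 1 iff x >= t.
hyps = [1, 2, 3, 4]
posterior = [0.25, 0.25, 0.25, 0.25]

def expected_posterior_entropy(x, post):
    # Average the entropy of the updated posterior over both outcomes y.
    total = 0.0
    for y in (0, 1):
        joint = [p if (1 if x >= t else 0) == y else 0.0
                 for p, t in zip(post, hyps)]
        py = sum(joint)
        if py > 0:
            total += py * entropy([j / py for j in joint])
    return total

best = min([1, 2, 3], key=lambda x: expected_posterior_entropy(x, posterior))
print(best)  # 2: the query that splits the hypotheses in half
```

Querying x = 2 bisects the hypothesis space, so it is expected to be the most informative measurement; querying x = 1 or x = 3 usually leaves three hypotheses alive.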
Interpolation and extrapolation using a high-resolution discrete Fourier transform
 IEEE Transactions on Signal Processing
, 1998
Cited by 65 (7 self)
We present an iterative nonparametric approach to spectral estimation that is particularly suitable for estimation of line spectra. This approach minimizes a cost function derived from Bayes' theorem. The method is suitable for line spectra since a "long-tailed" distribution is used to model the prior distribution of spectral amplitudes. An important aspect of this method is that since the data themselves are used as constraints, phase information can also be recovered and used to extend the data outside the original window. The objective function is formulated in terms of hyperparameters that control the degree of fit and spectral resolution. Noise rejection can also be achieved by truncating the number of iterations. Spectral resolution and extrapolation length are controlled by a single parameter. When this parameter is large compared with the spectral powers, the algorithm leads to zero extrapolation of the data, and the estimated Fourier transform yields the periodogram. When the data are sampled at a constant rate, the algorithm uses one Levinson recursion per iteration. For irregular sampling (unevenly sampled and/or gapped data), the algorithm uses one Cholesky decomposition per iteration. The performance of the algorithm is illustrated with three different problems that frequently arise in geophysical data processing: 1) harmonic retrieval from a time series contaminated with noise; 2) linear event detection from a finite aperture array of receivers (which, in fact, is an extension of 1)); and 3) interpolation/extrapolation of gapped data. The performance of the algorithm as a spectral estimator is tested with the Kay and Marple data set. It is shown that the achieved resolution is comparable with parametric methods but with a more accurate representation of the relative power in the spectral lines.
Index Terms—Bayes procedures, discrete Fourier transforms, interpolation, inverse problems, iterative methods, signal restoration, signal sampling/reconstruction, spectral analysis. I.
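The periodogram limit mentioned in the abstract can be sketched directly. This is a plain DFT periodogram of a synthetic tone, not the authors' iterative Bayesian algorithm:

```python
import cmath
import math

def periodogram(x):
    # Classical periodogram: squared DFT magnitude, normalised by N.
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

# Synthetic tone placed exactly on bin 5 of a 64-sample record.
n, k0 = 64, 5
x = [math.cos(2 * math.pi * k0 * t / n) for t in range(n)]
p = periodogram(x)
peak = max(range(n // 2), key=lambda k: p[k])
print(peak)  # 5
```

For a tone that falls exactly on a DFT bin, the periodogram peaks at that bin; the paper's contribution is resolving lines that the plain periodogram smears, and extrapolating gapped records.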
Partial-Volume Bayesian Classification of Material Mixtures in MR Volume Data using Voxel Histograms
, 1998
Cited by 63 (2 self)
We present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Because we allow for mixtures of materials and treat voxels as regions, our technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurate, and it classifies noisy, low-resolution data well. There are two unusual aspects to our approach. First, we assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; we compute the relative proportion of each material in the voxels. Second, we incorporate information from neighboring voxels into the classification process by reconstructing a continuous function from the...
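The core partial-volume idea, that a voxel's intensity is a mixture of pure-material intensities, can be sketched with a two-material linear mixing model. The intensity values below are hypothetical, not the paper's calibrated histogram parameters:

```python
# Hypothetical pure-material mean intensities (illustrative values only).
MU_FAT, MU_MUSCLE = 200.0, 80.0

def mixture_fraction(voxel_mean, mu_a=MU_FAT, mu_b=MU_MUSCLE):
    # Two-material linear mixing: voxel_mean = f*mu_a + (1 - f)*mu_b,
    # so the fraction f of material a is recovered by inverting the line.
    f = (voxel_mean - mu_b) / (mu_a - mu_b)
    return min(1.0, max(0.0, f))  # clamp to a valid proportion

print(mixture_fraction(140.0))  # 0.5: an even fat/muscle mixture
```

The paper's full method fits parameterized histogram basis functions per voxel region rather than inverting a single mean, but the recovered quantity is the same: per-material proportions instead of a hard label.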
Bayes in the sky: Bayesian inference and model selection in cosmology
 Contemp. Phys.
Cited by 58 (7 self)
The application of Bayesian methods in cosmology and astrophysics has flourished over the past decade, spurred by data sets of increasing size and complexity. In many respects, Bayesian methods have proven to be vastly superior to more traditional statistical tools, offering the advantage of higher efficiency and of a consistent conceptual basis for dealing with the problem of induction in the presence of uncertainty. This trend is likely to continue in the future, when the way we collect, manipulate and analyse observations and compare them with theoretical models will assume an even more central role in cosmology. This review is an introduction to Bayesian methods in cosmology and astrophysics and recent results in the field. I first present Bayesian probability theory and its conceptual underpinnings, Bayes' Theorem and the role of priors. I discuss the problem of parameter inference and its general solution, along with numerical techniques such as Markov chain Monte Carlo methods. I then review the theory and application of Bayesian model comparison, discussing the notions of Bayesian evidence and effective model complexity, and how to compute and interpret those quantities. Recent developments in cosmological parameter extraction and Bayesian cosmological model building are summarized, highlighting the challenges that lie ahead.
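The Markov chain Monte Carlo machinery such reviews cover can be sketched in a few lines: a random-walk Metropolis sampler targeting a standard normal. This is illustrative only; real cosmological likelihoods are far more expensive and typically multidimensional.

```python
import math
import random

def metropolis(log_post, x0, steps=20000, scale=1.0, seed=0):
    # Random-walk Metropolis: propose x' ~ N(x, scale^2) and accept with
    # probability min(1, p(x') / p(x)), computed in log space.
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Target: standard normal log-posterior (up to a constant).
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0)
post_burn = chain[5000:]   # discard burn-in
mean = sum(post_burn) / len(post_burn)
print(abs(mean) < 0.3)  # True: the chain settles around the true mean 0
```

The chain visits parameter values in proportion to their posterior probability, so posterior expectations reduce to simple averages over the (post-burn-in) samples.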
The Promise of Bayesian Inference for Astrophysics
, 1992
Cited by 21 (0 self)
The 'frequentist' approach to statistics, currently dominating statistical practice in astrophysics, is compared to the historically older Bayesian approach, which is now growing in popularity in other scientific disciplines, and which provides unique, optimal solutions to well-posed problems. The two approaches address the same questions with very different calculations, but in simple cases often give the same final results, confusing the issue of whether one is superior to the other. Here frequentist and Bayesian methods are applied to problems where such a mathematical coincidence does not occur, allowing assessment of their relative merits based on their performance, rather than on philosophical argument. Emphasis is placed on a key distinction between the two approaches: Bayesian methods, based on comparisons among alternative hypotheses using the single observed data set, consider averages over hypotheses; frequentist methods, in contrast, average over hypothetical alternative...
Bayesian Methods for Neural Networks: Theory and Applications
, 1995
Cited by 15 (0 self)
this document. Before these are discussed, however, perhaps we should have a tutorial on Bayesian probability theory and its application to model comparison problems.
2 Probability theory and Occam's razor
Estimating First Order Geometric Parameters and Monitoring Contact Transitions During Force Controlled Compliant Motion
 Int. J. Robotics Research
, 1999
Cited by 15 (5 self)
This paper uses (linearized) Kalman Filters to estimate first-order geometric parameters (i.e., orientation of contact normals and location of contact points) and/or their time-variance that occur in force-controlled compliant motions, and to monitor transitions between contact situations. The contact between the manipulated object and its environment is general, i.e., multiple contacts can occur at the same time, and both the topology and the geometry of each single contact are arbitrary.
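The Kalman filtering at the heart of this approach can be sketched in its simplest scalar form: tracking a slowly varying contact parameter from noisy force readings. The measurements and noise values below are hypothetical, and the paper's real estimator runs a multidimensional linearized filter over a full contact model:

```python
def kalman_1d(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    # Scalar Kalman filter with identity state and measurement models:
    # q = process-noise variance, r = measurement-noise variance.
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                  # predict: state drifts, uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the measurement residual
        p *= (1.0 - k)          # updated covariance
        estimates.append(x)
    return estimates

# Hypothetical noisy readings of a contact parameter near 1.0.
zs = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95, 1.0, 1.0]
est = kalman_1d(zs)
print(round(est[-1], 2))
```

The gain k shrinks as the filter accumulates evidence, so later measurements perturb the estimate less; an abrupt, persistent jump in the residual z - x is the kind of signal the paper exploits to detect contact transitions.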
Learning, Bayesian Probability, Graphical Models, and Abduction
 Abduction and Induction: Essays on their Relation and Integration, Chapter 10
, 1998
Cited by 12 (0 self)
In this chapter I review Bayesian statistics as used for induction and relate it to logic-based abduction. Much reasoning under uncertainty, including induction, is based on Bayes' rule. Bayes' rule is interesting precisely because it provides a mechanism for abduction. I review work of Buntine that argues that much of the work on Bayesian learning can best be viewed in terms of graphical models such as Bayesian networks, and review previous work of Poole that relates Bayesian networks to logic-based abduction. This lets us see how much of the work on induction can be viewed in terms of logic-based abduction. I then explore what this means for extending logic-based abduction to richer representations, such as learning decision trees with probabilities at the leaves. Much of this paper is tutorial in nature; both the probabilistic and logic-based notions of abduction and induction are introduced and motivated.
1 Introduction
This paper explores the relationship between learning (induct...
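The abductive reading of Bayes' rule, that it ranks candidate explanations of an observation, can be sketched with a toy example of my own (the hypotheses and numbers are made up, not the chapter's):

```python
def bayes_posterior(priors, likelihoods):
    # Posterior over hypotheses: P(h|e) proportional to P(e|h) * P(h).
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: j / z for h, j in joint.items()}

# Toy abduction: explain the observation "the grass is wet".
priors = {"rain": 0.3, "sprinkler": 0.1}
likelihoods = {"rain": 0.9, "sprinkler": 0.95}  # P(wet | cause)
post = bayes_posterior(priors, likelihoods)
best = max(post, key=post.get)
print(best, round(post[best], 3))  # rain 0.74
```

Both causes explain the evidence almost equally well, so the prior dominates and "rain" is the preferred explanation. This is exactly the abductive pattern the chapter formalizes.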
Bayesian and Information-Theoretic Tools for Neuroscience
 School of Psychology, University of
, 2006
Cited by 9 (7 self)
The overarching purpose of the studies presented in this report is the exploration of the uses of information theory and Bayesian inference applied to neural codes. Two approaches were taken: starting from first principles, a coding mechanism is proposed and the results are compared to a biological neural code; second, tools from information theory are used to measure the information contained in a biological neural code. Chapter 3: The REC model proposed by Harpur and Prager [33] codes inputs into a sparse, factorial representation, maintaining reconstruction accuracy. Here I propose a modification of the REC model to determine the optimal network dimensionality. The resulting code for unfiltered natural images is accurate, highly sparse, and a large fraction of the code elements show localized features. Furthermore, I propose an activation algorithm for the network that is faster and more accurate than a gradient-descent-based activation method. Moreover, it is demonstrated that asymmetric noise promotes sparseness. Chapter 4: A fast, exact alternative to Bayesian classification is introduced. Computational time is quadratic in both the number of observed data points and the number of degrees of freedom of the underlying model. As an example application, responses of single neurons from high-level visual cortex (area STSa) to rapid sequences of complex visual stimuli are analyzed. Chapter 5: I present an exact Bayesian treatment of a simple, yet sufficiently general, probability distribution model. The model complexity, exact values of the expectations of entropies, and their variances can be computed with polynomial effort given the data. The expectation of the mutual information thus becomes available too, as does a strict upper bound on its variance. The resulting algorithm is first tested on artificial data. To that end, an information-theoretic similarity measure is derived.
Second, the algorithm is demonstrated to be useful in neuroscience by studying the information content of the neural responses analyzed in the previous chapter. It is shown that the information throughput of STS neurons is maximized for stimulus durations of ≈ 60 ms.
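The simplest baseline for the entropy quantities this report studies is the plug-in (maximum-likelihood) estimator from response counts. The spike-count histograms below are hypothetical, and the report's exact Bayesian treatment is considerably more careful about estimator bias:

```python
import math

def entropy_bits(counts):
    # Plug-in entropy estimate, in bits, from a histogram of counts.
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

# Hypothetical spike-count histograms over 4 response bins.
print(round(entropy_bits([8, 8, 8, 8]), 3))   # 2.0 bits: uniform over 4 bins
print(round(entropy_bits([29, 1, 1, 1]), 3))  # far less: responses concentrated
```

The plug-in estimator is biased low for small samples, which is one motivation for the Bayesian estimators of entropy and mutual information developed in Chapter 5.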