Results 1 - 10 of 1,269

Blind Beamforming for Non Gaussian Signals

by Jean-François Cardoso, Antoine Souloumiac - IEE Proceedings-F , 1993
"... This paper considers an application of blind identification to beamforming. The key point is to use estimates of directional vectors rather than resorting to their hypothesized value. By using estimates of the directional vectors obtained via blind identification, i.e. without knowing the array mani ..."
Abstract - Cited by 719 (31 self) - Add to MetaCart
estimation of directional vectors, based on joint diagonalization of 4th-order cumulant matrices

Ensemble Learning For Independent Component Analysis

by Harri Lappalainen , 1999
"... In this paper, a recently developed Bayesian method called ensemble learning is applied to independent component analysis (ICA). Ensemble learning is a computationally efficient approximation for exact Bayesian analysis. In general, the posterior probability density function (pdf) is a complex high ..."
Abstract - Cited by 50 (4 self) - Add to MetaCart
, the posterior pdf is approximated by a diagonal Gaussian pdf. According to the ICA-model used in this paper, the measurements are generated by a linear mapping from mutually independent source signals whose distributions are mixtures of Gaussians. The measurements are also assumed to have additive Gaussian

Distance Distribution of the Diagonal Gaussian Graphs

by Cristóbal Camarero Coterillo
"... The Gaussian integers Z[i] are the subset of the complex numbers C with integer real and imaginary parts, that is: Z[i] := {x + yi | x, y ∈ Z}. Given any 0 ≠ α ∈ Z[i] we consider Z[i]α, which is the ring of the classes of Z[i] modulo the ideal (α) generated by α. Definition 1 Let 0 ≠ α ∈ Z[i], the ..."
Abstract - Add to MetaCart
[i], then the Diagonal Gaussian graph generated by α, G^8_α = (V, E), is defined as:
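The residue ring in this entry admits a compact computational description: reducing z modulo the ideal (α) amounts to rounding the quotient z/α to the nearest Gaussian integer. A minimal Python sketch; the function name and rounding convention are ours, not the paper's:

```python
def gaussian_mod(z: complex, alpha: complex) -> complex:
    """Representative of z modulo the ideal (alpha) in Z[i]:
    r = z - q*alpha, where q rounds z/alpha to the nearest Gaussian integer."""
    w = z / alpha
    q = complex(round(w.real), round(w.imag))
    return z - q * alpha

# Example: alpha = 2 + i has norm 5, so Z[i]/(alpha) has 5 residue classes.
r = gaussian_mod(7 + 3j, 2 + 1j)   # reduces to 1
```

Since the rounded quotient satisfies |z/α − q| ≤ √2/2, the representative always has |r| < |α|.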

Coil sensitivity encoding for fast MRI. In:

by Klaas P Pruessmann , Markus Weiger , Markus B Scheidegger , Peter Boesiger - Proceedings of the ISMRM 6th Annual Meeting, , 1998
"... New theoretical and practical concepts are presented for considerably enhancing the performance of magnetic resonance imaging (MRI) by means of arrays of multiple receiver coils. Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementa ..."
Abstract - Cited by 193 (3 self) - Add to MetaCart
Assembling sample and image values in vectors, image reconstruction may be rewritten in matrix notation. With such a linear mapping, the propagation of noise from sample values into image values is conveniently described by noise matrices. The diagonal entries of the image noise matrix X
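The matrix-notation reconstruction this snippet alludes to can be illustrated with ordinary weighted least squares. The toy sizes, variable names, and the unit-noise-covariance simplification below are our assumptions, not the paper's:

```python
import numpy as np

# Hedged sketch: coil sensitivities stacked in an encoding matrix E map the
# image vector v_true to the sample vector m; least squares recovers v, and
# the diagonal of (E^H E)^{-1} describes how sample noise propagates into
# each image voxel (playing the role of the "image noise matrix").
rng = np.random.default_rng(1)
n_samples, n_voxels = 12, 4   # toy sizes: E must be tall for a unique solution
E = rng.normal(size=(n_samples, n_voxels)) + 1j * rng.normal(size=(n_samples, n_voxels))
v_true = rng.normal(size=n_voxels)
m = E @ v_true                # noiseless samples, for the demo only

EhE = E.conj().T @ E
v = np.linalg.solve(EhE, E.conj().T @ m)            # v = (E^H E)^{-1} E^H m
noise_var_per_voxel = np.real(np.diag(np.linalg.inv(EhE)))
```

With noiseless samples the least-squares solve recovers the image exactly; the per-voxel noise variances are always positive because E^H E is positive definite.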

Behavioral theories and the neurophysiology of reward,

by Wolfram Schultz - Annu. Rev. Psychol. , 2006
"... ■ Abstract The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can ..."
Abstract - Cited by 187 (0 self) - Add to MetaCart
of different food and liquid rewards. Reward neurons should distinguish rewards from punishers. Different neurons in orbitofrontal cortex respond to rewarding and aversive liquids. The omission of reward following a CS moves the contingency toward the diagonal line in Prediction Error. Just as with behavioral

Walk-Sums and Belief Propagation in Gaussian Graphical Models

by Dmitry M. Malioutov, Jason K. Johnson, Alan S. Willsky - Journal of Machine Learning Research , 2006
"... We present a new framework based on walks in a graph for analysis and inference in Gaussian graphical models. The key idea is to decompose the correlation between each pair of variables as a sum over all walks between those variables in the graph. The weight of each walk is given by a product of edg ..."
Abstract - Cited by 101 (16 self) - Add to MetaCart
of edge-wise partial correlation coefficients. This representation holds for a large class of Gaussian graphical models which we call walk-summable. We give a precise characterization of this class of models, and relate it to other classes including diagonally dominant, attractive, nonfrustrated
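The walk-summable class named in this entry has a simple spectral test: in the walk-sum framework, a model is walk-summable when the spectral radius of the entrywise absolute value of the partial-correlation matrix is below 1. A hedged sketch, with variable names of our choosing:

```python
import numpy as np

def is_walk_summable(J: np.ndarray, tol: float = 1e-12) -> bool:
    """J: symmetric positive-definite information matrix of a Gaussian model.
    Walk-summable iff the spectral radius of |R| (entrywise abs) is below 1,
    where R holds the partial correlations (zero diagonal)."""
    d = np.sqrt(np.diag(J))
    R = np.eye(len(J)) - J / np.outer(d, d)
    rho = np.max(np.abs(np.linalg.eigvals(np.abs(R))))
    return bool(rho < 1 - tol)

# A diagonally dominant information matrix is always walk-summable,
# consistent with the classes listed in the abstract.
J = np.array([[2.0, 0.5, 0.5],
              [0.5, 2.0, 0.5],
              [0.5, 0.5, 2.0]])
```

For this J the normalized off-diagonal entries are 0.25, giving spectral radius 0.5, comfortably inside the walk-summable class.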

Blind Separation of Instantaneous Mixtures of Non Stationary Sources

by Dinh-tuan Pham, Jean-François Cardoso - IEEE Trans. Signal Processing , 2000
"... Most ICA algorithms are based on a model of stationary sources. This paper considers exploiting the (possible) non-stationarity of the sources to achieve separation. We introduce two objective functions based on the likelihood and on mutual information in a simple Gaussian non-stationary model and w ..."
Abstract - Cited by 167 (12 self) - Add to MetaCart
and we show how they can be optimized, off-line or on-line, by simple yet remarkably efficient algorithms (one is based on a novel joint diagonalization procedure, the other on a Newton-like technique). The paper also includes (limited) numerical experiments and a discussion contrasting non-Gaussian

GAUSSIAN ELIMINATION IS STABLE FOR THE INVERSE OF A DIAGONALLY DOMINANT MATRIX

by Alan George, Khakim D. Ikramov
"... Abstract. Let B ∈ Mn(C) be a row diagonally dominant matrix, i.e., σ_i |b_ii| = Σ_{j=1, j≠i}^{n} |b_ij|, i = 1, ..., n, where 0 ≤ σ_i < 1, i = 1, ..., n, with σ = max_{1≤i≤n} σ_i. We show that no pivoting is necessary when Gaussian elimination is applied to A = B^{−1}. Moreover, the growth factor for A does not excee ..."
Abstract - Cited by 1 (0 self) - Add to MetaCart
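The abstract's claim lends itself to a quick numerical check: build a row diagonally dominant B, invert it, and run Gaussian elimination on A = B⁻¹ without pivoting while tracking the growth factor. A sketch under our own choice of σ = 0.5, not the paper's experiment:

```python
import numpy as np

def lu_nopivot_growth(A):
    """Run Gaussian elimination without pivoting on A and return the
    growth factor: max over steps of max|a_ij^(k)| divided by max|a_ij|."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    a0 = np.max(np.abs(U))
    g = a0
    for k in range(n - 1):
        U[k + 1:, k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k, k + 1:])
        U[k + 1:, k] = 0.0
        g = max(g, np.max(np.abs(U)))
    return g / a0

# Build B with sigma_i = 0.5 (each diagonal entry is twice the off-diagonal
# row sum), then eliminate on its inverse, as the abstract suggests.
rng = np.random.default_rng(0)
n = 6
off = rng.uniform(-1.0, 1.0, (n, n))
np.fill_diagonal(off, 0.0)
B = off + np.diag(np.sum(np.abs(off), axis=1) / 0.5)
A = np.linalg.inv(B)
```

On such instances the elimination completes without pivoting and the observed growth stays small, in line with the bound the abstract asserts.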

OFF-DIAGONAL BOUNDS OF NON-GAUSSIAN TYPE FOR THE DIRICHLET HEAT KERNEL

by Gabriele Grillo
"... The paper considers the heat kernel K_X(t, x, y) of the operator −Δ on a proper Euclidean domain X, with Dirichlet boundary conditions. A general pointwise lower bound for K_X, which is valid for t larger than a suitable t(x, y), is proved (the short-time behaviour being well understood). The resulti ..."
Abstract - Add to MetaCart
). The resulting non-Gaussian bounds describe simultaneously both the case of bounded domains and the case, modelled on the half-space example, of domains which satisfy a twisted infinite internal cone condition. Bounds for the Green’s function are given as well.

The Bucket Box Intersection (BBI) Algorithm For Fast Approximative Evaluation Of Diagonal Mixture Gaussians

by J. Fritsch, I. Rogina - In Proc. ICASSP , 1996
"... Today, most of the state-of-the-art speech recognizers are based on Hidden Markov modeling. Using semi-continuous or continuous density Hidden Markov Models, the computation of emission probabilities requires the evaluation of mixture Gaussian probability density functions. Since it is very expensiv ..."
Abstract - Cited by 25 (3 self) - Add to MetaCart
expensive to evaluate all the Gaussians of the mixture density codebook, many recognizers only compute the M most significant Gaussians (M = 1, ..., 8). This paper presents an alternative approach to approximating mixture Gaussians with diagonal covariance matrices, based on a binary feature space
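The baseline that BBI accelerates, evaluating a diagonal-covariance Gaussian mixture and optionally keeping only the M most significant components, can be sketched as follows (names are ours; the bucket/box search itself is not shown):

```python
import numpy as np

def log_diag_gaussians(x, means, variances):
    """Per-component log densities N(x; mu_k, diag(var_k))."""
    diff2 = (x - means) ** 2 / variances
    return -0.5 * np.sum(np.log(2 * np.pi * variances) + diff2, axis=1)

def mixture_loglik(x, weights, means, variances, M=None):
    """Log mixture density at x; if M is given, sum only the M largest terms."""
    comp = np.log(weights) + log_diag_gaussians(x, means, variances)
    if M is not None:
        comp = np.sort(comp)[-M:]          # the M most significant Gaussians
    m = comp.max()
    return m + np.log(np.sum(np.exp(comp - m)))   # stable log-sum-exp
```

Truncating to the top M components can only drop nonnegative terms from the sum, so the shortcut lower-bounds the full log-likelihood and matches it when M equals the codebook size.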